Updates from: 12/01/2023 02:12:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
Title: Set up a password reset flow
description: Learn how to set up a password reset flow in Azure Active Directory B2C (Azure AD B2C).
- Previously updated : 10/25/2022
+ Last updated : 11/27/2023
zone_pivot_groups: b2c-policy-type
+#Customer intent: As a developer, I want to enable my users to reset their passwords without the need for admin intervention, so that they can recover their accounts if they forget their passwords.
# Set up a password reset flow in Azure Active Directory B2C
In a [sign-up and sign-in journey](add-sign-up-and-sign-in-policy.md), a user can reset their own password by using the **Forgot your password?** link. This self-service password reset flow applies to local accounts in Azure Active Directory B2C (Azure AD B2C) that use an [email address](sign-in-options.md#email-sign-in) or a [username](sign-in-options.md#username-sign-in) with a password for sign-in.
+> [!TIP]
+> A user can change their password by using the self-service password reset flow if they forget their password and want to reset it. You can also choose one of the following user flow options to change a user's password:
+> - If a user knows their password and wants to change it, use a [password change flow](add-password-change-policy.md).
+> - If you want to force a user to reset their password (for example, when they sign in for the first time, when their passwords have been reset by an admin, or after they've been migrated to Azure AD B2C with random passwords), use a [force password reset](force-password-reset.md) flow.
+The password reset flow involves the following steps:
+
+1. On the sign-up and sign-in page, the user selects the **Forgot your password?** link. Azure AD B2C initiates the password reset flow.
+1. In the next dialog that appears, the user enters their email address, and then selects **Send verification code**. Azure AD B2C sends a verification code to the user's email account. The user copies the verification code from the email, enters the code in the Azure AD B2C password reset dialog, and then selects **Verify code**.
-1. The user can then enter a new password. (After the email is verified, the user can still select the **Change e-mail** button; see [Hide the change email button](#hide-the-change-email-button).)
+1. The user can then enter a new password. (After the email is verified, the user can still select the **Change email** button; see [Hide the change email button](#hide-the-change-email-button-optional) if you want to remove it.)
:::image type="content" source="./media/add-password-reset-policy/password-reset-flow.png" alt-text="Diagram that shows three dialogs in the password reset flow." lightbox="./media/add-password-reset-policy/password-reset-flow.png":::
-> [!TIP]
-> A user can change their password by using the self-service password reset flow if they forget their password and want to reset it. You can also choose one of the following user flow options:
-> - If a user knows their password and wants to change it, use a [password change flow](add-password-change-policy.md).
-> - If you want to force a user to reset their password (for example, when they sign in for the first time, when their passwords have been reset by an admin, or after they've been migrated to Azure AD B2C with random passwords), use a [force password reset](force-password-reset.md) flow.
-The default name of the **Change email** button in *selfAsserted.html* is **changeclaims**. To find the button name, on the sign-up page, inspect the page source by using a browser tool such as _Inspect_.

## Prerequisites

[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
-### Hide the change email button
-
-After the email is verified, the user can still select **Change email**, enter another email address, and then repeat email verification. If you'd prefer to hide the **Change email** button, you can modify the CSS to hide the associated HTML elements in the dialog. For example, you can add the following CSS entry to selfAsserted.html and [customize the user interface by using HTML templates](customize-ui-with-html.md):
-
-```html
-<style type="text/css">
- .changeClaims
- {
- visibility: hidden;
- }
-</style>
-```
## Self-service password reset (recommended)

The new password reset experience is now part of the sign-up or sign-in policy. When the user selects the **Forgot your password?** link, they are immediately sent to the Forgot Password experience. Your application no longer needs to handle the [AADB2C90118 error code](#password-reset-policy-legacy), and you don't need a separate policy for password reset.
Your application might need to detect whether the user signed in by using the **Forgot your password?** flow.
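If your application checks for this, one option is to read the `isForgotPassword` claim that B2C journeys conventionally emit after a password reset. A minimal C# sketch, assuming your policy emits a claim with that exact name (verify it in your own tokens):

```csharp
using System;
using System.Security.Claims;

// Minimal sketch: detect whether the signed-in user arrived through the
// password reset path. The claim name "isForgotPassword" follows the B2C
// starter-pack convention and is an assumption; confirm it in your tokens.
public static class PasswordResetDetection
{
    public static bool IsPasswordResetSignIn(ClaimsPrincipal principal) =>
        string.Equals(
            principal.FindFirst("isForgotPassword")?.Value,
            "true",
            StringComparison.OrdinalIgnoreCase);
}
```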
::: zone-end
+### Hide the change email button (Optional)
+
+After the email is verified, the user can still select **Change email**, enter another email address, and then repeat email verification. If you'd prefer to hide the **Change email** button, you can modify the CSS to hide the associated HTML elements in the dialog. For example, you can add the following CSS entry to selfAsserted.html and [customize the user interface by using HTML templates](customize-ui-with-html.md):
+
+```html
+<style type="text/css">
+ .changeClaims
+ {
+ visibility: hidden;
+ }
+</style>
+```
### Test the password reset flow

1. Select a sign-up or sign-in user flow (Recommended type) that you want to test.
The following diagram depicts the process:
1. The user selects the **Forgot your password?** link. Azure AD B2C returns the `AADB2C90118` error code to the application.
1. The application handles the error code and initiates a new authorization request. The authorization request specifies the password reset policy name, such as *B2C_1_pwd_reset*.
- ![Diagram that shows the legacy password reset user flow.](./media/add-password-reset-policy/password-reset-flow-legacy.png)
-You can see a basic [ASP.NET sample](https://github.com/AzureADQuickStarts/B2C-WebApp-OpenIDConnect-DotNet-SUSI), which demonstrates how user flows link.
+You can see a basic demonstration of how user flows link in our [ASP.NET sample](https://github.com/AzureADQuickStarts/B2C-WebApp-OpenIDConnect-DotNet-SUSI).
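For illustration, here's a minimal sketch of that hand-off in an ASP.NET Core app that uses the OpenID Connect middleware. The `/Account/ResetPassword` route and the *B2C_1_pwd_reset* policy name are examples, not fixed names:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

// Minimal sketch: intercept the AADB2C90118 error returned by the sign-up or
// sign-in policy and restart authorization against the password reset policy.
public static class B2CPasswordResetHandler
{
    public static void Configure(OpenIdConnectOptions options)
    {
        options.Events.OnRemoteFailure = context =>
        {
            // AADB2C90118: the user selected the "Forgot your password?" link.
            if (context.Failure?.Message?.Contains("AADB2C90118") == true)
            {
                // Redirect to an app route that issues a new challenge
                // specifying the password reset policy, such as B2C_1_pwd_reset.
                context.Response.Redirect("/Account/ResetPassword");
                context.HandleResponse(); // suppress the default error handling
            }
            return Task.CompletedTask;
        };
    }
}
```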
::: zone pivot="b2c-user-flow"
ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/changelog-release-history.md
This reference article provides a version-based description of Document Intelligence releases and updates.
* Document Intelligence **1.0.0-beta.1**
  * **Targets REST API 2023-10-31-preview by default**
-[**Package (MVN)**](https://repo1.maven.org/maven2/com/azure/azure-ai-documentintelligence/1.0.0-beta.1/)
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-documentintelligence/1.0.0-beta.1)
[**ReadMe**](https://github.com/Azure/azure-sdk-for-java#azure-documentintelligence-client-library-for-java)
ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md
zone_pivot_groups: programming-languages-set-formre
> * Some platforms are still awaiting the renaming update.
> * All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
-**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** **Earlier versions:** ![blue-checkmark](../media/blue-yes-icon.png) [v3.0](?view=doc-intel-3.0.0&preserve-view=true) ![blue-checkmark](../media/blue-yes-icon.png) [v2.1](?view=doc-intel-2.1.0&preserve-view=true)
::: moniker range="doc-intel-3.1.0"
+**This content applies to:** ![checkmark](../media/yes-icon.png) **v3.1 (GA)** **Earlier versions:** ![blue-checkmark](../media/blue-yes-icon.png) [v3.0](?view=doc-intel-3.0.0&preserve-view=true) ![blue-checkmark](../media/blue-yes-icon.png) [v2.1](?view=doc-intel-2.1.0&preserve-view=true)
* Get started with Azure AI Document Intelligence latest GA version (v3.1).
ai-services Sdk Overview V2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v2-1.md
+
+ Title: Document Intelligence (formerly Form Recognizer) SDK target REST API v2.1 (GA)
+
+description: Document Intelligence v2.1 (GA) software development kits (SDKs) expose Document Intelligence models, features, and capabilities, using the C#, Java, JavaScript, and Python programming languages.
++++
+ - devx-track-python
+ - ignite-2023
+ Last updated : 11/29/2023+
+monikerRange: 'doc-intel-2.1.0'
+++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# SDK target: REST API v2.1 (GA)
+
+![Document Intelligence checkmark](media/yes-icon.png) **REST API version v2.1 (GA) 21-06-08**
+
+Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported programming languages
+
+Document Intelligence SDK supports the following languages and platforms:
+
+| Language → Document Intelligence SDK version | Package| Supported API version| Platform support |
+|:-:|:-|:-| :-|
+| [.NET/C# → 3.1.x (GA)](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 3.1.x (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/3.1.1/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/3.1.1) |[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 3.1.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/3.1.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/3.1.0)|[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.1.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.1.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.1.0/)|[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
+
+## Supported Clients
+
+| Language| SDK version | API version | Supported clients|
+|:--|:--|:--|:--|
+|.NET/C#</br> Java</br> JavaScript</br>| 3.1.x | v2.1 (default)</br>v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|.NET/C#</br> Java</br> JavaScript</br>| 3.0.x| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| Python | 3.1.x | v2.1 (default)</br>v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+| Python | 3.0.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+
+## Use Document Intelligence SDK in your applications
+
+The Document Intelligence SDK enables the use and management of the Document Intelligence service in your application. The SDK builds on the underlying Document Intelligence REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Document Intelligence SDK for your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.FormRecognizer --version 3.1.0
+```
+
+```powershell
+Install-Package Azure.AI.FormRecognizer -Version 3.1.0
+```
+
+### [Java](#tab/java)
+
+```xml
+<dependency>
+<groupId>com.azure</groupId>
+<artifactId>azure-ai-formrecognizer</artifactId>
+<version>3.1.0</version>
+</dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-formrecognizer:3.1.0")
+```
+
+### [JavaScript](#tab/javascript)
+
+```console
+npm i @azure/ai-form-recognizer@3.1.0
+```
+
+### [Python](#tab/python)
+
+```console
+pip install azure-ai-formrecognizer==3.1.0
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.FormRecognizer;
+using Azure.AI.FormRecognizer.Models;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.formrecognizer.*;
+import com.azure.ai.formrecognizer.models.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const { FormRecognizerClient, AzureKeyCredential } = require("@azure/ai-form-recognizer");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.formrecognizer import FormRecognizerClient
+from azure.core.credentials import AzureKeyCredential
+```
+++
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md).
+
+#### Use your API key
+
+You can find your Document Intelligence API key in the Azure portal, on your resource's **Keys and Endpoint** page.
++
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+// set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `FormRecognizerClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+FormRecognizerClient client = new FormRecognizerClient(new Uri(endpoint), new AzureKeyCredential(key));
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `FormRecognizerClient` instance and `AzureKeyCredential` variable
+FormRecognizerClient formRecognizerClient = new FormRecognizerClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your `FormRecognizerClient` instance and `AzureKeyCredential` variable
+async function main() {
+  const client = new FormRecognizerClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
+}
+
+main().catch((err) => console.error(err));
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `FormRecognizerClient` instance and `AzureKeyCredential` variable
+form_recognizer_client = FormRecognizerClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
+```
+++
+<a name='use-an-azure-active-directory-azure-ad-token-credential'></a>
+
+#### Use a Microsoft Entra token credential
+
+> [!NOTE]
+> Regional endpoints do not support Microsoft Entra authentication. Create a [custom subdomain](../../ai-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+The easiest way to authorize is with `DefaultAzureCredential`. It provides a default token credential, based on the running environment, that can handle most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`FormRecognizerClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new FormRecognizerClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`FormRecognizerClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ FormRecognizerClient formRecognizerClient = new FormRecognizerClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+   ```console
+ npm install @azure/identity
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`FormRecognizerClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+ const { FormRecognizerClient } = require("@azure/ai-form-recognizer");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ const client = new FormRecognizerClient("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+   ```console
+ pip install azure-identity
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`FormRecognizerClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.formrecognizer import FormRecognizerClient
+
+ credential = DefaultAzureCredential()
+ form_recognizer_client = FormRecognizerClient(
+ endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
++++
+### 4. Build your application
+
+Create a client object to interact with the Document Intelligence SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) in a language of your choice.
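For example, a minimal end-to-end sketch with the v2.1 C# client analyzes a document's layout; the sample URL points to a public sample file, and the endpoint and key are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;

// Minimal sketch: start a long-running layout analysis and print a summary.
class Program
{
    static async Task Main()
    {
        var client = new FormRecognizerClient(
            new Uri("<your-endpoint>"), new AzureKeyCredential("<your-key>"));

        var documentUri = new Uri(
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");

        RecognizeContentOperation operation =
            await client.StartRecognizeContentFromUriAsync(documentUri);
        Response<FormPageCollection> response = await operation.WaitForCompletionAsync();

        foreach (FormPage page in response.Value)
        {
            Console.WriteLine($"Page {page.PageNumber}: {page.Lines.Count} lines, {page.Tables.Count} tables");
        }
    }
}
```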
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Explore Document Intelligence REST API v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
+
+> [!div class="nextstepaction"]
+> [**Try a Document Intelligence quickstart**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)
ai-services Sdk Overview V3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md
Title: Document Intelligence (formerly Form Recognizer) SDKs v3.0
+ Title: Document Intelligence (formerly Form Recognizer) SDK target REST API 2022-08-31 (GA)
-description: Document Intelligence v3.0 software development kits (SDKs) expose Document Intelligence models, features and capabilities, using C#, Java, JavaScript, and Python programming language.
+description: Document Intelligence 2022-08-31 (GA) software development kits (SDKs) expose Document Intelligence models, features, and capabilities, using the C#, Java, JavaScript, and Python programming languages.
monikerRange: 'doc-intel-3.0.0'
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD051 -->
-# Document Intelligence SDK v3.0 (GA)
+# SDK target: REST API 2022-08-31 (GA)
+![Document Intelligence checkmark](media/yes-icon.png) **REST API version 2022-08-31 (GA)**
Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
Document Intelligence SDK supports the following languages and platforms:
| Language → Document Intelligence SDK version | Package| Supported API version| Platform support |
|:-:|:-|:-| :-|
| [.NET/C# → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[Java → 4.0.6 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[Java → 4.0.6 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.6) |[v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
|[JavaScript → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
|[Python → 3.2.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
ai-services Sdk Overview V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md
Title: Document Intelligence (formerly Form Recognizer) v3.1 SDKs
+ Title: Document Intelligence (formerly Form Recognizer) SDK target REST API 2023-07-31 (GA) latest.
-description: The Document Intelligence v3.1 software development kits (SDKs) expose Document Intelligence models, features and capabilities that are in active development for C#, Java, JavaScript, or Python programming language.
+description: The Document Intelligence 2023-07-31 (GA) software development kits (SDKs) expose Document Intelligence models, features, and capabilities that are in active development for the C#, Java, JavaScript, or Python programming languages.
Last updated 11/21/2023
monikerRange: 'doc-intel-3.1.0'
+
<!-- markdownlint-disable MD024 -->
monikerRange: 'doc-intel-3.1.0'
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD051 -->
-# Document Intelligence SDK v3.1 latest (GA)
+# SDK target: REST API 2023-07-31 (GA) latest
-**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **REST API version 2023-07-31 (v3.1 GA)**.
+![Document Intelligence checkmark](media/yes-icon.png) **REST API version 2023-07-31 (GA)**
Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
ai-services Sdk Overview V4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v4-0.md
+
+ Title: Document Intelligence (formerly Form Recognizer) SDK target REST API 2023-10-31-preview
+
+description: The Document Intelligence 2023-10-31-preview software development kits (SDKs) expose Document Intelligence models, features, and capabilities that are in active development for the C#, Java, JavaScript, or Python programming languages.
++++
+ - devx-track-python
+ - ignite-2023
+ Last updated : 11/21/2023+
+monikerRange: 'doc-intel-4.0.0'
+
++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# SDK target: REST API 2023-10-31-preview
++
+![Document Intelligence checkmark](media/yes-icon.png) **REST API version 2023-10-31-preview**
+
+Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported programming languages
+
+Document Intelligence SDK supports the following languages and platforms:
+
+| Language → Document Intelligence SDK version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Package| Supported API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Platform support |
+|:-:|:-|:-| :-:|
+| [**.NET/C# → 1.0.0-beta.1 (preview)**](/dotnet/api/azure.ai.documentintelligence.documentintelligenceadministrationclient?view=azure-dotnet-preview&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.DocumentIntelligence/1.0.0-beta.1)|[&bullet; 2023-10-31 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>[&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; 2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+ |[**Java → 1.0.0-beta.1 (preview)**](/java/api/overview/azure/ai-documentintelligence-readme?view=azure-java-preview&preserve-view=true) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-documentintelligence/1.0.0-beta.1) |[&bullet; 2023-10-31 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>[&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; 2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[**JavaScript → 1.0.0-beta.1 (preview)**](/javascript/api/overview/azure/ai-document-intelligence-rest-readme?view=azure-node-preview&preserve-view=true)| [npm](https://www.npmjs.com/package/@azure-rest/ai-document-intelligence)|[&bullet; 2023-10-31 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>[&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[**Python → 1.0.0b1 (preview)**](/python/api/overview/azure/ai-documentintelligence-readme?view=azure-python-preview&preserve-view=true) | [PyPI](https://pypi.org/project/azure-ai-documentintelligence/)|[&bullet; 2023-10-31 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>[&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+
+## Supported Clients
+
+The following tables present the correlation between each SDK version and the supported API versions of the Document Intelligence service.
+
+### [C#/.NET](#tab/csharp)
+
+| Language| SDK alias | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+|:--|:--|:--|:--|
+|**.NET/C# 1.0.0-beta.1 (preview)**| v4.0 (preview)| 2023-10-31-preview|**DocumentIntelligenceClient**</br>**DocumentIntelligenceAdministrationClient**|
+|**.NET/C# 4.1.0**| v3.1 latest (GA)| 2023-07-31|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C# 4.0.0**| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**.NET/C# 3.1.x**| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**.NET/C# 3.0.x**| v2.0 | v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+
+### [Java](#tab/java)
+
+| Language| SDK alias | API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+|:--|:--|:--|:--|
+|**Java 1.0.0-beta.1 (preview)**| v4.0 preview| 2023-10-31 (default)|**DocumentIntelligenceClient**</br>**DocumentIntelligenceAdministrationClient**|
+|**Java 4.1.0**| v3.1 latest (GA)| 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**Java 4.0.0**</br>| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**Java 3.1.x**| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**Java 3.0.x**| v2.0| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+
+### [JavaScript](#tab/javascript)
+
+| Language| SDK alias | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+|:--|:--|:--|:--|
+|**JavaScript 1.0.0-beta.1**| v4.0 (preview)| 2023-10-31 (default)|**DocumentIntelligenceClient**</br>**DocumentIntelligenceAdministrationClient**|
+|**JavaScript 5.0.0**| v3.1 latest (GA)| 2023-07-31 (default)|**DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**JavaScript 4.0.0**</br>| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient** |
+|**JavaScript 3.1.x**</br>| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+|**JavaScript 3.0.x**</br>| v2.0| v2.0 | **FormRecognizerClient**</br>**FormTrainingClient** |
+
+### [Python](#tab/python)
+
+| Language| SDK alias | API version (default) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; | Supported clients|
+|:--|:--|:--|:--|
+| **Python 1.0.0b1**| v4.0 (preview)| 2023-10-31 (default) |**DocumentIntelligenceClient**</br>**DocumentIntelligenceAdministrationClient**|
+| **Python 3.3.0**| v3.1 latest (GA)| 2023-07-31 (default) | **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python 3.2.x**| v3.0 (GA)| 2022-08-31| **DocumentAnalysisClient**</br>**DocumentModelAdministrationClient**|
+| **Python 3.1.x**| v2.1 | v2.1 | **FormRecognizerClient**</br>**FormTrainingClient** |
+| **Python 3.0.0** | v2.0 | v2.0 |**FormRecognizerClient**</br>**FormTrainingClient** |
+++
+## Use Document Intelligence SDK in your applications
+
+The Document Intelligence SDK enables the use and management of the Document Intelligence service in your application. The SDK builds on the underlying Document Intelligence REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Document Intelligence SDK for your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.DocumentIntelligence --version 1.0.0-beta.1
+```
+
+```powershell
+Install-Package Azure.AI.DocumentIntelligence -Version 1.0.0-beta.1
+```
+
+### [Java](#tab/java)
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-documentintelligence</artifactId>
+ <version>1.0.0-beta.1</version>
+ </dependency>
+
+```
+
+```kotlin
+implementation("com.azure:azure-ai-documentintelligence:1.0.0-beta.1")
+
+```
+
+### [JavaScript](#tab/javascript)
+
+```console
+npm i @azure-rest/ai-document-intelligence@1.0.0-beta.1
+```
+
+### [Python](#tab/python)
+
+```console
+pip install azure-ai-documentintelligence==1.0.0b1
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.DocumentIntelligence;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.documentintelligence.*;
+import com.azure.ai.documentintelligence.models.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const DocumentIntelligence = require("@azure-rest/ai-document-intelligence").default;
+const { AzureKeyCredential } = require("@azure/core-auth");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.documentintelligence import DocumentIntelligenceClient
+from azure.core.credentials import AzureKeyCredential
+```
+++
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md).
+
+#### Use your API key
+
+You can find your Document Intelligence API key in the Azure portal, on your resource's **Keys and Endpoint** page.
++
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentIntelligenceClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentIntelligenceClient client = new DocumentIntelligenceClient(new Uri(endpoint), credential);
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `DocumentIntelligenceClient` instance and `AzureKeyCredential` variable
+DocumentIntelligenceClient documentIntelligenceClient = new DocumentIntelligenceClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your Document Intelligence client with your endpoint and key
+async function main() {
+  const client = DocumentIntelligence("<your-endpoint>", { key: "<your-key>" });
+}
+
+main().catch((err) => console.error(err));
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `DocumentIntelligenceClient` instance using an `AzureKeyCredential`
+endpoint = "<your-endpoint>"
+credential = AzureKeyCredential("<your-key>")
+document_intelligence_client = DocumentIntelligenceClient(endpoint, credential)
+```
+++
+<a name='use-an-azure-active-directory-azure-ad-token-credential'></a>
+
+#### Use a Microsoft Entra token credential
+
+> [!NOTE]
+> Regional endpoints do not support Microsoft Entra authentication. Create a [custom subdomain](../../ai-services/authentication.md?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+The easiest way to authorize is with `DefaultAzureCredential`. It provides a default token credential, based on the running environment, that can handle most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentIntelligenceClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new DocumentIntelligenceClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/documentintelligence/Azure.AI.DocumentIntelligence/README.md#authenticate-the-client)
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentIntelligenceClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ DocumentIntelligenceClient documentIntelligenceClient = new DocumentIntelligenceClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authentication](https://github.com/Azure/azure-sdk-for-java#authentication).
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+   ```console
+ npm install @azure/identity
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentIntelligenceClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+   const DocumentIntelligence = require("@azure-rest/ai-document-intelligence").default;
+   const { DefaultAzureCredential } = require("@azure/identity");
+
+   const client = DocumentIntelligence("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/documentintelligence/ai-document-intelligence-rest#create-and-authenticate-a-documentintelligenceclient).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+   ```console
+ pip install azure-identity
+ ```
+
+1. [Register a Microsoft Entra application and create a new service principal](../../ai-services/authentication.md?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Document Intelligence by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Microsoft Entra application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentIntelligenceClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.documentintelligence import DocumentIntelligenceClient
+
+ credential = DefaultAzureCredential()
+ client = DocumentIntelligenceClient(
+ endpoint="<your-endpoint>",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/blob/7c42462ac662522a6fd21b17d2a20f4cd40d0356/sdk/documentintelligence/azure-ai-documentintelligence/README.md#authenticate-the-client)
+++
+### 4. Build your application
+
+Create a client object to interact with the Document Intelligence SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) in a language of your choice.
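For example, here's a hedged sketch with the C# preview client; the `AnalyzeDocumentContent`/`AnalyzeDocumentAsync` shapes reflect the 1.0.0-beta.1 surface and may change between previews, and the endpoint, key, and document URL are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.DocumentIntelligence;

// Hedged sketch: analyze a document with the prebuilt layout model and wait
// for the long-running operation to finish. Preview API shapes may change.
class Program
{
    static async Task Main()
    {
        var client = new DocumentIntelligenceClient(
            new Uri("<your-endpoint>"), new AzureKeyCredential("<your-key>"));

        var request = new AnalyzeDocumentContent
        {
            UrlSource = new Uri("<your-document-url>")
        };

        Operation<AnalyzeResult> operation =
            await client.AnalyzeDocumentAsync(WaitUntil.Completed, "prebuilt-layout", request);

        Console.WriteLine($"Analyzed {operation.Value.Pages.Count} page(s).");
    }
}
```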
+
+## Help options
+
+The [Microsoft Q&A](/answers/tags/440/document-intelligence) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-ai-document-intelligence) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure we see your question, tag it as follows:
+
+* Microsoft Q&A: **`Azure AI Document Intelligence`**.
+
+* Stack Overflow: **`azure-ai-document-intelligence`**
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>Explore [**Document Intelligence REST API 2023-10-31-preview**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP) operations.
ai-services Provisioned Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md
The provisioned throughput capability allows you to specify the amount of throughput you require in a deployment.
- **Predictable performance:** stable max latency and throughput for uniform workloads.
- **Reserved processing capacity:** A deployment configures the amount of throughput. Once deployed, the throughput is available whether used or not.
-- **Cost savings:** High throughput workloads will result in cost savings vs token-based consumption.
+- **Cost savings:** High throughput workloads may provide cost savings vs token-based consumption.
An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model. A deployment provides customer access to a model for inference and integrates additional features like Content Moderation ([See content moderation documentation](content-filter.md)).
Provisioned throughput quota represents a specific amount of total throughput you can deploy.
Quota is specific to a (deployment type, model, region) triplet and isn't interchangeable, meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move quota across deployment types, models, or regions, but we can't guarantee that the move will be possible.
While we make every attempt to ensure that quota is always deployable, quota does not represent a guarantee that the underlying capacity is available for the customer to use. The service assigns capacity to the customer at deployment time and if capacity is unavailable the deployment will fail with an out of capacity error.
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
To get started, [connect your data source](../use-your-data-quickstart.md) using Azure OpenAI Studio.
> [!NOTE]
> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) with either the gpt-35-turbo or the gpt-4 models deployed.
-<!--## Data source options
-
-Azure OpenAI on your data uses an [Azure AI Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information.-->
## Data formats and file types

Azure OpenAI on your data supports the following file types:
You can modify the following additional settings in the **Data parameters** section:
|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 5. This is the `topNDocuments` parameter in the API. |
| **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. |
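To make the API mapping concrete, here's a hedged C# sketch that passes `topNDocuments` and `strictness` through the chat completions extensions endpoint. The request shape and `api-version` follow the 2023-era "on your data" preview and may differ in your version; the endpoint, deployment, key, and index names are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

// Hedged sketch: send a chat request grounded on an Azure AI Search index,
// setting the "Retrieved documents" (topNDocuments) and strictness parameters.
class Program
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("api-key", "<your-azure-openai-key>");

        var body = new
        {
            messages = new[] { new { role = "user", content = "What does my data say?" } },
            dataSources = new[]
            {
                new
                {
                    type = "AzureCognitiveSearch",
                    parameters = new
                    {
                        endpoint = "<your-search-endpoint>",
                        key = "<your-search-admin-key>",
                        indexName = "<your-index>",
                        topNDocuments = 5, // "Retrieved documents" in the studio UI
                        strictness = 3     // relevance threshold
                    }
                }
            }
        };

        var response = await http.PostAsync(
            "<your-azure-openai-endpoint>/openai/deployments/<deployment>/extensions/chat/completions?api-version=2023-08-01-preview",
            new StringContent(JsonSerializer.Serialize(body), Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```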
-## Virtual network support & private endpoint support (Azure AI Search only)
+## Azure Role-based access controls (Azure RBAC) for adding data sources
-See the following table for scenarios supported by virtual networks and private endpoints **when you bring your own Azure AI Search index**.
+To add a new data source to Azure OpenAI on your data, you need the following Azure RBAC roles.
-| Network access to the Azure OpenAI Resource | Network access to the Azure AI Search resource | Is vector search enabled? | Azure OpenAI studio | Chat with the model using the API |
-||-|||--|
-| Public | Public | Either | Supported | Supported |
-| Private | Public | Yes | Not supported | Supported |
-| Private | Public | No | Supported | Supported |
-| Regardless of resource access allowances | Private | Either | Not supported | Supported |
-Additionally, data ingestion has the following configuration support:
-
-| Network access to the Azure OpenAI Resource | Network access to the Azure AI Search resource | Azure OpenAI studio support | [Ingestion API](../reference.md#start-an-ingestion-job) support |
-||-|--|--|
-| Public | Public | Supported | Supported |
-| Private | Regardless of resource access allowances. | Not supported | Not supported |
-| Public | Private | Not supported | Not supported |
+|Azure RBAC role | Which resource needs this role? | Needed when |
+||||
+| [Cognitive Services OpenAI Contributor](../how-to/role-based-access-control.md#cognitive-services-openai-contributor) | The Azure AI Search resource, to access Azure OpenAI resource. | You want to use Azure OpenAI on your data. |
+|[Search Index Data Reader](/azure/role-based-access-control/built-in-roles#search-index-data-reader) | The Azure OpenAI resource, to access the Azure AI Search resource. | You want to use Azure OpenAI on your data. |
+|[Search Service Contributor](/azure/role-based-access-control/built-in-roles#search-service-contributor) | The Azure OpenAI resource, to access the Azure AI Search resource. | You plan to create a new Azure AI Search index. |
+|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | The Azure AI Search and Azure OpenAI resources, to access the storage account. | You have an existing Blob storage container that you want to use, instead of creating a new one. |
+| [Cognitive Services OpenAI User](../how-to/role-based-access-control.md#cognitive-services-openai-user) | The web app, to access the Azure OpenAI resource. | You want to deploy a web app. |
+| [Contributor](/azure/role-based-access-control/built-in-roles#contributor) | Your subscription, to access Azure Resource Manager. | You want to deploy a web app. |
+| [Cognitive Services Contributor Role](/azure/role-based-access-control/built-in-roles#cognitive-services-contributor) | The Azure AI Search resource, to access the Azure OpenAI resource. | You want to deploy a [web app](#using-the-web-app). |
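+
+For example, a role from this table can be assigned with the Azure CLI. The following is a sketch only: the object ID, subscription, resource group, and resource names are placeholders you need to replace with your own values.
+
+```bash
+# Grant the search service's managed identity access to the Azure OpenAI resource.
+az role assignment create \
+  --assignee "<search-service-managed-identity-object-id>" \
+  --role "Cognitive Services OpenAI Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<azure-openai-resource>"
+```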
+## Virtual network support & private endpoint support (Azure AI Search only)
+> [!TIP]
+> For instructions on setting up your resources to work within a virtual network or behind a private endpoint, see [Use Azure OpenAI on your data securely](../how-to/use-your-data-securely.md).
### Azure OpenAI resources
Learn more about the [manual approval workflow](/azure/private-link/private-endp
After you approve the request in your search service, you can start using the [chat completions extensions API](/azure/ai-services/openai/reference#completions-extensions). Public network access can be disabled for that search service.
-### Storage accounts
-
-Storage accounts in virtual networks, firewalls, and private endpoints are supported by Azure OpenAI on your data. To use a storage account in a private network:
-
-1. Ensure you have the system assigned managed identity principal enabled for your Azure OpenAI and Azure AI Search resources.
- 1. Using the Azure portal, navigate to your resource, and select **Identity** from the navigation menu on the left side of the screen.
- 1. Set **Status** to **On**.
- 1. Perform these steps for both of your Azure OpenAI and Azure AI Search resources.
-
- :::image type="content" source="../media/use-your-data/managed-identity.png" alt-text="A screenshot showing managed identity settings in the Azure portal." lightbox="../media/use-your-data/managed-identity.png":::
-
-1. Navigate back to your storage account. Select **Access Control (IAM)** for your resource. Select **Add**, then **Add role assignment**. In the window that appears, add the **Storage Data Contributor** role to the storage resource for your Azure OpenAI and search resource's managed identity.
- 1. Assign access to **Managed Identity**.
- 1. If you have multiple search resources, Perform this step for each search resource.
-
- :::image type="content" source="../media/use-your-data/add-role-assignment.png" alt-text="A screenshot showing the role assignment option in the Azure portal." lightbox="../media/use-your-data/add-role-assignment.png":::
-
-1. If your storage account hasn't already been network restricted, go to networking tab and select **Enabled from selected virtual networks and IP addresses**.
-
- :::image type="content" source="../media/use-your-data/enable-virtual-network.png" alt-text="A screenshot showing the option for enabling virtual networks in the Azure portal." lightbox="../media/use-your-data/enable-virtual-network.png":::
-
-## Azure Role-based access controls (Azure RBAC)
-
-To add a new data source to your Azure OpenAI resource, you need the following Azure RBAC roles.
--
-|Azure RBAC role | Which resource needs this role? | Needed when |
-||||
-| [Cognitive Services OpenAI Contributor](../how-to/role-based-access-control.md#cognitive-services-openai-contributor) | The Azure AI Search resource, to access Azure OpenAI resource. | You want to use Azure OpenAI on your data. |
-|[Search Index Data Reader](/azure/role-based-access-control/built-in-roles#search-index-data-reader) | The Azure OpenAI resource, to access the Azure AI Search resource. | You want to use Azure OpenAI on your data. |
-|[Search Service Contributor](/azure/role-based-access-control/built-in-roles#search-service-contributor) | The Azure OpenAI resource, to access the Azure AI Search resource. | You plan to create a new Azure AI Search index. |
-|[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. | The Azure AI Search and Azure OpenAI resources, to access the storage account. |
-| [Cognitive Services OpenAI User](../how-to/role-based-access-control.md#cognitive-services-openai-user) | The web app, to access the Azure OpenAI resource. | You want to deploy a web app. |
-| [Contributor](/azure/role-based-access-control/built-in-roles#contributor) | Your subscription, to access Azure Resource Manager. | You want to deploy a web app. |
-| [Cognitive Services Contributor Role](/azure/role-based-access-control/built-in-roles#cognitive-services-contributor) | The Azure AI Search resource, to access Azure OpenAI resource. | You want to deploy a [web app](#using-the-web-app). |
---- ## Document-level access control (Azure AI Search only) Azure OpenAI on your data lets you restrict the documents that can be used in responses for different users with Azure AI Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document level access, the search results returned from Azure AI Search and used to generate a response will be trimmed based on user Microsoft Entra group membership. You can only enable document-level access on existing Azure AI Search indexes. To enable document-level access:
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
+
+ Title: 'Using your data with Azure OpenAI securely'
+
+description: Use this article to learn about securely using your data for text generation in Azure OpenAI.
+#
+++++ Last updated : 11/29/2023
+recommendations: false
++
+# Securely use Azure OpenAI on your data
+
+Use this article to learn how to use Azure OpenAI on your data securely by protecting data with virtual networks and private endpoints.
+
+## Data ingestion architecture
+
+When you ingest data into Azure OpenAI on your data, the service uses the following process to prepare the data and store it in blob storage. This applies to the following data sources:
+* Local files
+* Azure Blob Storage
+* URLs
++
+1. The ingestion process is started when a client sends data to be processed.
+1. Ingestion assets (indexers, indexes, data sources, a [custom skill](/azure/search/cognitive-search-custom-skill-interface), and a container) are created in the Azure AI Search resource and Azure storage account.
+1. If the ingestion is triggered by a [scheduled refresh](../concepts/use-your-data.md#schedule-automatic-index-refreshes-azure-ai-search-only), the ingestion process starts at `[3]`.
+1. Azure OpenAI's `preprocessing-jobs` API implements the [Azure AI Search custom skill web API protocol](/azure/search/cognitive-search-custom-skill-web-api), and processes the documents in a queue.
+1. Azure OpenAI:
+ 1. Internally uses the indexer created earlier to crack the documents.
+ 1. Uses a heuristic-based algorithm to perform chunking, honoring table layouts and other formatting elements in the chunk boundary to ensure the best chunking quality.
+    1. If you choose to enable vector search, and `embeddingDeploymentName` is specified in the request header, uses that embedding model to vectorize the chunks.
+1. When all the data that the service is monitoring is processed, Azure OpenAI triggers another indexer.
+1. The indexer stores the processed data into an Azure AI Search service.
+
+For the managed identities used in service calls, only system assigned managed identities are supported. User assigned managed identities aren't supported.
+
+## Inference architecture
++
+When you send API calls to chat with an Azure OpenAI model on your data, the service needs to retrieve the index fields during inference to perform field mapping automatically when the mapping isn't explicitly set in the request. Therefore, the service requires the Azure OpenAI identity to have the `Search Service Contributor` role for the search service, even during inference.
+++
+## Resources setup
+
+Use the following sections to set up your resources for secure usage. If you plan to secure your resources, complete all of the following sections. For more information on inbound and outbound data flow, see the [Azure AI Search documentation](/azure/search/search-security-overview).
+
+## Security support for Azure OpenAI
++
+### Inbound security: networking
+
+You can configure Azure OpenAI service networking by allowing access from the **Selected Networks and Private Endpoints** section in the Azure portal.
+++
+If you use the [Azure Management REST API](/rest/api/cognitiveservices/accountmanagement/accounts/update), you can set `networkAcls.defaultAction` to `Deny`:
+
+```json
+...
+"networkAcls": {
+ "defaultAction": "Deny",
+ "ipRules": [
+ {
+ "value": "4.155.49.0/24"
+ }
+ ]
+},
+"privateEndpointConnections": [],
+"publicNetworkAccess": "Enabled"
+...
+```
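+
+For illustration, the configuration above could be applied with a management API call like the following sketch. The subscription ID, resource group, and account names are placeholders, and the `api-version` shown is an assumption; substitute the values and version you use.
+
+```bash
+# Sketch: restrict network access on the Azure OpenAI resource via the management API.
+az rest --method patch \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.CognitiveServices/accounts/my-openai-resource?api-version=2023-05-01" \
+  --body '{
+    "properties": {
+      "networkAcls": {
+        "defaultAction": "Deny",
+        "ipRules": [ { "value": "4.155.49.0/24" } ]
+      },
+      "publicNetworkAccess": "Enabled"
+    }
+  }'
+```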
+
+> [!NOTE]
+> To use Azure OpenAI Studio, you can't set `publicNetworkAccess` to `Disabled`. You need to add your local IP address to the IP rules so that Azure OpenAI Studio can call the Azure OpenAI API for both ingestion and inference from your browser.
+
+### Inbound security: trusted service
+
+To allow Azure AI Search to call Azure OpenAI's `preprocessing-jobs` endpoint as a custom skill web API while Azure OpenAI is network restricted, you need to configure Azure OpenAI so that Azure AI Search can bypass the network restriction as a trusted service. Azure OpenAI identifies the traffic from Azure AI Search by verifying the claims in the JSON Web Token (JWT). Azure AI Search must use system assigned managed identity authentication to call the custom skill web API. Set `networkAcls.bypass` to `AzureServices` from the management API. See the [virtual networks article](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#grant-access-to-trusted-azure-services-for-azure-openai) for more information.
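+
+As a sketch, setting the bypass through the same management API could look like the following. The resource names and `api-version` are placeholders/assumptions.
+
+```bash
+# Sketch: allow trusted Azure services to bypass the network restriction.
+az rest --method patch \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.CognitiveServices/accounts/my-openai-resource?api-version=2023-05-01" \
+  --body '{"properties": {"networkAcls": {"bypass": "AzureServices", "defaultAction": "Deny"}}}'
+```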
+
+### Outbound security: managed identity
+
+To allow other services to recognize Azure OpenAI via Azure Active Directory (Azure AD) authentication, you need to assign a managed identity to your Azure OpenAI service. The easiest way is to toggle on the system assigned managed identity in the Azure portal.
+
+You can also add a user assigned managed identity, but user assigned managed identities are only supported by the inference API, not by the ingestion API.
+
+> [!TIP]
+> Unless you are in an advanced stage of development and ready for production, we recommend using the system assigned managed identity.
+
+To set the managed identities via the management API, see [the management API reference documentation](/rest/api/cognitiveservices/accountmanagement/accounts/update#identity).
+
+```json
+
+"identity": {
+ "principalId": "12345678-abcd-1234-5678-abc123def", "tenantId": "1234567-abcd-1234-1234-abcd1234", "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/1234-5678-abcd-1234-1234abcd/resourceGroups/my-resource-group",
+ "principalId": "12345678-abcd-1234-5678-abcdefg1234",
+ "clientId": "12345678-abcd-efgh-1234-12345678"
+ }
+ }
+```
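+
+If you prefer the Azure CLI, enabling the system assigned identity can look like this minimal sketch. The resource names are placeholders, and command availability depends on your CLI version.
+
+```bash
+# Enable the system assigned managed identity on the Azure OpenAI resource.
+az cognitiveservices account identity assign \
+  --name my-openai-resource \
+  --resource-group my-resource-group
+```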
+
+## Security support for Azure AI Search
+
+### Inbound security: authentication
+As Azure OpenAI uses a managed identity to access Azure AI Search, you need to enable Azure AD based authentication in your Azure AI Search service. To do this in the Azure portal, select **Both** on the **Keys** tab.
++
+To enable Azure AD authentication via the REST API, set `authOptions` to `aadOrApiKey`. See the [Azure AI Search RBAC article](/azure/search/search-security-rbac?tabs=config-svc-rest%2Croles-portal%2Ctest-portal%2Ccustom-role-portal%2Cdisable-keys-portal#configure-role-based-access-for-data-plane) for more information.
+
+```json
+"disableLocalAuth": false,
+"authOptions": {
+ "aadOrApiKey": {
+ "aadAuthFailureMode": "http401WithBearerChallenge"
+ }
+}
+```
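+
+As an alternative sketch, the same options can be set with the Azure CLI. The service and resource group names are placeholders, and the parameter names are assumptions based on current CLI versions.
+
+```bash
+# Enable both Azure AD and API key authentication on the search service.
+az search service update \
+  --name mysearch \
+  --resource-group my-resource-group \
+  --auth-options aadOrApiKey \
+  --aad-auth-failure-mode http401WithBearerChallenge
+```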
+
+To use Azure OpenAI Studio, you can't disable the API key based authentication for Azure AI Search, because Azure OpenAI Studio uses the API key to call the Azure AI Search API from your browser.
+
+> [!TIP]
+> For the best security, when you are ready for production and no longer need to use Azure OpenAI Studio for testing, we recommend that you disable the API key. See the [Azure AI Search RBAC article](/azure/search/search-security-rbac?tabs=config-svc-portal%2Croles-portal%2Ctest-portal%2Ccustom-role-portal%2Cdisable-keys-portal#disable-api-key-authentication) for details.
+
+### Inbound security: networking
+
+Use **Selected networks** in the Azure portal. Azure AI Search doesn't support bypassing trusted services, so this is the most complex part of the setup. Create a private endpoint for Azure OpenAI on your data (a multitenant service managed by Microsoft), and link it to your Azure AI Search resource. This requires you to submit an [application form](https://aka.ms/applyacsvpnaoaioyd).
+
+> [!NOTE]
+> To use Azure OpenAI Studio, you can't disable public network access, and you need to add your local IP address to the IP rules, because Azure OpenAI Studio calls the search API from your browser to list available indexes.
++
+### Outbound security: managed identity
+
+To allow other services to recognize Azure AI Search via Azure AD authentication, you need to assign a managed identity to your Azure AI Search service. The easiest way is to toggle on the system assigned managed identity in the Azure portal.
++
+User assigned managed identities aren't supported.
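+
+A minimal CLI sketch for enabling the identity, assuming placeholder names and that your CLI version supports the `--identity-type` parameter:
+
+```bash
+# Enable the system assigned managed identity on the search service.
+az search service update \
+  --name mysearch \
+  --resource-group my-resource-group \
+  --identity-type SystemAssigned
+```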
+
+## Security support for Azure blob storage
+
+### Inbound security: networking
+In the Azure portal, go to your storage account's **Networking** tab and select **Enabled from selected virtual networks and IP addresses**.
++
+Make sure **Allow Azure services on the trusted services list to access this storage account** is selected, so Azure OpenAI and Azure AI Search can bypass the network restriction of your storage account when using a managed identity for authentication.
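+
+A minimal CLI sketch for this networking configuration, with placeholder names:
+
+```bash
+# Restrict the storage account and allow trusted Azure services to bypass the restriction.
+az storage account update \
+  --name mystorage \
+  --resource-group my-resource-group \
+  --default-action Deny \
+  --bypass AzureServices
+```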
+
+To use Azure OpenAI Studio, make sure to add your local IP address to the IP rules, so that Azure OpenAI Studio can upload files to the storage account from your browser.
+
+## Role assignments
+
+So far, you have set up each resource to work independently. Next, you need to allow the services to authorize each other. The minimum required role assignments are listed in the following table.
+
+|Role| Assignee | Resource | Description |
+|--|--|--|--|
+| `Search Index Data Reader` | Azure OpenAI | Azure AI Search | Inference service queries the data from the index. |
+| `Search Service Contributor` | Azure OpenAI | Azure AI Search | Inference service queries the index schema for auto fields mapping. Data ingestion service creates index, data sources, skill set, indexer, and queries the indexer status. |
+| `Storage Blob Data Contributor` | Azure OpenAI | Storage Account | Reads from the input container, and writes the pre-process result to the output container. |
+| `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Custom skill |
+| `Storage Blob Data Contributor` | Azure AI Search | Storage Account | Reads blobs and writes to the knowledge store. |
+| `Cognitive Services OpenAI Contributor` | Signed-in User | Azure OpenAI | Calls public ingestion or inference API from Azure OpenAI Studio.|
+
+See the [Azure RBAC documentation](/azure/role-based-access-control/role-assignments-portal) for instructions on setting these roles in the Azure portal. You can use the [available script on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/blob/main/scripts/role_assignment.sh) to add the role assignments programmatically. You need the `Owner` role on these resources to create role assignments.
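+
+For illustration, the first row of the table could be applied like the following sketch, with placeholder names and IDs:
+
+```bash
+# Look up the system assigned identity of the Azure OpenAI resource,
+# then grant it read access to the search index data.
+openaiPrincipalId=$(az cognitiveservices account show \
+  --name my-openai-resource \
+  --resource-group my-resource-group \
+  --query identity.principalId --output tsv)
+az role assignment create \
+  --assignee "$openaiPrincipalId" \
+  --role "Search Index Data Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Search/searchServices/mysearch"
+```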
+
+## Using the API
++
+### Local test setup
+
+Make sure your sign-in credential has `Cognitive Services OpenAI Contributor` role on your Azure OpenAI resource.
++
+Also, make sure that the IP address of your development machine is allowlisted in the IP rules, so that you can call the Azure OpenAI API.
++
+### Ingestion API
++
+See the [ingestion API reference article](/azure/ai-services/openai/reference#start-an-ingestion-job) for details on the request and response objects used by the ingestion API.
+
+Additional notes:
+
+* `JOB_NAME` in the API path will be used as the index name in Azure AI Search.
+* Use the `Authorization` header rather than the `api-key` header.
+* Explicitly set the `storageEndpoint` header. It's required when `storageConnectionString` is in the keyless format, which starts with `ResourceId=`.
+* Use `ResourceId=` format for `storageConnectionString`. This indicates that Azure OpenAI and Azure AI Search will use managed identity to authenticate the storage account, which is required to bypass network restrictions.
+* **Do not** set the `searchServiceAdminKey` header. The system-assigned identity of the Azure OpenAI resource will be used to authenticate Azure AI Search.
+* **Do not** set `embeddingEndpoint` or `embeddingKey`. Instead, use the `embeddingDeploymentName` header to enable text vectorization.
++
+**Submit job example**
+
+```bash
+accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com/ --query "accessToken" --output tsv)
+curl -i -X PUT https://my-resource.openai.azure.com/openai/extensions/on-your-data/ingestion-jobs/vpn1025a?api-version=2023-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $accessToken" \
+-H "storageEndpoint: https://mystorage.blob.core.windows.net/" \
+-H "storageConnectionString: ResourceId=/subscriptions/1234567-abcd-1234-5678-1234abcd/resourceGroups/my-resource/providers/Microsoft.Storage/storageAccounts/mystorage" \
+-H "storageContainer: my-container" \
+-H "searchServiceEndpoint: https://mysearch.search.windows.net" \
+-H "embeddingDeploymentName: ada" \
+-d \
+'
+{
+}
+'
+```
+
+**Get job status example**
+
+```bash
+accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com/ --query "accessToken" --output tsv)
+curl -i -X GET https://my-resource.openai.azure.com/openai/extensions/on-your-data/ingestion-jobs/abc1234?api-version=2023-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $accessToken"
+```
+
+### Inference API
+
+See the [inference API reference article](/azure/ai-services/openai/reference#completions-extensions) for details on the request and response objects used by the inference API.
+
+Additional notes:
+
+* **Do not** set `dataSources[0].parameters.key`. The service uses the system assigned managed identity to authenticate to Azure AI Search.
+* **Do not** set `embeddingEndpoint` or `embeddingKey`. Instead, to enable vector search (with `queryType` set properly), use `embeddingDeploymentName`.
+
+Example:
+
+```bash
+accessToken=$(az account get-access-token --resource https://cognitiveservices.azure.com/ --query "accessToken" --output tsv)
+curl -i -X POST https://my-resource.openai.azure.com/openai/deployments/turbo/extensions/chat/completions?api-version=2023-10-01-preview \
+-H "Content-Type: application/json" \
+-H "Authorization: Bearer $accessToken" \
+-d \
+'
+{
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "https://my-search-service.search.windows.net",
+ "indexName": "my-index",
+ "queryType": "vector",
+ "embeddingDeploymentName": "ada"
+ }
+ }
+ ],
+ "messages": [
+ {
+ "role": "user",
+ "content": "Who is the primary DRI for QnA v2 Authoring service?"
+ }
+ ]
+}
+'
+```
+
+## Azure OpenAI Studio
+
+You should be able to use all Azure OpenAI Studio features, including both ingestion and inference.
+
+## Web app
+The web app published from the Studio communicates with Azure OpenAI. If Azure OpenAI is network restricted, the web app needs to be set up correctly for outbound networking.
+
+1. Configure Azure OpenAI to allow inbound traffic from your virtual network.
+
+ :::image type="content" source="../media/use-your-data/web-app-configure-inbound-traffic.png" alt-text="A screenshot showing inbound traffic configuration for the web app." lightbox="../media/use-your-data/web-app-configure-inbound-traffic.png":::
+
+1. Configure the web app for outbound virtual network integration, as shown in the sketch after these steps.
+
+ :::image type="content" source="../media/use-your-data/web-app-configure-outbound-traffic.png" alt-text="A screenshot showing outbound traffic configuration for the web app." lightbox="../media/use-your-data/web-app-configure-outbound-traffic.png":::
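+
+As a sketch, step 2 can also be done from the CLI, assuming placeholder names for the app, virtual network, and subnet:
+
+```bash
+# Integrate the web app with a subnet for outbound traffic.
+az webapp vnet-integration add \
+  --name my-web-app \
+  --resource-group my-resource-group \
+  --vnet my-vnet \
+  --subnet my-webapp-subnet
+```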
++
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
- ignite-2023 Previously updated : 10/17/2023 Last updated : 11/30/2023 recommendations: false keywords: # What's new in Azure OpenAI Service
+## December 2023
+
+### Azure OpenAI on your data
+
+- Full VPN and private endpoint support for Azure OpenAI on your data, including security support for storage accounts, Azure OpenAI resources, and Azure AI Search service resources.
+- New article for using [Azure OpenAI on your data securely](./how-to/use-your-data-securely.md) by protecting data with virtual networks and private endpoints.
+ ## November 2023
+### New data source support in Azure OpenAI on your data
+- You can now use [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db.md#ingesting-your-data) as well as URLs/web addresses as data sources to ingest your data and chat with a supported Azure OpenAI model.
### GPT-4 Turbo Preview & GPT-3.5-Turbo-1106 released
Try out DALL-E 3 by following a [quickstart](./dall-e-quickstart.md).
- **Content Credentials in all DALL-E models**: AI-generated images from all DALL-E models now include a digital credential that discloses the content as AI-generated. Applications that display image assets can leverage the open source [Content Authenticity Initiative SDK](https://opensource.contentauthenticity.org/docs/js-sdk/getting-started/quick-start/) to display credentials in their AI-generated images. [Content Credentials in Azure OpenAI](/azure/ai-services/openai/concepts/content-credentials) - - **New RAI models** - **Jailbreak risk detection**: Jailbreak attacks are user prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message. The jailbreak risk detection model is optional (default off), and available in annotate and filter mode. It runs on user prompts.
Try out DALL-E 3 by following a [quickstart](./dall-e-quickstart.md).
- [Tutorial: fine-tuning GPT-3.5-Turbo](./tutorials/fine-tune.md)
+### Azure OpenAI on your data
+
+- New [custom parameters](./concepts/use-your-data.md#custom-parameters) for determining the number of retrieved documents and strictness.
+ - The strictness setting sets the threshold to categorize documents as relevant to your queries.
+ - The retrieved documents setting specifies the number of top-scoring documents from your data index used to generate responses.
+- You can see data ingestion/upload status in the Azure OpenAI Studio.
+- Support for [private endpoints & VPNs for blob containers](./how-to/use-your-data-securely.md#security-support-for-azure-blob-storage)
+ ## September 2023 ### GPT-4
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/faq.md
Previously updated : 07/18/2023 Last updated : 11/30/2023
#### Should I specify the source language in a request?
-If the language of the content in the source document is known, it's recommended to specify the source language in the request to get a better translation. If the document has content in multiple languages or the language is unknown, then don't specify the source language in the request. Document Translation automatically identifies language for each text segment and translates.
+If the language of the content in the source document is known, we recommend that you specify the source language in the request to get a better translation. If the document has content in multiple languages or the language is unknown, then don't specify the source language in the request. Document Translation automatically identifies language for each text segment and translates.
#### To what extent are the layout, structure, and formatting maintained?
-When text is translated from the source to target language, the overall length of translated text may differ from source. The result could be reflow of text across pages. The same fonts may not be available both in source and target language. In general, the same font style is applied in target language to retain formatting closer to source.
+When text is translated from the source to target language, the overall length of translated text can differ from source. The result could be reflow of text across pages. The same fonts aren't always available in both source and target language. In general, the same font style is applied in target language to retain formatting closer to source.
#### Will the text in an image within a document gets translated?
-No. The text in an image within a document won't get translated.
+No. The text in an image within a document isn't translated.
#### Can Document Translation translate content from scanned documents? Yes. Document Translation translates content from _scanned PDF_ documents.
-#### Will my document be translated if it's password protected?
+#### Can encrypted or password-protected documents be translated?
-No. If your scanned or text-embedded PDFs are password-locked, you must remove the lock before submission.
+No. The service can't translate encrypted or password-protected documents. If your scanned or text-embedded PDFs are password-locked, you must remove the lock before submission.
#### If I'm using managed identities, do I also need a SAS token URL?
-No. Don't include SAS token URLSΓÇöyour requests will fail. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests.
+No. Don't include SAS token-appended URLS. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests.
+
+#### Which PDF format renders the best results?
+
+PDF documents generated from digital file formats (also known as "native" PDFs) provide optimal output. Scanned PDFs are images of printed documents scanned into an electronic format. Translating scanned PDF files can result in loss of the original formatting, layout, and style, and affect the quality of the translation.
ai-studio Ai Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md
description: This article introduces concepts about Azure AI resources. -+ - ignite-2023
ai-studio Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/connections.md
description: This article introduces connections in Azure AI Studio -+ - ignite-2023
ai-studio Content Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/content-filtering.md
description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI Studio. -+ - ignite-2023
ai-studio Deployments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/deployments-overview.md
description: Learn about deploying models, flows, and web apps with Azure AI Studio. -+ - ignite-2023
ai-studio Evaluation Approach Gen Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-approach-gen-ai.md
description: Explore the broader domain of monitoring and evaluating large language models through the establishment of precise metrics, the development of test sets for measurement, and the implementation of iterative testing. -+ - ignite-2023
ai-studio Evaluation Improvement Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-improvement-strategies.md
description: Explore various strategies for addressing the challenges posed by large language models and mitigating potential harms. -+ - ignite-2023
ai-studio Evaluation Metrics Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-metrics-built-in.md
description: Discover the supported built-in metrics for evaluating large language models, understand their application and usage, and learn how to interpret them effectively. -+ - ignite-2023
ai-studio Rbac Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md
description: This article introduces role-based access control in Azure AI Studio -+ - ignite-2023
ai-studio Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/retrieval-augmented-generation.md
description: This article introduces retrieval augmented generation for use in generative AI applications. -+ - ignite-2023
ai-studio Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/autoscale.md
description: Learn how you can manage and increase quotas for resources with Azure AI Studio. -+ - ignite-2023
ai-studio Cli Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/cli-install.md
description: This article provides instructions on how to install and get started with the Azure AI CLI. -+ - ignite-2023
ai-studio Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/commitment-tier.md
description: Learn how to sign up for commitment tier pricing instead of pay-as-you-go pricing. -+ - ignite-2023
ai-studio Configure Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md
description: Learn how to configure a managed network for Azure AI -+ - ignite-2023
ai-studio Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md
description: Learn how to configure a private link for Azure AI -+ Last updated 11/15/2023
ai-studio Connections Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md
description: Learn how to add a new connection in Azure AI Studio -+ - ignite-2023
ai-studio Costs Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/costs-plan-manage.md
description: Learn how to plan for and manage costs for Azure AI Studio by using cost analysis in the Azure portal. -+ - ignite-2023
ai-studio Create Azure Ai Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-azure-ai-resource.md
description: This article describes how to create and manage an Azure AI resource -+ - ignite-2023
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
description: This article provides instructions on how to create and manage compute instances in Azure AI Studio. -+ - ignite-2023
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
description: Learn how to create and manage prompt flow runtimes in Azure AI Studio. -+ - ignite-2023
ai-studio Create Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md
description: This article describes how to create an Azure AI Studio project. -+ - ignite-2023
ai-studio Data Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-add.md
description: Learn how to add and manage data in your Azure AI project -+ - ignite-2023
ai-studio Deploy Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models.md
description: Learn how to deploy large language models with Azure AI Studio. -+ - ignite-2023
ai-studio Evaluate Flow Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-flow-results.md
description: This article provides instructions on how to view evaluation results in Azure AI Studio. -+ - ignite-2023
ai-studio Evaluate Generative Ai App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-generative-ai-app.md
description: Evaluate your generative AI application with Azure AI Studio UI and SDK. -+ Last updated 11/15/2023
ai-studio Evaluate Prompts Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-prompts-playground.md
description: Quickly test and evaluate prompts in Azure AI Studio playground. -+ - ignite-2023
ai-studio Flow Bulk Test Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-bulk-test-evaluation.md
description: Learn how to submit batch run and use built-in evaluation methods in prompt flow to evaluate how well your flow performs with a large dataset with Azure AI Studio. -+ - ignite-2023
ai-studio Flow Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md
description: Learn how to deploy a flow as a managed online endpoint for real-time inference with Azure AI Studio. -+ - ignite-2023
ai-studio Flow Develop Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-develop-evaluation.md
description: Learn how to customize or create your own evaluation flow tailored to your tasks and objectives, and then use in a batch run as an evaluation method in prompt flow with Azure AI Studio. -+ - ignite-2023
ai-studio Flow Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-develop.md
description: This article provides instructions on how to build with prompt flow. -+ - ignite-2023
ai-studio Flow Tune Prompts Using Variants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-tune-prompts-using-variants.md
description: Learn how to tune prompts using variants in Prompt flow with Azure AI Studio. -+ - ignite-2023
ai-studio Generate Data Qa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/generate-data-qa.md
description: This article provides instructions on how to generate question and answer pairs from your source dataset. -+ - ignite-2023
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
description: Learn how to create and use a vector index for performing Retrieval Augmented Generation (RAG). -+ - ignite-2023
ai-studio Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog.md
description: This article introduces foundation model capabilities and the model catalog in Azure AI Studio. -+ - ignite-2023
ai-studio Models Foundation Azure Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/models-foundation-azure-ai.md
description: This article introduces Azure AI capabilities in Azure AI Studio. -+ - ignite-2023
ai-studio Monitor Quality Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/monitor-quality-safety.md
description: Learn how to monitor quality and safety of deployed applications with Azure AI Studio. -+ - ignite-2023
ai-studio Content Safety Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/content-safety-tool.md
description: This article introduces the Content Safety tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Embedding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/embedding-tool.md
description: This article introduces the Embedding tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Faiss Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/faiss-index-lookup-tool.md
description: This article introduces the Faiss Index Lookup tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/llm-tool.md
description: This article introduces the LLM tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Prompt Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-tool.md
description: This article introduces the Prompt tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
description: This article introduces the Python tool for flows in Azure AI Studio. -+ Last updated 11/15/2023
ai-studio Serp Api Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md
description: This article introduces the Serp API tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Vector Db Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-db-lookup-tool.md
description: This article introduces the Vector DB Lookup tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Vector Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/vector-index-lookup-tool.md
description: This article introduces the Vector index lookup tool for flows in Azure AI Studio. -+ - ignite-2023
ai-studio Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow.md
description: This article introduces prompt flow in Azure AI Studio. -+ - ignite-2023
ai-studio Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/quota.md
description: This article provides instructions on how to manage and increase quotas for resources with Azure AI Studio. -+ - ignite-2023
ai-studio Sdk Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/sdk-install.md
description: This article provides instructions on how to get started with the Azure AI SDK. -+ - ignite-2023
ai-studio Simulator Interaction Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/simulator-interaction-data.md
description: This article provides instructions on how to use the Azure AI simulator for interaction data. -+ - ignite-2023
ai-studio Troubleshoot Deploy And Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md
description: This article provides instructions on how to troubleshoot your deployments and monitors in Azure AI Studio. -+ - ignite-2023
ai-studio Vscode Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/vscode-web.md
description: This article provides instructions on how to get started with Azure AI projects in VS Code (Web). -+ - ignite-2023
ai-studio Content Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/content-safety.md
description: Use this article to moderate text and images with content safety in Azure AI Studio. -+ - ignite-2023
ai-studio Hear Speak Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md
description: Hear and speak with chat models in the Azure AI Studio playground. -+ - ignite-2023
ai-studio Playground Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/playground-completions.md
description: Use this article to generate product name ideas in the Azure AI Studio playground. -+ - ignite-2023
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
description: Use this article to deploy a web app for chat on your data in the Azure AI Studio playground. -+ - ignite-2023
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
description: Use this article to build and deploy a question and answer copilot with prompt flow in Azure AI Studio -+ Last updated 11/15/2023
ai-studio Screen Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md
description: This tutorial guides you through using Azure AI Studio with a screen reader. -+ - ignite-2023
ai-studio What Is Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md
description: Azure AI Studio brings together capabilities from across multiple A
keywords: Azure AI services, cognitive-+ Last updated 11/15/2023
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
You can provide outbound (egress) connectivity to the internet for Overlay pods
You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You cannot configure ingress connectivity using Azure App Gateway. For details see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay).
+## Limitations
+
+Azure CNI Overlay networking in AKS currently has the following limitations:
+
+* If you use your own subnet to deploy the cluster, the names of the subnet, the virtual network, and the resource group that contains the virtual network must be 63 characters or less. These names are used as labels on AKS worker nodes, so they're subject to [Kubernetes label syntax rules](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). You can verify the labels with the sketch after this list.
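+
+A quick sketch for verifying those labels, assuming `kubectl` access to the cluster:
+
+```bash
+# The subnet, virtual network, and resource group names appear among the node labels,
+# which is why they're subject to the 63-character label limit.
+kubectl get nodes --show-labels
+```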
+ ## Regional availability for ARM64 node pools Azure CNI Overlay is currently unavailable for ARM64 node pools in the following regions:
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kub
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 11/20/2023 Last updated : 11/30/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
mountOptions:
### Create NFS file share storage class
-Create a file named `nfs-sc.yaml` and copy the manifest below.
+Create a file named `nfs-sc.yaml` and copy the manifest below. For a list of supported `mountOptions`, see [NFS mount options][nfs-file-share-mount-options].
```yml apiVersion: storage.k8s.io/v1
The output of the commands resembles the following example:
[nfs-overview]:/windows-server/storage/nfs/nfs-overview [kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec [csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md
-[data-plane-api]: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azcore/internal/shared/shared.go
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply - <!-- LINKS - internal --> [csi-drivers-overview]: csi-storage-drivers.md [azure-disk-csi]: azure-disk-csi.md
The output of the commands resembles the following example:
[concepts-storage]: concepts-storage.md [node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks [storage-skus]: ../storage/common/storage-redundancy.md
-[storage-tiers]: ../storage/files/storage-files-planning.md#storage-tiers
[private-endpoint-overview]: ../private-link/private-endpoint-overview.md [persistent-volume]: concepts-storage.md#persistent-volumes [share-snapshots-overview]: ../storage/files/storage-snapshots-files.md
-[access-tiers-overview]: ../storage/blobs/access-tiers-overview.md
-[tag-resources]: ../azure-resource-manager/management/tag-resources.md
[statically-provision-a-volume]: azure-csi-files-storage-provision.md#statically-provision-a-volume [azure-private-endpoint-dns]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration [azure-netapp-files-mount-options-best-practices]: ../azure-netapp-files/performance-linux-mount-options.md#rsize-and-wsize
+[nfs-file-share-mount-options]: ../storage/files/storage-files-how-to-mount-nfs-shares.md#mount-options
aks Quick Windows Container Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-portal.md
Title: Create a Windows Server container on an Azure Kubernetes Service (AKS) cl
description: Learn how to quickly create a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure portal. Previously updated : 08/03/2023 Last updated : 11/30/2023 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
The ASP.NET sample application is provided as part of the [.NET Framework Sample
spec: type: LoadBalancer ports:
- - protocol: TCP
+ - protocol: TCP
port: 80 selector: app: sample
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Title: Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell description: Learn how to quickly create a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 07/11/2023 Last updated : 11/30/2023 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
The ASP.NET sample application is provided as part of the [.NET Framework Sample
spec: type: LoadBalancer ports:
- - protocol: TCP
+ - protocol: TCP
port: 80 selector: app: sample
aks Operator Best Practices Advanced Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-advanced-scheduler.md
spec:
cpu: 4.0 memory: 16Gi nodeSelector:
- hardware:
- values: highmem
+ hardware: highmem
``` When you use these scheduler options, work with your application developers and owners to allow them to correctly define their pod specifications.
This article focused on advanced Kubernetes scheduler features. For more informa
[aks-best-practices-isolation]: operator-best-practices-cluster-isolation.md [use-multiple-node-pools]: create-node-pools.md [taint-node-pool]: manage-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool
-[use-gpus-aks]: gpu-cluster.md
+[use-gpus-aks]: gpu-cluster.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Note the following important changes to make before you upgrade to any of the av
> [!NOTE] > Alias minor version requires Azure CLI version 2.37 or above as well as API version 20220401 or above. Use `az upgrade` to install the latest version of the CLI.
-AKS allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster runs **`1.21.7`**, which is the latest GA patch version of *1.21*.
+AKS allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster runs **`1.21.7`**, which is the latest GA patch version of *1.21*. If you want to upgrade your patch version within the same minor version, use [auto-upgrade](./auto-upgrade-cluster.md#use-cluster-auto-upgrade).
-When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` doesn't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` triggers an upgrade to the latest GA `1.15` patch.
-
-To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The property `currentKubernetesVersion` shows the whole Kubernetes version.
+To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version.
``` {
New minor version | Supported Version List
-- | - 1.17.a | 1.17.a, 1.17.b, 1.16.c, 1.16.d, 1.15.e, 1.15.f
-When a new minor version is introduced, the oldest supported minor version and patch releases are deprecated and removed. For example, the current supported version list is:
+When a new minor version is introduced, the oldest supported minor version and its patch releases are deprecated and removed. For example, let's say the current supported version list is:
``` 1.17.a
When a new minor version is introduced, the oldest supported minor version and p
When AKS releases 1.18.\*, all the 1.15.\* versions go out of support 30 days later. - AKS also supports a maximum of two **patch** releases of a given minor version. For example, given the following supported versions: ```
Downgrades aren't supported.
Additionally, AKS doesn't make any runtime or other guarantees for clusters outside of the supported versions list.
-### What happens when a user scales a Kubernetes cluster with a minor version that isn't supported?
+### What happens when you scale a Kubernetes cluster with a minor version that isn't supported?
For minor versions not supported by AKS, scaling in or out should continue to work. Since there are no guarantees with quality of service, we recommend upgrading to bring your cluster back into support.
-### Can a user stay on a Kubernetes version forever?
+### Can you stay on a Kubernetes version forever?
+
+If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you don't take further action, Azure reserves the right to automatically upgrade your cluster on your behalf.
-If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you don't take further action, Azure reserves the right to automatically upgrade your cluster on your behalf.
### What version does the control plane support if the node pool isn't in one of the supported AKS versions?
api-management Http Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md
type User {
<value>application/json</value> </set-header> <set-body>@{
- var args = context.Request.Body.As<JObject>(true)["arguments"];
+ var args = context.GraphQL.Arguments;
JObject jsonObject = new JObject(); jsonObject.Add("name", args["name"]); return jsonObject.ToString();
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | **Access to Azure Key Vault** | External & Internal | | * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / EventHub | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and [Azure Monitor](api-management-howto-use-azure-monitor.md) (optional) | External & Internal | | * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
-| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
+| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | **Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md)** | External & Internal |
| * / 6380 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access external Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal | | * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access internal Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal | | * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](rate-limit-policy.md) policies between machines (optional) | External & Internal |
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / Azure Event Hubs | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional)| External & Internal | | * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal | | * / 443, 12000 | Outbound | TCP | VirtualNetwork / AzureCloud | Health and Monitoring Extension & Dependency on Event Grid (if events notification activated) (optional) | External & Internal |
-| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
+| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | **Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md)** | External & Internal |
| * / 6380 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access external Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal | | * / 6381 - 6383 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access internal Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal | | * / 4290 | Inbound & Outbound | UDP | VirtualNetwork / VirtualNetwork | Sync Counters for [Rate Limit](rate-limit-policy.md) policies between machines (optional) | External & Internal |
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md
This sample script creates an app in App Service with its related resources, and
:::code language="azurecli" source="~/azure_cli_scripts/app-service/deploy-vsts-continuous/deploy-vsts-continuous-webapp-only.sh" id="FullScript":::
-### To configure continuous deployment from GitHub
+### To configure continuous deployment from Azure DevOps
-Create the following variables containing your GitHub information.
+Create the following variables containing your Azure DevOps information.
```azurecli gitrepo=<Replace with your Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) repo URL>
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
ms.devlang: csharp,java,javascript,python Last updated 04/12/2022-+ # Tutorial: Connect to Azure databases from App Service without secrets using a managed identity
Prepare your environment for the Azure CLI.
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
-<a name='1-grant-database-access-to-azure-ad-user'></a>
-## 1. Grant database access to Microsoft Entra user
+## 1. Install the Service Connector passwordless extension
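The passwordless connection commands used in this tutorial come from a Service Connector Azure CLI extension. A minimal sketch of installing it (assuming the extension name `serviceconnector-passwordless`):

```azurecli-interactive
# Install (or upgrade) the Service Connector passwordless extension for the Azure CLI.
az extension add --name serviceconnector-passwordless --upgrade
```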
-First, enable Microsoft Entra authentication to the Azure database by assigning a Microsoft Entra user as the administrator of the server. For the scenario in the tutorial, you'll use this user to connect to your Azure database from the local development environment. Later, you set up the managed identity for your App Service app to connect from within Azure.
-> [!NOTE]
-> This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Microsoft Entra ID. For more information on allowed Microsoft Entra users, see [Microsoft Entra features and limitations in SQL Database](/azure/azure-sql/database/authentication-aad-overview#azure-ad-features-and-limitations).
-
-1. If your Microsoft Entra tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Microsoft Entra ID](../active-directory/fundamentals/add-users-azure-active-directory.md).
+## 2. Create a passwordless connection
-1. Find the object ID of the Microsoft Entra user using the [`az ad user list`](/cli/azure/ad/user#az-ad-user-list) and replace *\<user-principal-name>*. The result is saved to a variable.
+Next, create a passwordless connection with Service Connector.
- ```azurecli-interactive
- azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].id --output tsv)
- ```
+> [!TIP]
+> The Azure portal can help you compose the commands below. In the Azure portal, go to your [Azure App Service](../service-connector/quickstart-portal-app-service-connection.md) resource, select **Service Connector** from the left menu, and then select **Create**. Fill out the form with all required parameters. Azure automatically generates the connection creation command, which you can copy and use in the Azure CLI or run in Azure Cloud Shell.
-# [Azure SQL Database](#tab/sqldatabase)
+# [Azure SQL Database](#tab/sqldatabase-sc)
-3. Add this Microsoft Entra user as an Active Directory administrator using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az-sql-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
-
- ```azurecli-interactive
- az sql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name ADMIN --object-id $azureaduser
- ```
+The following Azure CLI command uses a `--client-type` parameter.
- For more information on adding an Active Directory administrator, see [Provision a Microsoft Entra administrator for your server](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance)
+1. Optionally, run the command `az webapp connection create sql -h` to get the supported client types.
-# [Azure Database for MySQL](#tab/mysql)
+1. Choose a client type and run the corresponding command. Replace the placeholders below with your own information.
-3. Add this Microsoft Entra user as an Active Directory administrator using [`az mysql server ad-admin create`](/cli/azure/mysql/server/ad-admin#az-mysql-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+ # [User-assigned managed identity](#tab/userassigned-sc)
```azurecli-interactive
- az mysql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
+ az webapp connection create sql \
+ --resource-group <group-name> \
+    --name <app-name> \
+ --target-resource-group <sql-group-name> \
+ --server <sql-name> \
+ --database <database-name> \
+ --user-identity client-id=<client-id> subs-id=<subscription-id> \
+ --client-type <client-type>
```
- > [!NOTE]
- > The command is currently unavailable for Azure Database for MySQL Flexible Server.
-
-# [Azure Database for PostgreSQL](#tab/postgresql)
-
-3. Add this Microsoft Entra user as an Active Directory administrator using [`az postgres server ad-admin create`](/cli/azure/postgres/server/ad-admin#az-postgres-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+ # [System-assigned managed identity](#tab/systemassigned-sc)
```azurecli-interactive
- az postgres server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
+ az webapp connection create sql \
+ --resource-group <group-name> \
+    --name <app-name> \
+ --target-resource-group <group-name> \
+ --server <sql-name> \
+ --database <database-name> \
+ --system-identity \
+ --client-type <client-type>
```
- > [!NOTE]
- > The command is currently unavailable for Azure Database for PostgreSQL Flexible Server.
+ --
+# [Azure Database for MySQL](#tab/mysql-sc)
-## 2. Configure managed identity for app
+> [!NOTE]
+> For Azure Database for MySQL - Flexible Server, you must first [manually set up Microsoft Entra authentication](../mysql/flexible-server/how-to-azure-ad.md), which requires a separate user-assigned managed identity and specific Microsoft Graph permissions. This step can't be automated.
-Next, you configure your App Service app to connect to SQL Database with a managed identity.
+1. Manually [set up Microsoft Entra authentication for Azure Database for MySQL - Flexible Server](../mysql/flexible-server/how-to-azure-ad.md).
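    A minimal sketch (an assumption, not part of the original steps): after the manual setup, capture the user-assigned identity's resource ID, which later commands reference as `$IDENTITY_RESOURCE_ID`. The identity name is illustrative:

    ```azurecli-interactive
    # Store the resource ID of the user-assigned managed identity used for
    # the MySQL flexible server's Microsoft Entra authentication setup.
    IDENTITY_RESOURCE_ID=$(az identity show \
        --name <identity-name> \
        --resource-group <group-name> \
        --query id \
        --output tsv)
    ```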
-1. Enable a managed identity for your App Service app with the [az webapp identity assign](/cli/azure/webapp/identity#az-webapp-identity-assign) command in the Cloud Shell. In the following command, replace *\<app-name>*.
+1. Optionally run the command `az webapp connection create mysql-flexible -h` to get the supported client types.
- # [System-assigned identity](#tab/systemassigned/sqldatabase)
+1. Choose a client type and run the corresponding command. The following Azure CLI command uses a `--client-type` parameter.
- ```azurecli-interactive
- az webapp identity assign --resource-group <group-name> --name <app-name>
- ```
-
- # [System-assigned identity](#tab/systemassigned/mysql)
+ # [User-assigned managed identity](#tab/userassigned-sc)
```azurecli-interactive
- az webapp identity assign --resource-group <group-name> --name <app-name> --output tsv --query principalId
- az ad sp show --id <output-from-previous-command> --output tsv --query appId
+ az webapp connection create mysql-flexible \
+ --resource-group <group-name> \
+    --name <app-name> \
+ --target-resource-group <group-name> \
+ --server <mysql-name> \
+ --database <database-name> \
+    --user-identity client-id=<client-id> subs-id=<subscription-id> mysql-identity-id=$IDENTITY_RESOURCE_ID \
+ --client-type <client-type>
```
- The output of [az ad sp show](/cli/azure/ad/sp#az-ad-sp-show) is the application ID of the system-assigned identity. You'll need it later.
-
- # [System-assigned identity](#tab/systemassigned/postgresql)
+ # [System-assigned managed identity](#tab/systemassigned-sc)
```azurecli-interactive
- az webapp identity assign --resource-group <group-name> --name <app-name> --output tsv --query principalId
- az ad sp show --id <output-from-previous-command> --output tsv --query appId
+ az webapp connection create mysql-flexible \
+ --resource-group <group-name> \
+    --name <app-name> \
+ --target-resource-group <group-name> \
+ --server <mysql-name> \
+ --database <database-name> \
+ --system-identity mysql-identity-id=$IDENTITY_RESOURCE_ID \
+ --client-type <client-type>
```
- The output of [az ad sp show](/cli/azure/ad/sp#az-ad-sp-show) is the application ID of the system-assigned identity. You'll need it later.
-
- # [User-assigned identity](#tab/userassigned)
+ --
- ```azurecli-interactive
- # Create a user-assigned identity and get its client ID
- az identity create --name <identity-name> --resource-group <group-name> --output tsv --query "id"
- # assign identity to app
- az webapp identity assign --resource-group <group-name> --name <app-name> --identities <output-of-previous-command>
- # get client ID of identity for later
- az webapp identity show --name <identity-name> --resource-group <group-name> --output tsv --query "clientId"
- ```
+# [Azure Database for PostgreSQL](#tab/postgresql-sc)
- The output of [az webapp identity show](/cli/azure/webapp/identity#az-webapp-identity-show) is the client ID of the user-assigned identity. You'll need it later.
+The following Azure CLI command uses a `--client-type` parameter.
- --
+1. Optionally run the command `az webapp connection create postgres-flexible -h` to get a list of all supported client types.
- > [!NOTE]
- > To enable managed identity for a [deployment slot](deploy-staging-slots.md), add `--slot <slot-name>` and use the name of the slot in *\<slot-name>*.
-
-1. The identity needs to be granted permissions to access the database. In the Cloud Shell, sign in to your database with the following command. Replace _\<server-name>_ with your server name, _\<database-name>_ with the database name your app uses, and _\<aad-user-name>_ and _\<aad-password>_ with your Microsoft Entra user's credentials from [1. Grant database access to Microsoft Entra user]().
+1. Choose a client type and run the corresponding command.
- # [Azure SQL Database](#tab/sqldatabase)
+ # [User-assigned managed identity](#tab/userassigned-sc)
```azurecli-interactive
- sqlcmd -S <server-name>.database.windows.net -d <database-name> -U <aad-user-name> -P "<aad-password>" -G -l 30
+ az webapp connection create postgres-flexible \
+ --resource-group <group-name> \
+    --name <app-name> \
+ --target-resource-group <group-name> \
+ --server <postgresql-name> \
+ --database <database-name> \
+    --user-identity client-id=<client-id> subs-id=<subscription-id> \
+    --client-type <client-type>
```
- # [Azure Database for MySQL](#tab/mysql)
+ # [System-assigned managed identity](#tab/systemassigned-sc)
```azurecli-interactive
- # Sign into Azure using the Azure AD user from "1. Grant database access to Azure AD user"
- az login --allow-no-subscriptions
- # Get access token for MySQL with the Azure AD user
- az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken
- # Sign into the MySQL server using the token
- mysql -h <server-name>.mysql.database.azure.com --user <aad-user-name>@<server-name> --enable-cleartext-plugin --password=<token-output-from-last-command> --ssl
+ az webapp connection create postgres-flexible \
+ --resource-group <group-name> \
+    --name <app-name> \
+ --target-resource-group <group-name> \
+ --server <postgresql-name> \
+ --database <database-name> \
+ --system-identity \
+ --client-type <client-type>
```
- The full username *\<aad-user-name>@\<server-name>* looks like `admin1@contoso.onmicrosoft.com@mydbserver1`.
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```azurecli-interactive
- # Sign into Azure using the Azure AD user from "1. Grant database access to Azure AD user"
- az login --allow-no-subscriptions
- # Get access token for PostgreSQL with the Azure AD user
- az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken
- # Sign into the Postgres server
- psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database-name> user=<aad-user-name>@<server-name> password=<token-output-from-last-command>"
- ```
-
- The full username *\<aad-user-name>@\<server-name>* looks like `admin1@contoso.onmicrosoft.com@mydbserver1`.
- --
-1. Run the following database commands to grant the permissions your app needs. For example,
-
- # [System-assigned identity](#tab/systemassigned/sqldatabase)
-
- ```sql
- CREATE USER [<app-name>] FROM EXTERNAL PROVIDER;
- ALTER ROLE db_datareader ADD MEMBER [<app-name>];
- ALTER ROLE db_datawriter ADD MEMBER [<app-name>];
- ALTER ROLE db_ddladmin ADD MEMBER [<app-name>];
- GO
- ```
-
- For a [deployment slot](deploy-staging-slots.md), use *\<app-name>/slots/\<slot-name>* instead of *\<app-name>*.
-
- # [User-assigned identity](#tab/userassigned/sqldatabase)
-
- ```sql
- CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
- ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
- ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
- ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
- GO
- ```
-
- # [System-assigned identity](#tab/systemassigned/mysql)
+1. Grant permissions on pre-created tables to the database user that Service Connector created, as in the sketch that follows.
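A hedged sketch of this step (assumptions: PostgreSQL, and a database user created by Service Connector whose name appears in the generated connection string):

```sql
-- Sketch: grant data access on tables that existed before the connection was created.
-- "<database-user-name>" is illustrative; use the user from the generated connection string.
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO "<database-user-name>";
```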
- ```sql
- SET aad_auth_validate_oids_in_tenant = OFF;
- CREATE AADUSER '<mysql-user-name>' IDENTIFIED BY '<application-id-of-system-assigned-identity>';
- GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON *.* TO '<mysql-user-name>'@'%' WITH GRANT OPTION;
- FLUSH PRIVILEGES;
- ```
-
- Whatever name you choose for *\<mysql-user-name>*, it's the MySQL user you'll use to connect to the database later from your code in App Service.
-
- # [User-assigned identity](#tab/userassigned/mysql)
-
- ```sql
- SET aad_auth_validate_oids_in_tenant = OFF;
- CREATE AADUSER '<mysql-user-name>' IDENTIFIED BY '<client-id-of-user-assigned-identity>';
- GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON *.* TO '<mysql-user-name>'@'%' WITH GRANT OPTION;
- FLUSH PRIVILEGES;
- ```
-
- Whatever name you choose for *\<mysql-user-name>*, it's the MySQL user you'll use to connect to the database later from your code in App Service.
- # [System-assigned identity](#tab/systemassigned/postgresql)
-
- ```sql
- SET aad_validate_oids_in_tenant = off;
- CREATE ROLE <postgresql-user-name> WITH LOGIN PASSWORD '<application-id-of-system-assigned-identity>' IN ROLE azure_ad_user;
- ```
-
- Whatever name you choose for *\<postgresql-user-name>*, it's the PostgreSQL user you'll use to connect to the database later from your code in App Service.
-
- # [User-assigned identity](#tab/userassigned/postgresql)
+--
- ```sql
- SET aad_validate_oids_in_tenant = off;
- CREATE ROLE <postgresql-user-name> WITH LOGIN PASSWORD '<application-id-of-user-assigned-identity>' IN ROLE azure_ad_user;
- ```
+This Service Connector command completes the following tasks in the background:
- Whatever name you choose for *\<postgresql-user-name>*, it's the PostgreSQL user you'll use to connect to the database later from your code in App Service.
+* Enable a system-assigned managed identity, or assign a user-assigned managed identity, for the app `<app-name>` hosted by Azure App Service.
+* Set the Microsoft Entra admin to the current signed-in user.
+* Add a database user for the system-assigned or user-assigned managed identity, and grant all privileges on the database `<database-name>` to this user. You can find the username in the connection string in the preceding command output.
+* Set configurations named `AZURE_MYSQL_CONNECTIONSTRING`, `AZURE_POSTGRESQL_CONNECTIONSTRING`, or `AZURE_SQL_CONNECTIONSTRING` on the Azure resource, based on the database type.
+* For App Service, the configurations are set in the **App Settings** blade.
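Because App Service exposes app settings to your code as environment variables, your app can consume the generated connection string directly. A minimal sketch in Python (the setting name depends on your database type, per the list above):

```python
import os

# Service Connector stores the connection string as an app setting;
# App Service surfaces app settings as environment variables.
conn_str = os.environ["AZURE_SQL_CONNECTIONSTRING"]  # or AZURE_MYSQL_/AZURE_POSTGRESQL_CONNECTIONSTRING
```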
- --
+If you encounter any problems when creating a connection, see [Troubleshooting](../service-connector/tutorial-passwordless.md#troubleshooting) for help.
## 3. Modify your code

In this section, connectivity to the Azure database in your code follows the `DefaultAzureCredential` pattern for all language stacks. `DefaultAzureCredential` is flexible enough to adapt to both the development environment and the Azure environment. When running locally, it can retrieve the logged-in Azure user from the environment of your choice (Visual Studio, Visual Studio Code, Azure CLI, or Azure PowerShell). When running in Azure, it retrieves the managed identity. So it's possible to connect to the database both at development time and in production. The pattern is as follows:
-1. Instantiate a `DefaultAzureCredential` from the Azure Identity client library. If you're using a user-assigned identity, specify the client ID of the identity.
-1. Get an access token for the resource URI respective to the database type.
- - For Azure SQL Database: `https://database.windows.net/.default`
- - For Azure Database for MySQL: `https://ossrdbms-aad.database.windows.net/.default`
- - For Azure Database for PostgreSQL: `https://ossrdbms-aad.database.windows.net/.default`
-1. Add the token to your connection string.
-1. Open the connection.
-
-For Azure Database for MySQL and Azure Database for PostgreSQL, the database username that you created in [2. Configure managed identity for app](#2-configure-managed-identity-for-app) is also required in the connection string.
-
-# [.NET Framework](#tab/netfx)
-
-1. In Visual Studio, open the Package Manager Console and add the NuGet packages you need:
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```powershell
- Install-Package Azure.Identity
- Install-Package System.Data.SqlClient
- ```
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```powershell
- Install-Package Azure.Identity
- Install-Package MySql.Data
- ```
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```powershell
- Install-Package Azure.Identity
- Install-Package Npgsql
- ```
-
- --
-
-1. Connect to the Azure database by adding an access token. If you're using a user-assigned identity, make sure you uncomment the applicable lines.
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```csharp
- // Uncomment one of the two lines depending on the identity type
- //var credential = new Azure.Identity.DefaultAzureCredential(); // system-assigned identity
- //var credential = new Azure.Identity.DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure SQL Database
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://database.windows.net/.default" }));
-
- // Add the token to the SQL connection
- var connection = new System.Data.SqlClient.SqlConnection("Server=tcp:<server-name>.database.windows.net;Database=<database-name>;TrustServerCertificate=True");
- connection.AccessToken = token.Token;
-
- // Open the SQL connection
- connection.Open();
- ```
-
- For a more detailed tutorial, see [Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md).
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```csharp
- using Azure.Identity;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //var credential = new DefaultAzureCredential(); // system-assigned identity
- //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure Database for MySQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
-
- // Set MySQL user depending on the environment
- string user;
- if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
- user = "<aad-user-name>@<server-name>";
- else user = "<mysql-user-name>@<server-name>";
-
- // Add the token to the MySQL connection
- var connectionString = "Server=<server-name>.mysql.database.azure.com;" +
- "Port=3306;" +
- "SslMode=Required;" +
- "Database=<database-name>;" +
- "Uid=" + user+ ";" +
- "Password="+ token.Token;
- var connection = new MySql.Data.MySqlClient.MySqlConnection(connectionString);
-
- connection.Open();
- ```
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```csharp
- using Azure.Identity;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //var credential = new DefaultAzureCredential(); // system-assigned identity
- //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure Database for PostgreSQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
-
- // Check if in Azure and set user accordingly
- string postgresqlUser;
- if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
- postgresqlUser = "<aad-user-name>@<server-name>";
- else postgresqlUser = "<postgresql-user-name>@<server-name>";
-
- // Add the token to the PostgreSQL connection
- var connectionString = "Server=<server-name>.postgres.database.azure.com;" +
- "Port=5432;" +
- "Database=<database-name>;" +
- "User Id=" + postgresqlUser + ";" +
- "Password="+ token.Token;
- var connection = new Npgsql.NpgsqlConnection(connectionString);
-
- connection.Open();
- ```
-
- --
-
-# [.NET 6](#tab/dotnet)
-
-1. Install the .NET packages you need into your .NET project:
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```dotnetcli
- dotnet add package Microsoft.Data.SqlClient
- ```
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```dotnetcli
- dotnet add package Azure.Identity
- dotnet add package MySql.Data
- ```
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```dotnetcli
- dotnet add package Azure.Identity
- dotnet add package Npgsql
- ```
-
- --
-
-1. Connect to the Azure database by adding an access token. If you're using a user-assigned identity, make sure you uncomment the applicable lines.
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```csharp
- using Microsoft.Data.SqlClient;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //SqlConnection connection = new SqlConnection("Server=tcp:<server-name>.database.windows.net;Database=<database-name>;Authentication=Active Directory Default;TrustServerCertificate=True"); // system-assigned identity
- //SqlConnection connection = new SqlConnection("Server=tcp:<server-name>.database.windows.net;Database=<database-name>;Authentication=Active Directory Default;User Id=<client-id-of-user-assigned-identity>;TrustServerCertificate=True"); // user-assigned identity
-
- // Open the SQL connection
- connection.Open();
- ```
-
- [Microsoft.Data.SqlClient](/sql/connect/ado-net/sql/azure-active-directory-authentication?view=azuresqldb-current&preserve-view=true) provides integrated support of Microsoft Entra authentication. In this case, the [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication?view=azuresqldb-current&preserve-view=true#using-active-directory-default-authentication) uses `DefaultAzureCredential` to retrieve the required token for you and adds it to the database connection directly.
-
- For a more detailed tutorial, see [Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md).
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```csharp
- using Azure.Identity;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //var credential = new DefaultAzureCredential(); // system-assigned identity
- //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure Database for MySQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
-
- // Set MySQL user depending on the environment
- string user;
- if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
- user = "<aad-user-name>@<server-name>";
- else user = "<mysql-user-name>@<server-name>";
-
- // Add the token to the MySQL connection
- var connectionString = "Server=<server-name>.mysql.database.azure.com;" +
- "Port=3306;" +
- "SslMode=Required;" +
- "Database=<database-name>;" +
- "Uid=" + user+ ";" +
- "Password="+ token.Token;
- var connection = new MySql.Data.MySqlClient.MySqlConnection(connectionString);
-
- connection.Open();
- ```
-
- The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the MySQL connection as the password for the Azure identity. For more information, see [Connect with Managed Identity to Azure Database for MySQL](../postgresql/howto-connect-with-managed-identity.md).
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```csharp
- using Azure.Identity;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //var credential = new DefaultAzureCredential(); // system-assigned identity
- //var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure Database for PostgreSQL
- var token = credential.GetToken(new Azure.Core.TokenRequestContext(new[] { "https://ossrdbms-aad.database.windows.net/.default" }));
-
- // Check if in Azure and set user accordingly
- string postgresqlUser;
- if (String.IsNullOrEmpty(Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT")))
- postgresqlUser = "<aad-user-name>@<server-name>";
- else postgresqlUser = "<postgresql-user-name>@<server-name>";
-
- // Add the token to the PostgreSQL connection
- var connectionString = "Server=<server-name>.postgres.database.azure.com;" +
- "Port=5432;" +
- "Database=<database-name>;" +
- "User Id=" + postgresqlUser + ";" +
- "Password="+ token.Token;
- var connection = new Npgsql.NpgsqlConnection(connectionString);
-
- connection.Open();
- ```
-
- The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the PostgreSQL connection as the password for the Azure identity. For more information, see [Connect with Managed Identity to Azure Database for PostgreSQL](../postgresql/howto-connect-with-managed-identity.md).
-
- --
-
-# [Node.js](#tab/nodejs)
-
-1. Install the required npm packages you need into your Node.js project:
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```terminal
- npm install --save @azure/identity
- npm install --save tedious
- ```
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```terminal
- npm install --save @azure/identity
- npm install --save mysql2
- ```
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```terminal
- npm install --save @azure/identity
- npm install --save pg
- ```
-
- --
-
-1. Connect to the Azure database by adding an access token. If you're using a user-assigned identity, make sure you uncomment the applicable lines.
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```javascript
- const { Connection, Request } = require("tedious");
- const { DefaultAzureCredential } = require("@azure/identity");
-
- // Uncomment one of the two lines depending on the identity type
- //const credential = new DefaultAzureCredential(); // system-assigned identity
- //const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure SQL Database
- const accessToken = await credential.getToken("https://database.windows.net/.default");
-
- // Create connection to database
- const connection = new Connection({
- server: '<server-name>.database.windows.net',
- authentication: {
- type: 'azure-active-directory-access-token',
- options: {
- token: accessToken.token
- }
- },
- options: {
- database: '<database-name>',
- encrypt: true,
- port: 1433
- }
- });
-
- // Open the database connection
- connection.connect();
- ```
-
- The [tedious](https://tediousjs.github.io/tedious/) library also has an authentication type `azure-active-directory-msi-app-service`, which doesn't require you to retrieve the token yourself, but the use of `DefaultAzureCredential` in this example works both in App Service and in your local development environment. For more information, see [Quickstart: Use Node.js to query a database in Azure SQL Database or Azure SQL Managed Instance](/azure/azure-sql/database/connect-query-nodejs)
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```javascript
- const mysql = require('mysql2');
- const { DefaultAzureCredential } = require("@azure/identity");
-
- // Uncomment one of the two lines depending on the identity type
- //const credential = new DefaultAzureCredential(); // system-assigned identity
- //const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure Database for MySQL
- const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net/.default");
-
- // Set MySQL user depending on the environment
- if(process.env.IDENTITY_ENDPOINT) {
- var mysqlUser = '<mysql-user-name>@<server-name>';
- } else {
- var mysqlUser = '<aad-user-name>@<server-name>';
- }
-
- // Add the token to the MySQL connection
- var config =
- {
- host: '<server-name>.mysql.database.azure.com',
- user: mysqlUser,
- password: accessToken.token,
- database: '<database-name>',
- port: 3306,
- insecureAuth: true,
- authPlugins: {
- mysql_clear_password: () => () => {
- return Buffer.from(accessToken.token + '\0')
- }
- }
- };
-
- const conn = new mysql.createConnection(config);
-
- // Open the database connection
- conn.connect(
- function (err) {
- if (err) {
- console.log("!!! Cannot connect !!! Error:");
- throw err;
- }
- else
- {
- ...
- }
- });
- ```
-
- The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the [standard MySQL connection](../mysql/connect-nodejs.md) as the password of the Azure identity.
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```javascript
- const pg = require('pg');
- const { DefaultAzureCredential } = require("@azure/identity");
-
- // Uncomment one of the two lines depending on the identity type
- //const credential = new DefaultAzureCredential(); // system-assigned identity
- //const credential = new DefaultAzureCredential({ managedIdentityClientId: '<client-id-of-user-assigned-identity>' }); // user-assigned identity
-
- // Get token for Azure Database for PostgreSQL
- const accessToken = await credential.getToken("https://ossrdbms-aad.database.windows.net/.default");
-
- // Set PosrgreSQL user depending on the environment
- if(process.env.IDENTITY_ENDPOINT) {
- var postgresqlUser = '<postgresql-user-name>@<server-name>';
- } else {
- var postgresqlUser = '<aad-user-name>@<server-name>';
- }
-
- // Add the token to the PostgreSQL connection
- var config =
- {
- host: '<server-name>.postgres.database.azure.com',
- user: postgresqlUser,
- password: accessToken.token,
- database: '<database-name>',
- port: 5432
- };
-
- const client = new pg.Client(config);
-
- // Open the database connection
- client.connect(err => {
- if (err) throw err;
- else {
- // Do something with the connection...
- }
- });
-
- ```
+1. Instantiate a `DefaultAzureCredential` from the Azure Identity client library. If you're using a user-assigned identity, specify the client ID of the identity.
+2. Get an access token for the resource URI corresponding to the database type.
+ * For Azure SQL Database: `https://database.windows.net/.default`
+ * For Azure Database for MySQL: `https://ossrdbms-aad.database.windows.net/.default`
+ * For Azure Database for PostgreSQL: `https://ossrdbms-aad.database.windows.net/.default`
+3. Add the token to your connection string.
+4. Open the connection.
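To make these four steps concrete, here's a minimal Python sketch for Azure Database for PostgreSQL (assumptions: the `azure-identity` and `psycopg2` packages, and illustrative placeholders):

```python
from azure.identity import DefaultAzureCredential
import psycopg2

# Step 1: for a user-assigned identity, pass managed_identity_client_id="<client-id>".
credential = DefaultAzureCredential()

# Step 2: get an access token for the PostgreSQL resource URI.
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

# Steps 3 and 4: pass the token as the password and open the connection.
conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="<database-name>",
    user="<database-user-name>",  # the database username is also required for MySQL/PostgreSQL
    password=token.token,
    sslmode="require",
)
```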
- The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the [standard PostgreSQL connection](../postgresql/connect-nodejs.md) as the password of the Azure identity.
+# [Azure SQL Database](#tab/sqldatabase-sc)
- --
-
-# [Python](#tab/python)
-
-1. In your Python project, install the required packages.
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```terminal
- pip install azure-identity
- pip install pyodbc
- ```
- The required [ODBC Driver 17 for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server) is already installed in App Service. To run the same code locally, install it in your local environment too.
+# [Azure Database for MySQL](#tab/mysql-sc)
- # [Azure Database for MySQL](#tab/mysql)
-
- ```terminal
- pip install azure-identity
- pip install mysql-connector-python
- ```
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```terminal
- pip install azure-identity
- pip install psycopg2-binary
- ```
-
- --
-
-1. Connect to the Azure database by using an access token:
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```python
- from azure.identity import DefaultAzureCredential
- import pyodbc, struct
-
- # Uncomment one of the two lines depending on the identity type
- #credential = DefaultAzureCredential() # system-assigned identity
- #credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity
-
- # Get token for Azure SQL Database and convert to UTF-16-LE for SQL Server driver
- token = credential.get_token("https://database.windows.net/.default").token.encode("UTF-16-LE")
- token_struct = struct.pack(f'<I{len(token)}s', len(token), token)
-
- # Connect with the token
- SQL_COPT_SS_ACCESS_TOKEN = 1256
- connString = f"Driver={{ODBC Driver 17 for SQL Server}};SERVER=<server-name>.database.windows.net;DATABASE=<database-name>"
- conn = pyodbc.connect(connString, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct})
- ```
-
- The ODBC Driver 17 for SQL Server also supports an authentication type `ActiveDirectoryMsi`. You can connect from App Service without getting the token yourself, simply with the connection string `Driver={{ODBC Driver 17 for SQL Server}};SERVER=<server-name>.database.windows.net;DATABASE=<database-name>;Authentication=ActiveDirectoryMsi`. The difference with the above code is that it gets the token with `DefaultAzureCredential`, which works both in App Service and in your local development environment.
-
- For more information about PyODBC, see [PyODBC SQL Driver](/sql/connect/python/pyodbc/python-sql-driver-pyodbc).
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```python
- from azure.identity import DefaultAzureCredential
- import mysql.connector
- import os
-
- # Uncomment one of the two lines depending on the identity type
- #credential = DefaultAzureCredential() # system-assigned identity
- #credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity
-
- # Get token for Azure Database for MySQL
- token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")
-
- # Set MySQL user depending on the environment
- if 'IDENTITY_ENDPOINT' in os.environ:
- mysqlUser = '<mysql-user-name>@<server-name>'
- else:
- mysqlUser = '<aad-user-name>@<server-name>'
-
- # Connect with the token
- os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'
- config = {
- 'host': '<server-name>.mysql.database.azure.com',
- 'database': '<database-name>',
- 'user': mysqlUser,
- 'password': token.token
- }
- conn = mysql.connector.connect(**config)
- print("Connection established")
- ```
-
- The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the [standard MySQL connection](../mysql/connect-python.md) as the password of the Azure identity.
-
- The `LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN` environment variable enables the [Cleartext plugin](https://dev.mysql.com/doc/refman/8.0/en/cleartext-pluggable-authentication.html) in the MySQL Connector (see [Use Microsoft Entra ID for authentication with MySQL](../mysql/howto-configure-sign-in-azure-ad-authentication.md#compatibility-with-application-drivers)).
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```python
- from azure.identity import DefaultAzureCredential
- import psycopg2
-
- # Uncomment one of the two lines depending on the identity type
- #credential = DefaultAzureCredential() # system-assigned identity
- #credential = DefaultAzureCredential(managed_identity_client_id='<client-id-of-user-assigned-identity>') # user-assigned identity
-
- # Get token for Azure Database for PostgreSQL
- token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")
-
- # Set PostgreSQL user depending on the environment
- if 'IDENTITY_ENDPOINT' in os.environ:
- postgresUser = '<postgres-user-name>@<server-name>'
- else:
- postgresUser = '<aad-user-name>@<server-name>'
-
- # Connect with the token
- host = "<server-name>.postgres.database.azure.com"
- dbname = "<database-name>"
- conn_string = "host={0} user={1} dbname={2} password={3}".format(host, postgresUser, dbname, token.token)
- conn = psycopg2.connect(conn_string)
- ```
- The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the [standard PostgreSQL connection](../postgresql/connect-python.md) as the password of the Azure identity.
+# [Azure Database for PostgreSQL](#tab/postgresql-sc)
- Whatever database driver you use, make sure it can send the token as clear text (see [Use Microsoft Entra ID for authentication with MySQL](../mysql/howto-configure-sign-in-azure-ad-authentication.md#compatibility-with-application-drivers)).
-
- --
-
-# [Java](#tab/java)
-
-1. Add the required dependencies to your project's BOM file.
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.4.6</version>
- </dependency>
- <dependency>
- <groupId>com.microsoft.sqlserver</groupId>
- <artifactId>mssql-jdbc</artifactId>
- <version>10.2.0.jre11</version>
- </dependency>
- ```
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.4.6</version>
- </dependency>
- <dependency>
- <groupId>mysql</groupId>
- <artifactId>mysql-connector-java</artifactId>
- <version>8.0.28</version>
- </dependency>
- ```
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.4.6</version>
- </dependency>
- <dependency>
- <groupId>org.postgresql</groupId>
- <artifactId>postgresql</artifactId>
- <version>42.3.3</version>
- </dependency>
- ```
-
- --
-
-1. Connect to Azure database by using an access token:
-
- # [Azure SQL Database](#tab/sqldatabase)
-
- ```java
- import com.azure.identity.*;
- import com.azure.core.credential.*;
- import com.microsoft.sqlserver.jdbc.SQLServerDataSource;
- import java.sql.*;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().build(); // system-assigned identity
- //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().managedIdentityClientId('<client-id-of-user-assigned-identity>")'build(); // user-assigned identity
-
- // Get the token
- TokenRequestContext request = new TokenRequestContext();
- request.addScopes("https://database.windows.net//.default");
- AccessToken token=creds.getToken(request).block();
-
- // Set token in your SQL connection
- SQLServerDataSource ds = new SQLServerDataSource();
- ds.setServerName("<server-name>.database.windows.net");
- ds.setDatabaseName("<database-name>");
- ds.setAccessToken(token.getToken());
-
- // Connect
- try {
- Connection connection = ds.getConnection();
- Statement stmt = connection.createStatement();
- ResultSet rs = stmt.executeQuery("SELECT SUSER_SNAME()");
- if (rs.next()) {
- System.out.println("Signed into database as: " + rs.getString(1));
- }
- }
- catch (Exception e) {
- System.out.println(e.getMessage());
- }
- ```
-
- The [JDBC Driver for SQL Server] also has an authentication type [ActiveDirectoryMsi](/sql/connect/jdbc/connecting-using-azure-active-directory-authentication#connect-using-activedirectorymsi-authentication-mode), which is easier to use for App Service. The above code gets the token with `DefaultAzureCredential`, which works both in App Service and in your local development environment.
-
- # [Azure Database for MySQL](#tab/mysql)
-
- ```java
- import com.azure.identity.*;
- import com.azure.core.credential.*;
- import java.sql.*;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().build(); // system-assigned identity
- //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().managedIdentityClientId('<client-id-of-user-assigned-identity>")'build(); // user-assigned identity
-
- // Get the token
- TokenRequestContext request = new TokenRequestContext();
- request.addScopes("https://ossrdbms-aad.database.windows.net/.default");
- AccessToken token=creds.getToken(request).block();
-
- // Set MySQL user depending on the environment
- String mysqlUser;
- if (System.getenv("IDENTITY_ENDPOINT" != null)) {
- mysqlUser = "<aad-user-name>@<server-name>";
- }
- else {
- mysqlUser = "<mysql-user-name>@<server-name>";
- }
-
- // Set token in your SQL connection
- try {
- Connection connection = DriverManager.getConnection(
- "jdbc:mysql://<server-name>.mysql.database.azure.com/<database-name>",
- mysqlUser,
- token.getToken());
- Statement stmt = connection.createStatement();
- ResultSet rs = stmt.executeQuery("SELECT USER();");
- if (rs.next()) {
- System.out.println("Signed into database as: " + rs.getString(1));
- }
- }
- catch (Exception e) {
- System.out.println(e.getMessage());
- }
- ```
-
- The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the [standard MySQL connection](../mysql/connect-java.md) as the password of the Azure identity.
-
- # [Azure Database for PostgreSQL](#tab/postgresql)
-
- ```java
- import com.azure.identity.*;
- import com.azure.core.credential.*;
- import java.sql.*;
-
- ...
-
- // Uncomment one of the two lines depending on the identity type
- //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().build(); // system-assigned identity
- //DefaultAzureCredential creds = new DefaultAzureCredentialBuilder().managedIdentityClientId('<client-id-of-user-assigned-identity>")'build(); // user-assigned identity
-
- // Get the token
- TokenRequestContext request = new TokenRequestContext();
- request.addScopes("https://ossrdbms-aad.database.windows.net/.default");
- AccessToken token=creds.getToken(request).block();
-
- // Set PostgreSQL user depending on the environment
- String postgresUser;
- if (System.getenv("IDENTITY_ENDPOINT") != null) {
- postgresUser = "<aad-user-name>@<server-name>";
- }
- else {
- postgresUser = "<postgresql-user-name>@<server-name>";
- }
-
- // Set token in your SQL connection
- try {
- Connection connection = DriverManager.getConnection(
- "jdbc:postgresql://<server-name>.postgres.database.azure.com:5432/<database-name>",
- postgresUser,
- token.getToken());
- Statement stmt = connection.createStatement();
- ResultSet rs = stmt.executeQuery("select current_user;");
- if (rs.next()) {
- System.out.println("Signed into database as: " + rs.getString(1));
- }
- }
- catch (Exception e) {
- System.out.println(e.getMessage());
- }
- ```
-
- The `if` statement sets the PostgreSQL username based on which identity the token applies to. The token is then passed in to the [standard PostgreSQL connection](../postgresql/connect-nodejs.md) as the password of the identity. To see how you can do it similarly with specific frameworks, see:
-
- - [Spring Data JDBC](/azure/developer/java/spring-framework/configure-spring-data-jdbc-with-azure-postgresql)
- - [Spring Data JPA](/azure/developer/java/spring-framework/configure-spring-data-jpa-with-azure-postgresql)
- - [Spring Data R2DBC](/azure/developer/java/spring-framework/configure-spring-data-r2dbc-with-azure-postgresql)
- --
--
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
This sample code uses `DefaultAzureCredential` to get a usable token for your Azure database from Microsoft Entra ID and then adds it to the database connection. While you can customize `DefaultAzureCredential`, it's already versatile by default. It gets a token from the signed-in Microsoft Entra user or from a managed identity, depending on whether you run it locally in your development environment or in App Service.
-Without any further changes, your code is ready to be run in Azure. To debug your code locally, however, your develop environment needs a signed-in Microsoft Entra user. In this step, you configure your environment of choice by signing in [with your Microsoft Entra user](#1-grant-database-access-to-azure-ad-user).
+Without any further changes, your code is ready to be run in Azure. To debug your code locally, however, your development environment needs a signed-in Microsoft Entra user. In this step, you configure your environment of choice by signing in with your Microsoft Entra user.
# [Visual Studio Windows](#tab/windowsclient)
You're now ready to develop and debug your app with the SQL Database as the back
## 5. Test and publish
-1. Run your code in your dev environment. Your code uses the [signed-in Microsoft Entra user](#1-grant-database-access-to-azure-ad-user)) in your environment to connect to the back-end database. The user can access the database because it's configured as a Microsoft Entra administrator for the database.
+1. Run your code in your dev environment. Your code uses the signed-in Microsoft Entra user in your environment to connect to the back-end database. The user can access the database because it's configured as a Microsoft Entra administrator for the database.
1. Publish your code to Azure using the preferred publishing method. In App Service, your code uses the app's managed identity to connect to the back-end database.
You're now ready to develop and debug your app with the SQL Database as the back
- [I get the error `Login failed for user '<token-identified principal>'.`](#i-get-the-error-login-failed-for-user-token-identified-principal)
- [I made changes to App Service authentication or the associated app registration. Why do I still get the old token?](#i-made-changes-to-app-service-authentication-or-the-associated-app-registration-why-do-i-still-get-the-old-token)
- [How do I add the managed identity to a Microsoft Entra group?](#how-do-i-add-the-managed-identity-to-an-azure-ad-group)
-- [I get the error `mysql: unknown option '--enable-cleartext-plugin'`.](#i-get-the-error-mysql-unknown-optionenable-cleartext-plugin)
- [I get the error `SSL connection is required. Please specify SSL options and retry`.](#i-get-the-error-ssl-connection-is-required-please-specify-ssl-options-and-retry)

#### Does managed identity support SQL Server?
az ad group member list -g $groupid
To grant database permissions for a Microsoft Entra group, see documentation for the respective database type.
-#### I get the error `mysql: unknown option '--enable-cleartext-plugin'`.
-
-If you're using a MariaDB client, the `--enable-cleartext-plugin` option isn't required.
-
#### I get the error `SSL connection is required. Please specify SSL options and retry`.

Connecting to the Azure database requires additional settings and is beyond the scope of this tutorial. For more information, see one of the following links:
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
This tutorial shows how to build, configure, and deploy a secure [Quarkus](https
* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java/).
* Knowledge of Java with [Quarkus](https://quarkus.io) development.
-## 1. Run the sample application
+## 1. Run the sample
-The tutorial uses [Quarkus sample: Hibernate ORM with Panache and RESTEasy](https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app), which comes with a [dev container](https://docs.github.com/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers) configuration. The easiest way to run it is in a GitHub codespace.
+For your convenience, the sample repository, [Hibernate ORM with Panache and RESTEasy](https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app), includes a [dev container](https://docs.github.com/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers) configuration. The dev container has everything you need to develop an application, including the database, cache, and all environment variables needed by the sample application. The dev container can run in a [GitHub codespace](https://docs.github.com/en/codespaces/overview), which means you can run the sample on any computer with a web browser.
:::row:::
    :::column span="2":::
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
The Azure Automation Process Automation feature supports several types of runboo
| Type | Description |
|: |: |
-| [PowerShell](#powershell-runbooks) |Textual runbook based on Windows PowerShell scripting. The currently supported versions are: PowerShell 5.1 (GA), PowerShell 7.1 (preview), and PowerShell 7.2 (GA).|
+| [PowerShell](#powershell-runbooks) |Textual runbook based on Windows PowerShell scripting. The currently supported versions are: PowerShell 7.2 (GA), PowerShell 5.1 (GA), and PowerShell 7.1 (preview). |
| [PowerShell Workflow](#powershell-workflow-runbooks)|Textual runbook based on Windows PowerShell Workflow scripting. |
| [Python](#python-runbooks) |Textual runbook based on Python scripting. The currently supported versions are: Python 2.7 (GA), Python 3.8 (GA), and Python 3.10 (preview). |
| [Graphical](#graphical-runbooks)|Graphical runbook based on Windows PowerShell and created and edited completely in the graphical editor in Azure portal. |
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Arc resource bridge typically releases a new version on a monthly cadence, at th
## Next steps

* Learn how [Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
-* Learn how [Azure Arc-enabled SCVMM extends Azure's governance and management capabilities to System Center managed infrastructure(../system-center-virtual-machine-manager/overview.md).
+* Learn how [Azure Arc-enabled SCVMM extends Azure's governance and management capabilities to System Center managed infrastructure](../system-center-virtual-machine-manager/overview.md).
* Learn about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
azure-arc License Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md
In all cases, you're required to attest to conformance with SA or SPLA. There is
As you migrate and modernize your Windows Server 2012 and Windows 2012 R2 infrastructure through the end of 2023, you can utilize the flexibility of monthly billing with Windows Server 2012 ESUs enabled by Azure Arc for cost savings benefits.
-As servers no longer require ESUs because they've been migrated to Azure, Azure VMware Solution (AVS), or Azure Stack HCI (where they're eligible for free ESUs), or updated to Windows Server 2016 or higher, you can modify the number of cores associated with a license or delete/deactivate licenses. You can also link the license to a new scope of additional servers. See [Programmatically deploy and manage Azure Arc Extended Security Updates licenses](api-extended-security-updates.md) to learn more. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012).
+As servers no longer require ESUs because they've been migrated to Azure, Azure VMware Solution (AVS), or Azure Stack HCI **where they're eligible for free ESUs**, or updated to Windows Server 2016 or higher, you can modify the number of cores associated with a license or delete/deactivate licenses. You can also link the license to a new scope of additional servers. See [Programmatically deploy and manage Azure Arc Extended Security Updates licenses](api-extended-security-updates.md) to learn more. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012).
> [!NOTE]
> This process is not automatic; billing is tied to the activated licenses and you are responsible for modifying your provisioned licensing to take advantage of cost savings.
In this scenario, you should provision a Windows Server 2012 Datacenter license
### Scenario 8: An insurance customer is running a 16 node VMware cluster with 1024 physical cores on-premises. 44 of the VMs on the cluster are running Windows Server 2012 R2. Those 44 VMs consume 506 virtual cores, which was calculated by summing up the maximum of 8 or the actual number of cores assigned to each VM.
-In this scenario, you could either license the entire cluster with 1024 Windows Server 2012 Datacenter ESU physical cores or license each VM individually with a total of 506 standard edition virtual cores. In this case, it's cheaper to purchase an Arc ESU Windows Server 2012 Standard edition license associated with 506 virtual cores. You'll need to onboard each of the 44 VMs to Azure Arc and then link the license to the Arc machines. If you migrate the VMs to Azure VMware Solution (AVS), these servers become eligible for free WS2012 ESUs and do not need to be licensed for ESUs through Azure Arc.
+In this scenario, you could either license the entire cluster with 1024 Windows Server 2012 Datacenter ESU physical cores or license each VM individually with a total of 506 standard edition virtual cores. In this case, it's cheaper to purchase an Arc ESU Windows Server 2012 Standard edition license associated with 506 virtual cores. You'll need to onboard each of the 44 VMs to Azure Arc and then link the license to the Arc machines.
+> [!IMPORTANT]
+> If you migrate the VMs to Azure VMware Solution (AVS), these servers become eligible for free WS2012 ESUs and should not enroll in ESUs enabled through Azure Arc.
+>
## Next steps

* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy).
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
With Windows Server 2012 and Windows Server 2012 R2 having reached end of suppor
The purpose of this article is to help you understand the benefits and how to prepare to use Arc-enabled servers to enable delivery of ESUs.
+> [!NOTE]
+> Azure VMware Solution (AVS) machines are eligible for free ESUs and should not enroll in ESUs enabled through Azure Arc.
+>
## Key benefits

Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the following key benefits:
azure-arc Troubleshoot Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md
If you're unable to successfully link your Azure Arc-enabled server to an activa
- **Operating system:** Only Azure Arc-enabled servers running the Windows Server 2012 and 2012 R2 operating system are eligible to enroll in Extended Security Updates. -- **Environment:** The connected machine should not be running on Azure Stack HCI, Azure VMware solution, or as an Azure virtual machine. In these scenarios, WS2012 ESUs are available for free. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012).
+- **Environment:** The connected machine should not be running on Azure Stack HCI, Azure VMware Solution (AVS), or as an Azure virtual machine. **In these scenarios, WS2012 ESUs are available for free**. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012).
- **License properties:** Verify the license is activated and has been allocated sufficient physical or virtual cores to support the intended scope of servers.
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
public void warmup( @WarmupTrigger Object warmupContext, ExecutionContext contex
# [Model v4](#tab/nodejs-v4)
-The following example shows a warmup trigger [JavaScript function](functions-reference-node.md) that runs on each new instance when added to your app.
+The following example shows a [JavaScript function](functions-reference-node.md) with a warmup trigger that runs on each new instance when added to your app:
-```javascript
-import { app } from "@azure/functions";
-
-app.warmup('warmupTrigger1', {
- handler: (warmupContext, context) => {
- context.log('Function App instance is warm.');
- },
-});
-```
# [Model v3](#tab/nodejs-v3)
module.exports = async function (warmupContext, context) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-typescript" # [Model v4](#tab/nodejs-v4)
-The following example shows a warmup trigger [JavaScript function](functions-reference-node.md) that runs on each new instance when added to your app.
+The following example shows a [TypeScript function](functions-reference-node.md) with a warmup trigger that runs on each new instance when added to your app:
-```TypeScript
-import { app, InvocationContext, WarmupContextOptions } from "@azure/functions";
-
-export async function warmupFunction(warmupContext: WarmupContextOptions, context: InvocationContext): Promise<void> {
- context.log('Function App instance is warm.');
-}
-
-app.warmup('warmup', {
- handler: warmupFunction,
-});
-```
# [Model v3](#tab/nodejs-v3) TypeScript samples aren't documented for model v3. ++ ::: zone-end ::: zone pivot="programming-language-powershell" Here's the *function.json* file:
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md
Specifically, this article shows you:
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure) for Java, version 8 or 11
+- An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure) for Java, version 8, 11, or 17
- An [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) Ultimate Edition or Community Edition installed - [Maven 3.5.0+](https://maven.apache.org/download.cgi) - Latest [Function Core Tools](https://github.com/Azure/azure-functions-core-tools)
To debug the project locally, follow these steps:
:::image type="content" source="media/functions-create-first-java-intellij/local-debug-functions-button.png" alt-text="Local debug function app button." lightbox="media/functions-create-first-java-intellij/local-debug-functions-button.png":::
-1. Click on line *31* of the file *src/main/java/org/example/functions/HttpTriggerFunction.java* to add a breakpoint. Access the endpoint `http://localhost:7071/api/HttpTrigger-Java?name=Azure` again and you'll find the breakpoint is hit. You can then try more debug features like **Step**, **Watch**, and **Evaluation**. Stop the debug session by clicking the **Stop** button.
+1. Click on line *20* of the file *src/main/java/org/example/functions/HttpTriggerFunction.java* to add a breakpoint. Access the endpoint `http://localhost:7071/api/HttpTrigger-Java?name=Azure` again and you'll find the breakpoint is hit. You can then try more debug features like **Step**, **Watch**, and **Evaluation**. Stop the debug session by clicking the **Stop** button.
:::image type="content" source="media/functions-create-first-java-intellij/local-debug-functions-break.png" alt-text="Local debug function app break." lightbox="media/functions-create-first-java-intellij/local-debug-functions-break.png":::
To debug the project locally, follow these steps:
To deploy your project to Azure, follow these steps:
-1. Right click your project in IntelliJ Project explorer, then select **Azure -> Deploy to Azure Functions**.
+1. Click and expand the Azure icon in IntelliJ Project explorer, then select **Deploy to Azure -> Deploy to Azure Functions**.
:::image type="content" source="media/functions-create-first-java-intellij/deploy-functions-to-azure.png" alt-text="Deploy project to Azure." lightbox="media/functions-create-first-java-intellij/deploy-functions-to-azure.png":::
azure-maps Release Notes Drawing Tools Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-drawing-tools-module.md
This document contains information about new features and other changes to the Azure Maps Drawing Tools Module.
+## [1.0.3]
+
+### Other changes (1.0.3)
+
+- Updated CDN links in the readme.
+ ## [1.0.2] ### Bug fixes (1.0.2)
Stay up to date on Azure Maps:
> [Azure Maps Blog]
+[1.0.3]: https://www.npmjs.com/package/azure-maps-drawing-tools/v/1.0.3
[1.0.2]: https://www.npmjs.com/package/azure-maps-drawing-tools/v/1.0.2 [Azure Maps Drawing Tools Samples]: https://samples.azuremaps.com/?search=Drawing [Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (latest)
+### [3.0.3] (November 29, 2023)
+
+#### New features (3.0.3)
+
+- Included ESM support.
+
+#### Other changes (3.0.3)
+
+- The screen reader accessibility feature has been upgraded to use the Search V2 API (reverse geocoding).
+
+- Enhanced accessibility in the Compass and Pitch controls.
+ ### [3.0.2] (November 1, 2023) #### Bug fixes (3.0.2)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2
+### [2.3.5] (November 29, 2023)
+
+#### Other changes (2.3.5)
+
+- The screen reader accessibility feature has been upgraded to use the Search V2 API (reverse geocoding).
+ ### [2.3.4] (November 1, 2023) #### Other changes (2.3.4)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.3
[3.0.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.2 [3.0.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.1 [3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0
Stay up to date on Azure Maps:
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.3.5]: https://www.npmjs.com/package/azure-maps-control/v/2.3.5
[2.3.4]: https://www.npmjs.com/package/azure-maps-control/v/2.3.4 [2.3.3]: https://www.npmjs.com/package/azure-maps-control/v/2.3.3 [2.3.2]: https://www.npmjs.com/package/azure-maps-control/v/2.3.2
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
# Using Azure Monitor Application Insights with Spring Boot
+> [!NOTE]
+> For _Spring Boot native image applications_, you can use [this project](https://aka.ms/AzMonSpringNative).
+ There are two options for enabling Application Insights Java with Spring Boot: JVM argument and programmatically. ## Enabling with JVM argument
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Let's assume the input log message body is `User account with userId 123456xx fa
"body": { "toAttributes": { "rules": [
- "^User account with userId (?<redactedUserId>\\d+) .*"
+ "^User account with userId (?<redactedUserId>[\\da-zA-Z]+)[\\w\\s]+"
] } }
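To sanity-check the updated rule outside the telemetry processor, you can run the same pattern locally. The following sketch is a non-authoritative test in Python, which uses `(?P<name>...)` for named groups instead of the processor's `(?<name>...)` syntax; the full sample message is an assumption based on the truncated example above.

```python
import re

# The processor rule, translated to Python's named-group syntax for a local check.
rule = r"^User account with userId (?P<redactedUserId>[\da-zA-Z]+)[\w\s]+"

message = "User account with userId 123456xx failed to login"  # assumed full message
match = re.match(rule, message)
if match:
    # In the processor, the matched value becomes a `redactedUserId` attribute
    # and the matched text is extracted from the log message body.
    print(match.group("redactedUserId"))  # 123456xx
```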
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` a
Spans populate the `requests` and `dependencies` tables in Application Insights.
-1. Add `opentelemetry-instrumentation-annotations-1.21.0.jar` (or later) to your application:
+1. Add `opentelemetry-instrumentation-annotations-1.32.0.jar` (or later) to your application:
```xml <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-instrumentation-annotations</artifactId>
- <version>1.21.0</version>
+ <version>1.32.0</version>
</dependency> ```
Not available in .NET.
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.14</version>
+ <version>3.4.18</version>
</dependency> ```
azure-monitor Azure Monitor Rest Api Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-rest-api-index.md
+
+ Title: Azure Monitor REST API index
+description: Lists the operation groups for the Azure Monitor REST API, which includes Application Insights, Log Analytics, and Monitor.
Last updated : 11/30/2023+++
+# Azure Monitor REST API index
+
+The following table lists the Azure Monitor REST API operation groups, organized by subject area.
+
+| Operation groups | Description |
+|-|-|
+| [Operations](/rest/api/monitor/alertsmanagement/operations) | Lists the available REST API operations for Azure Monitor. |
+| ***Activity Log*** | |
+| [Activity log(s)](/rest/api/monitor/activity-logs) | Get a list of event entries in the [activity log](./essentials/platform-logs-overview.md). |
+| [(Activity log) event categories](/rest/api/monitor/event-categories) | Lists the types of Activity Log Entries. |
+| [Activity log profiles](/rest/api/monitor/log-profiles) | Operations to manage [activity log profiles](./essentials/platform-logs-overview.md) so you can route activity log events to other locations. |
+| [Activity log tenant events](/rest/api/monitor/tenant-activity-logs) | Gets the [Activity Log](./essentials/platform-logs-overview.md) event entries for a specific tenant. |
+| ***Alerts Management and Action Groups*** | |
+| [Action groups](/rest/api/monitor/action-groups) | Manages and lists [action groups](./alerts/action-groups.md). |
+| [Activity log alerts](/rest/api/monitor/activity-log-alerts) | Manages and lists [activity log alert rules](./alerts/alerts-types.md#activity-log-alerts). |
+| [Alert management](/rest/api/monitor/alertsmanagement/alerts) | Lists and updates [fired alerts](./alerts/alerts-overview.md). |
+| [Alert processing rules](/rest/api/monitor/alertsmanagement/alert-processing-rules) | Manages and lists [alert processing rules](./alerts/alerts-processing-rules.md). |
+| [Metric alert baseline](/rest/api/monitor/baselines) | List the metric baselines used in alert rules with [dynamic thresholds](./alerts/alerts-dynamic-thresholds.md). |
+| [Metric alerts](/rest/api/monitor/metric-alerts) | Manages and lists [metric alert rules](./alerts/alerts-overview.md). |
+| [Metric alerts status](/rest/api/monitor/metric-alerts-status) | Lists the status of [metric alert rules](./alerts/alerts-overview.md). |
+| [Prometheus rule groups](/rest/api/monitor/prometheus-rule-groups) | Manages and lists [Prometheus rule groups](./essentials/prometheus-rule-groups.md) (alert rules and recording rules). |
+| [Scheduled query rules - 2023-03-15 (preview)](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2023-03-15-preview&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). |
+| [Scheduled query rules - 2018-04-16](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2018-04-16&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). |
+| [Scheduled query rules - 2021-08-01](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2021-08-01&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). |
+| [Smart Detector alert rules](/rest/api/monitor/smart-detector-alert-rules) | Manages and lists [smart detection alert rules](./alerts/alerts-types.md#smart-detection-alerts). |
+| ***Application Insights*** | |
+| [Components](/rest/api/application-insights/components) | Enables you to manage components that contain Application Insights data. |
+| [Data Access](./logs/api/overview.md) | Query Application Insights data. |
+| [Events](/rest/api/application-insights/events) | Retrieve the data for a single event or multiple events by event type and retrieve the Odata EDMX metadata for an application. |
+| [Metadata](/rest/api/application-insights/metadata) | Retrieve and export the metadata information for an application. |
+| [Metrics](/rest/api/application-insights/metrics) | Retrieve or export the metric data for an application and retrieve the metadata describing the available metrics for an application. |
+| [Query](/rest/api/application-insights/query) | The Query operation group, which includes Execute and Get operations, enables running analytics queries on resources and retrieving the results, even for large data sets that require extended processing time. |
+| [Web Tests](/rest/api/application-insights/web-tests) | Set up web tests to monitor a web endpoint's availability and responsiveness. |
+| [Workbooks](/rest/api/application-insights/workbooks) | Manage Azure workbooks for an Application Insights component resource and retrieve workbooks within resource group or subscription by category. |
+| ***Autoscale Settings*** | |
+| [Autoscale settings](/rest/api/monitor/autoscale-settings) | Operations to manage autoscale settings. |
+| [Predictive metric](/rest/api/monitor/predictive-metric) | Retrieves predicted autoscale metric data. |
+| ***Data Collection Endpoints*** | |
+| [Data collection endpoints](/rest/api/monitor/data-collection-endpoints) | Create and manage a data collection endpoint and retrieve the data collection endpoints within a resource group or subscription. |
+| ***Data Collection Rules*** | |
+| [Data collection rule associations](/rest/api/monitor/data-collection-rule-associations) | Create and manage a data collection rule association and retrieve the data collection rule associations for a data collection endpoint, resource, or data collection rule. |
+| [Data collection rules](/rest/api/monitor/data-collection-rules) | Create and manage a data collection rule and retrieve the data collection rules within a resource group or subscription. |
+| ***Diagnostic Settings*** | |
+| [Diagnostic settings](/rest/api/monitor/diagnostic-settings) | Operations to create, update, and retrieve the [diagnostic settings](./essentials/platform-logs-overview.md) for a resource. Controls the routing of metric data and diagnostic logs. |
+| [Diagnostic settings category](/rest/api/monitor/diagnostic-settings-category) | Relates to the [possible categories](./essentials/resource-logs-schema.md) for a given resource. |
+| [Management group diagnostic settings](/rest/api/monitor/management-group-diagnostic-settings) | Manage the management group diagnostic settings for a resource and retrieve the management group diagnostic settings list for a management group. |
+| [Subscription diagnostic settings](/rest/api/monitor/subscription-diagnostic-settings) | Manage the subscription diagnostic settings for a resource and retrieve the subscription diagnostic settings list for a subscriptionId. |
+| ***Manage Log Analytics workspaces and related resources*** | |
+| [Available service tiers](/rest/api/loganalytics/available-service-tiers) | Retrieve the available service tiers for a Log Analytics workspace. |
+| [Clusters](/rest/api/loganalytics/clusters) | Manage Log Analytics clusters. |
+| [Data Collector Logs (Preview)](/rest/api/loganalytics/data%20collector%20logs%20%28preview%29) | Delete or retrieve a data collector log table for a Log Analytics workspace, or retrieve all data collector log tables for a Log Analytics workspace. |
+| [Data exports](/rest/api/loganalytics/data-exports) | Manage a data export for a Log Analytics workspace or retrieve the data export instances within a Log Analytics workspace. |
+| [Data Sources](/rest/api/loganalytics/data-sources) | Create or update data sources. |
+| [Deleted workspaces](/rest/api/loganalytics/deleted-workspaces) | Retrieve the recently deleted workspaces within a subscription or resource group. |
+| [Gateways](/rest/api/loganalytics/gateways) | Delete a Log Analytics gateway. |
+| [Intelligence Packs](/rest/api/loganalytics/intelligence-packs) | Enable or disable an intelligence pack for a Log Analytics workspace or retrieve all intelligence packs for a Log Analytics workspace. |
+| [Linked Services](/rest/api/loganalytics/linked-services) | Create or update linked services. |
+| [Linked Storage Accounts](/rest/api/loganalytics/linked-storage-accounts) | Manage a link relation between a workspace and storage accounts and retrieve all linked storage accounts associated with a workspace. |
+| [Management Groups](/rest/api/loganalytics/management-groups) | Retrieve all management groups connected to a Log Analytics workspace. |
+| [Metadata](/rest/api/loganalytics/metadata) | Retrieve the metadata information for a Log Analytics workspace. |
+| [Operation Statuses](/rest/api/loganalytics/operation-statuses) | Retrieve the status of a long-running Azure asynchronous operation. |
+| [Operations](/rest/api/loganalytics/operations) | Retrieve all of the available OperationalInsights Rest API operations. |
+| [Query](/rest/api/loganalytics/query) | Execute a batch of Analytics queries for data. |
+| [Query pack queries](/rest/api/monitor/query-pack-queries) | Manage a query defined within a Log Analytics QueryPack and retrieve or search the list of queries defined within a Log Analytics QueryPack. |
+| [Query packs](/rest/api/monitor/query-packs) | Manage a Log Analytics QueryPack including updating its tags and retrieve a list of all Log Analytics QueryPacks within a subscription or resource group. |
+| [Saved Searches](/rest/api/loganalytics/saved-searches) | Create or update saved searches. |
+| [Storage Insights](/rest/api/loganalytics/storage-insights) | Create or update storage insights. |
+| [Tables](/rest/api/loganalytics/tables) | Manage Log Analytics workspace tables. |
+| [Workspace purge](/rest/api/loganalytics/workspace-purge) | Retrieve the status of an ongoing purge operation or purge the data in a Log Analytics workspace. |
+| [Workspace schema](/rest/api/loganalytics/workspace-schema) | Retrieves the schema for a Log Analytics workspace. |
+| [Workspace shared keys](/rest/api/loganalytics/workspace-shared-keys) | Retrieve or regenerate the shared keys for a Log Analytics workspace. |
+| [Workspace usages](/rest/api/loganalytics/workspace-usages) | Retrieve the usage metrics for a Log Analytics workspace. |
+| [Workspaces](/rest/api/loganalytics/workspaces) | Manage Log Analytics workspaces. |
+| ***Metrics*** | |
+| [Azure Monitor Workspaces](/rest/api/monitor/azure-monitor-workspaces) | Manage an Azure Monitor workspace and retrieve the Azure Monitor workspaces within a resource group or subscription. |
+| [Metric definitions](/rest/api/monitor/metric-definitions) | Lists the metric definitions available for the resource; that is, the [specific metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index) you can collect. |
+| [Metric namespaces](/rest/api/monitor/metric-namespaces) | Lists the metric namespaces. Most relevant when using [custom metrics](./essentials/metrics-custom-overview.md). |
+| [Metrics Batch](/rest/api/monitor/metrics-batch) | List the metric values for multiple resources. |
+| [Metrics](/rest/api/monitor/metrics) | Lists the metric values for a resource you identify. |
+| [Metrics - Custom](/rest/api/monitor/metrics-custom) | Post the metric values for a resource. |
+| ***Private Link Networking*** | |
+| [Private endpoint connections (preview)](/rest/api/monitor/private-endpoint-connections) | Approve, reject, delete, and retrieve a private endpoint connection, and retrieve all private endpoint connections on a private link scope. |
+| [Private link resources (preview)](/rest/api/monitor/private-link-resources) | Retrieve a single private link resource, or all private link resources, that need to be created for an Azure Monitor PrivateLinkScope. |
+| [Private link scope operation status (preview)](/rest/api/monitor/private-link-scope-operation-status) | Retrieves the status of an Azure asynchronous operation associated with a private link scope operation. |
+| [Private link scoped resources (preview)](/rest/api/monitor/private-link-scoped-resources) | Approve, reject, delete, and retrieve a scoped resource object and retrieve all scoped resource objects within an Azure Monitor PrivateLinkScope resource. |
+| [Private link scopes (preview)](/rest/api/monitor/private-link-scopes) | Manage an Azure Monitor PrivateLinkScope including its tags and retrieve a list of all Azure Monitor PrivateLinkScopes within a subscription or resource group. |
+| ***Query log data*** | |
+| [Data Access](./logs/api/overview.md) | Query Log Analytics data. |
+| ***Send Custom Log Data to Log Analytics*** | |
+| [Logs Ingestion](./logs/logs-ingestion-api-overview.md) | Lets you send data to a Log Analytics workspace using either a [REST API call](./logs/logs-ingestion-api-overview.md#rest-api-call) or [client libraries](./logs/logs-ingestion-api-overview.md#client-libraries). |
+| ***Retired or being retired*** | |
+| [Alerts (classic) rule incidents](/rest/api/monitor/alert-rule-incidents) | [Being retired in 2019](/previous-versions/azure/azure-monitor/alerts/monitoring-classic-retirement) in the public cloud. Older classic alerts functions. Gets an incident associated to a [classic metric alert rule](./alerts/alerts-classic.overview.md). When an alert rule fires because the threshold is crossed in the up or down direction, an incident is created and an entry added to the [Activity Log](./essentials/platform-logs-overview.md). |
+| [Alert (classic) rules](/rest/api/monitor/alert-rules) | [Being retired in 2019](/previous-versions/azure/azure-monitor/alerts/monitoring-classic-retirement) in the public cloud. Provides operations for managing [classic alert](./alerts/alerts-classic.overview.md) rules. |
+| [Data Collector](/rest/api/loganalytics/create-request) | Data Collector API Reference. |
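As a quick orientation to the operation groups in this index, the following Python sketch calls the [Metrics](/rest/api/monitor/metrics) operation to list metric values for a single resource. The resource ID, metric name, and token are placeholders, and the api-version is an assumption to confirm in the Metrics reference.

```python
import requests

# Placeholder resource ID and bearer token (for example, from a client-credentials flow).
resource_id = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>"
token = "<bearer-token>"

url = f"https://management.azure.com{resource_id}/providers/Microsoft.Insights/metrics"
params = {
    "api-version": "2018-01-01",      # assumed; confirm in the Metrics reference
    "metricnames": "Percentage CPU",  # hypothetical metric for a virtual machine
    "timespan": "2023-11-29T00:00:00Z/2023-11-30T00:00:00Z",
    "interval": "PT1H",
    "aggregation": "Average",
}

response = requests.get(url, params=params, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
for metric in response.json().get("value", []):
    print(metric["name"]["value"])
```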
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
Last updated 10/23/2023
-# Customer intent: As a Log Analytics workspace administrator, I want to create a table with a custom schema to store logs from an Azure or non-Azure data source.
+# Customer intent: As a Log Analytics workspace administrator, I want to manage table schemas and be able to create a table with a custom schema to store logs from an Azure or non-Azure data source.
# Add or delete tables and columns in Azure Monitor Logs [Data collection rules](../essentials/data-collection-rule-overview.md) let you [filter and transform log data](../essentials/data-collection-transformations.md) before sending the data to an [Azure table or a custom table](../logs/manage-logs-tables.md#table-type-and-schema). This article explains how to create custom tables and add custom columns to tables in your Log Analytics workspace.
+> [!IMPORTANT]
+> Whenever you update a table schema, be sure to [update any data collection rules](../essentials/data-collection-rule-overview.md) that send data to the table. The table schema you define in your data collection rule determines how Azure Monitor streams data to the destination table. Azure Monitor does not update data collection rules automatically when you make table schema changes.
+ ## Prerequisites To create a custom table, you need:
To create a custom table, you need:
Azure tables have predefined schemas. To store log data in a different schema, use data collection rules to define how to collect, transform, and send the data to a custom table in your Log Analytics workspace.
+> [!IMPORTANT]
+> Custom tables have a suffix of **_CL**; for example, *tablename_CL*. The Azure portal adds the **_CL** suffix to the table name automatically. When you create a custom table using a different method, you need to add the **_CL** suffix yourself. The *tablename_CL* in the [DataFlows Streams](../essentials/data-collection-rule-structure.md#dataflows) properties in your data collection rules must match the *tablename_CL* name in the Log Analytics workspace.
> [!NOTE]
> For information about creating a custom table for logs you ingest with the deprecated Log Analytics agent, also known as MMA or OMS, see [Collect text logs with the Log Analytics agent](../agents/data-sources-custom-logs.md#define-a-custom-log-table).
To create a custom table, run the [az monitor log-analytics workspace table crea
Use the [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) to create a custom table with the PowerShell code below. This code creates a table called *MyTable_CL* with two columns. Modify this schema to collect a different table.
-> [!IMPORTANT]
-> Custom tables have a suffix of *_CL*; for example, *tablename_CL*. The *tablename_CL* in the DataFlows Streams must match the *tablename_CL* name in the Log Analytics workspace.
- 1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**. :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="../logs/media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening Cloud Shell in the Azure portal.":::
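If you prefer a plain REST call over PowerShell, a minimal sketch of the same [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) call in Python might look like the following. The api-version and the column set are assumptions; note the **_CL** suffix requirement called out earlier.

```python
import requests

# Placeholder workspace path and bearer token.
workspace_path = (
    "/subscriptions/<sub>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)
token = "<bearer-token>"

url = (
    f"https://management.azure.com{workspace_path}/tables/MyTable_CL"
    "?api-version=2022-10-01"  # assumed api-version; confirm in the Tables reference
)

# Custom table names must end in _CL, and this schema must match the stream
# declared in the data collection rule's dataFlows.
body = {
    "properties": {
        "schema": {
            "name": "MyTable_CL",
            "columns": [
                {"name": "TimeGenerated", "type": "datetime"},
                {"name": "RawData", "type": "string"},
            ],
        }
    }
}

response = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.status_code)
```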
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
If you want to start with an empty script and write it yourself, close the examp
## Log Analytics interface
-The following image identifies four Log Analytics components.
+The following image identifies four Log Analytics components:
+
+1. [Top action bar](#top-action-bar)
+1. [Left sidebar](#left-sidebar)
+1. [Query window](#query-window)
+1. [Results window](#results-window)
:::image type="content" source="media/log-analytics-overview/log-analytics.png" lightbox="media/log-analytics-overview/log-analytics.png" alt-text="Screenshot that shows the Log Analytics interface with four features identified.":::
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 04/11/2023 Last updated : 11/30/2023
You can change the default settings of the Azure portal to meet your own preferences.
-Most settings are available from the **Settings** menu in the top right section of global page header.
+To view and manage your settings, select the **Settings** menu icon in the top right section of the global page header to open the **Portal settings** page.
:::image type="content" source="media/set-preferences/settings-top-header.png" alt-text="Screenshot showing the settings icon in the global page header.":::
+Within **Portal settings**, you'll see different sections. This article describes the available options for each section.
+ ## Directories + subscriptions The **Directories + subscriptions** page lets you manage directories and set subscription filters. ### Switch and manage directories
-In the **Directories** section, you'll see your **Current directory** (which you're currently signed in to).
+In the **Directories** section, you'll see your **Current directory** (the directory that you're currently signed in to).
-The **Startup directory** shows the default directory when you sign in to the Azure portal. To choose a different startup directory, select **change** to go to the [Appearance + startup views](#appearance--startup-views) page, where you can change this option.
+The **Startup directory** shows the default directory when you sign in to the Azure portal (or **Last visited** if you've chosen that option). To choose a different startup directory, select **change** to open the [Appearance + startup views](#appearance--startup-views) page, where you can change your selection.
To see a full list of directories to which you have access, select **All Directories**.
To switch to a different directory, find the directory that you want to work in,
:::image type="content" source="media/set-preferences/settings-directories-subscriptions-default-filter.png" alt-text="Screenshot showing the Directories settings pane.":::
-## Subscription filters
+### Subscription filters
You can choose the subscriptions that are filtered by default when you sign in to the Azure portal. This can be helpful if you have a primary list of subscriptions you work with but use others occasionally.
To use customized filters, select **Advanced filters**. You'll be prompted to co
:::image type="content" source="media/set-preferences/settings-advanced-filters-enable.png" alt-text="Screenshot showing the confirmation dialog box for Advanced filters.":::
-This will enable the **Advanced filters** page, where you can create and manage multiple subscription filters. Any currently selected subscriptions will be saved as an imported filter that you can use again. If you want to stop using advanced filters, select the toggle again to restore the default subscription view. Any custom filters you've created will be saved and will be available to use if you enable **Advanced filters** in the future.
+After you continue, the **Advanced filters** page appears in the left navigation menu of **Portal settings**. You can create and manage multiple subscription filters on this page. Your currently selected subscriptions are saved as an imported filter that you can use again. You'll see this filter selected on the **Directories + subscriptions** page.
+
+If you want to stop using advanced filters, select the toggle again to restore the default subscription view. Any custom filters you've created are saved and will be available to use if you enable **Advanced filters** in the future.
:::image type="content" source="media/set-preferences/settings-advanced-filters-disable.png" alt-text="Screenshot showing the confirmation dialog box for disabling Advanced filters.":::
-### Advanced filters
+## Advanced filters
After enabling the **Advanced filters** page, you can create, modify, or delete subscription filters. The **Default** filter shows all subscriptions to which you have access. This filter is used if there are no other filters, or when the active filter fails to include any subscriptions. You may also see a filter named **Imported-filter**, which includes all subscriptions that had been selected previously.
-To change the filter that is currently in use, select that filter from the **Advanced filter** drop-down box. You can also select **Modify advanced filters** to go to the **Advanced filters** page, where you can create, modify, and delete your filters.
+To change the filter that is currently in use, select **Activate** next to that filter.
### Create a filter
To create a new filter, select **Create a filter**. You can create up to ten fil
Each filter must have a unique name that is between 8 and 50 characters long and contains only letters, numbers, and hyphens.
-After you've named your filter, enter at least one condition. In the **Filter type** field, select either **Subscription name**, **Subscription ID**, or **Subscription state**. Then select an operator and enter a value to filter on.
+After you've named your filter, enter at least one condition. In the **Filter type** field, select **Management group**, **Subscription ID**, **Subscription name**, or **Subscription state**. Then select an operator and the value to filter on.
When you're finished adding conditions, select **Create**. Your filter will then appear in the list in **Active filters**.
You can modify or rename an existing filter by selecting the pencil icon in that
> [!NOTE] > If you modify a filter that is currently active, and the changes result in 0 subscriptions, the **Default** filter will become active instead. You can't activate a filter which doesn't include any subscriptions.
-To delete a filter, select the trash can icon in that filter's row. You can't delete the **Default** filter or any filter that is currently active.
+To delete a filter, select the trash can icon in that filter's row. You can't delete the **Default** filter or a filter that is currently active.
## Appearance + startup views
-The **Appearance + startup views** pane has two sections. The **Appearance** section lets you choose menu behavior, your color theme, and whether to use a high-contrast theme, and the **Startup views** section lets you set options for what you see when you first sign in to the Azure portal.
+The **Appearance + startup views** pane has two sections. The **Appearance** section lets you choose menu behavior, your color theme, and whether to use a high-contrast theme.
:::image type="content" source="media/set-preferences/azure-portal-settings-appearance.png" alt-text="Screenshot showing the Appearance section of Appearance + startup views.":::
The **Appearance + startup views** pane has two sections. The **Appearance** sec
The **Menu behavior** section lets you choose how the default Azure portal menu behaves. -- **Flyout**: The menu will be hidden until you need it. You can select the menu icon in the upper left hand corner to open or close the menu.-- **Docked**: The menu will always be visible. You can collapse the menu to provide more working space.
+- **Flyout**: The menu is hidden until you need it. You can select the menu icon in the upper left hand corner to open or close the menu.
+- **Docked**: The menu is always visible. You can collapse the menu to provide more working space.
### Choose a theme or enable high contrast
Alternatively, you can choose a theme from the **High contrast theme** section.
### Startup page
-Choose one of the following options for the page you'll see when you first sign in to the Azure portal.
+The **Startup views** section lets you set options for what you see when you first sign in to the Azure portal.
++
+Choose one of the following options for **Startup page**. This setting determines which page you see when you first sign in to the Azure portal.
- **Home**: Displays the home page, with shortcuts to popular Azure services, a list of resources you've used most recently, and useful links to tools, documentation, and more.-- **Dashboard**: Displays your most recently used dashboard. Dashboards can be customized to create a workspace designed just for you. For example, you can build a dashboard that is project, task, or role focused. For more information, see [Create and share dashboards in the Azure portal](azure-portal-dashboards.md).
+- **Dashboard**: Displays your most recently used dashboard. Dashboards can be customized to create a workspace designed just for you. For more information, see [Create and share dashboards in the Azure portal](azure-portal-dashboards.md).
### Startup directory Choose one of the following options for the directory to work in when you first sign in to the Azure portal. -- **Sign in to your last visited directory**: When you sign in to the Azure portal, you'll start in whichever directory you'd been working in last time.-- **Select a directory**: Choose this option to select one of your directories. You'll start in that directory every time you sign in to the Azure portal, even if you had been working in a different directory last time.-
+- **Last visited**: When you sign in to the Azure portal, you'll start in the same directory from your previous visit.
+- **Select a directory**: Choose this option to select a specific directory. You'll start in that directory every time you sign in to the Azure portal, even if you had been working in a different directory last time.
## Language + region
-Choose your language and the regional format that will influence how data such as dates and currency will appear in the Azure portal.
+Here, you can choose the language used in the Azure portal. You can also select a regional format to determine the format for dates, time, and currency.
:::image type="content" source="media/set-preferences/azure-portal-settings-language-region.png" alt-text="Screenshot showing the Language + region settings pane.":::
Use the drop-down list to select from the list of available languages. This sett
Select an option to control the way dates, time, numbers, and currency are shown in the Azure portal.
-The options shown in the **Regional format** drop-down list changes based on the option you selected for **Language**. For example, if you select **English** as your language, and then select **English (United States)** as the regional format, currency is shown in U.S. dollars. If you select **English** as your language and then select **English (Europe)** as the regional format, currency is shown in euros.
+The options shown in the **Regional format** drop-down list correspond to the **Language** options. For example, if you select **English** as your language, and then select **English (United States)** as the regional format, currency is shown in U.S. dollars. If you select **English** as your language and then select **English (Europe)** as the regional format, currency is shown in euros. You can also select a regional format that is different from your language selection.
-Select **Apply** to update your language and regional format settings.
+Once you have made the desired changes to your language and regional format settings, select **Apply**.
## My information
The **My information** page lets you update the email address that is used for u
Near the top of the **My information** page, you'll see options to export, restore, or delete settings. ### Export user settings
Information about your custom settings is stored in Azure. You can export the fo
- User settings like favorite subscriptions or directories - Themes and other custom portal settings
-To export your portal settings, select **Export settings** from the top of the **My information** pane. This creates a *.json* file that contains your user settings data.
+To export your portal settings, select **Export settings** from the top of the **My information** pane. This creates a JSON file that contains your user settings data.
-Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file. However, you can use this file to review the settings you selected. It can be useful to have a backup of your selections if you choose to delete your settings and private dashboards.
+Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the JSON file. However, you can use this file to review the settings you selected. It can be useful to have a backup of your selections if you choose to delete your settings and private dashboards.
### Restore default settings
If you've made changes to the Azure portal settings and want to discard them, se
Information about your custom settings is stored in Azure. You can delete the following user data: - Private dashboards in the Azure portal-- User settings like favorite subscriptions or directories
+- User settings, such as favorite subscriptions or directories
- Themes and other custom portal settings
-It's a good idea to export and review your settings before you delete them, as described above. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming.
+It's a good idea to export and review your settings before you delete them, as described in the previous section. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming.
[!INCLUDE [GDPR-related guidance](../../includes/gdpr-intro-sentence.md)]
If you're a Global Administrator, and you want to enforce an idle timeout settin
To confirm that the inactivity timeout policy is set correctly, select **Notifications** from the global page header and verify that a success notification is listed. - ### Enable or disable pop-up notifications Notifications are system messages related to your current session. They provide information such as showing your current credit balance, confirming your last action, or letting you know when resources you created become available. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen. To enable or disable pop-up notifications, select or clear **Enable pop-up notifications**.
-To read all notifications received during your current session, select **Notifications** from the global header.
+To read all notifications received during your current session, select the **Notifications** icon from the global header.
:::image type="content" source="media/set-preferences/read-notifications.png" alt-text="Screenshot showing the Notifications icon in the global header.":::
To view notifications from previous sessions, look for events in the Activity lo
- [View supported browsers and devices](azure-portal-supported-browsers-devices.md) - [Add, remove, and rearrange favorites](azure-portal-add-remove-sort-favorites.md) - [Create and share custom dashboards](azure-portal-dashboards.md)-- [Watch Azure portal how-to videos](azure-portal-video-series.md)
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
Title: Deploy Zerto disaster recovery on Azure VMware Solution
description: Learn how to implement Zerto disaster recovery for on-premises VMware or Azure VMware Solution virtual machines. Previously updated : 7/7/2023 Last updated : 11/29/2023
In this scenario, the primary site is an Azure VMware Solution private cloud in
## Install Zerto on Azure VMware Solution
-Currently, Zerto disaster recovery on Azure VMware Solution is in an Initial Availability (IA) phase. In the IA phase, you must contact Microsoft to request and qualify for IA support.
-
-To request IA support for Zerto on Azure VMware Solution, submit this [Install Zerto on AVS form](https://aka.ms/ZertoAVSinstall) with the required information. In the IA phase, Azure VMware Solution only supports manual installation and onboarding of Zerto. However, Microsoft works with you to ensure that you can manually install Zerto on your private cloud.
-
-> [!NOTE]
-> As part of the manual installation, Microsoft creates a new vCenter user account for Zerto. This user account is only for Zerto Virtual Manager (ZVM) to perform operations on the Azure VMware Solution vCenter. When installing ZVM on Azure VMware Solution, don't select the "Select to enforce roles and permissions using Zerto vCenter privileges" option.
-
-After the ZVM installation, select the options from the Zerto Virtual Manager **Site Settings**.
--
->[!NOTE]
->General Availability of Azure VMware Solution will enable self-service installation and Day 2 operations of Zerto on Azure VMware Solution.
-
-## Configure Zerto for disaster recovery
-
-To configure Zerto for the on-premises VMware to Azure VMware Solution disaster recovery and Azure VMware Solution to Azure VMware Solution Cloud disaster recovery scenarios, see the [Zerto Virtual Manager Administration Guide vSphere Environment](https://help.zerto.com/bundle/Admin.VC.HTML/page/Introduction_to_the_Zerto_Solution.htm).
-
-For more information, see the [Zerto technical documentation](https://www.zerto.com/myzerto/technical-documentation/).
-
-## Ongoing management of Zerto
--- As you scale your Azure VMware Solution private cloud operations, you might need to add new Azure VMware Solution hosts for Zerto protection or configure Zerto disaster recovery to new Azure VMware Solution vSphere Clusters. In both these scenarios, you're required to open a Support Request with the Azure VMware Solution team in the Initial Availability phase. Open the [support ticket](https://rc.portal.azure.com/#create/Microsoft.Support) from the Azure portal for these Day 2 configurations.-
- :::image type="content" source="media/zerto-disaster-recovery/support-request-zerto-disaster-recovery.png" alt-text="Screenshot that shows the support request for Day 2 Zerto disaster recovery configurations.":::
+To deploy Zerto on Azure VMware Solution, follow these [instructions](https://help.zerto.com/bundle/Install.AVS.HTML/page/Prerequisites_Zerto_AVS.htm).
## FAQs
azure-web-pubsub Reference Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-functions-bindings.md
Use the function trigger to handle requests from Azure Web PubSub service.
```cs [FunctionName("WebPubSubTrigger")] public static void Run(
- [WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")] UserEventRequest request)
+ [WebPubSubTrigger("<hub>", WebPubSubEventType.User, "message")] UserEventRequest request, ILogger log)
{
- Console.WriteLine($"Request from: {request.ConnectionContext.UserId}");
- Console.WriteLine($"Request message data: {request.Data}");
- Console.WriteLine($"Request message dataType: {request.DataType}");
+ log.LogInformation($"Request from: {request.ConnectionContext.UserId}");
+ log.LogInformation($"Request message data: {request.Data}");
+ log.LogInformation($"Request message dataType: {request.DataType}");
} ```
Define function in `index.js`.
```js module.exports = function (context, data) {
- console.log('Request from: ', context.bindingData.request.connectionContext.userId);
- console.log('Request message data: ', data);
- console.log('Request message dataType: ', context.bindingData.request.dataType);
+ context.log('Request from: ', context.bindingData.request.connectionContext.userId);
+ context.log('Request message data: ', data);
+ context.log('Request message dataType: ', context.bindingData.request.dataType);
} ```
public static WebPubSubConnection Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req, [WebPubSubConnection(Hub = "<hub>", UserId = "{query.userid}")] WebPubSubConnection connection) {
- Console.WriteLine("login");
return connection; } ```
public static WebPubSubConnection Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req, [WebPubSubConnection(Hub = "<hub>", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection) {
- Console.WriteLine("login");
return connection; } ```
Define function in `index.js`.
```js module.exports = async function (context, req, wpsContext) { // in the case request is a preflight or invalid, directly return prebuild response by extension.
- if (!wpsContext.hasError || wpsContext.isPreflight)
+ if (wpsContext.hasError || wpsContext.isPreflight)
{ return wpsContext.response; }
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
- ignite-2023 Previously updated : 11/14/2023 Last updated : 11/30/2023
You can now use the Custom Hooks capability available in Azure Backup for AKS.
Azure Backup for AKS enables you to execute Custom Hooks as part of the backup and restore operation. Hooks are one or more commands configured to run in a container in a pod during the backup operation or after restore. You define these hooks as a custom resource and deploy them in the AKS cluster to be backed up or restored. Once the custom resource is deployed in the AKS cluster in the required Namespace, you provide the details as input for the Configure Backup/Restore flow, and the Backup extension runs the hooks as defined in the YAML file.
->[!Note]
->Hooks aren't executed in a *shell* on the containers.
- There are two types of hooks: ### Backup Hooks
metadata:
name: bkphookname0 namespace: default spec:
- # BackupHook Name. This is the name of the hook that will be executed during backup.
- # compulsory
- name: hook1
- # Namespaces where this hook will be executed.
- includedNamespaces:
- - hrweb
- excludedNamespaces:
- labelSelector:
- # PreHooks is a list of BackupResourceHooks to execute prior to backing up an item.
- preHooks:
- - exec:
- # Container is the container in the pod where the command should be executed.
- container: webcontainer
- # Command is the command and arguments to execute.
- command:
- - /bin/uname
- - -a
- # OnError specifies how Velero should behave if it encounters an error executing this hook
- onError: Continue
- # Timeout is the amount of time to wait for the hook to complete before considering it failed.
- timeout: 10s
- - exec:
- command:
- - /bin/bash
- - -c
- - echo hello > hello.txt && echo goodbye > goodbye.txt
- container: webcontainer
- onError: Continue
- # PostHooks is a list of BackupResourceHooks to execute after backing up an item.
- postHooks:
- - exec:
- container: webcontainer
- command:
- - /bin/uname
- - -a
- onError: Continue
- timeout: 10s
+ # BackupHook is a list of hooks to execute before and after backing up a resource.
+ backupHook:
+ # BackupHook Name. This is the name of the hook that will be executed during backup.
+ # compulsory
+ - name: hook1
+ # Namespaces where this hook will be executed.
+ includedNamespaces:
+ - hrweb
+ excludedNamespaces:
+ labelSelector:
+ # PreHooks is a list of BackupResourceHooks to execute prior to backing up an item.
+ preHooks:
+ - exec:
+ # Container is the container in the pod where the command should be executed.
+ container: webcontainer
+ # Command is the command and arguments to execute.
+ command:
+ - /bin/uname
+ - -a
+ # OnError specifies how Velero should behave if it encounters an error executing this hook
+ onError: Continue
+ # Timeout is the amount of time to wait for the hook to complete before considering it failed.
+ timeout: 10s
+ - exec:
+ command:
+ - /bin/bash
+ - -c
+ - echo hello > hello.txt && echo goodbye > goodbye.txt
+ container: webcontainer
+ onError: Continue
+ # PostHooks is a list of BackupResourceHooks to execute after backing up an item.
+ postHooks:
+ - exec:
+ container: webcontainer
+ command:
+ - /bin/uname
+ - -a
+ onError: Continue
+ timeout: 10s
```
metadata:
name: restorehookname0 namespace: default spec:
- # Name is the name of this hook.
- name: myhook-1
- # Restored Namespaces where this hook will be executed.
- includedNamespaces:
- excludedNamespaces:
- labelSelector:
- # PostHooks is a list of RestoreResourceHooks to execute during and after restoring a resource.
- postHooks:
- - exec:
- # Container is the container in the pod where the command should be executed.
- container: webcontainer
- # Command is the command and arguments to execute from within a container after a pod has been restored.
- command:
- - /bin/bash
- - -c
- - echo hello > hello.txt && echo goodbye > goodbye.txt
- # OnError specifies how Velero should behave if it encounters an error executing this hook
- # default value is Continue
- onError: Continue
- # Timeout is the amount of time to wait for the hook to complete before considering it failed.
- execTimeout: 30s
- # WaitTimeout defines the maximum amount of time Velero should wait for the container to be ready before attempting to run the command.
- waitTimeout: 5m
--
+ # RestoreHook is a list of hooks to execute after restoring a resource.
+ restoreHook:
+ # Name is the name of this hook.
+ - name: myhook-1
+ # Restored Namespaces where this hook will be executed.
+ includedNamespaces:
+ excludedNamespaces:
+ labelSelector:
+ # PostHooks is a list of RestoreResourceHooks to execute during and after restoring a resource.
+ postHooks:
+ - exec:
+ # Container is the container in the pod where the command should be executed.
+ container: webcontainer
+ # Command is the command and arguments to execute from within a container after a pod has been restored.
+ command:
+ - /bin/bash
+ - -c
+ - echo hello > hello.txt && echo goodbye > goodbye.txt
+ # OnError specifies how Velero should behave if it encounters an error executing this hook
+ # default value is Continue
+ onError: Continue
+ # Timeout is the amount of time to wait for the hook to complete before considering it failed.
+ execTimeout: 30s
+ # WaitTimeout defines the maximum amount of time Velero should wait for the container to be ready before attempting to run the command.
+ waitTimeout: 5m
```
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
To enable Trusted Access between Backup vault and AKS cluster, use the following
```azurecli-interactive az aks trustedaccess rolebinding create \
- -g $myResourceGroup \ 
- --cluster-name $myAKSCluster 
- -n <randomRoleBindingName> \ 
- --source-resource-id <vaultID> \ 
+ -g <aksclusterrg> \
+ --cluster-name <aksclustername> \
+ -n <randomRoleBindingName> \
+ --source-resource-id $(az dataprotection backup-vault show -g <vaultrg> -v <VaultName> --query id -o tsv) \
--roles Microsoft.DataProtection/backupVaults/backup-operator ```
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 11/02/2023 Last updated : 11/29/2023
Backups run in accordance with the policy schedule. Learn how to [run an on-dema
You can run an on-demand backup using SAP HANA native clients to the local file system instead of Backint. Learn more about how to [manage operations using SAP native clients](sap-hana-database-manage.md#manage-operations-using-sap-hana-native-clients).
+## Configure multistreaming data backups for higher throughput using Backint
+
+To configure multistreaming data backups, see the [SAP documentation](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/18db704959a24809be8d01cc0a409681.html).
++
+### Support matrix
+
+- **Supported HANA versions**: SAP HANA 2.0 SP05 and prior.
+- **Parameters to enable SAP HANA settings for multistreaming**:
+ - *parallel_data_backup_backint_channels*
+ - *data_backup_buffer_size (optional)*
+
+  >[!Note]
+  >Setting the above HANA parameters leads to increased memory and CPU utilization. We recommend that you monitor the memory consumption and CPU utilization, as overutilization might negatively impact the backup and other HANA operations. An illustrative configuration sketch appears at the end of this section.
+
+- **Backup performance for databases**: The performance gain will be more prominent for larger databases.
+
+- **Database size applicable for multistreaming**: The number of multistreaming channels applies to all data backups *larger than 128 GB*. Data backups smaller than 128 GB always use only one channel.
+
+- **Supported backup throughput**: Multistreaming currently supports data backup throughput of up to *1.5 GBps*. Recovery throughput is slower than backup throughput.
+
+- **VM configuration applicable for multistreaming**: To utilize the benefits of multistreaming, the VM needs to have a minimum configuration of *16 vCPUs* and *128 GB* of RAM.
+- **Limiting factors**: Throughput of *total disk LVM striping* and *VM network*, whichever limit is reached first.
+
+Learn more about [SAP HANA Azure Virtual Machine storage](/azure/sap/workloads/hana-vm-operations-storage) and [SAP HANA Azure virtual machine Premium SSD storage configurations](/azure/sap/workloads/hana-vm-premium-ssd-v1).
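For reference, the parameters above live in the `[backup]` section of `global.ini` and can be set from a SQL console such as `hdbsql`. A hedged sketch, where the channel count and buffer size are illustrative values to be tuned against the SAP guidance linked earlier:

```bash
# Set the number of parallel Backint channels for data backups (illustrative value)
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('backup', 'parallel_data_backup_backint_channels') = '8' WITH RECONFIGURE"

# Optionally increase the data backup buffer size, in MB (illustrative value)
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p '<password>' \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('backup', 'data_backup_buffer_size') = '4096' WITH RECONFIGURE"
```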
+ ## Next steps * Learn how to [restore SAP HANA databases running on Azure VMs](./sap-hana-db-restore.md)
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 09/18/2023 Last updated : 11/30/2023
Back up Azure VMs with locks | Supported for managed VMs. <br><br> Not supported
Configure standalone Azure VMs in Windows Storage Spaces | Not supported. [Restore Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for the flexible orchestration model to back up and restore a single Azure VM. Restore with managed identities | Supported for managed Azure VMs. <br><br> Not supported for classic and unmanaged Azure VMs. <br><br> Cross-region restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Back up trusted launch VMs</a> | Backup is supported. <br><br> Backup of trusted launch VMs is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through a [Recovery Services vault](./backup-azure-arm-vms-prepare.md), the [pane for managing a VM](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [pane for creating a VM](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where trusted launch VMs are available. <br><br> - Configuration of backups, alerts, and monitoring for trusted launch VMs is currently not supported through the backup center. <br><br> - Migration of an existing [Gen2 VM](../virtual-machines/generation-2.md) (protected with Azure Backup) to a trusted launch VM is currently not supported. [Learn how to create a trusted launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is supported for the scenarios mentioned [here](backup-support-matrix-iaas.md#support-for-file-level-restore).
+<a name="tvm-backup">Back up trusted launch VMs</a> | Backup is supported. <br><br> Backup of trusted launch VMs is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through a [Recovery Services vault](./backup-azure-arm-vms-prepare.md), the [pane for managing a VM](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [pane for creating a VM](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where trusted launch VMs are available. <br><br> - Configuration of backups, alerts, and monitoring for trusted launch VMs is currently not supported through the backup center. <br><br> - Migration of an existing [Gen2 VM](../virtual-machines/generation-2.md) (protected with Azure Backup) to a trusted launch VM is currently not supported. [Learn how to create a trusted launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is supported for the scenarios mentioned [here](backup-support-matrix-iaas.md#support-for-file-level-restore). <br><br> Note that if the trusted launch VM was created by converting a Standard VM, ensure that you remove all the recovery points created using Standard policy before enabling the backup operation for the VM.
[Back up confidential VMs](../confidential-computing/confidential-vm-overview.md) | The backup support is in limited preview. <br><br> Backup is supported only for confidential VMs that have no confidential disk encryption and for confidential VMs that have confidential OS disk encryption through a platform-managed key (PMK). <br><br> Backup is currently not supported for confidential VMs that have confidential OS disk encryption through a customer-managed key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where confidential VMs are available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported only if you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can configure backup through the [pane for creating a VM](backup-azure-arm-vms-prepare.md), the [pane for managing a VM](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross-region restore](backup-azure-arm-restore-vms.md#cross-region-restore) and file recovery (item-level restore) for confidential VMs are currently not supported. ## VM storage support
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
Remove-AzureServiceRemoteDesktopExtension -ServiceName $servicename -UninstallCo
> [!NOTE] > To completely remove the extension configuration, you should call the *remove* cmdlet with the **UninstallConfiguration** parameter. >
-> The **UninstallConfiguration** parameter uninstalls any extension configuration that is applied to the service. Every extension configuration is associated with the service configuration. Calling the *remove* cmdlet without **UninstallConfiguration** disassociates the <mark>deployment</mark> from the extension configuration, thus effectively removing the extension. However, the extension configuration remains associated with the service.
+> The **UninstallConfiguration** parameter uninstalls any extension configuration that is applied to the service. Every extension configuration is associated with the service configuration. Calling the *remove* cmdlet without **UninstallConfiguration** disassociates the **deployment** from the extension configuration, thus effectively removing the extension. However, the extension configuration remains associated with the service.
## Additional resources
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
This article details how to work with Azure Cosmos DB for Apache Cassandra from
* **Cassandra Spark connector:** - To integrate Azure Cosmos DB for Apache Cassandra with Spark, the Cassandra connector should be attached to the Azure Databricks cluster. To attach the connector:
- * Review the Databricks runtime version, the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See ["Upload a Maven package or Spark package"](https://docs.databricks.com/user-guide/libraries.html) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector, your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
+ * Review the Databricks runtime version and the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach the connector to the cluster. See the ["Upload a Maven package or Spark package"](https://docs.databricks.com/libraries) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector to your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If you're using Spark 2.x, we recommend an environment with Spark version 2.4.5, using the Spark connector at Maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
* **Azure Cosmos DB for Apache Cassandra-specific library:** - If you're using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB for Apache Cassandra. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0` [maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster.
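If you script cluster setup instead of using the UI, the legacy Databricks CLI can attach the same Maven coordinates; a sketch under the assumption that the CLI is installed and configured, and that `<your-cluster-id>` is your cluster's ID:

```bash
# Attach the Cassandra Spark connector (Spark 3.x coordinates) to the cluster
databricks libraries install \
  --cluster-id <your-cluster-id> \
  --maven-coordinates com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0

# Check the library installation status on the cluster
databricks libraries cluster-status --cluster-id <your-cluster-id>
```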
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/free-tier.md
Azure Cosmos DB for MongoDB vCore now introduces a new SKU, the "Free Tier," ena
boasting command and feature parity with a regular Azure Cosmos DB for MongoDB vCore account. It makes it easy for you to get started, develop, test your applications, or even run small production workloads for free. With Free Tier, you get a dedicated MongoDB cluster with 32-GB storage, perfect
-for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available for our users in the East US, and Southeast Asia regions.
+for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available for our users in the East US, West Europe, and Southeast Asia regions.
## Get started
specify your storage requirements, and you're all set. Rest assured, your data,
## Restrictions * For a given subscription, only one free tier account is permissible in a region.
-* Free tier is currently available in East US, and Southeast Asia regions only.
+* Free tier is currently available in the East US, West Europe, and Southeast Asia regions only.
* High availability, Azure Active Directory (Azure AD), and Diagnostic Logging aren't supported.
cosmos-db Migrate Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-relational-data.md
We can also use Spark in [Azure Databricks](https://azure.microsoft.com/services
> [!NOTE] > For clarity and simplicity, the code snippets include dummy database passwords explicitly inline, but you should ideally use Azure Databricks secrets.
-First, we create and attach the required [SQL connector](/connectors/sql/) and [Azure Cosmos DB connector](https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html) libraries to our Azure Databricks cluster. Restart the cluster to make sure libraries are loaded.
+First, we create and attach the required [SQL connector](/connectors/sql/) and [Azure Cosmos DB connector](/azure/databricks/external-data/cosmosdb-connector) libraries to our Azure Databricks cluster. Restart the cluster to make sure libraries are loaded.
:::image type="content" source="./media/migrate-relational-data/databricks1.png" alt-text="Screenshot that shows where to create and attach the required SQL connector and Azure Cosmos DB connector libraries to our Azure Databricks cluster.":::
cosmos-db Concepts Sharding Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-sharding-models.md
Drawbacks:
## Sharding tradeoffs
-<br />
-
-|| Schema-based sharding | Row-based sharding|
-||||
-|Multi-tenancy model|Separate schema per tenant|Shared tables with tenant ID columns|
-|Citus version|12.0+|All versions|
-|Extra steps compared to vanilla PostgreSQL|None, only a config change|Use create_distributed_table on each table to distribute & colocate tables by tenant ID|
-|Number of tenants|1-10k|1-1 M+|
-|Data modeling requirement|No foreign keys across distributed schemas|Need to include a tenant ID column (a distribution column, also known as a sharding key) in each table, and in primary keys, foreign keys|
-|SQL requirement for single node queries|Use a single distributed schema per query|Joins and WHERE clauses should include tenant_id column|
-|Parallel cross-tenant queries|No|Yes|
-|Custom table definitions per tenant|Yes|No|
-|Access control|Schema permissions|Schema permissions|
-|Data sharing across tenants|Yes, using reference tables (in a separate schema)|Yes, using reference tables|
-|Tenant to shard isolation|Every tenant has its own shard group by definition|Can give specific tenant IDs their own shard group via isolate_tenant_to_new_shard|
+| | Schema-based sharding | Row-based sharding |
+| --- | --- | --- |
+| **Multi-tenancy model** | Separate schema per tenant | Shared tables with tenant ID columns |
+| **Citus version** | 12.0+ | All versions |
+| **Extra steps compared to vanilla PostgreSQL** | None, only a config change | Use create_distributed_table on each table to distribute & colocate tables by tenant ID |
+| **Number of tenants** | 1-10k | 1-1 M+ |
+| **Data modeling requirement** | No foreign keys across distributed schemas | Need to include a tenant ID column (a distribution column, also known as a sharding key) in each table, and in primary keys, foreign keys |
+| **SQL requirement for single node queries** | Use a single distributed schema per query | Joins and WHERE clauses should include tenant_id column |
+| **Parallel cross-tenant queries** | No | Yes |
+| **Custom table definitions per tenant** | Yes | No |
+| **Access control** | Schema permissions | Schema permissions |
+| **Data sharing across tenants** | Yes, using reference tables (in a separate schema) | Yes, using reference tables |
+| **Tenant to shard isolation** | Every tenant has its own shard group by definition | Can give specific tenant IDs their own shard group via isolate_tenant_to_new_shard |
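To make the row-based column concrete: distributing a table by its tenant ID is a single function call, as is isolating a hot tenant. A minimal sketch, assuming a Citus-enabled database and an illustrative `events` table with a `tenant_id` column:

```bash
# Distribute (and colocate) a table by its tenant ID column
psql "$CONNECTION_STRING" -c "SELECT create_distributed_table('events', 'tenant_id');"

# Move one large tenant (ID 42 here, illustrative) to its own shard group
psql "$CONNECTION_STRING" -c "SELECT isolate_tenant_to_new_shard('events', 42, 'CASCADE');"
```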
cost-management-billing Aws Integration Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-manage.md
description: This article helps you understand how to use cost analysis and budgets in Cost Management to manage your AWS costs and usage. Previously updated : 11/09/2023 Last updated : 11/30/2023
This error means that Cost Management is unable to see the Cost and Usage report
**Error code:** _AccessDeniedListReports_
-This error means that Cost Management is unable to list the object in the S3 bucket where the CUR is located. AWS IAM policy requires a permission on the bucket and on the objects in the bucket. See [Create a role and policy in AWS](aws-integration-set-up-configure.md#create-a-role-and-policy-in-aws).
+This error means that Cost Management is unable to list the object in the S3 bucket where the CUR is located. AWS IAM policy requires a permission on the bucket and on the objects in the bucket. See [Create a role and policy in AWS](aws-integration-set-up-configure.md#create-a-policy-and-role-in-aws).
### Collection failed with Access Denied - Download report **Error code:** _AccessDeniedDownloadReport_
-This error means that Cost Management is unable to access and download the CUR files stored in the Amazon S3 bucket. Make sure that the AWS JSON policy attached to the role resembles the example shown at the bottom of the [Create a role and policy in AWS](aws-integration-set-up-configure.md#create-a-role-and-policy-in-aws) section.
+This error means that Cost Management is unable to access and download the CUR files stored in the Amazon S3 bucket. Make sure that the AWS JSON policy attached to the role resembles the example shown at the bottom of the [Create a role and policy in AWS](aws-integration-set-up-configure.md#create-a-policy-and-role-in-aws) section.
### Collection failed since we did not find the Cost and Usage Report **Error code:** _FailedToFindReport_
-This error means that Cost Management can't find the Cost and Usage report that was defined in the connector. Make sure it isn't deleted and that the AWS JSON policy attached to the role resembles the example shown at the bottom of the [Create a role and policy in AWS](aws-integration-set-up-configure.md#create-a-role-and-policy-in-aws) section.
+This error means that Cost Management can't find the Cost and Usage report that was defined in the connector. Make sure it isn't deleted and that the AWS JSON policy attached to the role resembles the example shown at the bottom of the [Create a role and policy in AWS](aws-integration-set-up-configure.md#create-a-policy-and-role-in-aws) section.
### Unable to create or verify connector due to Cost and Usage Report definitions mismatch
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management. Previously updated : 04/05/2023 Last updated : 11/30/2023
Use the **Cost & Usage Reports** page of the Billing and Cost Management console
8. For **S3 bucket**, choose **Configure**. 9. In the Configure S3 Bucket dialog box, enter a bucket name and the Region where you want to create a new bucket and choose **Next**. 10. Select **I have confirmed that this policy is correct**, then select **Save**.
-11. (Optional) For Report path prefix, enter the report path prefix that you want prepended to the name of your report.
-If you don't specify a prefix, the default prefix is the name that you specified for the report. The date range has the `/report-name/date-range/` format.
+11. (Optional) For Report path prefix, enter the report path prefix that you want prepended to the name of your report.
+ If skipped, the default prefix is the name that you specified for the report. The date range has the `/report-name/date-range/` format.
12. For **Time unit**, choose **Hourly**. 13. For **Report versioning**, choose whether you want each version of the report to overwrite the previous version, or if you want more new reports. 14. For **Enable data integration for**, no selection is required. 15. For **Compression**, select **GZIP**. 16. Select **Next**.
-17. After you've reviewed the settings for your report, select **Review and Complete**.
-
- Note the report name. You'll use it in later steps.
+17. After you review the settings for your report, select **Review and Complete**.
+ Note the report name. You use it in later steps.
It can take up to 24 hours for AWS to start delivering reports to your Amazon S3 bucket. After delivery starts, AWS updates the AWS Cost and Usage report files at least once a day. You can continue configuring your AWS environment without waiting for delivery to start. > [!NOTE] > Cost and usage reports configured at the member (linked) account level aren't currently supported.
-## Create a role and policy in AWS
+## Create a policy and role in AWS
Cost Management accesses the S3 bucket where the Cost and Usage report is located several times a day. The service needs access to credentials to check for new data. You create a role and policy in AWS to allow Cost Management to access it. To enable role-based access to an AWS account in Cost Management, the role is created in the AWS console. You need to have the _role ARN_ and _external ID_ from the AWS console. Later, you use them on the **Create an AWS connector** page in Cost Management.
-Use the Create a New Role wizard:
+### Use the Create Policy wizard
-1. Sign in to your AWS console and select **Services**.
-2. In the list of services, select **IAM**.
-3. Select **Roles** and then select **Create Role**.
-4. On the **Select trusted entity** page, select **AWS account** and then under **An AWS account**, select **Another AWS account**.
-5. Under **Account ID**, enter **432263259397**.
-6. Under **Options**, select **Require external ID (Best practice when a third party will assume this role)**.
-7. Under **External ID**, enter the external ID, which is a shared passcode between the AWS role and Cost Management. The same external ID is also used on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID. The external ID should comply with AWS restrictions:
- - Type: String
- - Length constraints: Minimum length of 2. Maximum length of 1224.
- - Must satisfy regular expression pattern: [\w+=,.@: /-]*
- > [!NOTE]
- > Don't change the selection for **Require MFA**. It should remain cleared.
-8. Select **Next: Permissions**.
-9. Select **Create policy**. A new browser tab opens where you create a policy.
-10. Select **Choose a service**.
+1. Sign in to your AWS console and select **Services**.
+2. In the list of services, select **IAM**.
+3. Select **Policies**.
+4. Select **Create policy**.
+5. Select **Choose a service**.
-Configure permission for the Cost and Usage report:
+### Configure permission for the Cost and Usage report
1. Enter **Cost and Usage Report**. 2. Select **Access level** > **Read** > **DescribeReportDefinitions**. This step allows Cost Management to read what CUR reports are defined and determine if they match the report definition prerequisite.
-3. Select **Add additional permissions**.
+3. Select **Add more permissions**.
-Configure permission for your S3 bucket and objects:
+### Configure permission for your S3 bucket and objects
1. Select **Choose a service**. 2. Enter **S3**. 3. Select **Access level** > **List** > **ListBucket**. This action gets the list of objects in the S3 Bucket. 4. Select **Access level** > **Read** > **GetObject**. This action allows the download of billing files.
-5. Select **Resources**.
-6. Select **bucket ΓÇô Add ARN**.
-7. In **Bucket name**, enter the bucket used to store the CUR files.
-8. Select **object ΓÇô Add ARN**.
-9. In **Bucket name**, enter the bucket used to store the CUR files.
-10. In **Object name**, select **Any**.
-11. Select **Add additional permissions**.
+5. Select **Resources** > **Specific**.
+6. In **bucket**, select the **Add ARNs** link to open another window.
+7. In **Resource Bucket name**, enter the bucket used to store the CUR files.
+8. Select **Add ARNs**.
+9. In **object**, select **Any**.
+10. Select **Add more permissions**.
-Configure permission for Cost Explorer:
+### Configure permission for Cost Explorer
1. Select **Choose a service**. 2. Enter **Cost Explorer Service**. 3. Select **All Cost Explorer Service actions (ce:\*)**. This action validates that the collection is correct.
-4. Select **Add additional permissions**.
+4. Select **Add more permissions**.
-Add permission for AWS Organizations:
+### Add permission for AWS Organizations
1. Enter **Organizations**. 2. Select **Access level** > **List** > **ListAccounts**. This action gets the names of the accounts.
-3. Select **Add Additional permissions**.
+3. Select **Add more permissions**.
-Configure permissions for Policies
+### Configure permissions for Policies
1. Enter **IAM**. 1. Select Access level > List > **ListAttachedRolePolicies** and **ListPolicyVersions** and **ListRoles**. 1. Select Access level > Read > **GetPolicyVersion**. 1. Select **Resources** > policy, and then select **Any**. These actions allow verification that only the minimal required set of permissions were granted to the connector.
-1. Select role - **Add ARN**. The account number should be automatically populated.
-1. In **Role name with path**, enter a role name and note it. You need to use it in the final role creation step.
-1. Select **Add**.
-1. Select **Next: Tags**. You may enter tags you wish to use or skip this step. This step isn't required to create a connector in Cost Management.
-1. Select **Next: Review Policy**.
-1. In Review Policy, enter a name for the new policy. Verify that you entered the correct information, and then select **Create Policy**.
-1. Go back to the previous tab and refresh the policies list. On the search bar, search for your new policy.
-1. Select **Next: Review**.
-1. Enter the same role name you defined and noted while configuring the IAM permissions. Verify that you entered the correct information, and then select **Create Role**.
-
-Note the role ARN and the external ID used in the preceding steps when you created the role. You'll use them later when you set up the Cost Management connector.
+1. Select **Next**.
+
+### Review and create
+1. In Review Policy, enter a name for the new policy. Verify that you entered the correct information.
+1. Add tags. You can enter tags you wish to use or skip this step. This step isn't required to create a connector in Cost Management.
+1. Select **Create policy** to complete this procedure.
The policy JSON should resemble the following example. Replace `bucketname` with the name of your S3 bucket, `accountname` with your account number and `rolename` with the role name you created.
The policy JSON should resemble the following example. Replace `bucketname` with
} ```
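If you prefer the AWS CLI over the console, the same policy can be created from the saved JSON; a sketch with illustrative file and policy names:

```bash
# Create the IAM policy from the JSON document shown above
aws iam create-policy \
  --policy-name CostManagementConnectorPolicy \
  --policy-document file://cost-management-policy.json
```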
+### Use the Create a New Role wizard
+
+1. Sign in to your AWS console and select **Services**.
+2. In the list of services, select **IAM**.
+3. Select **Roles** and then select **Create Role**.
+4. On the **Select trusted entity** page, select **AWS account** and then under **An AWS account**, select **Another AWS account**.
+5. Under **Account ID**, enter **432263259397**.
+6. Under **Options**, select **Require external ID (Best practice when a third party will assume this role)**.
+7. Under **External ID**, enter the external ID, which is a shared passcode between the AWS role and Cost Management. Note the external ID, because you use it on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID. The external ID should comply with AWS restrictions:
+ - Type: String
+ - Length constraints: Minimum length of 2. Maximum length of 1224.
+ - Must satisfy regular expression pattern: `[\w+=,.@: /-]*`
+ > [!NOTE]
+ > Don't change the selection for **Require MFA**. It should remain cleared.
+8. Select **Next**.
+9. On the search bar, search for your new policy and select it.
+10. Select **Next**.
+11. In **Role details**, enter a role name. Verify that you entered the correct information. Note the name you entered, because you use it later when you set up the Cost Management connector.
+12. Optionally, add tags. You can enter any tags you like, or skip this step. This step isn't required to create a connector in Cost Management.
+13. Select **Create role**.
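The equivalent AWS CLI steps for automation look roughly like the following; the trust policy file, role name, and policy ARN are illustrative, and the trust policy must reference account `432263259397` with your external ID as the `sts:ExternalId` condition:

```bash
# Create the role with a trust policy that allows Cost Management's AWS account to assume it
aws iam create-role \
  --role-name CostManagementConnectorRole \
  --assume-role-policy-document file://trust-policy.json

# Attach the policy created in the previous section (replace with your policy's ARN)
aws iam attach-role-policy \
  --role-name CostManagementConnectorRole \
  --policy-arn arn:aws:iam::<youraccountid>:policy/CostManagementConnectorPolicy
```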
+ ## Set up a new connector for AWS in Azure Use the following information to create an AWS connector and start monitoring your AWS costs.
Use the following information to create an AWS connector and start monitoring yo
1. Select **Add connector**. 1. On the **Create connector** page, in **Display name**, enter a name for your connector. :::image type="content" source="./media/aws-integration-setup-configure/create-aws-connector01.png" alt-text="Example of the page for creating an AWS connector" :::
-1. Optionally, select the default management group. It will store all discovered linked accounts. You can set it up later.
+1. Optionally, select the default management group. It stores all discovered linked accounts. You can set it up later.
1. In the **Billing** section, select **Auto-Renew** to **On** if you want to ensure continuous operation. If you select the automatic option, you must select a billing subscription. 1. For **Role ARN**, enter the value that you used when you set up the role in AWS. 1. For **External ID**, enter the value that you used when you set up the role in AWS.
When you select a connector on the **Connectors for AWS** page, you can:
## Set up Azure management groups
-Place your Azure subscriptions and AWS linked accounts in the same management group to create a single location where you can see cross-cloud provider information. If you haven't already configured your Azure environment with management groups, see [Initial setup of management groups](../../governance/management-groups/overview.md#initial-setup-of-management-groups).
+Place your Azure subscriptions and AWS linked accounts in the same management group to create a single location where you can see cross-cloud provider information. If you want to configure your Azure environment with management groups, see [Initial setup of management groups](../../governance/management-groups/overview.md#initial-setup-of-management-groups).
If you want to separate costs, you can create a management group that holds just AWS linked accounts.
AWS linked accounts always inherit permissions from the management group that th
## Next steps -- Now that you've set up and configured AWS Cost and Usage report integration, continue to [Manage AWS costs and usage](aws-integration-manage.md).
+- Now that you have set up and configured AWS Cost and Usage report integration, continue to [Manage AWS costs and usage](aws-integration-manage.md).
- If you're unfamiliar with cost analysis, see [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md) quickstart. - If you're unfamiliar with budgets in Azure, see [Create and manage budgets](tutorial-acm-create-budgets.md).
cost-management-billing Export Cost Data Storage Account Sas Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/export-cost-data-storage-account-sas-key.md
Title: Export cost data with an Azure Storage account SAS key
description: This article helps partners create a SAS key and configure Cost Management exports. Previously updated : 06/07/2023 Last updated : 11/29/2023
The following information applies to Microsoft partners only.
-Often, partners don't have their own Azure subscriptions in the tenant that's associated with their own Microsoft Partner Agreement. Partners with a Microsoft Partner Agreement plan who are global admins of their billing account can export and copy cost data into a storage account in a different tenant using a shared access service (SAS) key. In other words, a storage account with a SAS key allows the partner to use a storage account that's outside of their partner agreement to receive exported information. This article helps partners create a SAS key and configure Cost Management exports.
+Often, partners don't have their own Azure subscriptions in the tenant associated with their own Microsoft Partner Agreement. Partners with a Microsoft Partner Agreement plan who are global admins of their billing account can export and copy cost data into a storage account in a different tenant using a shared access service (SAS) key. In other words, a storage account with a SAS key allows the partner to use a storage account that's outside of their partner agreement to receive exported information. This article helps partners create a SAS key and configure Cost Management exports.
## Requirements -- You must be a partner with a Microsoft Partner Agreement and have customers on the Azure Plan.
+- You must be a partner with a Microsoft Partner Agreement. Your customers on the Azure plan must have a signed Microsoft Customer Agreement.
+ - SAS key-based export isn't supported for indirect enterprise agreements.
- You must be global admin for your partner organization's billing account. - You must have access to configure a storage account that's in a different tenant of your partner organization. You're responsible for maintaining permissions and data access when you export data to your storage account. - The storage account must not have a firewall configured.
Get a storage account SAS token or create one using the Azure portal. To create
1. Choose expiration and dates. Make sure to update your export SAS token before it expires. The longer the time period you configure before expiration, the longer your export runs before needing a new SAS token. 1. Select **HTTPS only** for _Allowed protocols_. 1. Select **Basic** for _Preferred routing tier_.
-1. Select **key1** for _Signing key_. If you rotate or update the key that's used to sign the SAS token, you'll need to regenerate a new SAS token for your export.
+1. Select **key1** for _Signing key_. If you rotate or update the key used to sign the SAS token, you must regenerate a new SAS token.
1. Select **Generate SAS and connection string**. The **SAS token** value shown is the token that you need when you configure exports.
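You can also generate an account SAS from the Azure CLI; a sketch where the permissions and expiry are illustrative and should match your export requirements:

```azurecli-interactive
# Generate an HTTPS-only account SAS for the blob service (container and object scope)
az storage account generate-sas \
  --account-name <storageaccountname> \
  --services b \
  --resource-types co \
  --permissions aclrw \
  --expiry 2024-12-31 \
  --https-only
```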
Get a storage account SAS token or create one using the Azure portal. To create
Navigate to **Exports** at the billing account scope and create a new export using the following steps. 1. Select **Create**.
-1. Configure the Export details as you would for a normal export. You can configure the export to use an existing directory or container or you can specify a new directory or container and exports will create them for you.
+1. Configure the Export details as you would for a normal export. You can configure the export to use an existing directory or container, or you can specify a new directory or container. The export process creates them for you.
1. When configuring Storage, select **Use a SAS token**. :::image type="content" source="./media/export-cost-data-storage-account-sas-key/new-export.png" alt-text="Screenshot showing the New export where you select SAS token." lightbox="./media/export-cost-data-storage-account-sas-key/new-export.png" ::: 1. Enter the name of the storage account and paste in your SAS token. 1. Specify an existing container or Directory or identify new ones to be created. 1. Select **Create**.
-The SAS token-based export only works while the token remains valid. Reset the token before the current one expires, or your export will stop working. Because the token provides access to your storage account, protect the token as carefully as you would any other sensitive information. You're responsible to maintain permissions and data access when your export data to your storage account.
+The SAS token-based export only works while the token remains valid. Reset the token before the current one expires, or your export stops working. Because the token provides access to your storage account, protect the token as carefully as you would any other sensitive information. You're responsible for maintaining permissions and data access when you export data to your storage account.
## Troubleshoot exports using SAS tokens
The following are common issues that might happen when you configure or use SAS
- Not seeing the key is expected behavior. After the SAS Export is configured, the key is hidden for security reasons. - You can't access the storage account from the tenant where the export is configured.
- - It's expected behavior. If the storage account is in another tenant, you need to navigate to that tenant first in the Azure portal to find the storage account.
+ - The behavior is expected. If the storage account is in another tenant, you need to navigate to that tenant first in the Azure portal to find the storage account.
- Your export fails because of a SAS token-related error. - Your export works only while the SAS token remains valid. Create a new key and run the export.
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 09/12/2023 Last updated : 11/29/2023
Data export is available for various Azure account types, including [Enterprise
For Azure Storage accounts: - Write permissions are required to change the configured storage account, independent of permissions on the export. - Your Azure storage account must be configured for blob or file storage.-- Don't configure exports to a storage container that's configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
+- Don't configure exports to a storage container that is configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
- To export to storage accounts with configured firewalls, you need other privileges on the storage account. The other privileges are only required during export creation or modification. They are: - Owner role on the storage account. Or - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions. Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall. - The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
- :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing the From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
+ :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-00
If you need to export to a storage account behind the firewall for security and compliance requirements, ensure that you have all [prerequisites](#prerequisites) met.
-Enable **Allow trusted Azure services access** on the storage account from the Exports page. Here's a screenshot showing the page.
+Enable **Allow trusted Azure services access** on the storage account. You can turn it on from the **Networking** page when you configure the storage account's firewall. Here's a screenshot showing the page.
-A system-assigned managed identity is created for a new job export when it's created or modified. You must have permissions because Cost Management uses the privilege to assign the *StorageBlobDataContributor* role to the managed identity. The permission is restricted to the storage account container scope. After the export job is created or updated, the user doesn't require Owner permissions for regular runtime operations.
+If you missed enabling that setting, you can easily do so from the **Exports** page when creating a new export.
++
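If you manage the storage account firewall from the command line instead, the trusted-services exception maps to the network rule bypass setting; a sketch with placeholder names:

```azurecli-interactive
# Allow trusted Azure services through the storage account firewall
az storage account update \
  --name <storageaccountname> \
  --resource-group <resourcegroupname> \
  --bypass AzureServices \
  --default-action Deny
```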
+A system-assigned managed identity is created for a new export job when it is created or modified. You must have permissions because Cost Management uses the privilege to assign the *StorageBlobDataContributor* role to the managed identity. The permission is restricted to the storage account container scope. After the export job is created or updated, the user doesn't require Owner permissions for regular runtime operations.
> [!NOTE] > - When a user updates destination details or deletes an export, the *StorageBlobDataContributor* role assigned to the managed identity is automatically removed. To enable the system to remove the role assignment, the user must have `microsoft.Authorization/roleAssignments/delete` permissions. If the permissions aren't available, the user needs to manually remove the role assignment on the managed identity.
Add exports to the list of trusted services. For more information, see [Trusted
### Export schedule
-Scheduled exports are affected by the time and day of week of when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, for a daily export of month-to-date costs export set at a daily frequency, the export runs during once each UTC day. Similarly for a weekly export, the export runs every week on the same UTC day as it is scheduled. Individual export runs can occur at different times throughout the day. So, avoid taking a firm dependency on the exact time of the export runs. Run timing depends on the active load present in Azure during a given UTC day. When an export run begins, your data should be available within 4 hours.
+Scheduled exports are affected by the time and day of week when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, for a daily export of month-to-date costs set at a daily frequency, the export runs once each UTC day. Similarly, for a weekly export, the export runs every week on the same UTC day as it is scheduled. Individual export runs can occur at different times throughout the day. So, avoid taking a firm dependency on the exact time of the export runs. Run timing depends on the active load present in Azure during a given UTC day. When an export run begins, your data should be available within 4 hours.
Exports are scheduled using Coordinated Universal Time (UTC). The Exports API always uses and displays UTC.
Each export creates a new file, so older exports aren't overwritten.
#### Create an export for multiple subscriptions
-If you have an Enterprise Agreement, then you can use a management group to aggregate subscription cost information in a single container. Then you can export cost management data for the management group. When you create an export in the Azure portal, select the **Actual Costs** option. When you create a management group export using the API, create a *usage export*.
-
-Currently, exports at the management group scope only support usage charges. Purchases including reservations and savings plans aren't present in your exports file.
-
-Exports for management groups of other subscription types aren't supported.
+You can use a management group to aggregate subscription cost information in a single container. Exports support management group scope for Enterprise Agreement but not for Microsoft Customer Agreement or other subscription types. Multiple currencies are also not supported in management group exports.
-Multiple currencies are not supported in management group exports.
+Exports at the management group scope support only usage charges and purchases (including reservations and savings plans). Amortized cost reports aren't supported. When you create an export from the Azure portal for a management group scope, the metric field isn't shown because it defaults to the usage type. When you create a management group scope export using the REST API, choose [ExportType](/rest/api/cost-management/exports/create-or-update#exporttype) as `Usage`.
-1. If you haven't already created a management group, create one group and assign subscriptions to it.
+1. Create one management group and assign subscriptions to it, if you haven't already.
1. In cost analysis, set the scope to your management group and select **Select this management group**. :::image type="content" source="./media/tutorial-export-acm-data/management-group-scope.png" alt-text="Example showing the Select this management group option" lightbox="./media/tutorial-export-acm-data/management-group-scope.png"::: 1. Create an export at the scope to get cost management data for the subscriptions in the management group.
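When scripting, the same management group export can be created with the `costmanagement` CLI extension; a hedged sketch where every name, the container, and the recurrence window are placeholders:

```azurecli-interactive
# Create a daily usage export at a management group scope
az costmanagement export create \
  --name MgDemoExport \
  --type Usage \
  --scope "providers/Microsoft.Management/managementGroups/<managementgroupid>" \
  --storage-account-id "/subscriptions/<subid>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storageaccount>" \
  --storage-container exports \
  --storage-directory mgdemo \
  --timeframe MonthToDate \
  --recurrence Daily \
  --recurrence-period from="2024-01-01T00:00:00Z" to="2024-12-31T00:00:00Z" \
  --schedule-status Active
```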
Partitioning isn't currently supported for resource groups or management group s
#### Update existing exports to use file partitioning
-If you have existing exports and you want to set up file partitioning, create a new export. File partitioning is only available with the latest Exports version. There may be minor changes to some of the fields in the usage files that get created.
+If you have existing exports and you want to set up file partitioning, create a new export. File partitioning is only available with the latest Exports version. There might be minor changes to some of the fields in the usage files that get created.
-If you enable file partitioning on an existing export, you may see minor changes to the fields in file output. Any changes are due to updates that were made to Exports after you initially set yours up.
+If you enable file partitioning on an existing export, you might see minor changes to the fields in file output. Any changes are due to updates that were made to Exports after you initially set yours up.
#### Partitioning output
In Storage Explorer, navigate to the container that you want to open and select
![Example information shown in Storage Explorer](./media/tutorial-export-acm-data/storage-explorer.png)
-The file opens with the program or application that's set to open CSV file extensions. Here's an example in Excel.
+The file opens with the program or application set to open CSV file extensions. Here's an example in Excel.
![Example exported CSV data shown in Excel](./media/tutorial-export-acm-data/example-export-data.png)
Select an export to view the run history.
### Export runs twice a day for the first five days of the month
-If you've created a daily export, you have two runs per day for the first five days of each month. One run executes and creates a file with the current monthΓÇÖs cost data. It's the run that's available for you to see in the run history. A second run also executes to create a file with all the costs from the prior month. The second run isn't currently visible in the run history. Azure executes the second run to ensure that your latest file for the past month contains all charges exactly as seen on your invoice. It runs because there are cases where latent usage and charges are included in the invoice up to 72 hours after the calendar month has closed. To learn more about Cost Management usage data updates, see [Cost and usage data updates and retention](understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention).
+There are two runs per day for the first five days of each month after you create a daily export. One run executes and creates a file with the current month's cost data. It's the run that's available for you to see in the run history. A second run also executes to create a file with all the costs from the prior month. The second run isn't currently visible in the run history. Azure executes the second run to ensure that your latest file for the past month contains all charges exactly as seen on your invoice. It runs because there are cases where latent usage and charges are included in the invoice up to 72 hours after the calendar month is closed. To learn more about Cost Management usage data updates, see [Cost and usage data updates and retention](understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention).
>[!NOTE]
> A daily export created between the 1st and the 5th of the current month doesn't generate data for the previous month, because the export schedule starts from the date of creation.
If you've created a daily export, you have two runs per day for the first five d
One of the purposes of exporting your Cost Management data is to access the data from external systems. You might use a dashboard system or other financial system. Such systems vary widely so showing an example wouldn't be practical. However, you can get started with accessing your data from your applications at [Introduction to Azure Storage](../../storage/common/storage-introduction.md).
+## Exports FAQ
+
+Here are some frequently asked questions and answers about exports.
+
+### Why do I see garbled characters when I open exported cost files with Microsoft Excel?
+
+If you see garbled characters in Excel when you use an Asian language, such as Japanese or Chinese, the likely cause is that the exported CSV files are UTF-8 encoded and Excel doesn't detect the encoding automatically. You can resolve this issue with the following steps:
+
+For new versions of Excel:
+
+1. Open Excel.
+1. Select the **Data** tab at the top.
+1. Select the **From Text/CSV** option.
+ :::image type="content" source="./media/tutorial-export-acm-data/new-excel-from-text.png" alt-text="Screenshot showing the Excel From Text/CSV option." lightbox="./media/tutorial-export-acm-data/new-excel-from-text.png" :::
+1. Select the CSV file that you want to import.
+1. In the next box, set **File origin** to **65001: Unicode (UTF-8)**.
+ :::image type="content" source="./media/tutorial-export-acm-data/new-excel-file-origin.png" alt-text="Screenshot showing the Excel File origin option." lightbox="./media/tutorial-export-acm-data/new-excel-file-origin.png" :::
+1. Select **Load**.
+
+For older versions of MS Excel:
+
+1. Open Excel.
+1. Select the **Data** tab at the top.
+1. Select the **From Text** option and then select the CSV file that you want to import.
+1. Excel shows the Text Import Wizard.
+1. In the wizard, select the **Delimited** option.
+1. In the **File origin** field, select **65001 : Unicode (UTF-8)**.
+1. Select **Next**.
+1. Next, select the **Comma** option and then select **Finish**.
+1. In the dialog window that appears, select **OK**.
+
+### Why does the aggregated cost from the exported file differ from the cost displayed in Cost Analysis?
+
+You might have discrepancies between the aggregated cost from the exported file and the cost displayed in Cost Analysis. Determine if the tool you use to read and aggregate the total cost is truncating decimal values. This issue can happen in tools like Power BI and Microsoft Excel. Determine if decimal places are getting dropped when cost values are converted into integers. Losing decimal values can result in a loss of precision and misrepresentation of the aggregated cost.
+
+To manually transform a column to a decimal number in Power BI, follow these steps:
+
+1. Go to the Table view.
+1. Select **Transform data**.
+1. Right-click the required column.
+1. Change the type to a decimal number.
+ ## Next steps In this tutorial, you learned how to:
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 10/16/2023 Last updated : 11/30/2023
Azure has the following policies for cancellations, exchanges, and refunds.
- Only reservation owners can process an exchange. [Learn how to Add or change users who can manage a reservation](manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default). - An exchange is processed as a refund and a repurchase - different transactions are created for the cancellation and the new reservation purchase. The prorated reservation amount is refunded for the reservation that's traded in. You're charged fully for the new purchase. The prorated reservation amount is the daily prorated residual value of the reservation being returned.
+- The new reservation's lifetime commitment should equal or be greater than the returned reservation's remaining commitment. Example: for a three-year reservation that's 100 USD per month and exchanged after the 18th payment, the new reservation's lifetime commitment should be 1,800 USD or more (paid monthly or upfront).
- The new reservation purchased as part of exchange has a new term starting from the time of exchange. - There's no penalty or annual limits for exchanges. - Exchanges will be unavailable for all compute reservations - Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations - purchased on or after **January 1, 2024**. Compute reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect. For more information about the exchange policy change, see [Changes to the Azure reservation exchange policy](reservation-exchange-policy-changes.md).
Azure has the following policies for cancellations, exchanges, and refunds.
- Only reservation order owners can process a refund. [Learn how to Add or change users who can manage a reservation](manage-reserved-vm-instance.md#who-can-manage-a-reservation-by-default). - For CSP program, the 50,000 USD limit is per customer.
-Let's look at an example with the previous points in mind. If you bought a $300,000 reservation, you can exchange it at any time for another reservation that equals or costs more (of the remaining reservation balance, not the original purchase price). For this example:
+Let's look at an example with the previous points in mind. If you bought a 300,000 USD reservation, you can exchange it at any time for another reservation that equals or costs more (of the remaining reservation balance, not the original purchase price). For this example:
- There's no penalty or annual limits for exchanges. - The refund that results from the exchange doesn't count against the refund limit.
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
This Azure Blob Storage connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint| || --| --|
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|Γ£ô <small> Exclude storage account V1|
-|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |Γ£ô <small> Exclude storage account V1|
-|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|Γ£ô <small> Exclude storage account V1|
-|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|Γ£ô <small> Exclude storage account V1|
-|[Delete activity](delete-activity.md)|&#9312; &#9313;|Γ£ô <small> Exclude storage account V1|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|Γ£ô Exclude storage account V1|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |Γ£ô Exclude storage account V1|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|Γ£ô Exclude storage account V1|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|Γ£ô Exclude storage account V1|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|Γ£ô Exclude storage account V1|
*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-file-storage.md
This Azure Files connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint| || --| --|
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
-|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
-|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
-|[Delete activity](delete-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ Exclude storage account V1|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ Exclude storage account V1|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ Exclude storage account V1|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|✓ Exclude storage account V1|
*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
This Azure SQL Managed Instance connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint|
|| --| --|
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ <small> Public preview |
-|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ <small> Public preview |
-|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ <small> Public preview |
-|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ <small> Public preview |
-|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ <small> Public preview |
-|[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|✓ <small> Public preview |
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ Public preview |
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ Public preview |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ Public preview |
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ Public preview |
+|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ Public preview |
+|[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|✓ Public preview |
*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
To copy data to SQL Managed Instance, the following properties are supported in
| WriteBehavior | Specify the write behavior for copy activity to load data into Azure SQL MI. <br/> The allowed values are **Insert** and **Upsert**. By default, the service uses insert to load data. | No |
| upsertSettings | Specify the group of the settings for write behavior. <br/> Apply when the WriteBehavior option is `Upsert`. | No |
| ***Under `upsertSettings`:*** | | |
-| useTempDB | Specify whether to use the a global temporary table or physical table as the interim table for upsert. <br>By default, the service uses global temporary table as the interim table. value is `true`. | No |
+| useTempDB | Specify whether to use a global temporary table or a physical table as the interim table for upsert. <br>By default, the service uses a global temporary table as the interim table. The default value is `true`. | No |
| interimSchemaName | Specify the interim schema for creating the interim table if a physical table is used. Note: the user needs permission to create and delete tables. By default, the interim table shares the same schema as the sink table. <br/> Apply when the useTempDB option is `False`. | No |
| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. If not specified, the primary key is used. | No |
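To make these options concrete, here's a minimal copy activity sink sketch for upsert into SQL Managed Instance; assume `CustomerId` is the unique key column and `dbo` the interim schema (both illustrative):

```json
{
  "sink": {
    "type": "SqlMISink",
    "writeBehavior": "upsert",
    "upsertSettings": {
      "useTempDB": false,
      "interimSchemaName": "dbo",
      "keys": [ "CustomerId" ]
    }
  }
}
```

With `useTempDB` set to `false` as shown, the service stages rows in a physical interim table under `dbo`, so the account used by the copy activity needs permission to create and delete tables in that schema.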
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-table-storage.md
This Azure Table storage connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint|
|| --| --|
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
-|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ <small> Exclude storage account V1|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ Exclude storage account V1|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ Exclude storage account V1|
*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md
Copy activity currently supports the following interim data types: Boolean, Byte
The following data type conversions are supported between the interim types from source to sink.
-| Source\Sink | Boolean | Byte array | Decimal | Date/Time (1)</small> | Float-point <small>(2)</small> | GUID | Integer <small>(3) | String | TimeSpan |
+| Source\Sink | Boolean | Byte array | Decimal | Date/Time (1) | Float-point (2) | GUID | Integer (3) | String | TimeSpan |
| -- | - | - | - | - | | - | -- | | -- |
| Boolean | ✓ | | ✓ | | ✓ | | ✓ | ✓ | |
| Byte array | | ✓ | | | | | | ✓ | |
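Where the defaults in this matrix aren't what you need, the conversion behavior can be tuned on the copy activity's tabular translator. A hedged sketch follows; the property names reflect the type-conversion settings this article documents, but treat the values as illustrative:

```json
{
  "translator": {
    "type": "TabularTranslator",
    "typeConversion": true,
    "typeConversionSettings": {
      "allowDataTruncation": true,
      "treatBooleanAsNumber": false,
      "dateTimeFormat": "yyyy-MM-dd HH:mm:ss.fff"
    }
  }
}
```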
In the following example, the input dataset has a structure, and it points to a
} ```
-In this sample, the output dataset has a structure and it points to a table in Salesfoce.
+In this sample, the output dataset has a structure and it points to a table in Salesforce.
```json {
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
If you select the **Install Azure PowerShell** type for your express custom setu
If you select the **Install licensed component** type for your express custom setup, you can then select an integrated component from our ISV partners in the **Component name** drop-down list:
-* If you select the **SentryOne's Task Factory** component, you can install the [Task Factory](https://www.sentryone.com/products/task-factory/high-performance-ssis-components) suite of components from SentryOne on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **2020.21.2**.
+* If you select the **SentryOne's Task Factory** component, you can install the [Task Factory](https://www.solarwinds.com/resources/it-glossary/ssis-components) suite of components from SentryOne on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **2020.21.2**.
* If you select the **oh22's HEDDA.IO** component, you can install the [HEDDA.IO](https://github.com/oh22is/HEDDA.IO/tree/master/SSIS-IR) data quality/cleansing component from oh22 on your Azure-SSIS IR. To do so, you need to purchase their service beforehand. The current integrated version is **1.0.14**.
data-factory Solution Template Replicate Multiple Objects Sap Cdc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-replicate-multiple-objects-sap-cdc.md
A sample control file is shown below:
## Next steps - [Azure Data Factory SAP CDC](sap-change-data-capture-introduction-architecture.md)
+- [SAP CDC advanced topics](sap-change-data-capture-advanced-topics.md)
- [Azure Data Factory change data capture](concepts-change-data-capture.md)
databox-online Azure Stack Edge Deploy Aks On Azure Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md
Previously updated : 10/17/2023 Last updated : 11/29/2023 # Customer intent: As an IT admin, I need to understand how to deploy and configure Azure Kubernetes service on Azure Stack Edge.
Use this step to configure the virtual switch for Kubernetes compute traffic.
[![Screenshot that shows the Kubernetes page in the Azure portal.](./media/azure-stack-edge-deploy-aks-on-azure-stack-edge/azure-stack-edge-kubernetes-page.png)](./media/azure-stack-edge-deploy-aks-on-azure-stack-edge/azure-stack-edge-kubernetes-page.png#lightbox)
-1. Enable the compute on a port that has internet access. For example, in this case, port 2 that was connected to the internet is enabled for compute. Internet access allows you to retrieve container images from AKS.
+1. Enable the compute on a port that has internet access. For example, in this case, port 2 that was connected to the internet is enabled for compute. Internet access allows you to retrieve container images from AKS.
+ The Azure consistent services virtual IP must be able to reach this compute virtual switch network either via external routing or by creating an Azure consistent services virtual IP on the same network.
+
1. For Kubernetes nodes, specify a contiguous range of six static IPs in the same subnet as the network for this port. As part of the AKS deployment, two clusters are created, a management cluster and a target cluster. The IPs that you specified are used as follows:
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 09/28/2023 Last updated : 11/29/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
You'll now add the prepared node to the first node and form the cluster. Before
## Configure virtual IPs
-For Azure consistent services and NFS, you'll also need to define a virtual IP that allows you to connect to a clustered device as opposed to a specific node. A virtual IP is an available IP in the cluster network and any client connecting to the cluster network on the two-node device should be able to access this IP.
-
+For Azure consistent services and NFS, you'll also need to define a virtual IP that allows you to connect to a clustered device as opposed to a specific node. A virtual IP is an available IP in the cluster network and any client connecting to the cluster network on the two-node device should be able to access this IP. Azure consistent services and NFS must be on the same network.
### For Azure Consistent Services
databox-online Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md
Previously updated : 07/19/2023 Last updated : 11/30/2023 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
You'll now add the prepared node to the first node and form the cluster. Before
## Configure virtual IPs
-For Azure consistent services and NFS, you'll also need to define a virtual IP that allows you to connect to a clustered device as opposed to a specific node. A virtual IP is an available IP in the cluster network and any client connecting to the cluster network on the two-node device should be able to access this IP.
-
+For Azure consistent services and NFS, you'll also need to define a virtual IP that allows you to connect to a clustered device as opposed to a specific node. A virtual IP is an available IP in the cluster network and any client connecting to the cluster network on the two-node device should be able to access this IP. Azure consistent services and NFS must be on the same network.
### For Azure Consistent Services
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
You might choose to configure extra scan result methods, such as **Event Grid**
Blob index tags can be used by applications to automate workflows, but aren't tamper-resistant. Read more on [setting up response](defender-for-storage-configure-malware-scan.md#setting-up-response-to-malware-scanning).
+> [!NOTE]
+> Access to index tags requires permissions. For more information, see [Get, set, and update blob index tags](/azure/storage/blobs/storage-blob-index-how-to#get-set-and-update-blob-index-tags).
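As a sketch of what an automated workflow would read back, a scanned blob carries index tags roughly like the following. The exact tag keys and value strings here are from memory and may differ; and because index tags aren't tamper-resistant, don't treat them as a security control:

```json
{
  "Malware Scanning scan result": "No threats found",
  "Malware Scanning scan time UTC": "2023-11-29T10:15:32Z"
}
```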
+ ### Defender for Cloud security alerts When a malicious file is detected, Microsoft Defender for Cloud generates a [Microsoft Defender for Cloud security alert](alerts-overview.md#what-are-security-alerts). To see the alert, go to **Microsoft Defender for Cloud** security alerts.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Learn more about the new [cloud security graph, attack path analysis, and the cl
Until now, Defender for Cloud based its posture assessments for VMs on agent-based solutions. To help customers maximize coverage and reduce onboarding and management friction, we're releasing agentless scanning for VMs to preview.
-With agentless scanning for VMs, you get wide visibility on installed software and software CVEs. You get the visibility without the challenges of agent installation and maintenance, network connectivity requirements, and performance affect on your workloads. The analysis is powered by Microsoft Defender vulnerability management.
+With agentless scanning for VMs, you get wide visibility on installed software and software CVEs. You get the visibility without the challenges of agent installation and maintenance, network connectivity requirements, and performance impact on your workloads. The analysis is powered by Microsoft Defender Vulnerability Management.
Agentless vulnerability scanning is available in both Defender Cloud Security Posture Management (CSPM) and in [Defender for Servers P2](defender-for-servers-introduction.md), with native support for AWS and Azure VMs.
defender-for-cloud Tutorial Enable Servers Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-servers-plan.md
After enabling the Log Analytics agent/Azure Monitor agent, you'll be presented
Vulnerability assessment for machines allows you to select between two vulnerability assessment solutions: -- Microsoft Defender vulnerability management
+- Microsoft Defender Vulnerability Management
- Microsoft Defender for Cloud integrated Qualys scanner **To select either of the vulnerability assessment solutions**:
deployment-environments How To Authenticate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-authenticate.md
Title: Authenticate to Azure Deployment Environments REST APIs
-description: Learn how to authenticate to Azure Deployment Environments REST APIs.
+description: Learn how to authenticate to Azure Deployment Environments REST APIs by using Microsoft Entra ID.
Previously updated : 09/07/2023 Last updated : 11/22/2023
-# Authenticating to Azure Deployment Environments REST APIs
-> [!TIP]
-> Before authenticating, ensure that the user or identity has the appropriate permissions to perform the desired action. For more information, see [configuring project admins](./how-to-configure-project-admin.md) and [configuring environment users](./how-to-configure-deployment-environments-user.md).
+# Authenticate to Azure Deployment Environments REST APIs
+> [!TIP]
+> Before authenticating, ensure that the user or identity has the appropriate permissions to perform the desired action. For more information, see [Provide access for dev team leads](./how-to-configure-project-admin.md) and [Provide access for developers](./how-to-configure-deployment-environments-user.md).
<a name='using-azure-ad-authentication-for-rest-apis'></a>
-## Using Microsoft Entra authentication for REST APIs
+## Use Microsoft Entra ID authentication for REST APIs
-Use the following procedures to authenticate with Microsoft Entra ID. You can follow along in [Azure Cloud Shell](../../articles/cloud-shell/quickstart.md), on an Azure virtual machine, or on your local machine.
+Use the following procedures to access Azure Deployment Environments REST APIs by using Microsoft Entra ID. You can follow along in [Azure Cloud Shell](../../articles/cloud-shell/quickstart.md), on an Azure virtual machine, or on your local machine.
-### Sign in to the user's Azure subscription
+### Sign in to your Azure subscription
Start by authenticating with Microsoft Entra ID by using the Azure CLI. This step isn't required in Azure Cloud Shell.
Start by authenticating with Microsoft Entra ID by using the Azure CLI. This ste
az login ```
-The command opens a browser window to the Microsoft Entra authentication page. It requires you to give your Microsoft Entra user ID and password.
+The command opens a browser window to the Microsoft Azure authentication page, where you can choose an account. The page requires you to give your Microsoft Entra ID username and password.
-Next, set the correct subscription context. If you authenticate from an incorrect subscription or tenant you may receive unexpected 403 Forbidden errors.
+Next, set the correct subscription context. If you authenticate from an incorrect subscription or tenant, you might receive unexpected **403 Forbidden** errors.
```azurecli az account set --subscription <subscription_id> ``` - <a name='retrieve-the-azure-ad-access-token'></a>
-### Retrieve the Microsoft Entra access token
+### Retrieve the Microsoft Entra ID access token
-Use the Azure CLI to acquire an access token for the Microsoft Entra authenticated user.
-Note that the resource ID is different depending on if you are accessing administrator (control plane) APIs or developer (data plane) APIs.
+Use the Azure CLI to acquire an access token for the Microsoft Entra ID authenticated user. The resource ID is different depending on whether you access administrator (control plane) APIs or developer (data plane) APIs.
For administrator APIs, use the following command: ```azurecli-interactive
For developer APIs, use the following command:
az account get-access-token --resource https://devcenter.azure.com ```
-After authentication is successful, Microsoft Entra ID returns an access token for current Azure subscription:
+After authentication is successful, Microsoft Entra ID returns an access token for the current Azure subscription:
```json {
After authentication is successful, Microsoft Entra ID returns an access token f
} ```
-The token is a Base64 string. The token is valid for at least 5 minutes with the maximum of 90 minutes. The expiresOn defines the actual token expiration time.
+The token is a Base64 string. The token is valid for at least five minutes. The maximum duration is 90 minutes. The `expiresOn` defines the actual token expiration time.
> [!TIP]
-> Developer API tokens for the service are encrypted and cannot be decoded using JWT decoding tools. They can only be processed by the service.
+> Developer API tokens for the service are encrypted and can't be decoded using JWT decoding tools. They can only be processed by the service.
-### Using a bearer token to access REST APIs
-To access REST APIs, you must set the Authorization header on your request. The header value should be the string `Bearer` followed by a space and the token you received in the previous step.
+### Use a bearer token to access REST APIs
+
+To access REST APIs, you must set the authorization header on your request. The header value should be the string `Bearer` followed by a space and the token you received in the previous step.
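Expressed as a generic request description, a call with the bearer token looks like the sketch below. The endpoint placeholder and API version are hypothetical, not confirmed values; substitute the URI and version from the Deployment Environments API reference:

```json
{
  "method": "GET",
  "url": "https://<devcenter-endpoint>/projects?api-version=<api-version>",
  "headers": {
    "Authorization": "Bearer <access-token-from-previous-step>"
  }
}
```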
## Next steps-- Review [Microsoft Entra fundamentals](../../articles/active-directory/fundamentals/whatis.md).+
+- [Review Microsoft Entra ID fundamentals](../../articles/active-directory/fundamentals/whatis.md)
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
To manage a dev box pool, you need the following permissions:
In Microsoft Dev Box, a dev box pool is a collection of dev boxes that you manage together. You must have at least one dev box pool before users can create a dev box.
-The following steps show you how to create a dev box pool that's associated with a project. You use an existing dev box definition and network connection in the dev center to configure the pool.
+The following steps show you how to create a dev box pool associated with a project. You use an existing dev box definition and network connection in the dev center to configure the pool.
If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure Microsoft Dev Box ](quickstart-configure-dev-box-service.md) to create them.
The Azure portal deploys the dev box pool and runs health checks to ensure that
:::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-grid-populated.png" alt-text="Screenshot that shows a list of dev box pools and status information.":::
+## Manage dev boxes in a pool
+
+You can manage existing dev boxes in a dev box pool through the Azure portal. You can start, stop, or delete dev boxes. You must have the Project Admin role at the project level to manage dev boxes in pools.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
+
+1. Select the project that contains the dev box pool that you want to manage.
+
+1. Select **Dev box pools**.
+
+1. Select the pool that contains the dev box that you want to manage.
+
+ :::image type="content" source="media/how-to-manage-dev-box-pools/manage-dev-box-pool.png" alt-text="Screenshot showing a list of dev box pools in Azure portal." lightbox="media/how-to-manage-dev-box-pools/manage-dev-box-pool.png":::
+
+1. Scroll to the far right, and select the Dev box operations menu (**...**) for the dev box that you want to manage.
+
+ :::image type="content" source="media/how-to-manage-dev-box-pools/manage-dev-box-in-azure-portal.png" alt-text="Screenshot of the Azure portal, showing dev boxes in a dev box pool." lightbox="media/how-to-manage-dev-box-pools/manage-dev-box-in-azure-portal.png":::
+
+1. Depending on the current state of the dev box, you can select **Start**, **Stop**, or **Delete**.
+
+ :::image type="content" source="media/how-to-manage-dev-box-pools/dev-box-operations-menu.png" alt-text="Screenshot of the Azure portal, showing the menu for managing a dev box." lightbox="media/how-to-manage-dev-box-pools/dev-box-operations-menu.png":::
+ ## Delete a dev box pool You can delete a dev box pool when you're no longer using it.
dev-box Quickstart Configure Dev Box Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-arm-template.md
Previously updated : 09/20/2023 Last updated : 11/28/2023 #Customer intent: As an enterprise admin, I want to understand how to create and configure dev box components with an ARM template so that I can provide dev box projects for my users.
This quickstart describes how to use an Azure Resource Manager (ARM) template to
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
+This [Dev Box with customized image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devcenter/devbox-with-customized-image) template deploys a simple Dev Box environment that you can use for testing and exploring the service.
+
+It creates the following Dev Box resources: dev center, project, network connection, dev box definition, and dev box pool. Once the template is deployed, you can go to the [developer portal](https://aka.ms/devbox-portal) to [create your dev box](quickstart-create-dev-box.md).
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal. ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Owner or Contributor role on an Azure subscription or resource group. - Microsoft Entra ID. Your organization must use Microsoft Entra ID for identity and access management.
+- Microsoft Intune subscription. Your organization must use Microsoft Intune for device management.
## Review the template
-The template used in this QuickStart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/devbox-with-builtin-image/)
+The template used in this QuickStart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/devbox-with-customized-image/).
+The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devcenter/devbox-with-customized-image/azuredeploy.json).
Multiple Azure resources are defined in the template:
Multiple Azure resources are defined in the template:
- [Microsoft.DevCenter/projects](/azure/templates/microsoft.devcenter/projects): create a project. - [Microsoft.DevCenter/networkConnections](/azure/templates/microsoft.devcenter/networkConnections): create a network connection. - [Microsoft.DevCenter/devcenters/devboxdefinitions](/azure/templates/microsoft.devcenter/devcenters/devboxdefinitions): create a dev box definition.
+- [Microsoft.DevCenter/devcenters/galleries](/azure/templates/microsoft.devcenter/devcenters/galleries): create an Azure Compute Gallery.
- [Microsoft.DevCenter/projects/pools](/azure/templates/microsoft.devcenter/projects/pools): create a dev box pool.
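As a flavor of what one of these resources looks like inside the template, here's a hedged sketch of the dev box pool resource; the API version and all names are illustrative, and the property set follows the `Microsoft.DevCenter/projects/pools` ARM reference as of this writing:

```json
{
  "type": "Microsoft.DevCenter/projects/pools",
  "apiVersion": "2023-04-01",
  "name": "myProject/myPool",
  "location": "eastus",
  "properties": {
    "devBoxDefinitionName": "myDevBoxDefinition",
    "networkConnectionName": "myNetworkConnection",
    "licenseType": "Windows_Client",
    "localAdministrator": "Enabled"
  }
}
```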
-### Find more templates
-
-To find more templates that are related to Microsoft Dev Box, see [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devcenter).
-
-For example, the [Dev Box with customized image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devcenter/devbox-with-customized-image) template creates the following Dev Box resources: dev center, project, network connection, dev box definition, and dev box pool. You can then go to the [developer portal](https://aka.ms/devbox-portal) to [create your dev box](quickstart-create-dev-box.md).
-
-Next, you can use a template to [add other customized images for Base, Java, .NET and Data](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devcenter/devbox-with-customized-image#add-other-customized-image-for-base-java-net-and-data). These images have the following software and tools installed:
--
-|Image type |Software and tools |
-|||
-|Base |Git, Azure CLI, VS Code, VS Code Extension for GitHub Copilot |
-|Java |Git, Azure CLI, VS Code, Maven, OpenJdk11, VS Code Extension for Java Pack |
-|.NET |Git, Azure CLI, VS Code,.NET SDK, Visual Studio |
-|Data |Git, Azure CLI, VS Code,Python 3, VS Code Extension for Python and Jupyter |
- ## Deploy the template 1. Select **Open Cloudshell** from the following code block to open Azure Cloud Shell, and then follow the instructions to sign in to Azure. ```azurepowershell-interactive
- $vnetAddressPrefixes = Read-Host -Prompt "Enter a vnet address prefixes like 10.0.0.0/16"
- $subnetAddressPrefixes = Read-Host -Prompt "Enter a vnet address prefixes like 10.0.0.0/24"
- $location = Read-Host -Prompt "Enter the location (e.g. eastus)"
-
- $resourceGroupName = "rg-devbox-test"
- $templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devcenter/devbox-with-builtin-image/azuredeploy.json"
- New-AzResourceGroup -Name $resourceGroupName -Location $location
- New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri -vnetAddressPrefixes $vnetAddressPrefixes -subnetAddressPrefixes $subnetAddressPrefixes -location $location
-
- Write-Host "After all the resources are provisioned, go to https://devportal.microsoft.com/ to create a Dev Box. You can also refer to this guide: [Quickstart: Create a dev box - Microsoft Dev Box | Microsoft Learn](https://learn.microsoft.com/azure/dev-box/quickstart-create-dev-box)."
- Write-Host "Press [ENTER] to continue."
+ $userPrincipalName = Read-Host "Please enter user principal name e.g. alias@xxx.com"
+ $resourceGroupName = Read-Host "Please enter resource group name e.g. rg-devbox-dev"
+ $location = Read-Host "Please enter region name e.g. eastus"
+ $templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devcenter/devbox-with-customized-image/azuredeploy.json"
+ $userPrincipalId=(Get-AzADUser -UserPrincipalName $userPrincipalName).Id
+ if($userPrincipalId){
+ Write-Host "Start provisioning..."
+ az group create -l $location -n $resourceGroupName
+ az group deployment create -g $resourceGroupName --template-uri $templateUri --parameters userPrincipalId=$userPrincipalId
+ }else {
+ Write-Host "User Principal Name cannot be found."
+ }
+
+ Write-Host "Provisioning Completed."
+ ``` Wait until you see the prompt from the console.
Next, you can use a template to [add other customized images for Base, Java, .NE
3. Right-click the shell console pane and then select **Paste**. 4. Enter the values.
-It takes about 10 minutes to deploy the template. When completed, the output is similar to:
-
- :::image type="content" source="media/quickstart-configure-dev-box-arm-template/dev-box-template-output.png" alt-text="Screenshot showing the output of the template.":::
+It takes about 30 minutes to deploy the template.
Azure PowerShell is used to deploy the template. You can also use the Azure portal and Azure CLI. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-portal.md).
-#### Depending on your configuration, you may want to change the following parameters:
+### Required parameters
-- *Resource group name:* The default resource group name is "rg-devbox-test"; you can change it by editing `$resourceGroupName = "rg-devbox-test"` in the template.
+- *User Principal ID*: The user principal ID of the user or group that is granted the *Devcenter Dev Box User* role.
+- *User Principal Type*: The type of user principal. Valid values are *User* or *Group*.
+- *Location*: The location where the resources are deployed. Choose a location close to the dev box users to reduce latency.
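If you'd rather pass these values in a parameters file than answer the interactive prompts, a minimal sketch looks like this (the GUID is a placeholder, and the parameter names assume they match the template's definitions above):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "userPrincipalId": { "value": "00000000-0000-0000-0000-000000000000" },
    "userPrincipalType": { "value": "User" },
    "location": { "value": "eastus" }
  }
}
```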
-- *Subnet:* If you have an existing subnet, you can use the parameter `-existingSubnetId` to pass the existing subnet ID. The template doesn't create a new Virtual network and subnet if you specify an existing one.
+Alternatively, you can provide access to a dev box project in the Azure portal. For more information, see [Provide user-level access to projects for developers](how-to-dev-box-user.md).
-- *Dev Box User role:* To grant the role [*DevCenter Dev Box User*](how-to-dev-box-user.md) to your user at Dev box project level, pass the principal ID to the `-principalId` parameter.
- - **User:** You can find the principal ID listed as the object ID on the user Overview page.
- :::image type="content" source="media/quickstart-configure-dev-box-arm-template/user-object-id.png" alt-text="Screenshot showing the user overview page with object ID highlighted.":::
- - **Group:** You can find the principal ID listed as the object ID on the group Overview page.
- :::image type="content" source="media/quickstart-configure-dev-box-arm-template/group-object-id.png" alt-text="Screenshot showing the group overview page with object ID highlighted.":::
+### Virtual network security considerations
+
+Planning for a Microsoft Dev Box deployment covers many areas, including securing the virtual network (VNet). For more information, see [Azure network security overview](../security/fundamentals/network-overview.md).
-Alternatively, you can provide access to a dev box project in the Azure portal, see [Provide user-level access to projects for developers](how-to-dev-box-user.md)
-
## Review deployed resources 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Resource groups** from the left pane. 3. Select the resource group that you created in the previous section.
- :::image type="content" source="media/quickstart-configure-dev-box-arm-template/dev-box-template-resources.png" alt-text="Screenshot showing the newly created dev box resource group and the resources it contains in the Azure portal.":::
-
-1. Select the Dev Center. Its default name is dc-*resource-token*.
+ :::image type="content" source="media/quickstart-configure-dev-box-arm-template/dev-box-template-resources.png" alt-text="Screenshot showing the newly created dev box resource group and the resources it contains in the Azure portal." lightbox="media/quickstart-configure-dev-box-arm-template/dev-box-template-resources.png":::
## Clean up resources When you no longer need them, delete the resource group: Go to the Azure portal, select the resource group that contains these resources, and then select Delete.
+## Find more templates
+
+To find more templates that are related to Microsoft Dev Box, see [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devcenter).
+
+For example, you can use a template to [add other customized images for Base, Java, .NET and Data](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devcenter/devbox-with-customized-image#add-other-customized-image-for-base-java-net-and-data). These images have the following software and tools installed:
++
+|Image type |Software and tools |
+|||
+|Base |Git, Azure CLI, VS Code, VS Code Extension for GitHub Copilot |
+|Java |Git, Azure CLI, VS Code, Maven, OpenJdk11, VS Code Extension for Java Pack |
|.NET |Git, Azure CLI, VS Code, .NET SDK, Visual Studio |
|Data |Git, Azure CLI, VS Code, Python 3, VS Code Extension for Python and Jupyter |
+ ## Next steps - [Quickstart: Create a dev box](quickstart-create-dev-box.md)
dns Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/cli-samples.md
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Delegate Subdomain Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/delegate-subdomain-ps.md
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Alerts Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alerts-metrics.md
na Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Alias Appservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alias-appservice.md
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Delegate Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-delegate-domain-azure-dns.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an experienced network administrator, I want to configure Azure DNS, so I can host DNS zones.
dns Dns Domain Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-domain-delegation.md
description: Understand how to change domain delegation and use Azure DNS name s
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-for-azure-services.md
na Previously updated : 09/27/2022 Last updated : 11/30/2023 # How Azure DNS works with other Azure services
dns Dns Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-bicep.md
description: Learn how to create a DNS zone and record in Azure DNS. This is a s
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-template.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure Resource Manager
description: Learn how to create a DNS zone and record in Azure DNS. This article is a step-by-step quickstart to create and manage your first DNS zone and record using Azure Resource Manager template (ARM template). -- Previously updated : 09/27/2022++ Last updated : 11/30/2023
To find more templates that are related to Azure Traffic Manager, see [Azure Qui
1. Enter the values.
- The template deployment creates a zone with one `A` record pointing to two IP addresses. The resource group name is the project name with **rg** appended.
+ The template deployment creates a zone with one `A` record pointing to two IP addresses. The resource group name is the project name with `rg` appended.
It takes a couple seconds to deploy the template. When completed, the output is similar to:
Azure PowerShell is used to deploy the template. In addition to Azure PowerShell
1. Select **Resource groups** from the left pane.
-1. Select the resource group that you created in the previous section. The default resource group name is the project name with **rg** appended.
+1. Select the resource group that you created in the previous section. The default resource group name is the project name with `rg` appended.
1. The resource group should contain the following resources seen here:
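For orientation, the heart of a zone-plus-record template comes down to two resources. The sketch below uses the hypothetical zone `contoso.xyz` and the `2018-05-01` DNS API version; the two `ipv4Address` entries mirror the single `A` record with two IP addresses that the deployment creates:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/dnsZones",
      "apiVersion": "2018-05-01",
      "name": "contoso.xyz",
      "location": "global"
    },
    {
      "type": "Microsoft.Network/dnsZones/A",
      "apiVersion": "2018-05-01",
      "name": "contoso.xyz/www",
      "dependsOn": [
        "[resourceId('Microsoft.Network/dnsZones', 'contoso.xyz')]"
      ],
      "properties": {
        "TTL": 3600,
        "ARecords": [
          { "ipv4Address": "10.10.10.10" },
          { "ipv4Address": "10.10.10.11" }
        ]
      }
    }
  ]
}
```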
dns Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-cli.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using the Azure CLI so I can use Azure DNS for my name resolution.
dns Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-powershell.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure PowerShell'
-description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Azure PowerShell.
+description: Learn how to create a DNS zone and record in Azure DNS. This article is a step-by-step quickstart to create and manage your first DNS zone and record using Azure PowerShell.
Previously updated : 09/27/2022 Last updated : 11/30/2023
In this quickstart, you create your first DNS zone and record using Azure PowerShell. You can also perform these steps using the [Azure portal](dns-getstarted-portal.md) or the [Azure CLI](dns-getstarted-cli.md).
-A DNS zone is used to host the DNS records for a particular domain. To start hosting your domain in Azure DNS, you need to create a DNS zone for that domain name. Each DNS record for your domain is then created inside this DNS zone. Finally, to publish your DNS zone to the Internet, you need to configure the name servers for the domain. Each of these steps is described below.
+A DNS zone is used to host the DNS records for a particular domain. To start hosting your domain in Azure DNS, you need to create a DNS zone for that domain name. Each DNS record for your domain is then created inside this DNS zone. Finally, to publish your DNS zone to the Internet, you need to configure the name servers for the domain. Each of these steps is described in this article.
:::image type="content" source="media/dns-getstarted-portal/environment-diagram.png" alt-text="Diagram of DNS deployment environment using the Azure PowerShell." border="false":::
New-AzDnsZone -Name contoso.xyz -ResourceGroupName MyResourceGroup
## Create a DNS record
-You create record sets by using the `New-AzDnsRecordSet` cmdlet. The following example creates a record with the relative name "www" in the DNS Zone "contoso.xyz", in resource group "MyResourceGroup". The fully qualified name of the record set is "www.contoso.xyz". The record type is "A", with IP address "10.10.10.10", and the TTL is 3600 seconds.
+Create record sets by using the `New-AzDnsRecordSet` cmdlet. The following example creates a record with the relative name `www` in the DNS Zone `contoso.xyz`, in resource group `MyResourceGroup`. The fully qualified name of the record set is `www.contoso.xyz`. The record type is `A`, with IP address `10.10.10.10`, and the TTL is 3600 seconds.
```powershell New-AzDnsRecordSet -Name www -RecordType A -ZoneName contoso.xyz -ResourceGroupName MyResourceGroup -Ttl 3600 -DnsRecords (New-AzDnsRecordConfig -IPv4Address "10.10.10.10")
Remove-AzResourceGroup -Name MyResourceGroup
## Next steps
-Now that you've created your first DNS zone and record using Azure PowerShell, you can create records for a web app in a custom domain.
+Now that your first DNS zone and record are created using Azure PowerShell, you can create records for a web app in a custom domain.
> [!div class="nextstepaction"] > [Create DNS records for a web app in a custom domain](./dns-web-sites-custom-domain.md)
dns Dns Import Export Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export-portal.md
description: Learn how to import and export a DNS (Domain Name System) zone file
Previously updated : 10/20/2023 Last updated : 11/30/2023
dns Dns Operations Dnszones Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-cli.md
ms.devlang: azurecli
na Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Operations Dnszones Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-portal.md
na Previously updated : 09/27/2022 Last updated : 11/30/2023 # How to manage DNS Zones in the Azure portal
dns Dns Operations Dnszones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones.md
na Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Operations Recordsets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-cli.md
ms.devlang: azurecli
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Operations Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets.md
na Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md
description: Overview of DNS hosting service on Microsoft Azure. Host your domai
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an administrator, I want to evaluate Azure DNS so I can determine if I want to use it instead of my current DNS service.
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 11/03/2023 Last updated : 11/30/2023
dns Dns Protect Private Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-private-zones-recordsets.md
Title: Protecting private DNS Zones and Records - Azure DNS
description: In this learning path, get started protecting private DNS zones and record sets in Microsoft Azure DNS. --++ Previously updated : 09/27/2022 Last updated : 11/30/2023 ms.devlang: azurecli
dns Dns Protect Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-zones-recordsets.md
Title: Protecting DNS Zones and Records - Azure DNS description: In this learning path, get started protecting DNS zones and record sets in Microsoft Azure DNS. -+ Previously updated : 09/27/2022- Last updated : 11/30/2023+ ms.devlang: azurecli
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
na Previously updated : 09/27/2022 Last updated : 11/30/2023
A third party shouldn't have access to create reverse DNS records for Azure serv
This validation is only done when the reverse DNS record is set or modified. Periodic revalidation isn't done.
-For example, suppose the Public Ip address resource has the DNS name `contosoapp1.northus.cloudapp.azure.com` and IP address `23.96.52.53`. The reverse FQDN for the Public IP address can be specified as:
+For example, suppose the Public IP address resource has the DNS name `contosoapp1.northus.cloudapp.azure.com` and IP address `23.96.52.53`. The reverse FQDN for the Public IP address can be specified as:
* The DNS name for the Public IP address: `contosoapp1.northus.cloudapp.azure.com`. * The DNS name for a different PublicIpAddress in the same subscription, such as: `contosoapp2.westus.cloudapp.azure.com`.
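In resource terms, the reverse FQDN is just a DNS setting on the public IP address. A hedged ARM fragment, with an illustrative API version and resource names, might look like:

```json
{
  "type": "Microsoft.Network/publicIPAddresses",
  "apiVersion": "2023-05-01",
  "name": "contosoapp1-ip",
  "location": "eastus",
  "properties": {
    "publicIPAllocationMethod": "Static",
    "dnsSettings": {
      "domainNameLabel": "contosoapp1",
      "reverseFqdn": "contosoapp1.northus.cloudapp.azure.com"
    }
  }
}
```

Per the validation rule described above, setting `reverseFqdn` succeeds only if the name resolves to, or aliases, a resource you control.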
dns Dns Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-sdk.md
ms.devlang: csharp
na Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Dns Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-troubleshoot.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 # Azure DNS troubleshooting guide
To resolve common issues, try one or more of the following steps:
DNS name resolution is a multi-step process, which can fail for many reasons. The following steps help you investigate why DNS resolution is failing for a DNS record in a zone hosted in Azure DNS.
-1. Confirm that the DNS records have been configured correctly in Azure DNS. Review the DNS records in the Azure portal, checking that the zone name, record name, and record type are correct.
+1. Confirm that the DNS records are configured correctly in Azure DNS. Review the DNS records in the Azure portal, checking that the zone name, record name, and record type are correct.
2. Confirm that the DNS records resolve correctly on the Azure DNS name servers. - If you make DNS queries from your local PC, you may see cached results that don't reflect the current state of the name servers. Also, corporate networks often use DNS proxy servers, which prevent DNS queries from being directed to specific name servers. To avoid these problems, use a web-based name resolution service such as [digwebinterface](https://digwebinterface.com). - Be sure to specify the correct name servers for your DNS zone, as shown in the Azure portal. - Check that the DNS name is correct (you have to specify the fully qualified name, including the zone name) and the record type is correct
-3. Confirm that the DNS domain name has been correctly [delegated to the Azure DNS name servers](dns-domain-delegation.md). There are a [many 3rd-party web sites that offer DNS delegation validation](https://www.bing.com/search?q=dns+check+tool). This test is a *zone* delegation test, so you should only enter the DNS zone name and not the fully qualified record name.
+3. Confirm that the DNS domain name is correctly [delegated to the Azure DNS name servers](dns-domain-delegation.md). There are [many third-party web sites that offer DNS delegation validation](https://www.bing.com/search?q=dns+check+tool). This test is a *zone* delegation test, so you should only enter the DNS zone name and not the fully qualified record name.
4. Having completed the above, your DNS record should now resolve correctly. To verify, you can again use [digwebinterface](https://digwebinterface.com), this time using the default name server settings. ### Recommended articles
The following scenario demonstrates where a configuration error has led to the u
**Unhealthy Delegation**
-A primary zone contains NS delegation records, which help delegate traffic from the primary to the child zones. If any NS delegation record is present in the parent zone, the DNS server is supposed to mask all other records below the NS delegation record, except glue records, and direct traffic to the respective child zone based on the user query. If a parent zone contains other records meant for the child zones (delegated zones) below the NS delegation record, the zone will be marked unhealthy, and its status is **Degraded**.
+A primary zone contains NS delegation records, which help delegate traffic from the primary to the child zones. If any NS delegation record is present in the parent zone, the DNS server is supposed to mask all other records below the NS delegation record (except glue records) and direct traffic to the respective child zone based on the user query. If a parent zone contains other records meant for the child zones (delegated zones) below the NS delegation record, the zone will be marked unhealthy and its status is **Degraded**.
**What are glue records?** - These are records under the delegation record, which help direct traffic to the delegated/child zones using their IP addresses and are configured as seen in the following.
In the preceding example, **child** is the NS delegation records. The records _*
**How can you fix it?** - To resolve, locate and remove all records except glue records under NS delegation records in your parent zone.
-**How to locate unhealthy delegation records?** - A script has been created to find the unhealthy delegation records in your zone. The script will report records, which are unhealthy.
+**How to locate unhealthy delegation records?** - A script is provided to find the unhealthy delegation records in your zone. The script reports the records that are unhealthy.
1. Save the script located at: [Find unhealthy DNS records in Azure DNS - PowerShell script sample](./scripts/find-unhealthy-dns-records.md)
dns Dns Web Sites Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-web-sites-custom-domain.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an experienced network administrator, I want to create DNS records in Azure DNS, so I can host a web app in a custom domain.
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-autoregistration.md
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Private Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-cli.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an experienced network administrator, I want to create an Azure private DNS zone, so I can resolve host names on my private virtual networks.
dns Private Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Private Dns Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-migration-guide.md
description: This guide provides step by step instruction on how to migrate lega
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Private Dns Virtual Network Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-virtual-network-links.md
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Private Resolver Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-reliability.md
Previously updated : 09/27/2022 #Required; mm/dd/yyyy format. Last updated : 11/30/2023 #Required; mm/dd/yyyy format. #Customer intent: As a customer, I want to understand reliability support for Azure DNS Private Resolver. I need to avoid failures and respond to them so that I can minimize down time and data loss.
dns Dns Cli Create Dns Zone Record https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/dns-cli-create-dns-zone-record.md
Previously updated : 09/27/2022 Last updated : 11/30/2023
dns Tutorial Alias Pip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-pip.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to an Azure public IP address.
dns Tutorial Alias Rr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-rr.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to a resource record within the zone.
dns Tutorial Alias Tm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-tm.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an experienced network administrator, I want to configure Azure DNS alias records to use my apex domain name with Traffic Manager.
dns Tutorial Dns Private Resolver Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-dns-private-resolver-failover.md
Previously updated : 09/27/2022 Last updated : 11/30/2023 #Customer intent: As an administrator, I want to avoid having a single point of failure for DNS resolution.
dns Tutorial Public Dns Zones Child https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-public-dns-zones-child.md
ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897
Previously updated : 09/27/2022 Last updated : 11/30/2023
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
A `client-secret` is a string value your app can use in place of a certificate t
3. Create a `client-secret` for the `client-id` that you used to create your Azure Data Manager for Energy instance. 4. Add one now by clicking on *New Client Secret*. 5. Record the secret's `value` for later use in your client application code.
-6. The Service Principal [SPN] of the app id and client secret has the Infra Admin access to the instance.
+6. The access token obtained with the app ID and client secret has Infra Admin access to the instance.
> [!CAUTION] > Don't forget to record the secret's value. This secret value is never displayed again after you leave this page of 'client secret' creation.
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa
2. Get the service principal access token using [Generate service principal access token](how-to-manage-users.md#generate-service-principal-access-token). 3. If you try to directly use user tokens for adding entitlements, it results in a 401 error. The service principal access token must be used to add initial users in the system and those users (with admin access) can then manage more users. 4. Use the service principal access token to do these three steps using the commands outlined in the following sections.
-5. Add the users to the `users@<data-partition-id>.<domain>` OSDU group.
-6. Get the OSDU group such as `service.legal.editor@<data-partition-id>.<domain>` you want to add the user to.
-7. Add the users to that group.
+ 1. Add the users to the `users@<data-partition-id>.<domain>` OSDU group.
+ 1. Get the OSDU group such as `service.legal.editor@<data-partition-id>.<domain>` you want to add the user to.
+ 1. Add the users to that group.
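Each of those membership steps is a POST to the Entitlements API, carrying the service principal access token in the `Authorization: Bearer` header. The endpoint shape `https://<instance>.energy.azure.com/api/entitlements/v2/groups/<group-email>/members` and the body below follow the OSDU entitlements convention as I recall it; verify against your instance's API reference:

```json
{
  "email": "user@contoso.com",
  "role": "MEMBER"
}
```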
## Get the list of all available groups in a data partition
event-grid Configure Firewall Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall-mqtt.md
# Configure IP firewall for Azure Event Grid namespaces (MQTT)
-By default, Event Grid namespaces and entities in them such as Message Queuing Telemetry Transport (MQTT) topic spaces are accessible from internet as long as the request comes with valid authentication (access key) and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Only the MQTT clients that fall into the allowed IP range can connect to publish and subscribe. Clients originating from any other IP address are rejected and receive a 403 (Forbidden) response. For more information about network security features supported by Event Grid, see [Network security for Event Grid](network-security.md).
+By default, Event Grid namespaces and entities in them such as Message Queuing Telemetry Transport (MQTT) topic spaces are accessible from the internet as long as the request comes with valid authentication (access key) and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Only the MQTT clients that fall into the allowed IP range can connect to publish and subscribe. Clients originating from any other IP address are rejected and receive a 403 (Forbidden) response. For more information about network security features supported by Event Grid, see [Network security for Event Grid](network-security.md).
This article describes how to configure IP firewall settings for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
event-grid Configure Firewall Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall-namespaces.md
+
+ Title: Configure IP firewall for Azure Event Grid namespaces
+description: This article describes how to configure firewall settings for Azure Event Grid namespaces.
+ Last updated : 11/29/2023++++
+# Configure IP firewall for Azure Event Grid namespaces
+By default, Event Grid namespaces and entities are accessible from the internet as long as the request comes with valid authentication (access key) and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Only the clients that fall into the allowed IP range can connect to Azure Event Grid to publish events or pull events. Clients originating from any other IP address are rejected and receive a 403 (Forbidden) response. For more information about network security features supported by Event Grid, see [Network security for Event Grid](network-security.md).
+
+This article describes how to configure IP firewall settings for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
+
+## Create a namespace with IP firewall settings
+
+1. On the **Networking** page, if you want to allow clients to connect to the namespace endpoint via a public IP address, select **Public access** for **Connectivity method** if it's not already selected.
+2. You can restrict access to the namespace from specific IP addresses by specifying values for the **Address range** field. Specify a single IPv4 address or a range of IP addresses in Classless Inter-Domain Routing (CIDR) notation.
+
+ :::image type="content" source="./media/configure-firewall-namespaces/ip-firewall-settings.png" alt-text="Screenshot that shows IP firewall settings on the Networking page of the Create namespace wizard.":::
+
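If you script this instead of using the portal, the following is a minimal ARM snippet of the relevant properties. It assumes the `Microsoft.EventGrid/namespaces` resource uses the same `publicNetworkAccess` and `inboundIpRules` shape as other Event Grid resources and that the API version shown is available; verify both against the current resource schema before use. The namespace name, location, and IP ranges are placeholders.

```json
{
  "type": "Microsoft.EventGrid/namespaces",
  "apiVersion": "2023-06-01-preview",
  "name": "my-namespace",
  "location": "westus",
  "properties": {
    "publicNetworkAccess": "Enabled",
    "inboundIpRules": [
      { "ipMask": "203.0.113.0/24", "action": "Allow" },
      { "ipMask": "198.51.100.7", "action": "Allow" }
    ]
  }
}
```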
+## Update a namespace with IP firewall settings
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the **search box**, enter **Event Grid Namespaces** and select **Event Grid Namespaces** from the results.
+
+ :::image type="content" source="./media/create-view-manage-namespaces/portal-search-box-namespaces.png" alt-text="Screenshot showing Event Grid Namespaces in the search results.":::
+1. Select your Event Grid namespace in the list to open the **Event Grid Namespace** page for your namespace.
+1. On the **Event Grid Namespace** page, select **Networking** on the left menu.
+1. Specify values for the **Address range** field. Specify a single IPv4 address or a range of IP addresses in Classless inter-domain routing (CIDR) notation.
+
+ :::image type="content" source="./media/configure-firewall-namespaces/namespace-ip-firewall-settings.png" alt-text="Screenshot that shows IP firewall settings on the Networking page of an existing namespace.":::
+
+## Next steps
+See [Allow access via private endpoints](configure-private-endpoints-pull.md).
event-grid Configure Private Endpoints Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-private-endpoints-mqtt.md
# Configure private endpoints for Azure Event Grid namespaces with MQTT enabled
+You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to entities in your Event Grid namespaces securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. The private endpoint uses an IP address from the virtual network address space for your namespace. When an MQTT client on a private network connects to the MQTT broker on a private link, the client can publish and subscribe to MQTT messages. For more conceptual information, see [Network security](network-security.md).
This article shows you how to enable private network access for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
The following sections show you how to approve or reject a private endpoint connection.
1. In the search bar, type in **Event Grid Namespaces**, and select it to see the list of namespaces.
1. Select the **namespace** that you want to manage.
1. Select the **Networking** tab.
-1. If there are any connections that are pending, you'll see a connection listed with **Pending** in the provisioning state.
+1. If there are any connections that are pending, you see a connection listed with **Pending** in the provisioning state.
## Approve a private endpoint

You can approve a private endpoint that's in the pending state. To approve, follow these steps:
event-grid Configure Private Endpoints Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-private-endpoints-pull.md
Last updated 11/15/2023
# Configure private endpoints for Azure Event Grid namespaces
-You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to entities in your Event Grid namespaces securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. The private endpoint uses an IP address from the virtual network address space for your namespace. For more conceptual information, see [Network security](network-security.md).
+You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow clients from only your virtual network to connect to your Event Grid namespace securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. The private endpoint uses an IP address from the virtual network address space for your namespace. A client in a private network can connect to the Event Grid namespace and publish events or pull events. For more conceptual information, see [Network security](network-security-namespaces.md).
This article shows you how to enable private network access for an Event Grid namespace. For complete steps for creating a namespace, see [Create and manage namespaces](create-view-manage-namespaces.md).
The following sections show you how to approve or reject a private endpoint connection.
1. In the search bar, type in **Event Grid Namespaces**, and select it to see the list of namespaces.
1. Select the **namespace** that you want to manage.
1. Select the **Networking** tab.
-1. If there are any connections that are pending, you'll see a connection listed with **Pending** in the provisioning state.
+1. If there are any connections that are pending, you see a connection listed with **Pending** in the provisioning state.
### To approve a private endpoint

You can approve a private endpoint that's in the pending state. To approve, follow these steps:
To delete a private endpoint, follow these steps:
:::image type="content" source="./media/configure-private-endpoints-mqtt/remove-private-endpoint.png" alt-text="Screenshot showing the Private endpoint connection tab with Remove button selected.":::

## Next steps
-To learn about how to configure IP firewall settings, see [Configure IP firewall for Azure Event Grid namespaces](configure-firewall-mqtt.md).
+To learn about how to configure IP firewall settings, see [Configure IP firewall for Azure Event Grid namespaces](configure-firewall-namespaces.md).
event-grid Create View Manage System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-system-topics.md
Title: Create, view, and manage system topics in Azure Event Grid (portal) description: This article shows how to view existing system topics and create Azure Event Grid system topics using the Azure portal. Previously updated : 07/07/2020 Last updated : 11/30/2023 # Create, view, and manage Event Grid system topics in the Azure portal
You can create a system topic for an Azure resource (Storage account, Event Hubs namespace, and so on) in two ways:
When you use the **Events** page in the Azure portal to create an event subscription for an event raised by an Azure source (for example: Azure Storage account), the portal creates a system topic for the Azure resource and then creates a subscription for the system topic. You specify the name of the system topic if you're creating an event subscription on the Azure resource for the first time. From the second time onwards, the system topic name is displayed for you in the read-only mode. See [Quickstart: Route Blob storage events to web endpoint with the Azure portal](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage) for detailed steps. - Using the **Event Grid System Topics** page. You create a system topic manually in this case by using the following steps.
-1. Sign in to [Azure portal](https://portal.azure.com).
-2. In the search box at the top, type **Event Grid System Topics**, and then press **ENTER**.
-
- :::image type="content" source="./media/create-view-manage-system-topics/search-system-topics.png" alt-text="Screenshot that shows the Azure portal with Event Grid System Topics in the search box.":::
-3. On the **Event Grid System Topics** page, select **+ Create** on the toolbar.
-
- :::image type="content" source="./media/create-view-manage-system-topics/add-system-topic-menu.png" alt-text="Screenshot that shows in the Event Grid System Topics page with the Create button selected.":::
-4. On the **Create Event Grid System Topic** page, do the following steps:
- 1. Select the **topic type**. In the following example, **Storage Accounts** option is selected.
- 2. Select the **Azure subscription** that has your storage account resource.
- 3. Select the **resource group** that has the storage account.
- 4. Select the **storage account**.
- 5. Enter a **name** for the system topic to be created.
+ 1. Sign in to [Azure portal](https://portal.azure.com).
+ 2. In the search box at the top, type **Event Grid System Topics**, and then press **ENTER**.
- > [!NOTE]
- > You can use this system topic name to search metrics and diagnostic logs.
- 6. Select **Review + create**.
-
- ![Create system topic](./media/create-view-manage-system-topics/create-system-topic-page.png)
- 5. Review settings and select **Create**.
+ :::image type="content" source="./media/create-view-manage-system-topics/search-system-topics.png" alt-text="Screenshot that shows the Azure portal with Event Grid System Topics in the search box.":::
+ 3. On the **Event Grid System Topics** page, select **+ Create** on the toolbar.
+
+      :::image type="content" source="./media/create-view-manage-system-topics/add-system-topic-menu.png" alt-text="Screenshot that shows the Event Grid System Topics page with the Create button selected." lightbox="./media/create-view-manage-system-topics/add-system-topic-menu.png":::
+ 4. On the **Create Event Grid System Topic** page, do the following steps:
+ 1. Select the **topic type**. In the following example, **Storage Accounts** option is selected.
+ 2. Select the **Azure subscription** that has your storage account resource.
+ 3. Select the **resource group** that has the storage account.
+ 4. Select the **storage account**.
+ 5. Enter a **name** for the system topic to be created.
- ![Review and create system topic](./media/create-view-manage-system-topics/system-topic-review-create.png)
- 6. After the deployment succeeds, select **Go to resource** to see the **Event Grid System Topic** page for the system topic you created.
+ > [!NOTE]
+ > You can use this system topic name to search metrics and diagnostic logs.
+ 6. Select **Review + create**.
+
+ :::image type="content" source="./media/create-view-manage-system-topics/create-system-topic-page.png" alt-text="Screenshot that shows the Create System Topic page.":::
+ 5. Review settings and select **Create**.
+
+ :::image type="content" source="./media/create-view-manage-system-topics/system-topic-review-create.png" alt-text="Screenshot that shows the Review & Create page.":::
+ 6. After the deployment succeeds, select **Go to resource** to see the **Event Grid System Topic** page for the system topic you created.
- ![System topic page](./media/create-view-manage-system-topics/system-topic-page.png)
+ :::image type="content" source="./media/create-view-manage-system-topics/system-topic-page.png" alt-text="Screenshot that shows the System Topic home page." lightbox="./media/create-view-manage-system-topics/system-topic-page.png":::
## View all system topics
Follow these steps to view all existing Event Grid system topics.
1. Follow instructions from the [View system topics](#view-all-system-topics) section to view all system topics, and select the system topic that you want to delete from the list.
2. On the **Event Grid System Topic** page, select **Delete** on the toolbar.
- ![System topic - delete button](./media/create-view-manage-system-topics/system-topic-delete-button.png)
+ :::image type="content" source="./media/create-view-manage-system-topics/system-topic-delete-button.png" alt-text="Screenshot that shows the System Topic page with the Delete button selected.":::
3. On the confirmation page, select **OK** to confirm the deletion. It deletes the system topic and also all the event subscriptions for the system topic.

## Create an event subscription

1. Follow instructions from the [View system topics](#view-all-system-topics) section to view all system topics, and select the system topic that you want to create an event subscription for.
2. On the **Event Grid System Topic** page, select **+ Event Subscription** from the toolbar.
- ![System topic - add event subscription button](./media/create-view-manage-system-topics/add-event-subscription-button.png)
+ :::image type="content" source="./media/create-view-manage-system-topics/add-event-subscription-button.png" alt-text="Screenshot that shows the System Topic page with Add Event Subscription button selected.":::
3. Confirm that the **Topic Type**, **Source Resource**, and **Topic Name** are automatically populated. Enter a name, select an **Endpoint Type**, and specify the **endpoint**. Then, select **Create** to create the event subscription.

   ![System topic - create event subscription](./media/create-view-manage-system-topics/create-event-subscription.png)
event-grid Event Schema Maintenance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-maintenance-configuration.md
+
+ Title: Azure Maintenance Configuration as an Event Grid source
+description: The article provides details on Azure Maintenance Configuration as an Event Grid source.
+++ Last updated : 11/29/2023++
+# Azure Maintenance Configuration as an Event Grid source
+
+This article provides the properties and schema for Azure Maintenance Configurations events. For an introduction to event schemas, see [Azure Event Grid event schema](./event-schema.md). It also links to articles that show how to use Maintenance Configuration as an event source.
+
+## Available event types
+
+Maintenance Configuration emits the following event types:
+
+| **Event type** | **Description** |
+| --- | --- |
+| Microsoft.Maintenance.PreMaintenanceEvent | Raised before a maintenance job starts. Gives the user an opportunity to perform pre-maintenance operations. |
+| Microsoft.Maintenance.PostMaintenanceEvent | Raised after a maintenance job completes. Gives the user an opportunity to perform post-maintenance operations. |
+
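For example, you could route only the pre-maintenance events to a webhook with the Azure CLI. In the following sketch, the source resource ID and endpoint URL are hypothetical placeholders:

```bash
# Subscribe a webhook to pre-maintenance events only.
# The source resource ID and endpoint URL are hypothetical placeholders.
az eventgrid event-subscription create \
  --name pre-maintenance-sub \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Maintenance/maintenanceConfigurations/<configuration-name>" \
  --endpoint "https://contoso.example.com/api/maintenance-hook" \
  --included-event-types Microsoft.Maintenance.PreMaintenanceEvent
```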
+## Example event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+Following is an example of a schema for the Pre-Maintenance event:
+
+```json
+[{
+  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+  "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+  "subject": "contosomaintenanceconfiguration",
+  "data": {
+    "correlationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+    "maintenanceConfigurationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+    "startDateTime": "2023-05-09T15:00:00Z",
+    "endDateTime": "2023-05-09T18:55:00Z",
+    "cancellationCutOffDateTime": "2023-05-09T14:59:00Z",
+    "resourceSubscriptionIds": ["subscription guid 1", "subscription guid 2"]
+  },
+  "eventType": "Microsoft.Maintenance.PreMaintenanceEvent",
+  "eventTime": "2023-05-09T14:25:00.3717473Z",
+  "dataVersion": "1.0",
+  "metadataVersion": "1"
+}]
+```
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+Following is an example of a schema for a pre-maintenance event:
++
+```json
+[{
+  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+  "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+  "subject": "contosomaintenanceconfiguration",
+  "data": {
+    "correlationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+    "maintenanceConfigurationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+    "startDateTime": "2023-05-09T15:00:00Z",
+    "endDateTime": "2023-05-09T18:55:00Z",
+    "cancellationCutOffDateTime": "2023-05-09T14:59:00Z",
+    "resourceSubscriptionIds": ["subscription guid 1", "subscription guid 2"]
+  },
+  "type": "Microsoft.Maintenance.PreMaintenanceEvent",
+  "time": "2023-05-09T14:25:00.3717473Z",
+  "specversion": "1.0"
+}]
+```
++
+# [Event Grid event schema](#tab/event-grid-event-schema)
+Following is an example of a schema for a post-maintenance event:
+
+```json
+[{
+  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+  "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+  "subject": "contosomaintenanceconfiguration",
+  "data": {
+    "correlationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+    "maintenanceConfigurationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+    "status": "Succeeded",
+    "startDateTime": "2023-05-09T15:00:00Z",
+    "endDateTime": "2023-05-09T18:55:00Z",
+    "resourceSubscriptionIds": ["subscription guid 1", "subscription guid 2"]
+  },
+  "eventType": "Microsoft.Maintenance.PostMaintenanceEvent",
+  "eventTime": "2023-05-09T15:55:00.3717473Z",
+  "dataVersion": "1.0",
+  "metadataVersion": "1"
+}]
+```
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+Following is an example of a post-maintenance event:
+
+```json
+[{
+  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+  "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+  "subject": "contosomaintenanceconfiguration",
+  "data": {
+    "correlationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration/providers/microsoft.maintenance/applyupdates/20230509150000",
+    "maintenanceConfigurationId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Maintenance/maintenanceConfigurations/contosomaintenanceconfiguration",
+    "status": "Succeeded",
+    "startDateTime": "2023-05-09T15:00:00Z",
+    "endDateTime": "2023-05-09T18:55:00Z",
+    "resourceSubscriptionIds": ["subscription guid 1", "subscription guid 2"]
+  },
+  "type": "Microsoft.Maintenance.PostMaintenanceEvent",
+  "time": "2023-05-09T15:55:00.3717473Z",
+  "specversion": "1.0"
+}]
+```
+++
+## Event properties
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+An event has the following top-level data:
+
+| **Property** | **Type** | **Description** |
+| --- | --- | --- |
+| topic | string | Full resource path to the event source. This field isn't writable. Event Grid provides this value. |
+| subject | string | Publisher-defined path to the event subject. |
+| eventType | string | One of the registered event types for this event source. |
+| eventTime | string | The time the event is generated based on the provider's UTC time. |
+| id | string | Unique identifier for the event. |
+| data | object | Maintenance Configuration event data. |
+| dataVersion | string | The schema version of the data object. The publisher defines the schema version. |
+| metadataVersion | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
++
+# [Cloud event schema](#tab/cloud-event-schema)
+
+An event has the following top-level data:
+
+| **Property** | **Type** | **Description** |
+| --- | --- | --- |
+| source | string | Full resource path to the event source. This field isn't writable. Event Grid provides this value. |
+| subject | string | Publisher-defined path to the event subject. |
+| type | string | One of the registered event types for this event source. |
+| time | string | The time the event is generated based on the provider's UTC time. |
+| id | string | Unique identifier for the event. |
+| data | object | Maintenance Configuration event data. |
+| specversion | string | CloudEvents schema specification version. |
+++
+The data object has the following properties:
+
+| **Property** | **Type** | **Description** |
+| --- | --- | --- |
+| correlationId | string | The resource ID of the specific maintenance schedule instance. |
+| maintenanceConfigurationId | string | The resource ID of the maintenance configuration. |
+| startDateTime | string | The maintenance schedule start time. |
+| endDateTime | string | The maintenance schedule end time. |
+| cancellationCutOffDateTime | string | The maintenance schedule instance cancellation cut-off time. |
+| resourceSubscriptionIds | string | The IDs of the subscriptions whose VMs are included in this schedule instance. |
+| status | string | The completion status of the maintenance schedule instance. |
+
+## Next steps
+
+- For an introduction to Azure Event Grid, see [What is Event Grid?](./overview.md)
+- For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](./subscription-creation-schema.md).
++
event-grid Network Security Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/network-security-namespaces.md
+
+ Title: Network security for Azure Event Grid namespaces
+description: This article describes how to use service tags for egress, IP firewall rules for ingress, and private endpoints for ingress with Azure Event Grid namespaces.
++
+ - ignite-2023
Last updated : 11/15/2023++++
+# Network security for Azure Event Grid namespaces
+This article describes how to use the following security features with Azure Event Grid:
+
+- Service tags for egress
+- IP Firewall rules
+- Private endpoints
++
+## Service tags
+A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. For more information about service tags, see [Service tags overview](../virtual-network/service-tags-overview.md).
+
+You can use service tags to define network access controls on network security groups or Azure Firewall. Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, AzureEventGrid) in the appropriate source or destination fields of a rule, you can allow or deny the traffic for the corresponding service.
++
+| Service tag | Purpose | Can use inbound or outbound? | Can be regional? | Can use with Azure Firewall? |
+| --- | --- |:---:|:---:|:---:|
+| AzureEventGrid | Azure Event Grid. | Both | No | No |
++
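For example, a minimal sketch of an outbound network security group rule that allows HTTPS traffic to Event Grid by service tag might look like the following Azure CLI command; the resource group and NSG names are hypothetical placeholders:

```bash
# Allow outbound HTTPS traffic to Azure Event Grid using the service tag.
# Resource names here are hypothetical placeholders.
az network nsg rule create \
  --resource-group my-resource-group \
  --nsg-name my-nsg \
  --name AllowEventGridOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureEventGrid \
  --destination-port-ranges 443
```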
+## IP firewall
+By default, Event Grid namespaces and entities are accessible from the internet as long as the request comes with valid authentication (access key) and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Clients originating from any other IP address are rejected and receive a 403 (Forbidden) response.
+
+For Message Queuing Telemetry Transport (MQTT) scenarios, only the clients that fall into the allowed IP range can connect to publish and subscribe for events. For more information, see [Configure IP firewall for Event Grid namespaces in MQTT scenarios](configure-firewall-mqtt.md).
+
+For non-MQTT scenarios, only the clients that fall into the allowed IP range can connect to Azure Event Grid to publish events or pull events. For more information, see [Configure IP firewall for Event Grid namespaces in non-MQTT scenarios](configure-firewall-namespaces.md).
+
+The steps are the same for both scenarios. These articles provide additional information specific to the MQTT or non-MQTT scenarios.
+
+## Private endpoints
+A private endpoint is a special network interface for an Azure service in your VNet. When you create a private endpoint for your namespace, it provides secure connectivity between clients on your VNet and your Event Grid namespace. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Event Grid service uses a secure private link. Using private endpoints for your Event Grid namespace enables you to:
+
+- Secure access to your namespace from a virtual network over the Microsoft backbone network as opposed to the public internet.
+- Securely connect from on-premises networks that connect to the virtual network using VPN or Express Routes with private-peering.
+
+When you create a private endpoint for a namespace in your virtual network, a consent request is sent for approval to the resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved. Otherwise, the connection is in **pending** state until approved. Applications in the virtual network can connect to the Event Grid service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. Resource owners can manage consent requests and the private endpoints through the **Private endpoints** tab for the resource in the Azure portal.
+
+When an MQTT client in a private network connects to the MQTT broker on a private link, the client can publish and subscribe to MQTT messages. For more information, see [Configure private endpoints for namespaces in MQTT scenarios](configure-private-endpoints-mqtt.md).
+
+In non-MQTT scenarios, a client in a private network can connect to the Event Grid namespace and publish events or pull events. For more information, see [Configure private endpoints for namespaces in non-MQTT scenarios](configure-private-endpoints-pull.md).
+
+### Connect to private endpoints
+You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to entities in your Event Grid namespaces securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. The private endpoint uses an IP address from the virtual network address space for your namespace.
+
+Clients on a virtual network using the private endpoint should use the same connection string for the namespace as clients connecting to the public endpoint. Domain Name System (DNS) resolution automatically routes connections from the virtual network to the namespace over a private link. Event Grid creates a [private DNS zone](../dns/private-dns-overview.md) attached to the virtual network with the necessary update for the private endpoints, by default. However, if you're using your own DNS server, you might need to make additional changes to your DNS configuration.
+
+When an MQTT client on a private network connects to the MQTT broker on a private link, the client can publish and subscribe to MQTT messages.
+
+### DNS changes for private endpoints
+When you create a private endpoint, the DNS CNAME record for the resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, a private DNS zone is created that corresponds to the private link's subdomain.
+
+When you resolve the namespace endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the service. The DNS resource records for 'namespaceA', when resolved from **outside the VNet** hosting the private endpoint, are:
+
+| Name | Type | Value |
+| --- | --- | --- |
+| `namespaceA.westus.eventgrid.azure.net` | CNAME | `namespaceA.westus.privatelink.eventgrid.azure.net` |
+| `namespaceA.westus.privatelink.eventgrid.azure.net` | CNAME | \<Azure traffic manager profile\> |
+
+You can deny or control access for a client outside the virtual network through the public endpoint using the [IP firewall](#ip-firewall).
+
+When resolved from the virtual network hosting the private endpoint, the namespace endpoint URL resolves to the private endpoint's IP address. The DNS resource records for the namespace 'namespaceA', when resolved from **inside the VNet** hosting the private endpoint, are:
+
+| Name | Type | Value |
+| --- | --- | --- |
+| `namespaceA.westus.eventgrid.azure.net` | CNAME | `namespaceA.westus.privatelink.eventgrid.azure.net` |
+| `namespaceA.westus.privatelink.eventgrid.azure.net` | A | 10.0.0.5 |
+
+This approach enables access to the namespace using the same connection string for clients on the virtual network hosting the private endpoints, and clients outside the virtual network.
+
+If you're using a custom DNS server on your network, clients can resolve the Fully Qualified Domain Name (FQDN) for the namespace endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network, or configure the A records for `namespaceName.regionName.privatelink.eventgrid.azure.net` with the private endpoint IP address.
+
+The recommended DNS zone name is `privatelink.eventgrid.azure.net`.
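To check which path a client resolves, you can query the namespace host name from inside and outside the virtual network; the host name and IP below reuse the illustrative `namespaceA` values from the tables above:

```bash
# From outside the VNet: the CNAME chain ends at the public endpoint.
nslookup namespaceA.westus.eventgrid.azure.net

# From inside the VNet (with the private DNS zone linked), the same name
# resolves to the private endpoint IP, for example 10.0.0.5.
nslookup namespaceA.westus.eventgrid.azure.net
```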
+
+### Private endpoints and publishing
+
+The following table describes the various states of the private endpoint connection and the effects on publishing:
+
+| Connection State | Successfully publish (Yes/No) |
+| --- | --- |
+| Approved | Yes |
+| Rejected | No |
+| Pending | No |
+| Disconnected | No |
+
+For publishing to be successful, the private endpoint connection state should be **approved**. If a connection is rejected, it can't be approved using the Azure portal. The only possibility is to delete the connection and create a new one instead.
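Besides the portal, a pending connection can be approved with the Azure CLI. The following is a minimal sketch, assuming you know the private endpoint connection's resource ID; the ID shown is a hypothetical placeholder:

```bash
# Approve a pending private endpoint connection by its resource ID.
# The ID below is a hypothetical placeholder.
az network private-endpoint-connection approve \
  --id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/namespaces/<namespace>/privateEndpointConnections/<connection-name>" \
  --description "Approved by owner"
```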
++
+## Quotas and limits
+There's a limit on the number of IP firewall rules and private endpoint connections per namespace. See [Event Grid quotas and limits](quotas-limits.md).
+
+## Related articles
+If you're using MQTT, see the following articles:
+
+- [Configure IP firewall for Event Grid namespaces in MQTT scenarios](configure-firewall-mqtt.md)
+- [Configure private endpoints for Event Grid namespaces in MQTT scenarios](configure-private-endpoints-mqtt.md)
+
+For pull-based event delivery, see the following articles:
+
+- [Configure IP firewall for Event Grid namespaces in non-MQTT scenarios](configure-firewall-namespaces.md)
+- [Configure private endpoints for Event Grid namespaces in non-MQTT scenarios](configure-private-endpoints-pull.md)
+
event-grid Subscribe Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-through-portal.md
Title: Azure Event Grid subscriptions through portal
description: This article describes how to create Event Grid subscriptions for the supported sources, such as Azure Blob Storage, by using the Azure portal. Previously updated : 09/12/2022 Last updated : 11/30/2023 # Subscribe to events through portal
To create an Event Grid subscription for any of the supported [event sources](co
1. Select **All services**.
- ![Select all services](./media/subscribe-through-portal/select-all-services.png)
-
+ :::image type="content" source="./media/subscribe-through-portal/select-all-services.png" alt-text="Screenshot that shows the Azure portal with All Services selected on the left menu.":::
1. Search for **Event Grid Subscriptions** and select it from the available options.
- ![Screen capture shows Search in the Azure portal with Event Grid Subscriptions selected.](./media/subscribe-through-portal/search.png)
-
+ :::image type="content" source="./media/subscribe-through-portal/search.png" alt-text="Screenshot that shows Event Grid Subscription in the search box in the Azure portal.":::
1. Select **+ Event Subscription**.
- ![Add subscription](./media/subscribe-through-portal/add-subscription.png)
-
-1. Select the type of subscription you want to create. For example, to subscribe to events for your Azure subscription, select **Azure Subscriptions** and the target subscription.
-
- ![Select Azure subscription](./media/subscribe-through-portal/azure-subscription.png)
-
-1. To subscribe to all event types for this event source, keep the **Subscribe to all event types** option checked. Otherwise, select the event types for this subscription.
+   :::image type="content" source="./media/subscribe-through-portal/add-subscription.png" alt-text="Screenshot that shows the selection of the Add Event Subscription menu on the Event Grid Subscriptions page.":::
+1. On the **Create Event Subscription** page, follow these steps:
+ 1. Enter a name for the event subscription.
+ 1. Select the type of event source (**topic type**) on which you want to create a subscription. For example, to subscribe to events for your Azure storage account, select **Storage Accounts**.
+
+ :::image type="content" source="./media/subscribe-through-portal/azure-subscription.png" alt-text="Screenshot that shows the Create Event Subscription page.":::
+ 1. Select the Azure subscription that contains the storage account.
+ 1. Select the resource group that has the storage account.
+ 1. Then, select the storage account.
- ![Select event types](./media/subscribe-through-portal/select-event-types.png)
+ :::image type="content" source="./media/subscribe-through-portal/create-event-subscription.png" alt-text="Screenshot that shows the Create Event Subscription page with the storage account selected.":::
+1. Select the event types that you want to receive on the event subscription.
+ :::image type="content" source="./media/subscribe-through-portal/select-event-types.png" alt-text="Screenshot that shows the selection of event types.":::
1. Provide more details about the event subscription, such as the endpoint for handling events and a subscription name.
- ![Screenshot that shows the "Endpoint Details" and "Event Subscription Details" sections with a subscription name value entered.](./media/subscribe-through-portal/provide-subscription-details.png)
+ :::image type="content" source="./media/subscribe-through-portal/select-end-point.png" alt-text="Screenshot that shows the selection of an endpoint.":::
> [!NOTE]
> - For a list of supported event handlers, see [Event handlers](event-handlers.md).
> - If you enable managed identity for a topic or domain, you'll need to add the managed identity to the appropriate role-based access control (RBAC) role on the destination for the messages to be delivered successfully. For more information, see [Supported destinations and Azure roles](add-identity-roles.md#supported-destinations-and-azure-roles).

1. To enable dead lettering and customize retry policies, select **Additional Features**.
- ![Select additional features](./media/subscribe-through-portal/select-additional-features.png)
-
-1. Select a container to use for storing events that aren't delivered, and set how retries are sent.
-
- ![Enable dead lettering and retry](./media/subscribe-through-portal/set-deadletter-retry.png)
-
+ :::image type="content" source="./media/subscribe-through-portal/select-additional-features.png" alt-text="Screenshot that shows the Additional features tab of the Create Event Subscription page.":::
1. When done, select **Create**.

## Create subscription on resource

Some event sources support creating an event subscription through the portal interface for that resource. Select the event source, and look for **Events** in the left pane.
-![Provide subscription details](./media/subscribe-through-portal/resource-events.png)
The portal presents you with options for creating an event subscription that is relevant to that source.
firewall Enable Top Ten And Flow Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/enable-top-ten-and-flow-trace.md
Azure Firewall has two new diagnostics logs you can use to help monitor your firewall.
The Top flows log (known in the industry as Fat Flows) shows the top connections that are contributing to the highest throughput through the firewall.
-It's suggested to activate Top flows logs only when troubleshooting a specific issue to avoid excessive CPU usage of Azure Firewall.
-
+> [!TIP]
+> Activate Top flows logs only when troubleshooting a specific issue to avoid excessive CPU usage of Azure Firewall.
+>
### Prerequisites
There are a few ways to verify the update was successful, but you can navigate t
Currently, the firewall logs show traffic through the firewall in the first attempt of a TCP connection, known as the *syn* packet. However, this doesn't show the full journey of the packet in the TCP handshake. As a result, it's difficult to troubleshoot if a packet is dropped, or asymmetric routing has occurred.
-To avoid excessive disk usage caused by Flow trace logs in Azure Firewall with many short-lived connections, it's recommended to activate the logs only when troubleshooting a specific issue for diagnostic purposes.
+> [!TIP]
+> To avoid excessive disk usage caused by Flow trace logs in Azure Firewall with many short-lived connections, activate the logs only when troubleshooting a specific issue for diagnostic purposes.
The following additional properties can be added:

- SYN-ACK
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md
Title: Determine causes of non-compliance description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance with the policy. Previously updated : 10/26/2023 Last updated : 11/30/2023
in the policy definition:
## Component details for Resource Provider modes
-For assignments with a
-[Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes), select the
-_Non-compliant_ resource to open a deeper view. The **Component Compliance** tab shows more information specific to the Resource Provider mode on the assigned policy with the
-_Non-compliant_ **Component** and **Component ID**.
+For assignments with a Resource Provider mode, select the _Non-compliant_ resource to view its component compliance records. The **Component Compliance** tab shows more information specific to the [Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes) like **Component Name**, **Component ID**, and **Type**.
## Compliance details for guest configuration
detection is triggered when the Azure Resource Manager properties are added, removed, or modified.
:::image type="content" source="../media/determine-non-compliance/change-history-visual-diff.png" alt-text="Screenshot of the Change History Visual Diff of the before and after state of properties on the Change history page." :::
-The _visual diff_ aides in identifying changes to a resource. The changes detected might not be
+ The _visual diff_ aids in identifying changes to a resource. The changes detected might not be
related to the current compliance state of the resource. Change history data is provided by [Azure Resource Graph](../../resource-graph/overview.md). To
hdinsight-aks Azure Service Bus Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-service-bus-demo.md
+
+ Title: Use Apache Flink on HDInsight on AKS with Azure Service Bus
+description: Use Apache Flink DataStream API on HDInsight on AKS with Azure Service Bus
++ Last updated : 11/27/2023+
+# Use Apache Flink on HDInsight on AKS with Azure Service Bus
++
+This article provides an overview and demonstration of the Apache Flink DataStream API on HDInsight on AKS for Azure Service Bus. The demonstration Flink job reads messages from an [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) topic and writes them to [Azure Data Lake Storage Gen2](./assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md) (ADLS Gen2).
+
+## Prerequisites
+
+- [Flink Cluster 1.16.0 on HDInsight on AKS](./flink-create-cluster-portal.md)
+- For this demonstration, we use a Windows VM as the Maven project development environment in the same virtual network as HDInsight on AKS.
+- During the [creation](./flink-create-cluster-portal.md) of the Flink cluster, ensure that SSH access is selected. This enables you to access the cluster using Secure Shell (SSH).
+- Set up an [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) instance.
+- To proceed with the integration, obtain the necessary connection string, topic name, and subscription name for your [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview).
+
+## Develop Apache Flink job
+
+This job reads messages from an [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) topic and writes them to [Data Lake Storage Gen2](./assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md) (ADLS Gen2).
+
+### Submit the JAR into Azure HDInsight on AKS with Flink
+
+To initiate a job, transfer the JAR file into the webssh pod and submit the job using the following command:
+
+```
+bin/flink run -c contoso.example.ServiceBusToAdlsGen2 -j AzureServiceBusDemo-1.0-SNAPSHOT.jar
+Job has been submitted with JobID fc5793361a914821c968b5746a804570
+```
+
+### Confirm job submission on Flink UI
+
+After submitting the job, access the Flink Dashboard UI and click on the running job for further details.
++
+### Sending message from Azure Service Bus Explorer
+
+Navigate to the Service Bus Explorer on the Azure portal and send messages to the corresponding Service Bus.
+++
+### Check job run details on Apache Flink UI
+
+Review the running details of the job on the Flink UI for insights.
++
+### Confirm output file in ADLS gen2 on Portal
+
+After successfully publishing messages, verify the output file generated by the Flink job in ADLS Gen2 storage on the Azure portal.
+++
+## Source Code
+
+In the POM.xml file, we define the project's dependencies using Maven, ensuring a smooth and organized management of libraries and plugins. Here's a snippet illustrating how to include dependencies for a project, such as one involving Apache Flink:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+
+ <groupId>contoso.example</groupId>
+ <artifactId>AzureServiceBusDemo</artifactId>
+ <version>1.0-SNAPSHOT</version>
+
+ <properties>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ <flink.version>1.16.0</flink.version>
+ <java.version>1.8</java.version>
+ </properties>
+ <dependencies>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-streaming-java</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-java</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-clients</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-files -->
+ <dependency>
+ <groupId>org.apache.flink</groupId>
+ <artifactId>flink-connector-files</artifactId>
+ <version>${flink.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-core</artifactId>
+ <version>1.26.0</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.4.1</version>
+ </dependency>
+<!-- <dependency>-->
+<!-- <groupId>com.microsoft.azure</groupId>-->
+<!-- <artifactId>azure-servicebus</artifactId>-->
+<!-- <version>3.6.0</version>-->
+<!-- </dependency>-->
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.5.0</version>
+ </dependency>
+ </dependencies>
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-assembly-plugin</artifactId>
+ <version>3.0.0</version>
+ <configuration>
+ <appendAssemblyId>false</appendAssemblyId>
+ <descriptorRefs>
+ <descriptorRef>jar-with-dependencies</descriptorRef>
+ </descriptorRefs>
+ </configuration>
+ <executions>
+ <execution>
+ <id>make-assembly</id>
+ <phase>package</phase>
+ <goals>
+ <goal>single</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ </plugins>
+ </build>
+</project>
+```
+
+**main function: ServiceBusToAdlsGen2.java**
+
+```java
+package contoso.example;
+
+import org.apache.flink.api.common.serialization.SimpleStringEncoder;
+import org.apache.flink.configuration.MemorySize;
+import org.apache.flink.connector.file.sink.FileSink;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;
+
+import java.time.Duration;
+
+public class ServiceBusToAdlsGen2 {
+
+ public static void main(String[] args) throws Exception {
+
+ // Set up the execution environment
+ final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+ final String connectionString = "Endpoint=sb://contososervicebus.servicebus.windows.net/;SharedAccessKeyName=policy1;SharedAccessKey=<key>";
+ final String topicName = "topic1";
+ final String subName = "subscription1";
+
+ // Create a source function for Azure Service Bus
+ SessionBasedServiceBusSource sourceFunction = new SessionBasedServiceBusSource(connectionString, topicName, subName);
+
+ // Create a data stream using the source function
+ DataStream<String> stream = env.addSource(sourceFunction);
+
+ // Process the data (this is where you'd put your processing logic)
+ DataStream<String> processedStream = stream.map(value -> processValue(value));
+ processedStream.print();
+
+ // 3. sink to gen2
+ String outputPath = "abfs://<container>@<account>.dfs.core.windows.net/data/ServiceBus/Topic1";
+// String outputPath = "src/ServiceBugOutput/";
+
+ final FileSink<String> sink = FileSink
+ .forRowFormat(new Path(outputPath), new SimpleStringEncoder<String>("UTF-8"))
+ .withRollingPolicy(
+ DefaultRollingPolicy.builder()
+ .withRolloverInterval(Duration.ofMinutes(2))
+ .withInactivityInterval(Duration.ofMinutes(3))
+ .withMaxPartSize(MemorySize.
+ ofMebiBytes(5))
+ .build())
+ .build();
+ // Add the sink function to the processed stream
+ processedStream.sinkTo(sink);
+
+ // Execute the job
+ env.execute("ServiceBusToDataLakeJob");
+ }
+
+ private static String processValue(String value) {
+ // Implement your processing logic here
+ return value;
+ }
+}
+```
+
+**input source: SessionBasedServiceBusSource.java**
+
+```java
+package contoso.example;
+
+import com.azure.messaging.servicebus.ServiceBusClientBuilder;
+import com.azure.messaging.servicebus.ServiceBusSessionReceiverAsyncClient;
+import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+
+public class SessionBasedServiceBusSource extends RichParallelSourceFunction<String> {
+
+ private final String connectionString;
+ private final String topicName;
+ private final String subscriptionName;
+ private volatile boolean isRunning = true;
+ private ServiceBusSessionReceiverAsyncClient sessionReceiver;
+
+ public SessionBasedServiceBusSource(String connectionString, String topicName, String subscriptionName) {
+ this.connectionString = connectionString;
+ this.topicName = topicName;
+ this.subscriptionName = subscriptionName;
+ }
+
+ @Override
+ public void run(SourceFunction.SourceContext<String> ctx) throws Exception {
+        // Assign the receiver to the instance field (not a shadowing local variable)
+        // so that cancel() can close it later.
+        this.sessionReceiver = new ServiceBusClientBuilder()
+ .connectionString(connectionString)
+ .sessionReceiver()
+ .topicName(topicName)
+ .subscriptionName(subscriptionName)
+ .buildAsyncClient();
+
+ sessionReceiver.acceptNextSession()
+ .flatMapMany(session -> session.receiveMessages())
+ .doOnNext(message -> {
+ try {
+ ctx.collect(message.getBody().toString());
+ } catch (Exception e) {
+ System.out.printf("An error occurred: %s.", e.getMessage());
+ }
+ })
+ .doOnError(error -> System.out.printf("An error occurred: %s.", error.getMessage()))
+ .blockLast();
+ }
+
+ @Override
+ public void cancel() {
+ isRunning = false;
+ if (sessionReceiver != null) {
+ sessionReceiver.close();
+ }
+ }
+}
+```
+
+### Key components and functionality of the provided code
+
+#### Main code: ServiceBusToAdlsGen2.java
+
+This Java class, `ServiceBusToAdlsGen2`, orchestrates the entire Flink job for the DataStream API Azure Service Bus demo and shows the integration between Apache Flink and Azure Service Bus.
+
+1. **Setting up the execution environment**
+
+ The `StreamExecutionEnvironment.getExecutionEnvironment()` method is used to set up the execution environment for the Flink job.
+
+1. **Creating a source function for Azure Service Bus**
+
+ A `SessionBasedServiceBusSource` object is created with the connection string, topic name, and subscription name for your Azure Service Bus. This object is a source function that can be used to create a data stream.
+
+1. **Creating a data stream**
+
+ The `env.addSource(sourceFunction)` method is used to create a data stream from the source function. Each message from the Azure Service Bus topic becomes an element in this stream.
+
+1. **Processing the data**
+
+   The `stream.map(value -> processValue(value))` method is used to process each element in the stream. In this case, the `processValue` method is applied to each element. This is where you'd put your processing logic.
+
+1. **Creating a sink for Azure Data Lake Storage Gen2**
+
+   A `FileSink` object is created with the output path and a `SimpleStringEncoder`. The `withRollingPolicy` method is used to set a rolling policy for the sink.
+
+1. **Adding the sink function to the processed stream**
+
+ The `processedStream.sinkTo(sink)` method is used to add the sink function to the processed stream. Each processed element is written to a file in Azure Data Lake Storage Gen2.
+
+1. **Executing the job**
+
+   Finally, the `env.execute("ServiceBusToDataLakeJob")` method is used to execute the Flink job. It starts reading messages from the Azure Service Bus topic, processes them, and writes them to Azure Data Lake Storage Gen2.
+
+#### Flink source function: SessionBasedServiceBusSource.java
+
+This Flink source function, encapsulated within the `SessionBasedServiceBusSource.java` class, establishes a connection with Azure Service Bus, retrieves messages, and integrates with Apache Flink for parallel data processing. The following are key aspects of this source function:
+
+1. **Class Definition**
+
+ The `SessionBasedServiceBusSource` class extends `RichParallelSourceFunction<String>`, which is a base class for implementing a parallel data source in Flink.
+
+1. **Instance Variables**
+
+   The `connectionString`, `topicName`, and `subscriptionName` variables hold the connection string, topic name, and subscription name for your Azure Service Bus. The `isRunning` flag is used to control the execution of the source function. The `sessionReceiver` is an instance of `ServiceBusSessionReceiverAsyncClient`, which is used to receive messages from the Service Bus.
+
+1. **Constructor**
+
+ The constructor initializes the instance variables with the provided values.
+
+1. **run() Method**
+
+   This method is where the source function starts to emit data to Flink. It creates a `ServiceBusSessionReceiverAsyncClient`, accepts the next available session, and starts receiving messages from that session. Each message's body is then collected into the Flink source context.
+
+1. **cancel() Method**
+
+   This method is called when the source function needs to be stopped. It sets the `isRunning` flag to false and closes the `sessionReceiver`.
+
+## Reference
+
+- To learn more about Azure Service Bus, see [What is Azure Service Bus?](/azure/service-bus-messaging/service-bus-messaging-overview).
+- For guidance on creating topics, consult the [Service Bus Explorer](/azure/service-bus-messaging/explorer).
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
healthcare-apis Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/configure-metrics.md
+
+ Title: Monitor performance metrics for the MedTech service in Azure Health Data Services
+description: Learn how to monitor the performance metrics of the MedTech service in Azure Health Data Services. Find out how to configure, display, and save the metrics in an Azure portal dashboard.
+++++ Last updated : 11/21/2023+++
+# Monitor performance metrics for the MedTech service
+Gain insights into the health, availability, latency, traffic, and errors of your organization's MedTech services by monitoring MedTech service metrics in the Azure portal. To help you identify patterns or trends, pin tiles for the metrics to an Azure portal dashboard for easy access and visualization.
+
+## Configure service metrics
+
+1. In the Azure portal, go to the Azure Health Data Services workspace. Go to **Services** > **MedTech service**.
+
+ :::image type="content" source="media\configure-metrics\select-medtech-service.png" alt-text="Screenshot showing how to open the MedTech service in a workspace." lightbox="media\configure-metrics\select-medtech-service.png":::
+
+2. Select the MedTech service that you want to monitor metrics for. In this example, the MedTech service is named **mt-azuredocsdemo**.
+
+ :::image type="content" source="media\configure-metrics\select-medtech-service2.png" alt-text="Screenshot showing the MedTech service to display metrics for." lightbox="media\configure-metrics\select-medtech-service2.png":::
+
+3. In the left pane, select **Monitoring** > **Metrics**.
+
+ :::image type="content" source="media\configure-metrics\monitor-metrics.png" alt-text="Screenshot showing the selection of the Metrics menu item in the MedTech service." lightbox="media\configure-metrics\monitor-metrics.png":::
+
+4. Choose **Add metric**.
+
+5. Select a metric from the drop-down list.
+
+The service performance metrics you can monitor are:
+
+Metric category|Metric name|Metric description|
+|--|--|--|
+|Availability|IotConnector Health Status|The overall health of the MedTech service.|
+|Errors|Total Error Count|The total number of errors.|
+|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of FHIR resources saved|The total number of FHIR&reg; resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
+|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.|
+|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
+|Traffic|Number of Normalized Messages|The number of normalized messages.|
+
+The screenshot shows an example of a line chart that monitors the **Number of Incoming Messages**.
++
+## Save metrics as a tile on an Azure dashboard
+
+To keep your MedTech service metrics settings and view the metrics again later, pin them as a tile on an Azure dashboard. For steps, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md).
+
+To learn more about advanced metrics display and sharing options, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
+
+## Next steps
+
+[Enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
+
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
- Title: How to configure the MedTech service metrics - Azure Health Data Services
-description: Learn how to configure the MedTech service metrics.
----- Previously updated : 06/19/2023---
-# How to configure the MedTech service metrics
-
-In this article, learn how to configure the MedTech service metrics in the Azure portal. Also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
-
-The MedTech service metrics can be used to help determine the health and performance of your MedTech service and can be useful with troubleshooting and seeing patterns and/or trends with your MedTech service.
-
-## Metric types for the MedTech service
-
-This table shows the available MedTech service metrics and the information that the metrics are capturing and displaying within the Azure portal:
-
-Metric category|Metric name|Metric description|
-|--|--|--|
-|Availability|IotConnector Health Status|The overall health of the MedTech service.|
-|Errors|Total Error Count|The total number of errors.|
-|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](overview-of-device-data-processing-stages.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
-|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](overview-of-device-data-processing-stages.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of FHIR&reg; resources [updated or persisted](overview-of-device-data-processing-stages.md#persist) by the MedTech service.|
-|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](overview-of-device-data-processing-stages.md#ingest) (for example, the device events) from the configured source event hub.|
-|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](overview-of-device-data-processing-stages.md#transform) of the MedTech service.|
-|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
-|Traffic|Number of Normalized Messages|The number of normalized messages.|
-
-## Configure the MedTech service metrics
-
-1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
-
- :::image type="content" source="media\how-to-configure-metrics\workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service within the workspace." lightbox="media\how-to-configure-metrics\workspace-displayed-with-connectors-button.png":::
-
-2. Select the MedTech service that you would like to display metrics for. For this example, we'll select a MedTech service named **mt-azuredocsdemo**. You'll be selecting a MedTech service within your own Azure Health Data Services workspace.
-
- :::image type="content" source="media\how-to-configure-metrics\select-medtech-service.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\how-to-configure-metrics\select-medtech-service.png":::
-
-3. Select **Metrics** within the MedTech service page.
-
- :::image type="content" source="media\how-to-configure-metrics\select-metrics-under-monitoring.png" alt-text="Screenshot of select the Metrics option within your MedTech service." lightbox="media\how-to-configure-metrics\select-metrics-under-monitoring.png":::
-
-4. The MedTech service metrics page will open allowing you to use the drop-down menus to view and select the metrics that are available for the MedTech service.
-
- :::image type="content" source="media\how-to-configure-metrics\select-metrics-to-display.png" alt-text="Screenshot the MedTech service metrics page with drop-down menus." lightbox="media\how-to-configure-metrics\select-metrics-to-display.png":::
-
-5. Select the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
-
- * **Scope** = Your MedTech service name (**Default**)
- * **Metric Namespace** = Standard metrics (**Default**)
- * **Metric** = The MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
- * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
-
-6. You can now see your MedTech service metrics for **Number of Incoming Messages** displayed on the MedTech service metrics page.
-
- :::image type="content" source="media\how-to-configure-metrics\select-metrics-being-displayed.png" alt-text="Screenshot of select metrics to display." lightbox="media\how-to-configure-metrics\select-metrics-being-displayed.png":::
-
-7. You can add more metrics for your MedTech service by selecting **Add metric**.
-
- :::image type="content" source="media\how-to-configure-metrics\select-add-metric.png" alt-text="Screenshot of select Add metric to add more MedTech service metrics." lightbox="media\how-to-configure-metrics\select-add-metric.png":::
-
-8. Then select the metrics that you would like to add to your MedTech service.
-
- :::image type="content" source="media\how-to-configure-metrics\select-more-metrics.png" alt-text="Screenshot of select more metrics to add to your MedTech service." lightbox="media\how-to-configure-metrics\select-more-metrics.png":::
-
- > [!IMPORTANT]
- > If you leave the MedTech service metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure portal dashboard as a tile.
- >
- > To learn how to create an Azure portal dashboard and pin tiles, see [How to create an Azure portal dashboard and pin tiles](how-to-configure-metrics.md#how-to-create-an-azure-portal-dashboard-and-pin-tiles)
-
- > [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
-
-## How to create an Azure portal dashboard and pin tiles
-
-To learn how to create an Azure portal dashboard and pin tiles, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md)
-
-## Next steps
-
-[How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-
healthcare-apis How To Use Monitoring And Health Checks Tabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-and-health-checks-tabs.md
In this article, learn how to use the MedTech service monitoring and health chec
> [!IMPORTANT] > If you leave the MedTech service monitoring tab, any customized settings you have made to the monitoring settings are lost and will have to be recreated. If you would like to save your customizations for future viewing, you can pin them to an Azure portal dashboard as a tile. >
- > To learn how to customize and save metrics settings to an Azure portal dashboard and tile, see [How to configure the MedTech service metrics](how-to-configure-metrics.md).
+ > To learn how to customize and save metrics settings to an Azure portal dashboard and tile, see [How to configure the MedTech service metrics](configure-metrics.md).
5. **Optional** - Select the **pin icon** to save the metrics tile to an Azure portal dashboard of your choosing.
Metric category|Metric name|Metric description|
## Next steps
-[How to configure the MedTech service metrics](how-to-configure-metrics.md)
+[How to configure the MedTech service metrics](configure-metrics.md)
[How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
healthcare-apis Troubleshoot Errors Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md
This article provides troubleshooting steps and fixes for MedTech service deploy
> > [How to use the MedTech service monitoring and health checks tabs](how-to-use-monitoring-and-health-checks-tabs.md) >
-> [How to configure the MedTech service metrics](how-to-configure-metrics.md)
+> [How to configure the MedTech service metrics](configure-metrics.md)
> > [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success
## Access and download load test results >[!IMPORTANT]
->For load tests with more than 45 engine instances or a greater than 3-hour test run duration, the results file will not be available for download. You can configure a [JMeter Backend Listener](#export-test-results-using-jmeter-backend-listeners) to export the results to a data store of your choice.
+>For load tests with more than 45 engine instances or a greater than 3-hour test run duration, the results file is not available for download. You can [configure a JMeter Backend Listener to export the results](#export-test-results-using-jmeter-backend-listeners) to a data store of your choice or [copy the results from a storage account container](#copy-test-artifacts-from-a-storage-account-container).
# [Azure portal](#tab/portal) To download the test results for a test run in the Azure portal:
A sample JMeter script that uses a [backend listener for Azure Application Insig
The following code snippet shows an example of a backend listener, for Azure Application Insights, in a JMX file: :::code language="xml" source="~/azure-load-testing-samples/jmeter-backend-listeners/sample-backend-listener-appinsights.jmx" range="85-126" :::
+## Copy test artifacts from a storage account container
+
+>[!IMPORTANT]
+>Copying test artifacts from a storage account container is only enabled for load tests with more than 45 engine instances or with a test run duration greater than three hours.
+
+To copy the test results and log files for a test run from a storage account, in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. On the left pane, select **Tests** to view a list of tests, and then select your test.
+
+ :::image type="content" source="media/how-to-export-test-results/test-list.png" alt-text="Screenshot that shows the list of tests for an Azure Load Testing resource.":::
+1. From the list of test runs, select your test run.
+
+ :::image type="content" source="media/how-to-export-test-results/test-runs-list.png" alt-text="Screenshot that shows the list of test runs for a test in an Azure Load Testing resource.":::
+
+ >[!TIP]
+ > To limit the number of tests to display in the list, you can use the search box and the **Time range** filter.
+
+1. On the **Test run details** pane, select **Copy artifacts**.
+
+ :::image type="content" source="media/how-to-export-test-results/test-run-page-copy-artifacts.png" alt-text="Screenshot that shows how to copy the test artifacts from the 'Test run details' pane.":::
+
+ > [!NOTE]
+ > A load test run needs to be in the *Done*, *Stopped*, or *Failed* status for the results file to be available for download.
+
+1. Copy the SAS URL of the storage account container.
+
+ You can use the SAS URL in the [Azure Storage Explorer](/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows#shared-access-signature-sas-url) or [AzCopy](/azure/storage/common/storage-use-azcopy-blobs-copy#copy-a-container) to copy the results CSV files and the log files for the test run to your storage account.
+
+ The SAS URL is valid for 60 minutes from the time it's generated. If the URL expires, select **Copy artifacts** to generate a new SAS URL.
+ ## Next steps - Learn more about [Diagnosing failing load tests](./how-to-diagnose-failing-load-test.md).
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-a-managed-identity.md
# Use managed identities for Azure Load Testing
-This article shows how to create a managed identity for Azure Load Testing. You can use a managed identity to securely access other Azure resources. For example, you use a managed identity to read secrets or certificates from Azure Key Vault in your load test.
+This article shows how to create a managed identity for Azure Load Testing. You can use a managed identity to securely read secrets or certificates from Azure Key Vault in your load test.
-A managed identity from Microsoft Entra ID allows your load testing resource to easily access other Microsoft Entra protected resources, such as Azure Key Vault. The identity is managed by the Azure platform and doesn't require you to manage or rotate any secrets. For more information about managed identities in Microsoft Entra ID, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).
+A managed identity from Microsoft Entra ID allows your load testing resource to easily access Microsoft Entra protected Azure Key Vault. The identity is managed by the Azure platform and doesn't require you to manage or rotate any secrets. For more information about managed identities in Microsoft Entra ID, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).
Azure Load Testing supports two types of identities:
load-testing Resource Jmeter Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-jmeter-support.md
The following table lists the Apache JMeter features and their support in Azure
| Scripting | - BeanShell<br/>- JSR223 script | | | Configuration elements | All configuration elements are supported. | Example: [Read data from a CSV file](./how-to-read-csv-data.md) | | JMeter properties | Azure Load Testing supports uploading a single user properties file per load test to override JMeter configuration settings or add custom properties.<br/>System properties files aren't supported. | [Configure JMeter user properties](./how-to-configure-user-properties.md) |
-| Plugins | Azure Load Testing lets you use plugins from https://jmeter-plugins.org, or upload a Java archive (JAR) file with your own plugin code.<br/>The [Web Driver sampler](https://jmeter-plugins.org/wiki/WebDriverSampler/) and any plugins that use backend listeners aren't supported. | [Customize a load test with plugins](./how-to-use-jmeter-plugins.md) |
+| Plugins | Azure Load Testing lets you use plugins from https://jmeter-plugins.org, or upload a Java archive (JAR) file with your own plugin code.| [Customize a load test with plugins](./how-to-use-jmeter-plugins.md) |
+| Web Driver sampler | Due to the resource-intensive nature of WebDriver tests, you can run tests with a load of up to four virtual users associated with the [Web Driver sampler](https://jmeter-plugins.org/wiki/WebDriverSampler/). Tests with a higher load associated with the Web Driver sampler can result in errors. In such a case, reduce the load and try again.<br/>You can have a higher load associated with other samplers, like the HTTP sampler, in the same test. | |
| Listeners | Azure Load Testing ignores all [Results Collectors](https://jmeter.apache.org/api/org/apache/jmeter/reporters/ResultCollector.html), which includes visualizers such as the [results tree](https://jmeter.apache.org/usermanual/component_reference.html#View_Results_Tree) or [graph results](https://jmeter.apache.org/usermanual/component_reference.html#Graph_Results). | | | Dashboard report | The Azure Load Testing dashboard shows the client metrics, and optionally the server-side metrics. <br/>You can export the load test results to use them in a reporting tool or [generate the JMeter dashboard](https://jmeter.apache.org/usermanual/generating-dashboard.html#report) on your local machine.| [Export test results](./how-to-export-test-results.md) | | Test fragments| Not supported. | |
logic-apps Export From Ise To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-ise-to-standard-logic-app.md
This article provides information about the export process and shows how to expo
- By default, if an Azure connector has a built-in connector version, the export tool automatically converts the Azure connector to the built-in connector. No option exists to opt out from this behavior.
+- If the connection ID is incorrectly formatted, an error is thrown. Before you export your workflow, make sure that the connection IDs for your connectors match the following format:
+
+ `subscriptionId/{subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Logic/integrationServiceEnvironments/{integration-service-environment-name}/managedApis/{managed-api-name}`
+ ## Exportable operation types | Operation | JSON type |
machine-learning How To Troubleshoot Kubernetes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-compute.md
# Troubleshoot Kubernetes Compute
-In this article, you'll learn how to troubleshoot common workload (including training jobs and endpoints) errors on the [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md).
+In this article, you learn how to troubleshoot common workload (including training jobs and endpoints) errors on the [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md).
## Inference guide
The common Kubernetes endpoint errors on Kubernetes compute are categorized into
### Kubernetes compute errors
-Below is a list of error types in **compute scope** that you might encounter when using Kubernetes compute to create online endpoints and online deployments for real-time model inference, which you can trouble shoot by following the guidelines:
+The following are common error types in **compute scope** that you might encounter when using Kubernetes compute to create online endpoints and online deployments for real-time model inference. You can troubleshoot them by following these guidelines:
* [ERROR: GenericComputeError](#error-genericcomputeerror)
Below is a list of error types in **compute scope** that you might encounter whe
#### ERROR: GenericComputeError
-The error message is as below:
+The error message is as follows:
```bash Failed to get compute information.
This error should occur when system failed to get the compute information from t
* Check the Kubernetes cluster health. * You can view the cluster health check report for any issues, for example, if the cluster is not reachable. * You can go to your workspace portal to check the compute status.
-* Check if the instance types is information is correct. You can check the supported instance types in the [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md) documentation.
+* Check if the instance type information is correct. You can check the supported instance types in the [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md) documentation.
* Try to detach and reattach the compute to the workspace if applicable. > [!NOTE]
The error message is as follows:
```bash The compute information is invalid. ```
-There is a compute target validation process when deploying models to your Kubernetes cluster. This error should occur when the compute information is invalid when validating, for example the compute target is not found, or the configuration of Azure Machine Learning extension has been updated in your Kubernetes cluster.
+There is a compute target validation process when deploying models to your Kubernetes cluster. This error should occur when the compute information is invalid. For example, the compute target is not found, or the configuration of Azure Machine Learning extension has been updated in your Kubernetes cluster.
You can check the following items to troubleshoot the issue: * Check whether the compute target you used is correct and existing in your workspace.
For AKS clusters:
* Check if the AKS cluster is shut down. * If the cluster isn't running, you need to start the cluster first. * Check if the AKS cluster has enabled selected network by using authorized IP ranges.
- * If the AKS cluster has enabled authorized IP ranges, please make sure all the **Azure Machine Learning control plane IP ranges** have been enabled for the AKS cluster. More information you can see this [document](how-to-deploy-kubernetes-extension.md#limitations).
+ * If the AKS cluster has enabled authorized IP ranges, make sure all the **Azure Machine Learning control plane IP ranges** have been enabled for the AKS cluster. For more information, see this [document](how-to-deploy-kubernetes-extension.md#limitations).
For an AKS cluster or an Azure Arc enabled Kubernetes cluster:
You can check the following items to troubleshoot the issue:
#### ERROR: RefreshExtensionIdentityNotSet
-This error occurs when the extension is installed but the extension identity is not correctly assigned. You can try to re-install the extension to fix it.
+This error occurs when the extension is installed but the extension identity is not correctly assigned. You can try to reinstall the extension to fix it.
> Note that this error applies only to managed clusters.
This error occurs when the extension is installed but the extension identity is
### How to check sslCertPemFile and sslKeyPemFile is correct?
-Use the commands below to run a baseline check for your cert and key. This is to allow for any known errors to be surfaced. Expect the second command to return "RSA key ok" without prompting you for password.
+To surface any known errors, you can use the following commands to run a baseline check for your cert and key. Expect the second command to return "RSA key ok" without prompting you for a password.
```bash
openssl x509 -in cert.pem -noout -text
openssl rsa -in key.pem -noout -check
```
-Run the commands below to verify whether sslCertPemFile and sslKeyPemFile are matched:
+Run the following commands to verify that sslCertPemFile and sslKeyPemFile match:
```bash
openssl x509 -in cert.pem -noout -modulus | md5sum
openssl rsa -in key.pem -noout -modulus | md5sum
```
+For sslCertPemFile, it is the public certificate. It should include the certificate chain, in the sequence of the server certificate, the intermediate CA certificate, and the root CA certificate:
+* The server certificate: the server presents to the client during the TLS handshake. It contains the server's public key, domain name, and other information. The server certificate is signed by an intermediate certificate authority (CA) that vouches for the server's identity.
+* The intermediate CA certificate: the intermediate CA presents to the client to prove its authority to sign the server certificate. It contains the intermediate CA's public key, name, and other information. The intermediate CA certificate is signed by a root CA that vouches for the intermediate CA's identity.
+* The root CA certificate: the root CA presents to the client to prove its authority to sign the intermediate CA certificate. It contains the root CA's public key, name, and other information. The root CA certificate is self-signed and trusted by the client.
+ ## Training guide
-When the training job is running, you can check the job status in the workspace portal. When you encounter some abnormal job status, such as the job retried multiple times, or the job has been stuck in initializing state, or even the job has eventually failed, you can follow the guide below to troubleshoot the issue.
+When the training job is running, you can check the job status in the workspace portal. If you encounter an abnormal job status, such as a job that retried multiple times, is stuck in an initializing state, or has eventually failed, you can follow this guide to troubleshoot the issue.
### Job retry debugging
-If the training job pod running in the cluster was terminated due to the node running to node OOM (out of memory), the job will be **automatically retried** to another available node.
+If the training job pod running in the cluster was terminated because the node ran out of memory (OOM), the job is **automatically retried** on another available node.
To further debug the root cause of the job try, you can go to the workspace portal to check the job retry log.
-* Each retry log will be recorded in a new log folder with the format of "retry-<retry number\>"(such as: retry-001).
+* Each retry log is recorded in a new log folder with the format "retry-<retry number\>" (for example, retry-001).
-Then you can get the retry job-node mapping information as mentioned above, to figure out which node the retry-job has been running on.
+Then you can get the retry job-node mapping information to figure out which node the retried job ran on.
:::image type="content" source="media/how-to-troubleshoot-kubernetes-compute/job-retry-log.png" alt-text="Screenshot of adding a new extension to the Azure Arc-enabled Kubernetes cluster from the Azure portal."::: You can get job-node mapping information from the **amlarc_cr_bootstrap.log** under system_logs folder.
-The host name of the node which the job pod is running on will be indicated in this log, for example:
+The host name of the node that the job pod is running on is indicated in this log. For example:
```bash ++ echo 'Run on node: ask-agentpool-17631869-vmss0000"
If the error message is:
Azure Machine Learning Kubernetes job failed. E45004:"Training feature is not enabled, please enable it when install the extension." ```
-Please check whether you have `enableTraining=True` set when doing the Azure Machine Learning extension installation. More details could be found at [Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster](how-to-deploy-kubernetes-extension.md)
+Check whether you have `enableTraining=True` set when installing the Azure Machine Learning extension. More details can be found at [Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster](how-to-deploy-kubernetes-extension.md).
### Job failed. 400
If you need to access Azure Container Registry (ACR) for Docker image, and to ac
To access Azure Container Registry (ACR) from a Kubernetes compute cluster for Docker images, or access a storage account for training data, you need to attach the Kubernetes compute with a system-assigned or user-assigned managed identity enabled.
-In the above training scenario, this **computing identity** is necessary for Kubernetes compute to be used as a credential to communicate between the ARM resource bound to the workspace and the Kubernetes computing cluster. So without this identity, the training job will fail and report missing account key or sas token. Take accessing storage account for example, if you don't specify a managed identity to your Kubernetes compute, the job fails with the following error message:
+In the above training scenario, this **computing identity** is necessary for Kubernetes compute to be used as a credential to communicate between the ARM resource bound to the workspace and the Kubernetes computing cluster. So without this identity, the training job fails and reports a missing account key or SAS token. For example, when accessing a storage account, if you don't specify a managed identity for your Kubernetes compute, the job fails with the following error message:
```bash Unable to mount data store workspaceblobstore. Give either an account key or SAS token ```
-This is because machine learning workspace default storage account without any credentials is not accessible for training jobs in Kubernetes compute.
+The cause is that the machine learning workspace's default storage account isn't accessible to training jobs in Kubernetes compute without credentials.
To mitigate this issue, you can assign Managed Identity to the compute in compute attach step, or you can assign Managed Identity to the compute after it has been attached. More details could be found at [Assign Managed Identity to the compute target](how-to-attach-kubernetes-to-workspace.md#assign-managed-identity-to-the-compute-target).
If you need to access the AzureBlob for data upload or download in your training
Unable to upload project files to working directory in AzureBlob because the authorization failed. ```
-This is because the authorization failed when the job tries to upload the project files to the AzureBlob. You can check the following items to troubleshoot the issue:
+The cause is that authorization failed when the job tried to upload the project files to AzureBlob. You can check the following items to troubleshoot the issue:
* Make sure the storage account has enabled the exceptions of "Allow Azure services on the trusted service list to access this storage account" and the workspace is in the resource instances list. * Make sure the workspace has a system assigned managed identity. ## Private link issue
-We could use the method below to check private link setup by logging into one pod in the Kubernetes cluster and then check related network settings.
+We can use the following method to check the private link setup: log in to one pod in the Kubernetes cluster and then check the related network settings.
* Find workspace ID in Azure portal or get this ID by running `az ml workspace show` in the command line. * Show all azureml-fe pods run by `kubectl get po -n azureml -l azuremlappname=azureml-fe`.
If you set up private link from VNet to workspace correctly, then the internal I
curl https://{workspace_id}.workspace.westcentralus.api.azureml.ms/metric/v2.0/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}/api/2.0/prometheus/post -X POST -x {proxy_address} -d {} -v -k ```
-If the proxy and workspace with private link is configured correctly, you can see it's trying to connect to an internal IP. This will return a response with http 401, which is expected when you don't provide token.
+When the proxy and workspace are correctly set up with a private link, you should observe an attempt to connect to an internal IP. A response with an HTTP 401 status code is expected in this scenario if a token is not provided.
## Other known issues ### Kubernetes compute update does not take effect
-At this time, the CLI v2 and SDK v2 do not allow updating any configuration of an existing Kubernetes compute. For example, changing the namespace will not take effect.
+At this time, the CLI v2 and SDK v2 do not allow updating any configuration of an existing Kubernetes compute. For example, changing the namespace does not take effect.
### Workspace or resource group name end with '-'
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
More information about how to use ARM template can be found from [ARM template d
| Date | Version |Version description | ||||
+|Nov 21, 2023 | 1.1.39| Fixed vulnerabilities. Refined error message. Increased stability for relayserver API. |
+|Nov 1, 2023 | 1.1.37| Update data plane envoy version. |
|Oct 11, 2023 | 1.1.35| Fix vulnerable image. Bug fixes. | |Aug 25, 2023 | 1.1.34| Fix vulnerable image. Return more detailed identity error. Bug fixes. | |July 18, 2023 | 1.1.29| Add new identity operator errors. Bug fixes. |
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Previously updated : 06/12/2023 Last updated : 11/25/2023 #Customer intent: As a non-coding data scientist, I want to use automated machine learning to build a demand forecasting model. # Tutorial: Forecast demand with no-code automated machine learning in the Azure Machine Learning studio
-Learn how to create a [time-series forecasting model](concept-automated-ml.md#time-series-forecasting) without writing a single line of code using automated machine learning in the Azure Machine Learning studio. This model will predict rental demand for a bike sharing service.
+Learn how to create a [time-series forecasting model](concept-automated-ml.md#time-series-forecasting) without writing a single line of code using automated machine learning in the Azure Machine Learning studio. This model predicts rental demand for a bike sharing service.
-You won't write any code in this tutorial, you'll use the studio interface to perform training. You'll learn how to do the following tasks:
+You don't write any code in this tutorial; you use the studio interface to perform training. You learn how to do the following tasks:
> [!div class="checklist"] > * Create and load a dataset.
Also try automated machine learning for these other model types:
## Sign in to the studio
-For this tutorial, you create your automated ML experiment run in Azure Machine Learning studio, a consolidated web interface that includes machine learning tools to perform data science scenarios for data science practitioners of all skill levels. The studio is not supported on Internet Explorer browsers.
+For this tutorial, you create your automated ML experiment run in Azure Machine Learning studio, a consolidated web interface that includes machine learning tools to perform data science scenarios for data science practitioners of all skill levels. The studio isn't supported on Internet Explorer browsers.
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
Before you configure your experiment, upload your data file to your workspace in
1. Select **Next** on the bottom left
- 1. On the **Datastore and file selection** form, select the default datastore that was automatically set up during your workspace creation, **workspaceblobstore (Azure Blob Storage)**. This is the storage location where you'll upload your data file.
+ 1. On the **Datastore and file selection** form, select the default datastore that was automatically set up during your workspace creation, **workspaceblobstore (Azure Blob Storage)**. This is the storage location where you upload your data file.
- 1. Select **Upload files** from the **Upload** drop-down..
+ 1. Select **Upload files** from the **Upload** drop-down.
1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
After you load and configure your data, set up your remote compute target and se
Field | Description | Value for tutorial -|| Compute name | A unique name that identifies your compute context. | bike-compute
- Min / Max nodes| To profile data, you must specify 1 or more nodes.|Min nodes: 1<br>Max nodes: 6
+ Min / Max nodes| To profile data, you must specify one or more nodes.|Min nodes: 1<br>Max nodes: 6
Idle seconds before scale down | Idle time before the cluster is automatically scaled down to the minimum node count.|120 (default) Advanced settings | Settings to configure and authorize a virtual network for your experiment.| None
Complete the setup for your automated ML experiment by specifying the machine le
Primary metric| Evaluation metric that the machine learning algorithm will be measured by.|Normalized root mean squared error Explain best model| Automatically shows explainability on the best model created by automated ML.| Enable Blocked algorithms | Algorithms you want to exclude from the training job| Extreme Random Trees
- Additional forecasting settings| These settings help improve the accuracy of your model. <br><br> _**Forecast target lags:**_ how far back you want to construct the lags of the target variable <br> _**Target rolling window**_: specifies the size of the rolling window over which features, such as the *max, min* and *sum*, will be generated. | <br><br>Forecast&nbsp;target&nbsp;lags: None <br> Target&nbsp;rolling&nbsp;window&nbsp;size: None
+ Additional forecasting settings| These settings help improve the accuracy of your model. <br><br> _**Forecast target lags:**_ how far back you want to construct the lags of the target variable <br> _**Target rolling window**_: specifies the size of the rolling window over which features, such as the *max, min* and *sum*, are generated. | <br><br>Forecast&nbsp;target&nbsp;lags: None <br> Target&nbsp;rolling&nbsp;window&nbsp;size: None
Exit criterion| If a criteria is met, the training job is stopped. |Training&nbsp;job&nbsp;time (hours): 3 <br> Metric&nbsp;score&nbsp;threshold: None Concurrency| The maximum number of parallel iterations executed per iteration| Max&nbsp;concurrent&nbsp;iterations: 6
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md
Previously updated : 08/11/2021 Last updated : 11/14/2023
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-In this article, learn how to use a custom Docker image when you're training models with Azure Machine Learning. You'll use the example scripts in this article to classify pet images by creating a convolutional neural network.
+In this article, learn how to use a custom Docker image when you're training models with Azure Machine Learning. You use the example scripts in this article to classify pet images by creating a convolutional neural network.
Azure Machine Learning provides a default Docker base image. You can also use Azure Machine Learning environments to specify a different base image, such as one of the maintained [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers) or your own [custom image](../how-to-deploy-custom-container.md). Custom base images allow you to closely manage your dependencies and maintain tighter control over component versions when running training jobs.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
## November 2023 -- **Modify multiple server parameters using Azure CLI**
+- **Enhanced replica provisioning experience**
- You can now conveniently update multiple server parameters for your Azure Database for MySQL - Flexible Server using Azure CLI. [Learn more](./how-to-configure-server-parameters-cli.md#modify-a-server-parameter-value)
+ The replica provisioning experience now provides extra flexibility to modify the replica's compute and storage settings during the provisioning workflow. You can modify the compute settings of the replica server at provisioning time instead of making the changes after the replica server is provisioned. The feature also enables modifying the backup retention days of the replica server and configuring it with a different value than that of the source server.
+- **Modify multiple server parameters using Azure CLI**
+
+ You can now conveniently update multiple server parameters for your Azure Database for MySQL - Flexible Server using Azure CLI. [Learn more](./how-to-configure-server-parameters-cli.md#modify-a-server-parameter-value)
- **Accelerated logs in Azure Database for MySQL - Flexible Server (Preview)**
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
Title: Create a connection monitor - Azure portal description: Learn how to create a monitor in Azure Network Watcher connection monitor using the Azure portal.- + -- Previously updated : 11/05/2022-
-#Customer intent: I need to create a connection monitor to monitor communication between one VM and another.
Last updated : 11/30/2023+
+#CustomerIntent: As an Azure administrator, I want to learn how to create a connection monitor in Network Watcher so I can monitor the communication between one VM and another.
-# Create an Azure Network Watcher connection monitor using the Azure portal
+# Create a connection monitor using the Azure portal
This article describes how to create a monitor in Connection Monitor by using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments.
This article describes how to create a monitor in Connection Monitor by using th
> > To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new connection monitor in Azure Network Watcher before February 19, 2024.
-> [!IMPORTANT]
-> Connection Monitor supports end-to-end connectivity checks from and to Azure Virtual Machine Scale Sets. These checks enable faster performance monitoring and network troubleshooting across scale sets.
- ## Before you begin In monitors that you create by using Connection Monitor, you can add on-premises machines, Azure virtual machines (VMs), and Azure Virtual Machine Scale Sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
In the Azure portal, to create a test group in a connection monitor, specify val
* **Test Groups**: You can add one or more test groups to a connection monitor. These test groups can consist of multiple Azure or non-Azure endpoints. * For selected Azure VMs or Azure Virtual Machine Scale Sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the npm solution for non-Azure endpoints will be auto enabled after the creation of the connection monitor begins. * If the selected virtual machine scale set is set for a manual upgrade, you'll have to upgrade the scale set after Network Watcher extension installation to continue setting up the connection monitor with virtual machine scale set as endpoints. If the virtual machine scale set is set to auto upgrade, you don't need to worry about any upgrading after the Network Watcher extension is installed.
- * In the previously mentioned scenario, you can consent to an auto upgrade of a virtual machine scale set with auto enabling of the Network Watcher extension during the creation of the connection monitor for Virtual Machine Scale Sets with manual upgrading. This would eliminate your having to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
+ * In the previously mentioned scenario, you can consent to an auto upgrade of a virtual machine scale set with auto enabling of the Network Watcher extension during the creation of the connection monitor for Virtual Machine Scale Sets with manual upgrading. This would eliminate having to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
- :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up test groups and consent for auto-upgrading of a virtual machine scale set in the connection monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up test groups and consent for autoupgrading of a virtual machine scale set in the connection monitor.":::
* **Disable test group**: You can select this checkbox to disable monitoring for all sources and destinations that the test group specifies. This checkbox is cleared by default. ## Create alerts for a connection monitor
You can set up alerts on tests that are failing, based on the thresholds set in
In the Azure portal, to create alerts for a connection monitor, specify values for these fields: -- **Create alert**: You can select this checkbox to create a metric alert in Azure Monitor. When you select this checkbox, the other fields will be enabled for editing. Additional charges for the alert will be applicable, based on the [pricing for alerts](https://azure.microsoft.com/pricing/details/monitor/).
+- **Create alert**: You can select this checkbox to create a metric alert in Azure Monitor. When you select this checkbox, the other fields are enabled for editing. Additional charges for the alert will be applicable, based on the [pricing for alerts](https://azure.microsoft.com/pricing/details/monitor/).
- **Scope** > **Resource** > **Hierarchy**: These values are automatically entered, based on the values specified on the **Basics** pane.
Connection monitors have these scale limits:
## Next steps
-* [Learn how to analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts)
-* [Learn how to diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network)
+* [Learn how to analyze monitoring data and set alerts](./connection-monitor-overview.md#analyze-monitoring-data-and-set-alerts).
+* [Learn how to diagnose problems in your network](./connection-monitor-overview.md#diagnose-issues-in-your-network).
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
Last updated 11/29/2023
# Tutorial: Diagnose a communication problem between virtual networks using the Azure portal
-This tutorial shows you how to use Azure Network Watcher [VPN troubleshoot](network-watcher-troubleshoot-overview.md) capability to diagnose and troubleshoot a connectivity issue between two virtual networks. The virtual networks are connected via VPN gateways using VNet-to-VNet connections.
+This tutorial shows you how to use Azure Network Watcher [VPN troubleshoot](vpn-troubleshoot-overview.md) capability to diagnose and troubleshoot a connectivity issue between two virtual networks. The virtual networks are connected via VPN gateways using VNet-to-VNet connections.
:::image type="content" source="./media/diagnose-communication-problem-between-networks/vpn-troubleshoot-tutorial-diagram.png" alt-text="Diagram shows the resources created in the tutorial.":::
network-watcher Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-overview.md
You can select any item in the grid view. Select the icon in the **Flowlog Confi
TheΓÇ»**Alert** box on the right side of the page provides a view of all Traffic Analytics workspace-based alerts across all subscriptions. Select the alert counts to go to a detailed alerts page. ## <a name="diagnostictoolkit"></a> Diagnostic Toolkit
-Diagnostic Toolkit provides access to all the diagnostic features available for troubleshooting the network. You can use this drop-down list to access features like [packet capture](../network-watcher/network-watcher-packet-capture-overview.md), [VPN troubleshooting](../network-watcher/network-watcher-troubleshoot-overview.md), [connection troubleshooting](../network-watcher/network-watcher-connectivity-overview.md), [next hop](../network-watcher/network-watcher-next-hop-overview.md), and [IP flow verify](../network-watcher/network-watcher-ip-flow-verify-overview.md):
+Diagnostic Toolkit provides access to all the diagnostic features available for troubleshooting the network. You can use this drop-down list to access features like [packet capture](../network-watcher/network-watcher-packet-capture-overview.md), [VPN troubleshooting](../network-watcher/vpn-troubleshoot-overview.md), [connection troubleshooting](../network-watcher/network-watcher-connectivity-overview.md), [next hop](../network-watcher/network-watcher-next-hop-overview.md), and [IP flow verify](../network-watcher/network-watcher-ip-flow-verify-overview.md):
:::image type="content" source="./media/network-insights-overview/diagnostic-toolkit.png" alt-text="Screenshot shows the Diagnostic Toolkit tab in Azure Monitor network insights." lightbox="./media/network-insights-overview/diagnostic-toolkit.png":::
network-watcher Network Watcher Diagnose On Premises Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-diagnose-on-premises-connectivity.md
These issues are hard to troubleshoot and root causes are often non-intuitive. I
## Troubleshooting using Azure Network Watcher
-To diagnose your connection, connect to Azure PowerShell and initiate the `Start-AzNetworkWatcherResourceTroubleshooting` cmdlet. You can find the details on using this cmdlet at [Troubleshoot Virtual Network Gateway and connections - PowerShell](network-watcher-troubleshoot-manage-powershell.md). This cmdlet may take up to few minutes to complete.
+To diagnose your connection, connect to Azure PowerShell and initiate the `Start-AzNetworkWatcherResourceTroubleshooting` cmdlet. You can find the details on using this cmdlet at [Troubleshoot Virtual Network Gateway and connections - PowerShell](vpn-troubleshoot-powershell.md). This cmdlet may take up to few minutes to complete.
Once the cmdlet completes, you can navigate to the storage location specified in the cmdlet to get detailed information on about the issue and logs. Azure Network Watcher creates a zip folder that contains the following log files:
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Title: NSG flow logs
+ Title: NSG flow logs overview
description: Learn about NSG flow logs feature of Azure Network Watcher, which allows you to log information about IP traffic flowing through a network security group. Previously updated : 11/28/2023 Last updated : 11/30/2023
-#CustomerIntent: As an Azure administrator, I want to learn about NSG flow logs so that I can monitor my network and optimize its performance.
+#CustomerIntent: As an Azure administrator, I want to learn about NSG flow logs so that I can log my network traffic to analyze and optimize the network performance.
# Flow logging for network security groups
When you delete a network security group, the associated flow log resource is de
### Storage account - **Location**: The storage account must be in the same region as the network security group.-- **Subscription**: The storage account must be in a subscription associated with the same Microsoft Entra tenant as the network security group's subscription.
+- **Subscription**: The storage account must be in the same subscription as the network security group or in a subscription associated with the same Microsoft Entra tenant as the network security group's subscription.
- **Performance tier**: The storage account must be standard. Premium storage accounts aren't supported. - **Self-managed key rotation**: If you change or rotate the access keys to your storage account, NSG flow logs stop working. To fix this problem, you must disable and then re-enable NSG flow logs.
network-watcher Network Watcher Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-overview.md
Network Watcher offers seven network diagnostic tools that help troubleshoot and
### VPN troubleshoot
-**VPN troubleshoot** enables you to troubleshoot virtual network gateways and their connections. For more information, see [VPN troubleshoot overview](network-watcher-troubleshoot-overview.md) and [Diagnose a communication problem between networks](diagnose-communication-problem-between-networks.md).
+**VPN troubleshoot** enables you to troubleshoot virtual network gateways and their connections. For more information, see [VPN troubleshoot overview](vpn-troubleshoot-overview.md) and [Diagnose a communication problem between networks](diagnose-communication-problem-between-networks.md).
## Traffic
network-watcher Network Watcher Troubleshoot Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-cli.md
- Title: Troubleshoot Azure VNet gateway and connections - Azure CLI-
-description: This page explains how to use the Azure Network Watcher troubleshoot Azure CLI.
----- Previously updated : 07/25/2022----
-# Troubleshoot virtual network gateway and connections with Azure Network Watcher using Azure CLI
-
-> [!div class="op_single_selector"]
-> - [Portal](diagnose-communication-problem-between-networks.md)
-> - [PowerShell](network-watcher-troubleshoot-manage-powershell.md)
-> - [Azure CLI](network-watcher-troubleshoot-manage-cli.md)
-> - [REST API](network-watcher-troubleshoot-manage-rest.md)
-
-Network Watcher provides many capabilities as it relates to understanding your network resources in Azure. One of these capabilities is resource troubleshooting. Resource troubleshooting can be called through the portal, PowerShell, CLI, or REST API. When called, Network Watcher inspects the health of a Virtual Network Gateway or a Connection and returns its findings.
-
-To perform the steps in this article, you need to [install the Azure CLI](/cli/azure/install-azure-cli) for Windows, Linux, or macOS.
-
-## Before you begin
-
-This scenario assumes you have already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher.
-
-For a list of supported gateway types, see [Supported gateway types](network-watcher-troubleshoot-overview.md#supported-gateway-types).
-
-## Overview
-
-Resource troubleshooting provides the ability to troubleshoot issues that arise with virtual network gateways and connections. When a request is made to resource troubleshooting, logs are queried and inspected. When inspection is complete, the results are returned. Resource troubleshooting requests are long running and can take several minutes to return a result. The logs from troubleshooting are stored in a container in the storage account that you specify.
-
-## Retrieve a Virtual Network Gateway Connection
-
-In this example, resource troubleshooting is run on a connection. You can also pass it a virtual network gateway. The following command lists the VPN connections in a resource group.
-
-```azurecli
-az network vpn-connection list --resource-group resourceGroupName
-```
-
-Once you have the name of the connection, you can run this command to get its resource ID:
-
-```azurecli
-az network vpn-connection show --resource-group resourceGroupName --name vpnConnectionName --query id --output tsv
-```
-
-## Create a storage account
-
-Resource troubleshooting returns data about the health of the resource and saves logs to a storage account for review. In this step, we create a storage account. If you already have a storage account, you can use it instead.
-
-1. Create the storage account
-
- ```azurecli
- az storage account create --name storageAccountName --location westcentralus --resource-group resourceGroupName --sku Standard_LRS
- ```
-
-1. Get the storage account keys
-
- ```azurecli
- az storage account keys list --resource-group resourceGroupName --account-name storageAccountName
- ```
-
-1. Create the container
-
- ```azurecli
- az storage container create --account-name storageAccountName --account-key {storageAccountKey} --name logs
- ```
-
-## Run Network Watcher resource troubleshooting
-
-You troubleshoot resources with the `az network watcher troubleshooting start` command. We pass the command the resource group, the name and type of the resource to troubleshoot, the storage account, and the path to the blob container to store the troubleshooting results in.
-
-```azurecli
-az network watcher troubleshooting start --resource-group resourceGroupName --resource resourceName --resource-type {vnetGateway/vpnConnection} --storage-account storageAccountName --storage-path https://{storageAccountName}.blob.core.windows.net/{containerName}
-```
-
-Once you run the command, Network Watcher reviews the resource to verify its health. It returns the results to the shell and stores logs of the results in the specified storage account.
-
-## Understanding the results
-
-The action text provides general guidance on how to resolve the issue. If an action can be taken for the issue, a link is provided with additional guidance. If there's no additional guidance, the response provides the URL to open a support case. For more information about the properties of the response and what's included, see [Network Watcher troubleshoot overview](network-watcher-troubleshoot-overview.md).
-
-For instructions on downloading files from Azure storage accounts, see [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool you can use is Storage Explorer. For more information, see [Storage Explorer](https://storageexplorer.com/).
-
-## Azure CLI troubleshooting
--
-## Next steps
-
-If settings have been changed that stop VPN connectivity, see [Manage Network Security Groups](../virtual-network/manage-network-security-group.md) to track down the network security group and security rules that may be in question.
network-watcher Network Watcher Troubleshoot Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-powershell.md
- Title: Troubleshoot Azure VNet gateway and connections - Azure PowerShell-
-description: This page explains how to use the Azure Network Watcher troubleshoot capability with PowerShell.
----- Previously updated : 11/22/2022----
-# Troubleshoot virtual network gateway and connections with Azure Network Watcher using PowerShell
-
-> [!div class="op_single_selector"]
-> - [Portal](diagnose-communication-problem-between-networks.md)
-> - [PowerShell](network-watcher-troubleshoot-manage-powershell.md)
-> - [Azure CLI](network-watcher-troubleshoot-manage-cli.md)
-> - [REST API](network-watcher-troubleshoot-manage-rest.md)
-
-Network Watcher provides various capabilities that help you understand your network resources in Azure. One of these capabilities is resource troubleshooting. Resource troubleshooting can be called through the Azure portal, PowerShell, the Azure CLI, or the REST API. When called, Network Watcher inspects the health of a virtual network gateway or a connection and returns its findings.
---
-## Prerequisites
-- A [Network Watcher instance](network-watcher-create.md).
-- Ensure you're using a supported gateway type. [Learn more](network-watcher-troubleshoot-overview.md#supported-gateway-types).
-
-## Overview
-
-Resource troubleshooting provides the ability to troubleshoot issues that arise with virtual network gateways and connections. When a request is made to resource troubleshooting, logs are queried and inspected. When inspection is complete, the results are returned. Resource troubleshooting requests are long running and can take several minutes to return a result. The logs from troubleshooting are stored in a container in the storage account that you specify.
-
-## Retrieve Network Watcher
-
-The first step is to retrieve the Network Watcher instance. The `$networkWatcher` variable is passed to the `Start-AzNetworkWatcherResourceTroubleshooting` cmdlet in a later step.
-
-```powershell
-$networkWatcher = Get-AzNetworkWatcher -Location "WestCentralUS"
-```
-
-## Retrieve a Virtual Network Gateway Connection
-
-In this example, resource troubleshooting is run on a connection. You can also pass it a virtual network gateway.
-
-```powershell
-$connection = Get-AzVirtualNetworkGatewayConnection -Name "2to3" -ResourceGroupName "testrg"
-```
-
-## Create a storage account
-
-Resource troubleshooting returns data about the health of the resource and saves logs to a storage account for review. In this step, we create a storage account. If you already have a storage account, you can use it instead.
-
-```powershell
-$sa = New-AzStorageAccount -Name "contosoexamplesa" -SKU "Standard_LRS" -ResourceGroupName "testrg" -Location "WestCentralUS"
-Set-AzCurrentStorageAccount -ResourceGroupName $sa.ResourceGroupName -Name $sa.StorageAccountName
-$sc = New-AzStorageContainer -Name logs
-```
-
-## Run Network Watcher resource troubleshooting
-
-You can troubleshoot resources with the [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) cmdlet. We pass the cmdlet the Network Watcher object, the ID of the Connection or Virtual Network Gateway, the storage account ID, and the path to store the results.
-
-> [!NOTE]
-> The [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) cmdlet is long running and may take a few minutes to complete.
-
-```powershell
-Start-AzNetworkWatcherResourceTroubleshooting -NetworkWatcher $networkWatcher -TargetResourceId $connection.Id -StorageId $sa.Id -StoragePath "$($sa.PrimaryEndpoints.Blob)$($sc.name)"
-```
-
-Once you run the cmdlet, Network Watcher reviews the resource to verify its health. It returns the results to the shell and stores logs of the results in the storage account specified.
-
-## Understanding the results
-
-The action text provides general guidance on how to resolve the issue.
-- If an action can be taken for the issue, a link is provided with additional guidance.
-- If there's no guidance provided, the response provides the URL to open a support case.
-
-For more information about the properties of the response and what is included, see [Network Watcher Troubleshoot overview](network-watcher-troubleshoot-overview.md).
-
-For instructions on downloading files from Azure storage accounts, refer to [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool that can be used is Storage Explorer. For more information, see [Storage Explorer](https://storageexplorer.com/).
-
-## Next steps
-
-If VPN connectivity has been stopped due to a change in settings, see [Manage Network Security Groups](../virtual-network/manage-network-security-group.md) to track down the network security group and security rules that may be in question.
network-watcher Network Watcher Troubleshoot Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-rest.md
- Title: Troubleshoot VNet gateway and connections - Azure REST API-
-description: This page explains how to troubleshoot virtual network gateway and connections with Azure Network Watcher using REST API.
----- Previously updated : 01/07/2021---
-# Troubleshoot virtual network gateway and connections with Azure Network Watcher using REST API
-
-> [!div class="op_single_selector"]
-> - [Portal](diagnose-communication-problem-between-networks.md)
-> - [PowerShell](network-watcher-troubleshoot-manage-powershell.md)
-> - [Azure CLI](network-watcher-troubleshoot-manage-cli.md)
-> - [REST API](network-watcher-troubleshoot-manage-rest.md)
-
-Network Watcher provides many capabilities that help you understand your network resources in Azure. One of these capabilities is resource troubleshooting. Resource troubleshooting can be called through the portal, PowerShell, the Azure CLI, or the REST API. When called, Network Watcher inspects the health of a virtual network gateway or a connection and returns its findings.
-
-This article takes you through the different management tasks that are currently available for resource troubleshooting.
-- [**Troubleshoot a Virtual Network gateway**](#troubleshoot-a-virtual-network-gateway)
-- [**Troubleshoot a Connection**](#troubleshoot-connections)
-
-## Before you begin
-
-ARMClient is used to call the REST API from PowerShell. You can find ARMClient on Chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient).
-
-This scenario assumes you have already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher.
-
-For a list of supported gateway types, see [Supported gateway types](network-watcher-troubleshoot-overview.md#supported-gateway-types).
-
-## Overview
-
-Network Watcher troubleshooting provides the ability to troubleshoot issues that arise with virtual network gateways and connections. When a request is made to resource troubleshooting, logs are queried and inspected. When inspection is complete, the results are returned. The troubleshoot API requests are long running and can take several minutes to return a result. Logs are stored in a container in a storage account.
-
-## Log in with ARMClient
-
-```powershell
-armclient login
-```
-
-## Troubleshoot a Virtual Network gateway
--
-### POST the troubleshoot request
-
-The following example queries the status of a Virtual Network gateway.
-
-```powershell
-
-$subscriptionId = "00000000-0000-0000-0000-000000000000"
-$resourceGroupName = "ContosoRG"
-$NWresourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$vnetGatewayName = "ContosoVNETGateway"
-$storageAccountName = "contososa"
-$containerName = "gwlogs"
-$requestBody = @"
-{
-'TargetResourceId': '/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/${vnetGatewayName}',
-'Properties': {
-'StorageId': '/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Storage/storageAccounts/${storageAccountName}',
-'StoragePath': 'https://${storageAccountName}.blob.core.windows.net/${containerName}'
-}
-}
-"@
--
-armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${NWresourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/troubleshoot?api-version=2016-03-30" $requestBody -verbose
-```
-
-Since this operation is long running, the URI for querying the operation and the URI for the result are returned in the response headers, as shown in the following response:
-
-**Important Values**
-
-* **Azure-AsyncOperation** - This property contains the URI to query the Async troubleshoot operation
-* **Location** - This property contains the URI where the results are when the operation is complete
-
-```
-HTTP/1.1 202 Accepted
-Pragma: no-cache
-Retry-After: 10
-x-ms-request-id: 8a1167b7-6768-4ac1-85dc-703c9c9b9247
-Azure-AsyncOperation: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operations/8a1167b7-6768-4ac1-85dc-703c9c9b9247?api-version=2016-03-30
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Cache-Control: no-cache
-Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/8a1167b7-6768-4ac1-85dc-703c9c9b9247?api-version=2016-03-30
-Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0
-x-ms-ratelimit-remaining-subscription-writes: 1199
-x-ms-correlation-request-id: 4364d88a-bd08-422c-a716-dbb0cdc99f7b
-x-ms-routing-request-id: NORTHCENTRALUS:20170112T183202Z:4364d88a-bd08-422c-a716-dbb0cdc99f7b
-Date: Thu, 12 Jan 2017 18:32:01 GMT
-
-null
-```
-
-### Query the async operation for completion
-
-Use the operations URI to query for the progress of the operation as seen in the following example:
-
-```powershell
-armclient get "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operations/8a1167b7-6768-4ac1-85dc-703c9c9b9247?api-version=2016-03-30" -verbose
-```
-
-While the operation is in progress, the response shows **InProgress** as seen in the following example:
-
-```json
-{
- "status": "InProgress"
-}
-```
-
-When the operation is complete the status changes to **Succeeded**.
-
-```json
-{
- "status": "Succeeded"
-}
-```
-
-### Retrieve the results
-
-Once the status returned is **Succeeded**, call a GET Method on the operationResult URI to retrieve the results.
-
-```powershell
-armclient get "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/8a1167b7-6768-4ac1-85dc-703c9c9b9247?api-version=2016-03-30" -verbose
-```
-
-The following response is an example of a typical degraded response returned when querying the results of troubleshooting a gateway. See [Understanding the results](#understanding-the-results) for clarification on what the properties in the response mean.
-
-```json
-{
- "startTime": "2017-01-12T10:31:41.562646-08:00",
- "endTime": "2017-01-12T18:31:48.677Z",
- "code": "Degraded",
- "results": [
- {
- "id": "PlatformInActive",
- "summary": "We are sorry, your VPN gateway is in standby mode",
- "detail": "During this time the gateway will not initiate or accept VPN connections with on premises VPN devices or other Azure VPN Gateways. This is a transient state while the Azure platform is being updated.",
- "recommendedActions": [
- {
- "actionText": "If the condition persists, please try resetting your Azure VPN gateway",
- "actionUri": "https://azure.microsoft.com/documentation/articles/vpn-gateway-resetgw-classic/",
- "actionUriText": "resetting the VPN Gateway"
- },
- {
- "actionText": "If your VPN gateway isn't up and running by the expected resolution time, contact support",
- "actionUri": "https://azure.microsoft.com/support",
- "actionUriText": "contact support"
- }
- ]
- },
- {
- "id": "NoFault",
- "summary": "This VPN gateway is running normally",
- "detail": "There aren't any known Azure platform problems affecting this VPN Connection",
- "recommendedActions": [
- {
- "actionText": "If you are still experience problems with the VPN gateway, please try resetting the VPN gateway.",
- "actionUri": "https://azure.microsoft.com/documentation/articles/vpn-gateway-resetgw-classic/",
- "actionUriText": "resetting VPN gateway"
- },
- {
- "actionText": "If you are experiencing problems you believe are caused by Azure, contact support",
- "actionUri": "https://azure.microsoft.com/support",
- "actionUriText": "contact support"
- }
- ]
- }
- ]
-}
-```
--
-## Troubleshoot Connections
-
-The following example queries the status of a Connection.
-
-```powershell
-
-$subscriptionId = "00000000-0000-0000-0000-000000000000"
-$resourceGroupName = "ContosoRG"
-$NWresourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$connectionName = "VNET2toVNET1Connection"
-$storageAccountName = "contososa"
-$containerName = "gwlogs"
-$requestBody = @"
-{
-'TargetResourceId': '/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Network/connections/${connectionName}',
-'Properties': {
-'StorageId': '/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Storage/storageAccounts/${storageAccountName}',
-'StoragePath': 'https://${storageAccountName}.blob.core.windows.net/${containerName}'
-}
-}
-"@
-
-armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${NWresourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/troubleshoot?api-version=2016-03-30" $requestBody
-```
-
-> [!NOTE]
-> The troubleshoot operation can't be run in parallel on a connection and its corresponding gateway. The operation must complete on one resource before you run it on the other.
-
-Since this is a long-running transaction, the URI for querying the operation and the URI for the result are returned in the response headers, as shown in the following response:
-
-**Important Values**
-
-* **Azure-AsyncOperation** - This property contains the URI to query the Async troubleshoot operation
-* **Location** - This property contains the URI where the results are when the operation is complete
-
-```
-HTTP/1.1 202 Accepted
-Pragma: no-cache
-Retry-After: 10
-x-ms-request-id: 8a1167b7-6768-4ac1-85dc-703c9c9b9247
-Azure-AsyncOperation: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operations/8a1167b7-6768-4ac1-85dc-703c9c9b9247?api-version=2016-03-30
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Cache-Control: no-cache
-Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/8a1167b7-6768-4ac1-85dc-703c9c9b9247?api-version=2016-03-30
-Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0
-x-ms-ratelimit-remaining-subscription-writes: 1199
-x-ms-correlation-request-id: 4364d88a-bd08-422c-a716-dbb0cdc99f7b
-x-ms-routing-request-id: NORTHCENTRALUS:20170112T183202Z:4364d88a-bd08-422c-a716-dbb0cdc99f7b
-Date: Thu, 12 Jan 2017 18:32:01 GMT
-
-null
-```
-
-### Query the async operation for completion
-
-Use the operations URI to query for the progress of the operation as seen in the following example:
-
-```powershell
-armclient get "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operations/843b1c31-4717-4fdd-b7a6-4c786ca9c501?api-version=2016-03-30"
-```
-
-While the operation is in progress, the response shows **InProgress** as seen in the following example:
-
-```json
-{
- "status": "InProgress"
-}
-```
-
-When the operation is complete, the status changes to **Succeeded**.
-
-```json
-{
- "status": "Succeeded"
-}
-```
-
-
-### Retrieve the results
-
-Once the status returned is **Succeeded**, call a GET Method on the operationResult URI to retrieve the results.
-
-```powershell
-armclient get "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/843b1c31-4717-4fdd-b7a6-4c786ca9c501?api-version=2016-03-30"
-```
-
-The following response is an example of a typical response returned when querying the results of troubleshooting a connection.
-
-```json
-{
- "startTime": "2017-01-12T14:09:19.1215346-08:00",
- "endTime": "2017-01-12T22:09:23.747Z",
- "code": "UnHealthy",
- "results": [
- {
- "id": "PlatformInActive",
- "summary": "We are sorry, your VPN gateway is in standby mode",
- "detail": "During this time the gateway will not initiate or accept VPN connections with on premises VPN devices or other Azure VPN Gateways. This
-is a transient state while the Azure platform is being updated.",
- "recommendedActions": [
- {
- "actionText": "If the condition persists, please try resetting your Azure VPN gateway",
- "actionUri": "https://azure.microsoft.com/documentation/articles/vpn-gateway-resetgw-classic/",
- "actionUriText": "resetting the VPN gateway"
- },
- {
- "actionText": "If your VPN Connection isn't up and running by the expected resolution time, contact support",
- "actionUri": "https://azure.microsoft.com/support",
- "actionUriText": "contact support"
- }
- ]
- },
- {
- "id": "NoFault",
- "summary": "This VPN Connection is running normally",
- "detail": "There aren't any known Azure platform problems affecting this VPN Connection",
- "recommendedActions": [
- {
- "actionText": "If you are still experience problems with the VPN gateway, please try resetting the VPN gateway.",
- "actionUri": "https://azure.microsoft.com/documentation/articles/vpn-gateway-resetgw-classic/",
- "actionUriText": "resetting VPN gateway"
- },
- {
- "actionText": "If you are experiencing problems you believe are caused by Azure, contact support",
- "actionUri": "https://azure.microsoft.com/support",
- "actionUriText": "contact support"
- }
- ]
- }
- ]
-}
-```
-
-## Understanding the results
-
-The action text provides general guidance on how to resolve the issue. If an action can be taken for the issue, a link is provided with additional guidance. If there's no additional guidance, the response provides the URL to open a support case. For more information about the properties of the response and what's included, see [Network Watcher troubleshoot overview](network-watcher-troubleshoot-overview.md).
-
-For instructions on downloading files from Azure storage accounts, see [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool you can use is Storage Explorer. For more information, see [Storage Explorer](https://storageexplorer.com/).
-
-## Next steps
-
-If settings have been changed that stop VPN connectivity, see [Manage Network Security Groups](../virtual-network/manage-network-security-group.md) to track down the network security group and security rules that may be in question.
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Title: VNet flow logs (preview)
-description: Learn about VNet flow logs feature of Azure Network Watcher.
+description: Learn about the Azure Network Watcher VNet flow logs feature and how to use it to record your virtual network traffic.
Previously updated : 11/29/2023 Last updated : 11/30/2023 #CustomerIntent: As an Azure administrator, I want to learn about VNet flow logs so that I can log my network traffic to analyze and optimize the network performance.
For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow record.
### Storage account

- **Location**: The storage account must be in the same region as the virtual network.
-- **Subscription**: The storage account must be in a subscription associated with the same Microsoft Entra tenant as the virtual network's subscription.
+- **Subscription**: The storage account must be in the same subscription as the virtual network, or in a subscription associated with the same Microsoft Entra tenant as the virtual network's subscription.
- **Performance tier**: The storage account must be standard. Premium storage accounts aren't supported.
- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs.
network-watcher Vpn Troubleshoot Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vpn-troubleshoot-cli.md
+
+ Title: Troubleshoot VPN gateways and connections - Azure CLI
+
+description: Learn how to use Azure Network Watcher VPN troubleshoot capability to troubleshoot VPN virtual network gateways and their connections using the Azure CLI.
++++ Last updated : 11/30/2023++
+#CustomerIntent: As a network administrator, I want to determine why resources in a virtual network can't communicate with resources in a different virtual network over a VPN connection.
++
+# Troubleshoot VPN virtual network gateways and connections using the Azure CLI
+
+> [!div class="op_single_selector"]
+> - [Portal](diagnose-communication-problem-between-networks.md)
+> - [PowerShell](vpn-troubleshoot-powershell.md)
+> - [Azure CLI](vpn-troubleshoot-cli.md)
+
+In this article, you learn how to use the Network Watcher VPN troubleshoot capability to diagnose VPN virtual network gateways and their connections, and to solve connectivity issues between your virtual network and your on-premises network. VPN troubleshoot requests are long running and can take several minutes to return a result. The logs from troubleshooting are stored in a container in the storage account that you specify.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A Network Watcher enabled in the region of the virtual network gateway. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md?tabs=cli).
+
+- A virtual network gateway. For more information, see [Supported gateway types](vpn-troubleshoot-overview.md#supported-gateway-types).
+
+- Azure Cloud Shell or Azure CLI.
+
+ The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
+
+## Troubleshoot using an existing storage account
+
+In this section, you learn how to troubleshoot a VPN virtual network gateway or a VPN connection using an existing storage account.
+
+# [**Gateway**](#tab/gateway)
+
+Use [az storage account show](/cli/azure/storage/account#az-storage-account-show) to retrieve the resource ID of the storage account. Then use [az network watcher troubleshooting start](/cli/azure/network/watcher/troubleshooting#az-network-watcher-troubleshooting-start) to start troubleshooting the VPN gateway.
+
+```azurecli-interactive
+# Place the storage account ID into a variable.
+storageId=$(az storage account show --name 'mystorageaccount' --resource-group 'myResourceGroup' --query 'id' --output tsv)
+
+# Start VPN troubleshoot session.
+az network watcher troubleshooting start --resource-group 'myResourceGroup' --resource 'myGateway' --resource-type 'vnetGateway' --storage-account $storageId --storage-path 'https://mystorageaccount.blob.core.windows.net/{containerName}'
+```
+
+# [**Connection**](#tab/connection)
+
+Use [az storage account show](/cli/azure/storage/account#az-storage-account-show) to retrieve the resource ID of the storage account. Then use [az network watcher troubleshooting start](/cli/azure/network/watcher/troubleshooting#az-network-watcher-troubleshooting-start) to start troubleshooting the VPN connection.
+
+```azurecli-interactive
+# Place the storage account ID into a variable.
+storageId=$(az storage account show --name 'mystorageaccount' --resource-group 'myResourceGroup' --query 'id' --output tsv)
+
+# Start VPN troubleshoot session.
+az network watcher troubleshooting start --resource-group 'myResourceGroup' --resource 'myConnection' --resource-type 'vpnConnection' --storage-account $storageId --storage-path 'https://mystorageaccount.blob.core.windows.net/{containerName}'
+```
+++
+After the troubleshooting request is completed, ***Healthy*** or ***UnHealthy*** is returned with action text that provides general guidance on how to resolve the issue. If an action can be taken for the issue, a link is provided with more guidance.
+
+Additionally, detailed logs are stored in the storage account container you specified in the previous command. For more information, see [Log files](vpn-troubleshoot-overview.md#log-files). You can use Storage Explorer or any other method you prefer to access and download the logs, as in the following sketch. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
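+
+For example, here's a minimal Azure CLI sketch that lists and downloads the generated log files, assuming the storage account and container names used earlier (the blob name is a placeholder for the file you want):
+
+```azurecli-interactive
+# List the blobs that the troubleshooting session wrote to the container.
+az storage blob list --account-name 'mystorageaccount' --container-name '{containerName}' --auth-mode login --query '[].name' --output tsv
+
+# Download one of the log files to the current directory.
+az storage blob download --account-name 'mystorageaccount' --container-name '{containerName}' --name '{blobName}' --file './vpn-logs.zip' --auth-mode login
+```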
+
+## Troubleshoot using a new storage account
+
+In this section, you learn how to troubleshoot a VPN virtual network gateway or a VPN connection using a new storage account.
+
+# [**Gateway**](#tab/gateway)
+
+Use [az storage account create](/cli/azure/storage/account#az-storage-account-create) and [az storage container create](/cli/azure/storage/container#az-storage-container-create) to create a new storage account and a container respectively. Then, use [az network watcher troubleshooting start](/cli/azure/network/watcher/troubleshooting#az-network-watcher-troubleshooting-start) to start troubleshooting the VPN gateway.
+
+```azurecli-interactive
+# Create a new storage account.
+az storage account create --name 'mystorageaccount' --resource-group 'myResourceGroup' --location 'eastus' --sku 'Standard_LRS'
+
+# Get the storage account keys.
+az storage account keys list --resource-group 'myResourceGroup' --account-name 'mystorageaccount'
+
+# Create a container.
+az storage container create --account-name 'mystorageaccount' --account-key {storageAccountKey} --name 'vpn'
+
+# Start VPN troubleshoot session.
+az network watcher troubleshooting start --resource-group 'myResourceGroup' --resource 'myGateway' --resource-type 'vnetGateway' --storage-account 'mystorageaccount' --storage-path 'https://mystorageaccount.blob.core.windows.net/vpn'
+```
+
+# [**Connection**](#tab/connection)
+
+Use [az storage account create](/cli/azure/storage/account#az-storage-account-create) and [az storage container create](/cli/azure/storage/container#az-storage-container-create) to create a new storage account and a container respectively. Then, use [az network watcher troubleshooting start](/cli/azure/network/watcher/troubleshooting#az-network-watcher-troubleshooting-start) to start troubleshooting the VPN connection.
+
+```azurecli-interactive
+# Create a new storage account.
+az storage account create --name 'mystorageaccount' --resource-group 'myResourceGroup' --location 'eastus' --sku 'Standard_LRS'
+
+# Get the storage account keys.
+az storage account keys list --resource-group 'myResourceGroup' --account-name 'mystorageaccount'
+
+# Create a container.
+az storage container create --account-name 'mystorageaccount' --account-key {storageAccountKey} --name 'vpn'
+
+# Start VPN troubleshoot session.
+az network watcher troubleshooting start --resource-group 'myResourceGroup' --resource 'myConnection' --resource-type 'vpnConnection' --storage-account 'mystorageaccount' --storage-path 'https://mystorageaccount.blob.core.windows.net/vpn'
+```
+++
+After the troubleshooting request is completed, ***Healthy*** or ***UnHealthy*** is returned with action text that provides general guidance on how to resolve the issue. If an action can be taken for the issue, a link is provided with more guidance.
+
+Additionally, detailed logs are stored in the storage account container you specified in the previous command. For more information, see [Log files](vpn-troubleshoot-overview.md#log-files). You can use Storage Explorer or any other method you prefer to access and download the logs. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
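+
+If you need to review the outcome again later, you can retrieve the result of the last troubleshooting operation without rerunning the diagnosis. A minimal sketch, assuming the same placeholder names used in the gateway example above:
+
+```azurecli-interactive
+# Show the result of the most recent troubleshooting operation on the gateway.
+az network watcher troubleshooting show --resource 'myGateway' --resource-type 'vnetGateway' --resource-group 'myResourceGroup'
+```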
+
+## Related content
+
+- [Tutorial: Diagnose a communication problem between virtual networks using the Azure portal](diagnose-communication-problem-between-networks.md).
+
+- [VPN troubleshoot overview](vpn-troubleshoot-overview.md).
network-watcher Vpn Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vpn-troubleshoot-overview.md
+
+ Title: VPN troubleshoot overview
+
+description: Learn about Azure Network Watcher VPN troubleshoot capability and how to use it to troubleshoot VPN virtual network gateways and their connections.
++++ Last updated : 02/23/2023+
+#CustomerIntent: As an Azure administrator, I want to learn about VPN troubleshoot so I can use it to troubleshoot my VPN virtual network gateways and their connections whenever resources in a virtual network can't communicate with on-premises machines over a VPN connection.
++
+# VPN troubleshoot overview
+
+Virtual network gateways provide connectivity between on-premises resources and Azure virtual networks. Monitoring virtual network gateways and their connections is critical to ensure communication isn't broken. Azure Network Watcher provides the capability to troubleshoot virtual network gateways and their connections. The capability can be called through the Azure portal, Azure PowerShell, the Azure CLI, or the REST API. When called, Network Watcher diagnoses the health of the gateway or connection and returns the appropriate results. The request is a long-running transaction, and the results are returned once the diagnosis is complete.
++
+## Supported gateway types
+
+The following table lists which gateways and connections are supported with Network Watcher troubleshooting:
+
+| Gateway or connection | Supported |
+| --- | --- |
+|**Gateway types** | |
+|VPN | Supported |
+|ExpressRoute | Not Supported |
+|**VPN types** | |
+|Route Based | Supported|
+|Policy Based | Not Supported|
+|**Connection types**||
+|IPSec| Supported|
+|VNet2VNet| Supported|
+|ExpressRoute| Not Supported|
+|VPNClient| Not Supported|
+
+## Results
+
+The preliminary results give an overall picture of the health of the resource. Deeper information can be provided for resources, as shown in the following section.
+
+The following list shows the values returned by the troubleshoot API:
+
+* **startTime** - This value is the time the troubleshoot API call started.
+* **endTime** - This value is the time when the troubleshooting ended.
+* **code** - This value is UnHealthy if there's a single diagnosis failure.
+* **results** - This value is a collection of results returned for the connection or the virtual network gateway.
+ * **id** - This value is the fault type.
+ * **summary** - This value is a summary of the fault.
+ * **detail** - This value provides a detailed description of the fault.
+ * **recommendedActions** - This property is a collection of recommended actions to take.
+ * **actionText** - This value contains the text describing what action to take.
+ * **actionUri** - This value provides the URI to documentation on how to act.
+ * **actionUriText** - This value is a short description of the action text.
+
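+Put together, a troubleshoot response has roughly the following shape. This is an illustrative sketch only, with placeholder values drawn from the properties and fault types documented on this page:
+
+```json
+{
+  "startTime": "2023-11-30T10:00:00Z",
+  "endTime": "2023-11-30T10:05:00Z",
+  "code": "UnHealthy",
+  "results": [
+    {
+      "id": "PeerReachability",
+      "summary": "The peer gateway isn't reachable.",
+      "detail": "...",
+      "recommendedActions": [
+        {
+          "actionText": "...",
+          "actionUri": "...",
+          "actionUriText": "..."
+        }
+      ]
+    }
+  ]
+}
+```
+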
+The following tables show the different fault types (**id** under results from the preceding list) that are available and whether the fault creates logs.
+
+### Gateway
+
+| Fault Type | Reason | Log|
+| --- | --- | --- |
+| NoFault | When no error is detected |Yes|
+| GatewayNotFound | Can't find gateway or gateway isn't provisioned |No|
+| PlannedMaintenance | Gateway instance is under maintenance |No|
+| UserDrivenUpdate | This fault occurs when a user update is in progress. The update could be a resize operation. | No |
+| VipUnResponsive | This fault occurs when the primary instance of the gateway can't be reached due to a health probe failure. | No |
+| PlatformInActive | There's an issue with the platform. | No|
+| ServiceNotRunning | The underlying service isn't running. | No|
+| NoConnectionsFoundForGateway | No connections exist on the gateway. This fault is only a warning.| No|
+| ConnectionsNotConnected | Connections aren't connected. This fault is only a warning.| Yes|
+| GatewayCPUUsageExceeded | The current gateway CPU usage is > 95%. | Yes |
+
+### Connection
+
+| Fault Type | Reason | Log|
+| --- | --- | --- |
+| NoFault | When no error is detected |Yes|
+| GatewayNotFound | Can't find gateway or gateway isn't provisioned |No|
+| PlannedMaintenance | Gateway instance is under maintenance |No|
+| UserDrivenUpdate | This fault occurs when a user update is in progress. The update could be a resize operation. | No |
+| VipUnResponsive | This fault occurs when the primary instance of the gateway can't be reached due to a health probe failure. | No |
+| ConnectionEntityNotFound | Connection configuration is missing | No |
+| ConnectionIsMarkedDisconnected | The connection is marked "disconnected" |No|
+| ConnectionNotConfiguredOnGateway | The underlying service doesn't have the connection configured. | Yes |
+| ConnectionMarkedStandby | The underlying service is marked as standby.| Yes|
+| Authentication | Preshared key mismatch | Yes|
+| PeerReachability | The peer gateway isn't reachable. | Yes|
+| IkePolicyMismatch | The peer gateway has IKE policies that aren't supported by Azure. | Yes|
+| WfpParse Error | An error occurred parsing the WFP log. |Yes|
++
+## Log files
+
+The resource troubleshooting log files are stored in a storage account after resource troubleshooting is finished. The following image shows the example contents of a call that resulted in an error.
++
+> [!NOTE]
+> 1. In some cases, only a subset of the log files is written to storage.
+> 2. For newer gateway versions, the IkeErrors.txt, Scrubbed-wfpdiag.txt, and wfpdiag.txt.sum files have been replaced by an IkeLogs.txt file that contains all IKE activity (not just errors).
+
+For instructions on downloading files from Azure storage accounts, see [Download a block blob](../storage/blobs/storage-quickstart-blobs-portal.md#download-a-block-blob). Another tool you can use is Storage Explorer. For information about Azure Storage Explorer, see [Use Azure Storage Explorer to download blobs](../storage/blobs/quickstart-storage-explorer.md#download-blobs).
+
+### ConnectionStats.txt
+
+The **ConnectionStats.txt** file contains overall stats of the Connection, including ingress and egress bytes, Connection status, and the time the Connection was established.
+
+> [!NOTE]
+> If the call to the troubleshooting API returns healthy, the only thing returned in the zip file is a **ConnectionStats.txt** file.
+
+The contents of this file are similar to the following example:
+
+```
+Connectivity State : Connected
+Remote Tunnel Endpoint :
+Ingress Bytes (since last connected) : 288 B
+Egress Bytes (Since last connected) : 288 B
+Connected Since : 2/1/2017 8:22:06 PM
+```
+
+### CPUStats.txt
+
+The **CPUStats.txt** file contains CPU usage and memory available at the time of testing. The contents of this file are similar to the following example:
+
+```
+Current CPU Usage : 0 % Current Memory Available : 641 MBs
+```
+
+### IKElogs.txt
+
+The **IKElogs.txt** file contains any IKE activity that was found during monitoring.
+
+The following example shows the contents of an IKElogs.txt file.
+
+```
+Remote <IPaddress>:500: Local <IPaddress>:500: [RECEIVED][SA_AUTH] Received IKE AUTH message
+Remote <IPaddress>:500: Local <IPaddress>:500: Received Traffic Selector payload request- [Tsid 0x729 ]Number of TSIs 2: StartAddress 10.20.0.0 EndAddress 10.20.255.255 PortStart 0 PortEnd 65535 Protocol 0, StartAddress 192.168.100.0 EndAddress 192.168.100.255 PortStart 0 PortEnd 65535 Protocol 0 Number of TSRs 1:StartAddress 0.0.0.0 EndAddress 255.255.255.255 PortStart 0 PortEnd 65535 Protocol 0
+Remote <IPaddress>:500: Local <IPaddress>:500: [SEND] Proposed Traffic Selector payload will be (Final Negotiated) - [Tsid 0x729 ]Number of TSIs 2: StartAddress 10.20.0.0 EndAddress 10.20.255.255 PortStart 0 PortEnd 65535 Protocol 0, StartAddress 192.168.100.0 EndAddress 192.168.100.255 PortStart 0 PortEnd 65535 Protocol 0 Number of TSRs 1:StartAddress 0.0.0.0 EndAddress 255.255.255.255 PortStart 0 PortEnd 65535 Protocol 0
+Remote <IPaddress>:500: Local <IPaddress>:500: [RECEIVED]Received IPSec payload: Policy1:Cipher=DESIntegrity=Md5
+IkeCleanupQMNegotiation called with error 13868 and flags a
+Remote <IPaddress>:500: Local <IPaddress>:500: [SEND][NOTIFY] Sending Notify Message - Policy Mismatch
+```
+
+### IKEErrors.txt
+
+The **IKEErrors.txt** file contains any IKE errors that were found during monitoring.
+
+The following example shows the contents of an IKEErrors.txt file. Your errors might be different depending on the issue.
+
+```
+Error: Authentication failed. Check shared key. Check crypto. Check lifetimes.
+ based on log : Peer failed with Windows error 13801(ERROR_IPSEC_IKE_AUTH_FAIL)
+Error: On-prem device sent invalid payload.
+ based on log : IkeFindPayloadInPacket failed with Windows error 13843(ERROR_IPSEC_IKE_INVALID_PAYLOAD)
+```
+
+### Scrubbed-wfpdiag.txt
+
+The **Scrubbed-wfpdiag.txt** log file contains the wfp log. This log contains logging of packet drop and IKE/AuthIP failures.
+
+The following example shows the contents of the Scrubbed-wfpdiag.txt file. In this example, the pre-shared key of a Connection wasn't correct as can be seen from the third line from the bottom. The following example is just a snippet of the entire log, as the log can be lengthy depending on the issue.
+
+```
+...
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Deleted ICookie from the high priority thread pool list
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|IKE diagnostic event:
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Event Header:
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Timestamp: 1601-01-01T00:00:00.000Z
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Flags: 0x00000106
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Local address field set
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Remote address field set
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IP version field set
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IP version: IPv4
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IP protocol: 0
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Local address: 13.78.238.92
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Remote address: 52.161.24.36
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Local Port: 0
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Remote Port: 0
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Application ID:
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| User SID: <invalid>
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Failure type: IKE/Authip Main Mode Failure
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|Type specific info:
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Failure error code:0x000035e9
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| IKE authentication credentials are unacceptable
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36|
+[0]0368.03A4::02/02/2017-17:36:01.496 [ikeext] 3038|52.161.24.36| Failure point: Remote
+...
+```
+
+### wfpdiag.txt.sum
+
+The **wfpdiag.txt.sum** file is a log showing the buffers and events processed.
+
+The following example is the contents of a wfpdiag.txt.sum file.
+```
+Files Processed:
+ C:\Resources\directory\924336c47dd045d5a246c349b8ae57f2.GatewayTenantWorker.DiagnosticsStorage\2017-02-02T17-34-23\wfpdiag.etl
+Total Buffers Processed 8
+Total Events Processed 2169
+Total Events Lost 0
+Total Format Errors 0
+Total Formats Unknown 486
+Elapsed Time 330 sec
++--+
+|EventCount EventName EventType TMF |
++--+
+| 36 ikeext ike_addr_utils_c844 a0c064ca-d954-350a-8b2f-1a7464eef8b6|
+| 12 ikeext ike_addr_utils_c857 a0c064ca-d954-350a-8b2f-1a7464eef8b6|
+| 96 ikeext ike_addr_utils_c832 a0c064ca-d954-350a-8b2f-1a7464eef8b6|
+| 6 ikeext ike_bfe_callbacks_c133 1dc2d67f-8381-6303-e314-6c1452eeb529|
+| 6 ikeext ike_bfe_callbacks_c61 1dc2d67f-8381-6303-e314-6c1452eeb529|
+| 12 ikeext ike_sa_management_c5698 7857a320-42ee-6e90-d5d9-3f414e3ea2d3|
+| 6 ikeext ike_sa_management_c8447 7857a320-42ee-6e90-d5d9-3f414e3ea2d3|
+| 12 ikeext ike_sa_management_c494 7857a320-42ee-6e90-d5d9-3f414e3ea2d3|
+| 12 ikeext ike_sa_management_c642 7857a320-42ee-6e90-d5d9-3f414e3ea2d3|
+| 6 ikeext ike_sa_management_c3162 7857a320-42ee-6e90-d5d9-3f414e3ea2d3|
+| 12 ikeext ike_sa_management_c3307 7857a320-42ee-6e90-d5d9-3f414e3ea2d3|
+```
+
+## Considerations
+
+- Only one VPN troubleshoot operation can be run at a time per subscription. To run another VPN troubleshoot operation, wait for the previous one to complete. Triggering a new operation while a previous one hasn't completed causes the subsequent operations to fail.
+- CLI bug: If you're using the Azure CLI to run the command, the VPN gateway and the storage account need to be in the same resource group. Customers with the resources in different resource groups can use PowerShell or the Azure portal instead.
+
+## Next step
+
+To learn how to diagnose a problem with a virtual network gateway or gateway connection, see [Diagnose communication problems between virtual networks](diagnose-communication-problem-between-networks.md).
++
network-watcher Vpn Troubleshoot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vpn-troubleshoot-powershell.md
+
+ Title: Troubleshoot VPN gateways and connections - PowerShell
+
+description: Learn how to use Azure Network Watcher VPN troubleshoot capability to troubleshoot VPN virtual network gateways and their connections using PowerShell.
++++ Last updated : 11/29/2023++
+#CustomerIntent: As a network administrator, I want to determine why resources in a virtual network can't communicate with resources in a different virtual network over a VPN connection.
++
+# Troubleshoot VPN virtual network gateways and connections using PowerShell
+
+> [!div class="op_single_selector"]
+> - [Portal](diagnose-communication-problem-between-networks.md)
+> - [PowerShell](vpn-troubleshoot-powershell.md)
+> - [Azure CLI](vpn-troubleshoot-cli.md)
+
+In this article, you learn how to use the Network Watcher VPN troubleshoot capability to diagnose VPN virtual network gateways and their connections, and to solve connectivity issues between your virtual network and your on-premises network. VPN troubleshoot requests are long running and can take several minutes to return a result. The logs from troubleshooting are stored in a container in the storage account that you specify.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A Network Watcher enabled in the region of the virtual network gateway. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md?tabs=powershell).
+
+- A virtual network gateway. For more information about supported gateway types, see [Supported gateway types](vpn-troubleshoot-overview.md#supported-gateway-types).
+
+- Azure Cloud Shell or Azure PowerShell.
+
+ The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+## Troubleshoot using an existing storage account
+
+In this section, you learn how to troubleshoot a VPN virtual network gateway or a VPN connection using an existing storage account.
+
+# [**Gateway**](#tab/gateway)
+
+Use [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) to start troubleshooting the VPN gateway.
+
+```azurepowershell-interactive
+# Place the virtual network gateway configuration into a variable.
+$vng = Get-AzVirtualNetworkGateway -Name 'myGateway' -ResourceGroupName 'myResourceGroup'
+
+# Place the storage account configuration into a variable.
+$sa = Get-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mystorageaccount'
+
+# Start VPN troubleshoot session.
+Start-AzNetworkWatcherResourceTroubleshooting -Location 'eastus' -TargetResourceId $vng.Id -StorageId $sa.Id -StoragePath 'https://mystorageaccount.blob.core.windows.net/{containerName}'
+```
+
+# [**Connection**](#tab/connection)
+
+Use [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) to start troubleshooting the VPN connection.
+
+```azurepowershell-interactive
+# Place the VPN connection configuration into a variable.
+$connection = Get-AzVirtualNetworkGatewayConnection -Name 'myConnection' -ResourceGroupName 'myResourceGroup'
+
+# Place the storage account configuration into a variable.
+$sa = Get-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mystorageaccount'
+
+# Start VPN troubleshoot session.
+Start-AzNetworkWatcherResourceTroubleshooting -Location 'eastus' -TargetResourceId $connection.Id -StorageId $sa.Id -StoragePath 'https://mystorageaccount.blob.core.windows.net/{containerName}'
+```
+++
+After the troubleshooting request is completed, ***healthy*** or ***unhealthy*** is returned. Detailed logs are stored in the storage account container you specified in the previous command. For more information, see [Log files](vpn-troubleshoot-overview.md#log-files). You can use Storage Explorer or any other method you prefer to access and download the logs, as in the following sketch. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
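+
+For example, here's a minimal PowerShell sketch that lists and downloads the generated log files, assuming the `$sa` variable from the previous step (the container and blob names are placeholders):
+
+```azurepowershell-interactive
+# List the blobs that the troubleshooting session wrote to the container.
+Get-AzStorageBlob -Container '{containerName}' -Context $sa.Context
+
+# Download one of the log files to the current directory.
+Get-AzStorageBlobContent -Container '{containerName}' -Blob '{blobName}' -Destination '.' -Context $sa.Context
+```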
+
+## Troubleshoot using a new storage account
+
+In this section, you learn how to troubleshoot a VPN virtual network gateway or a VPN connection using a new storage account.
+
+# [**Gateway**](#tab/gateway)
+
+Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) and [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) to create a new storage account and a container. Then, use [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) to start troubleshooting the VPN gateway.
+
+```azurepowershell-interactive
+# Place the virtual network gateway configuration into a variable.
+$vng = Get-AzVirtualNetworkGateway -Name 'myGateway' -ResourceGroupName 'myResourceGroup'
+
+# Create a new storage account.
+$sa = New-AzStorageAccount -Name 'mystorageaccount' -SKU 'Standard_LRS' -ResourceGroupName 'myResourceGroup' -Location 'eastus'
+
+# Create a container.
+Set-AzCurrentStorageAccount -ResourceGroupName $sa.ResourceGroupName -Name $sa.StorageAccountName
+$sc = New-AzStorageContainer -Name 'vpn'
+
+# Start VPN troubleshoot session.
+Start-AzNetworkWatcherResourceTroubleshooting -Location 'eastus' -TargetResourceId $vng.Id -StorageId $sa.Id -StoragePath 'https://mystorageaccount.blob.core.windows.net/vpn'
+```
+
+# [**Connection**](#tab/connection)
+
+Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) and [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) to create a new storage account and a container. Then, use [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) to start troubleshooting the VPN connection.
+
+```azurepowershell-interactive
+# Place the VPN connection configuration into a variable.
+$connection = Get-AzVirtualNetworkGatewayConnection -Name 'myConnection' -ResourceGroupName 'myResourceGroup'
+
+# Create a new storage account.
+$sa = New-AzStorageAccount -Name 'mystorageaccount' -SKU 'Standard_LRS' -ResourceGroupName 'myResourceGroup' -Location 'eastus'
+
+# Create a container.
+Set-AzCurrentStorageAccount -ResourceGroupName $sa.ResourceGroupName -Name $sa.StorageAccountName
+$sc = New-AzStorageContainer -Name 'vpn'
+
+# Start VPN troubleshoot session.
+Start-AzNetworkWatcherResourceTroubleshooting -Location 'eastus' -TargetResourceId $connection.Id -StorageId $sa.Id -StoragePath 'https://mystorageaccount.blob.core.windows.net/vpn'
+```
+++
+After the troubleshooting request is completed, ***healthy*** or ***unhealthy*** is returned. Detailed logs are stored in the storage account container you specified in the previous command. For more information, see [Log files](vpn-troubleshoot-overview.md#log-files). You can use Storage Explorer or any other method you prefer to access and download the logs. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
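+
+If you need to review the outcome again later, you can retrieve the result of the last troubleshooting operation without rerunning the diagnosis. A minimal sketch, assuming the `$vng` variable from the gateway example above and a Network Watcher in the `eastus` region:
+
+```azurepowershell-interactive
+# Retrieve the Network Watcher instance for the gateway's region.
+$networkWatcher = Get-AzNetworkWatcher -Location 'eastus'
+
+# Retrieve the result of the most recent troubleshooting operation.
+Get-AzNetworkWatcherTroubleshootingResult -NetworkWatcher $networkWatcher -TargetResourceId $vng.Id
+```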
+
+## Related content
+
+- [Tutorial: Diagnose a communication problem between virtual networks using the Azure portal](diagnose-communication-problem-between-networks.md).
+
+- [VPN troubleshoot overview](vpn-troubleshoot-overview.md).
networking Disaster Recovery Dns Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/disaster-recovery-dns-traffic-manager.md
na Previously updated : 04/06/2021 Last updated : 11/30/2023
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
In order to reach the desired endpoints, you need to add the required egress end
## Nexus Kubernetes cluster availability zone
-When you're creating a Nexus Kubernetes cluster, you can schedule the cluster onto specific racks or distribute it evenly across multiple racks. This technique can improve resource utilization and fault tolerance.
+When you're creating a Nexus Kubernetes cluster, you can schedule the cluster onto specific racks or distribute it across multiple racks. This technique can improve resource utilization and fault tolerance.
If you don't specify a zone when you're creating a Nexus Kubernetes cluster, the Azure Operator Nexus platform automatically implements a default anti-affinity rule. This rule aims to prevent scheduling the cluster VM on a node that already has a VM from the same cluster, but it's a best-effort approach and can't make guarantees.
partner-solutions Astronomer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/astronomer/astronomer-troubleshoot.md
description: This article provides information about getting support and trouble
- ignite-2023 Previously updated : 11/13/2023 Last updated : 11/29/2023+ # Troubleshooting Astro (Preview) integration with Azure
If SSO isn't working for the Astronomer portal, verify you're using the correct
For more information, see the [single sign-on guidance](astronomer-manage.md#single-sign-on).
+### Unable to install Astro using a personal email
+
+Installing Apache Airflow on Astro from the Azure Marketplace using a personal email from a generic domain isn't supported. To install this service, use an email address with a unique domain, such as an email address associated with work or school, or create a new user in Azure and make that user a subscription owner. For more information, see [Install Astro from the Azure Marketplace using a personal email](https://docs.astronomer.io/astro/install-azure#install-astro-from-the-azure-marketplace-using-a-personal-email).
+ ## Next steps - Learn about [managing your instance](astronomer-manage.md) of Astro.
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md
The Azure Datadog integration provides you with the ability to install Datadog a
If the Datadog agent is configured with an incorrect key, navigate to the API keys screen and change the **Default Key**. You must uninstall the Datadog agent and reinstall it to configure the virtual machine with the new API keys.
+## Diagnostic settings are active even after disabling the Datadog resource or applying necessary tag rules
+
+If logs are being emitted and diagnostic settings remain active on monitored resources even after the Datadog resource is disabled or tag rules have been modified to exclude certain resources, it's likely that there's a delete lock applied to the resources or to the resource group that contains them. This lock prevents the cleanup of the diagnostic settings, so logs continue to be forwarded for those resources. To resolve this issue, remove the delete lock from the resource or the resource group. If the lock is removed after the Datadog resource is deleted, the diagnostic settings have to be cleaned up manually to stop log forwarding.
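As a sketch of the cleanup, assuming the lock sits on a hypothetical resource group named `myResourceGroup`, you can find and remove delete locks with Azure PowerShell:

```azurepowershell-interactive
# Find delete (CanNotDelete) locks on the resource group that contains the monitored resources.
$locks = Get-AzResourceLock -ResourceGroupName 'myResourceGroup' |
    Where-Object { $_.Properties.level -eq 'CanNotDelete' }

# Remove each delete lock so the diagnostic settings can be cleaned up.
foreach ($lock in $locks) {
    Remove-AzResourceLock -LockId $lock.LockId -Force
}
```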
+ ## Next steps - Learn about [managing your instance](manage.md) of Datadog.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
If those options don't solve the problem, contact [Dynatrace support](https://s
- To collect metrics, you must have owner permission on the subscription. If you're a contributor, refer to the contributor guide mentioned in [Configure metrics and logs](dynatrace-create.md#configure-metrics-and-logs).
+### Diagnostic settings are active even after disabling the Dynatrace resource or applying necessary tag rules
+
+If logs are being emitted and diagnostic settings remain active on monitored resources even after the Dynatrace resource is disabled or tag rules have been modified to exclude certain resources, it's likely that there's a delete lock applied to the resources or to the resource group that contains them. This lock prevents the cleanup of the diagnostic settings, so logs continue to be forwarded for those resources. To resolve this issue, remove the delete lock from the resource or the resource group. If the lock is removed after the Dynatrace resource is deleted, the diagnostic settings have to be cleaned up manually to stop log forwarding.
+ ### Free trial errors - **Unable to create another free trial resource on Azure**
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
Only users who have *Owner* or *Contributor* access on the Azure subscription ca
- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.
+## Diagnostic settings are active even after disabling the Elastic resource or applying necessary tag rules
+
+If logs are being emitted and diagnostic settings remain active on monitored resources even after the Elastic resource is disabled or tag rules have been modified to exclude certain resources, it's likely that there's a delete lock applied to the resources or to the resource group that contains them. This lock prevents the cleanup of the diagnostic settings, so logs continue to be forwarded for those resources. To resolve this issue, remove the delete lock from the resource or the resource group. If the lock is removed after the Elastic resource is deleted, the diagnostic settings have to be cleaned up manually to stop log forwarding.
+ ## Marketplace Purchase errors [!INCLUDE [marketplace-purchase-errors](../includes/marketplace-purchase-errors.md)]
partner-solutions New Relic Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-troubleshoot.md
If your Azure subscription is suspended or deleted because of payment-related is
New Relic manages the APIs for creating and managing resources, and for the storage and processing of customer telemetry data. The New Relic APIs might be on or outside Azure. If your Azure subscription and resource are working correctly but the New Relic portal shows problems with monitoring data, contact New Relic support.
+### Diagnostic settings are active even after disabling the New Relic resource or applying necessary tag rules
+
+If logs are being emitted and diagnostic settings remain active on monitored resources even after the New Relic resource is disabled or tag rules have been modified to exclude certain resources, it's likely that there's a delete lock applied to the resources or to the resource group that contains them. This lock prevents the cleanup of the diagnostic settings, so logs continue to be forwarded for those resources. To resolve this issue, remove the delete lock from the resource or the resource group. If the lock is removed after the New Relic resource is deleted, the diagnostic settings have to be cleaned up manually to stop log forwarding.
+ ## Next steps - [Manage Azure Native New Relic Service](new-relic-how-to-manage.md)
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 9/20/2023 Last updated : 11/30/2023 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 9/20/2023
This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL.
+## Release: December 2023
+* Public preview of [Server logs](./how-to-server-logs-portal.md).
+* General availability of [TLS Version 1.3 support](./concepts-networking-ssl-tls.md#tls-versions).
## Release: November 2023 * General availability of PostgreSQL 16 for Azure Database for PostgreSQL – Flexible Server. * General availability of [near-zero downtime scaling](concepts-compute-storage.md).
This page provides latest news and updates regarding feature additions, engine v
## Release: October 2023 * Support for [minor versions](./concepts-supported-versions.md) 15.4, 14.9, 13.12, 12.16, 11.21 <sup>$</sup> * General availability of [Grafana Monitoring Dashboard](https://grafana.com/grafana/dashboards/19556-azure-azure-postgresql-flexible-server-monitoring/) for Azure Database for PostgreSQL – Flexible Server.
-* Public preview of Server Logs Download for Azure Database for PostgreSQL – Flexible Server.
## Release: September 2023 * General availability of [Storage auto-grow](./concepts-compute-storage.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
private-5g-core Azure Private 5G Core Release Notes 2310 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2310.md
Previously updated : 11/07/2023 Last updated : 11/30/2023 # Azure Private 5G Core 2310 release notes The following release notes identify the new features, critical open issues, and resolved issues for the 2310 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, review the information contained in these release notes.
-This article applies to the AP5GC 2310 release (2310.0-X). This release is compatible with the Azure Stack Edge Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2309 release and supports the 2023-09-01, 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+This article applies to the AP5GC 2310 release (2310.0-8). This release is compatible with the Azure Stack Edge Pro 1 GPU and Azure Stack Edge Pro 2 running the ASE 2309 release and supports the 2023-09-01, 2023-06-01 and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
For more information about compatibility, see [Packet core and Azure Stack Edge compatibility](azure-stack-edge-packet-core-compatibility.md).
This feature categorizes a few metrics based on the RAN identifier, for example
### Combined 4G/5G on a single packet core This feature allows a single packet core to support both 4G and 5G networks on a single Mobile Network site. You can deploy a RAN network with both 4G and 5G radios and connect to a single packet core. - ## Issues fixed in the AP5GC 2310 release The following table provides a summary of issues fixed in this release.
- |No. |Feature | Issue |
- |--|--|--|
- | 1 | Packet Forwarding | In scenarios of sustained high load (for example, continuous setup of 100s of TCP flows per second) in 4G setups, AP5GC might encounter an internal error, leading to a short period of service disruption resulting in some call failures. |
+ |No. |Feature | Issue | SKU Fixed In |
 + |--|--|--|--|
+ | 1 | Packet Forwarding | In scenarios of sustained high load (for example, continuous setup of 100s of TCP flows per second) in 4G setups, AP5GC might encounter an internal error, leading to a short period of service disruption resulting in some call failures. | 2310.0-4 |
 + | 2 | Packet Forwarding | An intermittent fault at the network layer causes an outage of packet forwarding. | 2310.0-8 |
 + | 3 | Diagnosability | During packet capture, uplink userplane packets can be omitted from packet captures. | 2310.0-8 |
 + | 4 | Packet Forwarding | Errors in userplane packet detection rules can cause incorrect packet handling. | 2310.0-8 |
 + | 5 | Diagnosability | Procedures from different subscribers appear in the same trace. | 2310.0-8 |
## Known issues in the AP5GC 2310 release
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
Below is the list of primary and secondary regions for regions where disaster re
||-|| |Americas | South Central US | North Central US | |Americas | East US | West US |
+|Americas | Brazil South* | |
|Europe | North Europe | West Europe | |Europe | West Europe | North Europe |
+(*) These regions are restricted in supporting customer scenarios for disaster recovery. For more information, contact your Microsoft sales or customer representative.
+ Azure Data Manager for Energy uses Azure Storage, Azure Cosmos DB, and an Elasticsearch index as the underlying data stores for persisting your data partition data. These data stores offer high durability, availability, and scalability. Azure Data Manager for Energy uses [geo-zone-redundant storage](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) (GZRS) to automatically replicate data to a secondary region that's hundreds of miles away from the primary region. The same security features that protect your data in the primary region (for example, encryption at rest using your encryption key) also apply to the secondary region. Similarly, Azure Cosmos DB is a globally distributed data service that replicates the metadata (catalog) across regions. Elasticsearch index snapshots are taken at regular intervals and geo-replicated to the secondary region. All in-flight data is ephemeral and therefore subject to loss. For example, in-transit data that's part of an ongoing ingestion job and isn't yet persisted is lost, and you must restart the ingestion process upon recovery. > [!IMPORTANT]
role-based-access-control Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/best-practices.md
Previously updated : 11/06/2023 Last updated : 11/29/2023 #Customer intent: As a dev, devops, or it admin, I want to learn how to best use Azure RBAC.
Some roles are identified as [privileged administrator roles](./role-assignments
- Remove unnecessary privileged role assignments. - Avoid assigning a privileged administrator role when a [job function role](./role-assignments-steps.md#job-function-roles) can be used instead. - If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource, instead of a broader scope, such as management group or subscription.-- If you are assigning a role with permission to create role assignments, consider adding a condition to constrain the role assignment. For more information, see [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md).
+- If you are assigning a role with permission to create role assignments, consider adding a condition to constrain the role assignment. For more information, see [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md).
For more information, see [List or manage privileged administrator role assignments](./role-assignments-list-portal.md#list-or-manage-privileged-administrator-role-assignments).
role-based-access-control Conditions Authorization Actions Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-authorization-actions-attributes.md
Previously updated : 09/20/2023 Last updated : 11/29/2023 #Customer intent: As a dev, devops, or it admin, I want to
# Authorization actions and attributes (preview) > [!IMPORTANT]
-> Delegating Azure role assignments with conditions is currently in PREVIEW.
+> Delegating Azure role assignment management with conditions is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Authorization actions
This section lists the authorization attributes you can use in your condition ex
## Next steps -- [Examples to delegate Azure role assignments with conditions (preview)](delegate-role-assignments-examples.md)-- [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md)
+- [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md)
+- [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md)
role-based-access-control Delegate Role Assignments Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-examples.md
Title: Examples to delegate Azure role assignments with conditions (preview) - Azure ABAC
-description: Examples to delegate the Azure role assignment task with conditions to other users by using Azure attribute-based access control (Azure ABAC).
+ Title: Examples to delegate Azure role assignment management with conditions (preview) - Azure ABAC
+description: Examples to delegate Azure role assignment management to other users by using Azure attribute-based access control (Azure ABAC).
Previously updated : 09/20/2023 Last updated : 11/29/2023 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
-# Examples to delegate Azure role assignments with conditions (preview)
+# Examples to delegate Azure role assignment management with conditions (preview)
> [!IMPORTANT]
-> Delegating Azure role assignments with conditions is currently in PREVIEW.
+> Delegating Azure role assignment management with conditions is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-This article lists examples to delegate the Azure role assignment task with conditions to other users.
+This article lists examples of how to delegate Azure role assignment management to other users with conditions.
## Prerequisites
role-based-access-control Delegate Role Assignments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-overview.md
Title: Delegate Azure access management to others - Azure ABAC
-description: Overview of how to delegate the Azure role assignment task to other users by using Azure attribute-based access control (Azure ABAC).
+description: Overview of how to delegate Azure role assignment management to other users by using Azure attribute-based access control (Azure ABAC).
Previously updated : 11/28/2023 Last updated : 11/29/2023
-#Customer intent: As a dev, devops, or it admin, I want to delegate the Azure role assignment task to other users who are closer to the decision, but want to limit the scope of the role assignments.
+#Customer intent: As a dev, devops, or it admin, I want to delegate Azure role assignment management to other users who are closer to the decision, but want to limit the scope of the role assignments.
# Delegate Azure access management to others In [Azure role-based access control (Azure RBAC)](overview.md), to grant access to Azure resources, you assign Azure roles. For example, if a user needs to create and manage websites in a subscription, you assign the Website Contributor role.
-Assigning Azure roles to grant access to Azure resources is a common task. As an administrator, you might get several requests to grant access that you want to delegate to someone else. However, you want to make sure the delegate has just the permissions they need to do their job. This article describes a more secure way to delegate the role assignment task to other users in your organization.
+Assigning Azure roles to grant access to Azure resources is a common task. As an administrator, you might get several requests to grant access that you want to delegate to someone else. However, you want to make sure the delegate has just the permissions they need to do their job. This article describes a more secure way to delegate role assignment management to other users in your organization.
-## Why delegate role assignments?
+## Why delegate role assignment management?
-Here are some reasons why you might want to delegate the role assignment task to others:
+Here are some reasons why you might want to delegate role assignment management to others:
- You get several requests to assign roles in your organization. - Users are blocked waiting for the role assignment they need.
Here are some reasons why you might want to delegate the role assignment task to
- Users with permission to create virtual machines can't immediately sign in to the virtual machine without the Virtual Machine Administrator Login or Virtual Machine User Login role. Instead of tracking down an administrator to assign them a login role, it's more efficient if the user can assign the login role to themselves. - A developer has permissions to create an Azure Kubernetes Service (AKS) cluster and an Azure Container Registry (ACR), but needs to assign the AcrPull role to a managed identity so that it can pull images from the ACR. Instead of tracking down an administrator to assign the AcrPull role, it's more efficient if the developer can assign the role themselves.
-## How you currently can delegate role assignments
+## How you currently can delegate role assignment management
-The [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) roles are built-in roles that allow users to create role assignments. Members of these roles can decide who can have write, read, and delete permissions for any resource in a subscription. To delegate the role assignment task to another user, you can assign the Owner or User Access Administrator role to a user.
+The [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) roles are built-in roles that allow users to create role assignments. Members of these roles can decide who can have write, read, and delete permissions for any resource in a subscription. To delegate role assignment management to another user, you can assign the Owner or User Access Administrator role to a user.
The following diagram shows how Alice can delegate role assignment responsibilities to Dara. For specific steps, see [Assign a user as an administrator of an Azure subscription](role-assignments-portal-subscription-admin.md).
The following diagram shows how Alice can delegate role assignment responsibilit
## What are the issues with the current delegation method?
-Here are the primary issues with the current method of delegating role assignments to others in your organization.
+Here are the primary issues with the current method of delegating role assignment management to others in your organization.
- Delegate has unrestricted access at the role assignment scope. This violates the principle of least privilege, which exposes you to a wider attack surface. - Delegate can assign any role to any user within their scope, including themselves.
Here are the primary issues with the current method of delegating role assignmen
Instead of assigning the Owner or User Access Administrator roles, a more secure method is to constrain a delegate's ability to create role assignments.
-## A more secure method: Delegate role assignments with conditions (preview)
+## A more secure method: Delegate role assignment management with conditions (preview)
> [!IMPORTANT]
-> Delegating Azure role assignments with conditions is currently in PREVIEW.
+> Delegating Azure role assignment management with conditions is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Delegating role assignments with conditions is a way to restrict the role assignments a user can create. In the preceding example, Alice can allow Dara to create some role assignments on her behalf, but not all role assignments. For example, Alice can constrain the roles that Dara can assign and constrain the principals that Dara can assign roles to. This delegation with conditions is sometimes referred to as *constrained delegation* and is implemented with [Azure attribute-based access control (Azure ABAC) conditions](conditions-overview.md).
+Delegating role assignment management with conditions is a way to restrict the role assignments a user can create. In the preceding example, Alice can allow Dara to create some role assignments on her behalf, but not all role assignments. For example, Alice can constrain the roles that Dara can assign and constrain the principals that Dara can assign roles to. This delegation with conditions is sometimes referred to as *constrained delegation* and is implemented using [Azure attribute-based access control (Azure ABAC) conditions](conditions-overview.md).
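For illustration only, here's a minimal Azure PowerShell sketch of what such a constrained delegation can look like. The object ID, scope, and role GUID are placeholders, and the condition text follows the general pattern from the examples article linked later; it isn't the only valid form.

```azurepowershell-interactive
# Condition: the delegate may only create role assignments that use
# one specific role definition (placeholder GUID below).
$condition = @"
(
 (
  !(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})
 )
 OR
 (
  @Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals {00000000-0000-0000-0000-000000000000}
 )
)
"@

# Assign the delegating role to the delegate, constrained by the condition.
New-AzRoleAssignment -ObjectId '<delegate-object-id>' `
    -RoleDefinitionName 'Role Based Access Control Administrator (Preview)' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>' `
    -Condition $condition `
    -ConditionVersion '2.0'
```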
-This video provides an overview of delegating role assignments with conditions.
+This video provides an overview of delegating role assignment management with conditions.
>[!VIDEO https://www.youtube.com/embed/3eDf2thqeO4]
-## Why delegate role assignments with conditions?
+## Why delegate role assignment management with conditions?
-Here are some reasons why delegating the role assignment task to others with conditions is more secure:
+Here are some reasons why delegating role assignment management to others with conditions is more secure:
- You can restrict the role assignments the delegate is allowed to create. - You can prevent a delegate from allowing another user to assign roles.
Consider an example where Alice is an administrator with the User Access Adminis
## Role Based Access Control Administrator role
-The [Role Based Access Control Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview) role is a built-in role that has been designed for delegating the role assignment task to others. It has fewer permissions than [User Access Administrator](built-in-roles.md#user-access-administrator), which follows least privilege best practices. The Role Based Access Control Administrator role has following permissions:
+The [Role Based Access Control Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview) role is a built-in role that has been designed for delegating role assignment management to others. It has fewer permissions than [User Access Administrator](built-in-roles.md#user-access-administrator), which follows least privilege best practices. The Role Based Access Control Administrator role has the following permissions:
- Create a role assignment at the specified scope
- Delete a role assignment at the specified scope
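To inspect the full action list this role grants in your environment, a quick check with Azure PowerShell (Az.Resources module) looks like this sketch:

```azurepowershell-interactive
# Retrieve the built-in role definition and list the control-plane actions it grants.
$role = Get-AzRoleDefinition -Name 'Role Based Access Control Administrator (Preview)'
$role.Actions
```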
Here are the ways that role assignments can be constrained with conditions. You
:::image type="content" source="./media/shared/actions-constrained.png" alt-text="Diagram of add and remove role assignments constrained to Backup Contributor or Backup Reader roles." lightbox="./media/shared/actions-constrained.png":::
-## How to delegate role assignments with conditions
+## How to delegate role assignment management with conditions
-To delegate role assignments with conditions, you assign roles as you currently do, but you also add a [condition to the role assignment](delegate-role-assignments-portal.md).
+To delegate role assignment management with conditions, you assign roles as you currently do, but you also add a [condition to the role assignment](delegate-role-assignments-portal.md).
1. Determine the permissions the delegate needs
To delegate role assignments with conditions, you assign roles as you currently
1. Select the delegate
- Select the user that you want to delegate the role assignments task to.
+ Select the user that you want to delegate role assignment management to.
1. Add a condition
To delegate role assignments with conditions, you assign roles as you currently
Choose from a list of condition templates. Select **Configure** to specify the roles, principal types, or principals.
- For more information, see [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md).
+ For more information, see [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md).
:::image type="content" source="./media/shared/condition-templates.png" alt-text="Screenshot of Add role assignment condition with a list of condition templates." lightbox="./media/shared/condition-templates.png":::
To delegate role assignments with conditions, you assign roles as you currently
If the condition templates don't work for your scenario or if you want more control, you can use the condition editor.
- For examples, see [Examples to delegate Azure role assignments with conditions (preview)](delegate-role-assignments-examples.md).
+ For examples, see [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md).
- :::image type="content" source="./media/shared/delegate-role-assignments-expression.png" alt-text="Screenshot of condition editor in Azure portal showing a role assignment condition to delegate role assignments with conditions." lightbox="./media/shared/delegate-role-assignments-expression.png":::
+ :::image type="content" source="./media/shared/delegate-role-assignments-expression.png" alt-text="Screenshot of condition editor in Azure portal showing a role assignment condition to delegate role assignment management." lightbox="./media/shared/delegate-role-assignments-expression.png":::
# [Azure PowerShell](#tab/azure-powershell)
If you want to further constrain the Key Vault Data Access Administrator role as
## Known issues
-Here are the known issues related to delegating role assignments with conditions (preview):
+Here are the known issues related to delegating role assignment management with conditions (preview):
-- You can't delegate role assignments with conditions using [Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
+- You can't delegate role assignment management with conditions using [Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
- You can't have a role assignment with a Microsoft.Storage data action and an ABAC condition that uses a GUID comparison operator. For more information, see [Troubleshoot Azure RBAC](troubleshooting.md#symptomauthorization-failed). - This preview isn't available in Azure Government or Microsoft Azure operated by 21Vianet.
Here are the known issues related to delegating role assignments with conditions
## Next steps -- [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md)
+- [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md)
- [What is Azure attribute-based access control (Azure ABAC)?](conditions-overview.md)-- [Examples to delegate Azure role assignments with conditions (preview)](delegate-role-assignments-examples.md)
+- [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md)
role-based-access-control Delegate Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-portal.md
Title: Delegate the Azure role assignment task to others with conditions (preview) - Azure ABAC
-description: How to delegate the Azure role assignment task with conditions to other users by using Azure attribute-based access control (Azure ABAC).
+ Title: Delegate Azure role assignment management to others with conditions (preview) - Azure ABAC
+description: How to delegate Azure role assignment management to other users by using Azure attribute-based access control (Azure ABAC).
Previously updated : 09/20/2023 Last updated : 11/29/2023
-#Customer intent: As a dev, devops, or it admin, I want to delegate the Azure role assignment task to other users who are closer to the decision, but want to limit the scope of the role assignments.
+#Customer intent: As a dev, devops, or it admin, I want to delegate Azure role assignment management to other users who are closer to the decision, but want to limit the scope of the role assignments.
-# Delegate the Azure role assignment task to others with conditions (preview)
+# Delegate Azure role assignment management to others with conditions (preview)
> [!IMPORTANT]
-> Delegating Azure role assignments with conditions is currently in PREVIEW.
+> Delegating Azure role assignment management with conditions is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-As an administrator, you might get several requests to grant access to Azure resources that you want to delegate to someone else. You could assign a user the [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator) roles, but these are highly privileged roles. This article describes a more secure way to [delegate the role assignment task](delegate-role-assignments-overview.md) to other users in your organization, but add restrictions for those role assignments. For example, you can constrain the roles that can be assigned or constrain the principals the roles can be assigned to.
+As an administrator, you might get several requests to grant access to Azure resources that you want to delegate to someone else. You could assign a user the [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator) roles, but these are highly privileged roles. This article describes a more secure way to [delegate role assignment management](delegate-role-assignments-overview.md) to other users in your organization, but add restrictions for those role assignments. For example, you can constrain the roles that can be assigned or constrain the principals the roles can be assigned to.
The following diagram shows how a delegate with conditions can only assign the Backup Contributor or Backup Reader roles to only the Marketing or Sales groups. ## Prerequisites
To help determine the permissions the delegate needs, answer the following quest
- Which principals can the delegate assign roles to? - Can the delegate remove any role assignments?
-Once you know the permissions that delegate needs, you use the following steps to add a condition to the delegate's role assignment. For example conditions, see [Examples to delegate Azure role assignments with conditions (preview)](delegate-role-assignments-examples.md).
+Once you know the permissions that the delegate needs, use the following steps to add a condition to the delegate's role assignment. For example conditions, see [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md).
## Step 2: Start a new role assignment
If the condition templates don't work for your scenario or if you want more cont
The Select an action pane appears. This pane is a filtered list of actions based on the role assignment that will be the target of your condition.
- :::image type="content" source="./media/delegate-role-assignments-portal/delegate-role-assignments-actions-select.png" alt-text="Screenshot of Select an action pane to delegate role assignments with conditions." lightbox="./media/delegate-role-assignments-portal/delegate-role-assignments-actions-select.png":::
+ :::image type="content" source="./media/delegate-role-assignments-portal/delegate-role-assignments-actions-select.png" alt-text="Screenshot of Select an action pane to delegate role assignment management with conditions." lightbox="./media/delegate-role-assignments-portal/delegate-role-assignments-actions-select.png":::
1. Select the **Create or update role assignments** action you want to allow if the condition is true.
If the condition templates don't work for your scenario or if you want more cont
1. In the **Value** box, enter one or more values for the right side of the expression.
- :::image type="content" source="./media/shared/delegate-role-assignments-expression.png" alt-text="Screenshot of Build expression section to delegate role assignments with conditions." lightbox="./media/shared/delegate-role-assignments-expression.png":::
+ :::image type="content" source="./media/shared/delegate-role-assignments-expression.png" alt-text="Screenshot of Build expression section to delegate role assignment management with conditions." lightbox="./media/shared/delegate-role-assignments-expression.png":::
1. Add additional expressions as needed. > [!TIP]
- > When you add multiple expressions to delegate role assignments with conditions, you typically use the **And** operator between expressions instead of the default **Or** operator.
+ > When you add multiple expressions to delegate role assignment management with conditions, you typically use the **And** operator between expressions instead of the default **Or** operator.
1. Select **Save** to add the condition to the role assignment.
role-based-access-control Role Assignments List Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-portal.md
Previously updated : 11/06/2023 Last updated : 11/29/2023
On the **Role assignments** tab, you can list and see the count of privileged ad
1. To manage privileged administrator role assignments, see the **Privileged** card and click **View assignments**.
- On the **Manage privileged role assignments** page, you can add a condition to constrain the privileged role assignment or remove the role assignment. For more information, see [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md).
+ On the **Manage privileged role assignments** page, you can add a condition to constrain the privileged role assignment or remove the role assignment. For more information, see [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md).
:::image type="content" source="./media/role-assignments-list-portal/access-control-role-assignments-privileged-manage.png" alt-text="Screenshot of Manage privileged role assignments page showing how to add conditions or remove role assignments." lightbox="./media/role-assignments-list-portal/access-control-role-assignments-privileged-manage.png":::
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 11/06/2023 Last updated : 11/29/2023
The **Conditions** tab will look different depending on the role you selected.
# [Delegate condition](#tab/delegate-condition) > [!IMPORTANT]
-> Delegating Azure role assignments with conditions is currently in PREVIEW.
+> Delegating Azure role assignment management with conditions is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. If you selected one of the following privileged roles, follow the steps in this section.
If you selected one of the following privileged roles, follow the steps in this
1. Click **Add condition** to add a condition that constrains the roles and principals this user can assign roles to.
-1. Follow the steps in [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md#step-3-add-a-condition).
+1. Follow the steps in [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md#step-3-add-a-condition).
# [Storage condition](#tab/storage-condition)
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md
- ignite-2023 Previously updated : 06/29/2023 Last updated : 11/30/2023 # Quickstart: Create a skillset in the Azure portal
-In this Azure AI Search quickstart, you learn how a skillset in Azure AI Search adds Optical Character Recognition (OCR), image analysis, language detection, text translation, and entity recognition to create text-searchable content in a search index.
+In this quickstart, you learn how a skillset in Azure AI Search adds Optical Character Recognition (OCR), image analysis, language detection, text translation, and entity recognition to generate text-searchable content in a search index.
-You can run the **Import data** wizard in the Azure portal to apply skills that create and transform textual content during indexing. Output is a searchable index containing AI-generated image text, captions, and entities. Generated content is queryable in the portal using [**Search explorer**](search-explorer.md).
+You can run the **Import data** wizard in the Azure portal to apply skills that create and transform textual content during indexing. Input is your raw data, usually blobs in Azure Storage. Output is a searchable index containing AI-generated image text, captions, and entities. Generated content is queryable in the portal using [**Search explorer**](search-explorer.md).
To prepare, you create a few resources and upload sample files before running the wizard.
In the following steps, set up a blob container in Azure Storage to store hetero
+ Choose the StorageV2 (general purpose V2).
-1. In Azure portal, open your Azure Storage page and create a container. You can use the default public access level.
+1. In Azure portal, open your Azure Storage page and create a container. You can use the default access level.
-1. In Container, select **Upload** to upload the sample files you downloaded in the first step. Notice that you have a wide range of content types, including images and application files that aren't full text searchable in their native formats.
+1. In Container, select **Upload** to upload the sample files. Notice that you have a wide range of content types, including images and application files that aren't full text searchable in their native formats.
:::image type="content" source="media/cognitive-search-quickstart-blob/sample-data.png" alt-text="Screenshot of source files in Azure Blob Storage." border="false":::
You're now ready to move on the Import data wizard.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, select **Import data** on the command bar to set up cognitive enrichment in four steps.
+1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices/) and on the Overview page, select **Import data** on the command bar to create searchable content in four steps.
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command." border="true":::
Next, configure AI enrichment to invoke OCR, image analysis, and natural languag
### Step 3: Configure the index
-An index contains your searchable content and the **Import data** wizard can usually create the schema for you by sampling the data source. In this step, review the generated schema and potentially revise any settings. Below is the default schema created for the demo Blob data set.
+An index contains your searchable content and the **Import data** wizard can usually create the schema by sampling the data source. In this step, review the generated schema and potentially revise any settings.
For this quickstart, the wizard does a good job setting reasonable defaults:
-+ Default fields are based on metadata properties for existing blobs, plus the new fields for the enrichment output (for example, `people`, `organizations`, `locations`). Data types are inferred from metadata and by data sampling.
++ Default fields are based on metadata properties of existing blobs, plus the new fields for the enrichment output (for example, `people`, `organizations`, `locations`). Data types are inferred from metadata and by data sampling. + Default document key is *metadata_storage_path* (selected because the field contains unique values).
For this quickstart, the wizard does a good job setting reasonable defaults:
:::image type="content" source="media/cognitive-search-quickstart-blob/index-fields.png" alt-text="Screenshot of the index definition page." border="true":::
-Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can control search results composition by using the **$select** query parameter to specify which fields to include.
+Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can control search results composition by using the **select** query parameter to specify which fields to include.
Continue to the next page.
Continue to the next page.
The indexer drives the indexing process. It specifies the data source name, a target index, and frequency of execution. The **Import data** wizard creates several objects, including an indexer that you can reset and run repeatedly.
-1. In the **Indexer** page, you can accept the default name and select **Once** to run it immediately.
+1. In the **Indexer** page, accept the default name and select **Once**.
:::image type="content" source="media/cognitive-search-quickstart-blob/indexer-def.png" alt-text="Screenshot of the indexer definition page." border="true":::
The indexer drives the indexing process. It specifies the data source name, a ta
## Monitor status
-Cognitive skills indexing takes longer to complete than typical text-based indexing, especially OCR and image analysis. To monitor progress, go to the Overview page and select **Indexers** in the middle of page.
+Select **Indexers** from the left navigation pane to monitor status, and then select the indexer. Skills-based indexing takes longer than text-based indexing, especially OCR and image analysis.
:::image type="content" source="media/cognitive-search-quickstart-blob/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true":::
-To check details about execution status, select an indexer from the list, and then select **Success** (or **Failed**) to view execution details.
+To view execution details, select **Success** (or **Failed**).
-In this demo, there's one warning: `"Could not execute skill because one or more skill input was invalid."` It tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. This warning occurs because the upstream OCR skill didn't recognize any text in the image, and thus couldn't provide a text input to the downstream Entity Recognition skill.
+In this demo, there are a few warnings: `"Could not execute skill because one or more skill input was invalid."` They tell you that a PNG file in the data source doesn't provide a text input to Entity Recognition. These warnings occur because the upstream OCR skill didn't recognize any text in the image, and thus couldn't provide a text input to the downstream Entity Recognition skill.
Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you might begin to notice patterns and learn which warnings are safe to ignore. ## Query in Search explorer
-After an index is created, run queries in **Search explorer** to return results.
+After an index is created, use **Search explorer** to return results.
-1. On the search service dashboard page, select **Search explorer** on the command bar.
+1. On the left, select **Indexes** and then select the index. **Search explorer** is on the first tab.
-1. Select **Change Index** at the top to select the index you created.
-
-1. Enter a search string to query the index, such as `search=Satya Nadella&$select=people,organizations,locations&$count=true`.
+1. Enter a search string to query the index, such as `satya nadella`. The search bar accepts keywords, quote-enclosed phrases, and operators (`"Satya Nadella" +"Bill Gates" +"Steve Ballmer"`).
Results are returned as verbose JSON, which can be hard to read, especially in large documents. Some tips for searching in this tool include the following techniques:
-+ Append `$select` to limit the fields returned in results.
++ Switch to JSON view to specify parameters that shape results.
++ Add `select` to limit the fields in results.
++ Add `count` to show the number of matches.
+ Use CTRL-F to search within the JSON for specific properties or terms.
-Query strings are case-sensitive so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify name and case.
- :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer.png" alt-text="Screenshot of the Search explorer page." border="true":::
+Here's some JSON you can paste into the view:
+
+ ```json
+ {
+ "search": "\"Satya Nadella\" +\"Bill Gates\" +\"Steve Ballmer\"",
+ "count": true,
+ "select": "content, people"
+ }
+ ```
+
+> [!TIP]
+> Query strings are case-sensitive so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify name and case.
+ ## Takeaways
-You've now created your first skillset and learned important concepts useful for prototyping an enriched search solution using your own data.
+You've now created your first skillset and learned the basic steps of skills-based indexing.
-Some key concepts that we hope you picked up include the dependency on Azure data sources. A skillset is bound to an indexer, and indexers are Azure and source-specific. Although this quickstart uses Azure Blob Storage, other Azure data sources are possible. For more information, see [Indexers in Azure AI Search](search-indexer-overview.md).
+Some key concepts that we hope you picked up include the dependencies between objects. A skillset is bound to an indexer, and indexers are Azure and source-specific. Although this quickstart uses Azure Blob Storage, other Azure data sources are possible. For more information, see [Indexers in Azure AI Search](search-indexer-overview.md).
Another important concept is that skills operate over content types, and when working with heterogeneous content, some inputs are skipped. Also, large files or fields might exceed the indexer limits of your service tier. It's normal to see warnings when these events occur.
-Output is directed to a search index, and there's a mapping between name-value pairs created during indexing and individual fields in your index. Internally, the portal sets up [annotations](cognitive-search-concept-annotations-syntax.md) and defines a [skillset](cognitive-search-defining-skillset.md), establishing the order of operations and general flow. These steps are hidden in the portal, but when you start writing code, these concepts become important.
+Output is routed to a search index, and there's a mapping between name-value pairs created during indexing and individual fields in your index. Internally, the wizard sets up [an enrichment tree](cognitive-search-concept-annotations-syntax.md) and defines a [skillset](cognitive-search-defining-skillset.md), establishing the order of operations and general flow. These steps are hidden in the wizard, but when you start writing code, these concepts become important.
Finally, you learned that you can verify content by querying the index. In the end, what Azure AI Search provides is a searchable index, which you can query using either the [simple](/rest/api/searchservice/simple-query-syntax-in-azure-search) or [fully extended query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search). An index containing enriched fields is like any other. If you want to incorporate standard or [custom analyzers](search-analyzers.md), [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [synonyms](search-synonyms.md), [faceted navigation](search-faceted-navigation.md), geo-search, or any other Azure AI Search feature, you can certainly do so.
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you use a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Search Get Started Portal Import Vectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-import-vectors.md
- ignite-2023 Previously updated : 11/06/2023 Last updated : 11/29/2023 # Quickstart: Integrated vectorization (preview)
Search explorer accepts text strings as input and then vectorizes the text for v
1. Make sure the API version is **2023-10-01-preview**.
-1. Enter your search string. Here's a string that gets a count of the chunked documents and selects just the title and chunk fields: `$count=true&$select=title,chunk`.
+1. Select **JSON view** so that you can enter text for your vector query in the **text** vector query parameter.
1. Select **Search**.
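For illustration, here's the general shape of a JSON query you might paste into the JSON view for the **2023-10-01-preview** API. The field names (`title`, `chunk`, and the `vector` field) are assumptions based on the wizard's defaults, so adjust them to match your index:

```json
{
  "count": true,
  "select": "title, chunk",
  "vectorQueries": [
    {
      "kind": "text",
      "text": "what does the documentation say about scaling?",
      "fields": "vector",
      "k": 5
    }
  ]
}
```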
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Previously updated : 11/16/2023 Last updated : 11/30/2023 - mode-ui - ignite-2023
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Data connectors are available as part of the following offerings:
## Cisco Systems, Inc. -- [[Deprecated] Cisco Firepower eStreamer via Legacy Agent](data-connectors/deprecated-cisco-firepower-estreamer-via-legacy-agent.md)-- [[Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA](data-connectors/recommended-cisco-firepower-estreamer-via-legacy-agent-via-ama.md)
+- [Cisco Firepower eStreamer](data-connectors/cisco-firepower-estreamer.md)
- [Cisco Software Defined WAN](data-connectors/cisco-software-defined-wan.md) ## Citrix
Data connectors are available as part of the following offerings:
## Cloud Software Group -- [[Deprecated] Citrix WAF (Web App Firewall) via Legacy Agent](data-connectors/deprecated-citrix-waf-web-app-firewall-via-legacy-agent.md)-- [[Recommended] Citrix WAF (Web App Firewall) via AMA](data-connectors/recommended-citrix-waf-web-app-firewall-via-ama.md)
+- [Citrix WAF (Web App Firewall)](data-connectors/deprecated-citrix-waf-web-app-firewall-via-legacy-agent.md)
- [CITRIX SECURITY ANALYTICS](data-connectors/citrix-security-analytics.md)+ ## Cloudflare - [Cloudflare (Preview) (using Azure Functions)](data-connectors/cloudflare-using-azure-functions.md)
Data connectors are available as part of the following offerings:
## Contrast Security -- [[Deprecated] Contrast Protect via Legacy Agent](data-connectors/deprecated-contrast-protect-via-legacy-agent.md)-- [[Recommended] Contrast Protect via AMA](data-connectors/recommended-contrast-protect-via-ama.md)
+- [Contrast Protect](data-connectors/contrast-protect.md)
## Corelight Inc.
Data connectors are available as part of the following offerings:
## CyberArk -- [[Deprecated] CyberArk Enterprise Password Vault (EPV) Events via Legacy Agent](data-connectors/deprecated-cyberark-enterprise-password-vault-epv-events-via-legacy-agent.md)-- [[Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA](data-connectors/recommended-cyberark-enterprise-password-vault-epv-events-via-ama.md)
+- [CyberArk Enterprise Password Vault (EPV) Events](data-connectors/cyberark-enterprise-password-vault-epv-events.md)
- [CyberArkEPM (using Azure Functions)](data-connectors/cyberarkepm-using-azure-functions.md) ## Cybersixgill
Data connectors are available as part of the following offerings:
## Darktrace plc -- [[Deprecated] AI Analyst Darktrace via Legacy Agent](data-connectors/deprecated-ai-analyst-darktrace-via-legacy-agent.md)-- [[Recommended] AI Analyst Darktrace via AMA](data-connectors/recommended-ai-analyst-darktrace-via-ama.md)
+- [AI Analyst Darktrace](data-connectors/ai-analyst-darktrace.md)
## Defend Limited
Data connectors are available as part of the following offerings:
## Delinea Inc. -- [[Deprecated] Delinea Secret Server via Legacy Agent](data-connectors/deprecated-delinea-secret-server-via-legacy-agent.md)-- [[Recommended] Delinea Secret Server via AMA](data-connectors/recommended-delinea-secret-server-via-ama.md)
+- [Delinea Secret Server](data-connectors/delinea-secret-server.md)
## Derdack
Data connectors are available as part of the following offerings:
## ExtraHop Networks, Inc. -- [[Deprecated] ExtraHop Reveal(x) via Legacy Agent](data-connectors/deprecated-extrahop-reveal-x-via-legacy-agent.md)-- [[Recommended] ExtraHop Reveal(x) via AMA](data-connectors/recommended-extrahop-reveal-x-via-ama.md)
+- [ExtraHop Reveal(x)](data-connectors/extrahop-reveal-x.md)
## F5, Inc.
-- [[Deprecated] F5 Networks via Legacy Agent](data-connectors/deprecated-f5-networks-via-legacy-agent.md)
-- [[Recommended] F5 Networks via AMA](data-connectors/recommended-f5-networks-via-ama.md)
+- [F5 Networks](data-connectors/f5-networks.md)
- [F5 BIG-IP](data-connectors/f5-big-ip.md)

## Facebook
Data connectors are available as part of the following offerings:
## iboss inc
-- [[Deprecated] iboss via Legacy Agent](data-connectors/deprecated-iboss-via-legacy-agent.md)
-- [[Recommended] iboss via AMA](data-connectors/recommended-iboss-via-ama.md)
+- [iboss](data-connectors/iboss.md)
+
## Illumio
Data connectors are available as part of the following offerings:
## Illusive Networks
-- [[Deprecated] Illusive Platform via Legacy Agent](data-connectors/deprecated-illusive-platform-via-legacy-agent.md)
-- [[Recommended] Illusive Platform via AMA](data-connectors/recommended-illusive-platform-via-ama.md)
+- [Illusive Platform](data-connectors/illusive-platform.md)
+
## Imperva
Data connectors are available as part of the following offerings:
## Morphisec
-- [[Deprecated] Morphisec UTPP via Legacy Agent](data-connectors/deprecated-morphisec-utpp-via-legacy-agent.md)
-- [[Recommended] Morphisec UTPP via AMA](data-connectors/recommended-morphisec-utpp-via-ama.md)
+- [Morphisec UTPP](data-connectors/morphisec-utpp.md)
## MuleSoft
Data connectors are available as part of the following offerings:
## SonicWall Inc
-- [[Deprecated] SonicWall Firewall via Legacy Agent](data-connectors/deprecated-sonicwall-firewall-via-legacy-agent.md)
-- [[Recommended] SonicWall Firewall via AMA](data-connectors/recommended-sonicwall-firewall-via-ama.md)
+- [SonicWall Firewall](data-connectors/sonicwall-firewall.md)
## Sonrai Security
Data connectors are available as part of the following offerings:
## vArmour Networks
-- [[Deprecated] vArmour Application Controller via Legacy Agent](data-connectors/deprecated-varmour-application-controller-via-legacy-agent.md)
-- [[Recommended] vArmour Application Controller via AMA](data-connectors/recommended-varmour-application-controller-via-ama.md)
+- [vArmour Application Controller](data-connectors/varmour-application-controller.md)
## Vectra AI, Inc
Data connectors are available as part of the following offerings:
## WireX Systems
-- [[Deprecated] WireX Network Forensics Platform via Legacy Agent](data-connectors/deprecated-wirex-network-forensics-platform-via-legacy-agent.md)
-- [[Recommended] WireX Network Forensics Platform via AMA](data-connectors/recommended-wirex-network-forensics-platform-via-ama.md)
+- [WireX Network Forensics Platform](data-connectors/wirex-network-forensics-platform.md)
+
## WithSecure
sentinel Ai Analyst Darktrace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ai-analyst-darktrace.md
+
+ Title: "AI Analyst Darktrace connector for Microsoft Sentinel"
+description: "Learn how to install the connector AI Analyst Darktrace to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# AI Analyst Darktrace connector for Microsoft Sentinel
+
+The Darktrace connector lets users connect Darktrace Model Breaches in real time with Microsoft Sentinel, allowing creation of custom Dashboards, Workbooks, Notebooks, and Custom Alerts to improve investigation. Microsoft Sentinel's enhanced visibility into Darktrace logs enables monitoring and mitigation of security threats.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (Darktrace)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Darktrace](https://www.darktrace.com/en/contact/) |
+
+## Query samples
+
+**10 most recent model breaches**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Darktrace"
+
+ | order by TimeGenerated desc
+
+ | limit 10
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure Darktrace to forward Syslog messages in CEF format to your Azure workspace via the Syslog agent.
+
+ 1) Within the Darktrace Threat Visualizer, navigate to the System Config page in the main menu under Admin.
+
+ 2) From the left-hand menu, select Modules and choose Microsoft Sentinel from the available Workflow Integrations.
+
+ 3) A configuration window will open. Locate Microsoft Sentinel Syslog CEF and click New to expand the configuration settings, unless they are already visible.
+
+ 4) In the Server configuration field, enter the location of the log forwarder and optionally modify the communication port. Ensure that the port selected is set to 514 and is allowed by any intermediary firewalls.
+
+ 5) Configure any alert thresholds, time offsets or additional settings as required.
+
+ 6) Review any additional configuration options you may wish to enable that alter the Syslog syntax.
+
+ 7) Enable Send Alerts and save your changes.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
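+
+As a quick spot check, a query along these lines (a minimal sketch; the one-hour window and 5-minute bins are arbitrary choices) confirms that recent Darktrace events are arriving:
+
+ ```kusto
+// Sketch: count Darktrace CEF events ingested in the last hour, in 5-minute bins
+CommonSecurityLog
+ | where DeviceVendor == "Darktrace"
+ | where TimeGenerated > ago(1h)
+ | summarize Events = count() by bin(TimeGenerated, 5m)
+ | order by TimeGenerated desc
+ ```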
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
sentinel Azure Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/azure-ddos-protection.md
Title: "Azure DDoS Protection connector for Microsoft Sentinel"
description: "Learn how to install the connector Azure DDoS Protection to connect your data source to Microsoft Sentinel." Previously updated : 06/22/2023 Last updated : 11/29/2023 # Azure DDoS Protection connector for Microsoft Sentinel
-Connect to Azure DDoS Protection logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
+Connect to Azure DDoS Protection Standard logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection Standard provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes
sentinel Cisco Firepower Estreamer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-firepower-estreamer.md
+
+ Title: "Cisco Firepower eStreamer connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cisco Firepower eStreamer to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Cisco Firepower eStreamer connector for Microsoft Sentinel
+
+eStreamer is a client-server API designed for the Cisco Firepower NGFW solution. The eStreamer client requests detailed event data on behalf of the SIEM or logging solution in the Common Event Format (CEF).
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CiscoFirepowerEstreamerCEF)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cisco](https://www.cisco.com/c/en_in/support/index.html) |
+
+## Query samples
+
+**Firewall Blocked Events**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where DeviceAction != "Allow"
+ ```
+
+**File Malware Events**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where Activity == "File Malware Event"
+ ```
+
+**Outbound Web Traffic Port 80**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where DestinationPort == "80"
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 25226 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Install the Firepower eNcore client
+
+Install and configure the Firepower eNcore eStreamer client. For more details, see the full install [guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html).
+
+2.1 Download the Firepower Connector from GitHub
+
+Download the latest version of the Firepower eNcore connector for Microsoft Sentinel [here](https://github.com/CiscoSecurity/fp-05-microsoft-sentinel-connector). If you plan on using Python 3, use the [python3 eStreamer connector](https://github.com/CiscoSecurity/fp-05-microsoft-sentinel-connector/tree/python3).
+
+2.2 Create a pkcs12 file using the Azure/VM IP address
+
+Create a pkcs12 certificate using the public IP of the VM instance in Firepower under System->Integration->eStreamer. For more information, see the install [guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html#_Toc527049443).
+
+2.3 Test connectivity between the Azure/VM client and the FMC
+
+Copy the pkcs12 file from the FMC to the Azure/VM instance and run the test utility (./encore.sh test) to ensure a connection can be established. For more details, see the setup [guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html#_Toc527049430).
+
+2.4 Configure eNcore to stream data to the agent
+
+Configure eNcore to stream data via TCP to the Microsoft agent. This should be enabled by default; however, additional ports and streaming protocols can be configured depending on your network security posture. It is also possible to save the data to the file system. For more information, see [Configure eNcore](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html#_Toc527049433).
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
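+
+For example, a query along these lines (a minimal sketch; the column names mirror the samples above) surfaces the most recent Firepower events and their actions:
+
+ ```kusto
+// Sketch: show the five most recent Cisco Firepower events received
+CommonSecurityLog
+ | where DeviceVendor == "Cisco"
+ | where DeviceProduct == "Firepower"
+ | project TimeGenerated, Activity, DeviceAction, DestinationPort
+ | order by TimeGenerated desc
+ | take 5
+ ```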
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace.
sentinel Citrix Waf Web App Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/citrix-waf-web-app-firewall.md
+
+ Title: "Citrix WAF (Web App Firewall) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Citrix WAF (Web App Firewall) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Citrix WAF (Web App Firewall) connector for Microsoft Sentinel
+
+Citrix WAF (Web App Firewall) is an industry-leading, enterprise-grade WAF solution. Citrix WAF mitigates threats against your public-facing assets, including websites, apps, and APIs. From layer 3 to layer 7, Citrix WAF includes protections such as IP reputation, bot mitigation, defense against the OWASP Top 10 application threats, built-in signatures to protect against application stack vulnerabilities, and more.
+
+Citrix WAF supports Common Event Format (CEF), an industry-standard format on top of Syslog messages. By connecting Citrix WAF CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CitrixWAFLogs)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Citrix Systems](https://www.citrix.com/support/) |
+
+## Query samples
+
+**Citrix WAF Logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ ```
+
+**Citrix WAF logs for cross-site scripting**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_XSS"
+
+ ```
+
+**Citrix WAF logs for SQL injection**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_SQL"
+
+ ```
+
+**Citrix WAF logs for buffer overflow**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_STARTURL"
+
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure Citrix WAF to send Syslog messages in CEF format to the proxy machine using the steps below.
+
+1. Follow [this guide](https://support.citrix.com/article/CTX234174) to configure WAF.
+
+2. Follow [this guide](https://support.citrix.com/article/CTX136146) to configure CEF logs.
+
+3. Follow [this guide](https://docs.citrix.com/en-us/citrix-adc/13/system/audit-logging/configuring-audit-logging.html) to forward the logs to the proxy. Make sure to send the logs to port 514 TCP on the Linux machine's IP address.
+++
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
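+
+For example, a query like the following (a minimal sketch; the 24-hour window is an arbitrary choice) breaks down recently ingested Citrix WAF events by activity:
+
+ ```kusto
+// Sketch: count Citrix WAF events from the last day, grouped by Activity
+CommonSecurityLog
+ | where DeviceVendor == "Citrix"
+ | where DeviceProduct == "NetScaler"
+ | where TimeGenerated > ago(1d)
+ | summarize Events = count() by Activity
+ | order by Events desc
+ ```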
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/citrix.citrix_waf_mss?tab=Overview) in the Azure Marketplace.
sentinel Contrast Protect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/contrast-protect.md
+
+ Title: "Contrast Protect connector for Microsoft Sentinel"
+description: "Learn how to install the connector Contrast Protect to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Contrast Protect connector for Microsoft Sentinel
+
+Contrast Protect mitigates security threats in production applications with runtime protection and observability. Attack event results (blocked, probed, suspicious...) and other information can be sent to Microsoft Sentinel to blend with security information from other systems.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ContrastProtect)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Contrast Protect](https://docs.contrastsecurity.com/) |
+
+## Query samples
+
+**All attacks**
+ ```kusto
+let extract_data=(a:string, k:string) {
+    parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k]
+};
+CommonSecurityLog
+ | where DeviceVendor == 'Contrast Security'
+ | extend Outcome = replace(@'INEFFECTIVE', @'PROBED', tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), "")))
+ | where Outcome != 'success'
+ | extend Rule = extract_data(AdditionalExtensions, 'pri')
+ | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
+ | order by TimeGenerated desc
+ ```
+
+**Effective attacks**
+ ```kusto
+let extract_data=(a:string, k:string) {
+ parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k]
+};
+CommonSecurityLog
+
+ | where DeviceVendor == 'Contrast Security'
+
+ | extend Outcome = tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), ""))
+
+ | where Outcome in ('EXPLOITED','BLOCKED','SUSPICIOUS')
+
+ | extend Rule = extract_data(AdditionalExtensions, 'pri')
+
+ | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
+
+ | order by TimeGenerated desc
+
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure the Contrast Protect agent to forward events to syslog as described here: https://docs.contrastsecurity.com/en/output-to-syslog.html. Generate some attack events for your application.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
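+
+A freshness check along these lines (a minimal sketch, not part of the vendor instructions) reports when the last Contrast Protect event arrived:
+
+ ```kusto
+// Sketch: timestamp of the most recent Contrast Protect event
+CommonSecurityLog
+ | where DeviceVendor == 'Contrast Security'
+ | summarize LastEvent = max(TimeGenerated)
+ ```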
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/contrast_security.contrast_protect_azure_sentinel_solution?tab=Overview) in the Azure Marketplace.
sentinel Cyberark Enterprise Password Vault Epv Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyberark-enterprise-password-vault-epv-events.md
+
+ Title: "CyberArk Enterprise Password Vault (EPV) Events connector for Microsoft Sentinel"
+description: "Learn how to install the connector CyberArk Enterprise Password Vault (EPV) Events to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# CyberArk Enterprise Password Vault (EPV) Events connector for Microsoft Sentinel
+
+CyberArk Enterprise Password Vault generates an XML Syslog message for every action taken against the Vault. The EPV will send the XML messages through the Microsoft Sentinel.xsl translator to be converted into the CEF standard format and sent to a syslog staging server of your choice (syslog-ng, rsyslog). The Log Analytics agent installed on your syslog staging server will import the messages into Microsoft Log Analytics. Refer to the [CyberArk documentation](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) for more guidance on SIEM integrations.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CyberArk)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cyberark](https://www.cyberark.com/services-support/technical-support/) |
+
+## Query samples
+
+**CyberArk Alerts**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cyber-Ark"
+
+ | where DeviceProduct == "Vault"
+
+ | where LogSeverity == "7" or LogSeverity == "10"
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python installed on your machine.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+On the EPV, configure dbparm.ini to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
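+
+For example, a query along these lines (a minimal sketch that mirrors the sample above) lists the most recent Vault events with their severity:
+
+ ```kusto
+// Sketch: show the ten most recent CyberArk Vault events
+CommonSecurityLog
+ | where DeviceVendor == "Cyber-Ark"
+ | where DeviceProduct == "Vault"
+ | project TimeGenerated, Activity, LogSeverity
+ | sort by TimeGenerated desc
+ | take 10
+ ```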
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python installed on your machine using the following command: python --version
+
+> 2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberark.cyberark_epv_events_mss?tab=Overview) in the Azure Marketplace.
sentinel Delinea Secret Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/delinea-secret-server.md
+
+ Title: "Delinea Secret Server connector for Microsoft Sentinel"
+description: "Learn how to install the connector Delinea Secret Server to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Delinea Secret Server connector for Microsoft Sentinel
+
+Common Event Format (CEF) from Delinea Secret Server
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (DelineaSecretServer)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Delinea](https://delinea.com/support/) |
+
+## Query samples
+
+**Get records for secret creation**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
+
+ | where DeviceProduct == "Secret Server"
+
+ | where Activity has "SECRET - CREATE"
+ ```
+
+**Get records for secret views**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
+
+ | where DeviceProduct == "Secret Server"
+
+ | where Activity has "SECRET - VIEW"
+ ```
+++
+## Prerequisites
+
+To integrate with Delinea Secret Server make sure you have:
+
+- **Delinea Secret Server**: must be configured to export logs via Syslog
+
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
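+
+For example, a count like the following (a minimal sketch; the one-hour window is arbitrary) confirms that Secret Server events are flowing:
+
+ ```kusto
+// Sketch: count Secret Server events received in the last hour, by vendor string
+CommonSecurityLog
+ | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
+ | where DeviceProduct == "Secret Server"
+ | where TimeGenerated > ago(1h)
+ | summarize Events = count() by DeviceVendor
+ ```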
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/delineainc1653506022260.delinea_secret_server_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Forcepoint Csg Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-csg-via-legacy-agent.md
Title: "[Deprecated] Forcepoint CSG via Legacy Agent connector for Microsoft Sen
description: "Learn how to install the connector [Deprecated] Forcepoint CSG via Legacy Agent to connect your data source to Microsoft Sentinel." Previously updated : 10/23/2023 Last updated : 11/29/2023
This integration requires the Linux Syslog agent to collect your Forcepoint Clou
The integration is made available with two implementations options.
-2.1 Splunk Implementation
+2.1 Docker Implementation
-Leverages splunk images where the integration component is already installed with all necessary dependencies.
+Leverages docker images where the integration component is already installed with all necessary dependencies.
Follow the instructions provided in the Integration Guide linked below.
-[Integration Guide >](https://forcepoint.github.io/docs/csg_and_splunk/)
+[Integration Guide >](https://frcpnt.com/csg-sentinel)
-2.2 VeloCloud Implementation
+2.2 Traditional Implementation
Requires the manual deployment of the integration component inside a clean Linux machine. Follow the instructions provided in the Integration Guide linked below.
-[Integration Guide >](https://forcepoint.github.io/docs/csg_and_velocloud/)
+[Integration Guide >](https://frcpnt.com/csg-sentinel)
3. Validate connection
sentinel Deprecated Trend Micro Apex One Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-trend-micro-apex-one-via-legacy-agent.md
Title: "[Deprecated] Trend Micro Apex One via Legacy Agent connector for Microso
description: "Learn how to install the connector [Deprecated] Trend Micro Apex One via Legacy Agent to connect your data source to Microsoft Sentinel." Previously updated : 10/23/2023 Last updated : 11/29/2023 # [Deprecated] Trend Micro Apex One via Legacy Agent connector for Microsoft Sentinel
-The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/appendices/syslog-mapping-cef.aspx) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/preface_001.aspx) for more information.
+The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://aka.ms/sentinel-TrendMicroApex-OneEvents) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://aka.ms/sentinel-TrendMicroApex-OneCentral) for more information.
## Connector attributes
sentinel Dynatrace Attacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/dynatrace-attacks.md
Title: "Dynatrace Attacks connector for Microsoft Sentinel"
description: "Learn how to install the connector Dynatrace Attacks to connect your data source to Microsoft Sentinel." Previously updated : 02/28/2023 Last updated : 11/29/2023
DynatraceAttacks
To integrate with Dynatrace Attacks make sure you have:
-- **Dynatrace tenant (ex. xyz.dynatrace.com)**: You need a valid Dynatrace tenant with [Application Security](https://www.dynatrace.com/support/help/how-to-use-dynatrace/application-security) enabled, learn more about the [Dynatrace platform](https://www.dynatrace.com/).
+- **Dynatrace tenant (ex. xyz.dynatrace.com)**: You need a valid Dynatrace tenant with [Application Security](https://www.dynatrace.com/platform/application-security/) enabled, learn more about the [Dynatrace platform](https://www.dynatrace.com/).
- **Dynatrace Access Token**: You need a Dynatrace Access Token, the token should have ***Read attacks*** (attacks.read) scope.
To integrate with Dynatrace Attacks make sure you have:
Dynatrace Attack Events to Microsoft Sentinel
-Configure and Enable Dynatrace [Application Security](https://www.dynatrace.com/support/help/how-to-use-dynatrace/application-security).
- Follow [these instructions](https://www.dynatrace.com/support/help/get-started/access-tokens#create-api-token) to generate an access token.
+Configure and Enable Dynatrace [Application Security](https://www.dynatrace.com/platform/application-security/).
+ Follow [these instructions](https://docs.dynatrace.com/docs/shortlink/token#create-api-token) to generate an access token.
sentinel Dynatrace Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/dynatrace-audit-logs.md
Title: "Dynatrace Audit Logs connector for Microsoft Sentinel"
description: "Learn how to install the connector Dynatrace Audit Logs to connect your data source to Microsoft Sentinel." Previously updated : 02/28/2023 Last updated : 11/29/2023 # Dynatrace Audit Logs connector for Microsoft Sentinel
-This connector uses the [Dynatrace Audit Logs REST API](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/audit-logs) to ingest tenant audit logs into Microsoft Sentinel Log Analytics
+This connector uses the [Dynatrace Audit Logs REST API](https://docs.dynatrace.com/docs/dynatrace-api/environment-api/audit-logs) to ingest tenant audit logs into Microsoft Sentinel Log Analytics
## Connector attributes
To integrate with Dynatrace Audit Logs make sure you have:
Dynatrace Audit Log Events to Microsoft Sentinel
-Enable Dynatrace Audit [Logging](https://www.dynatrace.com/support/help/how-to-use-dynatrace/data-privacy-and-security/configuration/audit-logs#enable-audit-logging).
- Follow [these instructions](https://www.dynatrace.com/support/help/get-started/access-tokens#create-api-token) to generate an access token.
+Enable Dynatrace Audit [Logging](https://docs.dynatrace.com/docs/shortlink/audit-logs#enable-audit-logging).
+ Follow [these instructions](https://docs.dynatrace.com/docs/shortlink/token#create-api-token) to generate an access token.
sentinel Dynatrace Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/dynatrace-problems.md
Title: "Dynatrace Problems connector for Microsoft Sentinel"
description: "Learn how to install the connector Dynatrace Problems to connect your data source to Microsoft Sentinel." Previously updated : 02/28/2023 Last updated : 11/29/2023 # Dynatrace Problems connector for Microsoft Sentinel
-This connector uses the [Dynatrace Problem REST API](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/problems-v2) to ingest problem events into Microsoft Sentinel Log Analytics
+This connector uses the [Dynatrace Problem REST API](https://docs.dynatrace.com/docs/dynatrace-api/environment-api/problems-v2) to ingest problem events into Microsoft Sentinel Log Analytics
## Connector attributes
To integrate with Dynatrace Problems make sure you have:
Dynatrace Problem Events to Microsoft Sentinel
-Follow [these instructions](https://www.dynatrace.com/support/help/get-started/access-tokens#create-api-token) to generate an access token.
+Follow [these instructions](https://docs.dynatrace.com/docs/shortlink/token#create-api-token) to generate an access token.
sentinel Dynatrace Runtime Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/dynatrace-runtime-vulnerabilities.md
Title: "Dynatrace Runtime Vulnerabilities connector for Microsoft Sentinel"
description: "Learn how to install the connector Dynatrace Runtime Vulnerabilities to connect your data source to Microsoft Sentinel." Previously updated : 02/28/2023 Last updated : 11/29/2023 # Dynatrace Runtime Vulnerabilities connector for Microsoft Sentinel
-This connector uses the [Dynatrace Security Problem REST API](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/application-security/security-problems) to ingest detected runtime vulnerabilities into Microsoft Sentinel Log Analytics.
+This connector uses the [Dynatrace Security Problem REST API](https://docs.dynatrace.com/docs/dynatrace-api/environment-api/application-security/vulnerabilities/get-vulnerabilities) to ingest detected runtime vulnerabilities into Microsoft Sentinel Log Analytics.
## Connector attributes
DynatraceSecurityProblems
To integrate with Dynatrace Runtime Vulnerabilities make sure you have:
-- **Dynatrace tenant (ex. xyz.dynatrace.com)**: You need a valid Dynatrace tenant with [Application Security](https://www.dynatrace.com/support/help/how-to-use-dynatrace/application-security) enabled, learn more about the [Dynatrace platform](https://www.dynatrace.com/).
+- **Dynatrace tenant (ex. xyz.dynatrace.com)**: You need a valid Dynatrace tenant with [Application Security](https://www.dynatrace.com/platform/application-security/) enabled, learn more about the [Dynatrace platform](https://www.dynatrace.com/).
- **Dynatrace Access Token**: You need a Dynatrace Access Token, the token should have ***Read security problems*** (securityProblems.read) scope.
To integrate with Dynatrace Runtime Vulnerabilities make sure you have:
Dynatrace Vulnerabilities Events to Microsoft Sentinel
-Configure and Enable Dynatrace [Application Security](https://www.dynatrace.com/support/help/how-to-use-dynatrace/application-security).
- Follow [these instructions](https://www.dynatrace.com/support/help/get-started/access-tokens#create-api-token) to generate an access token.
+Configure and Enable Dynatrace [Application Security](https://www.dynatrace.com/platform/application-security/).
+ Follow [these instructions](https://docs.dynatrace.com/docs/shortlink/token#create-api-token) to generate an access token.
sentinel Extrahop Reveal X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/extrahop-reveal-x.md
+
+ Title: "ExtraHop Reveal(x) connector for Microsoft Sentinel"
+description: "Learn how to install the connector ExtraHop Reveal(x) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# ExtraHop Reveal(x) connector for Microsoft Sentinel
+
+The ExtraHop Reveal(x) data connector enables you to easily connect your Reveal(x) system with Microsoft Sentinel to view dashboards, create custom alerts, and improve investigation. This integration gives you the ability to gain insight into your organization's network and improve your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog ('ExtraHop')<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [ExtraHop](https://www.extrahop.com/support/) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "ExtraHop"
+
+
+ | sort by TimeGenerated
+ ```
+
+**All detections, de-duplicated**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "ExtraHop"
+
+ | extend categories = iif(DeviceCustomString2 != "", split(DeviceCustomString2, ","), dynamic(null))
+
+ | extend StartTime = extract("start=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
+
+ | extend EndTime = extract("end=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
+
+ | project
+     DeviceEventClassID="ExtraHop Detection",
+     Title=Activity,
+     Description=Message,
+     riskScore=DeviceCustomNumber2,
+     SourceIP,
+     DestinationIP,
+     detectionID=tostring(DeviceCustomNumber1),
+     updateTime=todatetime(ReceiptTime),
+     StartTime,
+     EndTime,
+     detectionURI=DeviceCustomString1,
+     categories,
+     Computer
+
+ | summarize arg_max(updateTime, *) by detectionID
+
+ | sort by detectionID desc
+ ```
+++
+## Prerequisites
+
+To integrate with ExtraHop Reveal(x) make sure you have:
+
+- **ExtraHop**: ExtraHop Discover or Command appliance with firmware version 7.8 or later with a user account that has Unlimited (administrator) privileges.
++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward ExtraHop Networks logs to Syslog agent
+
+1. Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+2. Follow the directions to install the [ExtraHop Detection SIEM Connector bundle](https://learn.extrahop.com/extrahop-detection-siem-connector-bundle) on your Reveal(x) system. The SIEM Connector is required for this integration.
+3. Enable the trigger for **ExtraHop Detection SIEM Connector - CEF**.
+4. Update the trigger with the ODS syslog targets you created.
+5. The Reveal(x) system formats syslog messages in Common Event Format (CEF) and then sends data to Microsoft Sentinel.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
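+
+For example, a query along these lines (a minimal sketch using columns from the samples above) pulls the ten most recent Reveal(x) events:
+
+ ```kusto
+// Sketch: show the ten most recent ExtraHop events
+CommonSecurityLog
+ | where DeviceVendor == "ExtraHop"
+ | project TimeGenerated, Activity, Message
+ | sort by TimeGenerated desc
+ | take 10
+ ```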
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/extrahop.extrahop_revealx_mss?tab=Overview) in the Azure Marketplace.
sentinel F5 Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/f5-networks.md
+
+ Title: "F5 Networks connector for Microsoft Sentinel"
+description: "Learn how to install the connector F5 Networks to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# F5 Networks connector for Microsoft Sentinel
+
+The F5 firewall connector allows you to easily connect your F5 logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (F5)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [F5](https://www.f5.com/services/support) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "F5"
+
+
+ | sort by TimeGenerated
+ ```
+
+**Summarize by time**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "F5"
+
+
+ | summarize count() by TimeGenerated
+
+ | sort by TimeGenerated
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Note that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure F5 to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.
+
+Go to [F5 Configuring Application Security Event Logging](https://aka.ms/asi-syslog-f5-forwarding), follow the instructions to set up remote logging, using the following guidelines:
+
+1. Set the **Remote storage type** to CEF.
+2. Set the **Protocol setting** to UDP.
+3. Set the **IP address** to the Syslog server IP address.
+4. Set the **port number** to 514, or the port your agent uses.
+5. Set the **facility** to the one that you configured in the Syslog agent (by default, the agent sets this to local4).
+6. You can set the **Maximum Query String Size** to be the same as you configured.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
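+
+For example, a query like this (a minimal sketch; the 24-hour window and hourly bins are arbitrary choices) charts F5 ingestion over the last day:
+
+ ```kusto
+// Sketch: hourly count of F5 events over the last 24 hours
+CommonSecurityLog
+ | where DeviceVendor == "F5"
+ | where TimeGenerated > ago(1d)
+ | summarize Events = count() by bin(TimeGenerated, 1h)
+ | sort by TimeGenerated asc
+ ```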
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5_networks_data_mss?tab=Overview) in the Azure Marketplace.
sentinel Google Workspace G Suite Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-workspace-g-suite-using-azure-functions.md
Title: "Google Workspace (G Suite) (using Azure Functions) connector for Microso
description: "Learn how to install the connector Google Workspace (G Suite) (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 11/29/2023
To integrate with Google Workspace (G Suite) (using Azure Functions) make sure y
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias GWorkspaceReports and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports.yaml), on the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias GWorkspaceReports and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports), on the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
**STEP 1 - Ensure the prerequisites to obtain the Google Pickel String**
sentinel Iboss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/iboss.md
+
+ Title: "iboss connector for Microsoft Sentinel"
+description: "Learn how to install the connector iboss to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# iboss connector for Microsoft Sentinel
+
+The [iboss](https://www.iboss.com) data connector enables you to seamlessly connect your Threat Console to Microsoft Sentinel and enrich your instance with iboss URL event logs. Our logs are forwarded in Common Event Format (CEF) over Syslog and the configuration required can be completed on the iboss platform without the use of a proxy. Take advantage of our connector to garner critical data points and gain insight into security threats.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | ibossUrlEvent<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [iboss](https://www.iboss.com/contact-us/) |
+
+## Query samples
+
+**Logs Received from the past week**
+ ```kusto
+ibossUrlEvent
+ | where TimeGenerated > ago(7d)
+ ```
+++
+## Vendor installation instructions
+
+1. Configure a dedicated proxy Linux machine
+
+If you are using the iboss gov environment, or if you prefer to forward the logs to a dedicated proxy Linux machine, proceed with this step. In all other cases, advance to step 2.
+
+1.1 Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.2 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the dedicated proxy Linux machine between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.3 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
> 1. Make sure that you have Python on your machine using the following command: python --version
+
+> 2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs
+
+Set your Threat Console to send Syslog messages in CEF format to your Azure workspace. Make note of your Workspace ID and Primary Key within your Log Analytics Workspace (Select the workspace from the Log Analytics workspaces menu in the Azure portal. Then select Agents management in the Settings section).
+
+>1. Navigate to Reporting & Analytics inside your iboss Console
+
+>2. Select Log Forwarding -> Forward From Reporter
+
+>3. Select Actions -> Add Service
+
+>4. Toggle to Microsoft Sentinel as a Service Type and input your Workspace ID/Primary Key along with other criteria. If a dedicated proxy Linux machine has been configured, toggle to Syslog as a Service Type and configure the settings to point to your dedicated proxy Linux machine
+
+>5. Wait one to two minutes for the setup to complete
+
+>6. Select your Microsoft Sentinel Service and verify the Microsoft Sentinel Setup Status is Successful. If a dedicated proxy Linux machine has been configured, you may proceed with validating your connection
+
+3. Validate connection
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace
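
A minimal check along these lines can confirm rows are arriving in the CommonSecurityLog schema (an illustrative sketch; the 20-minute window mirrors the note above):

 ```kusto
CommonSecurityLog
 // Any rows ingested in the last 20 minutes indicate a working connection.
 | where TimeGenerated > ago(20m)
 | take 10
 ```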
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy (Only applicable if a dedicated proxy Linux machine has been configured).
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/iboss.iboss-sentinel-connector?tab=Overview) in the Azure Marketplace.
sentinel Illusive Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/illusive-platform.md
+
+ Title: "Illusive Platform connector for Microsoft Sentinel"
+description: "Learn how to install the connector Illusive Platform to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Illusive Platform connector for Microsoft Sentinel
+
+The Illusive Platform Connector allows you to share Illusive's attack surface analysis data and incident logs with Microsoft Sentinel and view this information in dedicated dashboards that offer insight into your organization's attack surface risk (ASM Dashboard) and track unauthorized lateral movement in your organization's network (ADS Dashboard).
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (illusive)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Illusive Networks](https://illusive.com/support) |
+
+## Query samples
+
+**Number of Incidents in the last 30 days in which Trigger Type is found**
+ ```kusto
+union CommonSecurityLog
+ | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
+ | where Message !contains "hasForensics"
+ | where TimeGenerated > ago(30d)
+ | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
+ | summarize by DestinationServiceName, DeviceCustomNumber2
+ | summarize incident_count=count() by DestinationServiceName
+ ```
+
+**Top 10 alerting hosts in the last 30 days**
+ ```kusto
+union CommonSecurityLog
+ | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
+ | where Message !contains "hasForensics"
+ | where TimeGenerated > ago(30d)
+ | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
+ | summarize by AlertingHost=iff(SourceHostName != "" and SourceHostName != "Failed to obtain", SourceHostName, SourceIP) ,DeviceCustomNumber2
+ | where AlertingHost != "" and AlertingHost != "Failed to obtain"
+ | summarize incident_count=count() by AlertingHost
+ | order by incident_count
+ | limit 10
+ ```
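
The same filters can also be charted over time. A hedged variation of the queries above, using a prefix match in place of the three explicit event IDs:

 ```kusto
union CommonSecurityLog
 | where DeviceEventClassID startswith "illusive:"
 | where Message !contains "hasForensics"
 | where TimeGenerated > ago(30d)
 // Daily incident volume rather than a per-host breakdown.
 | summarize incident_count=count() by bin(TimeGenerated, 1d)
 | order by TimeGenerated asc
 ```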
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Illusive Common Event Format (CEF) logs to Syslog agent
+
1. Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+> 2. Log onto the Illusive Console, and navigate to Settings->Reporting.
+> 3. Find Syslog Servers
+> 4. Supply the following information:
+>> 1. Host name: Linux Syslog agent IP address or FQDN host name
+>> 2. Port: 514
+>> 3. Protocol: TCP
+>> 4. Audit messages: Send audit messages to server
+> 5. To add the syslog server, click Add.
> 6. For more information about how to add a new syslog server in the Illusive platform, see the Illusive Networks Admin Guide here: https://support.illusivenetworks.com/hc/en-us/sections/360002292119-Documentation-by-Version
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/illusivenetworks.illusive_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Morphisec Utpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/morphisec-utpp.md
+
+ Title: "Morphisec UTPP connector for Microsoft Sentinel"
+description: "Learn how to install the connector Morphisec UTPP to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Morphisec UTPP connector for Microsoft Sentinel
+
+Integrate vital insights from your security products with the Morphisec Data Connector for Microsoft Sentinel and expand your analytical capabilities with search and correlation, threat intelligence, and customized alerts. Morphisec's Data Connector provides visibility into today's most advanced threats, including sophisticated fileless attacks, in-memory exploits, and zero days. With a single, cross-product view, you can make real-time, data-backed decisions to protect your most important assets.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser |
+| **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Morphisec](https://support.morphisec.com/hc/en-us) |
+
+## Query samples
+
+**Threats count by host**
+ ```kusto
+
+Morphisec
+
+
+ | summarize Times_Attacked=count() by SourceHostName
+ ```
+
+**Threats count by username**
+ ```kusto
+
+Morphisec
+
+
+ | summarize Times_Attacked=count() by SourceUserName
+ ```
+
+**Threats with high severity**
+ ```kusto
+
+Morphisec
+
+
+ | where toint( LogSeverity) > 7
+ | order by TimeGenerated
+ ```
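
Combining the two dimensions above gives a host/user pivot; a sketch that assumes the same Morphisec function alias:

 ```kusto
Morphisec
 // Pivot attack counts by both host and user to surface hot spots.
 | summarize Times_Attacked=count() by SourceHostName, SourceUserName
 | top 10 by Times_Attacked
 ```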
+++
+## Vendor installation instructions
++
+These queries and workbooks depend on a Kusto function to work as expected. Follow the steps to use the Kusto function alias "Morphisec"
+in queries and workbooks. [Follow steps to get this Kusto function.](https://aka.ms/sentinel-morphisecutpp-parser)
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/morphisec.morphisec_utpp_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Ai Analyst Darktrace Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-ai-analyst-darktrace-via-ama.md
- Title: "[Recommended] AI Analyst Darktrace via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] AI Analyst Darktrace via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] AI Analyst Darktrace via AMA connector for Microsoft Sentinel
-
-The Darktrace connector lets users connect Darktrace Model Breaches in real-time with Microsoft Sentinel, allowing creation of custom Dashboards, Workbooks, Notebooks and Custom Alerts to improve investigation. Microsoft Sentinel's enhanced visibility into Darktrace logs enables monitoring and mitigation of security threats.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (Darktrace)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Darktrace](https://www.darktrace.com/en/contact/) |
-
-## Query samples
-
-**First 10 most recent model breaches**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Darktrace"
-
- | order by TimeGenerated desc
-
- | limit 10
- ```
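
A severity filter is a natural next step. A sketch using the standard CommonSecurityLog LogSeverity column (the threshold is an illustrative value):

 ```kusto
CommonSecurityLog
 | where DeviceVendor == "Darktrace"
 // Keep only higher-severity model breaches.
 | where toint(LogSeverity) >= 5
 | order by TimeGenerated desc
 ```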
---
-## Prerequisites
-
-To integrate with [Recommended] AI Analyst Darktrace via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
sentinel Recommended Cisco Firepower Estreamer Via Legacy Agent Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-cisco-firepower-estreamer-via-legacy-agent-via-ama.md
- Title: "[Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA connector for Microsoft Sentinel
-
-eStreamer is a Client Server API designed for the Cisco Firepower NGFW Solution. The eStreamer client requests detailed event data on behalf of the SIEM or logging solution in the Common Event Format (CEF).
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (CiscoFirepowerEstreamerCEF)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Cisco](https://www.cisco.com/c/en_in/support/https://docsupdatetracker.net/index.html) |
-
-## Query samples
-
-**Firewall Blocked Events**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Cisco"
-
- | where DeviceProduct == "Firepower"
- | where DeviceAction != "Allow"
- ```
-
-**File Malware Events**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Cisco"
-
- | where DeviceProduct == "Firepower"
- | where Activity == "File Malware Event"
- ```
-
-**Outbound Web Traffic Port 80**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Cisco"
-
- | where DeviceProduct == "Firepower"
- | where DestinationPort == "80"
- ```
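
Building on the blocked-events sample, a sketch that ranks the noisiest sources (column names taken from the samples above):

 ```kusto
CommonSecurityLog
 | where DeviceVendor == "Cisco"
 | where DeviceProduct == "Firepower"
 | where DeviceAction != "Allow"
 // Top sources by number of blocked events.
 | summarize count() by SourceIP
 | top 10 by count_
 ```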
---
-## Prerequisites
-
-To integrate with [Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace.
sentinel Recommended Citrix Waf Web App Firewall Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-citrix-waf-web-app-firewall-via-ama.md
- Title: "[Recommended] Citrix WAF (Web App Firewall) via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Citrix WAF (Web App Firewall) via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] Citrix WAF (Web App Firewall) via AMA connector for Microsoft Sentinel
-
- Citrix WAF (Web App Firewall) is an industry leading enterprise-grade WAF solution. Citrix WAF mitigates threats against your public-facing assets, including websites, apps, and APIs. From layer 3 to layer 7, Citrix WAF includes protections such as IP reputation, bot mitigation, defense against the OWASP Top 10 application threats, built-in signatures to protect against application stack vulnerabilities, and more.
-
-Citrix WAF supports Common Event Format (CEF), an industry-standard format on top of Syslog messages. By connecting Citrix WAF CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (CitrixWAFLogs)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Citrix Systems](https://www.citrix.com/support/) |
-
-## Query samples
-
-**Citrix WAF Logs**
- ```kusto
-
-CommonSecurityLog
-
- | where DeviceVendor == "Citrix"
-
- | where DeviceProduct == "NetScaler"
-
- ```
-
-**Citrix WAF logs for cross-site scripting**
- ```kusto
-
-CommonSecurityLog
-
- | where DeviceVendor == "Citrix"
-
- | where DeviceProduct == "NetScaler"
-
- | where Activity == "APPFW_XSS"
-
- ```
-
-**Citrix WAF logs for SQL injection**
- ```kusto
-
-CommonSecurityLog
-
- | where DeviceVendor == "Citrix"
-
- | where DeviceProduct == "NetScaler"
-
- | where Activity == "APPFW_SQL"
-
- ```
-
-**Citrix WAF logs for buffer overflow**
- ```kusto
-
-CommonSecurityLog
-
- | where DeviceVendor == "Citrix"
-
- | where DeviceProduct == "NetScaler"
-
- | where Activity == "APPFW_STARTURL"
-
- ```
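
The per-signature samples above generalize to a single breakdown; an illustrative sketch:

 ```kusto
CommonSecurityLog
 | where DeviceVendor == "Citrix"
 | where DeviceProduct == "NetScaler"
 // One row per WAF signature (APPFW_XSS, APPFW_SQL, APPFW_STARTURL, ...).
 | summarize count() by Activity
 | order by count_ desc
 ```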
---
-## Prerequisites
-
-To integrate with [Recommended] Citrix WAF (Web App Firewall) via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/citrix.citrix_waf_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Contrast Protect Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-contrast-protect-via-ama.md
- Title: "[Recommended] Contrast Protect via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Contrast Protect via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] Contrast Protect via AMA connector for Microsoft Sentinel
-
-Contrast Protect mitigates security threats in production applications with runtime protection and observability. Attack event results (blocked, probed, suspicious...) and other information can be sent to Microsoft Sentinel to blend with security information from other systems.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (ContrastProtect)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Contrast Protect](https://docs.contrastsecurity.com/) |
-
-## Query samples
-
-**All attacks**
- ```kusto
-let extract_data=(a:string, k:string) { parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k] }; CommonSecurityLog
- | where DeviceVendor == 'Contrast Security'
- | extend Outcome = replace(@'INEFFECTIVE', @'PROBED', tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), "")))
- | where Outcome != 'success'
- | extend Rule = extract_data(AdditionalExtensions, 'pri')
- | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
- | order by TimeGenerated desc
- ```
-
-**Effective attacks**
- ```kusto
-let extract_data=(a:string, k:string) {
- parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k]
-};
-CommonSecurityLog
-
- | where DeviceVendor == 'Contrast Security'
-
- | extend Outcome = tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), ""))
-
- | where Outcome in ('EXPLOITED','BLOCKED','SUSPICIOUS')
-
- | extend Rule = extract_data(AdditionalExtensions, 'pri')
-
- | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
-
- | order by TimeGenerated desc
-
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Contrast Protect via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/contrast_security.contrast_protect_azure_sentinel_solution?tab=Overview) in the Azure Marketplace.
sentinel Recommended Cyberark Enterprise Password Vault Epv Events Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-cyberark-enterprise-password-vault-epv-events-via-ama.md
- Title: "[Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA connector for Microsoft Sentinel
-
-CyberArk Enterprise Password Vault generates an xml Syslog message for every action taken against the Vault. The EPV will send the xml messages through the Microsoft Sentinel.xsl translator to be converted into CEF standard format and sent to a syslog staging server of your choice (syslog-ng, rsyslog). The Log Analytics agent installed on your syslog staging server will import the messages into Microsoft Log Analytics. Refer to the [CyberArk documentation](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) for more guidance on SIEM integrations.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (CyberArk)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Cyberark](https://www.cyberark.com/services-support/technical-support/) |
-
-## Query samples
-
-**CyberArk Alerts**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Cyber-Ark"
-
- | where DeviceProduct == "Vault"
-
- | where LogSeverity == "7" or LogSeverity == "10"
-
- | sort by TimeGenerated desc
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machines security according to your organizations security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberark.cyberark_epv_events_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Delinea Secret Server Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-delinea-secret-server-via-ama.md
- Title: "[Recommended] Delinea Secret Server via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Delinea Secret Server via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] Delinea Secret Server via AMA connector for Microsoft Sentinel
-
-Common Event Format (CEF) from Delinea Secret Server
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog(DelineaSecretServer)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Delinea](https://delinea.com/support/) |
-
-## Query samples
-
-**Get records where a new secret is created**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
-
- | where DeviceProduct == "Secret Server"
-
- | where Activity has "SECRET - CREATE"
- ```
-
-**Get records where a secret is viewed**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
-
- | where DeviceProduct == "Secret Server"
-
- | where Activity has "SECRET - VIEW"
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Delinea Secret Server via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/delineainc1653506022260.delinea_secret_server_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Extrahop Reveal X Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-extrahop-reveal-x-via-ama.md
- Title: "[Recommended] ExtraHop Reveal(x) via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] ExtraHop Reveal(x) via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] ExtraHop Reveal(x) via AMA connector for Microsoft Sentinel
-
-The ExtraHop Reveal(x) data connector enables you to easily connect your Reveal(x) system with Microsoft Sentinel to view dashboards, create custom alerts, and improve investigation. This integration gives you the ability to gain insight into your organization's network and improve your security operation capabilities.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog ('ExtraHop')<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [ExtraHop](https://www.extrahop.com/support/) |
-
-## Query samples
-
-**All logs**
- ```kusto
-
-CommonSecurityLog
-
- | where DeviceVendor == "ExtraHop"
-
-
- | sort by TimeGenerated
- ```
-
-**All detections, de-duplicated**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "ExtraHop"
- | extend categories = iif(DeviceCustomString2 != "", split(DeviceCustomString2, ","), dynamic(null))
- | extend StartTime = extract("start=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
- | extend EndTime = extract("end=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
- | project
-     DeviceEventClassID="ExtraHop Detection",
-     Title=Activity,
-     Description=Message,
-     riskScore=DeviceCustomNumber2,
-     SourceIP,
-     DestinationIP,
-     detectionID=tostring(DeviceCustomNumber1),
-     updateTime=todatetime(ReceiptTime),
-     StartTime,
-     EndTime,
-     detectionURI=DeviceCustomString1,
-     categories,
-     Computer
- | summarize arg_max(updateTime, *) by detectionID
- | sort by detectionID desc
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] ExtraHop Reveal(x) via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/extrahop.extrahop_revealx_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended F5 Networks Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-f5-networks-via-ama.md
- Title: "[Recommended] F5 Networks via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] F5 Networks via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] F5 Networks via AMA connector for Microsoft Sentinel
-
-The F5 firewall connector allows you to easily connect your F5 logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (F5)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [F5](https://www.f5.com/services/support) |
-
-## Query samples
-
-**All logs**
- ```kusto
-
-CommonSecurityLog
-
- | where DeviceVendor == "F5"
-
-
- | sort by TimeGenerated
- ```
-
-**Summarize by time**
- ```kusto
-
-CommonSecurityLog
-
- | where DeviceVendor == "F5"
-
-
- | summarize count() by TimeGenerated
-
- | sort by TimeGenerated
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] F5 Networks via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5_networks_data_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Iboss Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-iboss-via-ama.md
- Title: "[Recommended] iboss via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] iboss via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] iboss via AMA connector for Microsoft Sentinel
-
-The [iboss](https://www.iboss.com) data connector enables you to seamlessly connect your Threat Console to Microsoft Sentinel and enrich your instance with iboss URL event logs. Our logs are forwarded in Common Event Format (CEF) over Syslog and the configuration required can be completed on the iboss platform without the use of a proxy. Take advantage of our connector to garner critical data points and gain insight into security threats.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | ibossUrlEvent<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [iboss](https://www.iboss.com/contact-us/) |
-
-## Query samples
-
-**Logs Received from the past week**
- ```kusto
-ibossUrlEvent
- | where TimeGenerated > ago(7d)
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] iboss via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy (Only applicable if a dedicated proxy Linux machine has been configured).
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/iboss.iboss-sentinel-connector?tab=Overview) in the Azure Marketplace.
sentinel Recommended Illusive Platform Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-illusive-platform-via-ama.md
- Title: "[Recommended] Illusive Platform via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Illusive Platform via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] Illusive Platform via AMA connector for Microsoft Sentinel
-
-The Illusive Platform Connector allows you to share Illusive's attack surface analysis data and incident logs with Microsoft Sentinel and view this information in dedicated dashboards that offer insight into your organization's attack surface risk (ASM Dashboard) and track unauthorized lateral movement in your organization's network (ADS Dashboard).
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (illusive)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Illusive Networks](https://illusive.com/support) |
-
-## Query samples
-
-**Number of Incidents in the last 30 days in which Trigger Type is found**
- ```kusto
-union CommonSecurityLog
- | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
- | where Message !contains "hasForensics"
- | where TimeGenerated > ago(30d)
- | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
- | summarize by DestinationServiceName, DeviceCustomNumber2
- | summarize incident_count=count() by DestinationServiceName
- ```
-
-**Top 10 alerting hosts in the last 30 days**
- ```kusto
-union CommonSecurityLog
- | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
- | where Message !contains "hasForensics"
- | where TimeGenerated > ago(30d)
- | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
- | summarize by AlertingHost=iff(SourceHostName != "" and SourceHostName != "Failed to obtain", SourceHostName, SourceIP) ,DeviceCustomNumber2
- | where AlertingHost != "" and AlertingHost != "Failed to obtain"
- | summarize incident_count=count() by AlertingHost
- | order by incident_count
- | limit 10
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Illusive Platform via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/illusivenetworks.illusive_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Morphisec Utpp Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-morphisec-utpp-via-ama.md
- Title: "[Recommended] Morphisec UTPP via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Morphisec UTPP via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] Morphisec UTPP via AMA connector for Microsoft Sentinel
-
-Integrate vital insights from your security products with the Morphisec Data Connector for Microsoft Sentinel and expand your analytical capabilities with search and correlation, threat intelligence, and customized alerts. Morphisec's Data Connector provides visibility into today's most advanced threats including sophisticated fileless attacks, in-memory exploits and zero days. With a single, cross-product view, you can make real-time, data-backed decisions to protect your most important assets
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser |
-| **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Morphisec](https://support.morphisec.com/hc/en-us) |
-
-## Query samples
-
-**Threats count by host**
- ```kusto
-
-Morphisec
-
-
- | summarize Times_Attacked=count() by SourceHostName
- ```
-
-**Threats count by username**
- ```kusto
-
-Morphisec
-
-
- | summarize Times_Attacked=count() by SourceUserName
- ```
-
-**Threats with high severity**
- ```kusto
-
-Morphisec
-
-
- | where toint( LogSeverity) > 7
- | order by TimeGenerated
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Morphisec UTPP via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-These queries and workbooks depend on a Kusto function to work as expected. Follow the steps to use the Kusto function alias "Morphisec"
-in queries and workbooks. [Follow steps to get this Kusto function.](https://aka.ms/sentinel-morphisecutpp-parser)
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/morphisec.morphisec_utpp_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Sonicwall Firewall Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-sonicwall-firewall-via-ama.md
- Title: "[Recommended] SonicWall Firewall via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] SonicWall Firewall via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] SonicWall Firewall via AMA connector for Microsoft Sentinel
-
-Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by SonicWall to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (SonicWall)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [SonicWall](https://www.sonicwall.com/support/) |
-
-## Query samples
-
-**All logs**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "SonicWall"
-
- | sort by TimeGenerated desc
- ```
-
-**Summarize by destination IP and port**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "SonicWall"
-
- | summarize count() by DestinationIP, DestinationPort, TimeGenerated
-
- | sort by TimeGenerated desc
- ```
-
-**Show all dropped traffic from the SonicWall Firewall**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "SonicWall"
-
- | where AdditionalExtensions contains "fw_action='drop'"
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] SonicWall Firewall via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
---
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sonicwall-inc.sonicwall-networksecurity-azure-sentinal?tab=Overview) in the Azure Marketplace.
sentinel Recommended Trend Micro Apex One Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-trend-micro-apex-one-via-ama.md
Title: "[Recommended] Trend Micro Apex One via AMA connector for Microsoft Senti
description: "Learn how to install the connector [Recommended] Trend Micro Apex One via AMA to connect your data source to Microsoft Sentinel." Previously updated : 10/23/2023 Last updated : 11/29/2023 # [Recommended] Trend Micro Apex One via AMA connector for Microsoft Sentinel
-The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/appendices/syslog-mapping-cef.aspx) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/preface_001.aspx) for more information.
+The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://aka.ms/sentinel-TrendMicroApex-OneEvents) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://aka.ms/sentinel-TrendMicroApex-OneCentral) for more information.
## Connector attributes
sentinel Recommended Varmour Application Controller Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-varmour-application-controller-via-ama.md
- Title: "[Recommended] vArmour Application Controller via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] vArmour Application Controller via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] vArmour Application Controller via AMA connector for Microsoft Sentinel
-
-vArmour reduces operational risk and increases cyber resiliency by visualizing and controlling application relationships across the enterprise. This vArmour connector enables streaming of Application Controller Violation Alerts into Microsoft Sentinel, so you can take advantage of search & correlation, alerting, & threat intelligence enrichment for each log.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (vArmour)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [vArmour Networks](https://www.varmour.com/contact-us/) |
-
-## Query samples
-
-**Top 10 App to App violations**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "vArmour"
-
- | where DeviceProduct == "AC"
-
- | where Activity == "POLICY_VIOLATION"
-
- | extend AppNameSrcDstPair = extract_all("AppName=;(\\w+)", AdditionalExtensions)
-
- | summarize count() by tostring(AppNameSrcDstPair)
-
- | top 10 by count_
-
- ```
-
-**Top 10 Policy names matching violations**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "vArmour"
-
- | where DeviceProduct == "AC"
-
- | where Activity == "POLICY_VIOLATION"
-
- | summarize count() by DeviceCustomString1
-
- | top 10 by count_ desc
-
- ```
-
-**Top 10 Source IPs generating violations**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "vArmour"
-
- | where DeviceProduct == "AC"
-
- | where Activity == "POLICY_VIOLATION"
-
- | summarize count() by SourceIP
-
- | top 10 by count_
-
- ```
-
-**Top 10 Destination IPs generating violations**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "vArmour"
-
- | where DeviceProduct == "AC"
-
- | where Activity == "POLICY_VIOLATION"
-
- | summarize count() by DestinationIP
-
- | top 10 by count_
-
- ```
-
-**Top 10 Application Protocols matching violations**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "vArmour"
-
- | where DeviceProduct == "AC"
-
- | where Activity == "POLICY_VIOLATION"
-
- | summarize count() by ApplicationProtocol
-
- | top 10 by count_
-
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] vArmour Application Controller via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/varmournetworks.varmour_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Recommended Wirex Network Forensics Platform Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-wirex-network-forensics-platform-via-ama.md
- Title: "[Recommended] WireX Network Forensics Platform via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] WireX Network Forensics Platform via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023----
-# [Recommended] WireX Network Forensics Platform via AMA connector for Microsoft Sentinel
-
-The WireX Systems data connector allows security professionals to integrate with Microsoft Sentinel to further enrich forensic investigations: it not only encompasses the contextual content offered by WireX, but lets you analyze data from other sources, create custom dashboards that give the most complete picture during a forensic investigation, and create custom workflows.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (WireXNFPevents)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [WireX Systems](https://wirexsystems.com/contact-us/) |
-
-## Query samples
-
-**All Imported Events from WireX**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "WireX"
-
- ```
-
-**Imported DNS Events from WireX**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "WireX"
- and ApplicationProtocol == "DNS"
-
- ```
-
-**Imported HTTP Events from WireX**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "WireX"
- and ApplicationProtocol == "HTTP"
-
- ```
-
-**Imported TDS Events from WireX**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "WireX"
- and ApplicationProtocol == "TDS"
-
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] WireX Network Forensics Platform via AMA make sure you have:
-- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wirexsystems1584682625009.wirex_network_forensics_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Rubrik Security Cloud Data Connector Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rubrik-security-cloud-data-connector-using-azure-functions.md
Title: "Rubrik Security Cloud data connector (using Azure Functions) connector f
description: "Learn how to install the connector Rubrik Security Cloud data connector (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 08/28/2023 Last updated : 11/29/2023
Use this method for automated deployment of the Rubrik connector.
Workspace Key Anomalies_table_name RansomwareAnalysis_table_name
- ThreatHunts_table_name
+ ThreatHunts_table_name
+ LogLevel
4. Mark the checkbox labeled **I agree to the terms and conditions stated above**. 5. Click **Purchase** to deploy.
If you're already signed in, go to the next step.
Anomalies_table_name RansomwareAnalysis_table_name ThreatHunts_table_name
+ LogLevel
logAnalyticsUri (optional) - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`. 4. Once all application settings have been entered, click **Save**.
If you're already signed in, go to the next step.
+Step 1 - Get the Function app endpoint
-**STEP 1 - To get the Azure Function url**
-
- 1. Go to Azure function Overview page and Click on "Functions" in the left blade.
- 2. Click on the Rubrik defined function for the event.
- 3. Go to "GetFunctionurl" and copy the function url.
+1. Go to the Azure Function app's Overview page and select the **Functions** tab.
+2. Select the function called **"RubrikHttpStarter"**.
+3. Go to **"Get function URL"** and copy the function URL.
+Step 2 - Add a webhook in RubrikSecurityCloud to send data to Microsoft Sentinel.
-**STEP 2 - Follow the Rubrik User Guide instructions to [Add a Webhook](https://docs.rubrik.com/en-us/saas/saas/common/adding_webhook.html) to begin receiving event information related to Ransomware Anomalies.**
-
+Follow the Rubrik User Guide instructions to [Add a Webhook](https://docs.rubrik.com/en-us/saas/saas/common/adding_webhook.html) to begin receiving event information related to Ransomware Anomalies.
 1. Select **Generic** as the webhook provider (this uses CEF-formatted event information)
- 2. Enter the Function App URL as the webhook URL endpoint for the Rubrik Microsoft Sentinel Solution
- 3. Select the Custom Authentication option
+ 2. Enter the URL part of the copied function URL as the webhook URL endpoint, replacing **{functionname}** with **"RubrikAnomalyOrchestrator"**, for the Rubrik Microsoft Sentinel solution
+ 3. Select the Advanced or Custom Authentication option
4. Enter x-functions-key as the HTTP header
- 5. Enter the Function access key as the HTTP value(Note: if you change this function access key in Microsoft Sentinel in the future you will need to update this webhook configuration)
- 6. Select the following Event types: Anomaly, Ransomware Investigation Analysis, Threat Hunt
- 7. Select the following severity levels: Critical, Warning, Informational
+ 5. Enter the function access key (the value of the code parameter from the copied function URL) as the HTTP value (Note: if you change this function access key in Microsoft Sentinel in the future, you'll need to update this webhook configuration)
+ 6. Select **Anomaly** as the event type
+ 7. Select the following severity levels: Critical, Warning, Informational
+ 8. Repeat the same steps to add webhooks for Ransomware Investigation Analysis and Threat Hunt.
+
+ >[!NOTE]
+ > While adding webhooks for Ransomware Investigation Analysis and Threat Hunt, replace **{functionname}** with **"RubrikRansomwareOrchestrator"** and **"RubrikThreatHuntOrchestrator"** respectively in copied function-url.
-*Now we are done with the rubrik Webhook configuration. Once the webhook events triggered , you should be able to see the Anomaly, Ransomware Analysis, ThreatHunt events from the Rubrik into respective LogAnalytics workspace table called "Rubrik_Anomaly_Data_CL", "Rubrik_Ransomware_Data_CL", "Rubrik_ThreatHunt_Data_CL".*
+*The Rubrik webhook configuration is now complete. Once the webhook events are triggered, you should be able to see the Anomaly, Ransomware Investigation Analysis, and Threat Hunt events from Rubrik in the respective Log Analytics workspace tables: "Rubrik_Anomaly_Data_CL", "Rubrik_Ransomware_Data_CL", and "Rubrik_ThreatHunt_Data_CL".*
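To confirm events are landing, you can query those custom tables directly. A minimal sketch (the tables only appear in the workspace after the first events arrive):

```kusto
// Count recent Rubrik events per custom table
union Rubrik_Anomaly_Data_CL, Rubrik_Ransomware_Data_CL, Rubrik_ThreatHunt_Data_CL
| where TimeGenerated > ago(7d)
| summarize Events = count() by Type
```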
sentinel Sonicwall Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sonicwall-firewall.md
+
+ Title: "SonicWall Firewall connector for Microsoft Sentinel"
+description: "Learn how to install the connector SonicWall Firewall to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# SonicWall Firewall connector for Microsoft Sentinel
+
+Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by SonicWall to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (SonicWall)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [SonicWall](https://www.sonicwall.com/support/) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "SonicWall"
+
+ | sort by TimeGenerated desc
+ ```
+
+**Summarize by destination IP and port**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "SonicWall"
+
+ | summarize count() by DestinationIP, DestinationPort, TimeGenerated
+
+ | sort by TimeGenerated desc
+ ```
+
+**Show all dropped traffic from the SonicWall Firewall**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "SonicWall"
+
+ | where AdditionalExtensions contains "fw_action='drop'"
+ ```
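If you want to trend dropped traffic over time, a variation on the sample above can bin events by hour. This is a sketch that reuses the same `fw_action='drop'` marker in `AdditionalExtensions` shown in the previous query:

```kusto
// Hourly count of dropped connections reported by the SonicWall firewall
CommonSecurityLog
| where DeviceVendor == "SonicWall"
| where AdditionalExtensions contains "fw_action='drop'"
| summarize DroppedEvents = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated asc
```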
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward SonicWall Firewall Common Event Format (CEF) logs to Syslog agent
+
+Set your SonicWall Firewall to send Syslog messages in CEF format to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
+
+ Follow the instructions to configure syslog forwarding on your SonicWall appliance. Make sure you select **local use 4** as the facility, and then select **ArcSight** as the Syslog format.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
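For example, a minimal check like the following confirms whether any SonicWall CEF events arrived in the last day:

```kusto
// Count SonicWall events received in the last 24 hours
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DeviceVendor == "SonicWall"
| summarize EventCount = count()
```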
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sonicwall-inc.sonicwall-networksecurity-azure-sentinal?tab=Overview) in the Azure Marketplace.
sentinel Symantec Proxysg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-proxysg.md
Title: "Symantec ProxySG connector for Microsoft Sentinel"
description: "Learn how to install the connector Symantec ProxySG to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 11/29/2023
To integrate with Symantec ProxySG make sure you have:
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Symantec Proxy SG and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/SymantecProxySG/Parsers/SymantecProxySG/SymantecProxySG.txt), on the second line of the query, enter the hostname(s) of your Symantec Proxy SG device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
+ > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Symantec Proxy SG and load the function code or click [here](https://aka.ms/sentinel-SymantecProxySG-parser), on the second line of the query, enter the hostname(s) of your Symantec Proxy SG device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
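Once the parser is active, you can invoke it like any other Log Analytics function. A minimal sketch, assuming the deployed function alias is `SymantecProxySG`:

```kusto
// Invoke the solution-deployed parser and inspect recent proxy events
SymantecProxySG
| where TimeGenerated > ago(1h)
| take 10
```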
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
3. Select New. 4. Enter a unique name in the Format Name field. 5. Click the radio button for **Custom format string** and paste the following string into the field.
- <p><code>date time time-taken c-ip cs-userdn cs-auth-groups x-exception-id sc-filter-result cs-categories cs(Referer) sc-status s-action cs-method rs(Content-Type) cs-uri-scheme cs-host cs-uri-port cs-uri-path cs-uri-query cs-uri-extension cs(User-Agent) s-ip sr-bytes rs-bytes x-virus-id x-bluecoat-application-name x-bluecoat-application-operation cs-uri-port x-cs-client-ip-country cs-threat-risk</code></p>
+ <p><code>1 $(date) $(time) $(time-taken) $(c-ip) $(cs-userdn) $(cs-auth-groups) $(x-exception-id) $(sc-filter-result) $(cs-categories) $(quot)$(cs(Referer))$(quot) $(sc-status) $(s-action) $(cs-method) $(quot)$(rs(Content-Type))$(quot) $(cs-uri-scheme) $(cs-host) $(cs-uri-port) $(cs-uri-path) $(cs-uri-query) $(cs-uri-extension) $(quot)$(cs(User-Agent))$(quot) $(s-ip) $(sr-bytes) $(rs-bytes) $(x-virus-id) $(x-bluecoat-application-name) $(x-bluecoat-application-operation) $(cs-uri-port) $(x-cs-client-ip-country) $(cs-threat-risk)</code></p>
6. Click the **OK** button. 7. Click the **Apply** button. 8. [Follow these instructions](https://knowledge.broadcom.com/external/article/166529/sending-access-logs-to-a-syslog-server.html) to enable syslog streaming of **Access** Logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address
sentinel Symantec Vip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-vip.md
Title: "Symantec VIP connector for Microsoft Sentinel"
description: "Learn how to install the connector Symantec VIP to connect your data source to Microsoft Sentinel." Previously updated : 10/23/2023 Last updated : 11/29/2023
Configure the facilities you want to collect and their severities.
3. Configure and connect the Symantec VIP
-Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+[Follow these instructions](https://help.symantec.com/cs/VIP_EG_INSTALL_CONFIG/VIP/v134652108_v128483142/Configuring-syslog) to configure the Symantec VIP Enterprise Gateway to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
sentinel Tenable Io Vulnerability Management Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/tenable-io-vulnerability-management-using-azure-function.md
Title: "Tenable.io Vulnerability Management (using Azure Functions) connector fo
description: "Learn how to install the connector Tenable.io Vulnerability Management (using Azure Function) to connect your data source to Microsoft Sentinel." Previously updated : 10/23/2023 Last updated : 11/29/2023
The [Tenable.io](https://www.tenable.com/products/tenable-io) data connector pro
| Connector attribute | Description | | | | | **Application settings** | TenableAccessKey<br/>TenableSecretKey<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) |
-| **Azure functions app code** | https://aka.ms/sentinel-TenableIO-functionapp |
+| **Azure function app code** | https://aka.ms/sentinel-TenableIO-functionapp |
| **Log Analytics table(s)** | Tenable_IO_Assets_CL<br/> Tenable_IO_Vuln_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Tenable](https://www.tenable.com/support/technical-support) |
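To spot-check ingestion once data starts flowing, you can query the tables listed above. A minimal sketch:

```kusto
// Check how many Tenable asset and vulnerability records arrived in the last day
union Tenable_IO_Assets_CL, Tenable_IO_Vuln_CL
| where TimeGenerated > ago(1d)
| summarize Records = count() by Type
```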
sentinel Varmour Application Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/varmour-application-controller.md
+
+ Title: "vArmour Application Controller connector for Microsoft Sentinel"
+description: "Learn how to install the connector vArmour Application Controller to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# vArmour Application Controller via Legacy Agent connector for Microsoft Sentinel
+
+vArmour reduces operational risk and increases cyber resiliency by visualizing and controlling application relationships across the enterprise. This vArmour connector enables streaming of Application Controller Violation Alerts into Microsoft Sentinel, so you can take advantage of search & correlation, alerting, & threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (vArmour)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [vArmour Networks](https://www.varmour.com/contact-us/) |
+
+## Query samples
+
+**Top 10 App to App violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | extend AppNameSrcDstPair = extract_all("AppName=;(\\w+)", AdditionalExtensions)
+
+ | summarize count() by tostring(AppNameSrcDstPair)
+
+ | top 10 by count_
+
+ ```
+
+**Top 10 Policy names matching violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by DeviceCustomString1
+
+ | top 10 by count_ desc
+
+ ```
+
+**Top 10 Source IPs generating violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by SourceIP
+
+ | top 10 by count_
+
+ ```
+
+**Top 10 Destination IPs generating violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by DestinationIP
+
+ | top 10 by count_
+
+ ```
+
+**Top 10 Application Protocols matching violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by ApplicationProtocol
+
+ | top 10 by count_
+
+ ```
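Beyond the top-N views above, the same filters can drive a time series. This sketch charts policy violations per day:

```kusto
// Daily trend of vArmour policy violations
CommonSecurityLog
| where DeviceVendor == "vArmour"
| where DeviceProduct == "AC"
| where Activity == "POLICY_VIOLATION"
| summarize Violations = count() by bin(TimeGenerated, 1d)
| render timechart
```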
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure the vArmour Application Controller to forward Common Event Format (CEF) logs to the Syslog agent
+
+Send Syslog messages in CEF format to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
+
+2.1 Download the vArmour Application Controller user guide
+
+Download the user guide from https://support.varmour.com/hc/en-us/articles/360057444831-vArmour-Application-Controller-6-0-User-Guide.
+
+2.2 Configure the Application Controller to Send Policy Violations
+
+In the user guide, refer to "Configuring Syslog for Monitoring and Violations" and follow steps 1 to 3.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/varmournetworks.varmour_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Wirex Network Forensics Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/wirex-network-forensics-platform.md
+
+ Title: "WireX Network Forensics Platform connector for Microsoft Sentinel"
+description: "Learn how to install the connector WireX Network Forensics Platform to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# WireX Network Forensics Platform connector for Microsoft Sentinel
+
+The WireX Systems data connector lets security professionals integrate with Microsoft Sentinel to further enrich forensics investigations: it not only encompasses the contextual content offered by WireX, but also lets you analyze data from other sources, create custom dashboards to give the most complete picture during a forensic investigation, and create custom workflows.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (WireXNFPevents)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [WireX Systems](https://wirexsystems.com/contact-us/) |
+
+## Query samples
+
+**All Imported Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+
+ ```
+
+**Imported DNS Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "DNS"
+
+ ```
+
+**Imported HTTP Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "HTTP"
+
+ ```
+
+**Imported TDS Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "TDS"
+
+ ```
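The per-protocol samples above can also be combined into a single breakdown. A minimal sketch:

```kusto
// Breakdown of imported WireX events by application protocol
CommonSecurityLog
| where DeviceVendor == "WireX"
| summarize Events = count() by ApplicationProtocol
| sort by Events desc
```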
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Contact WireX support (https://wirexsystems.com/contact-us/) in order to configure your NFP solution to send Syslog messages in CEF format to the proxy machine. Make sure that the central manager can send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wirexsystems1584682625009.wirex_network_forensics_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Withsecure Elements Via Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/withsecure-elements-via-connector.md
Title: "WithSecure Elements via connector for Microsoft Sentinel"
description: "Learn how to install the connector WithSecure Elements via to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 11/29/2023
When in EPP open account settings in top right corner. Then select Get managemen
2.4 Install Connector
-To install Elements Connector follow [Elements Connector Docs](https://help.f-secure.com/product.html#business/connector/latest/en/concept_BA55FDB13ABA44A8B16E9421713F4913-latest-en).
+To install Elements Connector follow [Elements Connector Docs](https://www.withsecure.com/userguides/product.html#business/connector/latest/en/).
2.5 Configure event forwarding
-If api access has not been configured during installation follow [Configuring API access for Elements Connector](https://help.f-secure.com/product.html#business/connector/latest/en/task_F657F4D0F2144CD5913EE510E155E234-latest-en).
+If API access wasn't configured during installation, follow [Configuring API access for Elements Connector](https://www.withsecure.com/userguides/product.html#business/connector/latest/en/task_F657F4D0F2144CD5913EE510E155E234-latest-en).
Then go to EPP, then **Profiles**, then use **For Connector**, where you can see the connector profiles. Create a new profile (or edit an existing profile that isn't read-only). Under **Event forwarding**, enable it. SIEM system address: **127.0.0.1:514**. Set the format to **Common Event Format**. The protocol is **TCP**. Save the profile and assign it to Elements Connector in the **Devices** tab. 3. Validate connection
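As with other CEF sources, you can check the `CommonSecurityLog` table in Log Analytics. This sketch assumes the events arrive with a WithSecure device vendor value; adjust the filter to match what your events actually report:

```kusto
// Inspect recent CEF events forwarded by Elements Connector
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DeviceVendor contains "WithSecure"
| take 10
```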
sentinel Zoom Reports Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zoom-reports-using-azure-functions.md
Title: "Zoom Reports (using Azure Functions) connector for Microsoft Sentinel"
description: "Learn how to install the connector Zoom Reports (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 11/29/2023
The [Zoom](https://zoom.us/) Reports data connector provides the capability to i
| Connector attribute | Description | | | |
-| **Application settings** | ZoomApiKey<br/>ZoomApiSecret<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) |
-| **Azure function app code** | https://aka.ms/sentinel-ZoomAPI-functionapp |
| **Kusto function alias** | Zoom | | **Kusto function url** | https://aka.ms/sentinel-ZoomAPI-parser | | **Log Analytics table(s)** | Zoom_CL<br/> |
Zoom_CL
To integrate with Zoom Reports (using Azure Functions) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **REST API Credentials/permissions**: **ZoomApiKey** and **ZoomApiSecret** are required for Zoom API. [See the documentation to learn more about API](https://developers.zoom.us/docs/internal-apps/jwt/#generating-jwts). Check all [requirements and follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/#generating-jwts) for obtaining credentials.
+- **REST API Credentials/permissions**: **AccountID**, **ClientID** and **ClientSecret** are required for Zoom API. [See the documentation to learn more about Zoom API](https://developers.zoom.us/docs/internal-apps/create/). [Follow the instructions for Zoom API configurations](https://aka.ms/sentinel-zoomreports-readme).
## Vendor installation instructions
To integrate with Zoom Reports (using Azure Functions) make sure you have:
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-ZoomAPI-parser) to create the Kusto functions alias, **Zoom**
+>[!NOTE]
+> This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Zoom and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/ZoomReports/Parsers/Zoom.yaml). The function usually takes 10-15 minutes to activate after solution installation/update.
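After the function activates, you can use the alias directly in Log Analytics. A minimal sketch using the **Zoom** alias described above:

```kusto
// Query recent Zoom events through the solution-deployed parser
Zoom
| where TimeGenerated > ago(1d)
| take 10
```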
**STEP 1 - Configuration steps for the Zoom API**
- [Follow the instructions](https://developers.zoom.us/docs/internal-apps/jwt/#generating-jwts) to obtain the credentials.
+ [Follow the instructions](https://developers.zoom.us/docs/internal-apps/create/) to obtain the credentials.
To integrate with Zoom Reports (using Azure Functions) make sure you have:
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Zoom Audit data connector using an ARM Tempate.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-ZoomAPI-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group.
-3. Enter the **ZoomApiKey**, **ZoomApiSecret** and deploy.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Zoom Reports data connector manually with Azure Functions (Deployment via Visual Studio Code).
--
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-ZoomAPI-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. ZoomXXXXX).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
--
-**2. Configure the Function App**
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select ** New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- ZoomApiKey
- ZoomApiSecret
- WorkspaceID
- WorkspaceKey
- logAnalyticsUri (optional)
-> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-4. Once all application settings have been entered, click **Save**.
service-bus-messaging Compare Messaging Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/compare-messaging-services.md
For more information, see [Service Bus overview](../service-bus-messaging/servic
## Use the services together In some cases, you use the services side by side to fulfill distinct roles. For example, an e-commerce site can use Service Bus to process the order, Event Hubs to capture site telemetry, and Event Grid to respond to events like an item was shipped.
-In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to Azure Synapse Analytics, see [Stream big data into a Azure Synapse Analytics](../event-grid/event-hubs-integration.md). The following image shows the workflow for streaming the data.
+In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to Azure Synapse Analytics, see [Stream big data into Azure Synapse Analytics](../event-grid/event-hubs-integration.md). The following image shows the workflow for streaming the data.
:::image type="content" source="./media/compare-messaging-services/overview.svg" alt-text="Diagram showing how Event Hubs, Service Bus, and Event Grid can be connected together.":::
service-bus-messaging Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md
Title: Use Azure Service Bus Explorer to run data operations
description: This article provides information on how to use the portal-based Azure Service Bus Explorer to access Azure Service Bus data. Previously updated : 09/26/2022 Last updated : 11/30/2023
Azure Service Bus allows sender and receiver client applications to decouple the
> > The community owned [open source Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer) is a standalone application and is different from this one.
-Operations run on an Azure Service Bus namespace are of two kinds
+Operations run on an Azure Service Bus namespace are of two kinds:
* **Management operations** - Create, update, delete of Service Bus namespace, queues, topics, and subscriptions. * **Data operations** - Send to and receive messages from queues, topics, and subscriptions.
To use the Service Bus Explorer, navigate to the Service Bus namespace on which
## Peek a message
-With the peek functionality, you can use the Service Bus Explorer to view the top 100 messages in a queue, subscription or dead-letter queue.
+With the peek functionality, you can use the Service Bus Explorer to view the top 100 messages in a queue, subscription, or dead-letter queue.
1. To peek messages, select **Peek Mode** in the Service Bus Explorer dropdown.
With the peek functionality, you can use the Service Bus Explorer to view the to
:::image type="content" source="./media/service-bus-explorer/peek-message-from-queue.png" alt-text="Screenshot with overview of peeked messages and message body content shown for peeked messages." lightbox="./media/service-bus-explorer/peek-message-from-queue.png":::
+ Switch to the **Message Properties** tab in the bottom pane to see the metadata.
+ :::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-2.png" alt-text="Screenshot with overview of peeked messages and message properties shown for peeked messages." lightbox="./media/service-bus-explorer/peek-message-from-queue-2.png"::: > [!NOTE]
The peek with options functionality allows you to use the Service Bus Explorer t
:::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-3.png" alt-text="Screenshot with overview of peeked messages and message body content shown for peek with advanced options." lightbox="./media/service-bus-explorer/peek-message-from-queue-3.png":::
+ Switch to the **Message Properties** tab in the bottom pane to see the metadata.
+
:::image type="content" source="./media/service-bus-explorer/peek-message-from-queue-4.png" alt-text="Screenshot with overview of peeked messages and message properties shown for peek with advanced options." lightbox="./media/service-bus-explorer/peek-message-from-queue-4.png"::: > [!NOTE]
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
VNET to VNET connection | Supported | [Learn more](./azure-to-azure-about-net
Virtual Network Service Endpoints | Supported | If you are restricting the virtual network access to storage accounts, ensure that the trusted Microsoft services are allowed access to the storage account. Accelerated networking | Supported | Accelerated networking can be enabled on the recovery VM only if it is enabled on the source VM also. [Learn more](azure-vm-disaster-recovery-with-accelerated-networking.md). Palo Alto Network Appliance | Not supported | With third-party appliances, there are often restrictions imposed by the provider inside the Virtual Machine. Azure Site Recovery needs agent, extensions, and outbound connectivity to be available. But the appliance doesn't let any outbound activity to be configured inside the Virtual Machine.
-IPv6 | Not supported | Mixed configurations that include both IPv4 and IPv6 are also not supported. Free up the subnet of the IPv6 range before any Site Recovery operation.
+IPv6 | Not supported | Mixed configurations that include both IPv4 and IPv6 are supported. However, Azure Site Recovery uses any free IPv4 address available; if there are no free IPv4 addresses in the subnet, the configuration isn't supported.
Private link access to Site Recovery service | Supported | [Learn more](azure-to-azure-how-to-enable-replication-private-endpoints.md) Tags | Supported | User-generated tags on NICs are replicated every 24 hours.
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
If you want to access all VMs in VMMap in a loop, you can use the following code
```powershell
+param (
+ [parameter(Mandatory=$false)]
+ [Object]$RecoveryPlanContext
+)
$VMinfo = $RecoveryPlanContext.VmMap | Get-Member | Where-Object MemberType -EQ NoteProperty | select -ExpandProperty Name
$vmMap = $RecoveryPlanContext.VmMap
foreach($VMID in $VMinfo)
static-web-apps Deploy Nextjs Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-hybrid.md
-# Deploy hybrid Next.js websites on Azure Static Web Apps
+# Deploy hybrid Next.js websites on Azure Static Web Apps (Preview)
In this tutorial, you learn to deploy a [Next.js](https://nextjs.org) website to [Azure Static Web Apps](overview.md), leveraging the support for Next.js features such as Server-Side Rendering (SSR) and API routes.
+>[!NOTE]
+> Next.js hybrid support is in preview.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
storage Storage Blob Scalable App Create Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-create-vm.md
Title: Create a VM and storage account for a scalable application in Azure description: Learn how to deploy a VM to be used to run a scalable application using Azure blob storage-+ Last updated 02/20/2018-+
storage Storage Blob Scalable App Download Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-download-files.md
Title: Download large amounts of random data from Azure Storage description: Learn how to use the Azure SDK to download large amounts of random data from an Azure Storage account -+ Last updated 02/04/2021-+ ms.devlang: csharp
storage Storage Blob Scalable App Upload Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-upload-files.md
Title: Upload large amounts of random data in parallel to Azure Storage description: Learn how to use the Azure Storage client library to upload large amounts of random data in parallel to an Azure Storage account-+ Last updated 02/04/2021-+ ms.devlang: csharp
storage Storage Blob Scalable App Verify Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-verify-metrics.md
Title: Verify throughput and latency metrics for a storage account in the Azure portal description: Learn how to verify throughput and latency metrics for a storage account in the portal.-+ Last updated 02/20/2018-+ # Verify throughput and latency metrics for a storage account
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
Check below table for features availability:
| | Support for global parameters | ✓ | ✗ | | **Template Gallery and Knowledge center** | Solution Templates | ✓<br><small>*Azure Data Factory Template Gallery* | ✓<br><small>*Synapse Workspace Knowledge center* | | **GIT Repository Integration** | GIT Integration | ✓ | ✓ |
-| **Monitoring** | Monitoring of Spark Jobs for Data Flow | ✗ | ✓<br><small>*Leverage the Synapse Spark pools* |
+| **Monitoring** | Monitoring of Spark Jobs for Data Flow | ✗ | ✓<br>*Leverage the Synapse Spark pools* |
## Next steps
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
|Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has Workaround| |Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) is not getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has Workaround| |Azure Synapse Workspace|[Failed to delete Synapse workspace & Unable to delete virtual network](#failed-to-delete-synapse-workspace--unable-to-delete-virtual-network)|Has Workaround|
-|Azure Synapse Apache Spark pool|[Certain spark job or task fails too early with Error Code 503 due to storage account throttling](#certain-spark-job-or-task-fails-too-early-with-error-code-503-due-to-storage-account-throttling)|Has Workaround|
## Azure Synapse Analytics serverless SQL pool active known issues summary
When using an ARM template, Bicep template, or direct REST API PUT operation to
**Workaround**: The problem can be mitigated by using a REST API PATCH operation or the Azure Portal UI to reverse and retry the desired configuration changes. The engineering team is aware of this behavior and working on a fix.
-## Azure Synapse Analytics Apache Spark pool active known issues summary
-
-The following are known issues with the Synapse Spark.
-
-### Certain spark job or task fails too early with Error Code 503 due to storage account throttling
-
-Starting at 00:00 UTC on October 3, 2023, few Azure Synapse Analytics Apache Spark pools might experience spark job/task failures due to storage API limit threshold being exceeded.
-
-**Workaround**: The engineering team is currently aware of this behavior and working on a fix. We recommend setting the following spark config at [pool level](spark/apache-spark-azure-create-spark-configuration.md#create-an-apache-spark-configuration)
-
-`spark.hadoop.fs.azure.io.retry.max.retries 19`
-- ## Recently Closed Known issues |Synapse Component|Issue|Status|Date Resolved
Starting at 00:00 UTC on October 3, 2023, few Azure Synapse Analytics Apache Spa
|Azure Synapse serverless SQL pool|[Queries using Microsoft Entra authentication fails after 1 hour](#queries-using-azure-ad-authentication-fails-after-1-hour)|Resolved|August 2023 |Azure Synapse serverless SQL pool|[Query failures while reading Cosmos DB data using OPENROWSET](#query-failures-while-reading-azure-cosmos-db-data-using-openrowset)|Resolved|March 2023 |Azure Synapse Apache Spark pool|[Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse Dedicated SQL Pool Connector for Apache Spark when using notebooks in pipelines](#failed-to-write-to-sql-dedicated-pool-from-synapse-spark-using-azure-synapse-dedicated-sql-pool-connector-for-apache-spark-when-using-notebooks-in-pipelines)|Resolved|June 2023
+|Azure Synapse Apache Spark pool|[Certain spark job or task fails too early with Error Code 503 due to storage account throttling](#certain-spark-job-or-task-fails-too-early-with-error-code-503-due-to-storage-account-throttling)|Resolved|November 2023
## Azure Synapse Analytics serverless SQL pool recently closed known issues summary
While using Azure Synapse Dedicated SQL Pool Connector for Apache Spark to write
**Status**: Resolved
+### Certain spark job or task fails too early with Error Code 503 due to storage account throttling
+
+Between October 3, 2023 and November 16, 2023, a few Azure Synapse Analytics Apache Spark pools might have experienced Spark job/task failures because the storage API limit threshold was exceeded.
+
+**Status**: Resolved
+ ## Next steps - [Synapse Studio troubleshooting](troubleshoot/troubleshoot-synapse-studio.md)
virtual-desktop Session Host Status Health Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/session-host-status-health-checks.md
+
+ Title: Session host statuses and health checks in Azure Virtual Desktop
+description: Learn about the different statuses and health checks for session hosts in Azure Virtual Desktop.
++ Last updated : 09/11/2023++
+# Session host statuses and health checks in Azure Virtual Desktop
+
+The Azure Virtual Desktop Agent regularly runs health checks on the session host. The agent assigns these health checks various statuses that include descriptions of how to fix common issues. This article tells you what each status means and how to act on them during a health check.
+
+## Session host statuses
+
+The following table lists and describes each potential status for session hosts in the Azure portal. *Available* is considered the ideal default status. Any other statuses represent potential issues that you need to take care of to ensure the service works properly.
+
+>[!NOTE]
+>If an issue is listed as **non-fatal**, the service can still run with the issue active. However, we recommend you resolve the issue as soon as possible to prevent future issues. If an issue is listed as **fatal**, it prevents the service from running. You must resolve all fatal issues to make sure your users can access the session host.
+
+| Session host status | Description | Load balancing | How to resolve related issues |
+||||--|
+|Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it's still listed as "Available." | New user sessions are load balanced here. |N/A|
+|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. | New user sessions are load balanced here. |Follow the directions in [Error: Session hosts are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-session-hosts-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
+|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status changes to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. | Not available for load balancing. |Turn on the session host. |
+|Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. | Not available for load balancing. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.|
+|Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This status doesn't affect new or existing user sessions. | New user sessions are load balanced here. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
+|Upgrading| This status means that the agent upgrade is in progress. This status updates to "Available" once the upgrade is done and the session host can accept connections again.| New user sessions are load balanced here. |If your session host is stuck in the "Upgrading" state, then [reinstall the agent](troubleshoot-agent.md#error-session-host-vms-are-stuck-in-upgrading-state).|
+
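If you send Azure Virtual Desktop diagnostics to a Log Analytics workspace, you can also surface these statuses with a query. This is a sketch that assumes the `WVDAgentHealthStatus` diagnostics table is enabled in your workspace; column names may vary:

```kusto
// Latest reported health status per session host, filtered to hosts needing attention
WVDAgentHealthStatus
| summarize arg_max(TimeGenerated, Status) by SessionHostName
| where Status != "Available"
```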
+## Health check
+
+The health check is a test run by the agent on the session host. The following table lists each type of health check and describes what it does.
+
+| Health check name | Description | What happens if the session host doesn't pass the check |
+||||
+| Domain joined | Verifies that the session host is joined to a domain controller. | If this check fails, users won't be able to connect to the session host. To solve this issue, join your session host to a domain. |
+| Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this issue, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. |
+| Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If restarting doesn't work, contact Microsoft support. |
+| App attach health check | Verifies that the [MSIX app attach](what-is-app-attach.md) service is working as intended during package staging or destaging. | If this check fails, it isn't fatal. However, certain apps stop working for end-users. |
+| Domain trust check | Verifies the session host isn't experiencing domain trust issues that could prevent authentication when a user connects to a session. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the authentication domain for the session host. |
+| Metadata service check | Verifies the metadata service is accessible and returns compute properties. | If this check fails, it isn't fatal. |
+
+## Next steps
+
+- For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).
+- To troubleshoot issues while creating an Azure Virtual Desktop environment and host pool in an Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).
+- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).
+- To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md).
+- To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md).
+- To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
az vmss update-instances \
## Troubleshoot
-## View VMHealth - single instance
+### View VMHealth - single instance
```azurepowershell-interactive Get-AzVmssVM -InstanceView `
virtual-machines Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/whats-new.md
- Title: "What's new for virtual machines"
-description: Learn about what's new for virtual machines in Azure.
--- Previously updated : 10/12/2022----
-# What's new for virtual machines
-
-This article describes what's new for virtual machines in Azure.
--
-## Spot Priority Mix for Flexible scale sets
--
-## Next steps
-
-For updates and announcements about Azure, see the [Microsoft Azure Blog](https://azure.microsoft.com/blog/).
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
az network asg delete --resource-group myResourceGroup --name myASG
To manage network security groups, security rules, and application security groups, your account must be assigned to the [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role. A [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) can also be used that's assigned the appropriate permissions as listed in the following tables:
+> [!NOTE]
+> You might not see the full list of service tags if the Network Contributor role has been assigned at the resource group level. To view the full list, you can assign this role at the subscription scope instead. If you can only allow Network Contributor for the resource group, you can also create a custom role for the permissions "Microsoft.Network/locations/serviceTags/read" and "Microsoft.Network/locations/serviceTagDetails/read" and assign them at the subscription scope, along with Network Contributor at the resource group scope.
+ ### Network security group | Action | Name |
vpn-gateway Vpn Gateway About Skus Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-skus-legacy.md
If you don't migrate your gateway SKUs by September 30, 2025, your gateway will
Important Dates:
-* **December 1, 2023**: No new gateway creations possible on Standard / High Performance SKUs
-* **November 30, 2024**: Begin migrating gateways to other SKUs
-* **September 30, 2025**: Standard/High Performance SKUs will be retired and gateways will be automatically migrated
+* **December 1, 2023**: No new gateway creations are possible using Standard or High Performance SKUs.
+* **November 30, 2024**: Begin migrating gateways to other SKUs.
+* **September 30, 2025**: Standard/High Performance SKUs will be retired and remaining deprecated legacy gateways will be automatically migrated and upgraded to AZ SKUs.
## <a name="agg"></a>Estimated aggregate throughput by SKU
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
description: Learn about frequently asked questions for VPN Gateway cross-premis
Previously updated : 10/24/2023 Last updated : 11/29/2023
We recommend that you use a Standard SKU public IP address for your VPN gateway.
For non-zone-redundant and non-zonal gateways (gateway SKUs that do *not* have *AZ* in the name), dynamic IP address assignment is supported, but is being phased out. When you use a dynamic IP address, the IP address doesn't change after it has been assigned to your VPN gateway. The only time the VPN gateway IP address changes is when the gateway is deleted and then re-created. The VPN gateway public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
-### How does the retirement of the public IP address Basic SKU affect my VPN gateways?
+### How does Public IP address Basic SKU retirement affect my VPN gateways?
We're taking action to ensure the continued operation of deployed VPN gateways that utilize Basic SKU public IP addresses. If you already have VPN gateways with Basic SKU public IP addresses, there is no need for you to take any action.
A virtual network gateway is fundamentally a multi-homed device with one NIC tapping into the customer private network and one NIC facing the public network.
No. The Basic SKU isn't available in the portal. You can create a Basic SKU VPN gateway using Azure CLI or PowerShell.
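For illustration, a minimal Azure CLI sketch (resource names are placeholders, and the virtual network and public IP address are assumed to already exist):

```azurecli-interactive
# Create a Basic SKU VPN gateway; deployment can take 45 minutes or more.
az network vnet-gateway create \
  --resource-group myResourceGroup \
  --name myBasicGateway \
  --vnet myVNet \
  --public-ip-address myGatewayIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku Basic \
  --no-wait
```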
-### More information about gateway types, requirements, and throughput
+### Where can I find information about gateway types, requirements, and throughput?
-For more information, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
+See the following articles:
+* [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md)
+* [About gateway SKUs](about-gateway-skus.md)
+
+## <a name="sku-deprecate"></a>SKU deprecation for legacy SKUs
+
+Standard and High Performance VPN Gateway legacy SKUs are deprecated and will be retired on September 30, 2025. For more information, see the [VPN Gateway legacy SKUs](vpn-gateway-about-skus-legacy.md#sku-deprecation) article.
+
+### Can I create a new Standard/High Performance SKU after the deprecation announcement on November 30, 2023?
+
+No. Starting December 1, 2023, you can't create new gateways with Standard or High Performance SKUs. You can create new gateways using the VpnGw1 and VpnGw2 SKUs, which are offered at the same price as the Standard and High Performance SKUs, respectively. See our [pricing page](https://azure.microsoft.com/pricing/details/vpn-gateway/).
+
+### How long will my existing gateways be supported on Standard/High Performance SKUs?
+
+All existing gateways using Standard or High Performance SKUs will be supported until September 30, 2025.
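+
+To find gateways that are still using the legacy SKUs, a read-only Azure CLI sketch (the resource group name is a placeholder; nothing is modified):

```azurecli-interactive
# List all virtual network gateways in a resource group with their SKU names.
az network vnet-gateway list \
  --resource-group myResourceGroup \
  --query "[].{name:name, sku:sku.name}" \
  --output table
```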
+
+### Do I need to migrate my Standard/High Performance gateway SKUs right now?
+
+No, there's no action required right now. You'll be able to migrate your SKUs starting December 2024. We'll send communication with detailed documentation about the migration steps.
+
+### Which SKU can I migrate my gateway to?
+
+When gateway SKU migration becomes available, SKUs can be migrated as follows:
+
+* Standard -> VpnGw1
+* High Performance -> VpnGw2
+
+### What if I want to migrate to an AZ SKU?
+
+You can't migrate your legacy SKU to an AZ SKU yourself. However, all gateways that are still using Standard or High Performance SKUs after September 30, 2025, will be automatically migrated and upgraded to the following SKUs:
+
+* Standard -> VpnGw1AZ
+* High Performance -> VpnGw2AZ
+
+You can use this strategy to have your SKUs automatically migrated and upgraded to an AZ SKU. You can then resize your SKU within that SKU family if necessary. See our [pricing page](https://azure.microsoft.com/pricing/details/vpn-gateway/) for AZ SKU pricing.
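+
+As a minimal sketch of resizing within the AZ family after the automatic migration (gateway and resource group names are placeholders):

```azurecli-interactive
# Resize an existing gateway to another SKU within the same family.
az network vnet-gateway update \
  --resource-group myResourceGroup \
  --name myVpnGateway \
  --sku VpnGw2AZ
```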
+
+### Will there be any pricing difference for my gateways after migration?
+
+If you migrate your SKUs by September 30, 2025, there will be no pricing difference: VpnGw1 and VpnGw2 SKUs are offered at the same price as Standard and High Performance SKUs, respectively. If you don't migrate by that date, your SKUs will be automatically migrated and upgraded to AZ SKUs, which are priced differently.
+
+### Will there be any performance impact on my gateways with this migration?
+
+Yes. You get better performance with VpnGw1 and VpnGw2: VpnGw1 (650 Mbps) provides a 6.5x throughput improvement over Standard (100 Mbps), and VpnGw2 (1 Gbps) provides a 5x improvement over High Performance (200 Mbps), at the same price as the respective legacy gateways. For more information, see [Gateway SKUs](about-gateway-skus.md).
+
+### What happens if I don't migrate SKUs by September 30, 2025?
+
+All gateways that are still using Standard or High Performance SKUs will be migrated automatically and upgraded to the following AZ SKUs:
+
+* Standard -> VpnGw1AZ
+* High Performance -> VpnGw2AZ
+
+Final communication will be sent before initiating migration on any gateways.
+
+### Will traffic through my gateway suddenly stop after the announcement?
+
+No, there will be no impact on existing gateways.
## <a name="s2s"></a>Site-to-site connections and VPN devices