Updates from: 09/21/2024 01:07:54
Service Microsoft Docs article Related commit history on GitHub Change details
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
description: Learn how to enable user sign-in to the API Management developer po
Previously updated : 12/08/2023 Last updated : 09/19/2024
After the Microsoft Entra provider is enabled:
1. Save the **Redirect URL** for later. :::image type="content" source="media/api-management-howto-aad/api-management-with-aad001.png" alt-text="Screenshot of adding identity provider in Azure portal.":::-
- > [!NOTE]
- > There are two redirect URLs:<br/>
- > * **Redirect URL** points to the latest developer portal of the API Management.
- > * **Redirect URL (deprecated portal)** points to the deprecated developer portal of API Management.
- >
- > We recommended you use the latest developer portal Redirect URL.
-
+
1. In your browser, open the Azure portal in a new tab. 1. Navigate to [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) to register an app in Active Directory. 1. Select **New registration**. On the **Register an application** page, set the values as follows:
After the Microsoft Entra provider is enabled:
* Select any option for **Expires**. * Choose **Add**. 1. Copy the client **Secret value** before leaving the page. You will need it later.
-1. Under **Manage** in the side menu, select **Authentication**.
- 1. Under the **Implicit grant and hybrid flows** section, select the **ID tokens** checkbox.
- 1. Select **Save**.
1. Under **Manage** in the side menu, select **Token configuration** > **+ Add optional claim**. 1. In **Token type**, select **ID**. 1. Select (check) the following claims: **email**, **family_name**, **given_name**.
After the Microsoft Entra provider is enabled:
> [!IMPORTANT] > Update the **Client secret** before the key expires.
-1. In the **Add identity provider** pane's **Allowed tenants** field, specify the Microsoft Entra instance's domains to which you want to grant access to the API Management service instance APIs.
- * You can separate multiple domains with newlines, spaces, or commas.
-
- > [!NOTE]
- > You can specify multiple domains in the **Allowed Tenants** section. A global administration must grant the application access to directory data before users can sign in from a different domain than the original app registration domain. To grant permission, the global administrator should:
- > 1. Go to `https://<URL of your developer portal>/aadadminconsent` (for example, `https://contoso.portal.azure-api.net/aadadminconsent`).
- > 1. Enter the domain name of the Microsoft Entra tenant to which they want to grant access.
- > 1. Select **Submit**.
-
+1. In **Signin tenant**, specify a tenant name or ID to use for sign-in to Microsoft Entra. If no value is specified, the Common endpoint is used.
+1. In **Allowed tenants**, add specific Microsoft Entra tenant names or IDs for sign-in to Microsoft Entra.
1. After you specify the desired configuration, select **Add**. 1. Republish the developer portal for the Microsoft Entra configuration to take effect. In the left menu, under **Developer portal**, select **Portal overview** > **Publish**. After the Microsoft Entra provider is enabled:
-* Users in the specified Microsoft Entra instance can [sign into the developer portal by using a Microsoft Entra account](#log_in_to_dev_portal).
+* Users in the specified Microsoft Entra tenant(s) can [sign into the developer portal by using a Microsoft Entra account](#log_in_to_dev_portal).
* You can manage the Microsoft Entra configuration on the **Developer portal** > **Identities** page in the portal. * Optionally configure other sign-in settings by selecting **Identities** > **Settings**. For example, you might want to redirect anonymous users to the sign-in page. * Republish the developer portal after any configuration change.
For steps, see [Switch redirect URIs to the single-page application type](../act
## Add an external Microsoft Entra group Now that you've enabled access for users in a Microsoft Entra tenant, you can:
-* Add Microsoft Entra groups into API Management.
+* Add Microsoft Entra groups to API Management. Groups that you add must be in the tenant where your API Management instance is deployed.
* Control product visibility using Microsoft Entra groups. 1. Navigate to the App Registration page for the application you registered in [the previous section](#enable-user-sign-in-using-azure-adportal).
api-management Developer Portal Wordpress Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-wordpress-plugin.md
In this step, create a new Microsoft Entra app. In later steps, you configure th
`https://<apim-instance-name>.developer.azure-api.net/`
-1. Under **Implicit grant and hybrid flows**, select **ID tokens** and select **Save**.
1. In the left menu, under **Manage**, select **Token configuration** > **+ Add optional claim**. 1. On the **Add optional claim** page, select **ID** and then select the following claims: **email, family_name, given_name, onprem_sid, preferred_username, upn**. Select **Add**. 1. When prompted, select **Turn on the Microsoft Graph email, profile permission**. Select **Add**.
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
Previously updated : 03/20/2023 Last updated : 09/19/2024 # Connect privately to API Management using an inbound private endpoint
You can configure an inbound [private endpoint](../private-link/private-endpoint
## Prerequisites - An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md).
- - The API Management instance must be hosted on the [`stv2` compute platform](compute-infrastructure.md). For example, create a new instance or, if you already have an instance in the Premium service tier, enable [zone redundancy](../reliability/migrate-api-mgt.md).
+ - The API Management instance must be hosted on the [`stv2` compute platform](compute-infrastructure.md).
- Do not deploy (inject) the instance into an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) virtual network. - A virtual network and subnet to host the private endpoint. The subnet may contain other Azure resources. - (Recommended) A virtual machine in the same or a different subnet in the virtual network, to test the private endpoint. ## Approval method for private endpoint
When you use the Azure portal to create a private endpoint, as shown in the next
1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
-1. In the left-hand menu, select **Network**.
+1. In the left-hand menu, under **Deployment + infrastructure**, select **Network**.
1. Select **Inbound private endpoint connections** > **+ Add endpoint**.
- :::image type="content" source="media/private-endpoint/add-endpoint-from-instance.png" alt-text="Add a private endpoint using Azure portal":::
+ :::image type="content" source="media/private-endpoint/add-endpoint-from-instance.png" alt-text="Screenshot showing how to add a private endpoint using the Azure portal.":::
1. In the **Basics** tab of **Create a private endpoint**, enter or select the following information:
When you use the Azure portal to create a private endpoint, as shown in the next
| Network Interface Name | Enter a name for the network interface, such as *myInterface* | | Region | Select a location for the private endpoint. It must be in the same region as your virtual network. It may differ from the region where your API Management instance is hosted. |
-1. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page. The following information about your API Management instance is already populated:
+1. Select the **Next: Resource** button at the bottom of the screen. The following information about your API Management instance is already populated:
* Subscription
- * Resource group
+ * Resource type
* Resource name 1. In **Resource**, in **Target sub-resource**, select **Gateway**.
- :::image type="content" source="media/private-endpoint/create-private-endpoint.png" alt-text="Create a private endpoint in Azure portal":::
+ :::image type="content" source="media/private-endpoint/create-private-endpoint.png" alt-text="Screenshot showing settings to create a private endpoint in the Azure portal.":::
-1. Select the **Virtual Network** tab or the **Next: Virtual Network** button at the bottom of the screen.
+1. Select the **Next: Virtual Network** button at the bottom of the screen.
1. In **Networking**, enter or select this information:
When you use the Azure portal to create a private endpoint, as shown in the next
| Private IP configuration | In most cases, select **Dynamically allocate IP address.** | | Application security group | Optionally select an [application security group](../virtual-network/application-security-groups.md). |
-1. Select the **DNS** tab or the **Next: DNS** button at the bottom of the screen.
+1. Select the **Next: DNS** button at the bottom of the screen.
1. In **Private DNS integration**, enter or select this information:
When you use the Azure portal to create a private endpoint, as shown in the next
| Resource group | Select your resource group. | | Private DNS zones | The default value is displayed: **(new) privatelink.azure-api.net**.
-1. Select the **Tags** tab or the **Next: Tabs** button at the bottom of the screen. If you desire, enter tags to organize your Azure resources.
+1. Select the **Next: Tags** button at the bottom of the screen. Optionally, enter tags to organize your Azure resources.
-1. Select **Review + create**.
+1. Select the **Next: Review + create** button at the bottom of the screen.
1. Select **Create**. ### List private endpoint connections to the instance
-After the private endpoint is created, it appears in the list on the API Management instance's **Inbound private endpoint connections** page in the portal.
-
-You can also use the [Private Endpoint Connection - List By Service](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-by-service) REST API to list private endpoint connections to the service instance.
-
+After the private endpoint is created and the service is updated, it appears in the list on the API Management instance's **Inbound private endpoint connections** page in the portal.
Note the endpoint's **Connection status**:
Note the endpoint's **Connection status**:
If a private endpoint connection is in pending status, an owner of the API Management instance must manually approve it before it can be used.
-If you have sufficient permissions, approve a private endpoint connection on the API Management instance's **Private endpoint connections** page in the portal.
+If you have sufficient permissions, approve a private endpoint connection on the API Management instance's **Private endpoint connections** page in the portal. In the connection's context (...) menu, select **Approve**.
-You can also use the API Management [Private Endpoint Connection - Create Or Update](/rest/api/apimanagement/current-ga/private-endpoint-connection/create-or-update) REST API.
-
-```rest
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{apimServiceName}privateEndpointConnections/{privateEndpointConnectionName}?api-version=2021-08-01
-```
+You can also use the API Management [Private Endpoint Connection - Create Or Update](/rest/api/apimanagement/private-endpoint-connection/create-or-update) REST API to approve pending private endpoint connections.
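For reference, a minimal sketch of such an approval request follows; the `api-version` and the description text are illustrative, so check the linked REST reference for current values:

```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{apimServiceName}/privateEndpointConnections/{privateEndpointConnectionName}?api-version=2021-08-01

{
  "properties": {
    "privateLinkServiceConnectionState": {
      "status": "Approved",
      "description": "Approved by the service owner"
    }
  }
}
```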
### Optionally disable public network access
-To optionally limit incoming traffic to the API Management instance only to private endpoints, disable public network access. Use the [API Management Service - Create Or Update](/rest/api/apimanagement/current-ga/api-management-service/create-or-update) REST API to set the `publicNetworkAccess` property to `Disabled`.
+To optionally limit incoming traffic to the API Management instance only to private endpoints, disable public network access.
> [!NOTE]
-> The `publicNetworkAccess` property can only be used to disable public access to API Management instances configured with a private endpoint, not with other networking configurations such as VNet injection.
+> Public network access can only be disabled in API Management instances configured with a private endpoint, not with other networking configurations such as VNet injection.
-```rest
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{apimServiceName}?api-version=2021-08-01
-Authorization: Bearer {{authToken.response.body.access_token}}
-Content-Type: application/json
+To disable public network access using the Azure CLI, run the following [az apim update](/cli/azure/apim#az-apim-update) command, substituting the names of your API Management instance and resource group:
+```azurecli
+az apim update --name my-apim-service --resource-group my-resource-group --public-network-access false
```
-Use the following JSON body:
-
-```json
-{
- [...]
- "properties": {
- "publicNetworkAccess": "Disabled"
- }
-}
-```
+
+You can also use the [API Management Service - Update](/rest/api/apimanagement/api-management-service/update) REST API to disable public network access by setting the `publicNetworkAccess` property to `Disabled`.
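A minimal sketch of that request follows; the `api-version` is illustrative, and only the `publicNetworkAccess` property needs to be set in the body:

```rest
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{apimServiceName}?api-version=2021-08-01

{
  "properties": {
    "publicNetworkAccess": "Disabled"
  }
}
```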
## Validate private endpoint connection
After the private endpoint is created, confirm its DNS settings in the portal:
1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
-1. In the left-hand menu, select **Network** > **Inbound private endpoint connections**, and select the private endpoint you created.
+1. In the left-hand menu, under **Deployment + infrastructure**, select **Network** > **Inbound private endpoint connections**, and select the private endpoint you created.
-1. In the left-hand navigation, select **DNS configuration**.
+1. In the left-hand navigation, under **Settings**, select **DNS configuration**.
1. Review the DNS records and IP address of the private endpoint. The IP address is a private address in the address space of the subnet where the private endpoint is configured.
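For example, from the test virtual machine inside the virtual network, you can confirm that resolution with a quick lookup (the hostname below is illustrative):

```bash
# Run from a VM in the virtual network; the default gateway hostname should
# resolve to the private endpoint's private IP address.
nslookup my-apim-service.azure-api.net
```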
API calls initiated within the virtual network to the default Gateway endpoint s
### Test from internet
-From outside the private endpoint path, attempt to call the API Management instance's default Gateway endpoint. If public access is disabled, output will include an error with status code `403` and a message similar to:
+From outside the private endpoint path, attempt to call the API Management instance's default Gateway endpoint. If public access is disabled, output includes an error with status code `403` and a message similar to:
``` Request originated from client public IP address xxx.xxx.xxx.xxx, public network access on this 'Microsoft.ApiManagement/service/my-apim-service' is disabled.
Request originated from client public IP address xxx.xxx.xxx.xxx, public network
To connect to 'Microsoft.ApiManagement/service/my-apim-service', please use the Private Endpoint from inside your virtual network. ```
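As an illustrative check (the hostname is a placeholder), a request like the following from outside the virtual network returns the 403 response shown above when public access is disabled:

```bash
# Expect HTTP 403 with the "public network access ... is disabled" message
# when called from outside the private endpoint path.
curl -i https://my-apim-service.azure-api.net/
```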
-## Next steps
+## Related content
* Use [policy expressions](api-management-policy-expressions.md#ref-context-request) with the `context.request` variable to identify traffic from the private endpoint. * Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md), including [Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-* Learn more about [managing private endpoint connections](../private-link/manage-private-endpoint.md).
+* [Manage private endpoint connections](../private-link/manage-private-endpoint.md).
* [Troubleshoot Azure private endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md). * Use a [Resource Manager template](https://azure.microsoft.com/resources/templates/api-management-private-endpoint/) to create an API Management instance and a private endpoint with private DNS integration.
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
The following conditions must be in place for AGIC to function as expected:
``` + * Is your [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) annotated with: `kubernetes.io/ingress.class: azure/application-gateway`? AGIC only watches for Kubernetes Ingress resources that have this annotation. ```bash
azure-app-configuration Feature Management Dotnet Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/feature-management-dotnet-reference.md
zone_pivot_groups: feature-management
[![Microsoft.FeatureManagement](https://img.shields.io/nuget/vpre/Microsoft.FeatureManagement?label=Microsoft.FeatureManagement)](https://www.nuget.org/packages/Microsoft.FeatureManagement/4.0.0-preview3)<br> [![Microsoft.FeatureManagement.AspNetCore](https://img.shields.io/nuget/vpre/Microsoft.FeatureManagement.AspNetCore?label=Microsoft.FeatureManagement.AspNetCore)](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore/4.0.0-preview3)<br> [![Microsoft.FeatureManagement.Telemetry.ApplicationInsights](https://img.shields.io/nuget/v/Microsoft.FeatureManagement.Telemetry.ApplicationInsights?label=Microsoft.FeatureManagement.Telemetry.ApplicationInsights)](https://www.nuget.org/packages/Microsoft.FeatureManagement.Telemetry.ApplicationInsights/4.0.0-preview3)<br>
-[![Microsoft.FeatureManagement.Telemetry.ApplicationInsights.AspNetCore](https://img.shields.io/nuget/v/Microsoft.FeatureManagement.Telemetry.ApplicationInsights.AspNetCore?label=Microsoft.FeatureManagement.Telemetry.ApplicationInsights.AspNetCore)](https://www.nuget.org/packages/Microsoft.FeatureManagement.Telemetry.ApplicationInsights.AspNetCore/4.0.0-preview3)<br>
:::zone-end
The `telemetry` section of a feature flag has the following properties:
| Property | Description | | - | - | | `enabled` | Specifies whether telemetry should be published for the feature flag. |
-| `metadata` | A collection of key-value pairs, modeled as a dictionary, that can be used to attach custom metadata about the feature flag to evaluation events. |
+| `metadata` | A collection of key-value pairs, modeled as a dictionary, which can be used to attach custom metadata about the feature flag to evaluation events. |
### Custom Telemetry Publishing
-The feature manager has its own `ActivitySource` named "Microsoft.FeatureManagement". If `telemetry` is enabled for a feature flag, whenever the evaluation of the feature flag is started, the feature manager will start an `Activity`. When the feature flag evaluation is finished, the feature manager will add an `ActivityEvent` named `"FeatureFlag"` to the current activity. The `"FeatureFlag"` event will have tags which include the information about the feature flag evaluation. Specifically, the tags will include the following fields:
-
-| Tag | Description |
-| - | - |
-| `FeatureName` | The feature flag name. |
-| `Enabled` | Whether the feature flag is evaluated as enabled. |
-| `Variant` | The assigned variant. |
-| `VariantAssignmentReason` | The reason why the variant is assigned. |
-| `TargetingId` | The user id used for targeting. |
+The feature manager has its own `ActivitySource` named "Microsoft.FeatureManagement". If `telemetry` is enabled for a feature flag, whenever the evaluation of the feature flag is started, the feature manager will start an `Activity`. When the feature flag evaluation is finished, the feature manager will add an `ActivityEvent` named `FeatureFlag` to the current activity. The `FeatureFlag` event will have tags that include information about the feature flag evaluation, following the fields defined in the [FeatureEvaluationEvent](https://github.com/microsoft/FeatureManagement/tree/main/Schema/FeatureEvaluationEvent) schema.
> [!NOTE] > All key value pairs specified in `telemetry.metadata` of the feature flag will also be included in the tags.
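As an illustration (not part of the library's documented setup), a minimal `ActivityListener` sketch like the following can surface those evaluation events and their tags in a console app; the source and event names follow the text above:

```csharp
using System;
using System.Diagnostics;

// Listen only to the feature manager's ActivitySource and print the tags of
// each "FeatureFlag" event emitted when a flag evaluation finishes.
var listener = new ActivityListener
{
    ShouldListenTo = source => source.Name == "Microsoft.FeatureManagement",
    Sample = (ref ActivityCreationOptions<ActivityContext> options) =>
        ActivitySamplingResult.AllDataAndRecorded,
    ActivityStopped = activity =>
    {
        foreach (var evt in activity.Events)
        {
            if (evt.Name != "FeatureFlag")
            {
                continue;
            }

            foreach (var tag in evt.Tags)
            {
                Console.WriteLine($"{tag.Key}: {tag.Value}");
            }
        }
    }
};

ActivitySource.AddActivityListener(listener);
```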
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config
description: Describes how to customize configuration values for the Bicep linter Previously updated : 07/30/2024 Last updated : 09/19/2024 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
}, "use-stable-vm-image": { "level": "warning"
+ },
+ "what-if-short-circuiting": {
+ "level": "warning"
} } }
azure-resource-manager Bicep Core Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-core-diagnostics.md
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP136' />BCP136 | Error | Expected a loop item variable identifier at this location. | | <a id='BCP137' />BCP137 | Error | Loop expected an expression of type "{LanguageConstants.Array}" but the provided value is of type "{actualType}". | | <a id='BCP138' />BCP138 | Error | For-expressions aren't supported in this context. For-expressions may be used as values of resource, module, variable, and output declarations, or values of resource and module properties. |
-| <a id='BCP139' />BCP139 | Warning | A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope. |
| <a id='BCP139' />[BCP139](./diagnostics/bcp139.md) | Error | A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope. |
| <a id='BCP140' />BCP140 | Error | The multi-line string at this location isn't terminated. Terminate it with "'''. | | <a id='BCP141' />BCP141 | Error | The expression can't be used as a decorator as it isn't callable. | | <a id='BCP142' />BCP142 | Error | Property value for-expressions can't be nested. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP167' />BCP167 | Error | Expected the "{" character or the "if" keyword at this location. | | <a id='BCP168' />BCP168 | Error | Length must not be a negative value. | | <a id='BCP169' />BCP169 | Error | Expected resource name to contain {expectedSlashCount} "/" character(s). The number of name segments must match the number of segments in the resource type. |
-| <a id='BCP170' />BCP170 | Error | Expected resource name to not contain any "/" characters. Child resources with a parent resource reference (via the parent property or via nesting) must not contain a fully-qualified name. |
+| <a id='BCP170' />BCP170 | Error | Expected resource name to not contain any "/" characters. Child resources with a parent resource reference (via the parent property or via nesting) must not contain a fully qualified name. |
| <a id='BCP171' />BCP171 | Error | Resource type "{resourceType}" isn't a valid child resource of parent "{parentResourceType}". | | <a id='BCP172' />BCP172 | Error | The resource type can't be validated due to an error in parent resource "{resourceName}". | | <a id='BCP173' />BCP173 | Error | The property "{property}" can't be used in an existing resource declaration. |
If you need more information about a particular diagnostic code, select the **Fe
| <a id='BCP261' />BCP261 | Error | A using declaration must be present in this parameters file. | | <a id='BCP262' />BCP262 | Error | More than one using declaration is present. | | <a id='BCP263' />BCP263 | Error | The file specified in the using declaration path doesn't exist. |
-| <a id='BCP264' />BCP264 | Error | Resource type "{resourceTypeName}" is declared in multiple imported namespaces ({ToQuotedStringWithCaseInsensitiveOrdering(namespaces)}), and must be fully-qualified. |
+| <a id='BCP264' />BCP264 | Error | Resource type "{resourceTypeName}" is declared in multiple imported namespaces ({ToQuotedStringWithCaseInsensitiveOrdering(namespaces)}), and must be fully qualified. |
| <a id='BCP265' />BCP265 | Error | The name "{name}" isn't a function. Did you mean "{knownFunctionNamespace}.{knownFunctionName}"? | | <a id='BCP266' />BCP266 | Error | Expected a metadata identifier at this location. | | <a id='BCP267' />BCP267 | Error | Expected a metadata declaration after the decorator. |
azure-resource-manager Bcp139 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/diagnostics/bcp139.md
+
+ Title: BCP139
+description: Error - A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope.
++ Last updated : 09/20/2024++
+# Bicep error code - BCP139
+
+This error occurs when you use [`resource`](../file.md#resources) to deploy resources to a different scope than the target one. You should use [`module`](../file.md#modules) instead. For more information, see the following articles based on the scope:
+
+- Resource group: [Scope to different resource group](../deploy-to-resource-group.md#scope-to-different-resource-group).
+- Subscription: [Deployment scopes](../deploy-to-subscription.md#deployment-scopes).
+- Management group: [Deployment scopes](../deploy-to-management-group.md#deployment-scopes).
+- Tenant: [Deployment scopes](../deploy-to-tenant.md#deployment-scopes).
+
+## Error description
+
+`A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope.`
+
+## Solution
+
+To deploy resources to a scope that isn't the target scope, add a `module`.
+
+## Examples
+
+The following example deploys a storage account resource to a different resource group in the same subscription. The example raises the error because the `module` declaration type isn't used:
+
+```bicep
+param otherResourceGroup string
+param location string
+
+// resource deployed to a different resource group in the same subscription
+resource storage 'Microsoft.Storage/storageAccounts@2023-05-01' = {
+ name: uniqueString(resourceGroup().id)
+ scope: resourceGroup(otherResourceGroup)
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+```
+
+You can fix the error by using the `module` declaration type:
+
+```bicep
+param otherResourceGroup string
+
+// module deployed to a different resource group in the same subscription
+module exampleModule 'module.bicep' = {
+ name: 'deployStorageToAnotherRG'
+ scope: resourceGroup(otherResourceGroup)
+}
+```
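The referenced `module.bicep` isn't shown in the article; a minimal sketch of what it might contain for the first fixed example follows (property values are illustrative):

```bicep
// module.bicep (illustrative): the storage account is declared inside the
// module, so it deploys at whatever scope the module is given.
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2023-05-01' = {
  name: uniqueString(resourceGroup().id)
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```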
+
+The following example deploys a resource group to a different subscription. The example raises the error because `module` isn't used:
+
+```bicep
+targetScope = 'subscription'
+
+param otherSubscriptionID string
+
+// resource deployed to a different subscription
+resource exampleResource 'Microsoft.Resources/resourceGroups@2024-03-01' = {
+ name: 'deployToDifferentSub'
+ scope: subscription(otherSubscriptionID)
+ location: 'eastus'
+}
+```
+
+You can fix the error by using the `module` declaration type:
+
+```bicep
+targetScope = 'subscription'
+
+param otherSubscriptionID string
+
+// module deployed to a different subscription
+module exampleModule 'module.bicep' = {
+ name: 'deployToDifferentSub'
+ scope: subscription(otherSubscriptionID)
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep core diagnostics](../bicep-core-diagnostics.md).
azure-resource-manager Linter Rule What If Short Circuiting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-what-if-short-circuiting.md
+
+ Title: Linter rule - what-if short circuiting
+description: Linter rule - what-if short circuiting
++ Last updated : 09/19/2024++
+# Linter rule - what-if short circuiting
+
+This rule detects when runtime values are passed as parameters to modules that use them to determine resource IDs (for example, when the parameter determines the name, subscriptionId, resourceGroup, condition, scope, or apiVersion of one or more resources within the module), and flags potential what-if short-circuiting.
+
+> [!NOTE]
+> This rule is off by default. Change the level in [bicepconfig.json](./bicep-config-linter.md) to enable it.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`what-if-short-circuiting`
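For example, a `bicepconfig.json` that turns the rule on as a warning might look like this minimal sketch:

```json
{
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        "what-if-short-circuiting": {
          "level": "warning"
        }
      }
    }
  }
}
```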
+
+## Solution
+
+This rule checks for runtime values used to determine resource IDs within modules. It alerts you if your Bicep code could cause what-if short-circuiting. In the following example, **appServiceOutputs** and **appServiceTest** would be flagged for what-if short-circuiting because they pass runtime values as parameters to the module, which uses them to name the resource:
+
+**main.bicep**
+
+```bicep
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = {
+ name: 'storageAccountName'
+ location: 'eastus'
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+module appServiceModule 'modules/appService.bicep' = {
+ name: 'appService2'
+ params: {
+ appServiceName: 'test'
+ }
+}
+
+module appServiceOutputs 'modules/appService.bicep' = {
+ name: 'appService3'
+ params: {
+ appServiceName: appServiceModule.outputs.outputName
+ }
+}
+
+module appServiceTest 'modules/appService.bicep' = {
+ name:'test3'
+ params: {
+ appServiceName: storageAccount.properties.accessTier
+ }
+}
+```
+
+**modules/appService.bicep**
+
+```bicep
+param appServiceName string
+
+resource appServiceApp 'Microsoft.Web/sites@2023-12-01' = {
+ name: appServiceName
+ location: 'eastus'
+ properties: {
+ httpsOnly: true
+ }
+}
+
+output outputName string = 'outputName'
+```
+
+To avoid this issue, use deployment-time constants for values that are used to determine resource IDs, as in the sketch that follows.
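For example, a variant of the flagged modules above that avoids the warning might pass a deployment-time value instead; the parameter and module names here are illustrative:

```bicep
// A deployment-time constant (a parameter or literal) keeps what-if able to
// evaluate the module without short-circuiting.
param appServiceBaseName string

module appServiceFixed 'modules/appService.bicep' = {
  name: 'appService4'
  params: {
    appServiceName: '${appServiceBaseName}-app'
  }
}
```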
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter
description: Learn how to use Bicep linter. Previously updated : 07/30/2024 Last updated : 09/19/2024 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [use-secure-value-for-secure-inputs](./linter-rule-use-secure-value-for-secure-inputs.md) - [use-stable-resource-identifiers](./linter-rule-use-stable-resource-identifier.md) - [use-stable-vm-image](./linter-rule-use-stable-vm-image.md)
+- [what-if-short-circuiting](./linter-rule-what-if-short-circuiting.md)
You can customize how the linter rules are applied. To overwrite the default settings, add a **bicepconfig.json** file and apply custom settings. For more information about applying those settings, see [Add custom settings in the Bicep config file](bicep-config-linter.md).
azure-vmware Architecture Private Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/architecture-private-clouds.md
AV64 SKUs are available per Availability Zone, the table below lists the Azure r
| Australia East | AZ03 | AV36P, AV64 | Yes |7| | Australia Southeast | AZ01 | AV36 | No | N/A | | Brazil South | AZ02 | **AV36** | No | N/A |
-| Canada Central | AZ02 | AV36, **AV36P,** AV64| No |7|
+| Canada Central | AZ02 | AV36, **AV36P,** AV64| No |7|
| Canada East | N/A | AV36| No | N/A |
-| Central India | AZ03 | AV36P, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
-| Central US | AZ01 | AV36P, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
-| Central US | AZ02 | **AV36**, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Central India | AZ03 | AV36P (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| Central US | AZ01 | AV36P (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| Central US | AZ02 | **AV36** (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
| Central US | AZ03 | AV36P, AV64| No |7|
-| East Asia | AZ01 | AV36, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| East Asia | AZ01 | AV36 (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
| East US | AZ01 | **AV36P**, AV64| Yes |7| | East US | AZ02 | **AV36P**, AV64 | Yes | 7 | | East US | AZ03 | **AV36**, **AV36P**, AV64 | Yes | 7 | | East US 2 | AZ01 | **AV36**, AV64 | No |7| | East US 2 | AZ02 | AV36P, **AV52**, AV64 | No | 7|
-| France Central | AZ01 | **AV36**, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
-| Germany West Central | AZ01 | AV36P, (AV64 Planned H2 2024)| Yes |N/A (7 Planned H2 2024) |
-| Germany West Central | AZ02 | **AV36**, (AV64 Planned H2 2024)| Yes |N/A (7 Planned H2 2024) |
+| France Central | AZ01 | **AV36** (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Germany West Central | AZ01 | AV36P (AV64 Planned H2 2024)| Yes |N/A (7 Planned H2 2024) |
+| Germany West Central | AZ02 | **AV36** (AV64 Planned H2 2024)| Yes |N/A (7 Planned H2 2024) |
| Germany West Central | AZ03 | AV36, **AV36P**, AV64 | Yes |7|
-| Italy North | AZ03 | AV36P, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
-| Japan East | AZ02 | **AV36**, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
-| Japan West | AZ01 | **AV36**, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| Italy North | AZ03 | AV36P (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Japan East | AZ02 | **AV36** (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Japan West | AZ01 | **AV36** (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
| North Central US | AZ01 | **AV36**, AV64 | No |7| | North Central US | AZ02 | AV36P, AV64 | No |7| | North Europe | AZ02 | AV36, AV64 | No |7|
-| Qatar Central | AZ03 | AV36P, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
-| South Africa North | AZ03 | AV36, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| Qatar Central | AZ03 | AV36P (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| South Africa North | AZ03 | AV36 (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
| South Central US | AZ01 | AV36, AV64 | No | 7 | | South Central US | AZ02 | **AV36P**, AV52, AV64 | No | 7 | | Southeast Asia | AZ02 | **AV36** | No | N/A |
-| Sweden Central | AZ01 | AV36, (AV64 Planned H2 2024)| No | N/A (7 Planned H2 2024)|
+| Sweden Central | AZ01 | AV36 (AV64 Planned H2 2024)| No | N/A (7 Planned H2 2024)|
| Switzerland North | AZ01 | **AV36**, AV64 | No | 7 |
-| Switzerland North | AZ03 | AV36P, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Switzerland North | AZ03 | AV36P (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
| Switzerland West | AZ01 | **AV36**, AV64 | No | 7 | | UAE North | AZ03 | AV36P | No | N/A | | UK South | AZ01 | AV36, AV36P, AV52, AV64 | Yes | 7 |
Azure VMware Solution monitors the following conditions on the host:
- Hardware power status - Storage status - Connection failure
+
+## Alert codes and remediation
+| Error code | Error details | Recommended action |
+|--|--|--|
+| EPC_SCSIDEVICE_SHARINGMODE | This error occurs when a virtual machine is configured to use a SCSI controller that is engaged in bus-sharing, which blocks a maintenance operation. | Follow the KB article to remove any SCSI controller engaged in bus-sharing from VMs: https://knowledge.broadcom.com/external/article?legacyId=79910 |
+| EPC_CDROM_EMULATEMODE | This error occurs when a CD-ROM on the virtual machine uses emulate mode and its ISO image isn't accessible. | Follow the KB article to remove any CD-ROM mounted in emulate mode on workload virtual machines, or detach the ISO. Use passthrough mode to mount any CD-ROM: https://knowledge.broadcom.com/external/article?legacyId=79306 |
+| EPC_DATASTORE_INACCESSIBLE | This error occurs when an external datastore attached to the Azure VMware Solution private cloud becomes inaccessible. | Follow the guidance to remove any stale datastore attached to the cluster: /azure/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts?tabs=azure-portal#performance-best-practices |
+| EPC_NWADAPTER_STALE | This error occurs when a connected network interface on the virtual machine uses a network adapter that becomes inaccessible. | Follow the KB article to remove any stale network adapters attached to virtual machines: https://knowledge.broadcom.com/external/article/318738/troubleshooting-the-migration-compatibil.html |
> [!NOTE] > Azure VMware Solution tenant admins must not edit or delete the previously defined VMware vCenter Server alarms because they are managed by the Azure VMware Solution control plane on vCenter Server. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-email.md
Title: Email Azure Backup Reports description: Create automated tasks to receive periodic reports via email Previously updated : 09/11/2024 Last updated : 09/20/2024
When attempting to authorize the Microsoft 365 API connection, you might see an
This error can occur if the mailbox is on a dedicated Microsoft Exchange Server and isn't a valid Office 365 mailbox. [Learn more](/connectors/office365/#common-errors)
-To get a valid Office 365 mailbox, submit a request to your Exchange or Global administrator to migrate the mailbox account. Users who don't have administrator permissions can't migrate accounts. For information on how to migrate the mailbox account, see [How to migrate mailbox data by using the Exchange Admin Center in Office 365](/exchange/troubleshoot/move-or-migrate-mailboxes/migrate-data-with-admin-center).
+To get a valid Office 365 mailbox, submit a request to your Exchange administrator to migrate the mailbox account. Users who don't have administrator permissions can't migrate accounts. For information on how to migrate the mailbox account, see [How to migrate mailbox data by using the Exchange Admin Center in Office 365](/exchange/troubleshoot/move-or-migrate-mailboxes/migrate-data-with-admin-center).
### Scenario 4: Error in authorizing Azure Monitor Logs connection
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Before acquiring a phone number, make sure your subscription meets the [geograph
For more information, see the [phone number types](./telephony/plan-solution.md) concept page and the [telephony concept](./telephony/telephony-concept.md) overview page.
-If you want to purchase more phone numbers or place a special order, follow the [instructions here](https://github.com/Azure/Communication/blob/master/special-order-numbers.md). If you would like to port toll-free phone numbers from external accounts to their Azure Communication Services account, follow the [instructions here](https://github.com/Azure/Communication/blob/master/port-numbers.md).
+Number purchase limits can be increased through a request to Azure Support.
+
+1. Open the [Azure portal](https://ms.portal.azure.com/) and sign in.
+2. Select [Help+Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+3. Click **Create new support request**.
+4. In the **Describe your issue** text box, enter `Technical` then click **Go**.
+5. From the **Select a service** dropdown menu, select **Service and Subscription Limits (Quotas)** then click **Next**.
+6. At the Problem description, choose the **Issue type**, **Subscription**, and **Quota type** then click **Next**.
+7. Review any **Recommended solution** if available, then click **Next**.
+8. Add **Additional details** as needed, then click **Next**.
+9. At **Review + create** check the information, make changes as needed, then click **Create**.
++ ## Identity
You can send a limited number of email messages. If you exceed the following lim
### Size Limits
-| **Name** | Limit |
-|--|--|
-|Number of recipients in Email|50 |
-|Total email request size (including attachments) |10 MB |
+| **Name** | Limit |
+| | |
+| Number of recipients in Email | 50 |
+| Total email request size (including attachments) | 10 MB |
+
+For all message size limits, consider that Base64 encoding increases the size of the message. Increase the size value to account for the growth that occurs after the message attachments and any other binary data are Base64 encoded. Base64 encoding increases the size of the message by about 33%. For example, if you specify a maximum message size value of approximately 10 MB, you can expect a realistic maximum message size of approximately 7.5 MB.
### Send attachments larger than 10 MB
When you implement error handling, use the HTTP error code 429 to detect throttl
You can find more information on Microsoft Graph [throttling](/graph/throttling) limits in the [Microsoft Graph](/graph/overview) documentation. ## Next steps
-See the [help and support](../support.md) options.
+See the [help and support](../support.md) options.
communication-services Outbound Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/outbound-calling.md
- Title: Outbound calling with Toll-Free numbers - Azure Communication Services
-description: Information about outbound calling limitations with Toll-Free numbers
----- Previously updated : 03/10/2023-----
-# Toll-Free telephone numbers and outbound calling
-Outbound calling capability with Toll-Free telephone numbers is available in many countries/regions where Azure Communication Services is available. However, there can be some limitations when trying to place outbound calls with toll-free telephone numbers.
-
-**Why outbound calls from Toll-Free numbers may not work?**
-
-Microsoft provides Toll-Free telephone numbers that have outbound calling capabilities, but it's important to note that this feature is only provided on a "best-effort" basis. In some countries/regions, toll-free numbers are considered as an "inbound only" service from regulatory perspective. This means, that in some scenarios, the receiving carrier may not allow incoming calls from toll-free telephone numbers. Since Microsoft and our carrier-partners don't have control over other carrier networks, we can't guarantee that outbound calls will reach all possible destinations.
communication-services Toll Free Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/toll-free-calling.md
+
+ Title: Calling with toll-free numbers - Azure Communication Services
+description: Information about inbound and outbound calling limitations with toll-free numbers
+++++ Last updated : 03/10/2023+++++
+# Outbound and inbound calling with toll-free numbers
+Azure Communication Services supports inbound and outbound calling capability with toll-free numbers in many countries. However, there are some common limitations that you should be aware of.
+
+**Outbound calling with toll-free numbers**
+
+Microsoft provides toll-free telephone numbers that have outbound calling capabilities, but it's important to note that this feature is only provided on a "best-effort" basis. In some countries/regions, toll-free numbers are considered an "inbound only" service from a regulatory perspective. This means that in some scenarios, the receiving carrier may not allow incoming calls from toll-free telephone numbers. Since Microsoft and our carrier partners don't have control over other carrier networks, we can't guarantee that outbound calls reach all possible destinations.
++
+**Inbound calls to toll-free numbers**
+
+All of our toll-free numbers have inbound calling capability, but note that inbound calls to your toll-free numbers are only guaranteed to work if both the toll-free number and the caller's phone number are from the same country/region. If a call to your toll-free number originates outside that country/region, we can't guarantee delivery of the call. In most cases, international reachability on our toll-free numbers is supported, but there can be certain international carrier and country combinations where we can't guarantee that the call reaches your toll-free number.
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Once you start development, check out the [known issues page](../known-issues.md
- **Media Stats** - The Calling SDK provides comprehensive insights into [the metrics](media-quality-sdk.md) of your VoIP and video calls. With this information, developers have a clearer understanding of call quality and can make informed decisions to further enhance their communication experience. - **Video Constraints** - The Calling SDK provides APIs that gain the ability to regulate [video quality among other parameters](../../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls by adjusting parameters such as resolution and frame rate supporting different call situations for different levels of video quality - **User Facing Diagnostics (UFD)** - The Calling SDK provides [events](user-facing-diagnostics.md) that are designed to provide insights into underlying issues that could affect call quality. Developers can subscribe to triggers such as weak network signals or muted microphones, ensuring that they're always aware of any factors impacting the calls.-- **Custom context** - The Calling SDK provides APIs supporting calling with one user-to-user and up to five custom headers. The headers are received within the incoming call. ## Detailed capabilities
The following list presents the set of features that are currently available in
| | Noise suppression | ✔️ | ✔️ | ✔️ | ✔️ | | | Automatic gain control (AGC) | ❌ | ✔️ | ✔️ | ✔️ | | Notifications <sup>4</sup> | [Push notifications](../../how-tos/calling-sdk/push-notifications.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| Custom context | Place a call with user-to-user or custom headers | ✔️ | ❌ | ❌ | ❌ |
+| Custom context | Add [User-to-User (UUI)](../../how-tos/calling-sdk/call-context.md) or custom headers to a call | ✔️ | ❌ | ❌ | ❌ |
<sup>1</sup> The capability to Mute Others is currently in public preview.
confidential-computing Skr Flow Confidential Containers Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-containers-azure-container-instance.md
Secure Key Release (SKR) flow with Azure Key Vault (AKV) with confidential conta
## Side-Car helper container provided by Azure
-An [open sourced GitHub project "confidential side-cars"](https://github.com/microsoft/confidential-sidecar-containers) details how to build this container and what parameters/environment variables are required for you to prepare and run this side-car container. The current side car implementation provides various HTTP REST APIs that your primary application container can use to fetch the key from AKV. The integration through Microsoft Azure Attestation(MAA) is already built in. The preparation steps to run the side-car SKR container can be found in details [here](https://github.com/microsoft/confidential-sidecar-containers/tree/main/examples/skr).
+An [open-source GitHub project "confidential side-cars"](https://github.com/microsoft/confidential-sidecar-containers) details how to build this container and what parameters/environment variables are required for you to prepare and run this side-car container. The current side-car implementation provides various HTTP REST APIs that your primary application container can use to fetch the key from AKV. The integration through Microsoft Azure Attestation (MAA) is already built in. The preparation steps to run the side-car SKR container are described in detail [here](https://github.com/microsoft/confidential-sidecar-containers/tree/main/examples/skr).
Your main application container can call the side-car web API endpoints as defined in the example below. The side-car runs within the same container group and is a local endpoint to your application container. Full details of the API can be found [here](https://github.com/microsoft/confidential-sidecar-containers/blob/main/cmd/skr/README.md)
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
The following information shows the currently supported [Microsoft Azure offers]
| **Azure Government** | Azure Government pay-as-you-go | Pay-as-you-go_2014-09-01 | MS-AZR-USGOV-0003P | October 2, 2018 | | **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014 | | **Enterprise Agreement (EA)** | Microsoft Azure Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-0017P | May 2014 |
-| **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | N/A | March 2019┬╣ |
-| **Microsoft Customer Agreement** | Microsoft Azure Plan for Dev/Test | MSDNDevTest_2014-09-01 | N/A | March 2019┬╣ |
+| **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | MS-AZR-0017G | March 2019┬╣ |
+| **Microsoft Customer Agreement** | Microsoft Azure Plan for Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148G | March 2019┬╣ |
| **Microsoft Customer Agreement supported by partners** | Microsoft Azure Plan | CSP_2015-05-01, CSP_MG_2017-12-01, and CSPDEVTEST_2018-05-01┬│ | N/A | October 2019 | | **Microsoft Developer Network (MSDN)** | MSDN Platforms┬▓ | MSDN_2014-09-01 | MS-AZR-0062P | October 2, 2018 | | **Pay-as-you-go** | Pay-as-you-go | Pay-as-you-go_2014-09-01 | MS-AZR-0003P | October 2, 2018 |
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-file-storage.md
Previously updated : 07/31/2024 Last updated : 09/13/2024 # Copy data from or to Azure Files by using Azure Data Factory
To use system-assigned managed identity authentication, follow these steps:
2. Grant the managed identity permission in Azure Files. For more information on the roles, see this [article](../role-based-access-control/built-in-roles/storage.md#storage-file-data-smb-share-reader).
- - **As source**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Reader** role.
- - **As sink**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Contributor** role.
+ - **As source**, in **Access control (IAM)**, grant at least the **Storage File Data Privileged Reader** role.
+ - **As sink**, in **Access control (IAM)**, grant at least the **Storage File Data Privileged Contributor** role.
These properties are supported for an Azure Files linked service:
To use user-assigned managed identity authentication, follow these steps:
1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant permission in Azure Files. For more information on the roles, see this [article](../role-based-access-control/built-in-roles/storage.md#storage-file-data-smb-share-reader).
- - **As source**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Reader** role.
- - **As sink**, in **Access control (IAM)**, grant at least the **Storage File Data SMB Share Contributor** role.
+ - **As source**, in **Access control (IAM)**, grant at least the **Storage File Data Privileged Reader** role.
+ - **As sink**, in **Access control (IAM)**, grant at least the **Storage File Data Privileged Contributor** role.
2. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
If you select the **Install Azure PowerShell** type for your express custom setu
If you select the **Install licensed component** type for your express custom setup, you can then select an integrated component from our ISV partners in the **Component name** drop-down list:
-* If you select the **SentryOne's Task Factory** component, you can install the [Task Factory](https://www.solarwinds.com/resources/it-glossary/ssis-components) suite of components from SentryOne on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **2020.21.2**.
+* If you select the **SentryOne's Task Factory** component, you can install the [Task Factory](https://www.solarwinds.com/resources/it-glossary/ssis-components) suite of components from SentryOne on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **2021.18.1**.
* If you select the **oh22's HEDDA.IO** component, you can install the [HEDDA.IO](https://github.com/oh22is/HEDDA.IO/tree/master/SSIS-IR) data quality/cleansing component from oh22 on your Azure-SSIS IR. To do so, you need to purchase their service beforehand. The current integrated version is **1.0.14**. * If you select the **oh22's SQLPhonetics.NET** component, you can install the [SQLPhonetics.NET](https://appsource.microsoft.com/product/web-apps/oh22.sqlphonetics-ssis) data quality/matching component from oh22 on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **1.0.45**.
-* If you select the **KingswaySoft's SSIS Integration Toolkit** component, you can install the [SSIS Integration Toolkit](https://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365) suite of connectors for CRM/ERP/marketing/collaboration apps, such as Microsoft Dynamics/SharePoint/Project Server, Oracle/Salesforce Marketing Cloud, etc. from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **21.2**.
+* If you select the **KingswaySoft's SSIS Integration Toolkit** component, you can install the [SSIS Integration Toolkit](https://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-365) suite of connectors for CRM/ERP/marketing/collaboration apps, such as Microsoft Dynamics/SharePoint/Project Server, Oracle/Salesforce Marketing Cloud, etc. from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **23.1**.
-* If you select the **KingswaySoft's SSIS Productivity Pack** component, you can install the [SSIS Productivity Pack](https://www.kingswaysoft.com/products/ssis-productivity-pack) suite of components from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **21.2**.
+* If you select the **KingswaySoft's SSIS Productivity Pack** component, you can install the [SSIS Productivity Pack](https://www.kingswaysoft.com/products/ssis-productivity-pack) suite of components from KingswaySoft on your Azure-SSIS IR by entering the product license key that you purchased from them in the **License key** box. The current integrated version is **23.1**.
-* If you select the **Theobald Software's Xtract IS** component, you can install the [Xtract IS](https://theobald-software.com/en/xtract-is/) suite of connectors for SAP system (ERP, S/4HANA, BW) from Theobald Software on your Azure-SSIS IR by dragging & dropping/uploading the product license file that you purchased from them into the **License file** box. The current integrated version is **6.5.13.18**.
+* If you select the **Theobald Software's Xtract IS** component, you can install the [Xtract IS](https://theobald-software.com/en/xtract-is/) suite of connectors for SAP system (ERP, S/4HANA, BW) from Theobald Software on your Azure-SSIS IR by dragging & dropping/uploading the product license file that you purchased from them into the **License file** box. The current integrated version is **6.9.3.26**.
* If you select the **AecorSoft's Integration Service** component, you can install the [Integration Service](https://www.aecorsoft.com/en/products/integrationservice) suite of connectors for SAP and Salesforce systems from AecorSoft on your Azure-SSIS IR. To do so, enter the product license key that you purchased from them beforehand in the **License key** text box. The current integrated version is **3.0.00**.
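For readers who provision the Azure-SSIS IR programmatically rather than through the portal, the licensed components above map to the integration runtime's express custom setup configuration. The following Python sketch only assembles the JSON fragment that such a request could carry; the property names (`expressCustomSetupProperties`, `ComponentSetup`, `componentName`, `licenseKey`) follow the Data Factory express custom setup schema, and the component name and license key shown are placeholders, not values confirmed by this article.

```python
# Minimal sketch: building the express custom setup fragment that an Azure-SSIS IR
# provisioning request could carry for a licensed third-party component.
# The property names follow the Data Factory express custom setup schema; the
# component name and license key below are placeholders, not real values.
import json


def component_setup(component_name: str, license_key: str) -> dict:
    """Return one express custom setup entry for a licensed component."""
    return {
        "type": "ComponentSetup",
        "typeProperties": {
            "componentName": component_name,
            "licenseKey": {"type": "SecureString", "value": license_key},
        },
    }


ssis_properties_fragment = {
    "expressCustomSetupProperties": [
        component_setup("ExampleVendor.ExampleComponent", "<license-key>"),
    ]
}

print(json.dumps(ssis_properties_fragment, indent=2))
```

In an ARM template or SDK call, a fragment like this would typically sit under the managed integration runtime's SSIS properties; check the Data Factory express custom setup documentation for the exact placement.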
event-grid Event Schema Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-azure-signalr.md
Last updated 12/02/2022
-# Azure Event Grid event schema for SignalR Service
+# Azure SignalR as an Azure Event Grid source
This article provides the properties and schema for SignalR Service events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). It also gives you a list of quick starts and tutorials to use Azure SignalR as an event source.
event-grid Event Schema Communication Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-communication-services.md
Last updated 09/19/2023
-# Event Handling in Azure Communication Services
+# Azure Communication Services as an Azure Event Grid source
Azure Communication Services integrates with [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to deliver real-time event notifications in a reliable, scalable, and secure manner. The purpose of this article is to help you configure your applications to listen to Communication Services events. For example, you may want to update a database, create a work item and deliver a push notification whenever an SMS message is received by a phone number associated with your Communication Services resource.
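As a hedged illustration of that scenario, the sketch below shows an Event Grid-triggered Azure Function written in Python that reacts when an SMS event arrives. The event type string matches the Communication Services SMS-received event, but the payload field names (`from`, `to`, `message`) are assumptions about the event shape rather than values taken from this article, and the logging calls stand in for your own database update, work item, or push notification.

```python
# Minimal sketch of an Event Grid-triggered Azure Function (Python) that reacts to a
# Communication Services SMS event. The payload field names are assumptions about the
# SMSReceived event shape; replace the logging calls with your own processing.
import logging

import azure.functions as func


def main(event: func.EventGridEvent) -> None:
    # event.event_type identifies which Communication Services event fired.
    if event.event_type == "Microsoft.Communication.SMSReceived":
        payload = event.get_json()
        sender = payload.get("from")        # assumed field name
        recipient = payload.get("to")       # assumed field name
        message = payload.get("message")    # assumed field name
        logging.info("SMS from %s to %s: %s", sender, recipient, message)
        # ...update a database, create a work item, send a push notification...
    else:
        logging.info("Ignoring event type %s", event.event_type)
```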
event-grid Event Schema Resource Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-resource-notifications.md
Last updated 09/26/2023
-# Azure Resource Notifications overview
+# Azure Resource Notifications as an Azure Event Grid source
Azure Resource Notifications (ARN) is the unified pub/sub service for notifications about all Azure resources. ARN taps into a diverse range of publishers, and that data is accessible through ARN's dedicated system topics in Azure Event Grid. Here are the key advantages:
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
You can configure **private links** to connect to Azure Event Grid to **publish
Here's the list of regions where the new MQTT broker and namespace topics features are available:
- Australia East
- Australia South East
+- Australia Central
+- Australia Central 2
- Brazil South
- Brazil Southeast
- Canada Central
- East Asia
- East US
- East US 2
+- West US
- France Central
+- France South
+- Germany North
- Germany West Central
- Israel Central
- Italy North
- Japan West
- Korea Central
- Korea South
+- Mexico Central
- North Central US
- North Europe
- Norway East
- Poland Central
- South Africa West
+- South Africa North
- South Central US
- South India
- Southeast Asia
+- Spain Central
- Sweden Central
+- Sweden South
- Switzerland North
+- Switzerland West
- UAE North
+- UAE Central
- UK South
- UK West
- West Europe
- West US 2
- West US 3
+- West Central US
+
## Next steps
event-grid Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/sdk-overview.md
The management SDKs enable you to create, update, and delete Event Grid topics a
| SDK | Package | Reference documentation | Samples |
| -- | - | -- | - |
| REST API | | [REST reference](/rest/api/eventgrid/controlplane-preview/ca-certificates) | |
-| .NET | [`Azure.ResourceManager.EventGrid`](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/). The beta package has the latest `Namespaces` API. | .NET reference: [Preview](/dotnet/api/overview/azure/resourcemanager.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true), [GA](/dotnet/api/overview/azure/event-grid) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.ResourceManager.EventGrid/samples) |
-| Java | [`azure-resourcemanager-eventgrid`](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-eventgrid/). The beta package has the latest `Namespaces` API. | Java reference: [Preview](/java/api/overview/azure/resourcemanager-eventgrid-readme?view=azure-java-preview&preserve-view=true), [GA](/java/api/overview/azure/event-grid) | [Java samples](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-resourcemanager-eventgrid/src/samples) |
-| JavaScript | [`@azure/arm-eventgrid`](https://www.npmjs.com/package/@azure/arm-eventgrid). The beta package has the latest `Namespaces` API. | JavaScript reference: [Preview](/javascript/api/overview/azure/arm-eventgrid-readme?view=azure-node-preview&preserve-view=true), [GA](/javascript/api/overview/azure/event-grid) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/arm-eventgrid) |
-| Python | [`azure-mgmt-eventgrid`](https://pypi.org/project/azure-mgmt-eventgrid/). The beta package has the latest `Namespaces` API. | Python reference: [Preview](/python/api/azure-mgmt-eventgrid/?view=azure-python-preview&preserve-view=true), [GA](/python/api/overview/azure/event-grid) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-mgmt-eventgrid/generated_samples)
+| .NET | [`Azure.ResourceManager.EventGrid`](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/). The package has the latest `Namespaces` API. | .NET reference: [Preview](/dotnet/api/overview/azure/resourcemanager.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true), [GA](/dotnet/api/overview/azure/event-grid) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.ResourceManager.EventGrid/samples) |
+| Java | [`azure-resourcemanager-eventgrid`](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-eventgrid/). The package has the latest `Namespaces` API. | Java reference: [Preview](/java/api/overview/azure/resourcemanager-eventgrid-readme?view=azure-java-preview&preserve-view=true), [GA](/java/api/overview/azure/event-grid) | [Java samples](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-resourcemanager-eventgrid/src/samples) |
+| JavaScript | [`@azure/arm-eventgrid`](https://www.npmjs.com/package/@azure/arm-eventgrid). The package has the latest `Namespaces` API. | JavaScript reference: [Preview](/javascript/api/overview/azure/arm-eventgrid-readme?view=azure-node-preview&preserve-view=true), [GA](/javascript/api/overview/azure/event-grid) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/arm-eventgrid) |
+| Python | [`azure-mgmt-eventgrid`](https://pypi.org/project/azure-mgmt-eventgrid/). The package has the latest `Namespaces` API. | Python reference: [Preview](/python/api/azure-mgmt-eventgrid/?view=azure-python-preview&preserve-view=true), [GA](/python/api/overview/azure/event-grid) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-mgmt-eventgrid/generated_samples)
| Go | [Azure SDK for Go](https://github.com/Azure/azure-sdk-for-go) | | [Go samples](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main/sdk/resourcemanager/eventgrid) |
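As a hedged illustration of the management surface in the table above, the following sketch uses the Python package listed there (`azure-mgmt-eventgrid`) together with `azure-identity` to create a custom topic. The subscription ID, resource group, topic name, and region are placeholders; treat the exact method shape as an assumption to verify against the linked reference documentation.

```python
# Minimal sketch: creating an Event Grid custom topic with the azure-mgmt-eventgrid
# management SDK. The subscription ID, resource group, topic name, and region are
# placeholders; DefaultAzureCredential picks up whatever Azure identity is available.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventgrid import EventGridManagementClient

subscription_id = "<subscription-id>"
client = EventGridManagementClient(DefaultAzureCredential(), subscription_id)

# begin_create_or_update returns a poller; result() blocks until provisioning finishes.
poller = client.topics.begin_create_or_update(
    "<resource-group>",          # resource group name
    "<topic-name>",              # topic name
    {"location": "westus2"},     # minimal topic definition
)
topic = poller.result()
print(topic.endpoint)
```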
The data plane SDKs enable you to post events to topics by taking care of authen
| Programming language | Package | Reference documentation | Samples |
| -- | - | - | -- |
| REST API | | [REST reference](/rest/api/eventgrid/dataplane-preview/publish-cloud-events) | |
-| .NET | [`Azure.Messaging.EventGrid`](https://www.nuget.org/packages/Azure.Messaging.EventGrid/). The beta package has the latest `Namespaces` API. | [.NET reference](/dotnet/api/overview/azure/messaging.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid/samples) |
-|Java | [`azure-messaging-eventgrid`](https://central.sonatype.com/artifact/com.azure/azure-messaging-eventgrid/). The beta package has the latest `Namespaces` API. | [Java reference](/java/api/overview/azure/messaging-eventgrid-readme?view=azure-java-preview&preserve-view=true) | [Java samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-messaging-eventgrid/src/samples/java) |
-| JavaScript | [`@azure/eventgrid`](https://www.npmjs.com/package/@azure/eventgrid). The beta package has the latest `Namespaces` API. | [JavaScript reference](/javascript/api/overview/azure/eventgrid-readme?view=azure-node-preview&preserve-view=true) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid) |
-| Python | [`azure-eventgrid`](https://pypi.org/project/azure-eventgrid/). The beta package has the latest `Namespaces` API. | [Python reference](/python/api/overview/azure/eventgrid-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid/samples) |
+| .NET | [`Azure.Messaging.EventGrid`](https://www.nuget.org/packages/Azure.Messaging.EventGrid/). The package has the latest `Namespaces` API. | [.NET reference](/dotnet/api/overview/azure/messaging.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid/samples) |
+|Java | [`azure-messaging-eventgrid`](https://central.sonatype.com/artifact/com.azure/azure-messaging-eventgrid/). The package has the latest `Namespaces` API. | [Java reference](/java/api/overview/azure/messaging-eventgrid-readme?view=azure-java-preview&preserve-view=true) | [Java samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-messaging-eventgrid/src/samples/java) |
+| JavaScript | [`@azure/eventgrid`](https://www.npmjs.com/package/@azure/eventgrid). The package has the latest `Namespaces` API. | [JavaScript reference](/javascript/api/overview/azure/eventgrid-readme?view=azure-node-preview&preserve-view=true) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid) |
+| Python | [`azure-eventgrid`](https://pypi.org/project/azure-eventgrid/). The package has the latest `Namespaces` API. | [Python reference](/python/api/overview/azure/eventgrid-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid/samples) |
| Go | [Azure SDK for Go](https://github.com/Azure/azure-sdk-for-go) | | |
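As a hedged illustration of the data plane surface in the table above, the following sketch uses the Python package listed there (`azure-eventgrid`) to publish a single event to a custom topic. The endpoint, access key, and event contents are placeholders; verify details against the linked Python reference.

```python
# Minimal sketch: publishing a single EventGridEvent to a custom topic with the
# azure-eventgrid data plane SDK. The topic endpoint and access key are placeholders;
# the event subject, type, and data are illustrative only.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridEvent, EventGridPublisherClient

client = EventGridPublisherClient(
    "<topic-endpoint>",                      # e.g. https://TOPIC.REGION-1.eventgrid.azure.net/api/events
    AzureKeyCredential("<topic-access-key>"),
)

event = EventGridEvent(
    subject="orders/42",
    event_type="Contoso.Orders.OrderPlaced",
    data={"orderId": 42, "total": 19.99},
    data_version="1.0",
)

# send() accepts a single event or a list of events.
client.send(event)
```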
hdinsight-aks Cluster Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/cluster-storage.md
Title: Introduction to cluster storage
description: Understand how Azure HDInsight on AKS integrates with Azure Storage Previously updated : 08/3/2023 Last updated : 09/20/2024+ # Introduction to cluster storage
hdinsight-aks Concept Azure Monitor Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/concept-azure-monitor-integration.md
Title: Metrics and monitoring in HDInsight on AKS
description: Learn about how HDInsight on AKS interacts with Azure Monitoring. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Azure Monitor integration
hdinsight-aks Concept Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/concept-security.md
Title: Security in HDInsight on AKS
description: An introduction to security with managed identity from Microsoft Entra ID in HDInsight on AKS. Previously updated : 05/11/2024 Last updated : 09/20/2024+ # Overview of enterprise security in Azure HDInsight on AKS
hdinsight-aks Control Egress Traffic From Hdinsight On Aks Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/control-egress-traffic-from-hdinsight-on-aks-clusters.md
Title: Control network traffic from HDInsight on AKS Cluster pools and cluster
description: A guide to configure and manage inbound and outbound network connections from HDInsight on AKS. Previously updated : 05/21/2024 Last updated : 09/20/2024+ # Control network traffic from HDInsight on AKS Cluster pools and clusters
In the following sections, we describe each method in detail.
### Outbound with load balancer
-The load balancer is used for egress through an HDInsight on AKS assigned public IP. When you configure the outbound type of load balancer on your cluster pool, you can expect egress out of the load balancer created by the HDInsight on AKS.
+The load balancer is used for egress through a HDInsight on AKS assigned public IP. When you configure the outbound type of load balancer on your cluster pool, you can expect egress out of the load balancer created by HDInsight on AKS.
You can configure the outbound with load balancer configuration using the Azure portal.
hdinsight-aks Create Cluster Error Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-error-dictionary.md
Title: Create a cluster - error dictionary in Azure HDInsight on AKS
description: Learn how to troubleshoot errors that occur when creating Azure HDInsight on AKS clusters Previously updated : 08/31/2023 Last updated : 09/20/2024+ # Cluster creation errors on Azure HDInsight on AKS
hdinsight-aks Create Cluster Using Arm Template Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template-script.md
description: How to create an ARM template of a cluster in Azure HDInsight on AK
Previously updated : 02/12/2024 Last updated : 09/20/2024+ # Export cluster ARM template - Azure portal
hdinsight-aks Create Cluster Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template.md
description: Learn how to Create cluster ARM template using Azure CLI
Previously updated : 02/12/2024 Last updated : 09/20/2024+ # Export cluster ARM template - Azure CLI
hdinsight-aks Customize Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/customize-clusters.md
Title: Customize Azure HDInsight on AKS clusters
description: Add custom components to HDInsight on AKS clusters by using script actions. Script actions are Bash scripts that can be used to customize the cluster configuration. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Customize Azure HDInsight on AKS clusters using script actions
Azure HDInsight on AKS provides a configuration method called Script Actions
## Understand script actions
-A script action is Bash script that runs on the service components in an HDInsight on AKS cluster.
+A script action is a Bash script that runs on the service components in a HDInsight on AKS cluster.
The characteristics and features of script actions are as follows:
hdinsight-aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/faq.md
Title: HDInsight on AKS FAQ
description: HDInsight on AKS frequently asked questions. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # HDInsight on AKS - Frequently asked questions
This article addresses some common questions about Azure HDInsight on AKS.
[!INCLUDE [retirement-notice](includes/retirement-notice.md)] [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] -- ## General * What is HDInsight on AKS?
For a list of supported regions, refer to [Region availability](./overview.md#region-availability-public-preview).
-* What's the cost to deploy an HDInsight on AKS Cluster?
+* What's the cost to deploy a HDInsight on AKS Cluster?
For more information about pricing, see HDInsight on AKS pricing.
hdinsight-aks Application Mode Cluster On Hdinsight On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/application-mode-cluster-on-hdinsight-on-aks.md
Title: Apache Flink® Application Mode cluster on HDInsight on AKS
description: Learn about Flink® Application Mode cluster on HDInsight on AKS. Previously updated : 03/21/2024 Last updated : 09/20/2024+ # Apache Flink Application Mode cluster on HDInsight on AKS
hdinsight-aks Assign Kafka Topic Event Message To Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md
Title: Write event messages into Azure Data Lake Storage Gen2 with Apache Flink
description: Learn how to write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API. Previously updated : 03/29/2024 Last updated : 09/20/2024+ # Write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
hdinsight-aks Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-databricks.md
Title: Incorporate Apache Flink® DataStream into Azure Databricks Delta Lake Ta
description: Learn about incorporate Apache Flink® DataStream into Azure Databricks Delta Lake Table. Previously updated : 04/10/2024 Last updated : 09/20/2024+ # Incorporate Apache Flink® DataStream into Azure Databricks Delta Lake Tables
hdinsight-aks Azure Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-iot-hub.md
Title: Process real-time IoT data on Apache Flink® with Azure HDInsight on AKS
description: How to integrate Azure IoT Hub and Apache Flink®. Previously updated : 04/04/2024 Last updated : 09/20/2024+ # Process real-time IoT data on Apache Flink® with Azure HDInsight on AKS
hdinsight-aks Azure Service Bus Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-service-bus-demo.md
Title: Use Apache Flink on HDInsight on AKS with Azure Service Bus
description: Use Apache Flink DataStream API on HDInsight on AKS with Azure Service Bus. Previously updated : 04/02/2024 Last updated : 09/20/2024+ # Use Apache Flink on HDInsight on AKS with Azure Service Bus
hdinsight-aks Change Data Capture Connectors For Apache Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md
Title: How to perform Change Data Capture of SQL Server with Apache Flink® Data
description: Learn how to perform Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source. Previously updated : 04/02/2024 Last updated : 09/20/2024+ # Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source on HDInsight on AKS
hdinsight-aks Cosmos Db For Apache Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/cosmos-db-for-apache-cassandra.md
Title: Using Azure Cosmos DB for Apache Cassandra® with HDInsight on AKS for Ap
description: Learn how to Sink Apache Kafka® message into Azure Cosmos DB for Apache Cassandra®, with Apache Flink® running on HDInsight on AKS. Previously updated : 04/02/2024 Last updated : 09/20/2024+ # Sink Apache Kafka® messages into Azure Cosmos DB for Apache Cassandra, with Apache Flink® on HDInsight on AKS
hdinsight-aks Create Kafka Table Flink Kafka Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/create-kafka-table-flink-kafka-sql-connector.md
Title: How to create Apache Kafka table on an Apache Flink® on HDInsight on AKS
description: Learn how to create Apache Kafka table on Apache Flink®. Previously updated : 03/14/2024 Last updated : 09/20/2024+ # Create Apache Kafka® table on Apache Flink® on HDInsight on AKS
hdinsight-aks Datastream Api Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/datastream-api-mongodb.md
Title: Use DataStream API for MongoDB as a source and sink with Apache Flink®
description: Learn how to use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink. Previously updated : 03/22/2024 Last updated : 09/20/2024+ # Use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink
hdinsight-aks Fabric Lakehouse Flink Datastream Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/fabric-lakehouse-flink-datastream-api.md
Title: Microsoft Fabric with Apache Flink® in HDInsight on AKS
description: An introduction to lakehouse on Microsoft Fabric with Apache Flink® on HDInsight on AKS Previously updated : 03/23/2024 Last updated : 09/20/2024+ # Connect to OneLake in Microsoft Fabric with HDInsight on AKS cluster for Apache Flink®
hdinsight-aks Flink Catalog Delta Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-catalog-delta-hive.md
Title: Table API and SQL - Use Delta Catalog type with Hive with Apache Flink®
description: Learn about how to create Delta Catalog with Apache Flink® on Azure HDInsight on AKS Previously updated : 03/29/2024 Last updated : 09/20/2024+ # Create Delta Catalog with Apache Flink® on Azure HDInsight on AKS
hdinsight-aks Flink Catalog Iceberg Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-catalog-iceberg-hive.md
Title: Table API and SQL - Use Iceberg Catalog type with Hive in Apache Flink®
description: Learn how to create Iceberg Catalog in Apache Flink® on HDInsight on AKS. Previously updated : 04/19/2024 Last updated : 09/20/2024+ # Create Iceberg Catalog in Apache Flink® on HDInsight on AKS
hdinsight-aks Flink Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-cluster-configuration.md
Title: Troubleshoot Apache Flink® on HDInsight on AKS
description: Learn to troubleshoot Apache Flink® cluster configurations on HDInsight on AKS Previously updated : 09/26/2023 Last updated : 09/20/2024+ # Troubleshoot Apache Flink® cluster configurations on HDInsight on AKS
hdinsight-aks Flink Configuration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-configuration-management.md
Title: Apache Flink® Configuration Management in HDInsight on AKS
description: Learn about Apache Flink Configuration Management in HDInsight on AKS. Previously updated : 04/25/2024 Last updated : 09/20/2024+ # Apache Flink® Configuration management in HDInsight on AKS
hdinsight-aks Flink Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-create-cluster-portal.md
Title: Create an Apache Flink® cluster in HDInsight on AKS using Azure portal
description: Creating an Apache Flink cluster in HDInsight on AKS with Azure portal. Previously updated : 12/28/2023 Last updated : 09/20/2024+ # Create an Apache Flink® cluster in HDInsight on AKS with Azure portal
hdinsight-aks Flink How To Setup Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-how-to-setup-event-hub.md
Title: How to connect Apache Flink® on HDInsight on AKS with Azure Event Hubs f
description: Learn how to connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka® Previously updated : 04/02/2024 Last updated : 09/20/2024+ # Connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka®
hdinsight-aks Flink Job Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-job-management.md
Title: Apache Flink® job management in HDInsight on AKS
description: HDInsight on AKS provides a feature to manage and submit Apache Flink jobs directly through the Azure portal. Previously updated : 04/01/2024 Last updated : 09/20/2024+ # Apache Flink® job management in HDInsight on AKS clusters
hdinsight-aks Flink Job Orchestration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-job-orchestration.md
Title: Azure Data Factory Workflow Orchestration Manager (powered by Apache Airf
description: Learn how to perform Apache Flink® job orchestration using Azure Data Factory Workflow Orchestration Manager Previously updated : 10/28/2023 Last updated : 09/20/2024+ # Apache Flink® job orchestration using Azure Data Factory Workflow Orchestration Manager (powered by Apache Airflow)
hdinsight-aks Flink Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-overview.md
Title: What is Apache Flink® in Azure HDInsight on AKS? (Preview)
description: An introduction to Apache Flink® in Azure HDInsight on AKS. Previously updated : 10/28/2023 Last updated : 09/20/2024+ # What is Apache Flink® in Azure HDInsight on AKS? (Preview)
hdinsight-aks Flink Table Api And Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-table-api-and-sql.md
Title: Table API and SQL in Apache Flink® clusters on HDInsight on AKS
description: Learn about Table API and SQL in Apache Flink® clusters on HDInsight on AKS Previously updated : 10/27/2023 Last updated : 09/20/2024+ # Table API and SQL in Apache Flink® clusters on HDInsight on AKS
hdinsight-aks Flink Web Ssh On Portal To Flink Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-web-ssh-on-portal-to-flink-sql.md
Title: How to enter the Apache Flink® CLI client using Secure Shell (SSH) on HD
description: How to enter Apache Flink® SQL & DStream CLI client using webssh on HDInsight on AKS clusters with Azure portal. Previously updated : 02/04/2024 Last updated : 09/20/2024+ # Access Apache Flink® CLI client using Secure Shell (SSH) on HDInsight on AKS clusters with Azure portal
hdinsight-aks Fraud Detection Flink Datastream Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/fraud-detection-flink-datastream-api.md
Title: Fraud detection with the Apache Flink® DataStream API
description: Learn about Fraud detection with the Apache Flink® DataStream API. Previously updated : 04/09/2024 Last updated : 09/20/2024+ # Fraud detection with the Apache Flink® DataStream API
hdinsight-aks Hive Dialect Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/hive-dialect-flink.md
Title: Hive dialect in Apache Flink® clusters on HDInsight on AKS
description: How to use Hive dialect in Apache Flink® clusters on HDInsight on AKS. Previously updated : 04/17/2024 Last updated : 09/20/2024+ # Hive dialect in Apache Flink® clusters on HDInsight on AKS
hdinsight-aks Integration Of Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/integration-of-azure-data-explorer.md
Title: Integration of Azure Data Explorer and Apache Flink®
description: Integration of Azure Data Explorer and Apache Flink® in HDInsight on AKS Previously updated : 09/18/2023 Last updated : 09/20/2024+ # Integration of Azure Data Explorer and Apache Flink®
hdinsight-aks Join Stream Kafka Table Filesystem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/join-stream-kafka-table-filesystem.md
Title: Enrich the events from Apache Kafka® with the attributes from FileSystem
description: Learn how to join stream from Kafka with table from fileSystem using Apache Flink® DataStream API. Previously updated : 03/14/2024 Last updated : 09/20/2024+ # Enrich the events from Apache Kafka® with attributes from ADLS Gen2 with Apache Flink®
hdinsight-aks Monitor Changes Postgres Table Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/monitor-changes-postgres-table-flink.md
Title: Change Data Capture (CDC) of PostgreSQL table using Apache Flink®
description: Learn how to perform CDC on PostgreSQL table using Apache Flink® Previously updated : 03/29/2024 Last updated : 09/20/2024+ # Change Data Capture (CDC) of PostgreSQL table using Apache Flink®
hdinsight-aks Process And Consume Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/process-and-consume-data.md
Title: Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
description: Learn how to use Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS Previously updated : 04/03/2024 Last updated : 09/20/2024+ # Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
hdinsight-aks Sink Kafka To Kibana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-kafka-to-kibana.md
Title: Use Elasticsearch with Apache Flink on HDInsight on AKS
description: This article shows you how to use Elasticsearch along with Apache Flink on HDInsight on Azure Kubernetes Service. Previously updated : 04/09/2024 Last updated : 09/20/2024+ # Use Elasticsearch with Apache Flink on HDInsight on AKS
hdinsight-aks Sink Sql Server Table Using Flink Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-sql-server-table-using-flink-sql.md
Title: Change Data Capture (CDC) of SQL Server using Apache Flink®
description: Learn how to perform CDC of SQL Server using Apache Flink® Previously updated : 10/27/2023 Last updated : 09/20/2024+ # Change Data Capture (CDC) of SQL Server using Apache Flink®
hdinsight-aks Start Sql Client Cli Gateway Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/start-sql-client-cli-gateway-mode.md
Title: Start SQL Client CLI in gateway mode in Apache Flink Cluster 1.17.0 on H
description: Learn how to start SQL Client CLI in gateway mode in Apache Flink Cluster 1.17.0 on HDInsight on AKS. Previously updated : 04/17/2024 Last updated : 09/20/2024+ # Start SQL Client CLI in gateway mode
hdinsight-aks Use Apache Nifi With Datastream Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-apache-nifi-with-datastream-api.md
Title: Use Apache NiFi with HDInsight on AKS clusters running Apache Flink® to
description: Learn how to use Apache NiFi to consume processed Apache Kafka® topic from Apache Flink® on HDInsight on AKS clusters and publish into ADLS Gen2. Previously updated : 03/25/2024 Last updated : 09/20/2024+ # Use Apache NiFi to consume processed Apache Kafka® topics from Apache Flink® and publish into ADLS Gen2
hdinsight-aks Use Azure Pipelines To Run Flink Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-azure-pipelines-to-run-flink-jobs.md
Title: How to use Azure Pipelines with Apache Flink® on HDInsight on AKS
description: Learn how to use Azure Pipelines with Apache Flink® Previously updated : 10/27/2023 Last updated : 09/20/2024+ # How to use Azure Pipelines with Apache Flink® on HDInsight on AKS
hdinsight-aks Use Flink Cli To Submit Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-cli-to-submit-jobs.md
Title: How to use Apache Flink® CLI to submit jobs
description: Learn how to use Apache Flink® CLI to submit jobs Previously updated : 10/27/2023 Last updated : 09/20/2024+ # Apache Flink® Command-Line Interface (CLI) on HDInsight on AKS clusters
hdinsight-aks Use Flink Delta Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-delta-connector.md
Title: How to use Apache Flink® on HDInsight on AKS with Flink/Delta connector
description: Learn how to use Flink/Delta Connector. Previously updated : 04/25/2024 Last updated : 09/20/2024+ # How to use Flink/Delta Connector
hdinsight-aks Use Flink To Sink Kafka Message Into Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-to-sink-kafka-message-into-hbase.md
Title: Write messages to Apache HBase® with Apache Flink® DataStream API
description: Learn how to write messages to Apache HBase with Apache Flink DataStream API. Previously updated : 05/01/2024 Last updated : 09/20/2024+ # Write messages to Apache HBase® with Apache Flink® DataStream API
hdinsight-aks Use Hive Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-catalog.md
Title: Use Hive Catalog, Hive Read & Write demo on Apache Flink®
description: Learn how to use Hive Catalog, Hive Read & Write demo on Apache Flink® on HDInsight on AKS Previously updated : 03/29/2024 Last updated : 09/20/2024+ # How to use Hive Catalog with Apache Flink® on HDInsight on AKS
hdinsight-aks Use Hive Metastore Datastream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-metastore-datastream.md
Title: Use Hive Metastore with Apache Flink® DataStream API
description: Use Hive Metastore with Apache Flink® DataStream API Previously updated : 03/29/2024 Last updated : 09/20/2024+ # Use Hive Metastore with Apache Flink® DataStream API
hdinsight-aks Hdinsight Aks Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-aks-support-help.md
Title: Support and troubleshooting for HDInsight on AKS
description: This article provides support and troubleshooting options for HDInsight on AKS. Previously updated : 10/06/2023 Last updated : 09/20/2024+ # Support and troubleshooting for HDInsight on AKS
hdinsight-aks Hdinsight On Aks Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-autoscale-clusters.md
Title: Automatically scale Azure HDInsight on AKS clusters
description: Use the Auto scale feature to automatically scale Azure HDInsight clusters on AKS based on a schedule or load based metrics. Previously updated : 02/06/2024 Last updated : 09/20/2024+ # Auto Scale HDInsight on AKS Clusters
hdinsight-aks Hdinsight On Aks Manage Authorization Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-manage-authorization-profile.md
Title: Manage cluster access
description: How to manage cluster access in HDInsight on AKS Previously updated : 08/4/2023 Last updated : 09/20/2024+ # Manage cluster access
hdinsight-aks How To Azure Monitor Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/how-to-azure-monitor-integration.md
Title: How to integrate with Azure Monitor
description: Learn how to integrate with Azure Monitoring. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # How to integrate with Log Analytics
hdinsight-aks In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/in-place-upgrade.md
Title: Upgrade your HDInsight on AKS clusters and cluster pools
description: Upgrade your HDInsight on AKS clusters and cluster pools. Previously updated : 03/22/2024 Last updated : 09/20/2024+ # Upgrade your HDInsight on AKS clusters and cluster pools
hdinsight-aks Manage Cluster Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-cluster-pool.md
Title: Manage cluster pools
description: Manage cluster pools in HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Manage cluster pools
hdinsight-aks Manage Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-cluster.md
Title: Manage clusters
description: Manage clusters in HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Manage clusters
hdinsight-aks Manage Script Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-script-actions.md
Title: Manage script actions on Azure HDInsight on AKS clusters
description: An introduction on how to manage script actions in Azure HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Script actions during cluster creation
hdinsight-aks Manual Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manual-scale.md
Title: Manual scale
description: How to manually scale in HDInsight on AKS. Previously updated : 02/06/2024 Last updated : 09/20/2024+ # Manual scale
hdinsight-aks Monitor With Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/monitor-with-prometheus-grafana.md
Title: Monitoring with Azure Managed Prometheus and Grafana
description: Learn how to use monitor With Azure Managed Prometheus and Grafana Previously updated : 11/07/2023 Last updated : 09/20/2024+ # Monitoring with Azure Managed Prometheus and Grafana
hdinsight-aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/overview.md
description: An introduction to Azure HDInsight on AKS.
Previously updated : 05/28/2024 Last updated : 09/20/2024+ # What is HDInsight on AKS? (Preview)
hdinsight-aks Powershell Cluster Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/powershell-cluster-create.md
Title: Manage HDInsight on AKS clusters using PowerShell (Preview)
description: Manage HDInsight on AKS clusters using PowerShell. Previously updated : 12/11/2023 Last updated : 09/20/2024+ # Manage HDInsight on AKS clusters using PowerShell
hdinsight-aks Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/preview.md
Title: HDInsight on AKS preview information
description: This article explains what public preview means in HDInsight on AKS. Previously updated : 09/05/2023 Last updated : 09/20/2024+ # Microsoft HDInsight on AKS preview information
hdinsight-aks Quickstart Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-cli.md
description: Learn how to use Azure CLI to create an HDInsight on AKS cluster po
Previously updated : 06/18/2024 Last updated : 09/20/2024+ # Quickstart: Create an HDInsight on AKS cluster pool using Azure CLI
hdinsight-aks Quickstart Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-cluster.md
Title: 'Quickstart: Create an HDInsight on AKS cluster pool using Azure portal'
description: This quickstart shows you how to create a cluster pool for Azure HDInsight on AKS. Previously updated : 06/18/2024 Last updated : 09/20/2024+ # Quickstart: Create an HDInsight on AKS cluster pool using Azure portal
hdinsight-aks Quickstart Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-powershell.md
description: Learn how to use Azure PowerShell to create an HDInsight on AKS clu
Previously updated : 06/19/2024 Last updated : 09/20/2024+ # Quickstart: Create an HDInsight on AKS cluster pool using Azure PowerShell
hdinsight-aks Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-get-started.md
Title: One-click deployment for Azure HDInsight on AKS
description: How to create cluster pool and cluster with one-click deployment on Azure HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Get started with one-click deployment
hdinsight-aks Quickstart Prerequisites Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-prerequisites-resources.md
Title: Resource prerequisites for Azure HDInsight on AKS
description: Prerequisite steps to complete for Azure resources before working with HDInsight on AKS. Previously updated : 04/08/2024 Last updated : 09/20/2024+ # Resource prerequisites
hdinsight-aks Quickstart Prerequisites Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-prerequisites-subscription.md
Title: Subscription prerequisites for Azure HDInsight on AKS.
description: Prerequisite steps to complete on your subscription before working with Azure HDInsight on AKS. Previously updated : 05/06/2024 Last updated : 09/20/2024+ # Subscription prerequisites
hdinsight-aks Hdinsight Aks Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes-archive.md
Title: Archived release notes for Azure HDInsight on AKS
description: Archived release notes for Azure HDInsight on AKS. Get development tips and details for Trino, Flink, and Spark. Previously updated : 09/05/2024 Last updated : 09/20/2024+ # Azure HDInsight on AKS archived release notes
hdinsight-aks Hdinsight Aks Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes.md
Title: Release notes for Azure HDInsight on AKS
description: Latest release notes for Azure HDInsight on AKS. Get development tips and details for Trino, Flink, Spark, and more. Previously updated : 09/16/2024 Last updated : 09/20/2024+ # Azure HDInsight on AKS release notes
hdinsight-aks Required Outbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/required-outbound-traffic.md
Title: Outbound traffic on HDInsight on AKS
description: Learn required outbound traffic on HDInsight on AKS. Previously updated : 03/26/2024 Last updated : 09/20/2024+ # Required outbound traffic for HDInsight on AKS
hdinsight-aks Rest Api Cluster Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/rest-api-cluster-creation.md
Title: Manage HDInsight on AKS clusters using Azure REST API
description: Manage HDInsight on AKS clusters using Azure REST API Previously updated : 11/26/2023 Last updated : 09/20/2024+ # Manage HDInsight on AKS clusters using Azure REST API
hdinsight-aks Sdk Cluster Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/sdk-cluster-creation.md
description: Manage HDInsight on AKS clusters using .NET SDK.
Previously updated : 11/23/2023 Last updated : 09/20/2024+ # Manage HDInsight on AKS clusters using .NET SDK
hdinsight-aks Secure Traffic By Firewall Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall-azure-portal.md
Title: Use firewall to restrict outbound traffic on HDInsight on AKS, using Azur
description: Learn how to secure traffic using firewall on HDInsight on AKS using Azure portal Previously updated : 08/3/2023 Last updated : 09/20/2024+ # Use firewall to restrict outbound traffic using Azure portal
hdinsight-aks Secure Traffic By Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall.md
description: Learn how to secure traffic using firewall on HDInsight on AKS usin
Previously updated : 02/19/2024 Last updated : 09/20/2024+ # Use firewall to restrict outbound traffic using Azure CLI
hdinsight-aks Secure Traffic By Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-nsg.md
Title: Use NSG to restrict traffic on HDInsight on AKS
description: Learn how to secure traffic by NSGs on HDInsight on AKS Previously updated : 08/3/2023 Last updated : 09/20/2024+ # Use NSG to restrict traffic to HDInsight on AKS
hdinsight-aks Service Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/service-configuration.md
Title: Manage cluster configuration description: How to update cluster configuration for HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Manage cluster configuration
hdinsight-aks Service Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/service-health.md
Title: Manage service health.
description: Learn how to check the health of the services running in a cluster. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Manage service health
hdinsight-aks Azure Hdinsight Spark On Aks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/azure-hdinsight-spark-on-aks-delta-lake.md
Title: How to use Delta Lake in Azure HDInsight on AKS with Apache Spark™ clus
description: Learn how to use Delta Lake scenario in Azure HDInsight on AKS with Apache Spark™ cluster. Previously updated : 10/27/2023 Last updated : 09/20/2024+ # Use Delta Lake in Azure HDInsight on AKS with Apache Spark™ cluster (Preview)
hdinsight-aks Configuration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/configuration-management.md
Title: Configuration management in HDInsight on AKS with Apache Spark™
description: Learn how to perform Configuration management in HDInsight on AKS with Apache Spark™ cluster Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Configuration management in HDInsight on AKS with Apache Spark™ cluster
hdinsight-aks Connect To One Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/connect-to-one-lake-storage.md
Title: Connect to OneLake Storage
description: Learn how to connect to OneLake storage Previously updated : 10/27/2023 Last updated : 09/20/2024+ # Connect to OneLake Storage
hdinsight-aks Create Spark Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/create-spark-cluster.md
Title: How to create Spark cluster in HDInsight on AKS
description: Learn how to create Spark cluster in HDInsight on AKS Previously updated : 12/28/2023 Last updated : 09/20/2024+ # Create Spark cluster in HDInsight on AKS (Preview)
hdinsight-aks Hdinsight On Aks Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/hdinsight-on-aks-spark-overview.md
Title: What is Apache Spark™ in HDInsight on AKS? (Preview)
description: An introduction to Apache Spark™ in HDInsight on AKS Previously updated : 10/27/2023 Last updated : 09/20/2024+ # What is Apache Spark™ in HDInsight on AKS? (Preview)
hdinsight-aks Library Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/library-management.md
Title: Library Management in Azure HDInsight on AKS
description: Learn how to use Library Management in Azure HDInsight on AKS with Spark Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Library management in Spark
hdinsight-aks Spark Job Orchestration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/spark-job-orchestration.md
Title: Azure Data Factory Workflow Orchestration Manager (powered by Apache Airf
description: Learn how to perform Apache Spark® job orchestration using Azure Data Factory Workflow Orchestration Manager Previously updated : 11/28/2023 Last updated : 09/20/2024+ # Apache Spark® job orchestration using Azure Data Factory Workflow Orchestration Manager (powered by Apache Airflow)
hdinsight-aks Submit Manage Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/submit-manage-jobs.md
Title: How to submit and manage jobs on an Apache Spark™ cluster in Azure HDIn
description: Learn how to submit and manage jobs on an Apache Spark™ cluster in HDInsight on AKS Previously updated : 10/27/2023 Last updated : 09/20/2024+ # Submit and manage jobs on an Apache Spark™ cluster in HDInsight on AKS
hdinsight-aks Use Hive Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/use-hive-metastore.md
Title: How to use Hive metastore in Apache Spark™
description: Learn how to use Hive metastore in Apache Spark™ Previously updated : 10/27/2023 Last updated : 09/20/2024+ # How to use Hive metastore with Apache Spark™ cluster
hdinsight-aks Use Machine Learning Notebook On Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/use-machine-learning-notebook-on-spark.md
Title: How to use Azure Machine Learning Notebook on Spark
description: Learn how to use Azure Machine Learning Notebook on Spark Previously updated : 08/29/2023 Last updated : 09/20/2024+ # How to use Azure Machine Learning Notebook on Spark
hdinsight-aks Subscribe To Release Notes Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/subscribe-to-release-notes-repo.md
Title: Subscribe to GitHub release notes repo
description: Learn how to subscribe to HDInsight on AKS GitHub release notes repo Previously updated : 11/20/2023 Last updated : 09/20/2024+ # Subscribe to HDInsight on AKS release notes GitHub repo
hdinsight-aks Trademarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trademarks.md
Title: Trademarks
description: The Trademark and Brand Guidelines detail how you can help us protect Microsoft's brand assets. Previously updated : 10/26/2023 Last updated : 09/20/2024+ # Trademarks
hdinsight-aks Configure Azure Active Directory Login For Superset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/configure-azure-active-directory-login-for-superset.md
Title: Configure Microsoft Entra ID OAuth2 login for Apache Superset
description: Learn how to configure Microsoft Entra ID OAuth2 login for Superset Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Configure Microsoft Entra ID OAuth2 login
hdinsight-aks Configure Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/configure-ingress.md
Title: Expose Superset to the internet
description: Learn how to expose Superset to the internet Previously updated : 12/11/2023 Last updated : 09/20/2024+ # Expose Apache Superset to Internet
hdinsight-aks Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/role-based-access-control.md
Title: Configure Role Based Access Control
description: How to provide Role Based Access Control Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Configure Role Based Access Control
hdinsight-aks Trino Add Catalogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-catalogs.md
description: Add catalogs to an existing Trino cluster in HDInsight on AKS
Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Configure catalogs
hdinsight-aks Trino Add Delta Lake Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-delta-lake-catalog.md
Title: Configure Delta Lake catalog
description: How to configure Delta Lake catalog in a Trino cluster. Previously updated : 06/19/2024 Last updated : 09/20/2024+ # Configure Delta Lake catalog
hdinsight-aks Trino Add Iceberg Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-iceberg-catalog.md
Title: Configure Iceberg catalog
description: How to configure iceberg catalog in a Trino cluster. Previously updated : 06/19/2024 Last updated : 09/20/2024+ # Configure Iceberg catalog
hdinsight-aks Trino Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-airflow.md
Title: Use Apache Airflow with Trino cluster
description: How to create Apache Airflow DAG to connect to Trino cluster with HDInsight on AKS Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Use Apache Airflow™ with Trino cluster
hdinsight-aks Trino Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-authentication.md
Title: Client authentication
description: How to authenticate to Trino cluster Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Authentication mechanism
hdinsight-aks Trino Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-caching.md
Title: Configure caching
description: Learn how to configure caching in Trino Previously updated : 11/03/2023 Last updated : 09/20/2024+ # Configure caching
hdinsight-aks Trino Catalog Glue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-catalog-glue.md
Title: Query data from AWS S3 and with AWS Glue
description: How to configure Trino catalogs for HDInsight on AKS with AWS Glue as metastore Previously updated : 10/19/2023 Last updated : 09/20/2024+
hdinsight-aks Trino Configuration Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-configuration-troubleshoot.md
Title: Troubleshoot cluster configuration
description: How to understand and fix errors for Trino clusters for HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 09/20/2024+
hdinsight-aks Trino Connect To Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connect-to-metastore.md
Title: Add external Hive metastore database
description: Connecting to the HIVE metastore for Trino clusters in HDInsight on AKS Previously updated : 02/21/2024 Last updated : 09/20/2024+ # Use external Hive metastore database
hdinsight-aks Trino Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connectors.md
Title: Trino connectors
description: Connectors available for Trino. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Trino connectors
hdinsight-aks Trino Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-cluster.md
Title: Create a Trino cluster - Azure portal
description: Creating a Trino cluster in HDInsight on AKS on the Azure portal. Previously updated : 12/28/2023 Last updated : 09/20/2024+ # Create a Trino cluster in the Azure portal (Preview)
hdinsight-aks Trino Create Delta Lake Tables Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-delta-lake-tables-synapse.md
Title: Read Delta Lake tables (Synapse or External Location)
description: How to read external tables created in Synapse or other systems into a Trino cluster. Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Read Delta Lake tables (Synapse or external location)
hdinsight-aks Trino Custom Plugins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-custom-plugins.md
Title: Add custom plugins in Azure HDInsight on AKS
description: Add custom plugins to an existing Trino cluster in HDInsight on AKS Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Custom plugins
hdinsight-aks Trino Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-fault-tolerance.md
Title: Configure fault-tolerance
description: Learn how to configure fault-tolerance in Trino with HDInsight on AKS. Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Fault-tolerant execution
hdinsight-aks Trino Jvm Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-jvm-configuration.md
Title: Modifying JVM heap settings
description: How to modify initial and max heap size for Trino pods. Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Configure JVM heap size
hdinsight-aks Trino Miscellaneous Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-miscellaneous-files.md
Title: Using miscellaneous files
description: Using miscellaneous files with Trino clusters in HDInsight on AKS Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Using miscellaneous files
hdinsight-aks Trino Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-overview.md
Title: What is Trino? (Preview)
description: An introduction to Trino. Previously updated : 08/29/2023 Last updated : 09/20/2024+ # What is Trino? (Preview)
hdinsight-aks Trino Query Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-query-logging.md
Title: Query logging
description: Log query lifecycle events in Trino cluster Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Query logging
hdinsight-aks Trino Scan Stats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-scan-stats.md
Title: Use scan statistics
description: How to enable, understand and query scan statistics using query log tables for Trino clusters for HDInsight on AKS. Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Enable scan statistics for queries
hdinsight-aks Trino Service Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-service-configuration.md
description: How to perform service configuration for Trino clusters for HDInsig
Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Trino configuration management
hdinsight-aks Trino Sharded Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-sharded-sql-connector.md
Title: Sharded SQL connector
description: How to configure and use sharded sql connector. Previously updated : 02/06/2024 Last updated : 09/20/2024+ # Sharded SQL connector
hdinsight-aks Trino Superset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-superset.md
Title: Use Apache Superset with Trino on HDInsight on AKS
description: Deploying Superset and connecting to Trino with HDInsight on AKS Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Deploy Apache Superset™
hdinsight-aks Trino Ui Command Line Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-command-line-interface.md
description: Using Trino via CLI
Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Trino CLI
hdinsight-aks Trino Ui Dbeaver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-dbeaver.md
Title: Trino with DBeaver
description: Using Trino in DBeaver. Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Connect and query with DBeaver
hdinsight-aks Trino Ui Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-jdbc-driver.md
description: Using the Trino JDBC driver.
Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Trino JDBC driver
hdinsight-aks Trino Ui Web Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-web-ssh.md
Title: Trino Web SSH
description: Using Trino in Web SSH Previously updated : 08/29/2023 Last updated : 09/20/2024+ # Web SSH
hdinsight-aks Trino Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui.md
Title: Trino UI
description: Using Trino UI Previously updated : 10/19/2023 Last updated : 09/20/2024+ # Trino UI
hdinsight-aks Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/versions.md
Title: Versioning
description: Versioning in HDInsight on AKS. Previously updated : 03/27/2024 Last updated : 09/20/2024+ # Azure HDInsight on AKS versions
hdinsight-aks Virtual Machine Recommendation Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/virtual-machine-recommendation-capacity-planning.md
Title: Azure Virtual Machine recommendations and capacity planning
description: Default and minimum virtual machine size recommendations and capacity planning for HDInsight on AKS. Previously updated : 10/05/2023 Last updated : 09/20/2024+ # Default and minimum virtual machine size recommendations and capacity planning for HDInsight on AKS
hdinsight-aks Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/whats-new.md
Title: What's new in HDInsight on AKS? (Preview)
description: An introduction to new concepts in HDInsight on AKS that aren't in HDInsight. Previously updated : 03/24/2024 Last updated : 09/20/2024+ # What's new in HDInsight on AKS? (Preview)
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
- Title: Benefits of migrating to Azure HDInsight 4.0.
-description: Learn the benefits of migrating to Azure HDInsight 4.0.
-- Previously updated : 07/23/2024-
-# Significant version changes in HDInsight 4.0 and advantages
-
-HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an overview of what's new in Azure HDInsight 4.0.
-
-| # | OSS component | HDInsight 4.0 version | HDInsight 3.6 version |
-|---|---|---|---|
-| 1 | Apache Hadoop | 3.1.1 | 2.7.3 |
-| 2 | Apache HBase | 2.1.6 | 1.1.2 |
-| 3 | Apache Hive | 3.1.0 | 1.2.1, 2.1 (LLAP) |
-| 4 | Apache Kafka | 2.1.1, 2.4 (GA) | 1.1 |
-| 5 | Apache Phoenix | 5 | 4.7.0 |
-| 6 | Apache Spark | 2.4.4, 3.0.0 (Preview) | 2.2 |
-| 7 | Apache TEZ | 0.9.1 | 0.7.0 |
-| 8 | Apache ZooKeeper | 3.4.6 | 3.4.6 |
-| 9 | Apache Kafka | 2.1.1, 2.4.1 (Preview) | 1.1 |
-| 10 | Apache Ranger | 1.1.0 | 0.7.0 |
-
-## Workloads and features
-
-### Hive
--- Advanced features:
- - Low-latency analytical processing (LLAP) workload management.
- - LLAP support for Java Database Connectivity (JDBC), Druid, and Kafka connectors.
- - Better SQL features (constraints and default values).
- - Surrogate keys.
- - Information schema.
-- Performance advantage:
- - Result caching. Caching query results allows a previously computed query result to be reused.
- - Dynamic materialized views and precomputation of summaries.
- - Atomicity, consistency, isolation, and durability (ACID) V2 performance improvements in both storage format and execution engine.
-- Security:
- - GDPR compliance enabled on Apache Hive transactions.
- - Hive user-defined function (UDF) execution authorization in Ranger.
-
-### HBase
--- Advanced features:
- - Procedure V2 (procv2), an updated framework for executing multistep HBase administrative operations.
- - Fully off-heap read/write path.
- - In-memory compactions.
- - HBase cluster support of the Azure Data Lake Storage Gen2 Premium tier.
-- Performance advantage:
- - Accelerated writes that use Azure Premium SSD managed disks to improve performance of the Apache HBase write-ahead log (WAL).
-- Security:
- - Hardening of both secondary indexes, which include local and global.
-
-### Kafka
--- Advanced features:
- - Kafka partition distribution on Azure fault domains.
- - Zstandard (zstd) compression support.
- - Kafka Consumer Incremental Rebalance.
- - Support for MirrorMaker 2.0.
-- Performance advantage:
- - Improved windowed aggregation performance in Kafka Streams.
- - Improved broker resiliency by reducing the memory footprint of message conversion.
- - Replication protocol improvements for fast leader failover.
-- Security:
- - Access control for creation of specific topics or topic prefixes.
- - Host-name verification to help prevent Secure Sockets Layer (SSL) configuration man-in-the-middle attacks.
- - Improved encryption support with faster Transport Layer Security (TLS) and CRC32C implementation.
-
-### Spark
--- Advanced features:
- - Structured Streaming support for ORC.
- - Capability to integrate with the new metastore catalog feature.
- - Structured Streaming support for the Hive Streaming library.
- - Transparent writes to Hive warehouses.
- - SparkCruise, an automatic computation reuse system for Spark.
-- Performance advantage:
- - Result caching. Caching query results allows a previously computed query result to be reused.
- - Dynamic materialized views and precomputation of summaries.
-- Security:
- - GDPR compliance enabled for Spark transactions.
-
-## Hive partition discovery and repair
-
-Hive automatically discovers and synchronizes the metadata of the partition in Hive Metastore (HMS).
-
-The `discover.partitions` table property enables and disables synchronization of the file system with partitions. In external partitioned tables, this property is enabled (`true`) by default.
-
-When Hive Metastore starts in remote service mode, a periodic background thread (`PartitionManagementTask`) is scheduled every 300 seconds (configurable via `metastore.partition.management.task.frequency`). The thread looks for tables with the `discover.partitions` table property set to `true` and performs `msck` repair in sync mode.
-
-If the table is a transactional table, the thread obtains an exclusive lock for that table before it performs `msck` repair. With this table property, you no longer need to run `MSCK REPAIR TABLE table_name SYNC PARTITIONS` manually.
-
-If you have an external table that you created by using a version of Hive that doesn't support partition discovery, enable partition discovery for the table:
-
-```ALTER TABLE exttbl SET TBLPROPERTIES ('discover.partitions' = 'true');```
-
-Set synchronization of partitions to occur every 10 minutes, expressed in seconds. In **Ambari** > **Hive** > **Configs**, set `metastore.partition.management.task.frequency` to `3600` or more.
--
-> [!WARNING]
-> Running `management.task` every 10 minutes puts pressure on the SQL Server database transaction units (DTUs).
-
-You can verify the output from the Azure portal.
--
-Hive drops the metadata and corresponding data for any partition after its retention period elapses. You express the retention time by using a numeral and one of the following characters:
-
-```
-ms (milliseconds)
-s (seconds)
-m (minutes)
-d (days)
-```
-
-To configure a partition retention period for one week, use this command:
-
-```
-ALTER TABLE employees SET TBLPROPERTIES ('partition.retention.period'='7d');
-```
-
-The partition metadata and the actual data for employees in Hive are automatically dropped after a week.
-
-## Performance optimizations available for Hive 3
-
-### OLAP vectorization
-
-Online analytical processing (OLAP) vectorization allows Hive to process a batch of rows together instead of processing one row at a time. Each batch is usually an array of primitive types. Operations are performed on the entire column vector, which improves the instruction pipelines and cache usage.
-
-This feature includes vectorized execution of Partitioned Table Function (PTF), roll-ups, and grouping sets.
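-
-As a minimal sketch, vectorization is controlled by standard Hive session properties such as the following; verify the defaults for your cluster in Ambari before relying on them.
-
-```sql
--- Enable vectorized query execution for the session (sketch; check cluster defaults in Ambari).
-SET hive.vectorized.execution.enabled=true;
--- Also vectorize the reduce side of queries.
-SET hive.vectorized.execution.reduce.enabled=true;
-```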
-
-### Dynamic semijoin reduction
-
-Dynamic `semijoin` reduction dramatically improves performance for selective joins. It builds a bloom filter from one side of a join and filters rows from the other side. It skips the scan and further evaluation of rows that don't qualify for the join.
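-
-As a minimal sketch, dynamic semijoin reduction is controlled by a standard Hive-on-Tez session property; verify the default for your cluster in Ambari.
-
-```sql
--- Enable bloom-filter based dynamic semijoin reduction on Tez (sketch; check the default in Ambari).
-SET hive.tez.dynamic.semijoin.reduction=true;
-```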
-
-### Parquet support for vectorization with LLAP
-
-Vectorized query execution is a feature that greatly reduces the CPU usage for typical query operations such as:
-
-- Scan
-- Filter
-- Aggregate
-- Join
-
-Vectorization is also implemented for the ORC format. Spark also uses whole-stage code generation and this vectorization (for Parquet) since Spark 2.0. There's an added time-stamp column for Parquet vectorization and format under LLAP.
-
-> [!WARNING]
-> Parquet writes are slow when you convert to zoned times from the time stamp. For more information, see the [issue details](https://issues.apache.org/jira/browse/HIVE-24693) on the Apache Hive site.
-
-### Automatic query cache
-
-Here are some considerations for automatic query cache:
-
-- With `hive.query.results.cache.enabled=true`, every query that runs in Hive 3 stores its result in a cache.
-- If the input table changes, Hive evicts invalid data from the cache. For example, if you perform aggregation and the base table changes, queries that you run most frequently stay in the cache, but stale queries are evicted.
-- The query result cache works with managed tables only because Hive can't track changes to an external table.
-- If you join external and managed tables, Hive falls back to running the full query. The query result cache works with ACID tables. If you update an ACID table, Hive reruns the query automatically.
-- You can enable and disable the query result cache from the command line. You might want to do so to debug a query.
-- You can disable the query result cache by setting `hive.query.results.cache.enabled=false`.
-- Hive stores the query result cache in `/tmp/hive/__resultcache__/`. By default, Hive allocates 2 GB for the query result cache. You can change this setting by configuring the following parameter in bytes: `hive.query.results.cache.max.size`.
-- Changes to query processing: During query compilation, check the result cache to see if it already has the query results. If there's a cache hit, the query plan is set to a `FetchTask` that reads from the cached location.
-
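-The following is a minimal sketch of the session settings named in this list; the size value is illustrative (2 GB expressed in bytes).
-
-```sql
--- Turn the query result cache on for the session and set its maximum size in bytes.
-SET hive.query.results.cache.enabled=true;
-SET hive.query.results.cache.max.size=2147483648;
-```
-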
-During query execution, Parquet `DataWritableWriter` relies on `NanoTimeUtils` to convert a time-stamp object into a binary value. This conversion calls `toString()` on the time-stamp object, and then it parses the string.
-
-If you can use the result cache for this query:
-
-- The query is `FetchTask` reading from the directory of cached results.
-- No cluster tasks are required.
-
-If you can't use the result cache, run the cluster tasks as normal:
-
-- Check if the computed query results are eligible to add to the result cache.
-- If results can be cached, the temporary results generated for the query are saved to the result cache. You might need to perform steps to ensure that the query cleanup doesn't delete the query result directory.
-
-## SQL features
-
-The initial implementation introduced in Apache Hive 3.0.0 focuses on introducing materialized views and automatic query rewriting based on those materializations in the project. Materialized views can be stored natively in Hive or in other custom storage handlers (ORC), and they can take advantage of new Hive features such as LLAP acceleration.
-
-For more information, see the [Azure blog post on Hive materialized views](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/hive-materialized-views/ba-p/2502785).
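-
-As a minimal sketch of the feature (the table and column names are hypothetical), the following creates a materialized view that the optimizer can use to rewrite matching aggregation queries:
-
-```sql
--- Hypothetical materialized view that precomputes an aggregate over an ACID base table.
-CREATE MATERIALIZED VIEW mv_sales_by_region
-AS SELECT region, SUM(amount) AS total_amount
-   FROM sales
-   GROUP BY region;
-
--- Queries like this can be rewritten by the optimizer to read mv_sales_by_region instead of scanning sales.
-SELECT region, SUM(amount) FROM sales GROUP BY region;
-```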
-
-## Surrogate keys
-
-Use the built-in `SURROGATE_KEY` UDF to automatically generate numerical IDs for rows as you enter data into a table. The generated surrogate keys can replace wide, multiple composite keys.
-
-Hive supports surrogate keys on ACID tables only. The table that you want to join by using surrogate keys can't have column types that need casting. These data types must be primitives, such as `INT` or `STRING`.
-
-Joins that use the generated keys are faster than joins that use strings. Using generated keys doesn't force data into a single node by a row number. You can generate keys as abstractions of natural keys. Surrogate keys have an advantage over universally unique identifiers (UUIDs), which are slower and probabilistic.
-
-The `SURROGATE_KEY` UDF generates a unique ID for every row that you insert into a table. It generates keys based on the execution environment in a distributed system, which includes many factors such as:
-
-- Internal data structures
-- State of a table
-- Last transaction ID
-
-Surrogate key generation doesn't require any coordination between compute tasks. The UDF takes either no arguments or two arguments:
-
-- Write ID bits
-- Task ID bits
-
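-The following is a minimal sketch of the UDF in a table definition; the table and column names are hypothetical, and the ACID and primitive-type requirements described above still apply.
-
-```sql
--- Hypothetical ACID table whose ID column defaults to a generated surrogate key.
-CREATE TABLE students_v2 (
-  id BIGINT DEFAULT SURROGATE_KEY(),
-  name STRING,
-  gpa DECIMAL(3,2),
-  PRIMARY KEY (id) DISABLE NOVALIDATE)
-STORED AS ORC
-TBLPROPERTIES ('transactional'='true');
-
--- Rows inserted without an ID value get one from SURROGATE_KEY().
-INSERT INTO students_v2 (name, gpa) VALUES ('Alice', 3.80);
-```
-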
-### Constraints
-
-SQL constraints help enforce data integrity and improve performance. The optimizer uses the constraint information to make smart decisions. Constraints can make data predictable and easy to locate.
-
-|Constraint|Description|
-|---|---|
-|`CHECK`|Limits the range of values that you can place in a column.|
-|`PRIMARY KEY`|Identifies each row in a table by using a unique identifier.|
-|`FOREIGN KEY`|Identifies a row in another table by using a unique identifier.|
-|`UNIQUE KEY`|Checks that values stored in a column are different.|
-|`NOT NULL`|Ensures that a column can't be set to `NULL`.|
-|`ENABLE`|Ensures that all incoming data conforms to the constraint.|
-|`DISABLE`|Doesn't ensure that all incoming data conforms to the constraint.|
-|`VALIDATE`|Checks that all existing data in the table conforms to the constraint.|
-|`NOVALIDATE`|Doesn't check that all existing data in the table conforms to the constraint.|
-|`ENFORCED`|Maps to `ENABLE NOVALIDATE`.|
-|`NOT ENFORCED`|Maps to `DISABLE NOVALIDATE`.|
-|`RELY`|Specifies abiding by a constraint. The optimizer uses it to apply further optimizations.|
-|`NORELY`|Specifies not abiding by a constraint.|
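-
-The following is a minimal sketch of how constraints can appear in DDL; the table is hypothetical, and `DISABLE NOVALIDATE RELY` declares the key for the optimizer without enforcing or validating it.
-
-```sql
--- Hypothetical table declaring an informational primary key and an enforced NOT NULL constraint.
-CREATE TABLE persons (
-  id BIGINT NOT NULL,
-  name STRING,
-  PRIMARY KEY (id) DISABLE NOVALIDATE RELY)
-STORED AS ORC;
-```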
-
-For more information, see [Supported Features: Apache Hive 3.1](https://cwiki.apache.org/confluence/display/Hive/Supported+Features%3A++Apache+Hive+3.1) on the Apache Hive site.
-
-### Metastore CachedStore
-
-Hive Metastore operations take a long time and can slow down Hive compilation. In some extreme cases, they take longer than the actual query runtime.
-
-In particular, we find that the latency of the cloud database is high and that 90% of total query runtime is waiting for metastore SQL database operations. Based on this observation, you can enhance the performance of the Hive Metastore operation if you have a memory structure that caches the database query result:
-
-`hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.cache.CachedStore`
--
-## References
-
-For more information, see the following release notes:
-
-- [Hive 3.1.0](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/hive-overview/content/hive_whats_new_in_this_release_hive.html)
-- [HBase 2.1.6](https://apache.googlesource.com/hbase/+/ba26a3e1fd5bda8a84f99111d9471f62bb29ed1d/RELEASENOTES.md)
-- [Hadoop 3.1.1](https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/3.1.1/RELEASENOTES.3.1.1.html)
-
-## Related content
-
-- [HDInsight 4.0 announcement](./hdinsight-version-release.md)
-- [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
-- [Troubleshooting guide for migration of Hive workloads from HDInsight 3.6 to HDInsight 4.0](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md)
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
- Title: Open-source components and versions - Azure HDInsight 4.0
-description: Learn about the open-source components and versions in Azure HDInsight 4.0.
-- Previously updated : 04/11/2024--
-# HDInsight 4.0 component versions
-
-In this article, you learn about the open-source components and versions in Azure HDInsight 4.0.
-
-## Open-source components available with HDInsight version 4.0
-
-The open-source component versions associated with HDInsight 4.0 are listed in the following table.
-
-| Component | HDInsight 4.0 |
-|---|---|
-| Apache Hadoop and YARN | 3.1.1 |
-| Apache Tez | 0.9.1 |
-| Apache Pig | 0.16.1 |
-| Apache Hive | 3.1.2 |
-| Apache Ranger | 1.1.0 |
-| Apache HBase | 2.1.6 |
-| Apache Sqoop | 1.5.0 |
-| Apache Oozie | 4.3.1 |
-| Apache Zookeeper | 3.4.6 |
-| Apache Phoenix | 5 |
-| Apache Spark | 2.4.4 |
-| Apache Livy | 0.5 |
-| Apache Kafka | 2.1.1 |
-| Apache Ambari | 2.7.0 |
-| Apache Zeppelin | 0.8.0 |
-
-This table lists certain HDInsight 4.0 cluster types that have been retired or will be retired soon.
-
-| Cluster Type | Framework version | Support expiration date | Retirement date |
-||-||--|
-| HDInsight 4.0 Spark | 2.3 | June 30, 2020 | June 30, 2020 |
-| HDInsight 4.0 Kafka | 1.1 | Dec 31, 2020 | Dec 31, 2020 |
-| HDInsight 4.0 Kafka | 2.1.0 | Sep 30, 2022 | Oct 1, 2022 |
--
-## Apache Spark 2.4 to Spark 3.x Migration Guides
-
-For Spark 2.4 to Spark 3.x migration guidance, see the [Apache Spark migration guide](https://spark.apache.org/docs/latest/migration-guide.html).
-
-## Next steps
-
-- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)
-- [Enterprise Security Package](./enterprise-security-package.md)
-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
-
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
This table lists the versions of HDInsight that are available in the Azure porta
| | | | | | | | | [HDInsight 5.1](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |November 1, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes | | [HDInsight 5.0](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |March 11, 2022 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 31, 2025 | March 31, 2025| Yes |
-| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 31, 2025 | March 31, 2025 |Yes |
+| HDInsight 4.0 |Ubuntu 18.0.4 LTS |September 24, 2018 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | March 31, 2025 | March 31, 2025 |Yes |
**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You might not be able to create clusters from the Azure portal.
-**Retirement** means that existing clusters of an HDInsight version continue to run as is. You can't create new clusters of this version through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, not guaranteed to work after retirement date. Support isn't available for retired versions.
+**Retirement** means that existing clusters of an HDInsight version continue to run as is. You can't create new clusters of this version through any means, including the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, aren't guaranteed to work after the retirement date. Support isn't available for retired versions.
### Spark versions supported in Azure HDInsight
Azure HDInsight supports the following Apache Spark versions.
## Support options for HDInsight versions
-Support defined as a time period that an HDInsight version supported by Microsoft Customer Service and Support. HDInsight offers two types of support:
+Support is defined as the time period during which an HDInsight version is supported by Microsoft Customer Service and Support. HDInsight offers two types of support:
- **Standard support** - **Basic support**
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
-* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
## New features
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
-* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
### Fixed issues
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
-* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
## What's new
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
-* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
> [!IMPORTANT] > This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on September 12, 2023. The action is to update to the latest image **2308221128**. Customers are advised to plan accordingly.
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
-* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
## :::image type="icon" border="false" source="./media/hdinsight-release-notes/whats-new.svg"::: What's new * HDInsight 5.1 is now supported with ESP cluster.
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
-* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
:::image type="content" border="true" source="media/hdinsight-release-notes/new-icon-for-updated.png" alt-text="Icon showing update with text.":::
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see * [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
-* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
> [!IMPORTANT] > Microsoft has issued [CVE-2023-23408](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-23408), which is fixed on the current release and customers are advised to upgrade their clusters to latest image. 
For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-ver
:::image type="content" border="true" source="media/hdinsight-release-notes/new-icon-for-end-of-support.png" alt-text="Icon showing end of support with text.":::
-End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md)
## What's next
HDInsight uses safe deployment practices, which involve gradual region deploymen
* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 * HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
-For workload specific versions, see [here.](./hdinsight-40-component-versioning.md)
- :::image type="content" border="true" source="media/hdinsight-release-notes/new-icon-for-new-feature.png" alt-text="Icon showing new features with text."::: * **Log Analytics** - Customers can enable classic monitoring to get the latest OMS version 14.19. To remove old versions, disable and enable classic monitoring.
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
- Title: Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
-description: Learn how to migrate Apache Hive workloads on HDInsight 3.6 to HDInsight 4.0.
---- Previously updated : 06/12/2024--
-# Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
-
-HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an [overview of what's new in HDInsight 4.0](../hdinsight-version-release.md).
-
-This article covers steps to migrate Hive workloads from HDInsight 3.6 to 4.0, including
-
-* Hive metastore copy and schema upgrade
-* Safe migration for ACID compatibility
-* Preservation of Hive security policies
-
-The new and old HDInsight clusters must have access to the same Storage Accounts.
-
-Migration of Hive tables to a new Storage Account needs to be done as a separate step. See [Hive Migration across Storage Accounts](./hive-migration-across-storage-accounts.md).
--
-## Changes in Hive 3 and what's new:
-
-### Hive client changes
-Hive 3 supports only the thin client, Beeline, for running queries and Hive administrative commands from the command line. Beeline uses a JDBC connection to HiveServer to execute all commands. Parsing, compiling, and executing operations occur in HiveServer.
-
-You run supported Hive CLI commands by invoking Beeline with the `hive` keyword as a Hive user, or by starting Beeline directly with `beeline -u <JDBC URL>`. You can get the JDBC URL from the Ambari Hive page.
--
-Using Beeline (instead of the thick-client Hive CLI, which is no longer supported) has several advantages, including:
-
-* Instead of maintaining the entire Hive code base, you can maintain only the JDBC client.
-* Startup overhead is lower by using Beeline because the entire Hive code base isn't involved.
-
-You can also run the Hive script under the `/usr/bin` directory, which invokes a Beeline connection using the JDBC URL.
--
-A thin client architecture helps secure data in these ways:
-
-* Session state, internal data structures, passwords, and so on, reside on the client instead of the server.
-* The small number of daemons required to execute queries simplifies monitoring and debugging.
-
-HiveServer enforces allowlist and blocklist settings that you can change using `SET` commands. Using the blocklists, you can restrict memory configuration to prevent Hive Server instability. You can configure multiple HiveServer instances with different allowlist and blocklist to establish different levels of stability.
-
-### Hive Metastore changes
-
-Hive now supports only a remote metastore instead of an embedded metastore (within the HS2 JVM). The Hive metastore resides on a node in a cluster managed by Ambari as part of the HDInsight stack. A standalone server outside the cluster isn't supported. You no longer set key=value commands on the command line to configure Hive Metastore. The HMS service that's used and the connection that's established are based on the value configured in `hive.metastore.uris`.
-
-#### Execution engine change
-
-Apache Tez replaces MapReduce as the default Hive execution engine. MapReduce is deprecated starting in Hive 2.0 (see [HIVE-12300](https://issues.apache.org/jira/browse/HIVE-12300)). With expressions of directed acyclic graphs (DAGs) and data transfer primitives, execution of Hive queries under Tez improves performance. SQL queries you submit to Hive are executed as follows:
-
-1. Hive compiles the query.
-1. Tez executes the query.
-1. YARN allocates resources for applications across the cluster and enables authorization for Hive jobs in YARN queues.
-1. Hive updates the data in ABFS or WASB.
-1. Hive returns query results over a JDBC connection.
-
-If a legacy script or application specifies MapReduce for execution, an exception occurs as follows
--
-> [!NOTE]
-> Most user-defined functions (UDFs) require no change to execute on Tez instead of MapReduce.
-
-**Changes with respect to ACID transaction and CBO:**
-
-* ACID tables are the default table type in HDInsight 4.x with no performance or operational overhead.
-* Simplified application development, operations with stronger transactional guarantees, and simpler semantics for SQL commands
-* Hive internally takes care of bucketing for ACID tables in HDInsight 4.1, thus removing maintenance overhead.
-* Advanced optimizations: upgrades in CBO
-* Automatic query cache. The property used to enable query caching is `hive.query.results.cache.enabled`. You need to set this property to true. Hive stores the query result cache in `/tmp/hive/__resultcache__/`. By default, Hive allocates 2 GB for the query result cache. You can change this setting by configuring the following parameter in bytes: `hive.query.results.cache.max.size`.
-
- For more information, see [Benefits of migrating to Azure HDInsight 4.0](../benefits-of-migrating-to-hdinsight-40.md).
-
-**Materialized view rewrites**
-
- For more information, see [Hive - Materialized Views](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/hive-materialized-views/ba-p/2502785).
-
-## Changes after upgrading to Apache Hive 3
-To locate and use your Apache Hive 3 tables after an upgrade, you need to understand the changes that occur during the upgrade process: changes to the management and location of tables, permissions to table directories, table types, and ACID-compliance behavior.
-
-### Hive Management of Tables
-Hive 3 takes more control of tables than Hive 2 and requires that managed tables adhere to a strict definition. The level of control Hive takes over tables is similar to that of traditional databases. Hive is aware of delta changes to the data, and this control framework enhances performance.
-
-For example, if Hive knows that resolving a query doesn't require scanning tables for new data, Hive returns results from the hive query result cache.
-When the underlying data in a materialized view changes, Hive needs to rebuild the materialized view. ACID properties reveal exactly which rows changed and need to be processed and added to the materialized view.
-
-### Hive changes to ACID properties
-
-Hive 2.x and 3.x have both transactional (managed) and nontransactional (external) tables. Transactional tables have atomicity, consistency, isolation, and durability (ACID) properties. In Hive 2.x, the initial version of ACID transaction processing is ACID v1. In Hive 3.x, tables default to ACID v2.
-
-### Native and non-native storage formats
-
-Storage formats are a factor in upgrade changes to table types. Hive 2.x and 3.x support the following Hadoop native and non-native storage formats:
-
-**Native:** Tables with built-in support in Hive, in the following file formats
-* Text
-* Sequence File
-* RC File
-* AVRO File
-* ORC File
-* Parquet File
-
-**Non-native:** Tables that use a storage handler, such as the DruidStorageHandler or HBaseStorageHandler
-
-## HDInsight 4.x upgrade changes to table types
-
-The following table compares Hive table types and ACID operations before an upgrade from HDInsight 3.x and after an upgrade to HDInsight 4.x. The ownership of the Hive table file is a factor in determining table types and ACID operations after the upgrade
-
-### HDInsight 3.x and HDInsight 4.x Table type comparison
-
-|**HDInsight 3.x**| - | - | - |**HDInsight 4.x**| - |
-|-|-|-|-|-|-|
-|**Table Type** |**ACID v1** |**Format** |**Owner (user) of Hive Table File** |**Table Type**|**ACID v2**|
-|External |No |Native or non-native| Hive or non-Hive |External |No|
-|Managed |Yes |ORC |Hive or non-Hive| Managed, updatable |Yes|
-|Managed |No |ORC |Hive| Managed, updatable |Yes|
-|Managed|No|ORC|non-Hive |External, with data delete |NO|
-|Managed |No |Native (but non-ORC)| Hive |Managed, insert only |Yes|
-|Managed|No|Native (but non-ORC)|non-Hive |External, with data delete |No|
-|Managed |No |Non-native| Hive or non-Hive| External, with data delete| No|
-
-## Hive Impersonation
-
-Hive impersonation was enabled by default in Hive 2 (doAs=true) and disabled by default in Hive 3. Hive impersonation determines whether Hive runs queries as the end user or as the Hive service user.
-
-### Other HDInsight 4.x upgrade changes
-
-1. Managed, ACID tables not owned by the Hive user remain managed tables after the upgrade, but Hive becomes the owner.
-1. After the upgrade, the format of a Hive table is the same as before the upgrade. For example, native or non-native tables remain native or non-native, respectively.
-
-## Location Changes
-
-After the upgrade, the location of managed tables or partitions doesn't change under any one of the following conditions:
-
-* The old table or partition directory wasn't in its default location /apps/hive/warehouse before the upgrade.
-* The old table or partition is in a different file system than the new warehouse directory.
-* The old table or partition directory is in a different encryption zone than the new warehouse directory.
-
-Otherwise, the location of managed tables or partitions does change. The upgrade process moves managed files to `/hive/warehouse/managed`. By default, Hive places any new external tables you create in HDInsight 4.x in `/hive/warehouse/external`
-
-The `/apps/hive` directory, which is the former location of the Hive 2.x warehouse, might or might not exist in HDInsight 4.x.
-
-The following scenarios describe the location changes:
-
-**Scenario 1**
-
-If a table is a managed table in HDInsight 3.x in the location `/apps/hive/warehouse` and is converted to an external table in HDInsight 4.x, its location remains `/apps/hive/warehouse` in HDInsight 4.x. The location doesn't change. If you later run an `ALTER TABLE` command to convert it to a managed (ACID) table, it remains in the same location, `/apps/hive/warehouse`.
-
-**Scenario 2**
-
-If the table is a managed table in HDInsight-3.x and if it's present in the location `/apps/hive/warehouse` and converted to managed (ACID) table in HDInsight 4.x, then the location is `/hive/warehouse/managed`.
-
-**Scenario 3**
-If you create an external table in HDInsight 4.x without specifying a location, the table is placed in `/hive/warehouse/external`.
-
-## Table conversion
-
-After upgrading, to convert a nontransactional table to an ACID v2 transactional table, you use the `ALTER TABLE` command and set table properties to
-```
-'transactional'='true' and 'EXTERNAL'='false'
-```
-* The managed table, non-ACID, ORC format and owned by non-Hive user in HDInsight-3.x will be converted to external, non-ACID table in HDInsight-4.x.
-* To change an external (non-ACID) table to ACID, change it to managed and ACID at the same time, because in HDInsight 4.x all managed tables are strictly ACID by default. You can't convert an external (non-ACID) table directly to an ACID table.
-
-> [!NOTE]
-> The table must be an ORC table.
-
-To convert external table (non-ACID) to Managed (ACID) table,
-
-1. Convert the external table to managed and set `transactional` to true by using the following command:
- ```
- alter table <table name> set TBLPROPERTIES ('EXTERNAL'='false', 'transactional'='true');
- ```
-1. If you try to run only one of these property changes on an external table, you get the errors shown in the following scenarios.
-
-**Scenario 1**
-
-Consider table `rt`, an external (non-ACID) table. If the table is a non-ORC table:
-
-```
-alter table rt set TBLPROPERTIES ('transactional'='true');
-ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The table must be stored using an ACID compliant format (such as ORC): work.rt
-The table must be ORC format
-```
-
-**Scenario 2**
-
-If the table is an ORC table:
-
-```
->>>> alter table rt set TBLPROPERTIES ('transactional'='true');
-ERROR:
-Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. work.rt can't be declared transactional because it's an external table (state=08S01,code=1)
-```
-
-This error occurs because the table `rt` is an external table, and you can't convert an external table to ACID.
-
-**Scenario 3**
-
-```
->>>> alter table rt set TBLPROPERTIES ('EXTERNAL'='false');
-ERROR:
-Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Table work.rt failed strict managed table checks due to the following reason: Table is marked as a managed table but isn't transactional. (state=08S01,code=1)
-```
-
-Here we're trying to change the external table to a managed table first. In HDInsight 4.x, a managed table must be strictly managed (which means it must be ACID).
-So, here you get a deadlock. The only way to convert an external (non-ACID) table to a managed (ACID) table is to set both properties in one command:
-
-```
-alter table rt set TBLPROPERTIES ('EXTERNAL'='false', 'transactional'='true');
-```
-
-## Syntax and semantics
-
-* Creating a table
-To improve usability and functionality, Hive 3 changed table creation.
-Hive has changed table creation in the following ways
- * Creates ACID-compliant table, which is the default in HDP
- * Supports simple writes and inserts
- * Writes to multiple partitions
- * Inserts multiple data updates in a single SELECT statement
- * Eliminates the need for bucketing.
-
- If you have an ETL pipeline that creates tables in Hive, the tables are created as ACID tables. Hive now tightly controls access and performs compaction periodically on the tables.
-
- **Before Upgrade**
- In HDInsight 3.x, by default CREATE TABLE created a non-ACID table.
-
- **After Upgrade** By default CREATE TABLE creates a full, ACID transactional table in ORC format.
-
- **Action Required**
- To access Hive ACID tables from Spark, you connect to Hive using the Hive Warehouse Connector (HWC). To write ACID tables to Hive from Spark, you use the HWC and HWC API
-
-* Escaping `db.table` References
-
- You need to change queries that use db.table references to prevent Hive from interpreting the entire db.table string as the table name.
- Hive 3.x rejects `db.table` in SQL queries. A dot (.) isn't allowed in table names. You enclose the database name and the table name in backticks.
- For example, find a table with a problematic reference, such as `math.students` in a CREATE TABLE statement, and enclose the database name and the table name in backticks:
-
- ```sql
- CREATE TABLE `math`.`students` (name VARCHAR(64), age INT, gpa DECIMAL(3,2));
- ```
-
-* CASTING TIMESTAMPS
- Results of applications that cast numerics to timestamps differ from Hive 2 to Hive 3. Apache Hive changed the behavior of CAST to comply with the SQL Standard, which doesn't associate a time zone with the TIMESTAMP type.
-
- **Before Upgrade**
- Casting a numeric type value into a timestamp could be used to produce a result that reflected the time zone of the cluster. For example, 1597217764557 is 2020-08-12 00:36:04 PDT. Running the following query casts the numeric to a timestamp in PDT:
- `SELECT CAST(1597217764557 AS TIMESTAMP);`
- | 2020-08-12 00:36:04 |
-
- **After Upgrade**
- Casting a numeric type value into a timestamp produces a result that reflects the UTC instead of the time zone of the cluster. Running the query casts the numeric to a timestamp in UTC.
- `SELECT CAST(1597217764557 AS TIMESTAMP);`
- | 2020-08-12 07:36:04.557 |
-
- **Action Required**
- Change applications. Don't cast from a numeral to obtain a local time zone. Built-in functions from_utc_timestamp and to_utc_timestamp can be used to mimic behavior before the upgrade.
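-
- As a minimal sketch of that workaround (the time zone string is only an example), wrap the cast with the built-in conversion function:
-
- ```sql
- -- Post-upgrade, CAST returns UTC; convert back to a desired zone to mimic pre-upgrade output.
- SELECT from_utc_timestamp(CAST(1597217764557 AS TIMESTAMP), 'America/Los_Angeles');
- ```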
-
-* CHECKING COMPATIBILITY OF COLUMN CHANGES
- A default configuration change can cause applications that change column types to fail.
-
- **Before Upgrade**
- In HDInsight 3.x, `hive.metastore.disallow.incompatible.col.type.changes` is false by default to allow changes to incompatible column types. For example, you can change a STRING column to a column of an incompatible type, such as MAP<STRING, STRING>. No error occurs.
-
- **After Upgrade**
- The hive.metastore.disallow.incompatible.col.type.changes is true by default. Hive prevents changes to incompatible column types. Compatible column type changes, such as INT, STRING, BIGINT, aren't blocked.
-
- **Action Required**
- Change applications to disallow incompatible column type changes to prevent possible data corruption.
-
-* DROPPING PARTITIONS
-
- The OFFLINE and NO_DROP keywords in the CASCADE clause for dropping partitions cause performance problems and are no longer supported.
-
- **Before Upgrade**
- You could use OFFLINE and NO_DROP keywords in the CASCADE clause to prevent partitions from being read or dropped.
-
- **After Upgrade**
- OFFLINE and NO_DROP aren't supported in the CASCADE clause.
-
- **Action Required**
- Change applications to remove OFFLINE and NO_DROP from the CASCADE clause. Use an authorization scheme, such as Ranger, to prevent partitions from being dropped or read.
-
-* RENAMING A TABLE
- After the upgrade, renaming a managed table moves its location only if the table was created without a `LOCATION` clause and is under its database directory.
-
-## Limitations with respect to CBO
-
-* The SELECT output shows trailing zeros in some columns. For example, if a table column has the data type decimal(38,4) and you insert the value 38, the output adds trailing zeros and returns 38.0000.
-As per https://issues.apache.org/jira/browse/HIVE-12063 and https://issues.apache.org/jira/browse/HIVE-24389, the idea is to retain the scale and precision instead of running a wrapper in decimal columns. This is the default behavior from Hive 2.
-To work around this behavior, use one of the following options:
-
- 1. Modify the data type at the source level to adjust the precision, for example col1(decimal(38,0)). This returns 38 without trailing zeros. However, if you insert a value such as 35.0005, the fractional part is dropped and only 35 is stored.
- 1. Remove the trailing zeros from the affected columns and then cast to string, using one of these approaches:
-    1. Use `select TRIM(cast(<column_name> AS STRING))+0 FROM <table_name>;`
-    1. Use a regex.
-
-1. Hive query fails with "Unsupported SubQuery Expression" when we use UNIX_TIMESTAMP in the query.
- For example,
- If we run a query, then it throws an error "Unsupported SubQuery Expression"
- ```
- select * from
- (SELECT col_1 from table1 where col_2 >= unix_timestamp('2020-03-07','yyyy-MM-dd'));
- ```
- The root cause of this issue is that the current Hive codebase throws an exception while parsing UNIX_TIMESTAMP, because there's no precision mapping in `HiveTypeSystemImpl.java` for the precision of `UNIX_TIMESTAMP`, which Calcite recognizes as `BIGINT`.
- But the below query works fine
- `select * from (SELECT col_1 from table1 where col_2 >= 1);`
-
- This command executes successfully since col_2 is an integer.
- The above issue was fixed in hdi-3.1.2-4.1.12(4.1 stack) and hdi-3.1.2-5.0.8(5.0 stack)
-
-## Steps to upgrade
-
-### 1. Prepare the data
-
-* HDInsight 3.6 by default doesn't support ACID tables. If ACID tables are present, however, run 'MAJOR' compaction on them. See the [Hive Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Compact) for details on compaction.
-
- |Property | Value |
- |---|---|
- |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh`|
- |Node type(s)|Head|
- |Parameters||
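-
-For the 'MAJOR' compaction called out in step 1, the following is a minimal sketch of requesting and monitoring a compaction; the table and partition names are hypothetical.
-
-```sql
--- Request a MAJOR compaction on a hypothetical partitioned ACID table, then check progress.
-ALTER TABLE sales PARTITION (region='US') COMPACT 'major';
-SHOW COMPACTIONS;
-```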
-
-### 2. Copy the SQL database
-
-* If the cluster uses a default Hive metastore, follow this [guide](./hive-default-metastore-export-import.md) to export metadata to an external metastore. Then, create a copy of the external metastore for upgrade.
-
-* If the cluster uses an external Hive metastore, create a copy of it. Options include [export/import](/azure/azure-sql/database/database-export) and [point-in-time restore](/azure/azure-sql/database/recovery-using-backups#point-in-time-restore).
-
-### 3. Upgrade the metastore schema
-
-This step uses the [`Hive Schema Tool`](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool) from HDInsight 4.0 to upgrade the metastore schema.
-
-> [!WARNING]
-> This step isn't reversible. Run this only on a copy of the metastore.
-
-1. Create a temporary HDInsight 4.0 cluster to access the 4.0 Hive `schematool`. You can use the [default Hive metastore](../hdinsight-use-external-metadata-stores.md#default-metastore) for this step.
-
-1. From the HDInsight 4.0 cluster, execute `schematool` to upgrade the target HDInsight 3.6 metastore. Edit the following shell script to add your SQL server name, database name, username, and password. Open an [SSH Session](../hdinsight-hadoop-linux-use-ssh-unix.md) on the headnode and run it.
-
- ```sh
- SERVER='servername.database.windows.net' # replace with your SQL Server
- DATABASE='database' # replace with your 3.6 metastore SQL Database
- USERNAME='username' # replace with your 3.6 metastore username
- PASSWORD='password' # replace with your 3.6 metastore password
- STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }')
- /usr/hdp/$STACK_VERSION/hive/bin/schematool -upgradeSchema -url "jdbc:sqlserver://$SERVER;databaseName=$DATABASE;trustServerCertificate=false;encrypt=true;hostNameInCertificate=*.database.windows.net;" -userName "$USERNAME" -passWord "$PASSWORD" -dbType "mssql" --verbose
- ```
-
- > [!NOTE]
- > This utility uses client `beeline` to execute SQL scripts in `/usr/hdp/$STACK_VERSION/hive/scripts/metastore/upgrade/mssql/upgrade-*.mssql.sql`.
- >
- > SQL Syntax in these scripts isn't necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
- >
- > If any script fails due to resource capacity or transaction timeouts, scale up the SQL Database.
-
-1. Verify the final version with query `select schema_version from dbo.version`.
-
- The output should match that of the following bash command from the HDInsight 4.0 cluster.
-
- ```bash
- grep . /usr/hdp/$(hdp-select --version)/hive/scripts/metastore/upgrade/mssql/upgrade.order.mssql | tail -n1 | rev | cut -d'-' -f1 | rev
- ```
-
-1. Delete the temporary HDInsight 4.0 cluster.
-
-### 4. Deploy a new HDInsight 4.0 cluster
-
-Create a new HDInsight 4.0 cluster, [selecting the upgraded Hive metastore](../hdinsight-use-external-metadata-stores.md#select-a-custom-metastore-during-cluster-creation) and the same Storage Accounts.
-
-* The new cluster doesn't require having the same default filesystem.
-
-* If the metastore contains tables residing in multiple Storage Accounts, you need to add those Storage Accounts to the new cluster to access those tables. See [add extra Storage Accounts to HDInsight](../hdinsight-hadoop-add-storage.md).
-
-* If Hive jobs fail due to storage inaccessibility, verify that the table location is in a Storage Account added to the cluster.
-
- Use the following Hive command to identify table location:
-
- ```sql
- SHOW CREATE TABLE ([db_name.]table_name|view_name);
- ```
-
-### 5. Convert Tables for ACID Compliance
-
-Managed tables must be ACID-compliant on HDInsight 4.0. Run `strictmanagedmigration` on HDInsight 4.0 to convert all non-ACID managed tables to external tables with property `'external.table.purge'='true'`. Execute from the headnode:
-
-```bash
-sudo su - hive
-STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }')
-/usr/hdp/$STACK_VERSION/hive/bin/hive --config /etc/hive/conf --service strictmanagedmigration --hiveconf hive.strict.managed.tables=true -m automatic --modifyManagedTables
-```
-### 6. Class not found error with `MultiDelimitSerDe`
-
-**Problem**
-
-In certain situations when running a Hive query, you might receive `java.lang.ClassNotFoundException` stating that the `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` class isn't found. This error occurs when a customer migrates from HDInsight 3.6 to HDInsight 4.0. The SerDe class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe`, which is part of `hive-contrib-1.2.1000.2.6.5.3033-1.jar` in HDInsight 3.6, is removed. HDInsight 4.0 uses the `org.apache.hadoop.hive.serde2.MultiDelimitSerDe` class, which is part of the `hive-exec` JAR. The `hive-exec` JAR loads to HiveServer2 by default when the service starts.
-
-**STEPS TO TROUBLESHOOT**
-
-1. Check whether any JAR under the Hive libraries folder (`/usr/hdp/current/hive/lib` in HDInsight) contains this class.
-1. Check for the class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` and `org.apache.hadoop.hive.serde2.MultiDelimitSerDe` as mentioned in the solution.
-
-**Solution**
-
-1. Although a JAR file is a binary file, you can still use the `grep` command with the `-Hrni` switches, as shown below, to search for a particular class name:
- ```
- grep -Hrni "org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe" /usr/hdp/current/hive/lib
- ```
-1. If the class isn't found, the command returns no output. If it finds the class in a JAR file, it returns the matching file path.
-
-1. The following example was taken from an HDInsight 4.x cluster:
-
- ```
- sshuser@hn0-alters:~$ grep -Hrni "org.apache.hadoop.hive.serde2.MultiDelimitSerDe" /usr/hdp/4.1.9.7/hive/lib/
- Binary file /usr/hdp/4.1.9.7/hive/lib/hive-exec-3.1.0.4.1-SNAPSHOT.jar matches
- ```
-1. From the above output, we can confirm that no JAR contains the class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` and that the `hive-exec` JAR contains `org.apache.hadoop.hive.serde2.MultiDelimitSerDe`.
-1. Create the table with the row format SerDe `ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'` (a sketch of a full CREATE TABLE statement follows these commands).
-1. This fixes the issue. If you've already created the table, you can switch it to the new SerDe class by using the following commands:
- ```
- Hive => ALTER TABLE TABLE_NAME SET SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
- Backend DB => UPDATE SERDES SET SLIB='org.apache.hadoop.hive.serde2.MultiDelimitSerDe' where SLIB='org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe';
- ```
-The UPDATE command manually updates the details in the backend DB, and the ALTER command changes the table to the new SerDe class from Beeline or Hive.
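-
-The following is a minimal sketch of creating a table with this SerDe; the table name, columns, and delimiter are hypothetical.
-
-```sql
--- Hypothetical table using the hive-exec MultiDelimitSerDe with a multi-character field delimiter.
-CREATE TABLE logs_multidelim (col1 STRING, col2 STRING, col3 STRING)
-ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
-WITH SERDEPROPERTIES ('field.delim'='||')
-STORED AS TEXTFILE;
-```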
-
-### Hive backend DB schema compare script
-
-You can run the following script after completing the migration.
-
-There's a chance that a few columns are missing in the backend DB, which causes query failures. If the schema upgrade didn't happen properly, you might hit the invalid column name issue. The following script fetches the column names and data types from the customer's backend DB and reports any missing column or incorrect data type.
-
-The following paths contain the `schemacompare_final.py` and `test.csv` files. The script is in the `schemacompare_final.py` file, and `test.csv` contains all the column names and data types for all the tables that should be present in the Hive backend DB.
-
-https://hdiconfigactions2.blob.core.windows.net/hiveschemacompare/schemacompare_final.py
-
-https://hdiconfigactions2.blob.core.windows.net/hiveschemacompare/test.csv
-
-Download the two files from these links, and copy them to one of the head nodes where the Hive service is running.
-
-**Steps to execute the script:**
-
-Create a directory called `schemacompare` under the `/tmp` directory.
-
-Put `schemacompare_final.py` and `test.csv` into the folder `/tmp/schemacompare`. Run `ls -ltrh /tmp/schemacompare/` and verify that the files are present.
-
-To execute the Python script, use the command `python schemacompare_final.py`. The script takes less than five minutes to complete. It automatically connects to your backend DB, fetches the details from every table that Hive uses, and writes them to a new CSV file called "return.csv". It then compares "return.csv" with "test.csv" and, under the table name, prints any column name or data type that's missing.
-
-While the script runs, you see lines like the following, which indicate that the details are being fetched for the tables and that the script is in progress:
-
-```
-KEY_CONSTRAINTS
-Details Fetched
-DELEGATION_TOKENS
-Details Fetched
-WRITE_SET
-Details Fetched
-SERDES
-Details Fetched
-```
-
-You can see the difference details under the "DIFFERENCE DETAILS:" line. If there's any difference, it prints output like the following:
-
-```
-PART_COL_STATS;
-('difference', ['BIT_VECTOR', 'varbinary'])
-```
-
-The line ending with a semicolon, `PART_COL_STATS;`, is the table name. Under the table name, you can find the differences, such as `('difference', ['BIT_VECTOR', 'varbinary'])`, if there's any difference in a column or data type.
-
-If there are no differences in the table, then the output is
-
-```
-BUCKETING_COLS;
-('difference', [])
-PARTITIONS;
-('difference', [])
-```
-
-From this output, you can find the column names that are missing or incorrect. You can run the following query in your backend DB to verify whether the column is missing:
-
-`SELECT * FROM INFORMATION_SCHEMA.columns WHERE TABLE_NAME = 'PART_COL_STATS';`
-
-If a column is missing from one of these tables, query failures can occur. For example, if you run queries like INSERT or INSERT OVERWRITE, statistics are calculated automatically, and Hive tries to update stats tables like PART_COL_STATS and TAB_COL_STATS. If a column such as "BIT_VECTOR" is missing from those tables, the query fails with an "Invalid column name" error. You can add the column as shown in the following commands. As a workaround, you can disable stats gathering by setting the following properties, which prevents stats updates in the backend database.
-
-```
-hive.stats.autogather=false;
-hive.stats.column.autogather=false;
-```
-
-To fix this issue, run the following two queries on the backend SQL server (Hive metastore DB):
-
-```
-ALTER TABLE PART_COL_STATS ADD BIT_VECTOR VARBINARY(MAX);
-ALTER TABLE TAB_COL_STATS ADD BIT_VECTOR VARBINARY(MAX);
-```
-
-This step avoids query failures with the "Invalid column name" error after the migration.
-
-## Secure Hive across HDInsight versions
-
-HDInsight optionally integrates with Microsoft Entra ID using HDInsight Enterprise Security Package (ESP). ESP uses Kerberos and Apache Ranger to manage the permissions of specific resources within the cluster. Ranger policies deployed against Hive in HDInsight 3.6 can be migrated to HDInsight 4.0 with the following steps:
-
-1. Navigate to the Ranger Service Manager panel in your HDInsight 3.6 cluster.
-1. Navigate to the policy named **HIVE** and export the policy to a json file.
-1. Make sure that all users referred to in the exported policy json exist in the new cluster. If a user is referred to in the policy json but doesn't exist in the new cluster, either add the user to the new cluster or remove the reference from the policy.
-1. Navigate to the **Ranger Service Manager** panel in your HDInsight 4.0 cluster.
-1. Navigate to the policy named **HIVE** and import the ranger policy json from step 2.
-
-## Hive changes in HDInsight 4.0 that may require application changes
-
-* See [Extra configuration using Hive Warehouse Connector](./apache-hive-warehouse-connector.md) for sharing the metastore between Spark and Hive for ACID tables.
-
-* HDInsight 4.0 uses [Storage Based Authorization](https://cwiki.apache.org/confluence/display/Hive/Storage+Based+Authorization+in+the+Metastore+Server). If you modify file permissions or create folders as a different user than Hive, you'll likely hit Hive errors based on storage permissions. To fix, grant `rw-` access to the user. See [HDFS Permissions Guide](https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html).
-
-* `HiveCLI` is replaced with `Beeline`.
-
-Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for other changes.
-
-## Post migration
-
-Make sure to follow these steps after completing the migration.
-
-**Table Sanity**
-1. Recreate tables in Hive 3.1 using CTAS or IOW to change table type instead of changing table properties.
-1. Keep doAs as false.
-1. Ensure managed table/data ownership is with the "hive" user.
-1. Use managed ACID tables if table format is ORC and managed non-ACID for non-ORC types.
-1. Regenerate stats on recreated tables as migration would have caused incorrect stats.
-
-**Cluster Health**
-
-If multiple clusters share the same storage and HMS DB, enable auto-compaction/compaction threads in only one cluster and disable them everywhere else.
-
-Tune the Metastore to reduce its CPU usage.
-1. Disable transactional event listeners.
- > [!NOTE]
 > Perform the following steps only if the Hive replication feature isn't used.
-
- 1. From Ambari UI, **remove the value for hive.metastore.transactional.event.listeners**.
- 1. Default Value: `org.apache.hive.hcatalog.listener.DbNotificationListener`
- 1. New value: `<Empty>`
-
-1. Disable Hive PrivilegeSynchronizer
- 1. From Ambari UI, **set hive.privilege.synchronizer = false.**
- 1. Default Value: `true`
- 1. New value: `false`
-
-1. Optimize the partition repair feature
-    1. Disable partition repair - This feature synchronizes the partitions of Hive tables in the storage location with the Hive metastore. You may disable this feature if `msck repair` is used after data ingestion.
-    1. To disable the feature, **add "discover.partitions=false"** under table properties using ALTER TABLE (see the example sketch after this list).
-     OR (if the feature can't be disabled)
-    1. Increase the partition repair frequency.
-
-1. From Ambari UI, increase the value of "metastore.partition.management.task.frequency" (in seconds).
- > [!NOTE]
- > This change can delay the visibility of some of the partitions ingested into storage.
-
- 1. Default Value: `60`
- 1. Proposed value: `3600`
-1. Advanced Optimizations
-The following options need to be tested in a lower (non-production) environment before applying them to production.
-    1. Remove the materialized view related listener if materialized views aren't used.
- 1. From Ambari UI, **add a custom property (in custom hive-site.xml) and remove the unwanted background metastore threads**.
- 1. Property name: **metastore.task.threads.remote**
- 1. Default Value: `N/A (it uses few class names internally)`
- 1. New value:
-`org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService,org.apache.hadoop.hive.metastore.txn.AcidOpenTxnsCounterService,org.apache.hadoop.hive.metastore.txn.AcidCompactionHistoryService,org.apache.hadoop.hive.metastore.txn.AcidWriteSetService,org.apache.hadoop.hive.metastore.PartitionManagementTask`
-1. Disable the background threads if replication is disabled.
- 1. From Ambari UI, add a custom property (in custom hive-site.xml) and remove the unwanted threads.
- 1. Property name: **metastore.task.threads.always**
- 1. Default Value: `N/A (it uses few class names internally)`
- 1. New value: `org.apache.hadoop.hive.metastore.RuntimeStatsCleanerTask`
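
As referenced in the partition repair step above, a minimal sketch of disabling partition discovery for a single table; `sample_table` is a placeholder table name:

```hql
-- Turn off automatic partition discovery for this table
ALTER TABLE sample_table SET TBLPROPERTIES ('discover.partitions'='false');
```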
-
-**Query Tuning**
-1. Keep the default Hive configs to run queries, as they're tuned for TPC-DS workloads. Apply query-level tuning only if a query fails or runs slowly.
-1. Ensure stats are up to date to avoid bad plans or wrong results.
-1. Avoid mixing external and managed ACID tables in join queries. In such cases, try to convert the external table to a managed non-ACID table through recreation.
-1. In Hive 3, a lot of work happened on vectorization, CBO, timestamp with time zone, and so on, which may have product bugs. So, if any query gives wrong results, try disabling vectorization, CBO, map join, and so on, to see if that helps.
-
-Follow these other steps to fix incorrect results and poor performance after the migration:
-
-1. **Issue**
- A Hive query gives an incorrect result. Even a `select count(*)` query gives an incorrect result.
-
- **Cause**
- The property "hive.compute.query.using.stats" is set to true by default. When it's true, Hive uses the stats stored in the metastore to answer the query. If the stats aren't up to date, the query returns incorrect results.
-
- **Resolution**
- Collect the stats for the managed tables by using the `analyze table <table_name> compute statistics;` command at the table level and column level. Reference link - https://cwiki.apache.org/confluence/display/hive/statsdev#StatsDev-TableandPartitionStatistics
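
    A minimal sketch of collecting the statistics, assuming a managed table named `sample_table` (a placeholder name):

    ```hql
    -- Gather table-level statistics
    analyze table sample_table compute statistics;
    -- Gather column-level statistics for all columns
    analyze table sample_table compute statistics for columns;
    ```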
-
-1. **Issue**
- Hive queries take a long time to execute.
-
- **Cause**
- If the query has a join condition, Hive creates a plan to use either a map join or a merge join based on the table sizes and the join condition. If one of the tables is small, Hive loads that table into memory and performs the join there, which makes the query execution faster when compared to the merge join.
-
- **Resolution**
- Make sure the property "hive.auto.convert.join=true" is set, which is the default value. Setting it to false uses the merge join and may result in poor performance.
- Hive decides whether to use a map join based on the following properties, which are set in the cluster:
-
- ```
- set hive.auto.convert.join=true;
- set hive.auto.convert.join.noconditionaltask=true;
- set hive.auto.convert.join.noconditionaltask.size=<value>;
- set hive.mapjoin.smalltable.filesize = <value>;
- ```
- A common join can convert to a map join automatically, when `hive.auto.convert.join.noconditionaltask=true`, if the estimated size of the small table(s) is smaller than `hive.auto.convert.join.noconditionaltask.size` (default value is 10000000 bytes).
-
-
- If you face an out-of-memory (OOM) issue when the property `hive.auto.convert.join` is set to true, it's advisable to set it to false only for that particular query at the session level and not at the cluster level. This issue might occur if the stats are wrong and Hive decides to use a map join based on those stats.
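
    For example, a minimal sketch of disabling the automatic map join at the session level for one affected query; the table and column names are placeholders:

    ```hql
    -- Disable map-join conversion only for this session
    set hive.auto.convert.join=false;
    -- Run the affected query; other sessions keep the cluster-level default
    select t1.id, t2.val from table1 t1 join table2 t2 on t1.id = t2.id;
    ```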
-
-* **Issue**
- A Hive query gives an incorrect result if the query has a join condition and the tables involved have null or empty values.
-
- **Cause**
- If the tables involved in the query have a lot of null values, Hive may perform the query optimization incorrectly with those null values involved, which results in incorrect results.
-
- **Resolution**
- We recommend trying the property `set hive.cbo.returnpath.hiveop=true` at the session level if you get any incorrect results. This config introduces not-null filtering on join keys. If the tables have many null values, you can enable this config to optimize the join operation between multiple tables so that only the not-null values are considered.
-
-* **Issue**
- A Hive query gives an incorrect result if the query has multiple join conditions.
-
- **Cause**
- Sometimes Tez produces bad runtime plans when the same join appears multiple times with map joins.
-
- **Resolution**
- There's a chance of getting incorrect results when `hive.merge.nway.joins` is set to false. Try setting it to true only for the affected query. This setting helps queries with multiple joins on the same condition by merging those joins into a single join operator, which is useful for large shuffle joins because it avoids a reshuffle phase.
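
    A minimal sketch of applying the setting at the session level before running only the affected query:

    ```hql
    -- Merge repeated joins on the same condition into a single join operator for this session
    set hive.merge.nway.joins=true;
    ```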
-
-* **Issue**
- The query execution time increases day by day compared to earlier runs.
-
- **Cause**
- This issue might occur if the number of small files increases. Hive then takes more time reading all the files to process the data, which increases the execution time.
-
- **Resolution**
- Make sure to run compaction frequently for the managed tables. This step avoids the accumulation of small files and improves performance.
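
    For example, a minimal sketch of manually triggering a major compaction on a managed table; `sample_table` is a placeholder name:

    ```hql
    -- Request a major compaction to merge delta files into larger base files
    alter table sample_table compact 'major';
    -- Monitor compaction progress
    show compactions;
    ```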
-
- Reference link: [Hive Transactions - Apache Hive - Apache Software Foundation](https://cwiki.apache.org/confluence/display/hive/hive+transactions).
-
-* **Issue**
- A Hive query gives an incorrect result when a join condition is used between a managed ACID ORC table and a managed non-ACID ORC table.
-
- **Cause**
- From Hive 3 onwards, all managed tables are required to be ACID tables, and for a table to be ACID the table format must be ORC; this is the main criterion. However, if the strict managed table property "hive.strict.managed.tables" is set to false, a managed non-ACID table can be created. In some cases, a customer creates an external ORC table (or the table is converted to an external table after the migration), disables the strict managed table property, and converts it to a managed table. At this point, the table is in the non-ACID managed ORC format.
-
- **Resolution**
- Hive optimization goes wrong if you join a non-ACID managed ORC table with an ACID managed ORC table.
-
- If you're converting an external table to a managed table:
- 1. Don't set the property "hive.strict.managed.tables" to false. If you do, you can create a non-ACID managed table, but that isn't allowed in Hive 3.
- 1. Convert the external table to a managed table by using the following alter command instead of `alter table <table_name> set TBLPROPERTIES ('EXTERNAL'='false');`
- ```
- alter table rt set TBLPROPERTIES ('EXTERNAL'='false', 'transactional'='true');
- ```
-
-## Troubleshooting guide
-
-[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
-
-## Further reading
-
-* [HDInsight 4.0 Announcement](../hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
hdinsight Interactive Query Troubleshoot Migrate 36 To 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md
- Title: Troubleshoot migration of Hive from 3.6 to 4.0 - Azure HDInsight
-description: Troubleshooting guide for migration of Hive workloads from HDInsight 3.6 to 4.0
-- Previously updated : 05/10/2024--
-# Troubleshooting guide for migration of Hive workloads from HDInsight 3.6 to HDInsight 4.0
-
-This article provides answers to some of the most common issues that customers face when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
-
-## Reduce latency when running `DESCRIBE TABLE_NAME`
-
-Workaround:
-
-* Increase the maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch. Set it to a larger number (the default is 300) until satisfactory latency levels are reached. The higher the number, the fewer round trips are needed to the Hive metastore server, but it may also cause a higher memory requirement on the client side.
-
- `hive.metastore.batch.retrieve.max=2000`
-
-* Restart Hive and all stale services
-
-## Unable to query Gzipped text file if skip.header.line.count and skip.footer.line.count are set for table
-
-This issue has been fixed in Interactive Query 4.0, but not yet in Interactive Query 3.1.0.
-
-Workaround:
-* Create the table without using ```"skip.header.line.count"="1"``` and ```"skip.footer.line.count"="1"```, then create a view over the original table that excludes the header/footer rows in the query.
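
A minimal sketch of this workaround, assuming a hypothetical two-column gzipped CSV whose header row repeats the column name `col1` and whose footer row contains the literal `footer`; the table, view, and column names are placeholders:

```hql
-- Load the file as-is, including its header and footer rows
CREATE TABLE raw_table (col1 STRING, col2 STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- Expose only the data rows by filtering out the known header/footer values
CREATE VIEW clean_view AS
SELECT * FROM raw_table
WHERE col1 <> 'col1' AND col1 <> 'footer';
```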
-
-## Unable to use Unicode characters
-
-Workaround:
-1. Connect to the hive metastore database for your cluster.
-
-2. Take the backup of `TBLS` and `TABLE_PARAMS` tables using the following command:
- ```sql
- select * into tbls_bak from tbls;
- select * into table_params_bak from table_params;
- ```
-
-3. Manually change the affected column types to `nvarchar(max)`.
- ```sql
- alter table TABLE_PARAMS alter column PARAM_VALUE nvarchar(max);
- alter table TBLS alter column VIEW_EXPANDED_TEXT nvarchar(max) null;
- alter table TBLS alter column VIEW_ORIGINAL_TEXT nvarchar(max) null;
- ```
-
-## Create table as select (CTAS) creates a new table with same UUID
-
-Hive 3.1 (HDInsight 4.0) offers a built-in UDF to generate unique UUIDs. The Hive `UUID()` method generates unique IDs even with CTAS. You can use it as follows.
-```hql
-create table rhive as
-select uuid() as UUID
-from uuid_test
-```
-
-## Hive job output format differs from HDInsight 3.6
-
-This difference is caused by the difference in WebHCat (Templeton) between HDInsight 3.6 and HDInsight 4.0.
-
-* Hive REST API - add ```-d arg=--showHeader=false -d arg=--outputformat=tsv2```
-
-* .NET SDK - initialize args of ```HiveJobSubmissionParameters```
- ```csharp
- List<string> args = new List<string> { { "--showHeader=false" }, { "--outputformat=tsv2" } };
- var parameters = new HiveJobSubmissionParameters
- {
- Query = "SELECT clientid,market from hivesampletable LIMIT 10",
- Defines = defines,
- Arguments = args
- };
- ```
-
-## Reduce Hive internal table creation latency
-
-1. From Advanced hive-site and Advanced hivemetastore-site, delete the value ```org.apache.hive.hcatalog.listener.DbNotificationListener``` for ```hive.metastore.transactional.event.listeners```.
-
-2. If ```hive.metastore.event.listeners``` has a value, remove it.
-
-3. DbNotificationListener is needed only if you use REPL commands; if not, it's safe to remove it.
-
- :::image type="content" source="./media/apache-hive-40-migration-guide/hive-reduce-internal-table-creation-latency.png" alt-text="Reduce internal table latency in HDInsight 4.0." border="true":::
-
-## Change Hive default table location
-
-This behavior change is by design on HDInsight 4.0 (Hive 3.1). The major reason for this change is file permission control.
-
-To create external tables under a custom location, specify the location in the create table statement.
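
A minimal sketch of a create table statement with an explicit location; the table name and storage path are placeholders:

```hql
-- Create an external table under a custom storage location
CREATE EXTERNAL TABLE sample_external_table (id INT, name STRING)
STORED AS ORC
LOCATION 'abfs://container@account.dfs.core.windows.net/custom/path/sample_external_table';
```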
-
-## Disable ACID in HDInsight 4.0
-
-We recommend enabling ACID in HDInsight 4.0. Most of the recent enhancements, both functional and performance, in Hive are made available only for ACID tables.
-
-Steps to disable ACID on HDInsight 4.0:
-1. Change the following hive configurations in Ambari:
-
- ```text
- hive.strict.managed.tables=false
- hive.support.concurrency=false;
- hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager;
- hive.enforce.bucketing=false;
- hive.compactor.initiator.on=false;
- hive.compactor.worker.threads=0;
- hive.create.as.insert.only=false;
- metastore.create.as.acid=false;
- ```
-> [!NOTE]
-> If hive.strict.managed.tables is set to true (the default value), creating a managed, non-transactional table fails with the following error:
-```
-java.lang.Exception: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Table <Table name> failed strict managed table checks due to the following reason: Table is marked as a managed table but is not transactional.
-```
-2. Restart hive service.
-
-> [!IMPORTANT]
-> Microsoft recommends against sharing the same data/storage between HDInsight 3.6 and HDInsight 4.0 Hive-managed tables. It's an unsupported scenario.
-
-* Normally, the above configurations should be set before creating any Hive tables on the HDInsight 4.0 cluster. Don't disable ACID after managed tables are created; doing so can potentially cause data loss or inconsistent results. It's recommended to set the configuration once when you create a new cluster and not change it later.
-
-* Disabling ACID after creating tables is risky. However, if you want to do it, follow the steps below to avoid potential data loss or inconsistency:
-
- 1. Create an external table with the same schema and copy the data from the original managed table by using the CTAS command ```create external table e_t1 as select * from m_t1```.
- 2. Drop the managed table using ```drop table m_t1```.
- 3. Disable ACID using the configs suggested.
- 4. Create m_t1 again and copy data from the external table by using the CTAS command ```create table m_t1 as select * from e_t1```.
- 5. Drop external table using ```drop table e_t1```.
-
-Make sure all managed tables are converted to external tables and dropped before disabling ACID. Also, compare the schema and data after each step to avoid any discrepancy.
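
Putting those steps together, a minimal sketch of the conversion sequence for a single table, reusing the placeholder names `m_t1` and `e_t1` from the steps above:

```hql
-- 1. Copy the managed table's data into a new external table
create external table e_t1 as select * from m_t1;
-- 2. Drop the original managed table
drop table m_t1;
-- 3. Disable ACID using the configs suggested, then recreate the managed table from the copy
create table m_t1 as select * from e_t1;
-- 4. Drop the temporary external table
drop table e_t1;
```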
-
-## Create Hive external table with 755 permission
-
-This issue can be resolved by either of the following two options:
-
-1. Manually set the folder permission to 757 or 777 to allow the hive user to write to the directory.
-
-2. Change the "Hive Authorization Manager" from ```org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider``` to ```org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly```.
-
-MetaStoreAuthzAPIAuthorizerEmbedOnly effectively disables security checks because the Hive metastore isn't embedded in HDInsight 4.0. However, it may bring other potential issues. Exercise caution when using this option.
-
-## Permission errors in Hive job after upgrading to HDInsight 4.0
-
-* In HDInsight 4.0, all cluster shapes with Hive components are configured with a new authorization provider:
-
- `org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider`
-
-* HDFS file permissions should be assigned to the hive user for the file being accessed. The error message provides the details needed to resolve the issue.
-
-* You can also switch to the ```MetaStoreAuthzAPIAuthorizerEmbedOnly``` provider that's used in HDInsight 3.6 Hive clusters.
-
- `org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly`
-
- :::image type="content" source="./media/apache-hive-40-migration-guide/hive-job-permission-errors.png" alt-text="Set authorization to MetaStoreAuthzAPIAuthorizerEmbedOnly." border="true":::
-
-## Unable to query table with OpenCSVSerde
-
-Reading data from a `csv` format table may throw an exception like:
-```text
-MetaException(message:java.lang.UnsupportedOperationException: Storage schema reading not supported)
-```
-
-Workaround:
-
-* Add the configuration `metastore.storage.schema.reader.impl`=`org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader` in `Custom hive-site` via the Ambari UI
-
-* Restart all stale Hive services
-
-## Next steps
-
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
Clone the sample code for this guide by using the following commands. The sample
git clone https://github.com/Azure-Samples/open-liberty-on-aro.git cd open-liberty-on-aro export BASE_DIR=$PWD
-git checkout 20240223
+git checkout 20240920
cd 3-integration/connect-db/mssql ```
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
Title: List of Metrics Collected in Azure Operator Nexus.
-description: List of metrics collected in Azure Operator Nexus.
+description: List of metrics emitted by the resource types in Azure Operator Nexus and observed in Azure Monitor.
This section provides the list of metrics collected from the different component
- [***kubelet***](#kubelet) - [***Kubernetes Node***](#kubernetes-node) - [***Kubernetes Pod***](#kubernetes-pod)
- - [***Kuberenetes StatefulSet***](#kuberenetes-statefulset)
+ - [***Kubernetes StatefulSet***](#kubernetes-statefulset)
- [***Virtual Machine Orchestrator***](#virtual-machine-orchestrator)
+ - [***sharedVolume***](#sharedvolume)
+ - [***Platform Cluster***](#platform-cluster)
- [Baremetal servers](#baremetal-servers) - [***node metrics***](#node-metrics) - [Storage Appliances](#storage-appliances) - [***pure storage***](#pure-storage)
+ - [Cluster Management](#cluster-management)
+ - [***cluster management metrics***](#cluster-management-metrics)
- [Network Fabric Metrics](#network-fabric-metrics) - [Network Devices Metrics](#network-devices-metrics)
All these metrics for Nexus Cluster are collected and delivered to Azure Monitor
| Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|ApiserverAuditRequestsRejectedTotal|API Server|API Server Audit Requests Rejected Total|Count|Counter of API server requests rejected due to an error in the audit logging backend|Component,Pod Name|
-|ApiserverClientCertificateExpirationSecondsSum|API Server|API Server Client Certificate Expiration Seconds Sum (Preview)|Seconds|Sum of API server client certificate expiration (seconds)|Component,Pod Name|
-|ApiserverStorageDataKeyGenerationFailuresTotal|API Server|API Server Storage Data Key Generation Failures Total|Count|Total number of operations that failed Data Encryption Key (DEK) generation|Component,Pod Name|
-|ApiserverTlsHandshakeErrorsTotal|API Server|API Server TLS Handshake Errors Total (Preview)|Count|Number of requests dropped with 'TLS handshake' error|Component,Pod Name|
+|ApiserverAuditRequestsRejectedTotal|API Server|APIServer Audit Requests Rejected Total|Count|Counter of API server requests rejected due to an error in the audit logging backend. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name|
+|ApiserverClientCertificateExpirationSecondsSum|API Server|APIServer Clnt Cert Exp Sec Sum (Preview)|Seconds|Sum of API server client certificate expiration. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name|
+|ApiserverStorageDataKeyGenerationFailuresTotal|API Server|APIServer Storage Data Key Gen Fail|Count|Total number of operations that failed Data Encryption Key (DEK) generation. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name|
+|ApiserverTlsHandshakeErrorsTotal|API Server|APIServer TLS Handshake Err (Preview)|Count|Number of requests dropped with 'TLS handshake' error. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name|
### ***calico-felix*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|FelixActiveLocalEndpoints|Calico|Felix Active Local Endpoints|Count|Number of active endpoints on this host|Host|
-|FelixClusterNumHostEndpoints|Calico|Felix Cluster Num Host Endpoints|Count|Total number of host endpoints cluster-wide|Host|
-|FelixClusterNumHosts|Calico|Felix Cluster Number of Hosts|Count|Total number of Calico hosts in the cluster|Host|
-|FelixClusterNumWorkloadEndpoints|Calico|Felix Cluster Number of Workload Endpoints|Count|Total number of workload endpoints cluster-wide|Host|
-|FelixIntDataplaneFailures|Calico|Felix Interface Dataplane Failures|Count|Number of times dataplane updates failed and will be retried|Host|
-|FelixIpsetErrors|Calico|Felix Ipset Errors|Count|Number of 'ipset' command failures|Host|
-|FelixIpsetsCalico|Calico|Felix Ipsets Calico|Count|Number of active Calico IP sets|Host|
-|FelixIptablesRestoreErrors|Calico|Felix IP Tables Restore Errors|Count|Number of 'iptables-restore' errors|Host|
-|FelixIptablesSaveErrors|Calico|Felix IP Tables Save Errors|Count|Number of 'iptables-save' errors|Host|
-|FelixResyncState|Calico|Felix Resync State|Unspecified|Current datastore state|Host|
-|FelixResyncsStarted|Calico|Felix Resyncs Started|Count|Number of times Felix has started resyncing with the datastore|Host|
+|FelixActiveLocalEndpoints|Calico|Felix Active Local Endpoints|Count|Number of active endpoints on this host. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixClusterNumHostEndpoints|Calico|Felix Cluster Num Host Endpoints|Count|Total number of host endpoints cluster-wide. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixClusterNumHosts|Calico|Felix Cluster Number of Hosts|Count|Total number of Calico hosts in the cluster. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixClusterNumWorkloadEndpoints|Calico|Felix Cluster Nmbr Workload Endpoints|Count|Total number of workload endpoints cluster-wide. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixIntDataplaneFailures|Calico|Felix Interface Dataplane Failures|Count|Number of times dataplane updates failed and will be retried. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixIpsetErrors|Calico|Felix Ipset Errors|Count|Number of 'ipset' command failures. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixIpsetsCalico|Calico|Felix Ipsets Calico|Count|Number of active Calico IP sets. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixIptablesRestoreErrors|Calico|Felix IP Tables Restore Errors|Count|Number of 'iptables-restore' errors. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixIptablesSaveErrors|Calico|Felix IP Tables Save Errors|Count|Number of 'iptables-save' errors. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixResyncState|Calico|Felix Resync State|Unspecified|Current datastore state. In the absence of data, this metric will retain the most recent value emitted|Host|
+|FelixResyncsStarted|Calico|Felix Resyncs Started|Count|Number of times Felix has started resyncing with the datastore. In the absence of data, this metric will retain the most recent value emitted|Host|
### ***calico-typha*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|TyphaClientLatencySecsCount|Calico|Typha Client Latency Secs|Count|Per-client latency. I.e. how far behind the current state each client is.|Pod Name|
-|TyphaConnectionsAccepted|Calico|Typha Connections Accepted|Count|Total number of connections accepted over time|Pod Name|
-|TyphaConnectionsDropped|Calico|Typha Connections Dropped|Count|Total number of connections dropped due to rebalancing|Pod Name|
-|TyphaPingLatencyCount|Calico|Typha Ping Latency|Count|Round-trip ping/pong latency to client. Typha's protocol includes a regular ping/pong keepalive to verify that the connection is still up|Pod Name|
+|TyphaClientLatencySecsCount|Calico|Typha Client Latency Secs|Count|Per-client latency: how far behind the current state each client is. In the absence of data, this metric will retain the most recent value emitted|Pod Name|
+|TyphaConnectionsAccepted|Calico|Typha Connections Accepted|Count|Total number of connections accepted over time. In the absence of data, this metric will retain the most recent value emitted|Pod Name|
+|TyphaConnectionsDropped|Calico|Typha Connections Dropped|Count|Total number of connections dropped due to rebalancing. In the absence of data, this metric will retain the most recent value emitted|Pod Name|
+|TyphaPingLatencyCount|Calico|Typha Ping Latency|Count|Round-trip ping latency to client. Typha's protocol includes a regular ping keepalive to verify that the connection is still up. In the absence of data, this metric will retain the most recent value emitted|Pod Name|
### ***Kubernetes Containers*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|ContainerFsIoTimeSecondsTotal|Container|Container FS I/O Time Seconds Total (Preview)|Seconds|Time taken for container Input/Output (I/O) operations|Device,Host|
-|ContainerMemoryFailcnt|Container|Container Memory Fail Count|Count|Number of times a container's memory usage limit is hit|Container,Host,Namespace,Pod|
-|ContainerMemoryUsageBytes|Container|Container Memory Usage Bytes|Bytes|Current memory usage, including all memory regardless of when it was accessed|Container,Host,Namespace,Pod|
-|ContainerNetworkReceiveErrorsTotal|Container|Container Network Receive Errors Total (Preview)|Count|Number of errors encountered while receiving bytes over the network|Interface,Namespace,Pod|
-|ContainerNetworkTransmitErrorsTotal|Container|Container Network Transmit Errors Total (Preview)|Count|Count of errors that happened while transmitting|Interface,Namespace,Pod|
-|ContainerScrapeError|Container|Container Scrape Error|Unspecified|Indicates whether there was an error while getting container metrics|Host|
-|ContainerTasksState|Container|Container Tasks State|Count|Number of tasks or processes in a given state (sleeping, running, stopped, uninterruptible, or waiting) in a container|Container,Host,Namespace,Pod,State|
+|ContainerFsIoTimeSecondsTotal|Container|Container FS I/O Time Seconds Total (Preview)|Seconds|Time taken for container Input/Output (I/O) operations. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|ContainerMemoryFailcnt|Container|Container Memory Fail Count|Count|Number of times a container's memory usage limit is hit. In the absence of data, this metric will default to 0|Container, Host, Namespace, Pod|
+|ContainerMemoryUsageBytes|Container|Container Memory Usage Bytes|Bytes|Current memory usage, including all memory regardless of when it was accessed. In the absence of data, this metric will default to 0|Container, Host, Namespace, Pod|
+|ContainerNetworkReceiveErrorsTotal|Container|Container Net Rx Errors (Preview)|Count|Number of errors encountered while receiving bytes over the network. In the absence of data, this metric will retain the most recent value emitted|Interface, Namespace, Pod|
+|ContainerNetworkTransmitErrorsTotal|Container|Container Net Tx Err Total (Preview)|Count|Number of errors encountered while transmitting bytes over the network. In the absence of data, this metric will retain the most recent value emitted|Interface, Namespace, Pod|
+|ContainerScrapeError|Container|Container Scrape Error|Unspecified|Indicates whether there was an error while getting container metrics. In the absence of data, this metric will retain the most recent value emitted|Host|
+|ContainerTasksState|Container|Container Tasks State|Count|Number of tasks or processes in a given state (sleeping, running, stopped, uninterruptible, or waiting) in a container. In the absence of data, this metric will retain the most recent value emitted|Container, Host, Namespace, Pod, State|
### ***Kubernetes Controllers*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|ControllerRuntimeReconcileErrorsTotal|Controller|Controller Reconcile Errors Total|Count|Total number of reconciliation errors per controller|Controller,Namespace,Pod Name|
-|ControllerRuntimeReconcileTotal|Controller|Controller Reconciliations Total|Count|Total number of reconciliations per controller|Controller,Namespace,Pod Name|
+|ControllerRuntimeReconcileErrorsTotal|Controller|Controller Reconcile Errors Total (Deprecated)|Count|Total number of reconciliation errors per controller. In the absence of data, this metric will retain the most recent value emitted|Controller, Namespace, Pod Name|
+|ControllerRuntimeReconcileErrorsTotal2|Controller|Controller Reconcile Errors Total|Count|Total number of reconciliation errors per controller. In the absence of data, this metric will retain the most recent value emitted|Controller, Namespace, Pod Name|
+|ControllerRuntimeReconcileTotal|Controller|Controller Reconciliations Total (Deprecated)|Count|Total number of reconciliations per controller. In the absence of data, this metric will retain the most recent value emitted|Controller, Namespace, Pod Name|
+|ControllerRuntimeReconcileTotal2|Controller|Controller Reconciliations Total|Count|Total number of reconciliations per controller. In the absence of data, this metric will retain the most recent value emitted|Controller, Namespace, Pod Name|
### ***coreDNS*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|CorednsDnsRequestsTotal|CoreDNS|CoreDNS Requests Total|Count|Total number of DNS requests|Family,Pod Name,Proto,Server,Type|
-|CorednsDnsResponsesTotal|CoreDNS|CoreDNS Responses Total|Count|Total number of DNS responses|Pod Name,Server,Rcode|
-|CorednsForwardHealthcheckBrokenTotal|CoreDNS|CoreDNS Forward Healthcheck Broken Total (Preview)|Count|Total number of times all upstreams are unhealthy|Pod Name,Namespace|
-|CorednsForwardMaxConcurrentRejectsTotal|CoreDNS|CoreDNS Forward Max Concurrent Rejects Total (Preview)|Count|Total number of rejected queries because concurrent queries were at the maximum limit|Pod Name,Namespace|
-|CorednsHealthRequestFailuresTotal|CoreDNS|CoreDNS Health Request Failures Total|Count|The number of times the self health check failed|Pod Name|
-|CorednsPanicsTotal|CoreDNS|CoreDNS Panics Total|Count|Total number of panics|Pod Name|
-|CorednsReloadFailedTotal|CoreDNS|CoreDNS Reload Failed Total|Count|Total number of failed reload attempts|Pod Name,Namespace|
+|CorednsDnsRequestsTotal|CoreDNS|CoreDNS Requests Total|Count|Total number of DNS requests received by a CoreDNS server. In the absence of data, this metric will retain the most recent value emitted|Family, Pod Name, Proto, Server, Type|
+|CorednsDnsResponsesTotal|CoreDNS|CoreDNS Responses Total|Count|Total number of DNS responses sent by a CoreDNS server. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Server, Rcode|
+|CorednsForwardHealthcheckBrokenTotal|CoreDNS|CoreDNS Frwd Hlthchk Broken (Deprecated)|Count|Total number of times the health checks for all upstream DNS servers has failed. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Namespace|
+|CorednsForwardHealthcheckBrokenTotal2|CoreDNS|CoreDNS Frwd Hlthchk Broken|Count|Total number of times the health checks for all upstream DNS servers has failed. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Namespace|
+|CorednsForwardMaxConcurrentRejectsTotal|CoreDNS|CoreDNS Frwd Max Concurrent Rejects (Deprecated)|Count|Total number of rejected queries due to concurrent queries reaching the maximum limit. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Namespace|
+|CorednsForwardMaxConcurrentRejectsTotal2|CoreDNS|CoreDNS Frwd Max Concurrent Rejects|Count|Total number of rejected queries due to concurrent queries reaching the maximum limit. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Namespace|
+|CorednsHealthRequestFailuresTotal|CoreDNS|CoreDNS Health Request Failures Total|Count|The number of times the self health check failed for a CoreDNS server. In the absence of data, this metric will retain the most recent value emitted|Pod Name|
+|CorednsPanicsTotal|CoreDNS|CoreDNS Panics Total|Count|Total number of unexpected errors (panics) that have occurred in a CoreDNS server. In the absence of data, this metric will retain the most recent value emitted|Pod Name|
+|CorednsReloadFailedTotal|CoreDNS|CoreDNS Reload Failed Total (Deprecated)|Count|Total number of failed attempts CoreDNS has had when reloading its configuration. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Namespace|
+|CorednsReloadFailedTotal2|CoreDNS|CoreDNS Reload Failed Total|Count|Total number of failed attempts CoreDNS has had when reloading its configuration. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Namespace|
### ***Kubernetes Daemonset*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubeDaemonsetStatusCurrentNumberScheduled|Daemonset|Daemonsets Current Number Scheduled|Count|Number of daemonsets currently scheduled|Daemonset,Namespace|
-|KubeDaemonsetStatusDesiredNumberScheduled|Daemonset|Daemonsets Desired Number Scheduled|Count|Number of daemonsets desired scheduled|Daemonset,Namespace|
+|KubeDaemonsetStatusCurrentNumberScheduled|Daemonset|Daemonsets Current Number Scheduled|Count|Number of daemonsets currently scheduled. In the absence of data, this metric will default to 0|Daemonset, Namespace|
+|KubeDaemonsetStatusDesiredNumberScheduled|Daemonset|Daemonsets Desired Number Scheduled|Count|Number of daemonsets desired scheduled. In the absence of data, this metric will default to 0|Daemonset, Namespace|
+|KubeDaemonsetStatusNotScheduled|Daemonset|Daemonsets Not Scheduled|Count|Number of daemonsets not scheduled. In the absence of data, this metric will default to 0|Daemonset, Namespace|
### ***Kubernetes Deployment*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubeDeploymentStatusReplicasAvailable|Deployment|Deployment Replicas Available|Count|Number of deployment replicas available|Deployment,Namespace|
-|KubeDeploymentStatusReplicasReady|Deployment|Deployment Replicas Ready|Count|Number of deployment replicas ready|Deployment,Namespace|
+|KubeDeploymentStatusReplicasAvailable|Deployment|Deployment Replicas Available|Count|Number of deployment replicas available. In the absence of data, this metric will default to 0|Deployment, Namespace|
+|KubeDeploymentStatusReplicasUnavailable|Deployment|Deployment Replicas Unavailable|Count|Number of deployment replicas unavailable. In the absence of data, this metric will default to 0|Deployment, Namespace|
+|KubeDeploymentStatusReplicasReady|Deployment|Deployment Replicas Ready|Count|Number of deployment replicas ready. In the absence of data, this metric will default to 0|Deployment, Namespace|
+|KubeDeploymentStatusReplicasAvailablePercent|Deployment|Deployment Replicas Available Percent|Percent|Percentage of deployment replicas available. In the absence of data, this metric will default to 0|Deployment, Namespace|
### ***etcD*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|EtcdDiskBackendCommitDurationSecondsSum|Etcd|Etcd Disk Backend Commit Duration Seconds Sum|Seconds|The latency distribution of commits called by the backend|Component,Pod Name,Tier|
-|EtcdDiskWalFsyncDurationSecondsSum|Etcd|Etcd Disk WAL Fsync Duration Seconds Sum|Seconds|The sum of latency distributions of 'fsync' called by the write-ahead log (WAL)|Component,Pod Name,Tier|
-|EtcdServerHealthFailures|Etcd|Etcd Server Health Failures|Count|Total server health failures|Pod Name|
-|EtcdServerIsLeader|Etcd|Etcd Server Is Leader|Unspecified|Whether or not this member is a leader; 1 if is, 0 otherwise|Component,Pod Name,Tier|
-|EtcdServerIsLearner|Etcd|Etcd Server Is Learner|Unspecified|Whether or not this member is a learner; 1 if is, 0 otherwise|Component,Pod Name,Tier|
-|EtcdServerLeaderChangesSeenTotal|Etcd|Etcd Server Leader Changes Seen Total|Count|The number of leader changes seen|Component,Pod Name,Tier|
-|EtcdServerProposalsAppliedTotal|Etcd|Etcd Server Proposals Applied Total|Count|The total number of consensus proposals applied|Component,Pod Name,Tier|
-|EtcdServerProposalsCommittedTotal|Etcd|Etcd Server Proposals Committed Total|Count|The total number of consensus proposals committed|Component,Pod Name,Tier|
-|EtcdServerProposalsFailedTotal|Etcd|Etcd Server Proposals Failed Total|Count|The total number of failed proposals|Component,Pod Name,Tier|
-|EtcdServerSlowApplyTotal|Etcd|Etcd Server Slow Apply Total (Preview)|Count|The total number of slow apply requests|Pod Name,Tier|
+|EtcdDBUtilizationPercent|Etcd|Etcd Database Utilization Percentage|Percent|The percentage of the Etcd Database utilized. In the absence of data, this metric will default to 0|Pod Name|
+|EtcdDiskBackendCommitDurationSecondsSum|Etcd|Etcd Disk Backend Commit Duration Sec|Seconds|The cumulative sum of the time taken for etcd to commit transactions to its backend disk storage. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name, Tier|
+|EtcdDiskWalFsyncDurationSecondsSum|Etcd|Etcd Disk WAL Fsync Duration Sec|Seconds|The cumulative sum of the time that etcd has spent performing fsync operations on the write-ahead log (WAL) file. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name, Tier|
+|EtcdServerHealthFailures|Etcd|Etcd Server Health Failures|Count|Total number of failed health checks performed on an etcd server. In the absence of data, this metric will default to 0|Pod Name|
+|EtcdServerIsLeader|Etcd|Etcd Server Is Leader|Unspecified|Indicates whether an etcd server is the leader of the cluster; 1, 0 otherwise. In the absence of data, this metric will default to 0|Component, Pod Name, Tier|
+|EtcdServerIsLearner|Etcd|Etcd Server Is Learner|Unspecified|Indicates whether an etcd server is a learner within the cluster; 1, 0 otherwise. In the absence of data, this metric will default to 0|Component, Pod Name, Tier|
+|EtcdServerLeaderChangesSeenTotal|Etcd|Etcd Server Leader Changes Seen Total|Count|The number of leader changes seen within the etcd cluster. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name, Tier|
+|EtcdServerProposalsAppliedTotal|Etcd|Etcd Server Proposals Applied Total|Count|The total number of consensus proposals that have been successfully applied. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name, Tier|
+|EtcdServerProposalsCommittedTotal|Etcd|Etcd Server Proposals Committed Total|Count|The total number of consensus proposals that have been committed. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name, Tier|
+|EtcdServerProposalsFailedTotal|Etcd|Etcd Server Proposals Failed Total|Count|The total number of failed consensus proposals. In the absence of data, this metric will retain the most recent value emitted|Component, Pod Name, Tier|
+|EtcdServerSlowApplyTotal|Etcd|Etcd Server Slow Apply Total (Preview)|Count|The total number of etcd apply requests that took longer than expected. In the absence of data, this metric will retain the most recent value emitted|Pod Name, Tier|
### ***Kubernetes Job*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubeJobStatusActive|Job|Jobs Active|Count|Number of jobs active|Job,Namespace|
-|KubeJobStatusFailed|Job|Jobs Failed|Count|Number and reason of jobs failed|Job,Namespace,Reason|
-|KubeJobStatusSucceeded|Job|Jobs Succeeded|Count|Number of jobs succeeded|Job,Namespace|
+|KubeJobStatusActive|Job|Jobs Active|Count|Number of jobs active. In the absence of data, this metric will default to 0|Job, Namespace|
+|KubeJobStatusFailedReasons|Job|Jobs Failed|Count|Number and reason of jobs failed. In the absence of data, this metric will default to 0|Job, Namespace, Reason|
+|KubeJobStatusSucceeded|Job|Jobs Succeeded|Count|Number of jobs succeeded. In the absence of data, this metric will default to 0|Job, Namespace|
### ***kubelet*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubeletRunningContainers|Kubelet|Kubelet Running Containers|Count|Number of containers currently running|Container State,Host|
-|KubeletRunningPods|Kubelet|Kubelet Running Pods|Count|Number of pods running on the node|Host|
-|KubeletRuntimeOperationsErrorsTotal|Kubelet|Kubelet Runtime Operations Errors Total|Count|Cumulative number of runtime operation errors by operation type|Host,Operation Type|
-|KubeletStartedPodsErrorsTotal|Kubelet|Kubelet Started Pods Errors Total|Count|Cumulative number of errors when starting pods|Host|
-|KubeletVolumeStatsAvailableBytes|Kubelet|Volume Available Bytes|Bytes|Number of available bytes in the volume|Host,Namespace,Persistent Volume Claim|
-|KubeletVolumeStatsCapacityBytes|Kubelet|Volume Capacity Bytes|Bytes|Capacity (in bytes) of the volume|Host,Namespace,Persistent Volume Claim|
-|KubeletVolumeStatsUsedBytes|Kubelet|Volume Used Bytes|Bytes|Number of used bytes in the volume|Host,Namespace,Persistent Volume Claim|
+|KubeletRunningContainers|Kubelet|Kubelet Running Containers|Count|Number of containers currently running. In the absence of data, this metric will retain the most recent value emitted|Container State, Host|
+|KubeletRunningPods|Kubelet|Kubelet Running Pods|Count|Number of pods running on the node. In the absence of data, this metric will retain the most recent value emitted|Host|
+|KubeletRuntimeOperationsErrorsTotal|Kubelet|Kubelet Runtime Operations Errors Total|Count|Cumulative number of runtime operation errors by operation type. In the absence of data, this metric will retain the most recent value emitted|Host, Operation Type|
+|KubeletStartedPodsErrorsTotal|Kubelet|Kubelet Started Pods Errors Total|Count|Cumulative number of errors when starting pods. In the absence of data, this metric will retain the most recent value emitted|Host|
+|KubeletVolumeStatsAvailableBytes|Kubelet|Volume Available Bytes|Bytes|Number of available bytes in the volume. In the absence of data, this metric will retain the most recent value emitted|Host, Namespace, Persistent Volume Claim|
+|KubeletVolumeStatsCapacityBytes|Kubelet|Volume Capacity Bytes|Bytes|Capacity of the volume. In the absence of data, this metric will retain the most recent value emitted|Host, Namespace, Persistent Volume Claim|
+|KubeletVolumeStatsUsedBytes|Kubelet|Volume Used Bytes|Bytes|Number of used bytes in the volume. In the absence of data, this metric will retain the most recent value emitted|Host, Namespace, Persistent Volume Claim|
### ***Kubernetes Node*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubeNodeStatusAllocatable|Node|Node Resources Allocatable|Count|Node resources allocatable for pods|Node,resource,unit|
-|KubeNodeStatusCapacity|Node|Node Resources Capacity|Count|Total amount of node resources available|Node,resource,unit|
-|KubeNodeStatusCondition|Node|Node Status Condition|Count|The condition of a node|Condition,Node,Status|
+|KubeNodeStatusAllocatable|Node|Node Resources Allocatable|Count|Node resources allocatable for pods. In the absence of data, this metric will retain the most recent value emitted|Node, resource, unit|
+|KubeNodeStatusCapacity|Node|Node Resources Capacity|Count|Total amount of node resources available. In the absence of data, this metric will retain the most recent value emitted|Node, resource, unit|
+|KubeNodeStatusCondition|Node|Node Status Condition|Count|The condition of a node. In the absence of data, this metric will retain the most recent value emitted|Condition, Node, Status|
### ***Kubernetes Pod*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubePodContainerResourceLimits|Pod|Container Resources Limits|Count|The container's resources limits|Container,Namespace,Node,Pod,Resource,Unit|
-|KubePodContainerResourceRequests|Pod|Container Resources Requests|Count|The container's resources requested|Container,Namespace,Node,Pod,Resource,Unit|
-|KubePodContainerStateStarted|Pod|Container State Started (Preview)|Count|Unix timestamp start time of a container|Container,Namespace,Pod|
-|KubePodContainerStatusLastTerminatedReason|Pod|Container Status Last Terminated Reason|Count|The reason of a container's last terminated status|Container,Namespace,Pod,Reason|
-|KubePodContainerStatusReady|Pod|Container Status Ready|Count|Describes whether the container's readiness check succeeded|Container,Namespace,Pod|
-|KubePodContainerStatusRestartsTotal|Pod|Container Restarts|Count|The number of container restarts|Container,Namespace,Pod|
-|KubePodContainerStatusRunning|Pod|Container Status Running|Count|The number of containers with a status of 'running'|Container,Namespace,Pod|
-|KubePodContainerStatusTerminated|Pod|Container Status Terminated|Count|The number of containers with a status of 'terminated'|Container,Namespace,Pod|
-|KubePodContainerStatusTerminatedReason|Pod|Container Status Terminated Reason|Count|The number and reason of containers with a status of 'terminated'|Container,Namespace,Pod,Reason|
-|KubePodContainerStatusWaiting|Pod|Container Status Waiting|Count|The number of containers with a status of 'waiting'|Container,Namespace,Pod|
-|KubePodContainerStatusWaitingReason|Pod|Container Status Waiting Reason|Count|The number and reason of containers with a status of 'waiting'|Container,Namespace,Pod,Reason|
-|KubePodDeletionTimestamp|Pod|Pod Deletion Timestamp (Preview)|Count|The timestamp of the pod's deletion|Namespace,Pod|
-|KubePodInitContainerStatusReady|Pod|Pod Init Container Ready|Count|The number of ready pod init containers|Namespace,Container,Pod|
-|KubePodInitContainerStatusRestartsTotal|Pod|Pod Init Container Restarts|Count|The number of pod init containers restarts|Namespace,Container,Pod|
-|KubePodInitContainerStatusRunning|Pod|Pod Init Container Running|Count|The number of running pod init containers|Namespace,Container,Pod|
-|KubePodInitContainerStatusTerminated|Pod|Pod Init Container Terminated|Count|The number of terminated pod init containers|Namespace,Container,Pod|
-|KubePodInitContainerStatusTerminatedReason|Pod|Pod Init Container Terminated Reason|Count|The number of pod init containers with terminated reason|Namespace,Container,Pod,Reason|
-|KubePodInitContainerStatusWaiting|Pod|Pod Init Container Waiting|Count|The number of pod init containers waiting|Namespace,Container,Pod|
-|KubePodInitContainerStatusWaitingReason|Pod|Pod Init Container Waiting Reason|Count|The reason the pod init container is waiting|Namespace,Container,Pod,Reason|
-|KubePodStatusPhase|Pod|Pod Status Phase|Count|The pod status phase|Namespace,Pod,Phase|
-|KubePodStatusReady|Pod|Pod Ready State|Count|Signifies if the pod is in ready state|Namespace,Pod|
-|KubePodStatusReason|Pod|Pod Status Reason|Count|The pod status reason <Evicted\|NodeAffinity\|NodeLost\|Shutdown\|UnexpectedAdmissionError>|Namespace,Pod,Reason|
-
-### ***Kuberenetes StatefulSet***
+|KubePodContainerResourceLimits|Pod|Container Resources Limits|Count|The container's resources limits. In the absence of data, this metric will default to 0|Container, Namespace, Node, Pod, Resource, Unit|
+|KubePodContainerResourceRequests|Pod|Container Resources Requests|Count|The container's resources requested. In the absence of data, this metric will default to 0|Container, Namespace, Node, Pod, Resource, Unit|
+|KubePodContainerStateStarted|Pod|Container State Started (Preview)|Count|Unix timestamp start time of a container. In the absence of data, this metric will default to 0|Container, Namespace, Pod|
+|KubePodContainerStatusLastTerminatedReason|Pod|Container Status Last Terminated Reason|Count|The reason of a container's last terminated status. In the absence of data, this metric will default to 0|Container, Namespace, Pod, Reason|
+|KubePodContainerStatusReady|Pod|Container Status Ready|Count|Describes whether the container's readiness check succeeded. In the absence of data, this metric will default to 0|Container, Namespace, Pod|
+|KubePodContainerStatusRestartsTotal|Pod|Container Restarts|Count|The number of container restarts. In the absence of data, this metric will retain the most recent value emitted|Container, Namespace, Pod|
+|KubePodContainerStatusRunning|Pod|Container Status Running|Count|The number of containers with a status of 'running'. In the absence of data, this metric will default to 0|Container, Namespace, Pod|
+|KubePodContainerStatusTerminated|Pod|Container Status Terminated|Count|The number of containers with a status of 'terminated'. In the absence of data, this metric will default to 0|Container, Namespace, Pod|
+|KubePodContainerStatusTerminatedReasons|Pod|Container Status Terminated Reason|Count|The number and reason of containers with a status of 'terminated'. In the absence of data, this metric will default to 0|Container, Namespace, Pod, Reason|
+|KubePodContainerStatusWaiting|Pod|Container Status Waiting|Count|The number of containers with a status of 'waiting'. In the absence of data, this metric will default to 0|Container, Namespace, Pod|
+|KubePodContainerStatusWaitingReason|Pod|Container Status Waiting Reason|Count|The number and reason of containers with a status of 'waiting'. In the absence of data, this metric will default to 0|Container, Namespace, Pod, Reason|
+|KubePodDeletionTimestamp|Pod|Pod Deletion Timestamp (Preview)|Count|The timestamp of the pod's deletion. In the absence of data, this metric will default to 0|Namespace, Pod|
+|KubePodInitContainerStatusReady|Pod|Pod Init Container Ready|Count|The number of ready pod init containers. In the absence of data, this metric will default to 0|Namespace, Container, Pod|
+|KubePodInitContainerStatusRestartsTotal|Pod|Pod Init Container Restarts|Count|The number of pod init containers restarts. In the absence of data, this metric will retain the most recent value emitted|Namespace, Container, Pod|
+|KubePodInitContainerStatusRunning|Pod|Pod Init Container Running|Count|The number of running pod init containers. In the absence of data, this metric will default to 0|Namespace, Container, Pod|
+|KubePodInitContainerStatusTerminated|Pod|Pod Init Container Terminated|Count|The number of terminated pod init containers. In the absence of data, this metric will default to 0|Namespace, Container, Pod|
+|KubePodInitContainerStatusTerminatedReason|Pod|Pod Init Container Terminated Reason|Count|The number of pod init containers with terminated reason. In the absence of data, this metric will default to 0|Namespace, Container, Pod, Reason|
+|KubePodInitContainerStatusWaiting|Pod|Pod Init Container Waiting|Count|The number of pod init containers waiting. In the absence of data, this metric will default to 0|Namespace, Container, Pod|
+|KubePodInitContainerStatusWaitingReason|Pod|Pod Init Container Waiting Reason|Count|The reason the pod init container is waiting. In the absence of data, this metric will default to 0|Namespace, Container, Pod, Reason|
+|KubePodStatusPhases|Pod|Pod Status Phase|Count|The pod status phase. In the absence of data, this metric will default to 0|Namespace, Pod, Phase|
+|KubePodStatusReady|Pod|Pod Ready State|Count|Signifies if the pod is in ready state. In the absence of data, this metric will default to 0|Namespace, Pod, Condition|
+|KubePodStatusReason|Pod|Pod Status Reason|Count|The pod status reason <Evicted/NodeAffinity/NodeLost/Shutdown/UnexpectedAdmissionError>. In the absence of data, this metric will default to 0|Namespace, Pod, Reason|
+
+### ***Kubernetes StatefulSet***
| Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubeStatefulsetReplicas|Statefulset|Statefulset Desired Replicas Number|Count|The desired number of statefulset replicas|Namespace,Statefulset|
-|KubeStatefulsetStatusReplicas|Statefulset|Statefulset Replicas Number|Count|The number of replicas per statefulset|Namespace,Statefulset|
+|KubeStatefulsetReplicas|Statefulset|Statefulset Desired Replicas Number|Count|The desired number of statefulset replicas. In the absence of data, this metric will default to 0|Namespace, Statefulset|
+|KubeStatefulsetStatusReplicas|Statefulset|Statefulset Replicas Number|Count|The number of replicas per statefulset. In the absence of data, this metric will default to 0|Namespace, Statefulset|
+|KubeStatefulsetStatusReplicaDifference|Statefulset|Statefulset Replicas Difference|Count|The difference between desired and current number of replicas per statefulset. In the absence of data, this metric will default to 0|Namespace, Statefulset|
### ***Virtual Machine Orchestrator*** | Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|KubevirtInfo|VMOrchestrator|Kubevirt Info|Unspecified|Kubevirt version information|Kube Version|
-|KubevirtVirtControllerLeading|VMOrchestrator|Kubevirt Virt Controller Leading|Unspecified|Indication for an operating virt-controller|Pod Name|
-|KubevirtVirtControllerReady|VMOrchestrator|Kubevirt Virt Controller Ready|Unspecified|Indication for a virt-controller that is ready to take the lead|Pod Name|
-|KubevirtVirtOperatorReady|VMOrchestrator|Kubevirt Virt Operator Ready|Unspecified|Indication for a virt operator being ready|Pod Name|
-|KubevirtVmiMemoryActualBalloonBytes|VMOrchestrator|Kubevirt VMI Memory Actual BalloonBytes|Bytes|Current balloon size (in bytes)|Name,Node|
-|KubevirtVmiMemoryAvailableBytes|VMOrchestrator|Kubevirt VMI Memory Available Bytes|Bytes|Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages|Name,Node|
-|KubevirtVmiMemorySwapInTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Memory Swap In Traffic Bytes Total|Bytes|The total amount of data read from swap space of the guest (in bytes)|Name,Node|
-|KubevirtVmiMemoryDomainBytesTotal|VMOrchestrator|Kubevirt VMI Memory Domain Bytes Total (Preview)|Bytes|The amount of memory (in bytes) allocated to the domain. The memory value in domain XML file|Node|
-|KubevirtVmiMemorySwapOutTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Memory Swap Out Traffic Bytes Total|Bytes|The total amount of memory written out to swap space of the guest (in bytes)|Name,Node|
-|KubevirtVmiMemoryUnusedBytes|VMOrchestrator|Kubevirt VMI Memory Unused Bytes|Bytes|The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free|Name,Node|
-|KubevirtVmiNetworkReceivePacketsTotal|VMOrchestrator|Kubevirt VMI Network Receive Packets Total|Bytes|Total network traffic received packets|Interface,Name,Node|
-|KubevirtVmiNetworkTransmitPacketsDroppedTotal|VMOrchestrator|Kubevirt VMI Network Transmit Packets Dropped Total|Bytes|The total number of transmit packets dropped on virtual NIC (vNIC) interfaces|Interface,Name,Node|
-|KubevirtVmiNetworkTransmitPacketsTotal|VMOrchestrator|Kubevirt VMI Network Transmit Packets Total|Bytes|Total network traffic transmitted packets|Interface,Name,Node|
-|KubevirtVmiOutdatedCount|VMOrchestrator|Kubevirt VMI Outdated Count|Count|Indication for the total number of VirtualMachineInstance (VMI) workloads that are not running within the most up-to-date version of the virt-launcher environment|Name|
-|KubevirtVmiPhaseCount|VMOrchestrator|Kubevirt VMI Phase Count|Count|Sum of VirtualMachineInstances (VMIs) per phase and node|Node,Phase,Workload|
-|KubevirtVmiStorageIopsReadTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Read Total|Count|Total number of Input/Output (I/O) read operations|Drive,Name,Node|
-|KubevirtVmiStorageIopsWriteTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Write Total|Count|Total number of Input/Output (I/O) write operations|Drive,Name,Node|
-|KubevirtVmiStorageReadTimesMsTotal|VMOrchestrator|Kubevirt VMI Storage Read Times Total (Preview)|Milliseconds|Total time in milliseconds (ms) spent on read operations|Drive,Name,Node|
-|KubevirtVmiStorageWriteTimesMsTotal|VMOrchestrator|Kubevirt VMI Storage Write Times Total (Preview)|Milliseconds|Total time in milliseconds (ms) spent on write operations|Drive,Name,Node|
-|NcVmiCpuAffinity|Network Cloud|CPU Pinning Map (Preview)|Count|Pinning map of virtual CPUs (vCPUs) to CPUs|CPU,NUMA Node,VMI Namespace,VMI Node,VMI Name|
+|KubevirtInfo|VMOrchestrator|Kubevirt Info|Unspecified|Kubevirt version information. In the absence of data, this metric will retain the most recent value emitted|Kube Version|
+|KubevirtVirtControllerLeading|VMOrchestrator|Kubevirt Virt Controller Leading|Unspecified|Indication of whether the virt-controller is leading. The value is 1 if the virt-controller is leading, 0 otherwise. In the absence of data, this metric will default to 0|Pod Name|
+|KubevirtVirtControllerReady|VMOrchestrator|Kubevirt Virt Controller Ready|Unspecified|Indication for a virt-controller that is ready to take the lead. The value is 1 if the virt-controller is ready, 0 otherwise. In the absence of data, this metric will default to 0|Pod Name|
+|KubevirtVirtOperatorReady|VMOrchestrator|Kubevirt Virt Operator Ready|Unspecified|Indication for a virt operator being ready. The value is 1 if the virt operator is ready, 0 otherwise. In the absence of data, this metric will default to 0|Pod Name|
+|KubevirtVmiMemoryActualBalloonBytes|VMOrchestrator|Kubevirt VMI Memory Balloon Bytes|Bytes|Current balloon size. In the absence of data, this metric will default to 0|Name, Node|
+|KubevirtVmiMemoryAvailableBytes|VMOrchestrator|Kubevirt VMI Memory Available Bytes|Bytes|Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages. In the absence of data, this metric will default to 0|Name, Node|
+|KubevirtVmiMemorySwapInTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Mem Swp In Traffic Bytes|Bytes|The total amount of data read from swap space of the guest. In the absence of data, this metric will retain the most recent value emitted|Name, Node|
+|KubevirtVmiMemoryDomainBytesTotal|VMOrchestrator|Kubevirt VMI Mem Dom Bytes (Preview)|Bytes|The amount of memory allocated to the domain. The memory value in the domain XML file. In the absence of data, this metric will retain the most recent value emitted|Node|
+|KubevirtVmiMemorySwapOutTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Mem Swp Out Traffic Bytes|Bytes|The total amount of memory written out to swap space of the guest. In the absence of data, this metric will retain the most recent value emitted|Name, Node|
+|KubevirtVmiMemoryUnusedBytes|VMOrchestrator|Kubevirt VMI Memory Unused Bytes|Bytes|The amount of memory left unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free. In the absence of data, this metric will default to 0|Name, Node|
+|KubevirtVmiMemoryUsage|VMOrchestrator|Kubevirt VMI Memory Usage|Percent|The amount of memory used as a percentage. In the absence of data, this metric will default to 0|Name, Node|
+|KubevirtVmiNetworkReceivePacketsTotal|VMOrchestrator|Kubevirt VMI Net Rx Packets|Bytes|Total network traffic received packets. In the absence of data, this metric will retain the most recent value emitted|Interface, Name, Node|
+|KubevirtVmiNetworkTransmitPacketsDroppedTotal|VMOrchestrator|Kubevirt VMI Net Tx Packets Drop|Bytes|The total number of transmit packets dropped on virtual NIC (vNIC) interfaces. In the absence of data, this metric will retain the most recent value emitted|Interface, Name, Node|
+|KubevirtVmiNetworkTransmitPacketsTotal|VMOrchestrator|Kubevirt VMI Net Tx Packets Total|Bytes|Total network traffic transmitted packets. In the absence of data, this metric will retain the most recent value emitted|Interface, Name, Node|
+|KubevirtVmiOutdatedInstances|VMOrchestrator|Kubevirt VMI Outdated Count|Count|Indication for the total number of VirtualMachineInstance (VMI) workloads that are not running within the most up-to-date version of the virt-launcher environment. In the absence of data, this metric will default to 0||
+|KubevirtVmiPhaseCount|VMOrchestrator|Kubevirt VMI Phase Count|Count|Sum of Virtual Machine Instances (VMIs) per phase and node. Phase can be one of the following values: Pending, Scheduling, Scheduled, Running, Succeeded, Failed, Unknown. In the absence of data, this metric will retain the most recent value emitted|Node, Phase, Workload|
+|KubevirtVmiStorageIopsReadTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Read Total|Count|Total number of Input/Output (I/O) read operations. In the absence of data, this metric will retain the most recent value emitted|Drive, Name, Node|
+|KubevirtVmiStorageIopsWriteTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Write Total|Count|Total number of Input/Output (I/O) write operations. In the absence of data, this metric will retain the most recent value emitted|Drive, Name, Node|
+|KubevirtVmiStorageReadTimesMsTotal|VMOrchestrator|Kubevirt VMI Storage Read Times Total (Preview)|Milliseconds|Total time spent on read operations from storage. In the absence of data, this metric will retain the most recent value emitted|Drive, Name, Node|
+|KubevirtVmiStorageWriteTimesMsTotal|VMOrchestrator|Kubevirt VMI Storage Write Times Total (Preview)|Milliseconds|Total time spent on write operations to storage. In the absence of data, this metric will retain the most recent value emitted|Drive, Name, Node|
+|NcVmiCpuAffinity|Network Cloud|CPU Pinning Map (Preview)|Count|Pinning map of virtual CPUs (vCPUs) to CPUs. In the absence of data, this metric will retain the most recent value emitted|CPU, NUMA Node, VMI Namespace, VMI Node, VMI Name|
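
The Kubevirt metrics in this table are emitted as Azure Monitor platform metrics, so they can be read programmatically. The following is a minimal sketch, assuming the `azure-identity` and `azure-monitor-query` Python packages; the cluster resource ID is a placeholder, and the metric and dimension names are taken from the table above (the exact dimension key casing returned by the service should be verified).

```python
# Minimal sketch: read KubevirtVmiPhaseCount split by the Phase dimension.
# Assumes azure-identity and azure-monitor-query; the resource ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

cluster_id = "<nexus-kubernetes-cluster-resource-id>"  # placeholder

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    cluster_id,
    metric_names=["KubevirtVmiPhaseCount"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
    filter="Phase eq '*'",  # split the series per phase (Pending, Running, Failed, ...)
)

for metric in result.metrics:
    for series in metric.timeseries:
        # Dimension key casing is assumed; verify against the values returned by the service.
        phase = series.metadata_values.get("Phase") or series.metadata_values.get("phase", "unknown")
        latest = series.data[-1] if series.data else None
        if latest is not None:
            print(f"{metric.name} [{phase}] @ {latest.timestamp}: {latest.average}")
```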
+
+### ***sharedVolume***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|NfsVolumeSizeBytes|Deployment|NFS Volume Size Bytes|Bytes|Total Size of the NFS volume. In the absence of data, this metric will retain the most recent value emitted|CSN Name|
+|NfsVolumeUsedBytes|Deployment|NFS Volume Used Bytes|Bytes|Size of NFS volume used. In the absence of data, this metric will retain the most recent value emitted|CSN Name|
+
+### ***Platform Cluster***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|NexusClusterHeartbeatConnectionStatus|Nexus Cluster|Cluster Heartbeat Connection Status|Count|Indicates whether the Cluster is having issues communicating with the Cluster Manager. The value of the metric is 0 when the connection is healthy and 1 when it is unhealthy. In the absence of data, this metric will retain the most recent value emitted|Reason|
+|NexusClusterMachineGroupUpgrade|Nexus Cluster|Cluster Machine Group Upgrade|Count|Tracks Cluster Machine Group Upgrades performed. The value of the metric is 0 when the result is successful and 1 for all other results. In the absence of data, this metric will retain the most recent value emitted|Machine Group, Result, Upgraded From Version, Upgraded To Version|
## Baremetal servers

Baremetal server metrics are collected and delivered to Azure Monitor per minute; metrics in the HardwareMonitor category are collected every 5 minutes. A query sketch follows the table below.

| Metric | Category | Display Name | Unit | Description | Dimensions |
|-|:-:|:--:|:-:|:--:|:--:|
-HostDiskReadCompleted|Disk|Host Disk Reads Completed|Count|Disk reads completed by node|Device,Host|
-HostDiskReadSeconds|Disk|Host Disk Read Seconds (Preview)|Seconds|Disk read time by node|Device,Host|
-HostDiskWriteCompleted|Disk|Total Number of Writes Completed|Count|Disk writes completed by node|Device,Host|
-HostDiskWriteSeconds|Disk|Host Disk Write Seconds (Preview)|Seconds|Disk write time by node|Device,Host|
-HostDmiInfo|System|Host DMI Info (Preview)|Unspecified|Host Desktop Management Interface (DMI) environment information|Bios Date,Bios Release,Bios Vendor,Bios Version,Board Asset Tag,Board Name,Board Vendor,Board Version,Chassis Asset Tag,Chassis Vendor,Chassis Version,Host,Product Family,Product Name,Product Sku,Product Uuid,Product Version,System Vendor|
-HostEntropyAvailableBits|Filesystem|Host Entropy Available Bits (Preview)|Count|Available bits in node entropy|Host|
-HostFilesystemAvailBytes|Filesystem|Host Filesystem Available Bytes|Count|Available filesystem size by node|Device,FS Type,Host,Mount Point|
-HostFilesystemDeviceError|Filesystem|Host Filesystem Device Errors|Count|Indicates if there was a problem getting information for the filesystem|Device,FS Type,Host,Mount Point|
-HostFilesystemFiles|Filesystem|Host Filesystem Files|Count|Total number of permitted inodes|Device,FS Type,Host,Mount Point|
-HostFilesystemFilesFree|Filesystem|Total Number of Free inodes|Count|Total number of free inodes|Device,FS Type,Host,Mount Point|
-HostFilesystemReadOnly|Filesystem|Host Filesystem Read Only|Unspecified|Indicates if the filesystem is readonly|Device,FS Type,Host,Mount Point|
-HostFilesystemSizeBytes|Filesystem|Host Filesystem Size In Bytes|Count|Filesystem size by node|Device,FS Type,Host,Mount Point|
-HostHwmonTempCelsius|HardwareMonitor|Host Hardware Monitor Temp|Count|Hardware monitor for temperature (celsius)|Chip,Host,Sensor|
-HostHwmonTempMax|HardwareMonitor|Host Hardware Monitor Temp Max|Count|Hardware monitor for maximum temperature (celsius)|Chip,Host,Sensor|
-HostLoad1|Memory|Average Load In 1 Minute (Preview)|Count|1 minute load average|Host|
-HostLoad15|Memory|Average Load In 15 Minutes (Preview)|Count|15 minute load average|Host|
-HostLoad5|Memory|Average load in 5 minutes (Preview)|Count|5 minute load average|Host|
-HostMemAvailBytes|Memory|Host Memory Available Bytes|Count|Available memory in bytes by node|Host|
-HostMemHWCorruptedBytes|Memory|Total Amount of Memory In Corrupted Pages|Count|Corrupted bytes in hardware by node|Host|
-HostMemTotalBytes|Memory|Host Memory Total Bytes|Bytes|Total bytes of memory by node|Host|
-HostSpecificCPUUtilization|CPU|Host Specific CPU Utilization (Preview)|Seconds|A counter metric that counts the number of seconds the CPU has been running in a particular mode|Cpu,Host,Mode|
-IdracPowerCapacityWatts|HardwareMonitor|IDRAC Power Capacity Watts|Unspecified|Power Capacity|Host,PSU|
-IdracPowerInputWatts|HardwareMonitor|IDRAC Power Input Watts|Unspecified|Power Input|Host,PSU|
-IdracPowerOn|HardwareMonitor|IDRAC Power On|Unspecified|IDRAC Power On Status|Host|
-IdracPowerOutputWatts|HardwareMonitor|IDRAC Power Output Watts|Unspecified|Power Output|Host,PSU|
-IdracSensorsTemperature|HardwareMonitor|IDRAC Sensors Temperature|Unspecified|IDRAC sensor temperature|Host,Name,Units|
-NcNodeNetworkReceiveErrsTotal|Network|Network Device Receive Errors|Count|Total network device errors received|Hostname,Interface Name|
-NcNodeNetworkTransmitErrsTotal|Network|Network Device Transmit Errors|Count|Total network device errors transmitted|Hostname,Interface Name|
-NcTotalCpusPerNuma|CPU|Total CPUs Available to Nexus per NUMA|Count|Total number of CPUs available to Nexus per NUMA|Hostname,NUMA Node|
-NcTotalWorkloadCpusAllocatedPerNuma|CPU|CPUs per NUMA Allocated for Nexus Kubernetes|Count|Total number of CPUs per NUMA allocated for Nexus Kubernetes and Tenant Workloads|Hostname,NUMA Node|
-NcTotalWorkloadCpusAvailablePerNuma|CPU|CPUs per NUMA Available for Nexus Kubernetes|Count|Total number of CPUs per NUMA available to Nexus Kubernetes and Tenant Workloads|Hostname,NUMA Node|
-NodeBondingActive|Network|Node Bonding Active (Preview)|Count|Number of active interfaces per bonding interface|Primary|
-NodeMemHugePagesFree|Memory|Node Memory Huge Pages Free (Preview)|Bytes|NUMA hugepages free by node|Host,Node|
-NodeMemHugePagesTotal|Memory|Node Memory Huge Pages Total|Bytes|NUMA huge pages total by node|Host,Node|
-NodeMemNumaFree|Memory|Node Memory NUMA (Free Memory)|Bytes|NUMA memory free|Name,Host|
-NodeMemNumaShem|Memory|Node Memory NUMA (Shared Memory)|Bytes|NUMA shared memory|Host,Node|
-NodeMemNumaUsed|Memory|Node Memory NUMA (Used Memory)|Bytes|NUMA memory used|Host,Node|
-NodeNetworkCarrierChanges|Network|Node Network Carrier Changes|Count|Node network carrier changes|Device,Host|
-NodeNetworkMtuBytes|Network|Node Network Maximum Transmission Unit Bytes|Bytes|Node network Maximum Transmission Unit (mtu_bytes) value of /sys/class/net/\<iface\>|Device,Host|
-NodeNetworkReceiveMulticastTotal|Network|Node Network Received Multicast Total|Bytes|Network device statistic receive_multicast|Device,Host|
-NodeNetworkReceivePackets|Network|Node Network Received Packets|Count|Network device statistic receive_packets|Device,Host|
-NodeNetworkSpeedBytes|Network|Node Network Speed Bytes|Bytes|speed_bytes value of /sys/class/net/\<iface\>|Device,Host|
-NodeNetworkTransmitPackets|Network|Node Network Transmited Packets|Count|Network device statistic transmit_packets|Device,Host|
-NodeNetworkUp|Network|Node Network Up|Count|Value is 1 if operstate is 'up', 0 otherwise.|Device,Host|
-NodeNvmeInfo|Disk|Node NVMe Info (Preview)|Count|Non-numeric data from /sys/class/nvme/\<device\>, value is always 1. Provides firmware, model, state and serial for a device|Device,State|
-NodeOsInfo|System|Node OS Info|Count|Node OS information|Host,Name,Version|
-NodeTimexMaxErrorSeconds|System|Node Timex Max Error Seconds|Seconds|Maximum time error between the local system and reference clock|Host|
-NodeTimexOffsetSeconds|System|Node Timex Offset Seconds|Seconds|Time offset in between the local system and reference clock|Host|
-NodeTimexSyncStatus|System|Node Timex Sync Status|Count|Is clock synchronized to a reliable server (1 = yes, 0 = no)|Host|
-NodeVmOomKill|VM Stat|Node VM Out Of Memory Kill|Count|Information in /proc/vmstat pertaining to the field oom_kill|Host|
-NodeVmstatPswpIn|VM Stat|Node VM PSWP In|Count|Information in /proc/vmstat pertaining to the field pswpin|Host|
-NodeVmstatPswpout|VM Stat|Node VM PSWP Out|Count|Information in /proc/vmstat pertaining to the field pswpout|Host|
+|HostBootTimeSeconds|System|Host Boot Seconds (Preview)|Seconds|Unix timestamp of the last boot of the host. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostDiskReadCompleted|Disk|Host Disk Reads Completed|Count|Total number of disk reads completed successfully. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|HostDiskReadSeconds|Disk|Host Disk Read Seconds (Preview)|Seconds|Total time spent reading from disk. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|HostDiskWriteCompleted|Disk|Total Number of Writes Completed|Count|Total number of disk writes completed successfully. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|HostDiskWriteSeconds|Disk|Host Disk Write Seconds (Preview)|Seconds|Total time spent writing to disk. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|HostDmiInformation|System|Host DMI Info|Unspecified|Environment information about the Desktop Management Interface (DMI), value is always 1. Includes labels about the system's manufacturer, model, version, serial number and UUID. In the absence of data, this metric will default to 0.|Bios Date, Bios Release, Bios Vendor, Bios Version, Board Name, Board Vendor, Board Version, Host, Product Family, Product Name, Product Sku, System Vendor|
+|HostEntropyAvailableBits|Filesystem|Host Entropy Available Bits (Preview)|Count|Available node entropy, in bits. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostFilesystemAvailBytes|Filesystem|Host Filesystem Available Bytes|Count|Bytes in the filesystem on nodes which are available to non-root users. In the absence of data, this metric will default to 0|Device, FS Type, Host, Mount Point|
+|HostFilesystemDeviceError|Filesystem|Host Filesystem Device Errors|Count|Indicates if there was an error getting information from the filesystem. Value is 1 if there was an error, 0 otherwise. In the absence of data, this metric will default to 0|Device, FS Type, Host, Mount Point|
+|HostFilesystemFiles|Filesystem|Host Filesystem Files|Count|Total number of permitted inodes (file nodes). In the absence of data, this metric will default to 0.|Device, FS Type, Host, Mount Point|
+|HostFilesystemFilesFree|Filesystem|Total Number of Free inodes|Count|Total number of free (not occupied or reserved) inodes (file nodes). In the absence of data, this metric will default to 0.|Device, FS Type, Host, Mount Point|
+|HostFilesystemFilesPercentFree|Filesystem|Host Filesystem Files Percent Free|Percent|Percentage of permitted inodes which are free to be used. In the absence of data, this metric will default to 0.|Device, FS Type, Host, Mount Point|
+|HostFilesystemReadOnly|Filesystem|Host Filesystem Read Only|Unspecified|Indication of whether a filesystem is readonly or not. Value is 1 if readonly, 0 otherwise. In the absence of data, this metric will retain the most recent value emitted|Device, FS Type, Host, Mount Point|
+|HostFilesystemSizeBytes|Filesystem|Host Filesystem Size In Bytes|Count|Host filesystem size in bytes. In the absence of data, this metric will retain the most recent value emitted|Device, FS Type, Host, Mount Point|
+|HostFilesystemUsage|Filesystem|Host Filesystem Usage In Percentage|Percent|Percentage of filesystem which is in use. In the absence of data, this metric will default to 0.|Device, FS Type, Host, Mount Point|
+|HostHwmonTempCelsius|HardwareMonitor|Host Hardware Monitor Temp|Count|Temperature (in Celsius) of different hardware components. In the absence of data, this metric will retain the most recent value emitted|Chip, Host, Sensor|
+|HostHwmonTempMax|HardwareMonitor|Host Hardware Monitor Temp Max|Count|Maximum temperature (in Celsius) of different hardware components. In the absence of data, this metric will retain the most recent value emitted|Chip, Host, Sensor|
+|HostInletTemp|HardwareMonitor|Host Hardware Inlet Temp|Count|Inlet temperature for hardware nodes (in Celsius). In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostLoad1|Memory|Average Load In 1 Minute (Preview)|Count|1-minute load average of the system, as a measure of the system activity over the last minute, expressed as a fractional number (values >1.0 may indicate overload). In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostLoad15|Memory|Average Load In 15 Minutes (Preview)|Count|15-minute load average of the system, as a measure of the system activity over the last 15 minutes, expressed as a fractional number (values >1.0 may indicate overload). In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostLoad5|Memory|Average load in 5 minutes (Preview)|Count|5-minute load average of the system, as a measure of the system activity over the last 5 minutes, expressed as a fractional number (values >1.0 may indicate overload). In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemAvailBytes|Memory|Host Memory Available Bytes|Count|Memory available, in bytes. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemHWCorruptedBytes|Memory|Total Memory In Corrupted Pages|Count|Memory corrupted due to hardware issues, in bytes. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemHugePagesFree|Memory|Memory Free Huge Pages|Bytes|Total memory in huge pages that is free. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemHugePagesTotal|Memory|Memory Total Huge Pages|Bytes|Total memory in huge pages on nodes. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemTotalBytes|Memory|Host Memory Total Bytes|Bytes|Total amount of physical memory on nodes. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemSwapFreeBytes|Memory|Host Memory Swap Free Bytes|Bytes|Total swap memory that is free. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemSwapTotalBytes|Memory|Host Memory Swap Total Bytes|Bytes|Total amount of swap memory. In the absence of data, this metric will retain the most recent value emitted|Host|
+|HostMemSwapAvailableSpace|Memory|Host Memory Swap Available Percentage|Percent|Percentage of swap memory that is available. In the absence of data, this metric will default to 0|Host|
+|HostSpecificCPUUtilization|CPU|Host Specific CPU Utilization (Preview)|Seconds|A counter metric quantifying the CPU time that each CPU has spent in different states (or 'modes'). In the absence of data, this metric will retain the most recent value emitted|Cpu, Host, Mode|
+|IdracPowerCapacityWatts|HardwareMonitor|IDRAC Power Capacity Watts|Unspecified|IDRAC Power Capacity in Watts. In the absence of data, this metric will default to 0|Host, PSU|
+|IdracPowerInputWatts|HardwareMonitor|IDRAC Power Input Watts|Unspecified|IDRAC Power Input in Watts. In the absence of data, this metric will default to 0|Host, PSU|
+|IdracPowerOn|HardwareMonitor|IDRAC Power On|Unspecified|IDRAC Power On Status. Value is 1 if on, 0 otherwise. In the absence of data, this metric will default to 0|Host|
+|IdracPowerOutputWatts|HardwareMonitor|IDRAC Power Output Watts|Unspecified|IDRAC Power Output in Watts. In the absence of data, this metric will default to 0|Host, PSU|
+|IdracSensorsTemperature|HardwareMonitor|IDRAC Sensors Temperature|Unspecified|IDRAC sensor temperature (in Celsius). In the absence of data, this metric will retain the most recent value emitted|Host, Name, Units|
+|NcNodeNetworkReceiveErrsTotal|Network|Network Device Receive Errors|Count|Total number of errors encountered by network devices while receiving data. In the absence of data, this metric will retain the most recent value emitted|Hostname, Interface Name|
+|NcNodeNetworkTransmitErrsTotal|Network|Network Device Transmit Errors|Count|Total number of errors encountered by network devices while transmitting data. In the absence of data, this metric will retain the most recent value emitted|Hostname, Interface Name|
+|NcTotalCpusPerNuma|CPU|Total CPUs Available to Nexus per NUMA|Count|Total number of CPUs available to Nexus per NUMA. In the absence of data, this metric will retain the most recent value emitted|Hostname, NUMA Node|
+|NcTotalWorkloadCpusAllocatedPerNuma|CPU|CPUs per NUMA Allocated for Nexus K8s|Count|Total number of CPUs per NUMA allocated for Nexus Kubernetes and Tenant Workloads. In the absence of data, this metric will retain the most recent value emitted|Hostname, NUMA Node|
+|NcTotalWorkloadCpusAvailablePerNuma|CPU|CPUs per NUMA Available for Nexus K8s|Count|Total number of CPUs per NUMA available for use by Nexus Kubernetes and Tenant Workloads. In the absence of data, this metric will retain the most recent value emitted|Hostname, NUMA Node|
+|NodeBondingActive|Network|Node Bonding Active (Preview)|Count|Total number of active network interfaces per bonding interface. In the absence of data, this metric will retain the most recent value emitted|Primary|
+|NodeBondingAggregateIdMismatch|Network|Node Bond Aggregate ID Mismatch|Count|Total number of mismatches between the expected and actual link-aggregator ids. In the absence of data, this metric will retain the most recent value emitted|Host, Interface Name|
+|NodeMemHugePagesFree|Memory|Node Memory Huge Pages Free (Preview)|Bytes|Free memory in NUMA huge pages. In the absence of data, this metric will retain the most recent value emitted|Host, Node|
+|NodeMemHugePagesTotal|Memory|Node Memory Huge Pages Total|Bytes|Total memory in NUMA huge pages. In the absence of data, this metric will retain the most recent value emitted|Host, Node|
+|NodeMemNumaFree|Memory|Node Memory NUMA (Free Memory)|Bytes|Total amount of free NUMA memory. In the absence of data, this metric will retain the most recent value emitted|Host, Node|
+|NodeMemNumaShem|Memory|Node Memory NUMA (Shared Memory)|Bytes|Total amount of NUMA memory that is shared between nodes. In the absence of data, this metric will retain the most recent value emitted|Host, Node|
+|NodeMemNumaUsed|Memory|Node Memory NUMA (Used Memory)|Bytes|Total amount of used NUMA memory. In the absence of data, this metric will retain the most recent value emitted|Host, Node|
+|NodeNetworkCarrierChanges|Network|Node Network Carrier Changes|Count|Total number of network carrier changes. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|NodeNetworkMtuBytes|Network|Node Network Max Transmission|Bytes|Maximum Transmission Unit (MTU) for node network interfaces. In the absence of data, this metric will default to 0|Device, Host|
+|NodeNetworkReceiveMulticastTotal|Network|Node Network Received Multicast Total|Bytes|Total amount of multicast traffic received by the node network interfaces. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|NodeNetworkReceivePackets|Network|Node Network Received Packets|Count|Total number of packets received by the node network interfaces. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|NodeNetworkSpeedBytes|Network|Node Network Speed Bytes|Bytes|Current network speed, in bytes per second, for the node network interfaces. In the absence of data, this metric will default to 0|Device, Host|
+|NodeNetworkTransmitPackets|Network|Node Network Transmited Packets|Count|Total number of packets transmitted by the node network interfaces. In the absence of data, this metric will retain the most recent value emitted|Device, Host|
+|NodeNetworkStatus|Network|Node Network Up|Count|Indicates the operational status of the nodes network interfaces. Value is 1 if operational state is 'up', 0 otherwise. In the absence of data, this metric will default to 0|Device, Host|
+|NodeNtpLeap|System|Node NTP Leap|Count|The raw leap flag value of the local NTP daemon. This indicates the status of leap seconds. Value is 0 if no adjustment is needed, 1 to add a leap second, 2 to delete a leap second, and 3 if the clock is unsynchronized. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeNtpStratum|System|Node NTP Stratum|Count|The stratum level of the local NTP daemon. This indicates the distance from the reference clock, with lower numbers representing closer proximity and higher accuracy. Values range from 1 (directly connected to reference clock) to 15 (further away), with 16 indicating the clock is unsynchronized. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeNtpSanity|System|Node NTP Sanity|Count|The aggregate health of the local NTP daemon. This includes checks for stratum, leap flag, freshness, root distance, and causality violations. Value is 1 if all checks pass, 0 otherwise. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeNtpRtt|System|Node NTP RTT|Seconds|Round-trip time from node exporter collector to local NTP daemon. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeNvmeInfo|Disk|Node NVMe Info (Preview)|Count|Non-Volatile Memory express (NVMe) information, value is always 1. Provides state for a device. In the absence of data, this metric will default to 0|Device, State|
+|NodeOsInfo|System|Node OS Info|Count|Node OS information, value is always 1. Provides name and version for a device. In the absence of data, this metric will retain the most recent value emitted|Host, Name, Version|
+|NodeProcessState|System|Node Processes State|Count|The number of processes in each state. The possible states are D (UNINTERRUPTABLE_SLEEP), R (RUNNING & RUNNABLE), S (INTERRUPTABLE_SLEEP), T (STOPPED) and Z (ZOMBIE). In the absence of data, this metric will default to 0|Host, State|
+|NodeTimexMaxErrorSeconds|System|Node Timex Max Error Seconds|Seconds|Maximum time error between the local system and reference clock. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeTimexOffsetSeconds|System|Node Timex Offset Seconds|Seconds|Time offset between the local system and reference clock. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeTimexSyncStatus|System|Node Timex Sync Status|Count|Indicates whether the clock is synchronized to a reliable server. Value is 1 if synchronized, 0 if unsynchronized. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeVmOomKill|VM Stat|Node VM Out Of Memory Kill|Count|Total number of times a process has been terminated due to a critical lack of memory. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeVmstatPswpIn|VM Stat|Node VM PSWP In|Count|Total number of pages swapped in from disk to memory on the node. In the absence of data, this metric will retain the most recent value emitted|Host|
+|NodeVmstatPswpout|VM Stat|Node VM PSWP Out|Count|Total number of pages swapped out from memory to disk on the node. In the absence of data, this metric will retain the most recent value emitted|Host|
+|CpuUsageGuest|CPU|CPU Guest Usage|Count|Percentage of time that the CPU is running a virtual CPU for a guest operating system. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageGuestNice|CPU|CPU Guest Nice Usage|Count|Percentage of time that the CPU is running low-priority processes on a virtual CPU for a guest operating system. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageIdle|CPU|CPU Usage Idle|Count|Percentage of time that the CPU is idle. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageIowait|CPU|CPU Usage IO Wait|Count|Percentage of time that the CPU is waiting for I/O operations to complete. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageIrq|CPU|CPU Usage IRQ|Count|Percentage of time that the CPU is servicing hardware interrupt requests. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageNice|CPU|CPU Usage Nice|Count|Percentage of time that the CPU is in user mode, running low-priority processes. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageSoftirq|CPU|CPU Usage Soft IRQ|Count|Percentage of time that the CPU is servicing software interrupt requests. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageSteal|CPU|CPU Usage Steal|Count|Percentage of time that the CPU is in stolen time, which is time spent in other operating systems in a virtualized environment. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageSystem|CPU|CPU Usage System|Count|Percentage of time that the CPU is in system mode. In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageTotal|CPU|CPU Usage Total|Percent|Percentage of time that the CPU is active (not idle). In the absence of data, this metric will default to 0|Host, CPU|
+|CpuUsageUser|CPU|CPU Usage User|Count|Percentage of time that the CPU is in user mode. In the absence of data, this metric will default to 0|Host, CPU|
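
Because the metrics in the preceding table arrive every minute (every 5 minutes for the HardwareMonitor category), a short lookback window is usually sufficient when polling them. Below is the hedged query sketch referenced above, assuming the `azure-identity` and `azure-monitor-query` packages; the bare metal machine resource ID is a placeholder and the metric names come from the table.

```python
# Minimal sketch: poll host memory metrics for a bare metal machine resource.
# Assumes azure-identity and azure-monitor-query; the resource ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

bare_metal_machine_id = "<bare-metal-machine-resource-id>"  # placeholder

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    bare_metal_machine_id,
    metric_names=["HostMemAvailBytes", "HostMemTotalBytes"],
    timespan=timedelta(minutes=30),
    granularity=timedelta(minutes=1),  # metrics in this table arrive every minute
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in result.metrics:
    points = [p.average for series in metric.timeseries for p in series.data if p.average is not None]
    if points:
        print(f"{metric.name}: latest average {points[-1]:.0f} bytes over the last 30 minutes")
```
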
## Storage Appliances

All the metrics from the storage appliance are collected and delivered to Azure Monitor per minute.

| Metric | Category | Display Name | Unit | Description | Dimensions |
|-|:-:|:--:|:-:|:--:|:--:|
-PurefaAlertsTotal|Storage Array|Nexus Storage Alerts Total|Count|Number of alert events|Severity|
-PurefaArrayPerformanceAvgBlockBytes|Storage Array|Nexus Storage Array Avg Block Bytes|Bytes|Average block size|Dimension|
-PurefaArrayPerformanceBandwidthBytes|Storage Array|Nexus Storage Array Bandwidth Bytes|Bytes|Array throughput in bytes per second|Dimension|
-PurefaArrayPerformanceIOPS|Storage Array|Nexus Storage Array IOPS|Count|Storage array IOPS|Dimension|
-PurefaArrayPerformanceLatencyUsec|Storage Array|Nexus Storage Array Latency (Microseconds)|MilliSeconds|Storage array latency in microseconds|Dimension|
-PurefaArrayPerformanceQdepth|Storage Array|Nexus Storage Array Queue Depth|Bytes|Storage array queue depth|
-PurefaArraySpaceCapacityBytes|Storage Array|Nexus Storage Array Capacity Bytes|Bytes|Storage array overall space capacity|
-PurefaArraySpaceDatareductionRatio|Storage Array|Nexus Storage Array Space Datareduction Ratio|Percent|Storage array overall data reduction|
-PurefaArraySpaceProvisionedBytes|Storage Array|Nexus Storage Array Space Provisioned Bytes|Bytes|Storage array overall provisioned space|
-PurefaArraySpaceUsedBytes|Storage Array|Nexus Storage Array Space Used Bytes|Bytes|Storage Array overall used space|Dimension|
-PurefaHardwareComponentHealth|Storage Array|Nexus Storage Hardware Component Health|Count|Storage array hardware component health status|Component,Controller,Index|
-PurefaHardwarePowerVolts|Storage Array|Nexus Storage Hardware Power Volts|Unspecified|Storage array hardware power supply voltage|Power Supply|
-PurefaHardwareTemperatureCelsius|Storage Array|Nexus Storage Hardware Temperature Celsius|Unspecified|Storage array hardware temperature sensors|Controller,Sensor|
-PurefaHostPerformanceBandwidthBytes|Host|Nexus Storage Host Bandwidth Bytes|Bytes|Storage array host bandwidth in bytes per second|Dimension,Host|
-PurefaHostPerformanceIOPS|Host|Nexus Storage Host IOPS|Count|Storage array host IOPS|Dimension,Host|
-PurefaHostPerformanceLatencyUsec|Host|Nexus Storage Host Latency (Microseconds)|MilliSeconds|Storage array host latency in microseconds|Dimension,Host|
-PurefaHostSpaceBytes|Host|Nexus Storage Host Space Bytes|Bytes|Storage array host space in bytes|Dimension,Host|
-PurefaHostSpaceDatareductionRatio|Host|Nexus Storage Host Space Datareduction Ratio|Percent|Storage array host volumes data reduction ratio|Host|
-PurefaHostSpaceSizeBytes|Host|Nexus Storage Host Space Size Bytes|Bytes|Storage array host volumes size|Host|
-PurefaInfo|Storage Array|Nexus Storage Info (Preview)|Unspecified|Storage array system information|Array Name|
-PurefaVolumePerformanceIOPS|Volume|Nexus Storage Volume Performance IOPS|Count|Storage array volume IOPS|Dimension,Volume|
-PurefaVolumePerformanceLatencyUsec|Volume|Nexus Storage Volume Performance Latency (Microseconds)|MilliSeconds|Storage array volume latency in microseconds|Dimension,Volume|
-PurefaVolumePerformanceThroughputBytes|Volume|Nexus Storage Volume Performance Throughput Bytes|Bytes|Storage array volume throughput|Dimension,Volume|
-PurefaVolumeSpaceBytes|Volume|Nexus Storage Volume Space Bytes|Bytes|Storage array volume space in bytes|Dimension,Volume|
-PurefaVolumeSpaceDatareductionRatio|Volume|Nexus Storage Volume Space Datareduction Ratio|Percent|Storage array overall data reduction|Volume|
-PurefaVolumeSpaceSizeBytes|Volume|Nexus Storage Volume Space Size Bytes|Bytes|Storage array volumes size|Volume|
+|PurefaAlertsOpen|Storage Array|Nexus Storage Alerts Open|Count|Number of open alert events. In the absence of data, this metric will retain the most recent value emitted|Code, Component Type, Issue, Severity, Summary|
+|PurefaAlertsTotal|Storage Array|Nexus Storage Alerts Total (Deprecated)|Count|Deprecated - Total number of alerts generated by the pure storage array. In the absence of data, this metric will retain the most recent value emitted|Severity|
+|PurefaArrayPerformanceAverageBytes|Storage Array|Nexus Storage Array Performance Average Bytes|Bytes|The average operations size by dimension, where dimension can be mirrored_write_bytes_per_sec, read_bytes_per_sec or write_bytes_per_sec. In the absence of data, this metric will retain the most recent value emitted|Dimension|
+|PurefaArrayPerformanceAvgBlockBytes|Storage Array|Nexus Storage Array Avg Block Bytes (Deprecated)|Bytes|Deprecated - The average block size processed by the array. In the absence of data, this metric will retain the most recent value emitted|Dimension|
+|PurefaArrayPerformanceBandwidthBytes|Storage Array|Nexus Storage Array Bandwidth Bytes|Bytes|Performance of the pure storage array bandwidth in bytes per second. In the absence of data, this metric will retain the most recent value emitted|Dimension|
+|PurefaArrayPerformanceIOPS|Storage Array|Nexus Storage Array IOPS (Deprecated)|Count|Deprecated - Performance of pure storage array based on input/output processing per second. In the absence of data, this metric will default to 0|Dimension|
+|PurefaArrayPerformanceLatencyMs|Storage Array|Nexus Storage Array Latency|MilliSeconds|Latency of the pure storage array. In the absence of data, this metric will default to 0|Dimension|
+|PurefaArrayPerformanceQdepth|Storage Array|Nexus Storage Array Queue Depth (Deprecated)|Bytes|Deprecated - Queue depth of the pure storage array. In the absence of data, this metric will retain the most recent value emitted||
+|PurefaArrayPerformanceQueueDepthOps|Storage Array|Nexus Storage Array Performance Queue Depth Operations|Count|The array queue depth size by number of operations. In the absence of data, this metric will retain the most recent value emitted||
+|PurefaArrayPerformanceThroughputIops|Storage Array|Nexus Storage Array Performance Throughput Iops|Count|The array throughput in operations per second. In the absence of data, this metric will retain the most recent value emitted|Dimension|
+|PurefaArraySpaceBytes|Storage Array|Nexus Storage Array Space Bytes|Bytes|The amount of array space. The space filter can be used to filter the space by type. In the absence of data, this metric will retain the most recent value emitted|Space|
+|PurefaArraySpaceCapacityBytes|Storage Array|Nexus Storage Array Capacity Bytes (Deprecated)|Bytes|Deprecated - Capacity of the pure storage array. In the absence of data, this metric will retain the most recent value emitted||
+|PurefaArraySpaceDataReductionRatioV2|Storage Array|Nexus Storage Array Space Data Reduction Ratio|Count|Storage array overall data reduction ratio. In the absence of data, this metric will retain the most recent value emitted||
+|PurefaArraySpaceDatareductionRatio|Storage Array|Nexus Storage Array Space DRR (Deprecated)|Count|Deprecated - Data reduction ratio of the pure storage array. In the absence of data, this metric will retain the most recent value emitted||
+|PurefaArraySpaceProvisionedBytes|Storage Array|Nexus Storage Array Space Prov (Deprecated)|Bytes|Deprecated - Overall space provisioned for the pure storage array. In the absence of data, this metric will retain the most recent value emitted||
+|PurefaArraySpaceUsage|Storage Array|Nexus Storage Array Space Used (Deprecated)|Percent|Deprecated - Space usage of the pure storage array in percentage. In the absence of data, this metric will default to 0||
+|PurefaArraySpaceUsedBytes|Storage Array|Nexus Storage Array Space Used Bytes (Deprecated)|Bytes|Deprecated - Overall space used for the pure storage array. In the absence of data, this metric will retain the most recent value emitted|Dimension|
+|PurefaHardwareChassisHealth|Storage Array|Nexus Storage HW Chassis Health (Deprecated)|Count|Deprecated - Denotes whether a hardware chassis of the pure storage array is healthy or not. A value of 0 means the chassis is healthy, a value of 1 means it is unhealthy. In the absence of data, this metric will default to 0||
+|PurefaHardwareControllerHealth|Storage Array|Nexus Storage HW Controller Health (Deprecated)|Count|Deprecated - Denotes whether a hardware controller of the pure storage array is healthy or not. A value of 0 means the controller is healthy, a value of 1 means it is unhealthy. In the absence of data, this metric will default to 0|Controller|
+|PurefaHardwarePowerVolt|Storage Array|Nexus Storage Hardware Power Volts (Deprecated)|Unspecified|Deprecated - Hardware power supply voltage of the pure storage array. In the absence of data, this metric will default to 0|Power Supply|
+|PurefaHardwareTemperatureCelsiusByChassis|Storage Array|Nexus Storage Hardware Temp Celsius By Chassis (Deprecated)|Unspecified|Deprecated - Hardware temperature, in Celsius, of the controller in the pure storage array. In the absence of data, this metric will retain the most recent value emitted|Sensor, Chassis|
+|PurefaHardwareTemperatureCelsiusByController|Storage Array|Nexus Storage Hardware Temp Celsius By Controller (Deprecated)|Unspecified|Deprecated - Hardware temperature, in Celsius, of the controller in the pure storage array. In the absence of data, this metric will retain the most recent value emitted|Controller, Sensor|
+|PurefaHostPerformanceBandwidthBytes|Host|Nexus Storage Host Bandwidth Bytes|Bytes|Host bandwidth of the pure storage array in bytes per second. In the absence of data, this metric will retain the most recent value emitted|Dimension, Host|
+|PurefaHostPerformanceIOPS|Host|Nexus Storage Host IOPS (Deprecated)|Count|Deprecated - Performance of pure storage array hosts in I/O operations per second. In the absence of data, this metric will default to 0|Dimension, Host|
+|PurefaHostPerformanceLatencyMs|Host|Nexus Storage Host Latency|MilliSeconds|Latency of the pure storage array hosts. In the absence of data, this metric will default to 0|Dimension, Host|
+|PurefaHostPerformanceThroughputIops|Host|Nexus Storage Host Performance Throughput Iops|Count|The host throughput in I/O operations per second. In the absence of data, this metric will retain the most recent value emitted|Host, Dimension|
+|PurefaHostSpaceBytes|Host|Nexus Storage Host Space Bytes (Deprecated)|Bytes|Deprecated - Storage array host space. In the absence of data, this metric will retain the most recent value emitted|Dimension, Host|
+|PurefaHostSpaceBytesV2|Host|Nexus Storage Host Space Bytes|Bytes|Storage array host space. In the absence of data, this metric will retain the most recent value emitted|Host, Space|
+|PurefaHostSpaceDataReductionRatioV2|Host|Nexus Storage Host Space Data Reduction Ratio|Count|Host space data reduction ratio. In the absence of data, this metric will retain the most recent value emitted|Host|
+|PurefaHostSpaceDatareductionRatio|Host|Nexus Storage Host Space DRR (Deprecated)|Count|Deprecated - Data reduction ratio of the pure storage array hosts. In the absence of data, this metric will retain the most recent value emitted|Host|
+|PurefaHostSpaceSizeBytes|Host|Nexus Storage Host Space Size Bytes (Deprecated)|Bytes|Deprecated - Pure storage array hosts space size. In the absence of data, this metric will retain the most recent value emitted|Host|
+|PurefaHostSpaceUsage|Host|Nexus Storage Host Space Used (Deprecated)|Percent|Deprecated - Space usage of the pure storage array's host. In the absence of data, this metric will default to 0|Host|
+|PurefaHwComponentStatus|Storage Array|Nexus Storage Hardware Component Status|Count|Status of a hardware component. In the absence of data, this metric will retain the most recent value emitted|Component Name, Component Type, Component Status|
+|PurefaHwComponentTemperatureCelsius|Storage Array|Nexus Storage Hardware Component Temperature Celsius|Unspecified|Temperature of the temperature sensor component in Celsius. In the absence of data, this metric will retain the most recent value emitted|Component Name|
+|PurefaHwComponentVoltageVolt|Storage Array|Nexus Storage Hardware Component Voltage|Unspecified|Voltage used by the power supply component in volts. In the absence of data, this metric will retain the most recent value emitted|Component Name|
+|PurefaInfo|Storage Array|Nexus Storage Info (Preview)|Unspecified|Storage array system information. In the absence of data, this metric will default to 0|Array Name|
+|PurefaNetworkInterfacePerformanceErrors|Storage Array|Nexus Storage Network Interface Performance Errors|Count|The number of network interface errors per second. In the absence of data, this metric will retain the most recent value emitted|Dimension, Name, Type|
+|PurefaVolumePerformanceBandwidthBytesV2|Volume|Nexus Storage Volume Performance Bandwidth Bytes|Bytes|Volume throughput in bytes per second. In the absence of data, this metric will retain the most recent value emitted|Name, Dimension|
+|PurefaVolumePerformanceIOPS|Volume|Nexus Storage Volume Performance IOPS (Deprecated)|Count|Deprecated - Performance of pure storage array volumes based on input/output processing per second. In the absence of data, this metric will default to 0|Dimension, Volume|
+|PurefaVolumePerformanceLatencyMs|Volume|Nexus Storage Vol Perf Latency (Deprecated)|MilliSeconds|Deprecated - Latency of the pure storage array volumes. In the absence of data, this metric will default to 0|Dimension, Volume|
+|PurefaVolumePerformanceLatencyMsV2|Volume|Nexus Storage Volume Latency|MilliSeconds|Latency of the pure storage array volumes. In the absence of data, this metric will default to 0|Dimension, Name|
+|PurefaVolumePerformanceThroughputBytes|Volume|Nexus Storage Vol Perf Throughput (Deprecated)|Bytes|Deprecated - Pure storage array volume throughput. In the absence of data, this metric will retain the most recent value emitted|Dimension, Volume|
+|PurefaVolumePerformanceThroughputIops|Volume|Nexus Storage Volume Performance Throughput Iops|Count|Volume throughput in operations per second. In the absence of data, this metric will retain the most recent value emitted|Name, Dimension|
+|PurefaVolumeSpaceBytes|Volume|Nexus Storage Volume Space Bytes (Deprecated)|Bytes|Deprecated - Pure storage array volume space. In the absence of data, this metric will retain the most recent value emitted|Dimension, Volume|
+|PurefaVolumeSpaceBytesV2|Volume|Nexus Storage Volume Space Bytes|Bytes|Pure storage array volume space. In the absence of data, this metric will retain the most recent value emitted|Name, Space|
+|PurefaVolumeSpaceDataReductionRatioV2|Volume|Nexus Storage Volume Space Data Reduction Ratio|Unspecified|Volume space data reduction ratio. In the absence of data, this metric will retain the most recent value emitted|Name|
+|PurefaVolumeSpaceDatareductionRatio|Volume|Nexus Storage Vol Space DRR (Deprecated)|Count|Deprecated - Data reduction ratio of the pure storage array volumes. In the absence of data, this metric will retain the most recent value emitted|Volume|
+|PurefaVolumeSpaceSizeBytes|Volume|Nexus Storage Volume Space Size Bytes (Deprecated)|Bytes|Deprecated - Size of the pure storage array volumes. In the absence of data, this metric will retain the most recent value emitted|Volume|
+|PurefaVolumeSpaceUsage|Volume|Nexus Storage Volume Space Used (Deprecated)|Percent|Deprecated - Space usage of the pure storage array volumes. In the absence of data, this metric will default to 0|Volume|
+
+## Cluster Management
+All the metrics from Cluster Management are collected and delivered to Azure Monitor per minute.
+
+### ***cluster management metrics***
+
+| Metric | Category | Display Name | Unit | Description | Dimensions |
+|-|:-:|:--:|:-:|:--:|:--:|
+|NexusClusterDeployClusterRequests|Nexus Cluster|Cluster Deploy Requests|Count|Number of cluster deploy requests|Cluster Version|
+|NexusClusterConnectionStatus|Nexus Cluster|Cluster Connection Status|Count|Tracks changes in the connection status of the Cluster(s) managed by the Cluster Manager. The metric gets a value of 0 when the Cluster is connected and 1 when disconnected. The reason filter describes the connection status itself.|Cluster Name, Reason|
+|NexusClusterMachineUpgrade|Nexus Cluster|Cluster Machine Upgrade|Count|Tracks Nexus machine upgrade requests; a successful request has a value of 0 and an unsuccessful request has a value of 1.|Cluster Name, Cluster Version, Result, Upgraded From Version, Upgraded To Version, Upgrade Strategy|
+|NexusClusterManagementBundleUpgrade|Nexus Cluster|Cluster Management Bundle Upgrade|Count|Tracks Nexus Cluster management bundle upgrades; a successful upgrade has a value of 0 and an unsuccessful upgrade has a value of 1.|Cluster Name, Cluster Version, Result, Upgraded From Version, Upgraded To Version|
+|NexusClusterRuntimeBundleUpgrade|Nexus Cluster|Cluster Runtime Bundle Upgrade|Count|Tracks Nexus Cluster runtime bundle upgrades; a successful upgrade has a value of 0 and an unsuccessful upgrade has a value of 1.|Cluster Name, Cluster Version, Result, Upgraded From Version, Upgraded To Version|
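
Since NexusClusterConnectionStatus reports 0 while the Cluster is connected and 1 when it is disconnected, it is a natural candidate for a metric alert. The sketch below is a hedged illustration using the `azure-mgmt-monitor` package, not an official procedure; the subscription ID, resource group, and cluster resource ID are placeholders, and scoping the rule to the Nexus Cluster resource is an assumption.

```python
# Hedged sketch: alert when NexusClusterConnectionStatus reports 1 (disconnected).
# Assumes azure-identity and azure-mgmt-monitor; IDs and names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<subscription-id>"        # placeholder
cluster_id = "<nexus-cluster-resource-id>"   # placeholder; assumed alert scope

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

alert = MetricAlertResource(
    location="global",
    description="Cluster lost its connection to the Cluster Manager",
    severity=2,
    enabled=True,
    scopes=[cluster_id],
    evaluation_frequency="PT1M",
    window_size="PT5M",
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="cluster-disconnected",
                metric_name="NexusClusterConnectionStatus",
                operator="GreaterThanOrEqual",
                threshold=1,           # 1 means disconnected per the table above
                time_aggregation="Maximum",
            )
        ]
    ),
    actions=[],
)

client.metric_alerts.create_or_update("<resource-group>", "nexus-cluster-disconnected", alert)
```
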
## Network Fabric Metrics

### Network Devices Metrics
-The collection interval for Network Fabric device metrics varies and you can find more information per metric (please refer to column "Collection Interval").
+The collection interval for Network Fabric device metrics varies and you can find more information per metric (refer to column "Collection Interval").
| Metric Name | Metric Display Name | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? | Collection Interval |
|-|:-:|:-:|:--:|:-:|-|::|:-:|:-:|
| LacpTxErrors | LACP Tx Errors | LACP State Counters | Count | Average | The count of LACPDU packets with errors transmitted by an interface over a given interval of time | Interface name | Yes | Every 5 mins |
| LacpUnknownErrors | LACP Unknown Errors | LACP State Counters | Count | Average | The count of LACPDU packets with unknown errors over a given interval of time | Interface name | Yes | Every 5 mins |
| LldpFrameIn | LLDP Frame In | LLDP State Counters | Count | Average | The count of LLDP frames received by an interface over a given interval of time | Interface name | Yes | Every 5 mins |
-| LldpFrameOut | LLDP Frame Out | LLDP State Counters | Count | Average | The count of LLDP frames trasmitted from an interface over a given interval of time | Interface name | Yes | Every 5 mins |
+| LldpFrameOut | LLDP Frame Out | LLDP State Counters | Count | Average | The count of LLDP frames transmitted from an interface over a given interval of time | Interface name | Yes | Every 5 mins |
| LldpTlvUnknown | LLDP Tlv Unknown | LLDP State Counters | Count | Average | The count of LLDP frames received with unknown TLV by an interface over a given interval of time | Interface name | Yes | Every 5 mins |
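
The table above marks these counters as exportable through diagnostic settings, which is how they can be routed to a Log Analytics workspace for retention and KQL queries. The following is a hedged sketch using the `azure-mgmt-monitor` package; the device and workspace resource IDs are placeholders, and the `AllMetrics` category name is an assumption to verify against the resource's supported diagnostic categories.

```python
# Hedged sketch: export the device metrics above to a Log Analytics workspace
# through a diagnostic setting. Assumes azure-identity and azure-mgmt-monitor;
# resource IDs are placeholders and the "AllMetrics" category is an assumption.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, MetricSettings

subscription_id = "<subscription-id>"                   # placeholder
device_id = "<network-fabric-device-resource-id>"       # placeholder
workspace_id = "<log-analytics-workspace-resource-id>"  # placeholder

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

client.diagnostic_settings.create_or_update(
    resource_uri=device_id,
    name="export-fabric-device-metrics",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        metrics=[MetricSettings(category="AllMetrics", enabled=True)],
    ),
)
```
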
reliability Overview Reliability Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview-reliability-guidance.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
|Azure Storage - Blob Storage|[Choose the right redundancy option](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#choose-the-right-redundancy-option)|[Azure storage disaster recovery planning and failover](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) / [Azure Blob backup](/azure/backup/blob-backup-overview)|
|Azure Stream Analytics|| [Achieve geo-redundancy for Azure Stream Analytics jobs](../stream-analytics/geo-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
|Azure Virtual WAN|[How are Availability Zones and resiliency handled in Virtual WAN?](../virtual-wan/virtual-wan-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-are-availability-zones-and-resiliency-handled-in-virtual-wan)| [Disaster recovery design](/azure/virtual-wan/disaster-recovery-design) |
-|Azure Web Application Firewall|[Deploy an Azure Firewall with Availability Zones using Azure PowerShell](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[How do I achieve a disaster recovery scenario across datacenters by using Application Gateway?](../application-gateway/application-gateway-faq.yml?#how-do-i-achieve-a-disaster-recovery-scenario-across-datacenters-by-using-application-gateway) |
### ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic services
remote-rendering Create An Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/create-an-account.md
# Create an Azure Remote Rendering account
+> [!NOTE]
+> Please note that Azure Remote Rendering (ARR) will be retired on **September 30, 2025**. It is no longer possible to create new accounts if your subscription did not have an active ARR account previously.
+> More details [here](https://azure.microsoft.com/updates/v2/azure-remote-rendering-retirement).
+ This chapter guides you through the steps to create an account for the **Azure Remote Rendering** service. A valid account is mandatory for completing any of the quickstarts or tutorials.

## Create an account
remote-rendering About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/about.md
# About Azure Remote Rendering
+> [!NOTE]
+> Please note that Azure Remote Rendering (ARR) will be retired on **September 30, 2025**. More details [here](https://azure.microsoft.com/updates/v2/azure-remote-rendering-retirement).
+ *Azure Remote Rendering (ARR)* is a service that enables you to render high-quality, interactive 3D content in the cloud and stream it in real time to devices, such as the HoloLens 2. ![Diagram that shows an example of rendered high-quality, interactive 3D automobile engine.](../media/arr-engine.png)
sap Manage With Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-with-azure-rbac.md
To register an existing SAP system and manage that system with Azure Center for
| Minimum permissions for *user-assigned managed identities* |
| - |
| `Microsoft.Compute/virtualMachines/read` |
+| `Microsoft.Compute/virtualMachines/write` |
| `Microsoft.Compute/virtualMachines/extensions/read` |
| `Microsoft.Compute/virtualMachines/extensions/write` |
| `Microsoft.Compute/virtualMachines/extensions/delete` |
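
As an illustration of how these minimum permissions could be granted, the sketch below packages the virtual machine actions shown above into a custom Azure role using the `azure-mgmt-authorization` package. This is a hedged example only: the subscription ID, scope, and role name are placeholders, and the action list covers just the subset of permissions visible in this table.

```python
# Hedged sketch: package the VM permissions listed above into a custom role.
# Assumes azure-identity and azure-mgmt-authorization; IDs and names are placeholders,
# and the action list is only the subset of permissions shown in the table above.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import Permission, RoleDefinition

subscription_id = "<subscription-id>"        # placeholder
scope = f"/subscriptions/{subscription_id}"  # placeholder assignable scope

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

role = RoleDefinition(
    role_name="ACSS managed identity VM access (example)",
    description="Example subset of permissions for registering an SAP system",
    role_type="CustomRole",
    assignable_scopes=[scope],
    permissions=[
        Permission(
            actions=[
                "Microsoft.Compute/virtualMachines/read",
                "Microsoft.Compute/virtualMachines/write",
                "Microsoft.Compute/virtualMachines/extensions/read",
                "Microsoft.Compute/virtualMachines/extensions/write",
                "Microsoft.Compute/virtualMachines/extensions/delete",
            ],
            not_actions=[],
        )
    ],
)

client.role_definitions.create_or_update(scope, str(uuid.uuid4()), role)
```
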
sentinel Connect Common Event Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-common-event-format.md
- Title: Get CEF-formatted logs from your device or appliance into Microsoft Sentinel | Microsoft Docs
-description: Use the Log Analytics agent, installed on a Linux-based log forwarder, to ingest logs sent in Common Event Format (CEF) over Syslog into your Microsoft Sentinel workspace.
--- Previously updated : 11/09/2021---
-# Get CEF-formatted logs from your device or appliance into Microsoft Sentinel
--
-Many networking and security devices and appliances send their system logs over the Syslog protocol in a specialized format known as Common Event Format (CEF). This format includes more information than the standard Syslog format, and it presents the information in a parsed key-value arrangement. The Log Analytics agent accepts CEF logs, formats them for use with Microsoft Sentinel, and then forwards them to your Microsoft Sentinel workspace.
-
-Learn how to [collect Syslog with the AMA](/azure/azure-monitor/agents/data-collection-syslog), including how to configure Syslog and create a DCR.
-
-> [!IMPORTANT]
->
-> Upcoming changes:
-> - On **February 28th 2023**, we introduced changes to the CommonSecurityLog table schema.
-> - Following this change, you might need to review and update custom queries. For more details, see the [recommended actions section](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232) in this blog post. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) has been updated by Microsoft Sentinel.
-> - Data that has been streamed and ingested before the change will still be available in its former columns and formats. Old columns will therefore remain in the schema.
-> - On **31 August, 2024**, the [Log Analytics agent will be retired](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start [planning your migration to the AMA](ama-migrate.md). Review the [options for streaming logs in the CEF and Syslog format to Microsoft Sentinel](connect-cef-syslog-options.md).
-
-This article describes the process of using CEF-formatted logs to connect your data sources. For information about data connectors that use this method, see [Microsoft Sentinel data connectors reference](data-connectors-reference.md).
-
-There are two main steps to making this connection, which are explained below in detail:
-
-- Designating a Linux machine or VM as a dedicated log forwarder, installing the Log Analytics agent on it, and configuring the agent to forward the logs to your Microsoft Sentinel workspace. The installation and configuration of the agent are handled by a deployment script.
-
-- Configuring your device to send its logs in CEF format to a Syslog server.
-
-> [!NOTE]
-> Data is stored in the geographic location of the workspace on which you are running Microsoft Sentinel.
-
-## Supported architectures
-
-The following diagram describes the setup in the case of a Linux VM in Azure:
-
- ![CEF in Azure](./media/connect-cef/cef-syslog-azure.png)
-
-Alternatively, you'll use the following setup if you use a VM in another cloud, or an on-premises machine:
-
- ![CEF on premises](./media/connect-cef/cef-syslog-onprem.png)
-
-## Prerequisites
-
-A Microsoft Sentinel workspace is required in order to ingest CEF data into Log Analytics.
-- You must have read and write permissions on this workspace.
-
-- You must have read permissions to the shared keys for the workspace. [Learn more about workspace keys](/azure/azure-monitor/agents/agent-windows).
-
-## Designate a log forwarder and install the Log Analytics agent
-
-This section describes how to designate and configure the Linux machine that will forward the logs from your device to your Microsoft Sentinel workspace.
-
-Your Linux machine can be a physical or virtual machine in your on-premises environment, an Azure VM, or a VM in another cloud.
-
-Use the link provided on the **Common Event Format (CEF) data connector page** to run a script on the designated machine and perform the following tasks:
--- **Installs the Log Analytics agent for Linux** (also known as the OMS agent) and configures it for the following purposes:
- - listening for CEF messages from the built-in Linux Syslog daemon on TCP port 25226
- - sending the messages securely over TLS to your Microsoft Sentinel workspace, where they are parsed and enriched
--- **Configures the built-in Linux Syslog daemon** (rsyslog.d/syslog-ng) for the following purposes:
- - listening for Syslog messages from your security solutions on TCP port 514
- - forwarding only the messages it identifies as CEF to the Log Analytics agent on localhost using TCP port 25226
-
-For more information, see [Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentinel](connect-log-forwarder.md).
-
-### Security considerations
-
-Make sure to configure the machine's security according to your organization's security policy. For example, you can configure your network to align with your corporate network security policy and change the ports and protocols in the daemon to align with your requirements.
-
-For more information, see [Secure VM in Azure](/azure/virtual-machines/security-policy) and [Best practices for Network security](../security/fundamentals/network-best-practices.md).
-
-If your devices are sending Syslog and CEF logs over TLS, such as when your log forwarder is in the cloud, you will need to configure the Syslog daemon (rsyslog or syslog-ng) to communicate in TLS.
-
-For more information, see:
-- [Encrypting Syslog traffic with TLS – rsyslog](https://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_summary.html)
-- [Encrypting log messages with TLS – syslog-ng](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.22/administration-guide/60#TOPIC-1209298)
-
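If you want a starting point for the rsyslog side, the following receiver snippet is a minimal, illustrative sketch only; the certificate paths, authentication mode, and port are placeholders that you must adapt to your environment, and syslog-ng uses a different syntax (see the links above).

```bash
# Illustrative contents of /etc/rsyslog.d/tls.conf (paths, auth mode, and port are placeholders)
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/ca.pem"
  DefaultNetstreamDriverCertFile="/etc/rsyslog.d/certs/forwarder-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/certs/forwarder-key.pem"
)
# Load the TCP input with TLS enforced; "anon" skips client certificate validation
module(load="imtcp" StreamDriver.Mode="1" StreamDriver.AuthMode="anon")
input(type="imtcp" port="6514")
```

After changing the daemon configuration, restart it (for example, `sudo systemctl restart rsyslog`) so the new listener takes effect.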
-## Configure your device
-
-Locate and follow your device vendor's configuration instructions for sending logs in CEF format to a SIEM or log server.
-
-If your product appears in the data connectors gallery, you can consult the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) for assistance, where the configuration instructions should include the settings in the list below.
-
- - Protocol = TCP
- - Port = 514
- - Format = CEF
- - IP address - make sure to send the CEF messages to the IP address of the virtual machine you dedicated for this purpose.
-
-This solution supports Syslog RFC 3164 or RFC 5424.
-
-> [!TIP]
-> Define a different protocol or port number in your device as needed, as long as you also make the same changes in the Syslog daemon on the log forwarder.
->
-
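As an optional sanity check before relying on the device itself, you can emit a synthetic CEF message from the log forwarder and confirm that it reaches your workspace. The vendor, product, and field values below are placeholders, not a requirement of the connector:

```bash
# Send a mock CEF event to the local Syslog daemon on the forwarder (illustrative values only)
logger -p local4.warn -t CEF: "0|Contoso|MockAppliance|1.0|100|Connector test|5|src=10.0.0.1 dst=10.0.0.2 spt=1232"
```

If the forwarder is configured correctly, the event should appear in the `CommonSecurityLog` table after the usual ingestion delay.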
-## Find your data
-
-It may take up to 20 minutes after the connection is made for data to appear in Log Analytics.
-
-To search for CEF events in Log Analytics, query the `CommonSecurityLog` table in the query window.
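For example, a quick check along these lines (the time range and grouping are arbitrary) summarizes recent CEF events by sending device:

```kusto
// Illustrative query: recent CEF events grouped by sending device
CommonSecurityLog
| where TimeGenerated > ago(24h)
| summarize Events = count() by DeviceVendor, DeviceProduct
```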
-
-Some products listed in the data connectors gallery require the use of additional parsers for best results. These parsers are implemented through the use of Kusto functions. For more information, see the section for your product on the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page.
-
-To find CEF events for these products, enter the name of the Kusto function as your query subject, instead of "CommonSecurityLog."
-
-You can find helpful sample queries, workbooks, and analytics rule templates made especially for your product on the **Next steps** tab of your product's data connector page in the Microsoft Sentinel portal.
-
-If you're not seeing any data, see the [CEF troubleshooting](./troubleshooting-cef-syslog.md) page for guidance.
-
-### Changing the source of the TimeGenerated field
-
-By default, the Log Analytics agent populates the *TimeGenerated* field in the schema with the time the agent received the event from the Syslog daemon. As a result, the time at which the event was generated on the source system is not recorded in Microsoft Sentinel.
-
-You can, however, run the following command, which will download and run the `TimeGenerated.py` script. This script configures the Log Analytics agent to populate the *TimeGenerated* field with the event's original time on its source system, instead of the time it was received by the agent. In the following command, replace `{WORKSPACE_ID}` with your own workspace ID.
-
-```bash
-wget -O TimeGenerated.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/TimeGenerated.py && python TimeGenerated.py {WORKSPACE_ID}
-```
-
-## Next steps
-
-In this document, you learned how Microsoft Sentinel collects CEF logs from devices and appliances. To learn more about connecting your product to Microsoft Sentinel, see the following articles:
-- [Deploy a Syslog/CEF forwarder](connect-log-forwarder.md)
-- [Microsoft Sentinel data connectors reference](data-connectors-reference.md)
-- [Troubleshoot log forwarder connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity)
-
-To learn more about what to do with the data you've collected in Microsoft Sentinel, see the following articles:
-- Learn about [CEF and CommonSecurityLog field mapping](cef-name-mapping.md).
-- Learn how to [get visibility into your data and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
sentinel Connect Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-custom-logs.md
- Title: Collect data in custom log formats to Microsoft Sentinel | Microsoft Docs
-description: Collect data from custom data sources and ingest it into Microsoft Sentinel using the Log Analytics agent.
-- Previously updated : 06/05/2023---
-# Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent
-
-This article describes how to collect data from devices that use custom log formats to Microsoft Sentinel using the **Log Analytics agent**. To learn how to ingest custom logs **using the Azure Monitor Agent (AMA)**, see [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md).
-
-Many applications log data to text files instead of standard logging services like Windows Event log or Syslog. You can use the Log Analytics agent to collect data in text files of nonstandard formats from both Windows and Linux computers. Once collected, you can either parse the data into individual fields in your queries or extract the data during collection to individual fields.
-
-For more information about supported data connectors that collect custom log formats, see [Data connectors reference](data-connectors-reference.md).
-
-> [!IMPORTANT]
-> **The Log Analytics agent will be [retired on 31 August, 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/).** If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you complete your migration to the **Azure Monitor Agent (AMA)**. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
-
-Learn all about [custom logs in the Azure Monitor documentation](/azure/azure-monitor/agents/data-sources-custom-logs).
--
-## Install the Log Analytics agent
-
-Install the Log Analytics agent on the Linux or Windows machine that will be generating the logs.
-
-Some vendors recommend installing the Log Analytics agent on a separate log server instead of directly on the device. Consult your product's section on the [Data connectors reference](data-connectors-reference.md) page, or your product's own documentation.
-
-Select the appropriate tab below, depending on whether your connector is part of a solution listed in the Microsoft Sentinel content hub.
-
-# [From a specific data connector page](#tab/DCG)
--
-Before you begin, install the solution for the product from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md). Once the data connector for the product is available, continue with the following steps.
-
-1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
-
-1. Search for and select the appropriate product data connector.
-1. Select **Open connector page**.
-
-1. Install and onboard the agent on the device that generates the logs. Choose Linux or Windows as appropriate.
-
- | Machine type | Instructions |
- | | |
- | **For an Azure Linux VM** | <ol><li>Under **Choose where to install the Linux agent**, expand **Install agent on Azure Linux virtual machine**.<br><br><li>Select the **Download & install agent for Azure Linux Virtual machines >** link.<br><br><li>In the **Virtual machines** blade, select a virtual machine to install the agent on, and then select **Connect**. Repeat this step for each VM you wish to connect. |
- | **For any other Linux machine** | <ol><li>Under **Choose where to install the Linux agent**, expand **Install agent on a non-Azure Linux Machine**.<br><br><li>Select the **Download & install agent for non-Azure Linux machines >** link.<br><br><li>In the **Agents management** blade, select the **Linux servers** tab, then copy the command for **Download and onboard agent for Linux** and run it on your Linux machine.<br><br> If you want to keep a local copy of the Linux agent installation file, select the **Download Linux Agent** link above the "Download and onboard agent" command.|
- | **For an Azure Windows VM** | <ol><li>Under **Choose where to install the Windows agent**, expand **Install agent on Azure Windows virtual machine**.<br><br><li>Select the **Download & install agent for Azure Windows Virtual machines >** link.<br><br><li>In the **Virtual machines** blade, select a virtual machine to install the agent on, and then select **Connect**. Repeat this step for each VM you wish to connect. |
- | **For any other Windows machine** | <ol><li>Under **Choose where to install the Windows agent**, expand **Install agent on a non-Azure Windows Machine**<br><br><li>Select the **Download & install agent for non-Azure Windows machines >** link.<br><br><li>In the **Agents management** blade, on the **Windows servers** tab, select the **Download Windows Agent** link for either 32-bit or 64-bit systems, as appropriate. |
-
-# [Other data sources](#tab/CUS)
-
-1. From the Microsoft Sentinel navigation menu, select **Settings** and then the **Workspace settings** tab.
-
-1. Install and onboard the agent on the device that generates the logs. Choose Linux or Windows as appropriate.
-
- | Machine type | Instructions |
- | | |
- | **Azure VM (Windows or Linux)** | <ol><li>From the Log Analytics workspace navigation menu, select **Virtual machines**.<br><br><li>In the **Virtual machines** blade, select a virtual machine to install the agent on, and then select **Connect**.<br>Repeat this step for each VM you wish to connect. |
- | **Any other Windows or Linux machine** | <ol><li>From the Log Analytics workspace navigation menu, select **Agents management**.<br><br><li>Select the **Windows servers** or **Linux servers** tab as appropriate.<br><br><li>For Windows, select the **Download Windows Agent** link for either 32-bit or 64-bit systems, as appropriate. For Linux, copy the command for **Download and onboard agent for Linux** and run it from your command line, or select the **Download Linux Agent** link to download a local copy of the installation file. |
---
-## Configure the logs to be collected
-
-Many device types have their own data connectors appearing in the **Data connectors** page in Microsoft Sentinel. Some of these connectors require special additional instructions to properly set up log collection in Microsoft Sentinel. These instructions can include the implementation of a parser based on a Kusto function.
-
-All connectors listed in Microsoft Sentinel will display any specific instructions on their respective connector pages in the portal, as well as in their sections of the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page.
-
-If your product doesn't have a solution with a data connector listed in the **Content Hub**, consult your vendor's documentation for instructions on configuring logging for your device.
-
-## Configure the Log Analytics agent
-
-1. From the connector page, select the **Open your workspace custom logs configuration** link.
-
- Or, from the Log Analytics workspace navigation menu, select **Custom logs**.
-
-1. In the **Custom tables** tab, select **Add custom log**.
-
-1. In the **Sample** tab, upload a sample of a log file from your device (e.g. access.log or error.log). Then, select **Next**.
-
-1. In the **Record delimiter** tab, select a record delimiter, either **New line** or **Timestamp** (see the instructions on that tab), and select **Next**.
-
-1. In the **Collection paths** tab, select a path type of Windows or Linux, and enter the path to your device's logs based on your configuration. Then, select **Next**.
-
-1. Give your custom log a name and optionally a description and select **Next**.
- Don't end your name with "_CL", as it will be appended automatically.
--
-## Find your data
-
-To query the custom log data in **Logs**, type the name you gave your custom log (ending in "_CL") in the query window.
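For example, assuming you named your custom log `MyDeviceLog` (an illustrative name only), a basic query might look like:

```kusto
// Illustrative query; replace MyDeviceLog_CL with the name you gave your custom log
MyDeviceLog_CL
| where TimeGenerated > ago(1h)
| take 10
```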
-
-## Next steps
-
-In this document, you learned how to collect data from custom log types to ingest into Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:
-- Learn how to [get visibility into your data and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
-- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
- Title: Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentinel | Microsoft Docs
-description: Learn how to deploy a log forwarder, consisting of a Syslog daemon and the Log Analytics agent, as part of the process of ingesting Syslog and CEF logs to Microsoft Sentinel.
--- Previously updated : 06/18/2024--
-# Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentinel
-
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
-
-To ingest Syslog and CEF logs into Microsoft Sentinel, particularly from devices and appliances onto which you can't install the Log Analytics agent directly, you'll need to designate and configure a Linux machine that will collect the logs from your devices and forward them to your Microsoft Sentinel workspace. This machine can be a physical or virtual machine in your on-premises environment, an Azure VM, or a VM in another cloud.
-
-This machine has two components that take part in this process:
-- A syslog daemon, either **rsyslog** or **syslog-ng**, which collects the logs.
-- The **Log Analytics agent** (also known as the OMS agent), which forwards the logs to Microsoft Sentinel.
-
-Using the link provided below, you will run a script on the designated machine that performs the following tasks:
--- Installs the Log Analytics agent for Linux (also known as the OMS agent) and configures it for the following purposes:
- - listening for CEF messages from the built-in Linux Syslog daemon on TCP port 25226
- - sending the messages securely over TLS to your Microsoft Sentinel workspace, where they are parsed and enriched
--- Configures the built-in Linux Syslog daemon (rsyslog.d/syslog-ng) for the following purposes:
- - listening for Syslog messages from your security solutions on TCP port 514
- - forwarding only the messages it identifies as CEF to the Log Analytics agent on localhost using TCP port 25226
-
-> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
->
-> For information about deploying Syslog and/or CEF logs with the Azure Monitor Agent, review the [options for streaming logs in the CEF and Syslog format to Microsoft Sentinel](connect-cef-syslog-options.md).
-
-## Prerequisites
--
-Install the product solution from the **Content Hub** in Microsoft Sentinel. If the product isn't listed, install the solution for **Common Event Format**. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
-
-> [!IMPORTANT]
-> Operating system versions may have different support dates and lifecycles. We recommend that you check the official documentation of each distribution for the most accurate and up-to-date support and end of life dates.
-
-Your machine must meet the following requirements:
--- **Hardware (physical/virtual)**-
- - Your Linux machine must have a minimum of **4 CPU cores and 8 GB RAM**.
-
- > [!NOTE]
- > - A single log forwarder machine with the above hardware configuration and using the **rsyslog** daemon has a supported capacity of **up to 8500 events per second (EPS)** collected.
--- **Operating system**-
- - Amazon Linux 2 (64-bit only)
- - Oracle Linux 7, 8 (64-bit/32-bit)
- - Red Hat Enterprise Linux (RHEL) Server 7 and 8 (not 6), including minor versions (64-bit/32-bit)
- - Debian GNU/Linux 8 and 9 (64-bit/32-bit)
- - Ubuntu Linux 20.04 LTS (64-bit only)
- - SUSE Linux Enterprise Server 12, 15 (64-bit only)
- - CentOS distributions **are no longer supported**, as they have reached End Of Life (EOL) status. See note at the beginning of this article.
--- **Daemon versions**-
- - Rsyslog: v8
- - Syslog-ng: 2.1 - 3.22.1
--- **Packages**
- - You must have **Python 2.7** or **3** installed on the Linux machine.<br>Use the `python --version` or `python3 --version` command to check.
- - You must have the [GNU Wget](https://www.gnu.org/software/wget/) package.
--- **Syslog RFC support**
- - Syslog RFC 3164
- - Syslog RFC 5424
-
-- **Configuration**
- - You must have elevated permissions (sudo) on your designated Linux machine.
- - The Linux machine must not be connected to any Azure workspaces before you install the Log Analytics agent.
--- **Data**
- - You may need your Microsoft Sentinel workspace's **Workspace ID** and **Workspace Primary Key** at some point in this process. You can find them in the workspace settings, under **Agents management**.
-
-### Security considerations
-
-Make sure to configure the machine's security according to your organization's security policy. For example, you can configure your network to align with your corporate network security policy and change the ports and protocols in the daemon to align with your requirements. You can use the following instructions to improve your machine security configuration:  [Secure VM in Azure](/azure/virtual-machines/security-policy), [Best practices for Network security](../security/fundamentals/network-best-practices.md).
-
-If your devices are sending Syslog and CEF logs over TLS (because, for example, your log forwarder is in the cloud), you will need to configure the Syslog daemon (rsyslog or syslog-ng) to communicate in TLS. See the following documentation for details:
-- [Encrypting Syslog traffic with TLS – rsyslog](https://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_summary.html)
-- [Encrypting log messages with TLS – syslog-ng](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.22/administration-guide/60#TOPIC-1209298)
-
-## Run the deployment script
-
-1. In Microsoft Sentinel, select **Data connectors**.
-1. Select the connector for your product from the connectors gallery. If your product isn't listed, select **Common Event Format (CEF)**.
-1. In the details pane for the connector, select **Open connector page**.
-1. On the connector page, in the instructions under **1.2 Install the CEF collector on the Linux machine**, copy the link provided under **Run the following script to install and apply the CEF collector**.
-If you don't have access to that page, copy the link from the text below (copying and pasting the **Workspace ID** and **Primary Key** from above in place of the placeholders):
-
- ```bash
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py && sudo python cef_installer.py [WorkspaceID] [Workspace Primary Key]
- ```
-1. Paste the link or the text into the command line on your log forwarder, and run it.
-
-1. While the script is running, check to make sure you don't get any error or warning messages.
- - You may get a message directing you to run a command to correct an issue with the mapping of the *Computer* field. See the [explanation in the deployment script](#mapping-command) for details.
-
-1. [Configure your device to send CEF messages](connect-common-event-format.md#configure-your-device).
-
- > [!NOTE]
- > **Using the same machine to forward both plain Syslog *and* CEF messages**
- >
- > If you plan to use this log forwarder machine to forward [Syslog messages](connect-syslog.md) as well as CEF, then in order to avoid the duplication of events to the Syslog and CommonSecurityLog tables:
- >
- > 1. On each source machine that sends logs to the forwarder in CEF format, you must edit the Syslog configuration file to remove the facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also be sent in Syslog. See [Configure Syslog on Linux agent](/azure/azure-monitor/agents/data-sources-syslog#configure-syslog-on-linux-agent) for detailed instructions on how to do this.
- >
- > 1. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in Microsoft Sentinel. This ensures that the configuration change you made in the previous step does not get overwritten.<br>
- > `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'`
-
-## Deployment script explained
-
-The following is a command-by-command description of the actions of the deployment script.
-
-Choose a syslog daemon to see the appropriate description.
-
-# [rsyslog daemon](#tab/rsyslog)
-
-1. **Downloading and installing the Log Analytics agent:**
-
- - Downloads the installation script for the Log Analytics (OMS) Linux agent.
-
- ```bash
- wget -O onboard_agent.sh https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
- ```
-
- - Installs the Log Analytics agent.
-
- ```bash
- sh onboard_agent.sh -w [workspaceID] -s [Primary Key] -d opinsights.azure.com
- ```
-
-1. **Setting the Log Analytics agent configuration to listen on port 25226 and forward CEF messages to Microsoft Sentinel:**
-
- - Downloads the configuration from the Log Analytics agent GitHub repository.
-
- ```bash
- wget -O /etc/opt/microsoft/omsagent/[workspaceID]/conf/omsagent.d/security_events.conf https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/installer/conf/omsagent.d/security_events.conf
- ```
-
-1. **Configuring the Syslog daemon:**
-
- - Opens port 514 for TCP communication using the syslog configuration file `/etc/rsyslog.conf`.
-
- - Configures the daemon to forward CEF messages to the Log Analytics agent on TCP port 25226, by inserting a special configuration file `security-config-omsagent.conf` into the syslog daemon directory `/etc/rsyslog.d/`.
-
- Contents of the `security-config-omsagent.conf` file:
-
- ```bash
- if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
- ```
-
-1. **Restarting the Syslog daemon and the Log Analytics agent:**
-
- - Restarts the rsyslog daemon.
-
- ```bash
- service rsyslog restart
- ```
-
- - Restarts the Log Analytics agent.
-
- ```bash
- /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. **Verifying the mapping of the *Computer* field as expected:**
-
- - Ensures that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
-
- ```bash
- grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
- ```
- - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
-
- ```bash
- sudo sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-# [syslog-ng daemon](#tab/syslogng)
-
-1. **Downloading and installing the Log Analytics agent:**
-
- - Downloads the installation script for the Log Analytics (OMS) Linux agent.
-
- ```bash
- wget -O onboard_agent.sh https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
- ```
-
- - Installs the Log Analytics agent.
-
- ```bash
- sh onboard_agent.sh -w [workspaceID] -s [Primary Key] -d opinsights.azure.com
- ```
-
-1. **Setting the Log Analytics agent configuration to listen on port 25226 and forward CEF messages to Microsoft Sentinel:**
-
- - Downloads the configuration from the Log Analytics agent GitHub repository.
-
- ```bash
- wget -O /etc/opt/microsoft/omsagent/[workspaceID]/conf/omsagent.d/security_events.conf https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/installer/conf/omsagent.d/security_events.conf
- ```
-
-1. **Configuring the Syslog daemon:**
-
- - Opens port 514 for TCP communication using the syslog configuration file `/etc/syslog-ng/syslog-ng.conf`.
-
- - Configures the daemon to forward CEF messages to the Log Analytics agent on TCP port 25226, by inserting a special configuration file `security-config-omsagent.conf` into the syslog daemon directory `/etc/syslog-ng/conf.d/`.
-
- Contents of the `security-config-omsagent.conf` file:
-
- ```bash
- filter f_oms_filter {match("CEF\|ASA" ) ;};destination oms_destination {tcp("127.0.0.1" port(25226));};
- log {source(s_src);filter(f_oms_filter);destination(oms_destination);};
- ```
-
-1. **Restarting the Syslog daemon and the Log Analytics agent:**
-
- - Restarts the syslog-ng daemon.
-
- ```bash
- service syslog-ng restart
- ```
-
- - Restarts the Log Analytics agent.
-
- ```bash
- /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. **Verifying the mapping of the *Computer* field as expected:**
-
- - Ensures that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
-
- ```bash
- grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
- ```
- - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
-
- ```bash
- sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
--
-## Next steps
-
-In this document, you learned how to deploy the Log Analytics agent to connect CEF appliances to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:
-- Learn about [CEF and CommonSecurityLog field mapping](cef-name-mapping.md).
-- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
sentinel Connect Logstash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash.md
- Title: Use Logstash to stream logs with HTTP Data Collection API (legacy)
-description: Learn how to use Logstash to forward logs from external data sources to Microsoft Sentinel using the HTTP Data Collection API.
--- Previously updated : 11/09/2021---
-# Use Logstash to stream logs with HTTP Data Collection API (legacy)
-
-> [!IMPORTANT]
-> Data ingestion using the Logstash output plugin is currently in public preview. This feature is provided without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-> [!NOTE]
-> A [newer version of the Logstash plugin](connect-logstash-data-connection-rules.md) can forward logs from external data sources into custom and standard tables using the DCR based API. The new plugin allows full control over the output schema, including the configuration of the column names and types.
-
-Using Microsoft Sentinel's output plugin for the **Logstash data collection engine**, you can send any type of log you want through Logstash directly to your Log Analytics workspace in Microsoft Sentinel. Your logs will be sent to a custom table that you define using the output plugin. This version of the plugin uses the HTTP Data Collection API.
-
-To learn more about working with the Logstash data collection engine, see [Getting started with Logstash](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html).
-
-## Overview
-
-### Architecture and background
--
-The Logstash engine is composed of three components:
-
-- Input plugins: Customized collection of data from various sources.
-- Filter plugins: Manipulation and normalization of data according to specified criteria.
-- Output plugins: Customized sending of collected and processed data to various destinations.
-
-> [!NOTE]
-> - Microsoft supports only the Microsoft Sentinel-provided Logstash output plugin discussed here. The current version of this plugin is v1.0.0, released 2020-08-25. You can [open a support ticket](https://portal.azure.com/#create/Microsoft.Support) for any issues regarding the output plugin.
->
-> - Microsoft does not support third-party Logstash output plugins for Microsoft Sentinel, or any other Logstash plugin or component of any type.
->
-> - Microsoft Sentinel's Logstash output plugin supports only **Logstash versions 7.0 to 7.17.10, and versions 8.0 to 8.9 and 8.11**.
-> If you use Logstash 8, we recommend that you [disable ECS in the pipeline](https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html).
-
-The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to your Log Analytics workspace, using the Log Analytics HTTP Data Collector REST API. The data is ingested into custom logs.
-- Learn more about the [Log Analytics REST API](/rest/api/loganalytics/create-request).
-- Learn more about [custom logs](/azure/azure-monitor/agents/data-sources-custom-logs).
-
-## Deploy the Microsoft Sentinel output plugin in Logstash
-
-### Step 1: Installation
-
-The Microsoft Sentinel output plugin is available in the Logstash collection.
--- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the ***[microsoft-logstash-output-azure-loganalytics](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics)*** plugin.
-
-- If your Logstash system does not have Internet access, follow the instructions in the Logstash [Offline Plugin Management](https://www.elastic.co/guide/en/logstash/current/offline-plugins.html) document to prepare and use an offline plugin pack. (This will require you to build another Logstash system with Internet access.)-
-### Step 2: Configuration
-
-Use the information in the Logstash [Structure of a config file](https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html) document and add the Microsoft Sentinel output plugin to the configuration with the following keys and values. (The proper config file syntax is shown after the table.)
-
-| Field name | Data type | Description |
-|---|---|---|
-| `workspace_id` | string | Enter your workspace ID GUID (see Tip). |
-| `workspace_key` | string | Enter your workspace primary key GUID (see Tip). |
-| `custom_log_table_name` | string | Set the name of the table into which the logs will be ingested. Only one table name per output plugin can be configured. The log table will appear in Microsoft Sentinel under **Logs**, in **Tables** in the **Custom Logs** category, with a `_CL` suffix. |
-| `endpoint` | string | Optional field. By default, this is the Log Analytics endpoint. Use this field to set an alternative endpoint. |
-| `time_generated_field` | string | Optional field. This property overrides the default **TimeGenerated** field in Log Analytics. Enter the name of the timestamp field in the data source. The data in the field must conform to the ISO 8601 format (`YYYY-MM-DDThh:mm:ssZ`) |
-| `key_names` | array | Enter a list of Log Analytics output schema fields. Each list item should be enclosed in single quotes and the items separated by commas, and the entire list enclosed in square brackets. See example below. |
-| `plugin_flush_interval` | number | Optional field. Set to define the maximum interval (in seconds) between message transmissions to Log Analytics. The default is 5. |
-| `amount_resizing` | boolean | True or false. Enable or disable the automatic scaling mechanism, which adjusts the message buffer size according to the volume of log data received. |
-| `max_items` | number | Optional field. Applies only if `amount_resizing` is set to "false." Use it to set a cap on the message buffer size (in records). The default is 2000. |
-| `azure_resource_id` | string | Optional field. Defines the ID of the Azure resource where the data resides. <br>The resource ID value is especially useful if you are using [resource-context RBAC](resource-context-rbac.md) to provide access to specific data only. |
--
-> [!TIP]
-> - You can find the workspace ID and primary key in the workspace resource, under **Agents management**.
-> - **However**, because having credentials and other sensitive information stored in cleartext in configuration files is not in line with security best practices, you are strongly encouraged to make use of the **Logstash key store** in order to securely include your **workspace ID** and **workspace primary key** in the configuration. See [Elastic's documentation](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html) for instructions.
-
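If you follow the keystore recommendation in the tip above, a minimal sketch might look like the following; the installation path and the key names are assumptions that vary by environment, and each `add` command prompts you for the value:

```bash
# Illustrative: create the Logstash keystore and add the workspace credentials
# (path assumes a package install under /usr/share/logstash)
sudo /usr/share/logstash/bin/logstash-keystore create
sudo /usr/share/logstash/bin/logstash-keystore add WorkspaceID
sudo /usr/share/logstash/bin/logstash-keystore add WorkspaceKey
```

You can then reference the stored values in the pipeline configuration as `"${WorkspaceID}"` and `"${WorkspaceKey}"`, as the advanced sample below does with `"${WS_KEY}"`.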
-#### Sample configurations
-
-Here are some sample configurations that use a few different options.
--- A basic configuration that uses a filebeat input pipe:-
- ```ruby
- input {
- beats {
- port => "5044"
- }
- }
- filter {
- }
- output {
- microsoft-logstash-output-azure-loganalytics {
- workspace_id => "<your workspace id>"
- workspace_key => "<your workspace key>"
- custom_log_table_name => "tableName"
- }
- }
- ```
-
-- A basic configuration that uses a tcp input pipe:-
- ```ruby
- input {
- tcp {
- port => "514"
- type => syslog # optional; affects the log type in the table
- }
- }
- filter {
- }
- output {
- microsoft-logstash-output-azure-loganalytics {
- workspace_id => "<your workspace id>"
- workspace_key => "<your workspace key>"
- custom_log_table_name => "tableName"
- }
- }
- ```
-
-- An advanced configuration:-
- ```ruby
- input {
- tcp {
- port => 514
- type => syslog
- }
- }
- filter {
- grok {
- match => { "message" => "<%{NUMBER:PRI}>1 (?<TIME_TAG>[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}T[0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2})[^ ]* (?<HOSTNAME>[^ ]*) %{GREEDYDATA:MSG}" }
- }
- }
- output {
- microsoft-logstash-output-azure-loganalytics {
- workspace_id => "<WS_ID>"
- workspace_key => "${WS_KEY}"
- custom_log_table_name => "logstashCustomTable"
- key_names => ['PRI','TIME_TAG','HOSTNAME','MSG']
- plugin_flush_interval => 5
- }
- }
- ```
--- A more advanced configuration to parse a custom timestamp and a JSON string from unstructured text data and log a selected set of fields into Log Analytics with the extracted timestamp:-
- ```ruby
- # Example log line below:
- # Mon Nov 07 20:45:08 2022: { "name":"_custom_time_generated", "origin":"test_microsoft", "sender":"test@microsoft.com", "messages":1337}
- # take an input
- input {
- file {
- path => "/var/log/test.log"
- }
- }
- filter {
- # extract the header timestamp and the Json section
- grok {
- match => {
- "message" => ["^(?<timestamp>.{24}):\s(?<json_data>.*)$"]
- }
- }
- # parse the extracted header as a timestamp
- date {
- id => 'parse_metric_timestamp'
- match => [ 'timestamp', 'EEE MMM dd HH:mm:ss yyyy' ]
- timezone => 'Europe/Rome'
- target => 'custom_time_generated'
- }
- json {
- source => "json_data"
- }
- }
- # output to a file for debugging (optional)
- output {
- file {
- path => "/tmp/test.txt"
- codec => line { format => "custom format: %{message} %{custom_time_generated} %{json_data}"}
- }
- }
- # output to the console output for debugging (optional)
- output {
- stdout { codec => rubydebug }
- }
- # log into Log Analytics
- output {
- microsoft-logstash-output-azure-loganalytics {
- workspace_id => '[REDACTED]'
- workspace_key => '[REDACTED]'
- custom_log_table_name => 'RSyslogMetrics'
- time_generated_field => 'custom_time_generated'
- key_names => ['custom_time_generated','name','origin','sender','messages']
- }
- }
- ```
-
- > [!NOTE]
- > Visit the output plugin [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics) to learn more about its inner workings, configuration, and performance settings.
-
-### Step 3: Restart Logstash
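How you restart Logstash depends on how it was installed; on a systemd-managed installation, something like the following typically works (illustrative, adjust for your setup):

```bash
# Restart Logstash so the updated pipeline configuration is loaded (systemd install assumed)
sudo systemctl restart logstash
```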
-
-### Step 4: View incoming logs in Microsoft Sentinel
-
-1. Verify that messages are being sent to the output plugin.
-
-1. From the Microsoft Sentinel navigation menu, click **Logs**. Under the **Tables** heading, expand the **Custom Logs** category. Find and click the name of the table you specified (with a `_CL` suffix) in the configuration.
-
- :::image type="content" source="./media/connect-logstash/logstash-custom-logs-menu.png" alt-text="Screenshot of log stash custom logs.":::
-
-1. To see records in the table, query the table by using the table name as the schema.
-
- :::image type="content" source="./media/connect-logstash/logstash-custom-logs-query.png" alt-text="Screenshot of a log stash custom logs query.":::
-
-## Monitor output plugin audit logs
-
-To monitor the connectivity and activity of the Microsoft Sentinel output plugin, enable the appropriate Logstash log file. See the [Logstash Directory Layout](https://www.elastic.co/guide/en/logstash/current/dir-layout.html#dir-layout) document for the log file location.
-
-If you are not seeing any data in this log file, generate and send some events locally (through the input and filter plugins) to make sure the output plugin is receiving data. Microsoft Sentinel will support only issues relating to the output plugin.
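One way to generate such local test events is with Logstash's `generator` input plugin. The pipeline below is an illustrative sketch only; the table name, message, and event count are placeholders:

```ruby
# Illustrative test pipeline: emit a few synthetic events to the output plugin
input {
  generator {
    message => "Microsoft Sentinel output plugin connectivity test"
    count => 3
  }
}
output {
  microsoft-logstash-output-azure-loganalytics {
    workspace_id => "<your workspace id>"
    workspace_key => "<your workspace key>"
    custom_log_table_name => "connectivityTest"
  }
}
```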
-
-## Next steps
-
-In this document, you learned how to use Logstash to connect external data sources to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:
-- Learn how to [get visibility into your data and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-syslog.md
- Title: Connect Syslog data to Microsoft Sentinel
-description: Connect any machine or appliance that supports Syslog to Microsoft Sentinel by using an agent on a Linux machine between the appliance and Microsoft Sentinel.
---- Previously updated : 06/18/2024--
-# Collect data from Linux-based sources using Syslog
-
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
--
-**Syslog** is an event logging protocol that is common to Linux. You can use the Syslog daemon built into Linux devices and appliances to collect local events of the types you specify, and have it send those events to Microsoft Sentinel using the **Log Analytics agent for Linux** (formerly known as the OMS agent).
-
-This article describes how to connect your data sources to Microsoft Sentinel using Syslog. For more information about supported connectors for this method, see [Data connectors reference](data-connectors-reference.md).
-
-Learn how to [collect Syslog with the Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-syslog), including how to configure Syslog and create a DCR.
-
-> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
->
-> For information about deploying Syslog logs with the Azure Monitor Agent, review the [options for streaming logs in the CEF and Syslog format to Microsoft Sentinel](connect-cef-syslog-options.md).
-
-## Architecture
-
-When the Log Analytics agent is installed on your VM or appliance, the installation script configures the local Syslog daemon to forward messages to the agent on UDP port 25224. After receiving the messages, the agent sends them to your Log Analytics workspace over HTTPS, where they are ingested into the Syslog table in **Microsoft Sentinel > Logs**.
-
-For more information, see [Syslog data sources in Azure Monitor](/azure/azure-monitor/agents/data-sources-syslog).
--
-For some device types that don't allow local installation of the Log Analytics agent, the agent can be installed instead on a dedicated Linux-based log forwarder. The originating device must be configured to send Syslog events to the Syslog daemon on this forwarder instead of the local daemon. The Syslog daemon on the forwarder sends events to the Log Analytics agent over UDP. If this Linux forwarder is expected to collect a high volume of Syslog events, its Syslog daemon sends events to the agent over TCP instead. In either case, the agent then sends the events from there to your Log Analytics workspace in Microsoft Sentinel.
--
-> [!NOTE]
-> - If your appliance supports **Common Event Format (CEF) over Syslog**, a more complete data set is collected, and the data is parsed at collection. You should choose this option and follow the instructions in [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](connect-common-event-format.md).
->
-> - Log Analytics supports collection of messages sent by the **rsyslog** or **syslog-ng** daemons, where rsyslog is the default. The default syslog daemon on version 5 of Red Hat Enterprise Linux (RHEL), CentOS, and Oracle Linux (**sysklog**) is *not supported* for syslog event collection. To collect syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog.
-
-There are three steps to configuring Syslog collection:
-- **Configure your Linux device or appliance**. This refers to the device on which the Log Analytics agent will be installed, whether it is the same device that originates the events or a log collector that will forward them.
-
-- **Configure your application's logging settings** corresponding to the location of the Syslog daemon that will be sending events to the agent.
-
-- **Configure the Log Analytics agent itself**. This is done from within Microsoft Sentinel, and the configuration is sent to all installed agents.
-
-## Prerequisites
-
-Before you begin, install the solution for **Syslog** from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
-
-## Configure your Linux machine or appliance
-
-1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
-
-1. From the connectors gallery, select **Syslog** and then select **Open connector page**.
-
- If your device type is listed in the Microsoft Sentinel **Data connectors gallery**, choose the connector for your device instead of the generic Syslog connector. If there are extra or special instructions for your device type, you will see them, along with custom content like workbooks and analytics rule templates, on the connector page for your device.
-
-1. Install the Linux agent. Under **Choose where to install the agent:**
-
- |Machine type |Instructions |
- |||
- |**For an Azure Linux VM** | 1. Expand **Install agent on Azure Linux virtual machine**. <br><br>2. Select the **Download & install agent for Azure Linux Virtual machines >** link.<br><br>3. In the **Virtual machines** blade, select a virtual machine to install the agent on, and then select **Connect**. Repeat this step for each VM you wish to connect. |
- |**For any other Linux machine** | 1. Expand **Install agent on a non-Azure Linux Machine** <br><br>2. Select the **Download & install agent for non-Azure Linux machines >** link.<br><br>3. In the **Agents management** blade, select the **Linux servers** tab, then copy the command for **Download and onboard agent for Linux** and run it on your Linux machine.<br><br> If you want to keep a local copy of the Linux agent installation file, select the **Download Linux Agent** link above the "Download and onboard agent" command. |
--
- > [!NOTE]
- > Make sure you configure security settings for these devices according to your organization's security policy. For example, you can configure the network settings to align with your organization's network security policy, and change the ports and protocols in the daemon to align with the security requirements.
-
-### Using the same machine to forward both plain Syslog *and* CEF messages
-
-You can use your existing [CEF log forwarder machine](connect-log-forwarder.md) to collect and forward logs from plain Syslog sources as well. However, you must perform the following steps to avoid sending events in both formats to Microsoft Sentinel, as that will result in duplication of events.
-
-Having already set up [data collection from your CEF sources](connect-common-event-format.md), and having configured the Log Analytics agent:
-
-1. On each machine that sends logs in CEF format, you must edit the Syslog configuration file to remove the facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also be sent in Syslog. See [Configure Syslog on Linux agent](/azure/azure-monitor/agents/data-sources-syslog#configure-syslog-on-linux-agent) for detailed instructions on how to do this.
-
-1. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in Microsoft Sentinel. This ensures that the configuration change you made in the previous step does not get overwritten.
-
- ```bash
- sudo -u omsagent python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable
- ```
-
-## Configure your device's logging settings
-
-Many device types have their own data connectors appearing in the **Data connectors** gallery. Some of these connectors require special additional instructions to properly set up log collection in Microsoft Sentinel. These instructions can include the implementation of a parser based on a Kusto function.
-
-All connectors listed in the gallery will display any specific instructions on their respective connector pages in the portal, as well as in their sections of the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page.
-
-If the instructions on your data connector's page in Microsoft Sentinel indicate that the Kusto functions are deployed as [Advanced Security Information Model (ASIM)](normalization.md) parsers, make sure that you have the ASIM parsers deployed to your workspace.
-
-Use the link in the data connector page to deploy your parsers, or follow the instructions from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/ASIM).
-
-For more information, see [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md).
-
-## Configure the Log Analytics agent
-
-1. At the bottom of the Syslog connector blade, select the **Open your workspace agents configuration >** link.
-
-1. In the **Legacy agents management** page, add the facilities for the connector to collect. Select **Add facility** and choose from the drop-down list of facilities.
-
- - Add the facilities that your syslog appliance includes in its log headers.
-
- - If you want to use anomalous SSH login detection with the data that you collect, add **auth** and **authpriv**. See the [following section](#configure-the-syslog-connector-for-anomalous-ssh-login-detection) for additional details.
-
-1. When you have added all the facilities that you want to monitor, clear the check boxes for any severities you don't want to collect. By default, all severities are selected.
-
-1. Select **Apply**.
-
-1. On your VM or appliance, make sure you're sending the facilities that you specified.
-
-## Find your data
-
-1. To query the syslog log data in **Logs**, type `Syslog` in the query window.
-
- (Some connectors using the Syslog mechanism might store their data in tables other than `Syslog`. Consult your connector's section in the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page.)
-
-1. You can use the query parameters described in [Using functions in Azure Monitor log queries](/azure/azure-monitor/logs/functions) to parse your Syslog messages. You can then save the query as a new Log Analytics function and use it as a new data type.
-
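For example, a query along the following lines (the parse pattern is illustrative and assumes standard sshd "Accepted ... for ... from ..." messages) extracts the account and source IP from successful SSH logins:

```kusto
// Illustrative parse of successful SSH logins from auth-facility Syslog messages
Syslog
| where TimeGenerated > ago(24h)
| where Facility in ("auth", "authpriv")
| where SyslogMessage has "Accepted"
| parse SyslogMessage with * "Accepted " Method " for " TargetUser " from " SourceIP " port " *
| project TimeGenerated, Computer, Method, TargetUser, SourceIP
```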
-### Configure the Syslog connector for anomalous SSH login detection
-
-> [!IMPORTANT]
-> Anomalous SSH login detection is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-Microsoft Sentinel can apply machine learning (ML) to the syslog data to identify anomalous Secure Shell (SSH) login activity. Scenarios include:
-- Impossible travel – when two successful login events occur from two locations that are impossible to reach within the timeframe of the two login events.
-
-- Unexpected location – the location from where a successful login event occurred is suspicious. For example, the location has not been seen recently.
-
-This detection requires a specific configuration of the Syslog data connector:
-
-1. For step 2 under [Configure the Log Analytics agent](#configure-the-log-analytics-agent) above, make sure that both **auth** and **authpriv** are selected as facilities to monitor, and that all the severities are selected.
-
-2. Allow sufficient time for syslog information to be collected. Then, navigate to **Microsoft Sentinel - Logs**, and copy and paste the following query:
-
- ```kusto
- Syslog
- | where Facility in ("authpriv","auth")
- | extend c = extract( "Accepted\\s(publickey|password|keyboard-interactive/pam)\\sfor ([^\\s]+)",1,SyslogMessage)
- | where isnotempty(c)
- | count
- ```
-
- Change the **Time range** if required, and select **Run**.
-
- If the resulting count is zero, confirm the configuration of the connector and that the monitored computers do have successful login activity for the time period you specified for your query.
-
- If the resulting count is greater than zero, your syslog data is suitable for anomalous SSH login detection. You enable this detection from **Analytics** > **Rule templates** > **(Preview) Anomalous SSH Login Detection**.
-
-## Next steps
-In this document, you learned how to connect Syslog on-premises appliances to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:
-- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
-- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Create Codeless Connector Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector-legacy.md
- Title: Legacy codeless connector for Microsoft Sentinel
-description: Legacy codeless connector instructions in Microsoft Sentinel using the Codeless Connector Platform (CCP).
--- Previously updated : 11/22/2023-
-# Create a legacy codeless connector for Microsoft Sentinel
-
-> [!IMPORTANT]
->There's a newer version of the Codeless Connector Platform (CCP). For more information on the **new CCP**, see [Create a codeless connector (Preview)](create-codeless-connector.md).
->
-
-Reference this document if you need to maintain or update a data connector based on this older, legacy version of the CCP.
-
-The CCP provides partners, advanced users, and developers with the ability to create custom connectors, connect them, and ingest data to Microsoft Sentinel. Connectors created via the CCP can be deployed via API, an ARM template, or as a solution in the Microsoft Sentinel [content hub](sentinel-solutions.md).
-
-Connectors created using CCP are fully SaaS, without any requirements for service installations, and also include [health monitoring](monitor-data-connector-health.md) and full support from Microsoft Sentinel.
-
-Create your data connector by defining JSON configurations, with settings for how the data connector page in Microsoft Sentinel looks along with polling settings that define how the connection functions.
-
-> [!IMPORTANT]
-> This version of Codeless Connector Platform (CCP) is in PREVIEW, but is also considered **Legacy**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-
-**Use the following steps to create your CCP connector and connect to your data source from Microsoft Sentinel**:
-
-> [!div class="checklist"]
-> * Configure the connector's user interface
-> * Configure the connector's polling settings
-> * Deploy your connector to your Microsoft Sentinel workspace
-> * Connect Microsoft Sentinel to your data source and start ingesting data
-
-This article describes the syntax used in the CCP JSON configurations and procedures for deploying your connector via API, an ARM template, or a Microsoft Sentinel solution.
-
-## Prerequisites
-
-Before building a connector, we recommend that you understand how your data source behaves and exactly how Microsoft Sentinel will need to connect.
-
-For example, you'll need to know the types of authentication, pagination, and API endpoints that are required for successful connections.
-
-## Create a connector JSON configuration file
-
-Your custom CCP connector has two primary JSON sections needed for deployment. Fill in these areas to define how your connector is displayed in the Azure portal and how it connects Microsoft Sentinel to your data source.
-
-- `connectorUiConfig`. Defines the visual elements and text displayed on the data connector page in Microsoft Sentinel. For more information, see [Configure your connector's user interface](#configure-your-connectors-user-interface).
-
-- `pollingConfig`. Defines how Microsoft Sentinel collects data from your data source. For more information, see [Configure your connector's polling settings](#configure-your-connectors-polling-settings).
-
-Then, if you deploy your codeless connector via ARM, you'll wrap these sections in the ARM template for data connectors.
-
-Review [other CCP data connectors](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors#codeless-connector-platform-ccp-preview--native-microsoft-sentinel-polling) as examples or download the example template, [DataConnector_API_CCP_template.json (Preview)](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors#build-the-connector).
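-
-Structurally, the two sections are sibling objects in a single JSON file. The following minimal skeleton shows only the top-level shape; the properties that go inside each section are described in the rest of this article:
-
-```json
-{
-    "connectorUiConfig": {
-    },
-    "pollingConfig": {
-    }
-}
-```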
-
-## Configure your connector's user interface
-
-This section describes the configuration options available to customize the user interface of the data connector page.
-
-The following image shows a sample data connector page, highlighted with numbers that correspond to notable areas of the user interface:
--
-1. **Title**. The title displayed for your data connector.
-1. **Logo**. The icon displayed for your data connector. Customizing this is only possible when deploying as part of a solution.
-1. **Status**. Indicates whether or not your data connector is connected to Microsoft Sentinel.
-1. **Data charts**. Displays relevant queries and the amount of ingested data in the last two weeks.
-1. **Instructions tab**. Includes a **Prerequisites** section, with a list of minimal validations before the user can enable the connector, and an **Instructions** section that guides the user through enabling the connector. This section can include text, buttons, forms, tables, and other common widgets to simplify the process.
-1. **Next steps tab**. Includes useful information for understanding how to find data in the event logs, such as sample queries.
-
-Here's the `connectorUiConfig` sections and syntax needed to configure the user interface:
-
-|Property Name |Type |Description |
-|:|:||
-|**availability** | `{`<br>`"status": 1,`<br>`"isPreview":` Boolean<br>`}` | <br> **status**: **1** indicates the connector is generally available to customers. <br>**isPreview**: indicates whether to add a (Preview) suffix to the connector name. |
-|**connectivityCriteria** | `{`<br>`"type": SentinelKindsV2,`<br>`"value": APIPolling`<br>`}` | An object that defines how to verify if the connector is correctly defined. Use the values indicated here.|
-|**dataTypes** | [dataTypes[]](#datatypes) | A list of all data types for your connector, and a query to fetch the time of the last event for each data type. |
-|**descriptionMarkdown** | String | A description for the connector with the ability to add markdown language to enhance it. |
-|**graphQueries** | [graphQueries[]](#graphqueries) | Queries that present data ingestion over the last two weeks in the **Data charts** pane.<br><br>Provide either one query for all of the data connector's data types, or a different query for each data type. |
-|**graphQueriesTableName** | String | Defines the name of the Log Analytics table from which data for your queries is pulled. <br><br>The table name can be any string, but must end in `_CL`. For example: `TableName_CL`|
-|**instructionsSteps** | [instructionSteps[]](#instructionsteps) | An array of widget parts that explain how to install the connector, displayed on the **Instructions** tab. |
-|**metadata** | [metadata](#metadata) | Metadata displayed under the connector description. |
-|**permissions** | [permissions[]](#permissions) | The information displayed under the **Prerequisites** section of the UI, which lists the permissions required to enable or disable the connector. |
-|**publisher** | String | This is the text shown in the **Provider** section. |
-|**sampleQueries** | [sampleQueries[]](#samplequeries) | Sample queries for the customer to understand how to find the data in the event log, to be displayed in the **Next steps** tab. |
-|**title** | String |Title displayed in the data connector page. |
-
-Putting all these pieces together is complicated. Use the [connector page user experience validation tool](#validate-the-data-connector-page-user-experience) to test out the components you put together.
-
-### dataTypes
-
-|Array Value |Type |Description |
-||||
-| **name** | String | A meaningful description for the `lastDataReceivedQuery`, including support for a variable. <br><br>Example: `{{graphQueriesTableName}}` |
-| **lastDataReceivedQuery** | String | A KQL query that returns one row, and indicates the last time data was received, or no data if there is no relevant data. <br><br>Example: `{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)` |
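-
-For example, a `dataTypes` entry that reuses the `{{graphQueriesTableName}}` variable and the query from the table above might look like this:
-
-```json
-"dataTypes": [
-    {
-        "name": "{{graphQueriesTableName}}",
-        "lastDataReceivedQuery": "{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)"
-    }
-]
-```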
-
-### graphQueries
-
-Defines a query that presents data ingestion over the last two weeks in the **Data charts** pane.
-
-Provide either one query for all of the data connector's data types, or a different query for each data type.
-
-|Array Value |Type |Description |
-||||
-|**metricName** | String | A meaningful name for your graph. <br><br>Example: `Total data received` |
-|**legend** | String | The string that appears in the legend to the right of the chart, including a variable reference.<br><br>Example: `{{graphQueriesTableName}}` |
-|**baseQuery** | String | The query that filters for relevant events, including a variable reference. <br><br>Example: `TableName_CL | where ProviderName == "myprovider"` or `{{graphQueriesTableName}}` |
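-
-For example, a single `graphQueries` entry that covers all of the connector's data types might look like this minimal sketch:
-
-```json
-"graphQueries": [
-    {
-        "metricName": "Total data received",
-        "legend": "{{graphQueriesTableName}}",
-        "baseQuery": "{{graphQueriesTableName}}"
-    }
-]
-```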
-
-### instructionSteps
-
-This section provides parameters that define the set of instructions that appear on your data connector page in Microsoft Sentinel.
-
-|Array Property |Type |Description |
-||||
-| **title** | String | Optional. Defines a title for your instructions. |
-| **description** | String | Optional. Defines a meaningful description for your instructions. |
-| **innerSteps** | Array | Optional. Defines an array of inner instruction steps. |
-| **instructions** | Array of [instructions](#instructions) | Required. Defines an array of instructions of a specific parameter type. |
-| **bottomBorder** | Boolean | Optional. When `true`, adds a bottom border to the instructions area on the connector page in Microsoft Sentinel |
-| **isComingSoon** | Boolean | Optional. When `true`, adds a **Coming soon** title on the connector page in Microsoft Sentinel |
-
-#### instructions
-
-Displays a group of instructions, with various options as parameters and the ability to nest more instructionSteps in groups.
-
-| Parameter | Array property | Description |
-|--|--|-|
-| **APIKey** | [APIKey](#apikey) | Add placeholders to your connector's JSON configuration file. |
-| **CopyableLabel** | [CopyableLabel](#copyablelabel) | Shows a text field with a copy button at the end. When the button is selected, the field's value is copied.|
-| **InfoMessage** | [InfoMessage](#infomessage) | Defines an inline information message.
-| **InstructionStepsGroup** | [InstructionStepsGroup](#instructionstepsgroup) | Displays a group of instructions, optionally expanded or collapsible, in a separate instructions section.|
-| **InstallAgent** | [InstallAgent](#installagent) | Displays a link to other portions of Azure to accomplish various installation requirements. |
-
-#### APIKey
-
-You may want to create a JSON configuration file template, with placeholder parameters, to reuse across multiple connectors, or even to create a connector with data that you don't currently have.
-
-To create placeholder parameters, define an additional array named `userRequestPlaceHoldersInput` in the [Instructions](#instructions) section of your [CCP JSON configuration](#create-a-connector-json-configuration-file) file, using the following syntax:
-
-```json
-"instructions": [
- {
- "parameters": {
- "enable": "true",
- "userRequestPlaceHoldersInput": [
- {
- "displayText": "Organization Name",
- "requestObjectKey": "apiEndpoint",
- "placeHolderName": "{{placeHolder}}"
- }
- ]
- },
- "type": "APIKey"
- }
- ]
-```
-
-The `userRequestPlaceHoldersInput` parameter includes the following attributes:
-
-|Name |Type |Description |
-||||
-|**DisplayText** | String | Defines the text box display value, which is displayed to the user when connecting. |
-|**RequestObjectKey** |String | Defines the ID in the request section of the **pollingConfig** to substitute the placeholder value with the user provided value. <br><br>If you don't use this attribute, use the `PollingKeyPaths` attribute instead. |
-|**PollingKeyPaths** |String |Defines an array of [JsonPath](https://www.npmjs.com/package/JSONPath) objects that directs the API call to anywhere in the template, to replace a placeholder value with a user value.<br><br>**Example**: `"pollingKeyPaths":["$.request.queryParameters.test1"]` <br><br>If you don't use this attribute, use the `RequestObjectKey` attribute instead. |
-|**PlaceHolderName** |String |Defines the name of the placeholder parameter in the JSON template file. This can be any unique value, such as `{{placeHolder}}`. |
-
-#### CopyableLabel
-
- Example:
--
-**Sample code**:
-
-```json
-{
- "parameters": {
- "fillWith": [
- "WorkspaceId",
- "PrimaryKey"
- ],
- "label": "Here are some values you'll need to proceed.",
- "value": "Workspace is {0} and PrimaryKey is {1}"
- },
- "type": "CopyableLabel"
-}
-```
-
-| Array Value |Type |Description |
-|||-|
-|**fillWith** | ENUM | Optional. Array of environment variables used to populate a placeholder. Separate multiple placeholders with commas. For example: `{0},{1}` <br><br>Supported values: `workspaceId`, `workspaceName`, `primaryKey`, `MicrosoftAwsAccount`, `subscriptionId` |
-|**label** | String | Defines the text for the label above a text box. |
-|**value** | String | Defines the value to present in the text box, supports placeholders. |
-|**rows** | Rows | Optional. Defines the rows in the user interface area. By default, set to **1**. |
-|**wideLabel** |Boolean | Optional. Determines a wide label for long strings. By default, set to `false`. |
-
-#### InfoMessage
-
-Here's an example of an inline information message:
--
-In contrast, the following image shows a *non*-inline information message:
---
-|Array Value |Type |Description |
-||||
-|**text** | String | Define the text to display in the message. |
-|**visible** | Boolean | Determines whether the message is displayed. |
-|**inline** | Boolean | Determines how the information message is displayed. <br><br>- `true`: (Recommended) Shows the information message embedded in the instructions. <br>- `false`: Adds a blue background. |
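-
-Based on the values above, an inline **InfoMessage** instruction might be sketched as follows; the message text is only an illustration:
-
-```json
-{
-    "parameters": {
-        "text": "Data is typically available in the workspace within a few minutes of connecting.",
-        "visible": true,
-        "inline": true
-    },
-    "type": "InfoMessage"
-}
-```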
-
-#### InstructionStepsGroup
-
-Here's an example of an expandable instruction group:
--
-|Array Value |Type |Description |
-||||
-|**title** | String | Defines the title for the instruction step. |
-|**canCollapseAllSections** | Boolean | Optional. Determines whether the section is a collapsible accordion or not. |
-|**noFxPadding** | Boolean | Optional. If `true`, reduces the height padding to save space. |
-|**expanded** | Boolean | Optional. If `true`, shows as expanded by default. |
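-
-Following the same `parameters`/`type` pattern as the other instruction types, a collapsible group might be sketched as follows. The nested `instructionSteps` array and its contents are illustrative assumptions only:
-
-```json
-{
-    "parameters": {
-        "title": "Advanced configuration",
-        "canCollapseAllSections": true,
-        "expanded": false,
-        "instructionSteps": [
-            {
-                "title": "Optional step",
-                "description": "Describe the optional configuration here.",
-                "instructions": []
-            }
-        ]
-    },
-    "type": "InstructionStepsGroup"
-}
-```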
-
-For a detailed example, see the configuration JSON for the [Windows DNS connector](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Windows%20Server%20DNS/Data%20Connectors/template_DNS.JSON).
-
-#### InstallAgent
-
-Some **InstallAgent** types appear as a button, others will appear as a link. Here are examples of both:
---
-|Array Values |Type |Description |
-||||
-|**linkType** | ENUM | Determines the link type, as one of the following values: <br><br>`InstallAgentOnWindowsVirtualMachine`<br>`InstallAgentOnWindowsNonAzure`<br> `InstallAgentOnLinuxVirtualMachine`<br> `InstallAgentOnLinuxNonAzure`<br>`OpenSyslogSettings`<br>`OpenCustomLogsSettings`<br>`OpenWaf`<br> `OpenAzureFirewall` `OpenMicrosoftAzureMonitoring` <br> `OpenFrontDoors` <br>`OpenCdnProfile` <br>`AutomaticDeploymentCEF` <br> `OpenAzureInformationProtection` <br> `OpenAzureActivityLog` <br> `OpenIotPricingModel` <br> `OpenPolicyAssignment` <br> `OpenAllAssignmentsBlade` <br> `OpenCreateDataCollectionRule` |
-|**policyDefinitionGuid** | String | Required when using the **OpenPolicyAssignment** linkType. For policy-based connectors, defines the GUID of the built-in policy definition. |
-|**assignMode** | ENUM | Optional. For policy-based connectors, defines the assign mode, as one of the following values: `Initiative`, `Policy` |
-|**dataCollectionRuleType** | ENUM | Optional. For DCR-based connectors, defines the type of data collection rule type as one of the following: `SecurityEvent`, `ForwardEvent` |
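-
-For example, an **InstallAgent** instruction that links to installing the agent on a non-Azure Linux machine might be sketched as follows, using the same `parameters`/`type` pattern as the other instruction types:
-
-```json
-{
-    "parameters": {
-        "linkType": "InstallAgentOnLinuxNonAzure"
-    },
-    "type": "InstallAgent"
-}
-```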
--
-### metadata
-
-This section provides metadata in the data connector UI under the **Description** area.
-
-| Collection Value |Type |Description |
-||||
-| **kind** | String | Defines the kind of ARM template you're creating. Always use `dataConnector`. |
-| **source** | String | Describes your data source, using the following syntax: <br>`{`<br>`"kind":`string<br>`"name":`string<br>`}`|
-| **author** | String | Describes the data connector author, using the following syntax: <br>`{`<br>`"name":`string<br>`}`|
-| **support** | String | Describes the support provided for the data connector, using the following syntax: <br>`{`<br>`"tier":`string,<br>`"name":`string,<br>`"email":`string,<br>`"link":`URL string<br>`}`|
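-
-Putting these fields together, a `metadata` section might look like the following sketch; the vendor details are placeholders:
-
-```json
-"metadata": {
-    "kind": "dataConnector",
-    "source": {
-        "kind": "solution",
-        "name": "Contoso Security Solution"
-    },
-    "author": {
-        "name": "Contoso"
-    },
-    "support": {
-        "tier": "developer",
-        "name": "Contoso",
-        "email": "support@contoso.com",
-        "link": "https://contoso.com/support"
-    }
-}
-```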
-
-### permissions
-
-|Array value |Type |Description |
-||||
-| **customs** | String | Describes any custom permissions required for your data connection, in the following syntax: <br>`{`<br>`"name":`string`,`<br>`"description":`string<br>`}` <br><br>Example: The **customs** value displays in Microsoft Sentinel **Prerequisites** section with a blue informational icon. In the GitHub example, this correlates to the line **GitHub API personal token Key: You need access to GitHub personal token...** |
-| **licenses** | ENUM | Defines the required licenses, as one of the following values: `OfficeIRM`,`OfficeATP`, `Office365`, `AadP1P2`, `Mcas`, `Aatp`, `Mdatp`, `Mtp`, `IoT` <br><br>Example: The **licenses** value displays in Microsoft Sentinel as: **License: Required Azure AD Premium P2**|
-| **resourceProvider** | [resourceProvider](#resourceprovider) | Describes any prerequisites for your Azure resource. <br><br>Example: The **resourceProvider** value displays in Microsoft Sentinel **Prerequisites** section as: <br>**Workspace: read and write permission is required.**<br>**Keys: read permissions to shared keys for the workspace are required.**|
-| **tenant** | array of ENUM values<br>Example:<br><br>`"tenant": [`<br>`"GlobalAdmin",`<br>`"SecurityAdmin"`<br>`]`<br> | Defines the required permissions, as one or more of the following values: `"GlobalAdmin"`, `"SecurityAdmin"`, `"SecurityReader"`, `"InformationProtection"` <br><br>Example: The **tenant** value displays in Microsoft Sentinel as: **Tenant Permissions: Requires `Global Administrator` or `Security Administrator` on the workspace's tenant**|
-
-> [!IMPORTANT]
-> Microsoft recommends that you use roles with the fewest permissions. This helps improve security for your organization. Global Administrator is a highly privileged role that should be limited to emergency scenarios when you can't use an existing role.
->
-
-#### resourceProvider
-
-|sub array value |Type |Description |
-||||
-| **provider** | ENUM | Describes the resource provider, with one of the following values: <br>- `Microsoft.OperationalInsights/workspaces` <br>- `Microsoft.OperationalInsights/solutions`<br>- `Microsoft.OperationalInsights/workspaces/datasources`<br>- `microsoft.aadiam/diagnosticSettings`<br>- `Microsoft.OperationalInsights/workspaces/sharedKeys`<br>- `Microsoft.Authorization/policyAssignments` |
-| **providerDisplayName** | String | A list item under **Prerequisites** that will display a red "x" or green checkmark when the **requiredPermissions** are validated in the connector page. Example, `"Workspace"` |
-| **permissionsDisplayText** | String | Display text for *Read*, *Write*, or *Read and Write* permissions that should correspond to the values configured in **requiredPermissions** |
-| **requiredPermissions** | `{`<br>`"action":`Boolean`,`<br>`"delete":`Boolean`,`<br>`"read":`Boolean`,`<br>`"write":`Boolean<br>`}` | Describes the minimum permissions required for the connector. |
-| **scope** | ENUM | Describes the scope of the data connector, as one of the following values: `"Subscription"`, `"ResourceGroup"`, `"Workspace"` |
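-
-Combining the values above, a `permissions` section with one custom prerequisite and one Azure resource prerequisite might look like this sketch; the custom name and description are placeholders:
-
-```json
-"permissions": {
-    "customs": [
-        {
-            "name": "Product API token",
-            "description": "A read-only API token for the product is required."
-        }
-    ],
-    "resourceProvider": [
-        {
-            "provider": "Microsoft.OperationalInsights/workspaces",
-            "providerDisplayName": "Workspace",
-            "permissionsDisplayText": "read and write permissions are required.",
-            "requiredPermissions": {
-                "read": true,
-                "write": true,
-                "delete": true
-            },
-            "scope": "Workspace"
-        }
-    ]
-}
-```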
-
-### sampleQueries
-
-|array value |Type |Description |
-||||
-| **description** | String | A meaningful description for the sample query.<br><br>Example: `Top 10 vulnerabilities detected` |
-| **query** | String | Sample query used to fetch the data type's data. <br><br>Example: `{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10` |
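-
-For example, a single sample query entry, reusing the query from the table above, might look like this:
-
-```json
-"sampleQueries": [
-    {
-        "description": "Ten most recent events",
-        "query": "{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10"
-    }
-]
-```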
-
-### Configure other link options
-
-To define an inline link using markdown, use the following example. Here a link is provided in an instruction description:
-
-```json
-{
- "title": "",
- "description": "Make sure to configure the machine's security according to your organization's security policy\n\n\n[Learn more >](https://aka.ms/SecureCEF)"
-}
-```
-
-To define a link as an ARM template, use the following example as a guide:
-
-```json
-{
- "title": "Azure Resource Manager (ARM) template",
- "description": "1. Click the **Deploy to Azure** button below.\n\n\t[![Deploy To Azure](https://aka.ms/deploytoazurebutton)]({URL to custom ARM template})"
-}
-```
-
-### Validate the data connector page user experience
-Follow these steps to render and validate the connector user experience.
-
-1. Access the test utility at this URL: https://aka.ms/sentineldataconnectorvalidateurl
-1. Go to **Microsoft Sentinel** > **Data connectors**.
-1. Select **Import**, and then select a JSON file that contains only the `connectorUiConfig` section of your data connector.
-
-For more information on this validation tool, see the [Build the connector](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors#build-the-connector) instructions in our GitHub build guide.
-
-> [!NOTE]
-> Because the **APIKey** instruction parameter is specific to the codeless connector, temporarily remove this section to use the validation tool, or it will fail.
->
-
-## Configure your connector's polling settings
-
-This section describes the configuration for how data is polled from your data source for a codeless data connector.
-
-The following code shows the syntax of the `pollingConfig` section of the [CCP configuration](#create-a-connector-json-configuration-file) file.
-
-```json
-"pollingConfig": {
- "auth": {
- },
- "request": {
- },
- "response": {
- },
- "paging": {
- }
- }
-```
-
-The `pollingConfig` section includes the following properties:
-
-| Name | Type | Description |
-| | -- | |
-| **auth** | String | Describes the authentication properties for polling the data. For more information, see [auth configuration](#auth-configuration). |
-| <a name="authtype"></a>**auth.authType** | String | Mandatory. Defines the type of authentication, nested inside the `auth` object, as one of the following values: `Basic`, `APIKey`, `OAuth2` |
-| **request** | Nested JSON | Mandatory. Describes the request payload for polling the data, such as the API endpoint. For more information, see [request configuration](#request-configuration). |
-| **response** | Nested JSON | Mandatory. Describes the response object and nested message returned from the API when polling the data. For more information, see [response configuration](#response-configuration). |
-| **paging** | Nested JSON | Optional. Describes the pagination payload when polling the data. For more information, see [paging configuration](#paging-configuration). |
-
-For more information, see [Sample pollingConfig code](#sample-pollingconfig-code).
-
-### auth configuration
-
-The `auth` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters, depending on the type defined in the [authType](#authtype) element:
-
-#### Basic authType parameters
-
-| Name | Type | Description |
-| - | - | -- |
-| **Username** | String | Mandatory. Defines user name. |
-| **Password** | String | Mandatory. Defines user password. |
-
-#### APIKey authType parameters
-
-| Name | Type | Description |
-| - | - | -- |
-|**APIKeyName** |String | Optional. Defines the name of your API key, as one of the following values: <br><br>- `XAuthToken` <br>- `Authorization` |
-|**IsAPIKeyInPostPayload** |Boolean | Determines where your API key is defined. <br><br>True: API key is defined in the POST request payload <br>False: API key is defined in the header |
-|**APIKeyIdentifier** | String | Optional. Defines the name of the identifier for the API key. <br><br>For example, where the authorization is defined as `"Authorization": "token <secret>"`, this parameter is defined as: `{APIKeyIdentifier: "token"}` |
-
-#### OAuth2 authType parameters
-
-The Codeless Connector Platform supports OAuth 2.0 authorization code grant.
-
-The Authorization Code grant type is used by confidential and public clients to exchange an authorization code for an access token.
-
-After the user returns to the client via the redirect URL, the application will get the authorization code from the URL and use it to request an access token.
--
-| Name | Type | Description |
-| - | - | -- |
-| **FlowName** | String | Mandatory. Defines an OAuth2 flow.<br><br>Supported value: `AuthCode` - requires an authorization flow |
-| **AccessToken** | String | Optional. Defines an OAuth2 access token, relevant when the access token doesn't expire. |
-| **AccessTokenPrepend** | String | Optional. Defines an OAuth2 access token prepend. Default is `Bearer`. |
-| **RefreshToken** | String | Mandatory for OAuth2 auth types. Defines the OAuth2 refresh token. |
-| **TokenEndpoint** | String | Mandatory for OAuth2 auth types. Defines the OAuth2 token service endpoint. |
-| **AuthorizationEndpoint** | String | Optional. Defines the OAuth2 authorization service endpoint. Used only during onboarding or when renewing a refresh token. |
-| **RedirectionEndpoint** | String | Optional. Defines a redirection endpoint during onboarding. |
-| **AccessTokenExpirationDateTimeInUtc** | String | Optional. Defines the access token expiration datetime, in UTC format. Relevant when the access token doesn't expire or has a far-future expiration datetime. |
-| **RefreshTokenExpirationDateTimeInUtc** | String | Mandatory for OAuth2 auth types. Defines the refresh token expiration datetime in UTC format. |
-| **TokenEndpointHeaders** | Dictionary<string, object> | Optional. Defines the headers when calling an OAuth2 token service endpoint.<br><br>Define a string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
-| **AuthorizationEndpointHeaders** | Dictionary<string, object> | Optional. Defines the headers when calling an OAuth2 authorization service endpoint. Used only during onboarding or when renewing a refresh token.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ... }` |
-| **AuthorizationEndpointQueryParameters** | Dictionary<string, object> | Optional. Defines query parameters when calling an OAuth2 authorization service endpoint. Used only during onboarding or when renewing a refresh token.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ... }` |
-| **TokenEndpointQueryParameters** | Dictionary<string, object> | Optional. Define query parameters when calling OAuth2 token service endpoint.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ... }` |
-| **IsTokenEndpointPostPayloadJson** | Boolean | Optional, default is false. Determines whether query parameters are in JSON format and set in the request POST payload. |
-| **IsClientSecretInHeader** | Boolean | Optional, default is false. Determines whether the `client_id` and `client_secret` values are defined in the header, as is done in the Basic authentication schema, instead of in the POST payload. |
-| **RefreshTokenLifetimeinSecAttributeName** | String | Optional. Defines the attribute name from the token endpoint response, specifying the lifetime of the refresh token, in seconds. |
-| **IsJwtBearerFlow** | Boolean | Optional, default is false. Determines whether you are using JWT. |
-| **JwtHeaderInJson** | Dictionary<string, object> | Optional. Define the JWT headers in JSON format.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>...}` |
-| **JwtClaimsInJson** | Dictionary<string, object> | Optional. Defines JWT claims in JSON format.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ...}` |
-| **JwtPem** | String | Optional. Defines a secret key, in PEM Pkcs1 format: `'--BEGIN RSA PRIVATE KEY--\r\n{privatekey}\r\n--END RSA PRIVATE KEY--\r\n'`<br><br>Make sure to keep the `'\r\n'` code in place. |
-| **RequestTimeoutInSeconds** | Integer | Optional. Defines the timeout, in seconds, when calling the token service endpoint. The default is 180 seconds. |
-
-Here's an example of how an OAuth2 configuration might look:
-
-```json
-"pollingConfig": {
- "auth": {
- "authType": "OAuth2",
- "authorizationEndpoint": "https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&prompt=consent",
- "redirectionEndpoint": "https://portal.azure.com/TokenAuthorize",
- "tokenEndpoint": "https://oauth2.googleapis.com/token",
- "authorizationEndpointQueryParameters": {},
- "tokenEndpointHeaders": {
- "Accept": "application/json"
- },
- "TokenEndpointQueryParameters": {},
- "isClientSecretInHeader": false,
- "scope": "https://www.googleapis.com/auth/admin.reports.audit.readonly",
- "grantType": "authorization_code",
- "contentType": "application/x-www-form-urlencoded",
- "FlowName": "AuthCode"
- },
-```
-
-#### Session authType parameters
-
-| Name | Type | Description |
-| | -- | |
-| **QueryParameters** | Dictionary<string, object> | Optional. A list of query parameters, in the serialized `dictionary<string, string>` format: <br><br>`{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
-| **IsPostPayloadJson** | Boolean | Optional. Determines whether the query parameters are in JSON format. |
-| **Headers** | Dictionary<string, object> | Optional. Defines the header used when calling the endpoint to get the session ID, and when calling the endpoint API. <br><br> Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
-| **SessionTimeoutInMinutes** | String | Optional. Defines a session timeout, in minutes. |
-| **SessionIdName** | String | Optional. Defines an ID name for the session. |
-| **SessionLoginRequestUri** | String | Optional. Defines a session login request URI. |
-
-### Request configuration
-
-The `request` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
-
-| Name | Type | Description |
-| - | - | -- |
-| **apiEndpoint** | String | Mandatory. Defines the endpoint to pull data from. |
-| **httpMethod** | String | Mandatory. Defines the API method: `GET` or `POST` |
-| **queryTimeFormat** | String, or *UnixTimestamp* or *UnixTimestampInMills* | Mandatory. Defines the format used to define the query time. <br><br>This value can be a string, or in *UnixTimestamp* or *UnixTimestampInMills* format to indicate the query start and end time in the UnixTimestamp. |
-| **startTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query start time. |
-| **endTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query end time. |
-| **queryTimeIntervalAttributeName** | String | Optional. Defines the name of the attribute that defines the query time interval. |
-| **queryTimeIntervalDelimiter** | String | Optional. Defines the query time interval delimiter. |
-| **queryWindowInMin** | Integer | Optional. Defines the available query window, in minutes. <br><br>Minimum value: `5` |
-| **queryParameters** | Dictionary<string, object> | Optional. Defines the parameters passed in the query in the [`eventsJsonPaths`](#eventsjsonpaths) path. <br><br>Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }`. |
-| **queryParametersTemplate** | String | Optional. Defines the query parameters template to use when passing query parameters in advanced scenarios. <br><br>For example: `"queryParametersTemplate": "{'cid': 1234567, 'cmd': 'reporting', 'format': 'siem', 'data': { 'from': '{_QueryWindowStartTime}', 'to': '{_QueryWindowEndTime}'}, '{_APIKeyName}': '{_APIKey}'}"` <br><br>`{_QueryWindowStartTime}` and `{_QueryWindowEndTime}` are only supported in the `queryParameters` and `queryParametersTemplate` request parameters. <br><br>`{_APIKeyName}` and `{_APIKey}` are only supported in the `queryParametersTemplate` request parameter. |
-| **isPostPayloadJson** | Boolean | Optional. Determines whether the POST payload is in JSON format. |
-| **rateLimitQPS** | Double | Optional. Defines the number of calls or queries allowed in a second. |
-| **timeoutInSeconds** | Integer | Optional. Defines the request timeout, in seconds. |
-| **retryCount** | Integer | Optional. Defines the number of request retries to try if needed. |
-| **headers** | Dictionary<string, object> | Optional. Defines the request header value, in the serialized `dictionary<string, object>` format: `{'<attr_name>': '<serialized val>', '<attr_name>': '<serialized val>'... }` |
-
-### Response configuration
-
-The `response` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
-
-|Name |Type |Description |
-||||
-| <a name="eventsjsonpaths"></a> **eventsJsonPaths** | List of strings | Mandatory. Defines the path to the message in the response JSON. <br><br>A JSON path expression specifies a path to an element, or a set of elements, in a JSON structure |
-| **successStatusJsonPath** | String | Optional. Defines the path to the success message in the response JSON. |
-| **successStatusValue** | String | Optional. Defines the path to the success message value in the response JSON |
-| **isGzipCompressed** | Boolean | Optional. Determines whether the response is compressed in a gzip file. |
--
-The following code shows an example of the [eventsJsonPaths](#eventsjsonpaths) value for a top-level message:
-
-```json
-"eventsJsonPaths": [
- "$"
- ]
-```
--
-### Paging configuration
-
-The `paging` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
-
-|Name |Type |Description |
-||||
-| **pagingType** | String | Mandatory. Determines the paging type to use in results, as one of the following values: `None`, `LinkHeader`, `NextPageToken`, `NextPageUrl`, `Offset` |
-| **linkHeaderTokenJsonPath** | String | Optional. Defines the JSON path to link header in the response JSON, if the `LinkHeader` isn't defined in the response header. |
-| **nextPageTokenJsonPath** | String | Optional. Defines the path to a next page token JSON. |
-| **hasNextFlagJsonPath** |String | Optional. Defines the path to the `HasNextPage` flag attribute. |
-| **nextPageTokenResponseHeader** | String | Optional. Defines the *next page* token header name in the response. |
-| **nextPageParaName** | String | Optional. Determines the *next page* name in the request. |
-| **nextPageRequestHeader** | String | Optional. Determines the *next page* header name in the request. |
-| **nextPageUrl** | String | Optional. Determines the *next page* URL, if it's different from the initial request URL. |
-| **nextPageUrlQueryParameters** | String | Optional. Determines the *next page* URL's query parameters if it's different from the initial request's URL. <br><br>Define the string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <val>, '<attr_name>': <val>... }` |
-| **offsetParaName** | String | Optional. Defines the name of the offset parameter. |
-| **pageSizeParaName** | String | Optional. Defines the name of the page size parameter. |
-| **PageSize** | Integer | Defines the paging size. |
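-
-For example, a `paging` section for an API that returns a next-page token in its response body might be sketched as follows; the JSON path and parameter names are hypothetical and depend on your API:
-
-```json
-"paging": {
-    "pagingType": "NextPageToken",
-    "nextPageTokenJsonPath": "$.nextPageToken",
-    "nextPageParaName": "pageToken",
-    "pageSizeParaName": "pageSize",
-    "PageSize": 100
-}
-```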
---
-### Sample pollingConfig code
-
-The following code shows an example of the `pollingConfig` section of the [CCP configuration](#create-a-connector-json-configuration-file) file:
-
-```json
-"pollingConfig": {
- "auth": {
- "authType": "APIKey",
- "APIKeyIdentifier": "token",
- "APIKeyName": "Authorization"
- },
- "request": {
- "apiEndpoint": "https://api.github.com/../{{placeHolder1}}/audit-log",
- "rateLimitQPS": 50,
- "queryWindowInMin": 15,
- "httpMethod": "Get",
- "queryTimeFormat": "yyyy-MM-ddTHH:mm:ssZ",
- "retryCount": 2,
- "timeoutInSeconds": 60,
- "headers": {
- "Accept": "application/json",
- "User-Agent": "Scuba"
- },
- "queryParameters": {
- "phrase": "created:{_QueryWindowStartTime}..{_QueryWindowEndTime}"
- }
- },
- "paging": {
- "pagingType": "LinkHeader",
- "pageSizeParaName": "per_page"
- },
- "response": {
- "eventsJsonPaths": [
- "$"
- ]
- }
-}
-```
--
-## Deploy your connector in Microsoft Sentinel and start ingesting data
-
-After creating your [JSON configuration file](#create-a-connector-json-configuration-file), including both the [user interface](#configure-your-connectors-user-interface) and [polling](#configure-your-connectors-polling-settings) configuration, deploy your connector in your Microsoft Sentinel workspace.
-
-1. Use one of the following options to deploy your data connector.
-
- > [!TIP]
- > The advantage of deploying via an Azure Resource Manager (ARM) template is that several values are built-in to the template, and you don't need to define them manually in an API call.
- >
-
- # [Deploy via ARM template](#tab/deploy-via-arm-template)
-
- Wrap your JSON configuration collections in an ARM template to deploy your connector. To ensure that your data connector gets deployed to the correct workspace, make sure to either define the workspace in the ARM template, or select the workspace when deploying the ARM template.
-
- 1. Prepare an [ARM template JSON file](/azure/templates/microsoft.securityinsights/dataconnectors) for your connector. For example, see the following ARM template JSON files:
-
- - Data connector in the [Slack solution](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SlackAudit/Data%20Connectors/SlackNativePollerConnector/azuredeploy_Slack_native_poller_connector.json)
- - Data connector in the [GitHub solution](https://github.com/Azure/Azure-Sentinel/blob/3d324aed163c1702ba0cab6de203ac0bf4756b8c/Solutions/GitHub/Data%20Connectors/azuredeploy_GitHub_native_poller_connector.json)
-
- 1. In the Azure portal, search for **Deploy a custom template**.
-
- 1. On the **Custom deployment** page, select **Build your own template in the editor** > **Load file**. Browse to and select your local ARM template, and then save your changes.
-
- 1. Select your subscription and resource group, and then enter the Log Analytics workspace where you want to deploy your custom connector.
-
- 1. Select **Review + create** to deploy your custom connector to Microsoft Sentinel.
-
- 1. In Microsoft Sentinel, go to the **Data connectors** page, search for your new connector. Configure it to start ingesting data.
-
- For more information, see [Deploy a local template](../azure-resource-manager/templates/deployment-tutorial-local-template.md?tabs=azure-powershell) in the Azure Resource Manager documentation.
-
- # [Deploy via API](#tab/deploy-via-api)
-
- 1. Authenticate to the Azure API. For more information, see [Getting started with REST](/rest/api/azure/).
-
- 1. Invoke a [CREATE or UPDATE](/rest/api/securityinsights/preview/data-connectors/create-or-update) API call to Microsoft Sentinel to deploy your new connector. In the request body, define the `kind` value as `APIPolling`.
-
- Your data connector is deployed to your Microsoft Sentinel workspace, and is available on the **Data connectors** page.
-
-
-
-1. Configure your data connector to connect your data source and start ingesting data into Microsoft Sentinel. You can connect to your data source either via the portal, as with out-of-the-box data connectors, or via API.
-
- When you use the Azure portal to connect, user data is sent automatically. When you connect via API, you'll need to send the relevant authentication parameters in the API call.
-
- # [Connect via the Azure portal](#tab/connect-via-the-azure-portal)
-
- In your Microsoft Sentinel data connector page, follow the instructions you've provided to connect to your data connector.
-
- The data connector page in Microsoft Sentinel is controlled by the [InstructionSteps](#instructionsteps) configuration in the `connectorUiConfig` element of the [CCP JSON configuration](#create-a-connector-json-configuration-file) file. If you have issues with the user interface connection, make sure that you have the correct configuration for your authentication type.
-
- # [Connect via API](#tab/connect-via-api)
-
- Use the [CONNECT](/rest/api/securityinsights/data-connectors/connect) endpoint to send a PUT method and pass the JSON configuration directly in the body of the message. For more information, see [auth configuration](#auth-configuration).
-
- Use the following API attributes, depending on the [authType](#authtype) defined. For each `authType` parameter, all listed attributes are mandatory and are string values.
-
- |authType |Attributes |
- |||
- |**Basic** | Define: <br>- `kind` as `Basic` <br>- `userName` as your username, in quotes <br>- `password` as your password, in quotes |
- |**APIKey** |Define: <br>- `kind` as `APIKey` <br>- `APIKey` as your full API key string, in quotes|
--
- If you're using a [placeholder data in your template](#apikey), send the data together with the `placeHolderValue` attributes that hold the user data. For example:
-
- ```json
- "requestConfigUserInputValues": [
- {
- "displayText": "<A display name>",
- "placeHolderName": "<A placeholder name>",
- "placeHolderValue": "<A value for the placeholder>",
- "pollingKeyPaths": "<Array of items to use in place of the placeHolderName>"
- }
- ]
- ```
-
-
-
-1. In Microsoft Sentinel, go to the **Logs** page and verify that you see the logs from your data source flowing in to your workspace.
-
-If you don't see data flowing into Microsoft Sentinel, check your data source documentation and troubleshooting resources, check the configuration details, and check the connectivity. For more information, see [Monitor the health of your data connectors](monitor-data-connector-health.md).
-
-### Disconnect your connector
-
-If you no longer need your connector's data, disconnect the connector to stop the data flow.
-
-Use one of the following methods:
-
-- **Azure portal**: In your Microsoft Sentinel data connector page, select **Disconnect**.
-
-- **API**: Use the *DISCONNECT* API to send a PUT call with an empty body to the following URL:
-
-   ```http
-   https://management.azure.com/subscriptions/{{SUB}}/resourceGroups/{{RG}}/providers/Microsoft.OperationalInsights/workspaces/{{WS-NAME}}/providers/Microsoft.SecurityInsights/dataConnectors/{{Connector_Id}}/disconnect?api-version=2021-03-01-preview
-   ```
-
-## Next steps
-
-If you haven't yet, share your new codeless data connector with the Microsoft Sentinel community! Create a solution for your data connector and share it in the Microsoft Sentinel Marketplace.
-
-For more information, see
-- [About Microsoft Sentinel solutions](sentinel-solutions.md).
-- [Data connector ARM template reference](/azure/templates/microsoft.securityinsights/dataconnectors#dataconnectors-objects-1)
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
For more information about the codeless connector platform, see [Create a codele
- [Jamf Protect](data-connectors/jamf-protect.md)
-## Linux
--- [Microsoft Sysmon For Linux](data-connectors/microsoft-sysmon-for-linux.md)- ## Lookout, Inc. - [Lookout (using Azure Function)](data-connectors/lookout.md)
For more information about the codeless connector platform, see [Create a codele
- [MailGuard 365](data-connectors/mailguard-365.md)
-## McAfee
--- [McAfee ePolicy Orchestrator (ePO)](data-connectors/mcafee-epolicy-orchestrator-epo.md)-- [McAfee Network Security Platform](data-connectors/mcafee-network-security-platform.md)- ## Microsoft - [Automated Logic WebCTRL](data-connectors/automated-logic-webctrl.md)
For more information about the codeless connector platform, see [Create a codele
- [Azure Storage Account](data-connectors/azure-storage-account.md) - [Azure Web Application Firewall (WAF)](data-connectors/azure-web-application-firewall-waf.md) - [Azure Batch Account](data-connectors/azure-batch-account.md)-- [Common Event Format (CEF)](data-connectors/common-event-format-cef.md) - [Common Event Format (CEF) via AMA](data-connectors/common-event-format-cef-via-ama.md) - [Windows DNS Events via AMA](data-connectors/windows-dns-events-via-ama.md) - [Azure Event Hubs](data-connectors/azure-event-hub.md)
For more information about the codeless connector platform, see [Create a codele
- [Windows Security Events via AMA](data-connectors/windows-security-events-via-ama.md) - [Azure Service Bus](data-connectors/azure-service-bus.md) - [Azure Stream Analytics](data-connectors/azure-stream-analytics.md)-- [Syslog](data-connectors/syslog.md) - [Syslog via AMA](data-connectors/syslog-via-ama.md) - [Microsoft Defender Threat Intelligence (Preview)](data-connectors/microsoft-defender-threat-intelligence.md) - [Threat intelligence - TAXII](data-connectors/threat-intelligence-taxii.md)
For more information about the codeless connector platform, see [Create a codele
- [Mimecast Intelligence for Microsoft - Microsoft Sentinel (using Azure Functions)](data-connectors/mimecast-intelligence-for-microsoft-microsoft-sentinel.md) - [Mimecast Targeted Threat Protection (using Azure Functions)](data-connectors/mimecast-targeted-threat-protection.md)
-## MongoDB
--- [MongoDB Audit](data-connectors/mongodb-audit.md)- ## MuleSoft - [MuleSoft Cloudhub (using Azure Functions)](data-connectors/mulesoft-cloudhub.md)
-## Nasuni Corporation
--- [Nasuni Edge Appliance](data-connectors/nasuni-edge-appliance.md)- ## NetClean Technologies AB - [Netclean ProActive Incidents](data-connectors/netclean-proactive-incidents.md)
For more information about the codeless connector platform, see [Create a codele
- [Netskope Data Connector (using Azure Functions)](data-connectors/netskope-data-connector.md) - [Netskope Web Transactions Data Connector (using Azure Functions)](data-connectors/netskope-web-transactions-data-connector.md)
-## Netwrix
--- [[Recommended] Netwrix Auditor via AMA](data-connectors/recommended-netwrix-auditor-via-ama.md)-
-## Nginx
--- [NGINX HTTP Server](data-connectors/nginx-http-server.md)- ## Noname Gate, Inc. - [Noname Security for Microsoft Sentinel](data-connectors/noname-security-for-microsoft-sentinel.md)
-## Nozomi Networks
--- [[Recommended] Nozomi Networks N2OS via AMA](data-connectors/recommended-nozomi-networks-n2os-via-ama.md)- ## NXLog Ltd. - [NXLog AIX Audit](data-connectors/nxlog-aix-audit.md)
For more information about the codeless connector platform, see [Create a codele
- [OneLogin IAM Platform(using Azure Functions)](data-connectors/onelogin-iam-platform.md)
-## OpenVPN
--- [OpenVPN Server](data-connectors/openvpn-server.md)-
-## Oracle
--- [Oracle Cloud Infrastructure (using Azure Functions)](data-connectors/oracle-cloud-infrastructure.md)-- [Oracle Database Audit](data-connectors/oracle-database-audit.md)-- [Oracle WebLogic Server (using Azure Functions)](data-connectors/oracle-weblogic-server.md)- ## Orca Security, Inc. - [Orca Security Alerts](data-connectors/orca-security-alerts.md)
-## OSSEC
--- [[Recommended] OSSEC via AMA](data-connectors/recommended-ossec-via-ama.md)- ## Palo Alto Networks -- [[Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA](data-connectors/recommended-palo-alto-networks-cortex-data-lake-cdl-via-ama.md) - [Palo Alto Prisma Cloud CSPM (using Azure Functions)](data-connectors/palo-alto-prisma-cloud-cspm.md) ## Perimeter 81 - [Perimeter 81 Activity Logs](data-connectors/perimeter-81-activity-logs.md)
-## Ping Identity
--- [[Recommended] PingFederate via AMA](data-connectors/recommended-pingfederate-via-ama.md)-
-## PostgreSQL
--- [PostgreSQL Events](data-connectors/postgresql-events.md)- ## Prancer Enterprise - [Prancer Data Connector](data-connectors/prancer-data-connector.md)
For more information about the codeless connector platform, see [Create a codele
- [Proofpoint TAP (using Azure Functions)](data-connectors/proofpoint-tap.md) - [Proofpoint On Demand Email Security (using Azure Functions)](data-connectors/proofpoint-on-demand-email-security.md)
-## Pulse Secure
--- [Pulse Connect Secure](data-connectors/pulse-connect-secure.md)- ## Qualys - [Qualys Vulnerability Management (using Azure Functions)](data-connectors/qualys-vulnerability-management.md) - [Qualys VM KnowledgeBase (using Azure Functions)](data-connectors/qualys-vm-knowledgebase.md)
-## Ridge Security Technology Inc.
--- [RIDGEBOT - data connector for Microsoft Sentinel](data-connectors/ridgebot-data-connector-for-microsoft-sentinel.md)-
-## RSA
--- [RSA® SecurID (Authentication Manager)](data-connectors/rsa-securid-authentication-manager.md)- ## Rubrik, Inc. - [Rubrik Security Cloud data connector (using Azure Functions)](data-connectors/rubrik-security-cloud-data-connector.md)
For more information about the codeless connector platform, see [Create a codele
- [Snowflake (using Azure Functions)](data-connectors/snowflake.md)
-## SonicWall Inc
--- [SonicWall Firewall](data-connectors/sonicwall-firewall.md)- ## Sonrai Security - [Sonrai Data Connector](data-connectors/sonrai-data-connector.md)
For more information about the codeless connector platform, see [Create a codele
## Sophos - [Sophos Endpoint Protection (using Azure Functions)](data-connectors/sophos-endpoint-protection.md)-- [Sophos XG Firewall](data-connectors/sophos-xg-firewall.md) - [Sophos Cloud Optix](data-connectors/sophos-cloud-optix.md)
-## Squid
--- [Squid Proxy](data-connectors/squid-proxy.md)- ## Symantec -- [Symantec Endpoint Protection](data-connectors/symantec-endpoint-protection.md)-- [Symantec VIP](data-connectors/symantec-vip.md)-- [Symantec ProxySG](data-connectors/symantec-proxysg.md) - [Symantec Integrated Cyber Defense Exchange](data-connectors/symantec-integrated-cyber-defense-exchange.md) ## TALON CYBER SECURITY LTD
For more information about the codeless connector platform, see [Create a codele
## Trend Micro -- [Trend Micro Deep Security](data-connectors/trend-micro-deep-security.md)-- [Trend Micro TippingPoint](data-connectors/trend-micro-tippingpoint.md) - [Trend Vision One (using Azure Functions)](data-connectors/trend-vision-one.md)
-## TrendMicro
--- [[Recommended] Trend Micro Apex One via AMA](data-connectors/recommended-trend-micro-apex-one-via-ama.md)-
-## Ubiquiti
--- [Ubiquiti UniFi (using Azure Functions)](data-connectors/ubiquiti-unifi.md)- ## Valence Security Inc. - [SaaS Security](data-connectors/saas-security.md) ## Vectra AI, Inc -- [AI Vectra Stream](data-connectors/ai-vectra-stream.md) - [Vectra XDR (using Azure Functions)](data-connectors/vectra-xdr.md) ## VMware -- [VMware vCenter](data-connectors/vmware-vcenter.md) - [VMware Carbon Black Cloud (using Azure Functions)](data-connectors/vmware-carbon-black-cloud.md)-- [VMware ESXi](data-connectors/vmware-esxi.md)-
-## WatchGuard Technologies
--- [WatchGuard Firebox](data-connectors/watchguard-firebox.md) ## WithSecure - [WithSecure Elements API (Azure Function) (using Azure Functions)](data-connectors/withsecure-elements-api-azure.md)-- [WithSecure Elements via Connector](data-connectors/withsecure-elements-via-connector.md) ## Wiz, Inc.
For more information about the codeless connector platform, see [Create a codele
- [Zoom Reports (using Azure Functions)](data-connectors/zoom-reports.md)
-## Zscaler
--- [Zscaler Private Access](data-connectors/zscaler-private-access.md)- [comment]: <> (DataConnector includes end) ## Next steps
sentinel Ai Vectra Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ai-vectra-stream.md
- Title: "AI Vectra Stream connector for Microsoft Sentinel"
-description: "Learn how to install the connector AI Vectra Stream to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# AI Vectra Stream connector for Microsoft Sentinel
-
-The AI Vectra Stream connector lets you send network metadata collected by Vectra sensors across the network and cloud to Microsoft Sentinel.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | VectraStream_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Vectra AI](https://www.vectra.ai/support) |
-
-## Query samples
-
-**List all DNS Queries**
-
- ```kusto
-VectraStream
-
- | where metadata_type == "metadat_dns"
-
- | project orig_hostname, id_orig_h, resp_hostname, id_resp_h, id_resp_p, qtype_name, ['query'], answers
- ```
-
-**Number of DNS requests per type**
-
- ```kusto
-VectraStream
-
- | where metadata_type == "metadat_dns"
-
- | summarize count() by type_name
- ```
-
-**Top 10 of query to non existing domain**
-
- ```kusto
-VectraStream
-
- | where metadata_type == "metadat_dns"
-
- | where rcode_name == "NXDomain"
-
- | summarize Count=count() by tostring(query)
-
- | order by Count desc
-
- | limit 10
- ```
-
-**Host and Web sites using non-ephemeral Diffie-Hellman key exchange**
-
- ```kusto
-VectraStream
-
- | where metadata_type == "metadat_dns"
-
- | where cipher contains "TLS_RSA"
-
- | distinct orig_hostname, id_orig_h, id_resp_h, server_name, cipher
-
- | project orig_hostname, id_orig_h, id_resp_h, server_name, cipher
- ```
---
-## Prerequisites
-
-To integrate with AI Vectra Stream make sure you have:
-
-- **Vectra AI Brain**: must be configured to export Stream metadata in JSON
-
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected **VectraStream** which is deployed with the Microsoft Sentinel Solution.
-
-1. Install and onboard the agent for Linux
-
-Install the Linux agent on a separate Linux instance.
-
-> Logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Follow the configuration steps below to get Vectra Stream metadata into Microsoft Sentinel. The Log Analytics agent is used to send the custom JSON into Azure Monitor, which stores the metadata in a custom table. For more information, see the [Azure Monitor documentation](/azure/azure-monitor/agents/data-sources-json).
-1. Download the config file for the Log Analytics agent: VectraStream.conf (located in the Connector folder within the Vectra solution: https://aka.ms/sentinel-aivectrastream-conf).
-2. Log in to the server where you installed the Azure Log Analytics agent.
-3. Copy VectraStream.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder.
-4. Edit VectraStream.conf as follows:
-
-   i. Configure an alternate port to send data to, if desired. The default port is 29009.
-
-   ii. Replace **workspace_id** with the actual value of your workspace ID.
-5. Save your changes and restart the Azure Log Analytics agent for Linux service with the following command:
-   `sudo /opt/microsoft/omsagent/bin/service_control restart`
--
-3. Configure and connect Vectra AI Stream
-
-Configure Vectra AI Brain to forward Stream metadata in JSON format to your Microsoft Sentinel workspace via the Log Analytics Agent.
-
-From the Vectra UI, navigate to Settings > Cognito Stream and Edit the destination configuration:
-
-- Select Publisher: RAW JSON
-
-- Set the server IP or hostname (the host that runs the Log Analytics agent)
-
-- Set all the ports to **29009** (this port can be modified if required)
-
-- Save
-
-- Set Log types (select all available log types)
-
-- Click on **Save**
-
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vectraaiinc.vectra_sentinel_solution?tab=Overview) in the Azure Marketplace.
sentinel Common Event Format Cef https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/common-event-format-cef.md
- Title: "Common Event Format (CEF) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Common Event Format (CEF) to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Common Event Format (CEF) connector for Microsoft Sentinel
-
-Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by many security vendors to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2223902&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
--
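This connector page ships no query samples, so the following is a minimal sketch for exploring incoming CEF events; `DeviceVendor` and `DeviceProduct` are standard CommonSecurityLog columns, and the one-day window is only an example.

```kusto
// Summarize recent CEF events by vendor and product
CommonSecurityLog
| where TimeGenerated > ago(1d)
| summarize Events = count() by DeviceVendor, DeviceProduct
| top 10 by Events
```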
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-commoneventformat?tab=Overview) in the Azure Marketplace.
sentinel Mcafee Epolicy Orchestrator Epo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mcafee-epolicy-orchestrator-epo.md
- Title: "McAfee ePolicy Orchestrator (ePO) connector for Microsoft Sentinel"
-description: "Learn how to install the connector McAfee ePolicy Orchestrator (ePO) to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# McAfee ePolicy Orchestrator (ePO) connector for Microsoft Sentinel
-
-The McAfee ePolicy Orchestrator data connector provides the capability to ingest [McAfee ePO](https://www.mcafee.com/enterprise/en-us/products/epolicy-orchestrator.html) events into Microsoft Sentinel through the syslog.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Kusto function alias** | McAfeeEPOEvent |
-| **Kusto function url** | https://aka.ms/sentinel-McAfeeePO-parser |
-| **Log Analytics table(s)** | Syslog(McAfeeePO)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Top 10 Sources**
-
- ```kusto
-McAfeeEPOEvent
-
- | summarize count() by DvcHostname
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
->This data connector depends on a parser based on a Kusto function, [**McAfeeEPOEvent**](https://aka.ms/sentinel-McAfeeePO-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
-
-1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
-2. Select **Apply below configuration to my machines** and select the facilities and severities.
-3. Click **Save**.
--
-3. Configure McAfee ePolicy Orchestrator event forwarding to Syslog server
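After forwarding is configured, a quick ingestion check might look like the sketch below. It assumes the **McAfeeEPOEvent** parser deployed with the solution is already active; until it activates, you can query the underlying Syslog table instead.

```kusto
// Count recent McAfee ePO events per reporting host
McAfeeEPOEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by DvcHostname
| order by Events desc
```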
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-mcafeeepo?tab=Overview) in the Azure Marketplace.
sentinel Mcafee Network Security Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mcafee-network-security-platform.md
- Title: "McAfee Network Security Platform connector for Microsoft Sentinel"
-description: "Learn how to install the connector McAfee Network Security Platform to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# McAfee Network Security Platform connector for Microsoft Sentinel
-
-The [McAfee® Network Security Platform](https://www.mcafee.com/enterprise/en-us/products/network-security-platform.html) data connector provides the capability to ingest McAfee® Network Security Platform events into Microsoft Sentinel.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (McAfeeNSPEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Sources**
-
- ```kusto
-McAfeeNSPEvent
-
- | summarize count() by tostring(DvcHostname)
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
-> This data connector depends on a parser based on a Kusto function, [**McAfeeNSPEvent**](https://aka.ms/sentinel-mcafeensp-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
--
-> [!NOTE]
-> This data connector has been developed using McAfee® Network Security Platform version: 10.1.x
-
-1. Install and onboard the agent for Linux or Windows
-
- Install the agent on the Server where the McAfee® Network Security Platform logs are forwarded.
-
- Logs from McAfee® Network Security Platform Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure McAfee® Network Security Platform event forwarding
-
- Follow the configuration steps below to get McAfee® Network Security Platform logs into Microsoft Sentinel.
-
- 1. While creating a profile, to make sure that events are formatted correctly, enter the following text in the Message text box:
-
- ```text
- <SyslogAlertForwarderNSP>:|SENSOR_ALERT_UUID|ALERT_TYPE|ATTACK_TIME|ATTACK_NAME|ATTACK_ID
- |ATTACK_SEVERITY|ATTACK_SIGNATURE|ATTACK_CONFIDENCE|ADMIN_DOMAIN|SENSOR_NAME|INTERFACE
- |SOURCE_IP|SOURCE_PORT|DESTINATION_IP|DESTINATION_PORT|CATEGORY|SUB_CATEGORY
- |DIRECTION|RESULT_STATUS|DETECTION_MECHANISM|APPLICATION_PROTOCOL|NETWORK_PROTOCOL|
- ```
-
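With the profile in place, parsed alerts become available through the **McAfeeNSPEvent** function deployed with the solution. A minimal sketch to confirm that events are arriving:

```kusto
// Show a few recent McAfee NSP events
McAfeeNSPEvent
| where TimeGenerated > ago(1h)
| take 10
```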
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-mcafeensp?tab=Overview) in the Azure Marketplace.
sentinel Microsoft Sysmon For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/microsoft-sysmon-for-linux.md
- Title: "Microsoft Sysmon For Linux connector for Microsoft Sentinel"
-description: "Learn how to install the connector Microsoft Sysmon For Linux to connect your data source to Microsoft Sentinel."
-- Previously updated : 08/20/2024------
-# Microsoft Sysmon For Linux connector for Microsoft Sentinel
-
-[Sysmon for Linux](https://github.com/Sysinternals/SysmonForLinux) provides detailed information about process creations, network connections and other system events.
-The Sysmon for Linux connector uses [Syslog](https://aka.ms/sysLogInfo) as its data ingestion method. This solution depends on ASIM to work as expected.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (Sysmon)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Events by ActingProcessName**
-
- ```kusto
-_Im_ProcessCreate_LinuxSysmonV03
-
- | summarize count() by ActingProcessName
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
->This data connector depends on ASIM parsers, which are based on Kusto functions, to work as expected.
-
- The following functions are available:
----
-[Read more](https://aka.ms/AboutASIM)
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
-
-1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
-2. Select **Apply below configuration to my machines** and select the facilities and severities.
-3. Click **Save**.
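Once Sysmon events are flowing through Syslog, the ASIM parser used in the query sample above exposes them in normalized form. This is a minimal sketch; it assumes the `TargetProcessName` field from the ASIM ProcessEvent schema.

```kusto
// Top created processes over the last day, via the ASIM process-creation parser
_Im_ProcessCreate_LinuxSysmonV03
| where TimeGenerated > ago(1d)
| summarize Events = count() by TargetProcessName
| top 10 by Events
```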
----
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sysmonforlinux?tab=Overview) in the Azure Marketplace.
sentinel Mongodb Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mongodb-audit.md
- Title: "MongoDB Audit connector for Microsoft Sentinel"
-description: "Learn how to install the connector MongoDB Audit to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# MongoDB Audit connector for Microsoft Sentinel
-
-MongoDB data connector provides the capability to ingest [MongoDBAudit](https://www.mongodb.com/) into Microsoft Sentinel. Refer to [MongoDB documentation](https://www.mongodb.com/docs/manual/tutorial/getting-started/) for more information.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | MongoDBAudit_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**MongoDBAudit - All Activities.**
-
- ```kusto
-MongoDBAudit
-
- | sort by TimeGenerated desc
- ```
---
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a Kusto function-based parser, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias MongoDBAudit, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/MongoDBAudit/Parsers/MongoDBAudit.txt). On the second line of the query, enter the hostname(s) of your MongoDBAudit device(s) and any other unique identifiers for the log stream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the MongoDB server where the logs are generated.
-
-> Logs from MongoDB Enterprise Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure MongoDBAudit to write logs to files
-
-Edit mongod.conf file (for Linux) or mongod.cfg (for Windows) to write logs to files:
-
->**dbPath**: data/db
-
->**path**: data/db/auditLog.json
-
-Set the following parameters: **dbPath** and **path**. Refer to the [MongoDB documentation for more details](https://www.mongodb.com/docs/manual/tutorial/configure-auditing/)
-
-3. Configure the logs to be collected
-
-Configure the custom log directory to be collected
---
-1. Select the link above to open your workspace advanced settings
-2. From the left pane, select **Settings**, select **Custom Logs** and click **+Add custom log**
-3. Click **Browse** to upload a sample of a MongoDBAudit log file. Then, click **Next >**
-4. Select **Timestamp** as the record delimiter and click **Next >**
-5. Select **Windows** or **Linux** and enter the path to MongoDBAudit logs based on your configuration
-6. After entering the path, click the '+' symbol to apply, then click **Next >**
-7. Add **MongoDBAudit** as the custom log Name (the '_CL' suffix will be added automatically) and click **Done**.
-
-Validate connectivity
-
-It may take upwards of 20 minutes until your logs start to appear in Microsoft Sentinel.
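A simple way to confirm ingestion is to query the raw custom table directly, which works even before the MongoDBAudit parser function activates; the one-day window is only an example.

```kusto
// Count MongoDB audit records ingested over the last day
MongoDBAudit_CL
| where TimeGenerated > ago(1d)
| count
```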
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-mongodbaudit?tab=Overview) in the Azure Marketplace.
sentinel Nasuni Edge Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nasuni-edge-appliance.md
- Title: "Nasuni Edge Appliance connector for Microsoft Sentinel"
-description: "Learn how to install the connector Nasuni Edge Appliance to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Nasuni Edge Appliance connector for Microsoft Sentinel
-
-The [Nasuni](https://www.nasuni.com/) connector allows you to easily connect your Nasuni Edge Appliance Notifications and file system audit logs with Microsoft Sentinel. This gives you more insight into activity within your Nasuni infrastructure and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Nasuni](https://github.com/nasuni-labs/Azure-Sentinel) |
-
-## Query samples
-
-**Last 1000 generated events**
-
- ```kusto
-Syslog
-
- | top 1000 by TimeGenerated
- ```
-
-**All events by facility except for cron**
-
- ```kusto
-Syslog
-
- | summarize count() by Facility
- | where Facility != "cron"
- ```
---
-## Vendor installation instructions
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Follow the configuration steps below to configure your Linux machine to send Nasuni event information to Microsoft Sentinel. Refer to the [Azure Monitor Agent documentation](/azure/azure-monitor/agents/agents-overview) for additional details on these steps.
-Configure the facilities you want to collect and their severities.
-1. Select the link below to open your workspace agents configuration, and select the Syslog tab.
-2. Select Add facility and choose from the drop-down list of facilities. Repeat for all the facilities you want to add.
-3. Mark the check boxes for the desired severities for each facility.
-4. Click Apply.
---
-3. Configure Nasuni Edge Appliance settings
-
-Follow the instructions in the [Nasuni Management Console Guide](https://view.highspot.com/viewer/629a633ae5b4caaf17018daa?iid=5e6fbfcbc7143309f69fcfcf) to configure Nasuni Edge Appliances to forward syslog events. Use the IP address or hostname of the Linux device running the Azure Monitor Agent in the Servers configuration field for the syslog settings.
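Nasuni notifications arrive in the standard Syslog table, so you can scope queries to the collector. This is a sketch; `<collector-hostname>` is a placeholder for the Linux machine running the Azure Monitor Agent.

```kusto
// Show recent syslog records reported by the collector machine
Syslog
| where Computer == "<collector-hostname>" // placeholder: replace with your collector's name
| take 20
```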
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nasunicorporation.nasuni-sentinel?tab=Overview) in the Azure Marketplace.
sentinel Nginx Http Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nginx-http-server.md
- Title: "NGINX HTTP Server connector for Microsoft Sentinel"
-description: "Learn how to install the connector NGINX HTTP Server to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# NGINX HTTP Server connector for Microsoft Sentinel
-
-The NGINX HTTP Server data connector provides the capability to ingest [NGINX](https://nginx.org/en/) HTTP Server events into Microsoft Sentinel. Refer to [NGINX Logs documentation](https://nginx.org/en/docs/http/ngx_http_log_module.html) for more information.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | NGINX_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Clients (Source IP)**
-
- ```kusto
-NGINXHTTPServer
-
- | summarize count() by SrcIpAddr
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a Kusto function-based parser, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias NGINXHTTPServer, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/NGINX%20HTTP%20Server/Parsers/NGINXHTTPServer.txt). The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the NGINX HTTP Server where the logs are generated.
-
-> Logs from NGINX HTTP Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure the logs to be collected
-
-Configure the custom log directory to be collected
---
-1. Select the link above to open your workspace advanced settings
-2. From the left pane, select **Data**, select **Custom Logs** and click **Add+**
-3. Click **Browse** to upload a sample of an NGINX HTTP Server log file (e.g. access.log or error.log). Then, click **Next >**
-4. Select **New line** as the record delimiter and click **Next >**
-5. Select **Windows** or **Linux** and enter the path to NGINX HTTP logs based on your configuration. Example:
-6. After entering the path, click the '+' symbol to apply, then click **Next >**
-7. Add **NGINX_CL** as the custom log Name and click **Done**
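Once the custom log is defined, entries land in the raw NGINX_CL table (the NGINXHTTPServer parser referenced above is layered on top of it). A minimal ingestion check over the last hour:

```kusto
// Count NGINX log records ingested per 5-minute bin
NGINX_CL
| where TimeGenerated > ago(1h)
| summarize Events = count() by bin(TimeGenerated, 5m)
| order by TimeGenerated asc
```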
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-nginx?tab=Overview) in the Azure Marketplace.
sentinel Openvpn Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/openvpn-server.md
- Title: "OpenVPN Server connector for Microsoft Sentinel"
-description: "Learn how to install the connector OpenVPN Server to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# OpenVPN Server connector for Microsoft Sentinel
-
-The [OpenVPN](https://github.com/OpenVPN) data connector provides the capability to ingest OpenVPN Server logs into Microsoft Sentinel.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog(OpenVPN)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Sources**
-
- ```kusto
-OpenVpnEvent
-
- | summarize count() by tostring(SrcIpAddr)
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a parser based on a Kusto function, [**OpenVpnEvent**](https://aka.ms/sentinel-openvpn-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the server to which the OpenVPN logs are forwarded.
-
-> Logs from OpenVPN Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
-
-1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
-2. Select **Apply below configuration to my machines** and select the facilities and severities.
-3. Click **Save**.
--
-3. Check your OpenVPN logs.
-
-OpenVPN server logs are written to the common syslog file (the exact location depends on the Linux distribution; for example, /var/log/messages).
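To confirm that the parser is picking OpenVPN entries out of the syslog stream, a minimal sketch over the last day (assumes the **OpenVpnEvent** function from the solution):

```kusto
// Show a few recent OpenVPN events
OpenVpnEvent
| where TimeGenerated > ago(1d)
| take 10
```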
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-openvpn?tab=Overview) in the Azure Marketplace.
sentinel Oracle Database Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/oracle-database-audit.md
- Title: "Oracle Database Audit connector for Microsoft Sentinel"
-description: "Learn how to install the connector Oracle Database Audit to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Oracle Database Audit connector for Microsoft Sentinel
-
-The Oracle DB Audit data connector provides the capability to ingest [Oracle Database](https://www.oracle.com/database/technologies/) audit events into Microsoft Sentinel through the syslog. Refer to [documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/dbseg/introduction-to-auditing.html#GUID-94381464-53A3-421B-8F13-BD171C867405) for more information.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (OracleDatabaseAudit)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Sources**
-
- ```kusto
-OracleDatabaseAuditEvent
-
- | summarize count() by SrcDvcHostname
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a Kusto function-based parser, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias Oracle Database Audit, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/OracleDatabaseAudit/Parsers/OracleDatabaseAuditEvent.txt). The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
-
-1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
-2. Select **Apply below configuration to my machines** and select the facilities and severities.
-3. Click **Save**.
--
-3. Configure Oracle Database Audit events to be sent to Syslog
-
-Follow the instructions below:
-
- 1. Create the Oracle database. [Follow these steps.](/azure/virtual-machines/workloads/oracle/oracle-database-quick-create)
-
- 2. Log in to the Oracle database created in the previous step. [Follow these steps.](https://docs.oracle.com/cd/F49540_01/DOC/server.815/a67772/create.htm)
-
- 3. Enable unified logging over syslog by altering the system to enable unified logging. [Follow these steps.](https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/UNIFIED_AUDIT_COMMON_SYSTEMLOG.html#GUID-9F26BC8E-1397-4B0E-8A08-3B12E4F9ED3A)
-
- 4. Create and enable an audit policy for unified auditing. [Follow these steps.](https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-AUDIT-POLICY-Unified-Auditing.html#GUID-8D6961FB-2E50-46F5-81F7-9AEA314FC693)
-
- 5. Enable syslog and Event Viewer captures for the Unified Audit Trail. [Follow these steps.](https://docs.oracle.com/en/database/oracle/oracle-database/18/dbseg/administering-the-audit-trail.html#GUID-3EFB75DB-AE1C-44E6-B46E-30E5702B0FC4)
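After unified auditing starts flowing over syslog, an hourly event count is a simple ingestion check; this sketch assumes the OracleDatabaseAuditEvent parser deployed with the solution.

```kusto
// Oracle audit events per hour over the last day
OracleDatabaseAuditEvent
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
```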
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-oracledbaudit?tab=Overview) in the Azure Marketplace.
sentinel Oracle Weblogic Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/oracle-weblogic-server.md
- Title: "Oracle WebLogic Server (using Azure Functions) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Oracle WebLogic Server (using Azure Functions) to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Oracle WebLogic Server (using Azure Functions) connector for Microsoft Sentinel
-
-OracleWebLogicServer data connector provides the capability to ingest [OracleWebLogicServer](https://docs.oracle.com/en/middleware/standalone/weblogic-server/https://docsupdatetracker.net/index.html) events into Microsoft Sentinel. Refer to [OracleWebLogicServer documentation](https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/https://docsupdatetracker.net/index.html) for more information.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | OracleWebLogicServer_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Devices**
-
- ```kusto
-OracleWebLogicServerEvent
-
- | summarize count() by DvcHostname
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a Kusto function-based parser, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias OracleWebLogicServerEvent, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/OracleWebLogicServer/Parsers/OracleWebLogicServerEvent.yaml). The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the Oracle WebLogic Server where the logs are generated.
-
-> Logs from Oracle WebLogic Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure the logs to be collected
-
-Configure the custom log directory to be collected
---
-1. Select the link above to open your workspace advanced settings
-2. From the left pane, select **Data**, select **Custom Logs** and click **Add+**
-3. Click **Browse** to upload a sample of an OracleWebLogicServer log file (e.g. server.log). Then, click **Next >**
-4. Select **New line** as the record delimiter and click **Next >**
-5. Select **Windows** or **Linux** and enter the path to OracleWebLogicServer logs based on your configuration. Example:
-6. After entering the path, click the '+' symbol to apply, then click **Next >**
-7. Add **OracleWebLogicServer_CL** as the custom log Name and click **Done**
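As with the other custom-log connectors, you can check the raw OracleWebLogicServer_CL table directly while waiting for the parser function to activate; this is a minimal sketch.

```kusto
// Count WebLogic log records ingested over the last day
OracleWebLogicServer_CL
| where TimeGenerated > ago(1d)
| count
```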
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-oracleweblogicserver?tab=Overview) in the Azure Marketplace.
sentinel Postgresql Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/postgresql-events.md
- Title: "PostgreSQL Events connector for Microsoft Sentinel"
-description: "Learn how to install the connector PostgreSQL Events to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# PostgreSQL Events connector for Microsoft Sentinel
-
-PostgreSQL data connector provides the capability to ingest [PostgreSQL](https://www.postgresql.org/) events into Microsoft Sentinel. Refer to [PostgreSQL documentation](https://www.postgresql.org/docs/current/https://docsupdatetracker.net/index.html) for more information.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Kusto function alias** | PostgreSQLEvent |
-| **Kusto function url** | https://aka.ms/sentinel-postgresql-parser |
-| **Log Analytics table(s)** | PostgreSQL_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**PostgreSQL errors**
-
- ```kusto
-PostgreSQLEvent
-
- | where EventSeverity in~ ('ERROR', 'FATAL')
-
- | sort by EventEndTime
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on the PostgreSQL parser, based on a Kusto function, to work as expected. This parser is installed along with the solution.
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the PostgreSQL server where the logs are generated.
-
-> Logs from PostgreSQL Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure PostgreSQL to write logs to files
-
-1. Edit the postgresql.conf file to write logs to files:
-
->**log_destination** = 'stderr'
-
->**logging_collector** = on
-
-Set the following parameters: **log_directory** and **log_filename**. Refer to the [PostgreSQL documentation for more details](https://www.postgresql.org/docs/current/runtime-config-logging.html)
-
-3. Configure the logs to be collected
-
-Configure the custom log directory to be collected
---
-1. Select the link above to open your workspace advanced settings
-2. From the left pane, select **Settings**, select **Custom Logs** and click **+Add custom log**
-3. Click **Browse** to upload a sample of a PostgreSQL log file. Then, click **Next >**
-4. Select **Timestamp** as the record delimiter and click **Next >**
-5. Select **Windows** or **Linux** and enter the path to PostgreSQL logs based on your configuration (e.g. for some Linux distros the default path is /var/log/postgresql/)
-6. After entering the path, click the '+' symbol to apply, then click **Next >**
-7. Add **PostgreSQL** as the custom log Name (the '_CL' suffix will be added automatically) and click **Done**.
-
-Validate connectivity
-
-It may take upwards of 20 minutes until your logs start to appear in Microsoft Sentinel.
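Once events arrive, the severity distribution is a quick sanity check; this sketch assumes the PostgreSQLEvent parser and its `EventSeverity` field, both referenced above.

```kusto
// Distribution of PostgreSQL events by severity over the last day
PostgreSQLEvent
| where TimeGenerated > ago(1d)
| summarize Events = count() by EventSeverity
| order by Events desc
```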
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-postgresql?tab=Overview) in the Azure Marketplace.
sentinel Pulse Connect Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/pulse-connect-secure.md
- Title: "Pulse Connect Secure connector for Microsoft Sentinel"
-description: "Learn how to install the connector Pulse Connect Secure to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Pulse Connect Secure connector for Microsoft Sentinel
-
-The [Pulse Connect Secure](https://www.pulsesecure.net/products/pulse-connect-secure/) connector allows you to easily connect your Pulse Connect Secure logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigations. Integrating Pulse Connect Secure with Microsoft Sentinel provides more insight into your organization's network and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (PulseConnectSecure)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Failed Logins by User**
-
- ```kusto
-PulseConnectSecure
-
- | where vpn_message startswith 'Login failed'
-
- | summarize count() by vpn_user
-
- | top 10 by count_
- ```
-
-**Top 10 Failed Logins by IP Address**
-
- ```kusto
-PulseConnectSecure
-
- | where vpn_message startswith 'Login failed'
-
- | summarize count() by client_ip
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with Pulse Connect Secure, make sure you have:
-
-- **Pulse Connect Secure**: must be configured to export logs via Syslog
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a Kusto function-based parser, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias Pulse Connect Secure, and load the function code, or click [here](https://aka.ms/sentinel-PulseConnectSecure-parser). On the second line of the query, enter the hostname(s) of your Pulse Connect Secure device(s) and any other unique identifiers for the log stream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
- 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
- 2. Select **Apply below configuration to my machines** and select the facilities and severities.
- 3. Click **Save**.
--
-3. Configure and connect the Pulse Connect Secure
-
-[Follow the instructions](https://help.ivanti.com/ps/help/en_US/PPS/9.1R13/ag/configuring_an_external_syslog_server.htm) to enable syslog streaming of Pulse Connect Secure logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
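With syslog streaming enabled, failed sign-ins can be trended per day using the same `vpn_message` field as the samples above; a minimal sketch:

```kusto
// Failed Pulse Connect Secure logins per day
PulseConnectSecure
| where vpn_message startswith 'Login failed'
| summarize FailedLogins = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```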
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pulseconnectsecure?tab=Overview) in the Azure Marketplace.
sentinel Recommended Netwrix Auditor Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-netwrix-auditor-via-ama.md
- Title: "[Recommended] Netwrix Auditor via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Netwrix Auditor via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023-----
-# [Recommended] Netwrix Auditor via AMA connector for Microsoft Sentinel
-
-Netwrix Auditor data connector provides the capability to ingest [Netwrix Auditor (formerly Stealthbits Privileged Activity Manager)](https://www.netwrix.com/auditor.html) events into Microsoft Sentinel. Refer to [Netwrix documentation](https://helpcenter.netwrix.com/) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Kusto function alias** | NetwrixAuditor |
-| **Kusto function url** | https://aka.ms/sentinel-netwrixauditor-parser |
-| **Log Analytics table(s)** | CommonSecurityLog<br/> |
-| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Netwrix Auditor Events - All Activities.**
- ```kusto
-NetwrixAuditor
-
- | sort by TimeGenerated desc
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Netwrix Auditor via AMA, make sure you have:
-
-- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on the NetwrixAuditor parser, based on a Kusto function, to work as expected. This parser is installed along with the solution.
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-netwrixauditor?tab=Overview) in the Azure Marketplace.
sentinel Recommended Nozomi Networks N2os Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-nozomi-networks-n2os-via-ama.md
- Title: "[Recommended] Nozomi Networks N2OS via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Nozomi Networks N2OS via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023-----
-# [Recommended] Nozomi Networks N2OS via AMA connector for Microsoft Sentinel
-
-The [Nozomi Networks](https://www.nozominetworks.com/) data connector provides the capability to ingest Nozomi Networks Events into Microsoft Sentinel. Refer to the Nozomi Networks [PDF documentation](https://www.nozominetworks.com/resources/data-sheets-brochures-learning-guides/) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (NozomiNetworks)<br/> |
-| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Devices**
- ```kusto
-NozomiNetworksEvents
-
- | summarize count() by DvcHostname
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Nozomi Networks N2OS via AMA, make sure you have:
-
-- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a parser based on a Kusto function, [**NozomiNetworksEvents**](https://aka.ms/sentinel-NozomiNetworks-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-nozominetworks?tab=Overview) in the Azure Marketplace.
sentinel Recommended Ossec Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-ossec-via-ama.md
- Title: "[Recommended] OSSEC via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] OSSEC via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023-----
-# [Recommended] OSSEC via AMA connector for Microsoft Sentinel
-
-OSSEC data connector provides the capability to ingest [OSSEC](https://www.ossec.net/) events into Microsoft Sentinel. Refer to [OSSEC documentation](https://www.ossec.net/docs) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (OSSEC)<br/> |
-| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Top 10 Rules**
- ```kusto
-OSSECEvent
-
- | summarize count() by RuleName
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] OSSEC via AMA, make sure you have:
-
-- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a Kusto function-based parser, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias OSSEC, and load the function code, or click [here](https://aka.ms/sentinel-OSSECEvent-parser). On the second line of the query, enter the hostname(s) of your OSSEC device(s) and any other unique identifiers for the log stream. The function usually takes 10-15 minutes to activate after solution installation/update.
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ossec?tab=Overview) in the Azure Marketplace.
sentinel Recommended Palo Alto Networks Cortex Data Lake Cdl Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-palo-alto-networks-cortex-data-lake-cdl-via-ama.md
- Title: "[Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023-----
-# [Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA connector for Microsoft Sentinel
-
-The [Palo Alto Networks CDL](https://www.paloaltonetworks.com/cortex/cortex-data-lake) data connector provides the capability to ingest [CDL logs](https://docs.paloaltonetworks.com/strata-logging-service/log-reference/log-forwarding-schema-overview) into Microsoft Sentinel.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (PaloAltoNetworksCDL)<br/> |
-| **Data collection rules support** |[Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Destinations**
- ```kusto
-PaloAltoCDLEvent
-
- | where isnotempty(DstIpAddr)
-
- | summarize count() by DstIpAddr
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA, make sure you have:
-
-- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a parser based on a Kusto function, [**PaloAltoCDLEvent**](https://aka.ms/sentinel-paloaltocdl-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) in the Azure Marketplace.
sentinel Recommended Pingfederate Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-pingfederate-via-ama.md
- Title: "[Recommended] PingFederate via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] PingFederate via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 10/23/2023-----
-# [Recommended] PingFederate via AMA connector for Microsoft Sentinel
-
-The [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) data connector provides the capability to ingest [PingFederate events](https://docs.pingidentity.com/bundle/pingfederate-102/page/lly1564002980532.html) into Microsoft Sentinel. Refer to [PingFederate documentation](https://docs.pingidentity.com/bundle/pingfederate-102/page/tle1564002955874.html) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (PingFederate)<br/> |
-| **Data collection rules support** |[Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent)|
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Devices**
- ```kusto
-PingFederateEvent
-
- | summarize count() by DvcHostname
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] PingFederate via AMA, make sure you have:
-
-- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a parser based on a Kusto function, [**PingFederateEvent**](https://aka.ms/sentinel-PingFederate-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pingfederate?tab=Overview) in the Azure Marketplace.
sentinel Recommended Trend Micro Apex One Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-trend-micro-apex-one-via-ama.md
- Title: "[Recommended] Trend Micro Apex One via AMA connector for Microsoft Sentinel"
-description: "Learn how to install the connector [Recommended] Trend Micro Apex One via AMA to connect your data source to Microsoft Sentinel."
-- Previously updated : 11/29/2023-----
-# [Recommended] Trend Micro Apex One via AMA connector for Microsoft Sentinel
-
-The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://aka.ms/sentinel-TrendMicroApex-OneEvents) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://aka.ms/sentinel-TrendMicroApex-OneCentral) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (TrendMicroApexOne)<br/> |
-| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**All logs**
- ```kusto
-
-TMApexOneEvent
-
- | sort by TimeGenerated
- ```
---
-## Prerequisites
-
-To integrate with [Recommended] Trend Micro Apex One via AMA, make sure you have:
-
-- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
->This data connector depends on a parser based on a Kusto function, [**TMApexOneEvent**](https://aka.ms/sentinel-TMApexOneEvent-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-trendmicroapexone?tab=Overview) in the Azure Marketplace.
sentinel Ridgebot Data Connector For Microsoft Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ridgebot-data-connector-for-microsoft-sentinel.md
- Title: "RIDGEBOT - data connector for Microsoft Sentinel"
-description: "Learn how to install the connector RIDGEBOT - data to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# RIDGEBOT - data connector for Microsoft Sentinel
-
-The RidgeBot connector lets users connect RidgeBot with Microsoft Sentinel, allowing creation of Dashboards, Workbooks, Notebooks and Alerts.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [RidgeSecurity](https://ridgesecurity.ai/about-us/) |
-
-## Query samples
-
-**Latest 10 Exploited Risks**
-
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "RidgeSecurity"
-
- | where DeviceEventClassID == "4001"
-
- | order by TimeGenerated desc
-
- | limit 10
- ```
---
-## Prerequisites
-
-To integrate with the RIDGEBOT data connector for Microsoft Sentinel, make sure you have:
-
-- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-- Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
-## Vendor installation instructions
--
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
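RidgeBot events arrive as CEF records in CommonSecurityLog; a quick breakdown by event class (both fields appear in the query sample above) can confirm the forwarder is working:

```kusto
// Count RidgeBot CEF events by event class
CommonSecurityLog
| where DeviceVendor == "RidgeSecurity"
| summarize Events = count() by DeviceEventClassID
| order by Events desc
```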
--
-2. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ridgesecuritytechnologyinc1670890478389.microsoft-sentinel-solution-ridgesecurity?tab=Overview) in the Azure Marketplace.
sentinel Rsa Securid Authentication Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rsa-securid-authentication-manager.md
- Title: "RSA® SecurID (Authentication Manager) connector for Microsoft Sentinel"
-description: "Learn how to install the connector RSA® SecurID (Authentication Manager) to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# RSA® SecurID (Authentication Manager) connector for Microsoft Sentinel
-
-The [RSA® SecurID Authentication Manager](https://www.securid.com/) data connector provides the capability to ingest [RSA® SecurID Authentication Manager events](https://community.rsa.com/t5/rsa-authentication-manager/rsa-authentication-manager-log-messages/ta-p/630160) into Microsoft Sentinel. Refer to [RSA® SecurID Authentication Manager documentation](https://community.rsa.com/t5/rsa-authentication-manager/getting-started-with-rsa-authentication-manager/ta-p/569582) for more information.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (RSASecurIDAMEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Sources**
-
- ```kusto
-RSASecurIDAMEvent
-
- | summarize count() by tostring(DvcHostname)
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a parser based on a Kusto function, [**RSASecurIDAMEvent**](https://aka.ms/sentinel-rsasecuridam-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
--
-> [!NOTE]
- > This data connector has been developed using RSA SecurID Authentication Manager version: 8.4 and 8.5
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the Server where the RSA® SecurID Authentication Manager logs are forwarded.
-
-> Logs from RSA® SecurID Authentication Manager Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure RSA® SecurID Authentication Manager event forwarding
-
-Follow the configuration steps below to get RSA® SecurID Authentication Manager logs into Microsoft Sentinel.
-1. [Follow these instructions](https://community.rsa.com/t5/rsa-authentication-manager/configure-the-remote-syslog-host-for-real-time-log-monitoring/ta-p/571374) to forward alerts from the Manager to a syslog server.
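After forwarding is configured, an hourly event count from the **RSASecurIDAMEvent** parser is a simple ingestion check:

```kusto
// RSA SecurID Authentication Manager events per hour over the last day
RSASecurIDAMEvent
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
```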
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securid?tab=Overview) in the Azure Marketplace.
sentinel Sonicwall Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sonicwall-firewall.md
- Title: "SonicWall Firewall connector for Microsoft Sentinel"
-description: "Learn how to install the connector SonicWall Firewall to connect your data source to Microsoft Sentinel."
-- Previously updated : 08/27/2024----
-# SonicWall Firewall connector for Microsoft Sentinel
--
-Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by SonicWall to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
-
-This is autogenerated content. For changes, contact the solution provider.
--
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (SonicWall)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [SonicWall](https://www.sonicwall.com/support/) |
-
-## Query samples
-
-**All logs**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "SonicWall"
- | sort by TimeGenerated desc
- ```
-
-**Summarize by destination IP and port**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "SonicWall"
- | summarize count() by DestinationIP, DestinationPort, TimeGenerated
- | sort by TimeGenerated desc
- ```
-
-**Show all dropped traffic from the SonicWall Firewall**
- ```kusto
-CommonSecurityLog
- | where DeviceVendor == "SonicWall"
- | where AdditionalExtensions contains "fw_action='drop'"
- ```
---
-## Vendor installation instructions
-
-1. Linux Syslog agent configuration
-
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-Notice that the data from all regions will be stored in the selected workspace.
-1.1 Select or create a Linux machine.
-
-Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
-
-1.2 Install the CEF collector on the Linux machine
-
-Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
-
-1. Make sure that you have Python on your machine using the following command: python --version.
-2. You must have elevated permissions (sudo) on your machine.
- Run the following command to install and apply the CEF collector:
-
- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py [Workspace ID] [Workspace Primary Key]`
-
-2. Forward SonicWall Firewall Common Event Format (CEF) logs to Syslog agent
-
- Set your SonicWall Firewall to send Syslog messages in CEF format to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
-
- Follow the instructions. Make sure you select **local use 4** as the facility, and select **ArcSight** as the Syslog format.
-
-3. Validate connection
-
-Follow the instructions to validate your connectivity:
-
-Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
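
For a quick check that SonicWall events are arriving in the CommonSecurityLog table, a minimal query sketch (DeviceProduct is a standard CommonSecurityLog column):

```kusto
// Recent SonicWall CEF events, grouped by device product (last hour)
CommonSecurityLog
| where DeviceVendor == "SonicWall"
| where TimeGenerated > ago(1h)
| summarize Events = count() by DeviceProduct
| sort by Events desc
```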
-
-It may take about 20 minutes until the connection streams data to your workspace.
-If the logs are not received, run the following connectivity validation script:
-
-1. Make sure that you have Python on your machine using the following command: python --version
-2. You must have elevated permissions (sudo) on your machine
- Run the following command to validate your connectivity:
-
- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [Workspace ID]`
-
-4. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy.
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sonicwall-inc.sonicwall-networksecurity-azure-sentinal?tab=Overview) in the Azure Marketplace.
sentinel Sophos Xg Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sophos-xg-firewall.md
- Title: "Sophos XG Firewall connector for Microsoft Sentinel"
-description: "Learn how to install the connector Sophos XG Firewall to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Sophos XG Firewall connector for Microsoft Sentinel
-
-The [Sophos XG Firewall](https://www.sophos.com/products/next-gen-firewall.aspx) allows you to easily connect your Sophos XG Firewall logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigations. Integrating Sophos XG Firewall with Microsoft Sentinel provides more visibility into your organization's firewall traffic and will enhance security monitoring capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (SophosXGFirewall)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Top 10 Denied Source IPs**
-
- ```kusto
-SophosXGFirewall
-
- | where Log_Type == "Firewall" and Status == "Deny"
-
- | summarize count() by Src_IP
-
- | top 10 by count_
- ```
-
-**Top 10 Denied Destination IPs**
-
- ```kusto
-SophosXGFirewall
-
- | where Log_Type == "Firewall" and Status == "Deny"
-
- | summarize count() by Dst_IP
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with Sophos XG Firewall, make sure you have:
-- **Sophos XG Firewall**: must be configured to export logs via Syslog
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias Sophos XG Firewall to load the function code, or click [here](https://aka.ms/sentinel-SophosXG-parser). On the second line of the query, enter the hostname(s) of your Sophos XG Firewall device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
- 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
- 2. Select **Apply below configuration to my machines** and select the facilities and severities.
- 3. Click **Save**.
--
-3. Configure and connect the Sophos XG Firewall
-
-[Follow these instructions](https://doc.sophos.com/nsg/sophos-firewall/20.0/Help/en-us/webhelp/onlinehelp/AdministratorHelp/SystemServices/LogSettings/SyslogServerAdd/index.html) to enable syslog streaming. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
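
Once syslog streaming is enabled, a minimal query sketch to confirm the parser is returning Sophos events (Log_Type and Status are the fields used in the query samples above):

```kusto
// Recent Sophos XG Firewall events by log type and action (last hour)
SophosXGFirewall
| where TimeGenerated > ago(1h)
| summarize Events = count() by Log_Type, Status
| sort by Events desc
```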
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sophosxgfirewall?tab=Overview) in the Azure Marketplace.
sentinel Squid Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/squid-proxy.md
- Title: "Squid Proxy connector for Microsoft Sentinel"
-description: "Learn how to install the connector Squid Proxy to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Squid Proxy connector for Microsoft Sentinel
-
-The [Squid Proxy](http://www.squid-cache.org/) connector allows you to easily connect your Squid Proxy logs with Microsoft Sentinel. This gives you more insight into your organization's network proxy traffic and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | SquidProxy_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Top 10 Proxy Results**
-
- ```kusto
-SquidProxy
-
- | where isnotempty(ResultCode)
-
- | summarize count() by ResultCode
-
- | top 10 by count_
- ```
-
-**Top 10 Peer Host**
-
- ```kusto
-SquidProxy
-
- | where isnotempty(PeerHost)
-
- | summarize count() by PeerHost
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias Squid Proxy to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/SquidProxy/Parsers/SquidProxy.txt). On the second line of the query, enter the hostname(s) of your SquidProxy device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the Squid Proxy server where the logs are generated.
-
-> Logs from Squid Proxy deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure the logs to be collected
-
-Configure the custom log directory to be collected
---
-1. Select the link above to open your workspace advanced settings
-2. From the left pane, select **Data**, select **Custom Logs** and click **Add+**
-3. Click **Browse** to upload a sample of a Squid Proxy log file (for example, access.log or cache.log). Then, click **Next >**
-4. Select **New line** as the record delimiter and click **Next >**
-5. Select **Windows** or **Linux** and enter the path to Squid Proxy logs. Default paths are:
-6. After entering the path, click the '+' symbol to apply, then click **Next >**
-7. Add **SquidProxy_CL** as the custom log Name and click **Done**
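
After the custom log is configured, a minimal query sketch to spot-check that raw records are landing in the custom table (the solution's parser then exposes them through the SquidProxy alias):

```kusto
// Spot-check raw Squid Proxy records collected into the custom table (last hour)
SquidProxy_CL
| where TimeGenerated > ago(1h)
| take 10
```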
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-squidproxy?tab=Overview) in the Azure Marketplace.
sentinel Symantec Endpoint Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-endpoint-protection.md
- Title: "Symantec Endpoint Protection connector for Microsoft Sentinel"
-description: "Learn how to install the connector Symantec Endpoint Protection to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Symantec Endpoint Protection connector for Microsoft Sentinel
-
-The [Broadcom Symantec Endpoint Protection (SEP)](https://www.broadcom.com/products/cyber-security/endpoint/end-user/enterprise) connector allows you to easily connect your SEP logs with Microsoft Sentinel. This gives you more insight into your organization's network and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (SymantecEndpointProtection)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Log Types**
-
- ```kusto
-SymantecEndpointProtection
-
- | summarize count() by LogType
-
- | top 10 by count_
- ```
-
-**Top 10 Users**
-
- ```kusto
-SymantecEndpointProtection
-
- | summarize count() by UserName
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with Symantec Endpoint Protection, make sure you have:
-- **Symantec Endpoint Protection (SEP)**: must be configured to export logs via Syslog
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias Symantec Endpoint Protection to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Symantec%20Endpoint%20Protection/Parsers/SymantecEndpointProtection.yaml). On the second line of the query, enter the hostname(s) of your SymantecEndpointProtection device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
- 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
- 2. Select **Apply below configuration to my machines** and select the facilities and severities.
- 3. Click **Save**.
--
-3. Configure and connect the Symantec Endpoint Protection
-
-[Follow these instructions](https://techdocs.broadcom.com/us/en/symantec-security-software/endpoint-security-and-management/endpoint-protection/all/Monitoring-Reporting-and-Enforcing-Compliance/viewing-logs-v7522439-d37e464/exporting-data-to-a-syslog-server-v8442743-d15e1107.html) to configure the Symantec Endpoint Protection to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
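
With syslog forwarding in place, a minimal query sketch to confirm the parser is returning SEP events:

```kusto
// Recent Symantec Endpoint Protection events by log type (last hour)
SymantecEndpointProtection
| where TimeGenerated > ago(1h)
| summarize Events = count() by LogType
| sort by Events desc
```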
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-symantecendpointprotection?tab=Overview) in the Azure Marketplace.
sentinel Symantec Proxysg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-proxysg.md
- Title: "Symantec ProxySG connector for Microsoft Sentinel"
-description: "Learn how to install the connector Symantec ProxySG to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Symantec ProxySG connector for Microsoft Sentinel
-
-The [Symantec ProxySG](https://www.broadcom.com/products/cyber-security/network/gateway/proxy-sg-and-advanced-secure-gateway) allows you to easily connect your Symantec ProxySG logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigations. Integrating Symantec ProxySG with Microsoft Sentinel provides more visibility into your organization's network proxy traffic and will enhance security monitoring capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (SymantecProxySG)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Top 10 Denied Users**
-
- ```kusto
-SymantecProxySG
-
- | where sc_filter_result == 'DENIED'
-
- | summarize count() by cs_userdn
-
- | top 10 by count_
- ```
-
-**Top 10 Denied Client IPs**
-
- ```kusto
-SymantecProxySG
-
- | where sc_filter_result == 'DENIED'
-
- | summarize count() by c_ip
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with Symantec ProxySG, make sure you have:
-- **Symantec ProxySG**: must be configured to export logs via Syslog
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias Symantec Proxy SG to load the function code, or click [here](https://aka.ms/sentinel-SymantecProxySG-parser). On the second line of the query, enter the hostname(s) of your Symantec Proxy SG device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
- 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
- 2. Select **Apply below configuration to my machines** and select the facilities and severities.
- 3. Click **Save**.
--
-3. Configure and connect the Symantec ProxySG
-
-
- 1. Log in to the Blue Coat Management Console.
- 2. Select **Configuration** > **Access Logging** > **Formats**.
- 3. Select **New**.
- 4. Enter a unique name in the **Format Name** field.
- 5. Click the radio button for **Custom format string** and paste the following string into the field.
-
- `1 $(date) $(time) $(time-taken) $(c-ip) $(cs-userdn) $(cs-auth-groups) $(x-exception-id) $(sc-filter-result) $(cs-categories) $(quot)$(cs(Referer))$(quot) $(sc-status) $(s-action) $(cs-method) $(quot)$(rs(Content-Type))$(quot) $(cs-uri-scheme) $(cs-host) $(cs-uri-port) $(cs-uri-path) $(cs-uri-query) $(cs-uri-extension) $(quot)$(cs(User-Agent))$(quot) $(s-ip) $(sr-bytes) $(rs-bytes) $(x-virus-id) $(x-bluecoat-application-name) $(x-bluecoat-application-operation) $(cs-uri-port) $(x-cs-client-ip-country) $(cs-threat-risk)`
- 6. Click the **OK** button.
- 7. Click the **Apply** button.
- 8. [Follow these instructions](https://knowledge.broadcom.com/external/article/166529/sending-access-logs-to-a-syslog-server.html) to enable syslog streaming of **Access** logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
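
After access-log streaming is enabled, a minimal query sketch to confirm the parser is returning ProxySG events:

```kusto
// Recent Symantec ProxySG requests by filter result (last hour)
SymantecProxySG
| where TimeGenerated > ago(1h)
| summarize Requests = count() by sc_filter_result
| sort by Requests desc
```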
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-symantec-proxysg?tab=Overview) in the Azure Marketplace.
sentinel Symantec Vip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-vip.md
- Title: "Symantec VIP connector for Microsoft Sentinel"
-description: "Learn how to install the connector Symantec VIP to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Symantec VIP connector for Microsoft Sentinel
-
-The [Symantec VIP](https://vip.symantec.com/) connector allows you to easily connect your Symantec VIP logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (SymantecVIP)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Reasons for Failed RADIUS Authentication**
-
- ```kusto
-SymantecVIP
-
- | summarize count() by Reason
-
- | top 10 by count_
- ```
-
-**Top 10 Users**
-
- ```kusto
-SymantecVIP
-
- | summarize count() by User
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with Symantec VIP, make sure you have:
-- **Symantec VIP**: must be configured to export logs via Syslog
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias Symantec VIP to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Symantec%20VIP/Parsers/SymantecVIP.txt). On the second line of the query, enter the hostname(s) of your Symantec VIP device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
- 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
- 2. Select **Apply below configuration to my machines** and select the facilities and severities.
- 3. Click **Save**.
--
-3. Configure and connect the Symantec VIP
-
-Configure the Symantec VIP Enterprise Gateway to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
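
Once the Enterprise Gateway is forwarding, a minimal query sketch to confirm the parser is returning Symantec VIP events:

```kusto
// Recent Symantec VIP RADIUS events by reason (last hour)
SymantecVIP
| where TimeGenerated > ago(1h)
| summarize Events = count() by Reason
| sort by Events desc
```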
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-symantecvip?tab=Overview) in the Azure Marketplace.
sentinel Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/syslog.md
- Title: "Syslog connector for Microsoft Sentinel"
-description: "Learn how to install the connector Syslog to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Syslog connector for Microsoft Sentinel
-
-Syslog is an event logging protocol that is common to Linux. Applications will send messages that may be stored on the local machine or delivered to a Syslog collector. When the Agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the message to the workspace. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2223807&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
-
-> [!NOTE]
-> Container Insights now supports the automatic collection of Syslog events from Linux nodes in your AKS clusters. To learn more, see [Syslog collection with Container Insights](/azure/azure-monitor/containers/container-insights-syslog).
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
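
As a quick check that Syslog events are flowing into the workspace, a minimal query sketch against the Syslog table (Computer, Facility, and SeverityLevel are standard columns of that table):

```kusto
// Recent Syslog events per computer, facility, and severity (last hour)
Syslog
| where TimeGenerated > ago(1h)
| summarize Events = count() by Computer, Facility, SeverityLevel
| sort by Events desc
```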
--
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-syslog?tab=Overview) in the Azure Marketplace.
sentinel Trend Micro Deep Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-deep-security.md
- Title: "Trend Micro Deep Security connector for Microsoft Sentinel"
-description: "Learn how to install the connector Trend Micro Deep Security to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Trend Micro Deep Security connector for Microsoft Sentinel
-
-The Trend Micro Deep Security connector allows you to easily connect your Deep Security logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's networks/systems and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Kusto function url** | https://aka.ms/TrendMicroDeepSecurityFunction |
-| **Log Analytics table(s)** | CommonSecurityLog (TrendMicroDeepSecurity)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Trend Micro](https://success.trendmicro.com/) |
-
-## Query samples
-
-**Intrusion Prevention Events**
-
- ```kusto
-
-TrendMicroDeepSecurity
-
-
- | where DeepSecurityModuleName == "Intrusion Prevention"
-
- | sort by TimeGenerated
- ```
-
-**Integrity Monitoring Events**
-
- ```kusto
-
-TrendMicroDeepSecurity
-
-
- | where DeepSecurityModuleName == "Integrity Monitoring"
-
- | sort by TimeGenerated
- ```
-
-**Firewall Events**
-
- ```kusto
-
-TrendMicroDeepSecurity
-
-
- | where DeepSecurityModuleName == "Firewall Events"
-
- | sort by TimeGenerated
- ```
-
-**Log Inspection Events**
-
- ```kusto
-
-TrendMicroDeepSecurity
-
-
- | where DeepSecurityModuleName == "Log Inspection"
-
- | sort by TimeGenerated
- ```
-
-**Anti-Malware Events**
-
- ```kusto
-
-TrendMicroDeepSecurity
-
-
- | where DeepSecurityModuleName == "Anti-Malware"
-
- | sort by TimeGenerated
- ```
-
-**Web Reputation Events**
-
- ```kusto
-
-TrendMicroDeepSecurity
-
-
- | where DeepSecurityModuleName == "Web Reputation"
-
- | sort by TimeGenerated
- ```
---
-## Vendor installation instructions
-
-1. Linux Syslog agent configuration
-
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
-
-1.1 Select or create a Linux machine
-
-Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
-
-1.2 Install the CEF collector on the Linux machine
-
-Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
-
-> 1. Make sure that you have Python on your machine using the following command: python --version.
-
-> 2. You must have elevated permissions (sudo) on your machine.
-
- Run the following command to install and apply the CEF collector:
-
- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
-
-2. Forward Trend Micro Deep Security logs to Syslog agent
-
-1. Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
-2. Forward Trend Micro Deep Security events to the Syslog agent.
-3. Define a new Syslog Configuration that uses the CEF format by referencing [this knowledge article](https://aka.ms/Sentinel-trendmicro-kblink) for additional information.
-4. Configure the Deep Security Manager to use this new configuration to forward events to the Syslog agent using [these instructions](https://aka.ms/Sentinel-trendMicro-connectorInstructions).
-5. Make sure to save the [TrendMicroDeepSecurity](https://aka.ms/TrendMicroDeepSecurityFunction) function so that it queries the Trend Micro Deep Security data properly.
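
Once forwarding is configured and the function is saved, a minimal query sketch to confirm events are being mapped:

```kusto
// Recent Trend Micro Deep Security events by module (last hour)
TrendMicroDeepSecurity
| where TimeGenerated > ago(1h)
| summarize Events = count() by DeepSecurityModuleName
| sort by Events desc
```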
-
-3. Validate connection
-
-Follow the instructions to validate your connectivity:
-
-Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
-
->It may take about 20 minutes until the connection streams data to your workspace.
-
-If the logs are not received, run the following connectivity validation script:
-
-> 1. Make sure that you have Python on your machine using the following command: python --version
-
->2. You must have elevated permissions (sudo) on your machine
-
- Run the following command to validate your connectivity:
-
- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
-
-4. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.trend_micro_deep_security_mss?tab=Overview) in the Azure Marketplace.
sentinel Trend Micro Tippingpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-tippingpoint.md
- Title: "Trend Micro TippingPoint connector for Microsoft Sentinel"
-description: "Learn how to install the connector Trend Micro TippingPoint to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Trend Micro TippingPoint connector for Microsoft Sentinel
-
-The Trend Micro TippingPoint connector allows you to easily connect your TippingPoint SMS IPS events with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's networks/systems and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (TrendMicroTippingPoint)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Trend Micro](https://success.trendmicro.com/) |
-
-## Query samples
-
-**TippingPoint IPS Events**
-
- ```kusto
-
-TrendMicroTippingPoint
-
-
- | sort by TimeGenerated
- ```
-
-**Top IPS Events**
-
- ```kusto
-
-TrendMicroTippingPoint
-
-
- | summarize EventCountTotal = sum(EventCount) by DeviceEventClassID, Activity, SimplifiedDeviceAction
-
- | sort by EventCountTotal desc
- ```
-
-**Top Source IP for IPS Events**
-
- ```kusto
-
-TrendMicroTippingPoint
-
-
- | summarize EventCountTotal = sum(EventCount) by SourceIP
-
- | sort by EventCountTotal desc
- ```
-
-**Top Destination IP for IPS Events**
-
- ```kusto
-
-TrendMicroTippingPoint
-
-
- | summarize EventCountTotal = sum(EventCount) by DestinationIP
-
- | sort by EventCountTotal desc
- ```
---
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias TrendMicroTippingPoint to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Trend%20Micro%20TippingPoint/Parsers/TrendMicroTippingPoint). The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Linux Syslog agent configuration
-
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
-
-1.1 Select or create a Linux machine
-
-Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, in Azure, or in another cloud.
-
-1.2 Install the CEF collector on the Linux machine
-
-Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
-
-> 1. Make sure that you have Python on your machine using the following command: python --version.
-
-> 2. You must have elevated permissions (sudo) on your machine.
-
- Run the following command to install and apply the CEF collector:
-
- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
-
-2. Forward Trend Micro TippingPoint SMS logs to Syslog agent
-
-Set your TippingPoint SMS to send Syslog messages in ArcSight CEF Format v4.2 to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
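
With the SMS forwarding CEF, a minimal query sketch to confirm events are being parsed (the columns match the query samples above):

```kusto
// Recent TippingPoint IPS events by signature (last hour)
TrendMicroTippingPoint
| where TimeGenerated > ago(1h)
| summarize EventCountTotal = sum(EventCount) by DeviceEventClassID, Activity
| sort by EventCountTotal desc
```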
-
-3. Validate connection
-
-Follow the instructions to validate your connectivity:
-
-Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
-
->It may take about 20 minutes until the connection streams data to your workspace.
-
-If the logs are not received, run the following connectivity validation script:
-
-> 1. Make sure that you have Python on your machine using the following command: python --version
-
->2. You must have elevated permissions (sudo) on your machine
-
- Run the following command to validate your connectivity:
-
- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
-
-4. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.trend_micro_tippingpoint_mss?tab=Overview) in the Azure Marketplace.
sentinel Ubiquiti Unifi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ubiquiti-unifi.md
- Title: "Ubiquiti UniFi (using Azure Functions) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Ubiquiti UniFi (using Azure Functions) to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Ubiquiti UniFi (using Azure Functions) connector for Microsoft Sentinel
-
-The [Ubiquiti UniFi](https://www.ui.com/) data connector provides the capability to ingest [Ubiquiti UniFi firewall, dns, ssh, AP events](https://help.ui.com/hc/en-us/articles/204959834-UniFi-How-to-View-Log-Files) into Microsoft Sentinel.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Ubiquiti_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Clients (Source IP)**
-
- ```kusto
-UbiquitiAuditEvent
-
- | summarize count() by SrcIpAddr
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected [**UbiquitiAuditEvent**](https://aka.ms/sentinel-UbiquitiUnifi-parser) which is deployed with the Microsoft Sentinel Solution.
--
-> [!NOTE]
- > This data connector has been developed using Enterprise System Controller Release Version: 5.6.2 (Syslog)
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the server to which the Ubiquiti logs are forwarded from the Ubiquiti device (for example, a remote syslog server).
-
-> Logs from Ubiquiti Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure the logs to be collected
-
-Follow the configuration steps below to get Ubiquiti logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
-1. Configure log forwarding on your Ubiquiti controller:
-
- i. Go to Settings > System Setting > Controller Configuration > Remote Logging and enable the Syslog and Debugging (optional) logs (Refer to [User Guide](https://dl.ui.com/guides/UniFi/UniFi_Controller_V5_UG.pdf) for detailed instructions).
-2. Download config file [Ubiquiti.conf](https://aka.ms/sentinel-UbiquitiUnifi-conf).
-3. Login to the server where you have installed Azure Log Analytics agent.
-4. Copy Ubiquiti.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder.
-5. Edit Ubiquiti.conf as follows:
-
-   i. Specify the port that you have set your Ubiquiti device to forward logs to (line 4).
-
-   ii. Replace **workspace_id** with the real value of your Workspace ID (lines 14, 15, 16, 19).
-6. Save changes and restart the Azure Log Analytics agent for Linux service with the following command:
-   sudo /opt/microsoft/omsagent/bin/service_control restart
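
After the agent restarts and events begin to flow, a minimal query sketch against the parser output:

```kusto
// Recent Ubiquiti events by source IP address (last hour)
UbiquitiAuditEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by SrcIpAddr
| sort by Events desc
```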
----
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ubiquitiunifi?tab=Overview) in the Azure Marketplace.
sentinel Vmware Esxi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-esxi.md
- Title: "VMware ESXi connector for Microsoft Sentinel"
-description: "Learn how to install the connector VMware ESXi to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# VMware ESXi connector for Microsoft Sentinel
-
-The [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) connector allows you to easily connect your VMware ESXi logs with Microsoft Sentinel. This gives you more insight into your organization's ESXi servers and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (VMwareESXi)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Total Events by Log Type**
-
- ```kusto
-VMwareESXi
-
- | summarize count() by ProcessName
- ```
-
-**Top 10 ESXi Hosts Generating Events**
-
- ```kusto
-VMwareESXi
-
- | summarize count() by HostName
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with VMware ESXi, make sure you have:
-- **VMware ESXi**: must be configured to export logs via Syslog
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias VMwareESXi to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/VMWareESXi/Parsers/VMwareESXi.yaml). On the second line of the query, enter the hostname(s) of your VMwareESXi device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
- 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
- 2. Select **Apply below configuration to my machines** and select the facilities and severities.
- 3. Click **Save**.
--
-3. Configure and connect the VMware ESXi
-
-1. Follow these instructions to configure VMware ESXi to forward syslog.
-2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
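
With forwarding enabled, a minimal query sketch to confirm ESXi events are being parsed:

```kusto
// Recent VMware ESXi events per host and process (last hour)
VMwareESXi
| where TimeGenerated > ago(1h)
| summarize Events = count() by HostName, ProcessName
| sort by Events desc
```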
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-vmwareesxi?tab=Overview) in the Azure Marketplace.
sentinel Vmware Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-vcenter.md
- Title: "VMware vCenter connector for Microsoft Sentinel"
-description: "Learn how to install the connector VMware vCenter to connect your data source to Microsoft Sentinel."
-- Previously updated : 05/30/2024-----
-# VMware vCenter connector for Microsoft Sentinel
-
-The [vCenter](https://www.vmware.com/products/cloud-infrastructure/vcenter) connector allows you to easily connect your vCenter server logs with Microsoft Sentinel. This gives you more insight into your organization's data centers and improves your security operation capabilities.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | vcenter_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Total Events by Event Type**
-
- ```kusto
-vCenter
-
- | summarize count() by EventType
- ```
-
-**log in/out to vCenter Server**
-
- ```kusto
-vCenter
-
- | where EventType in ('UserLogoutSessionEvent','UserLoginSessionEvent')
-
- | summarize count() by EventType,EventID,UserName,UserAgent
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with VMware vCenter, make sure you have:
-- **VMware vCenter**: must be configured to export logs via Syslog
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias VMware vCenter to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt). On the second line of the query, enter the hostname(s) of your VMware vCenter device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-1. If you have not installed the vCenter solution from Content Hub, then [follow these steps](https://aka.ms/sentinel-vCenter-parser) to use the Kusto function alias **vCenter**.
-
-1. Install and onboard the agent for Linux
-
- Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
- Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Follow the configuration steps below to get vCenter server logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
- For vCenter Server logs, there are known issues with parsing the data collected by the OMS agent using default settings.
-So we advise capturing the logs into the custom table **vcenter_CL** using the instructions below.
-1. Log in to the server where you have installed the OMS agent.
-2. Download the config file vCenter.conf:
-   wget -v https://aka.ms/sentinel-vcenteroms-conf -O vcenter.conf
-3. Copy vcenter.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder:
-   cp vcenter.conf /etc/opt/microsoft/omsagent/<<workspace_id>>/conf/omsagent.d/
-4. Edit vcenter.conf as follows:
-
-   a. vcenter.conf uses port **22033** by default. Ensure this port is not being used by any other source on your server.
-
-   b. If you would like to change the default port for **vcenter.conf**, make sure that you don't use the default Azure monitoring / Log Analytics agent ports (for example, CEF uses TCP port **25226** or **25224**).
-
-   c. Replace **workspace_id** with the real value of your Workspace ID (lines 13, 14, 15, 18).
-5. Save changes and restart the Azure Log Analytics agent for Linux service with the following command:
-   sudo /opt/microsoft/omsagent/bin/service_control restart
-6. Modify the /etc/rsyslog.conf file: add the template below, preferably at the beginning, before the directives section.
-
-   `$template vcenter,"%timestamp% %hostname% %msg%\ n"`
-
-   **Note:** There is no space between the slash (\\) and the character 'n' in the above command.
-
- 7. Create a custom conf file in /etc/rsyslog.d/, for example 10-vcenter.conf, and add the following filter conditions.
-
- Download the config file [10-vCenter.conf](https://aka.ms/sentinel-vcenter-conf).
-
- With this added statement, you create a filter that specifies which logs coming from the vCenter server are forwarded to the custom table.
-
- Reference: [Filter Conditions - rsyslog 8.18.0.master documentation](https://rsyslog.readthedocs.io/en/latest/configuration/filters.html)
-
- Here is an example of the filtering that can be defined; it is not complete and will require additional testing for each installation.
-
- ```
- if $rawmsg contains "vcenter-server" then @@127.0.0.1:22033;vcenter
- & stop
- if $rawmsg contains "vpxd" then @@127.0.0.1:22033;vcenter
- & stop
- ```
-
-8. Restart rsyslog:
-   systemctl restart rsyslog
--
-3. Configure and connect the vCenter device(s)
-
-[Follow these instructions](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-9633A961-A5C3-4658-B099-B81E0512DC21.html) to configure the vCenter to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
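
With forwarding configured, a minimal query sketch against the **vCenter** function output:

```kusto
// Recent vCenter events by event type (last hour)
vCenter
| where TimeGenerated > ago(1h)
| summarize Events = count() by EventType
| sort by Events desc
```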
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-vcenter?tab=Overview) in the Azure Marketplace.
sentinel Watchguard Firebox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/watchguard-firebox.md
- Title: "WatchGuard Firebox connector for Microsoft Sentinel"
-description: "Learn how to install the connector WatchGuard Firebox to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# WatchGuard Firebox connector for Microsoft Sentinel
-
-WatchGuard Firebox (https://www.watchguard.com/wgrd-products/firewall-appliances and https://www.watchguard.com/wgrd-products/cloud-and-virtual-firewalls) is a family of security products/firewall appliances. WatchGuard Firebox sends syslog to the WatchGuard Firebox collector agent. The agent then sends the message to the workspace.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (WatchGuardFirebox)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [WatchGuard](https://www.watchguard.com/wgrd-support/contact-support) |
-
-## Query samples
-
-**Top 10 Fireboxes in last 24 hours**
-
- ```kusto
-WatchGuardFirebox
-
- | where TimeGenerated >= ago(24h)
-
- | summarize count() by HostName
-
- | top 10 by count_ desc
- ```
-
-**Firebox Named WatchGuard-XTM top 10 messages in last 24 hours**
-
- ```kusto
-WatchGuardFirebox
-
- | where HostName contains 'WatchGuard-XTM'
-
- | where TimeGenerated >= ago(24h)
-
- | summarize count() by MessageId
-
- | top 10 by count_ desc
- ```
-
-**Firebox Named WatchGuard-XTM top 10 applications in last 24 hours**
-
- ```kusto
-WatchGuardFirebox
-
- | where HostName contains 'WatchGuard-XTM'
-
- | where TimeGenerated >= ago(24h)
-
- | summarize count() by Application
-
- | top 10 by count_ desc
- ```
---
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, and search for the alias WatchGuardFirebox to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Watchguard%20Firebox/Parsers/WatchGuardFirebox.txt). On the second line of the query, enter the hostname(s) of your WatchGuard Firebox device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
-
-1. Install and onboard the agent for Linux
-
-Typically, you should install the agent on a different computer from the one on which the logs are generated.
-
-> Syslog logs are collected only from **Linux** agents.
--
-2. Configure the logs to be collected
-
-Configure the facilities you want to collect and their severities.
-
-1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
-2. Select **Apply below configuration to my machines** and select the facilities and severities.
-3. Click **Save**.
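
After the facilities are configured and Firebox syslog is arriving, a minimal query sketch against the parser output:

```kusto
// Recent WatchGuard Firebox messages per host (last hour)
WatchGuardFirebox
| where TimeGenerated > ago(1h)
| summarize Events = count() by HostName, MessageId
| sort by Events desc
```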
----
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/watchguard-technologies.watchguard_firebox_mss?tab=Overview) in the Azure Marketplace.
sentinel Withsecure Elements Via Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/withsecure-elements-via-connector.md
- Title: "WithSecure Elements via connector for Microsoft Sentinel"
-description: "Learn how to install the connector WithSecure Elements via to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# WithSecure Elements via connector for Microsoft Sentinel
-
-WithSecure Elements is a unified cloud-based cyber security platform.
-By connecting WithSecure Elements via Connector to Microsoft Sentinel, security events can be received in Common Event Format (CEF) over syslog.
-It requires deploying the "Elements Connector" either on-premises or in the cloud.
-The Common Event Format (CEF) natively provides search and correlation, alerting, and threat intelligence enrichment for each data log.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog (WithSecure Events)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [WithSecure](https://www.withsecure.com/en/support) |
-
-## Query samples
-
-**All logs**
-
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "WithSecure™"
-
- | sort by TimeGenerated
- ```
---
-## Vendor installation instructions
-
-1. Linux Syslog agent configuration
-
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
-
-> Notice that the data from all regions will be stored in the selected workspace
-
-1.1 Select or create a Linux machine
-
-Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your WithSecure solution and Microsoft Sentinel. The machine can be in your on-premises environment, in Microsoft Azure, or in another cloud.
-> Linux needs to have `syslog-ng` and `python`/`python3` installed.
-
-1.2 Install the CEF collector on the Linux machine
-
-Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
-
-> 1. Make sure that you have Python on your machine using the following command: python --version.
-
-> 2. You must have elevated permissions (sudo) on your machine.
-
- Run the following command to install and apply the CEF collector:
-
- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
-
- For python3 use command below:
-
- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python3 cef_installer.py {0} {1}`
-
-2. Forward data from WithSecure Elements Connector to Syslog agent
-
-This describes how to install and configure Elements Connector step by step.
-
-2.1 Order Connector subscription
-
-If a Connector subscription has not been ordered yet, go to EPP in the Elements Portal. Then navigate to Downloads, and in the Elements Connector section, click the 'Create subscription key' button. You can check your subscription key in Subscriptions.
-
-2.2 Download Connector
-
-Go to Downloads, and in the WithSecure Elements Connector section, select the correct installer.
-
-2.3 Create management API key
-
-In EPP, open the account settings in the top right corner. Then select **Get management API key**. If the key has been created earlier, it can be read there as well.
-
-2.4 Install Connector
-
-To install Elements Connector follow [Elements Connector Docs](https://www.withsecure.com/userguides/product.html#business/connector/latest/en/).
-
-2.5 Configure event forwarding
-
-If API access has not been configured during installation, follow [Configuring API access for Elements Connector](https://www.withsecure.com/userguides/product.html#business/connector/latest/en/task_F657F4D0F2144CD5913EE510E155E234-latest-en).
-Then go to EPP, then **Profiles**, then use **For Connector**, from where you can see the connector profiles. Create a new profile (or edit an existing profile that is not read-only). Under **Event forwarding**, enable it. Set the SIEM system address to **127.0.0.1:514**, the format to **Common Event Format**, and the protocol to **TCP**. Save the profile and assign it to Elements Connector in the **Devices** tab.
-
-3. Validate connection
-
-Follow the instructions to validate your connectivity:
-
-Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
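
A minimal query sketch for that check; the DeviceVendor value matches the sample query earlier in this article:

```kusto
// Recent WithSecure events received over CEF (last hour)
CommonSecurityLog
| where DeviceVendor == "WithSecure™"
| where TimeGenerated > ago(1h)
| summarize Events = count() by Activity
| sort by Events desc
```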
-
->It may take about 20 minutes until the connection streams data to your workspace.
-
-If the logs are not received, run the following connectivity validation script:
-
-> 1. Make sure that you have Python on your machine using the following command: python --version
-
->2. You must have elevated permissions (sudo) on your machine
-
- Run the following command to validate your connectivity:
-
- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
-
- For python3 use command below:
-
- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python3 cef_troubleshoot.py {0}`
-
-4. Secure your machine
-
-Make sure to configure the machine's security according to your organization's security policy
--
-[Learn more >](https://aka.ms/SecureCEF)
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/withsecurecorporation.sentinel-solution-withsecure-via-connector?tab=Overview) in the Azure Marketplace.
sentinel Zscaler Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zscaler-private-access.md
- Title: "Zscaler Private Access connector for Microsoft Sentinel"
-description: "Learn how to install the connector Zscaler Private Access to connect your data source to Microsoft Sentinel."
-- Previously updated : 08/27/2024-----
-# Zscaler Private Access connector for Microsoft Sentinel
--
-The [Zscaler Private Access (ZPA)](https://help.zscaler.com/zpa/what-zscaler-private-access) data connector provides the capability to ingest [Zscaler Private Access events](https://help.zscaler.com/zpa/log-streaming-service) into Microsoft Sentinel. Refer to [Zscaler Private Access documentation](https://help.zscaler.com/zpa) for more information.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Kusto function alias** | ZPAEvent |
-| **Kusto function url** | https://aka.ms/sentinel-ZscalerPrivateAccess-parser |
-| **Log Analytics table(s)** | ZPA_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**All logs**
-
- ```kusto
-
-ZPAEvent
-
- | sort by TimeGenerated
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
 > This data connector depends on a parser based on a Kusto function to work as expected. [Follow these steps](https://aka.ms/sentinel-ZscalerPrivateAccess-parser) to create the Kusto function alias, **ZPAEvent**.
--
-> [!NOTE]
- > This data connector has been developed using Zscaler Private Access version: 21.67.1
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the Server where the Zscaler Private Access logs are forwarded.
-
-> Logs from Zscaler Private Access Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure the logs to be collected
-
-Follow the configuration steps below to get Zscaler Private Access logs into Microsoft Sentinel. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.
-Zscaler Private Access logs are delivered via Log Streaming Service (LSS). Refer to the [LSS documentation](https://help.zscaler.com/zpa/about-log-streaming-service) for detailed information.
-1. Configure [Log Receivers](https://help.zscaler.com/zpa/configuring-log-receiver). While configuring a Log Receiver, choose **JSON** as **Log Template**.
-2. Download config file [zpa.conf](https://aka.ms/sentinel-ZscalerPrivateAccess-conf)
- wget -v https://aka.ms/sentinel-zscalerprivateaccess-conf -O zpa.conf
-3. Log in to the server where you have installed Azure Log Analytics agent.
-4. Copy zpa.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder.
-5. Edit zpa.conf as follows (a sketch of these edits follows this list):
-
-    a. Specify the port that you configured your Zscaler Log Receivers to forward logs to (line 4).
-
-    b. zpa.conf uses port **22033** by default. Ensure this port isn't being used by any other source on your server.
-
-    c. If you change the default port for **zpa.conf**, make sure that it doesn't conflict with the default agent ports (for example, CEF uses TCP port **25226** or **25224**).
-
-    d. Replace **workspace_id** with the real value of your Workspace ID (lines 14, 15, 16, and 19).
-6. Save changes and restart the Azure Log Analytics agent for Linux service with the following command:
-    sudo /opt/microsoft/omsagent/bin/service_control restart
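The following sketch shows one way to apply the edits above from a shell. It assumes the downloaded file uses the literal placeholder `workspace_id` and the default port `22033`, as described in the steps; it edits the downloaded copy first and then copies it into place, which has the same effect as editing the file in the agent folder. Substitute your own values before running it.

```bash
# Placeholder values; replace with your own before running.
WORKSPACE_ID="00000000-0000-0000-0000-000000000000"
NEW_PORT="22033"   # keep 22033 unless it conflicts with another source on this server

# Edit the downloaded zpa.conf, copy it into place, then restart the agent.
sudo sed -i "s/workspace_id/${WORKSPACE_ID}/g" zpa.conf   # lines 14, 15, 16, and 19
sudo sed -i "s/22033/${NEW_PORT}/g" zpa.conf              # line 4 (listening port)
sudo cp zpa.conf "/etc/opt/microsoft/omsagent/${WORKSPACE_ID}/conf/omsagent.d/"
sudo /opt/microsoft/omsagent/bin/service_control restart
```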
----
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-zscalerprivateaccess?tab=Overview) in the Azure Marketplace.
spring-apps Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-deploy-apps.md
The following steps show you how to generate configurations and deploy to Azure
1. Go to the *spring-petclinic-customers-service* folder. Generate configurations by running the following command. If you've already signed in with the Azure CLI, the command automatically picks up the credentials. Otherwise, it signs you in using a prompt with instructions. For more information, see [Authentication](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication) on the [azure-maven-plugins](https://github.com/microsoft/azure-maven-plugins) wiki. ```bash
- mvn com.microsoft.azure:azure-spring-apps-maven-plugin:1.17.0:config -DappName=customers-service
+ mvn com.microsoft.azure:azure-spring-apps-maven-plugin:1.19.0:config -DappName=customers-service
``` You're asked to provide the following values:
The following steps show you how to generate configurations and deploy to Azure
<plugin> <groupId>com.microsoft.azure</groupId> <artifactId>azure-spring-apps-maven-plugin</artifactId>
- <version>1.17.0</version>
+ <version>1.19.0</version>
<configuration> <subscriptionId>xxxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx</subscriptionId> <clusterName>v-spr-cld</clusterName>
The following steps show you how to generate configurations and deploy to Azure
1. Go to the *spring-petclinic-api-gateway* folder. Run the following commands to generate the configuration and deploy `api-gateway`. Select **yes** for **Public endpoint**. ```bash
- mvn com.microsoft.azure:azure-spring-apps-maven-plugin:1.17.0:config -DappName=api-gateway
+ mvn com.microsoft.azure:azure-spring-apps-maven-plugin:1.19.0:config -DappName=api-gateway
mvn azure-spring-apps:deploy ```
storage-actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/overview.md
Azure Storage tasks are supported in the following public regions:
You can try the feature for free during the preview, paying only for transactions invoked on your storage account. Pricing information for the feature will be published before general availability.
+> [!Note]
+> General-purpose v1 accounts don't support the latest features, so Azure Storage Actions isn't supported on them either. If you have a general-purpose v1 account, we recommend that you upgrade to a [general-purpose v2 account](/azure/well-architected/service-guides/storage-accounts/operational-excellence#design-considerations) to use all the latest features.
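As a rough sketch, the upgrade can be performed with the Azure CLI as shown below. The account and resource group names are placeholders, the access tier shown is only an example, and the change from general-purpose v1 to v2 can't be reversed, so confirm any pricing implications first.

```bash
# Upgrade a general-purpose v1 storage account to general-purpose v2.
# <storage-account-name> and <resource-group> are placeholders; pick the access tier that fits your workload.
az storage account update \
  --name "<storage-account-name>" \
  --resource-group "<resource-group>" \
  --set kind=StorageV2 \
  --access-tier Hot
```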
+ ## Next steps - [Quickstart: Create, assign, and run a storage task by using the Azure portal](storage-tasks/storage-task-quickstart-portal.md)
storage Container Storage Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-release-notes.md
Title: Release notes for Azure Container Storage description: Release notes for Azure Container Storage- Previously updated : 09/15/2024 Last updated : 09/20/2024
-# Release notes for Azure Container Storage
+# Release notes for Azure Container Storage
+ This article provides the release notes for Azure Container Storage. It's important to note that minor releases introduce new functionalities in a backward-compatible manner (for example, 1.1.0 GA). Patch releases focus on bug fixes, security updates, and smaller improvements (for example, 1.1.1). ## Supported versions
The following Azure Container Storage versions are supported:
| Milestone | Status | |-|-|
-|1.1.1- Hotfix | Supported |
+|1.1.1- Minor Release | Supported |
|1.1.0- General Availability| Supported | ## Unsupported versions
-The following Azure Container Storage versions are no longer supported: 1.0.6-preview, 1.0.3-preview, 1.0.2-preview, 1.0.1-preview, 1.0.0-preview. Please refer to the section "Upgrade a preview installation to GA" for upgrading guidance.
+
+The following Azure Container Storage versions are no longer supported: 1.0.6-preview, 1.0.3-preview, 1.0.2-preview, 1.0.1-preview, 1.0.0-preview. See [Upgrade a preview installation to GA](#upgrade-a-preview-installation-to-ga) for upgrading guidance.
## Minor vs. patch versions+ Minor versions introduce small improvements, performance enhancements, or minor new features without breaking existing functionality. For example, version 1.1.0 would move to 1.2.0. Patch versions are released more frequently than minor versions. They focus solely on bug fixes and security updates. For example, version 1.1.1 would be updated to 1.1.2. ## Version 1.1.1 ### Improvements and issues that are fixed-- This hotfix release addresses specific issues that some customers experienced during the creation of Azure Elastic SAN storage pools. It resolves exceptions that were causing disruptions in the setup process, ensuring smoother and more reliable storage pool creation.+
+- This minor release addresses specific issues that some customers experienced during the creation of Azure Elastic SAN storage pools. It resolves exceptions that were causing disruptions in the setup process, ensuring smoother and more reliable storage pool creation.
- We've also made improvements to cluster restart scenarios. Previously, some corner-case situations caused cluster restarts to fail. This update ensures that cluster restarts are more reliable and resilient. ## Version 1.1.0 ### Improvements and issues that are fixed+ - **Security Enhancements**: This update addresses vulnerabilities in container environments, enhancing security enforcement to better protect workloads. - **Data plane stability**: We've also improved the stability of data-plane components, ensuring more reliable access to Azure Container Storage volumes and storage pools. This also enhances the management of data replication between storage nodes. - **Volume management improvements**: The update resolves issues with volume detachment during node drain scenarios, ensuring that volumes are safely and correctly detached, and allowing workloads to migrate smoothly without interruptions or data access issues.
Minor versions introduce small improvements, performance enhancements, or minor
## Upgrade a preview installation to GA If you already have a preview instance of Azure Container Storage running on your cluster, we recommend updating to the latest generally available (GA) version by running the following command: + ```azurecli-interactive az k8s-extension update --cluster-type managedClusters --cluster-name <cluster-name> --resource-group <resource-group> --name azurecontainerstorage --version <version> --release-train stable ```
Please note that preview versions are no longer supported, and customers should
## Auto-upgrade policy
-To receive the latest features and fixes for Azure Container Storage in future versions, you can enable auto-upgrade. However, please note that this may result in a brief interruption in the I/O operations of applications using PVs with Azure Container Storage during the upgrade process. To minimize potential impact, we recommend setting the auto-upgrade window to a time period with low activity or traffic, ensuring that upgrades occur during less critical times.
+To receive the latest features and fixes for Azure Container Storage in future versions, you can enable auto-upgrade. However, please note that this might result in a brief interruption in the I/O operations of applications using PVs with Azure Container Storage during the upgrade process. To minimize potential impact, we recommend setting the auto-upgrade window to a time period with low activity or traffic, ensuring that upgrades occur during less critical times.
+
+To enable auto-upgrade, run the following command:
-To enable auto-upgrade, run the following command:
```azurecli-interactive az k8s-extension update --cluster-name <cluster name> --resource-group <resource-group> --cluster-type managedClusters --auto-upgrade-minor-version true -n azurecontainerstorage ```
-If you would like to disable auto-upgrades, run the following command:
+If you would like to disable auto-upgrades, run the following command:
+ ```azurecli-interactive az k8s-extension update --cluster-name <cluster name> --resource-group <resource-group> --cluster-type managedClusters --auto-upgrade-minor-version false -n azurecontainerstorage ```
virtual-desktop Client Device Redirection Intune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/client-device-redirection-intune.md
You need to create an [app configuration policy for managed apps](/mem/intune/ap
To create and apply an app configuration policy for managed apps, follow the steps in [App configuration policies for Intune App SDK managed apps](/mem/intune/apps/app-configuration-policies-managed-app) and use the following settings: -- On the **Basics** tab, do the following, depending on whether you're targeting Windows App or the Remote Desktop app-
- - For Windows App, select **Select custom apps**, then for **Bundle or Package ID**, enter `com.microsoft.rdc.apple` and for platform, select **iOS/iPadOS**.
-
- - For the Remote Desktop app, select **Select public apps**, then search for and select **Remote Desktop** for each platform you want to target.
+- On the **Basics** tab, select **Select public apps**, then search for and select **Remote Desktop**. Selecting **Remote Desktop** applies to both Windows App and the Remote Desktop app.
- On the **Settings** tab, expand **General configuration settings**, then enter the following name and value pairs for each redirection setting you want to configure exactly as shown. These values correspond to the RDP properties listed on [Supported RDP properties](/azure/virtual-desktop/rdp-properties#device-redirection), but the syntax is different:
You need to create an [app protection policy](/mem/intune/apps/app-protection-po
To create and apply an app protection policy, follow the steps in [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies) and use the following settings. You need to create an app protection policy for each platform you want to target. -- On the **Apps** tab, do the following, depending on whether you're targeting Windows App or the Remote Desktop app.-
- - For Windows App on iOS/iPadOS, select **Select custom apps**, then for **Bundle or Package ID**, enter `com.microsoft.rdc.apple`.
-
- - For the Remote Desktop app, select **Select public apps**, then search for and select **Remote Desktop**.
-
+- On the **Apps** tab, select **Select public apps**, then search for and select **Remote Desktop**. Selecting **Remote Desktop** applies to both Windows App and the Remote Desktop app.
+
- On the **Data protection** tab, only the following settings are relevant to Windows App and the Remote Desktop app. The other settings don't apply as Windows App and the Remote Desktop app interact with the session host and not with data in the app. On mobile devices, unapproved keyboards are a source of keystroke logging and theft. - For iOS and iPadOS you can configure the following settings:
Now that you configure Intune to manage device redirection on personal devices,
## Known issues
-Configuring redirection settings for Windows App and the Remote Desktop app on a client device using Microsoft Intune has the following limitation:
--- When you configure client device redirection for the Remote Desktop app or Windows App on iOS and iPadOS, multifactor authentication (MFA) requests might get stuck in a loop. A common scenario of this issue happens when the Remote Desktop app or Windows App is being run on an Intune enrolled iPhone and the same iPhone is being used to receive MFA requests from the Microsoft Authenticator app when signing into the Remote Desktop app or Windows App. To work around this issue, use the Remote Desktop app or Windows App on a different device (such as an iPad) from the device being used to receive MFA requests (such as an iPhone).
+When you create an app configuration policy or an app protection policy, **Remote Desktop** is still shown instead of **Windows App**. This will be updated soon.
virtual-desktop Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/terminology.md
To ensure your apps work with the latest updates, the validation environment sho
## Application groups
-An application group is a logical grouping of applications that are available on session hosts in a host pool. Application groups control whether a full desktop or which applications from a host pool are available to users to connect to. An application group can only be assigned to a single host pool, but you can assign multiple application groups with to the same host pool. Users can be assigned to multiple application groups across multiple host pools, which enable you to vary the applications and desktops that users can access.
+An application group controls access to a full desktop or a logical grouping of applications that are available on session hosts in a single host pool. Users can be assigned to multiple application groups across multiple host pools, which enables you to vary the applications and desktops that users can access.
When you create an application group, it can be one of two types:
When you create an application group, it can be one of two types:
- **RemoteApp**: users access individual applications you select and publish to the application group. Available with pooled host pools only.
-With pooled host pools, you can assign both application group types to the same host pool at the same time. You can only assign a single desktop application group with a host pool, but you can also assign multiple RemoteApp application groups to the same host pool.
+With pooled host pools, you can assign both application group types to the same host pool at the same time. You can only assign a single desktop application group per host pool, but you can also assign multiple RemoteApp application groups to the same host pool.
-Users assigned to multiple RemoteApp application groups assigned to the same host pool have access to an aggregate of all the applications in the application groups they're assigned to.
+Host pools have a preferred application group type setting. If an end user has both desktop and RemoteApp application groups assigned to them on the same host pool, they see only the resources from the preferred application group type. Users assigned to multiple RemoteApp application groups on the same host pool have access to an aggregate of all the applications in the application groups they're assigned to.
To learn more about application groups, see [Preferred application group type behavior for pooled host pools](preferred-application-group-type.md). ## Workspaces
-A workspace is a logical grouping of application groups. Each application group must be associated with a workspace for users to see the desktops and applications published to them.
+A workspace is a logical grouping of application groups. Each application group must be associated with a workspace for users to see the desktops and applications published to them. An application group can only be assigned to a single workspace.
## End users
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
In regions without availability zones, all public IP addresses are created as no
> [!IMPORTANT] > We are updating Standard non-zonal IPs to be zone-redundant by default on a region by region basis. This means that in the following regions, all IPs created (except zonal) are zone-redundant.
-> Region availability: Central Canada, Central Poland, Central Israel, Central France, Central Qatar, East US 2, East Norway, Italy North, Sweden Central, South Africa North, South Brazil, West Central Germany, West US 2.
+> Region availability: Central Canada, Central Poland, Central Israel, Central France, Central Qatar, East Asia, East US 2, East Norway, Italy North, Sweden Central, South Africa North, South Brazil, West Central Germany, West US 2.
## Domain Name Label
vpn-gateway Vpn Gateway Troubleshoot Site To Site Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-error-codes.md
+
+ Title: Troubleshoot Azure site-to-site issues using error codes
+
+description: Common error codes and solutions for Azure VPN Gateway site-to-site connections.
+++ Last updated : 09/20/2024++
+# Troubleshooting: Azure site-to-site VPN error codes
+
+This article lists common site-to-site error codes that you might experience. It also discusses possible causes and solutions for these problems. If you know the error code, you can search for the solution on this page.
+
+## Negotiation timed out (Error code: 13805, Hex: 0X35ED)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+The customer's on-premises VPN device isn't responding to the connection requests (IKE protocol messages) from the Azure VPN gateway.
+
+### Solution
+
+To resolve this problem, follow these steps:
+
+1. Check to make sure the on-premises IP address is correctly configured on the local network gateway resource in Azure (a CLI sketch for checking it follows this list).
+1. Check to see if the on-premises VPN device is receiving the IKE messages from the Azure VPN gateway.
+
+    * If IKE packets aren't received on the on-premises gateway, check whether an on-premises firewall is dropping the IKE packets.
+    * Check the on-premises VPN device logs to find out why the device isn't responding to the IKE messages from the Azure VPN gateway.
+    * Take mitigation steps to ensure that the on-premises device responds to Azure VPN gateway IKE requests. Engage the device vendor for help as needed.
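For the first check, a minimal sketch using the Azure CLI is shown below; the resource names are placeholders. It prints the on-premises device IP address that's configured on the local network gateway so you can compare it with the device's actual public IP.

```bash
# Show the on-premises device IP address configured on the local network gateway.
# <local-network-gateway-name> and <resource-group> are placeholders.
az network local-gateway show \
  --name "<local-network-gateway-name>" \
  --resource-group "<resource-group>" \
  --query gatewayIpAddress \
  --output tsv
```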
+
+## IKE authentication credentials are unacceptable (Error code: 13801, Hex: 0X35E9)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+Preshared key mismatch.
+
+### Solution
+
+Check to ensure that the preshared key configured on the Azure connection resource matches the preshared key configured on the tunnel of the on-premises VPN device.
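A minimal sketch for retrieving the key that's set on the Azure side is shown below; the names are placeholders. Compare the returned value with the key configured on the on-premises tunnel.

```bash
# Show the preshared key configured on the Azure connection resource.
# <connection-name> and <resource-group> are placeholders.
az network vpn-connection shared-key show \
  --connection-name "<connection-name>" \
  --resource-group "<resource-group>"
```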
+
+## Policy match error (Error code: 13868, Hex: 0X362C) / No policy configured (Error code: 13825, Hex: 0X3601)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+IKE/IPsec policy mismatch.
+
+### Solution
+
+For custom policy configuration on the connection resource in Azure, check to ensure that the IKE policy that's configured on the tunnel of the on-premises VPN device has the same configuration.
+
+For default policy configuration, check [configuration of IPsec/IKE connection policies](ipsec-ike-policy-howto.md) for site-to-site VPN & VNet-to-VNet to ensure the configuration on the tunnel of the on-premises VPN device has the matching configuration.
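To see which custom IPsec/IKE policy (if any) is set on the Azure connection, the following sketch can help; the names are placeholders, and an empty result typically means the connection uses the default policies.

```bash
# List any custom IPsec/IKE policy configured on the connection resource.
# <connection-name> and <resource-group> are placeholders.
az network vpn-connection ipsec-policy list \
  --connection-name "<connection-name>" \
  --resource-group "<resource-group>" \
  --output table
```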
+
+## Traffic selectors unacceptable (Error code: 13999, Hex: 0X36AF)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+Traffic selector configuration mismatch.
+
+### Solution
+
+Check the on-premises device log to find out why the traffic selector configuration proposed by the Azure VPN gateway isn't accepted by the on-premises device. Use one of the following methods to resolve the issue:
+
+* Fix the traffic selector configuration on the tunnel of the on-premises device.
+* Configure policy-based traffic selectors on the connection resource in Azure to match the traffic selector configuration of the on-premises device (a CLI sketch follows this list). For more information, see [Connect VPN gateways to multiple on-premises policy-based VPN devices](vpn-gateway-connect-multiple-policybased-rm-ps.md#create-the-virtual-network-vpn-gateway-and-local-network-gateway).
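For the second option, a hedged sketch using the Azure CLI is shown below; the names are placeholders. Enabling policy-based traffic selectors generally requires that a custom IPsec/IKE policy is already set on the connection.

```bash
# Enable policy-based traffic selectors on the Azure connection resource.
# <connection-name> and <resource-group> are placeholders.
az network vpn-connection update \
  --name "<connection-name>" \
  --resource-group "<resource-group>" \
  --use-policy-based-traffic-selectors true
```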
+
+## Invalid header (Error code: 13824, Hex: 0X3600)/ Invalid payload received (Error code: 13843, Hex: 0X3613)/ Invalid cookie received (13846, Hex: 0X3616)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+The VPN gateway received unsupported IKE messages/protocols from the on-premises VPN device.
+
+### Solution
+
+1. Ensure the on-premises device is one of the supported devices. See [About VPN devices for connections](vpn-gateway-about-vpn-devices.md#devicetable).
+
+1. Contact your on-premises device vendor for help.
+
+## The recipient cannot handle version of IKE specified in the header (Error code: 13880, Hex: 0X3638)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+IKE protocol version mismatch.
+
+### Solution
+
+Ensure that the IKE protocol version (IKEv1 or IKEv2) is the same on the connection resource in Azure and in the tunnel configuration of the on-premises VPN device.
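A quick way to confirm the protocol version set on the Azure side is sketched below; the names are placeholders. The returned `connectionProtocol` value (IKEv1 or IKEv2) should match the on-premises tunnel configuration.

```bash
# Show the IKE protocol version configured on the Azure connection resource.
# <connection-name> and <resource-group> are placeholders.
az network vpn-connection show \
  --name "<connection-name>" \
  --resource-group "<resource-group>" \
  --query connectionProtocol \
  --output tsv
```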
+
+## Failure in Diffie-Hellman computation (Error code: 13822, Hex: 0X35FE)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+Failure in Diffie-Hellman computation.
+
+### Solution
+
+1. For custom policy configuration on the connection resource in Azure, check to ensure that the DH group configured on the tunnel of the on-premises VPN device has the same configuration.
+1. For default DH group configuration, check the [configuration of IPsec/IKE connection policies for S2S VPN & VNet-to-VNet](ipsec-ike-policy-howto.md) to ensure the configuration on the tunnel of the on-premises VPN device has the matching configuration.
+1. If this doesn't resolve the issue, engage your VPN device vendor for further investigation.
+
+## The remote computer refused the network connection (Error code: 1225, Hex: 0X4C9)
+
+### Symptom
+
+Connectivity failure.
+
+### Cause
+
+The Azure connection resource is configured in Initiator Only mode and might not accept connection requests from the on-premises device.
+
+### Solution
+
+Update the connection mode property on the connection resource in Azure to **Default** or **Responder only**. For more information, see [Connection mode](vpn-gateway-about-vpn-gateway-settings.md#connectionmode) settings.
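As a rough sketch, the connection mode can be changed with the Azure CLI as shown below; the names are placeholders, and the generic `--set` form is used here because a dedicated flag might not be available in every CLI version. The `connectionMode` property accepts `Default`, `InitiatorOnly`, or `ResponderOnly`.

```bash
# Change the connection mode on the Azure connection resource.
# <connection-name> and <resource-group> are placeholders.
az network vpn-connection update \
  --name "<connection-name>" \
  --resource-group "<resource-group>" \
  --set connectionMode=Default
```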
+
+## Next steps
+
+For more information about VPN Gateway troubleshooting, see [Troubleshooting site-to-site connections](vpn-gateway-troubleshoot-site-to-site-cannot-connect.md).