Updates from: 10/26/2023 01:13:44
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Logging Audio Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/logging-audio-transcription.md
Audio and transcription logs can be used as input for [Custom Speech](custom-spe
> [!WARNING] > Don't depend on audio and transcription logs when an exact record of the input audio is required. During periods of peak load, the service prioritizes hardware resources for transcription tasks, which can result in small portions of the audio not being logged. Such occasions are rare but possible.
-Logging is done asynchronously for both base and custom model endpoints. Audio and transcription logs are stored by the Speech service and not written locally. The logs are retained for 30 days. After this period, the logs are automatically deleted. However you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
+Logging is done asynchronously for both base and custom model endpoints. Audio and transcription logs are stored by the Speech service in its internal storage and not written locally. The logs are retained for 30 days. After this period, the logs are automatically deleted. However, you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
+
+You can also store audio and transcription logs in an Azure Storage account that you own and control, instead of in Speech service internal storage, by using [bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md) technology. For details on how to use a BYOS-enabled Speech resource, see [this article](bring-your-own-storage-speech-resource-speech-to-text.md).
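Where the excerpt mentions deleting logs, the operation goes through the Speech to text REST API. A minimal sketch, assuming the v3.1 `files/logs` route (region, key, and endpoint ID are placeholders):

```azurecli-interactive
# Hedged sketch: delete all audio and transcription logs for one endpoint.
# The /files/logs route is an assumption based on the Speech to text REST API v3.1.
curl -X DELETE \
  -H "Ocp-Apim-Subscription-Key: <your-speech-resource-key>" \
  "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/<endpoint-id>/files/logs"
```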
## Enable audio and transcription logging
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
ama-metrics-win-node-tkrm8   2/2   Running   0 (26h ago)   26h
```
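To reproduce a pod listing like the one above, a quick filter on the `kube-system` namespace works; the `grep` pattern here is illustrative:

```azurecli-interactive
# Confirm the managed Prometheus metrics pods (ama-metrics-*) are Running.
kubectl get pods -n kube-system | grep ama-metrics
```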
-1. Use the ID [18814]( https://grafana.com/grafana/dashboards/18814/) to import the dashboard from Grafana's public dashboard repo.
+1. Select **Dashboards** from the left navigation menu, then open the **Kubernetes / Networking** dashboard under the **Managed Prometheus** folder.
-1. Verify the Grafana dashboard is visible.
+1. Check that the metrics in the **Kubernetes / Networking** Grafana dashboard are visible. If metrics aren't shown, change the time range to the last 15 minutes using the picker in the top right.
# [**Cilium**](#tab/cilium)
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
This article shows you how to create a static public IP address and assign it to
## Create a static IP address
-1. Create a static public IP address using the [`az network public ip create`][az-network-public-ip-create] command.
+1. Get the name of the node resource group using the [`az aks show`][az-aks-show] command and query for the `nodeResourceGroup` property.
+
+ ```azurecli-interactive
+ az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query nodeResourceGroup -o tsv
+ ```
+
+2. Create a static public IP address in the node resource group using the [`az network public ip create`][az-network-public-ip-create] command.
```azurecli-interactive
az network public-ip create \
-    --resource-group myNetworkResourceGroup \
+    --resource-group <node resource group name> \
    --name myAKSPublicIP \
    --sku Standard \
    --allocation-method static
```
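If you prefer to script these two steps together, here's a minimal bash sketch built from the same commands (the `NODE_RG` variable name is illustrative):

```azurecli-interactive
# Capture the node resource group name, then create the static public IP in it.
NODE_RG=$(az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query nodeResourceGroup -o tsv)
az network public-ip create \
    --resource-group "$NODE_RG" \
    --name myAKSPublicIP \
    --sku Standard \
    --allocation-method static
```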
This article shows you how to create a static public IP address and assign it to
> [!NOTE] > If you're using a *Basic* SKU load balancer in your AKS cluster, use *Basic* for the `--sku` parameter when defining a public IP. Only *Basic* SKU IPs work with the *Basic* SKU load balancer and only *Standard* SKU IPs work with *Standard* SKU load balancers.
-2. Get the name of the node resource group using the [`az aks show`][az-aks-show] command and query for the `nodeResourceGroup` property.
-
- ```azurecli-interactive
- az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query nodeResourceGroup -o tsv
- ```
-
-3. Get the static public IP address using the [`az network public-ip list`][az-network-public-ip-list] command. Specify the name of the node resource group and public IP address you created, and query for the `ipAddress`.
+3. Get the static public IP address using the [`az network public-ip show`][az-network-public-ip-show] command. Specify the name of the node resource group and public IP address you created, and query for the `ipAddress`.
```azurecli-interactive
- az network public-ip show --resource-group <node resource group> --name myAKSPublicIP --query ipAddress --output tsv
+ az network public-ip show --resource-group myNetworkResourceGroup --name myAKSPublicIP --query ipAddress --output tsv
```

## Create a service using the static IP address
-1. Ensure the cluster identity used by the AKS cluster has delegated permissions to the node resource group using the [`az role assignment create`][az-role-assignment-create] command.
+1. Ensure the cluster identity used by the AKS cluster has delegated permissions to the public IP's resource group using the [`az role assignment create`][az-role-assignment-create] command.
```azurecli-interactive
CLIENT_ID=$(az aks show --name myAKSCluster --resource-group myNetworkResourceGroup --query identity.principalId -o tsv)
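The excerpt cuts off before the role assignment itself. A hedged sketch of how the step typically completes, assuming the **Network Contributor** role scoped to the resource group that holds the public IP:

```azurecli-interactive
# Assumed continuation: grant the cluster identity Network Contributor on the
# resource group containing the static public IP.
RG_SCOPE=$(az group show --name myNetworkResourceGroup --query id -o tsv)
az role assignment create --assignee "$CLIENT_ID" --role "Network Contributor" --scope "$RG_SCOPE"
```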
This article shows you how to create a static public IP address and assign it to
2. Create a file named `load-balancer-service.yaml` and copy in the contents of the following YAML file, providing your own public IP address created in the previous step and the node resource group name.

    > [!IMPORTANT]
- > Adding the `loadBalancerIP` property to the load balancer YAML manifest is deprecating following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. To set service annotations, you can use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address, as shown in the example YAML.
+ > Adding the `loadBalancerIP` property to the load balancer YAML manifest is being deprecated following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. To set service annotations, you can either use `service.beta.kubernetes.io/azure-pip-name` for the public IP name, or use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address, as shown in the example YAML.
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
-    service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group>
-    service.beta.kubernetes.io/azure-load-balancer-ipv4: <public IP address>
-    service.beta.kubernetes.io/azure-pip-name: <public IP Name>
+    service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup
+    service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP
  name: azure-load-balancer
spec:
  type: LoadBalancer
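Once the manifest is saved, applying it and confirming the address assignment is standard kubectl (the service name comes from the manifest above):

```azurecli-interactive
kubectl apply -f load-balancer-service.yaml
# EXTERNAL-IP should show the static public IP once provisioning completes.
kubectl get service azure-load-balancer
```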
This article shows you how to create a static public IP address and assign it to
kind: Service
metadata:
  annotations:
-    service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group>
-    service.beta.kubernetes.io/azure-load-balancer-ipv4: <public IP address>
-    service.beta.kubernetes.io/azure-pip-name: <public IP Name>
+    service.beta.kubernetes.io/azure-load-balancer-resource-group: myNetworkResourceGroup
+    service.beta.kubernetes.io/azure-pip-name: myAKSPublicIP
    service.beta.kubernetes.io/azure-dns-label-name: <unique-service-label>
  name: azure-load-balancer
spec:
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
In preview, create and use an API Center in the Azure portal for the following:
For more information about the information you can manage and the capabilities in API Center, see [Key concepts](key-concepts.md).
-## Preview limitations
+## Available regions
* In preview, API Center is available in the following Azure regions:
  * Australia East
api-center Set Up Api Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md
If you haven't already, you need to register the **Microsoft.ApiCenter** resourc
1. Enter a **Name** for your API center. It must be unique in your subscription.
- 1. In **Region**, select one of the [available regions](overview.md#preview-limitations) for API Center preview.
+ 1. In **Region**, select one of the [available regions](overview.md#available-regions) for API Center preview.
1. Optionally, on the **Tags** tab, add one or more name/value pairs to help you categorize your Azure resources.
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md
Title: Policies in Azure API Management | Microsoft Docs
-description: Learn about policies in API Management, a way for API publishers to change API behavior through configuration. Policies are statements that run sequentially on the request or response of an API.
+ Title: Policies in Azure API Management
+description: Introduction to API Management policies, which change API behavior through configuration. Policy statements run sequentially on an API request or response.
documentationcenter: '' Previously updated : 03/07/2023 Last updated : 10/18/2023
By placing policy statements in the `on-error` section, you can:
* Inspect and customize the error response using the `set-body` policy.
* Configure what happens if an error occurs.
-For more information, see [Error handling in API Management policies](./api-management-error-handling-policies.md)
+For more information, see [Error handling in API Management policies](./api-management-error-handling-policies.md).
## Policy expressions
Unless the policy specifies otherwise, [policy expressions](api-management-polic
Each expression has access to the implicitly provided `context` variable and an allowed subset of .NET Framework types.
-Policy expressions provide a sophisticated means to control traffic and modify API behavior without requiring you to write specialized code or modify backend services. Some policies are based on policy expressions, such as the [Control flow][Control flow] and [Set variable][Set variable]. For more information, see [Advanced policies][Advanced policies].
+Policy expressions provide a sophisticated means to control traffic and modify API behavior without requiring you to write specialized code or modify backend services. Some policies are based on policy expressions, such as [Control flow][Control flow] and [Set variable][Set variable]. For more information, see [Advanced policies][Advanced policies].
## Scopes
When configuring a policy, you must first select the scope at which the policy a
### Things to know

* For fine-grained control for different API consumers, you can configure policy definitions at more than one scope
-* Not all policies can be applied at each scope and policy section
-* When configuring policy definitions at more than one scope, you control the policy evaluation order in each policy section by placement of the `base` element
+* Not all policies are supported at each scope and policy section
+* When configuring policy definitions at more than one scope, you control policy inheritance and the policy evaluation order in each policy section by placement of the `base` element
+* Policies applied to API requests are also affected by the request context, including the presence or absence of a subscription key used in the request, the API or product scope of the subscription key, and whether the API or product requires a subscription.
-For more information, see [Set or edit policies](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order).
+ [!INCLUDE [api-management-product-policy-alert](../../includes/api-management-product-policy-alert.md)]
+
+For more information, see:
+
+* [Set or edit policies](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order)
+* [Subscriptions in API Management](api-management-subscriptions.md)
### GraphQL resolver policies
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
Each API Management instance comes with a built-in all-access subscription that
> [!WARNING] > The all-access subscription enables access to every API in the API Management instance and should only be used by authorized users. Never use this subscription for routine API access or embed the all-access subscription key in client apps.
-> [!NOTE]
-> If you're using an API-scoped subscription or the all-access subscription, any [policies](api-management-howto-policies.md) configured at the product scope aren't applied to requests from that subscription.
### Standalone subscriptions
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
Title: How to set or edit Azure API Management policies | Microsoft Docs
-description: Learn how to use the Azure portal to set or edit policies in an Azure API Management instance. Policies are defined in XML documents that contain a sequence of statements that are run sequentially on the request or response of an API.
+description: Configure policies at different scopes in an Azure API Management instance using the policy editor in the Azure portal.
documentationcenter: '' Previously updated : 03/01/2022 Last updated : 10/18/2023
Product scope is configured for a selected product.
1. Select **Save** to propagate changes to the API Management gateway immediately.

### API scope

API scope is configured for **All operations** of the selected API.
To modify the policy evaluation order using the policy editor:
A globally scoped policy has no parent scope, and using the `base` element in it has no effect.
-## Next steps
+## Related content
For more information about working with policies, see:
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
description: Learn how to configure a custom container in Azure App Service. Thi
Previously updated : 10/12/2023 Last updated : 10/25/2023 zone_pivot_groups: app-service-containers-windows-linux
App Service logs actions by the Docker host as well as activities from within t
There are several ways to access Docker logs:

- [In the Azure portal](#in-azure-portal)
-- [From the Kudu console](#from-the-kudu-console)
+- [From Kudu](#from-kudu)
- [With the Kudu API](#with-the-kudu-api)
- [Send logs to Azure monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)
There are several ways to access Docker logs:
Docker logs are displayed in the portal, in the **Container Settings** page of your app. The logs are truncated, but you can download all the logs by clicking **Download**.
-### From the Kudu console
+### From Kudu
Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and click the **LogFiles** folder to see the individual log files. To download the entire **LogFiles** directory, click the **Download** icon to the left of the directory name. You can also access this folder using an FTP client.
-In the console terminal, you can't access the `C:\home\LogFiles` folder by default because persistent shared storage is not enabled. To enable this behavior in the console terminal, [enable persistent shared storage](#use-persistent-shared-storage).
+In the SSH terminal, you can't access the `C:\home\LogFiles` folder by default because persistent shared storage is not enabled. To enable this behavior, [enable persistent shared storage](#use-persistent-shared-storage).
If you try to download the Docker log that is currently in use by using an FTP client, you might get an error because of a file lock.
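The Kudu API route for Docker logs isn't shown in this excerpt. A hedged `curl` sketch — the `/api/logs/docker` path is an assumption about the Kudu API, authenticated with your deployment credentials:

```azurecli-interactive
# Assumed Kudu API route: returns JSON metadata, including an href for each
# Docker log file, which you can then download individually.
curl -u '<deployment-username>' "https://<app-name>.scm.azurewebsites.net/api/logs/docker"
```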
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WE
> [!NOTE] > Updating the app setting triggers automatic restart, causing minimal downtime. For a production app, consider swapping it into a staging slot, change the app setting in the staging slot, and then swap it back into production.
-Verify your adjusted number by going to the Kudu Console (`https://<app-name>.scm.azurewebsites.net`) and typing in the following commands using PowerShell. Each command outputs a number.
+Verify your adjusted number by opening an SSH session from the portal or via the Kudu portal (`https://<app-name>.scm.azurewebsites.net/webssh/host`) and running the following commands in PowerShell. Each command outputs a number.
```PowerShell
Get-ComputerInfo | ft CsNumberOfLogicalProcessors # Total number of enabled logical processors. Disabled processors are excluded.
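The name of the app setting being changed is truncated in the excerpt above. Assuming it's `WEBSITE_CPU_CORES` (the setting that limits the processors visible to Windows container apps), a hedged Azure CLI equivalent:

```azurecli-interactive
# Assumption: the truncated setting is WEBSITE_CPU_CORES; the value 2 is illustrative.
az webapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings WEBSITE_CPU_CORES=2
```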
application-gateway Proxy Buffers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/proxy-buffers.md
For reference, visit [Azure SDK for .NET](/dotnet/api/microsoft.azure.management
## Limitations

- API version 2020-01-01 or later should be used to configure buffers.
- Currently, these changes are not supported through Portal and PowerShell.
-- Request and Response Buffers can only be disabled for the WAF v2 SKU if request body checking is disabled. Otherwise, Request and Response Buffers cannot be disabled for the WAF v2 SKU.
+- Request buffering cannot be disabled if you are running the WAF SKU of Application Gateway. The WAF requires the full request to be buffered as part of its processing; therefore, even if you disable request buffering within Application Gateway, the WAF still buffers the request. Response buffering is not affected by the WAF.
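Because Portal and PowerShell aren't supported for this configuration, changes typically go through an ARM template, the REST API, or the Azure CLI's generic update arguments. A minimal sketch, assuming the ARM property paths are `globalConfiguration.enableRequestBuffering` and `globalConfiguration.enableResponseBuffering`:

```azurecli-interactive
# Assumed property paths; verify them against your API version before relying on this.
az network application-gateway update \
    --name <gateway-name> \
    --resource-group <resource-group> \
    --set globalConfiguration.enableRequestBuffering=false globalConfiguration.enableResponseBuffering=false
```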
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrad
Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
-For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded.
+For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded.
[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
The Arc resource bridge version is tied to the versions of underlying components
## Supported versions
-Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.10, then the typical n-3 supported versions are:
+Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported, starting from appliance version 1.0.15 and onward. An Arc resource bridge with an appliance version earlier than 1.0.15 must be upgraded or redeployed to be at minimum on appliance version 1.0.15 to be in a production support window.
-- Current version: 1.0.10
-- n-1 version: 1.0.9
-- n-2 version: 1.0.8
-- n-3 version: 1.0.7
+For example, if the current version is 1.0.18, then the typical n-3 supported versions are:
-There might be instances where supported versions are not sequential. For example, version 1.0.11 is released and later found to contain a bug. A hot fix is released in version 1.0.12 and version 1.0.11 is removed. In this scenario, n-3 supported versions become 1.0.12, 1.0.10, 1.0.9, 1.0.8.
+- Current version: 1.0.18
+- n-1 version: 1.0.17
+- n-2 version: 1.0.16
+- n-3 version: 1.0.15
+
+There might be instances where supported versions are not sequential. For example, version 1.0.18 is released and later found to contain a bug. A hot fix is released in version 1.0.19 and version 1.0.18 is removed. In this scenario, n-3 supported versions become 1.0.19, 1.0.17, 1.0.16, 1.0.15.
Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month, although it's possible that delays could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
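To check where an appliance sits in that support window, a minimal sketch, assuming the `az arcappliance show` command surface (the version appears in the command output; exact property names can vary by CLI version):

```azurecli-interactive
# Hedged sketch: inspect the appliance resource and read its version from the output.
az arcappliance show --resource-group <resource-group> --name <appliance-name>
```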
To see the current version of an Arc resource bridge appliance, run `az arcappli
- Learn about [Arc resource bridge maintenance operations](maintenance.md).
- Learn about [troubleshooting Arc resource bridge](troubleshoot-resource-bridge.md).
azure-arc Agent Overview Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/agent-overview-scvmm.md
- Title: Overview of Azure Connected Machine agent to manage Windows and Linux machines
-description: This article provides a detailed overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments.
- Previously updated : 10/20/2023
-# Overview of Azure Connected Machine agent to manage Windows and Linux machines
-
-The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
-
-## Agent components
--
-The Azure Connected Machine agent package contains several logical components bundled together:
-
-* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
-
-* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance.
-
- Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine:
-
- * An Azure Policy assignment that targets disconnected machines is unaffected.
- * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied.
- * Assignments are deleted after 14 days, and aren't reassigned to the machine after the 14-day period.
-
-* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`.
-
->[!NOTE]
-> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines.
-
-## Agent resources
-
-The following information describes the directories and user accounts used by the Azure Connected Machine agent.
-
-### Windows agent installation details
-
-The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent).
-Installing the Connected Machine agent for Windows applies the following system-wide configuration changes:
-
-* The installation process creates the following folders during setup.
-
- | Directory | Description |
- |--|-|
- | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.|
- | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
- | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.|
- | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
- | %SYSTEMDRIVE%\packages | Extension package executables |
-
-* Installing the agent creates the following Windows services on the target machine.
-
- | Service name | Display name | Process name | Description |
- |--|--|--|-|
- | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens |
- | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. |
- | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. |
-
-* Agent installation creates the following virtual service account.
-
- | Virtual Account | Description |
- ||-|
- | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
-
- > [!TIP]
- > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
-
-* Agent installation creates the following local security group.
-
- | Security group name | Description |
- ||-|
- | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity |
-
-* Agent installation creates the following environment variables
-
- | Name | Default value | Description |
- ||||
- | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` |
- | IMDS_ENDPOINT | `http://localhost:40342` |
-
-* There are several log files available for troubleshooting, described in the following table.
-
- | Log | Description |
- |--|-|
- | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. |
- | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. |
- | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. |
- | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
- | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. |
-
-* The process creates the local security group **Hybrid agent extension applications**.
-
-* After uninstalling the agent, the following artifacts remain.
-
- * %ProgramData%\AzureConnectedMachineAgent\Log
- * %ProgramData%\AzureConnectedMachineAgent
- * %ProgramData%\GuestConfig
- * %SystemDrive%\packages
-
-### Linux agent installation details
-
-The preferred package format for the distribution (`.rpm` or `.deb`) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/) provides the Connected Machine agent for Linux. The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent.
-
-Installing, upgrading, or removing the Connected Machine agent doesn't require a server restart.
-
-Installing the Connected Machine agent for Linux applies the following system-wide configuration changes.
-
-* Setup creates the following installation folders.
-
- | Directory | Description |
- |--|-|
- | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. |
- | /opt/GC_Ext/ | Extension service executables. |
- | /opt/GC_Service/ | Guest configuration (policy) service executables. |
- | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* Installing the agent creates the following daemons.
-
- | Service name | Display name | Process name | Description |
- |--|--|--|-|
- | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. |
- | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. |
-
-* There are several log files available for troubleshooting, described in the following table.
-
- | Log | Description |
- |--|-|
- | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. |
- | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. |
- | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. |
- | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
- | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. |
-
-* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`.
-
- | Name | Default value | Description |
- |||-|
- | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` |
- | IMDS_ENDPOINT | `http://localhost:40342` |
-
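As an aside, the `IDENTITY_ENDPOINT` above is what the agent's managed identity flow talks to. A hedged bash sketch of the challenge/response token acquisition on Linux — the header parsing and the token-file path handling are assumptions, and the challenge file is readable only by privileged users:

```azurecli-interactive
# Hedged sketch: the first request returns a Www-Authenticate header naming a
# local challenge token file; read it and retry with Basic authorization.
URL="http://localhost:40342/metadata/identity/oauth2/token?api-version=2020-06-01&resource=https%3A%2F%2Fmanagement.azure.com"
CHALLENGE_FILE=$(curl -s -D - -o /dev/null -H "Metadata: true" "$URL" | grep -i "www-authenticate" | cut -d "=" -f 2 | tr -d "[:cntrl:]")
curl -s -H "Metadata: true" -H "Authorization: Basic $(cat "$CHALLENGE_FILE")" "$URL"
```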
-* After uninstalling the agent, the following artifacts remain.
-
- * /var/opt/azcmagent
- * /var/lib/GuestConfig
-
-## Agent resource governance
-
-The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
-
-* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies.
-* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply:
-
- | Extension type | Operating system | CPU limit |
- | -- | - | |
- | AzureMonitorLinuxAgent | Linux | 60% |
- | AzureMonitorWindowsAgent | Windows | 100% |
- | AzureSecurityLinuxAgent | Linux | 30% |
- | LinuxOsUpdateExtension | Linux | 60% |
- | MDE.Linux | Linux | 60% |
- | MicrosoftDnsAgent | Windows | 100% |
- | MicrosoftMonitoringAgent | Windows | 60% |
- | OmsAgentForLinux | Linux | 60%|
-
-During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources:
-
-| | Windows | Linux |
-| | - | -- |
-| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% |
-| **Memory usage** | 57 MB | 42 MB |
-
-The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. Actual agent performance and resource consumption will vary based on the hardware and software configuration of your servers.
-
-## Instance metadata
-
-Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically:
-
-* Operating system name, type, and version
-* Computer name
-* Computer manufacturer and model
-* Computer fully qualified domain name (FQDN)
-* Domain name (if joined to an Active Directory domain)
-* Active Directory and DNS fully qualified domain name (FQDN)
-* UUID (BIOS ID)
-* Connected Machine agent heartbeat
-* Connected Machine agent version
-* Public key for managed identity
-* Policy compliance status and details (if using guest configuration policies)
-* SQL Server installed (Boolean value)
-* Cluster resource ID (for Azure Stack HCI nodes)
-* Hardware manufacturer
-* Hardware model
-* CPU family, socket, physical core and logical core counts
-* Total physical memory
-* Serial number
-* SMBIOS asset tag
-* Cloud provider
-
-The agent requests the following metadata information from Azure:
-
-* Resource location (region)
-* Virtual machine ID
-* Tags
-* Microsoft Entra managed identity certificate
-* Guest configuration policy assignments
-* Extension requests - install, update, and delete.
-
-> [!NOTE]
-> Azure Arc-enabled servers doesn't store/process customer data outside the region the customer deploys the service instance in.
-
-## Next steps
-- [Connect your SCVMM server to Azure Arc](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc).
-- [Install Arc agent at scale for your SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale).
-- [Install Arc agent using a script for SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script).
azure-arc Azure Arc Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/azure-arc-resource-bridge.md
- Title: Azure Arc resource bridge
-description: Learn about Azure Arc resource bridge.
- Previously updated : 10/20/2023
--
-#Customer intent: As an IT infrastructure admin, I want to know about the Azure Arc resource bridge that facilitates the Arc connection between SCVMM server and Azure
--
-# Azure Arc resource bridge
-
-Azure Arc resource bridge (preview) is a Microsoft managed product that is part of the core Azure Arc platform. It is designed to host other Azure Arc services. The resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](https://learn.microsoft.com/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/)).
-
-Azure Arc resource bridge is a Kubernetes management cluster installed on the customer's on-premises infrastructure. The resource bridge is provided with the credentials to the infrastructure control plane that allows it to apply guest management services on the on-premises resources. Arc resource bridge enables projection of on-premises resources as ARM resources and management from ARM as *arc-enabled* Azure resources.
-Arc resource bridge delivers the following benefits:
-- Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster.
-- Fully supported by Microsoft, including updates to core components.
-- Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command Line Interface (CLI).
-
-## Overview
-Azure Arc resource bridge (preview) hosts other components such as [custom locations](custom-locations.md), cluster extensions, and other Azure Arc agents in order to deliver the level of functionality with the private cloud infrastructures it supports. This complex system is composed of three layers:
-- The base layer represents the resource bridge and the Arc agents.
-- The platform layer, which includes the custom location and cluster extension.
-- The solution layer for each service supported by Arc resource bridge (that is, the different types of VMs).
-
-Azure Arc resource bridge (preview) can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge (preview):
-- Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports three:
- - Azure Arc-enabled VMware
- - Azure Arc-enabled Azure Stack HCI
- - Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)
-- Custom locations: A deployment target where you can create Azure resources. It maps to different resources for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Arc-enabled Azure Stack HCI, it maps to an HCI cluster instance.-
-Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes that *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM.
--
-Some resources are unique to the infrastructure. For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network, and template to create a VM.
--
-To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource is not healthy, it can impact the health of the related resources that are projected in Azure. For example, if the resource bridge is deleted by accident, all the resources projected in Azure by the resource bridge are impacted. The on-premises VMs in your on-premises private cloud are not impacted, as they are running on vCenter, but you won't be able to start or stop the VMs from Azure. It is not recommended to directly manage or modify the resource bridge using any on-premises applications.
-
-## Benefits of Azure Arc resource bridge (preview)
-
-Through Azure Arc resource bridge (preview), you can connect an SCVMM management server to Azure by deploying Azure Arc resource bridge (preview) in the VMM environment. Azure Arc resource bridge (preview) enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and perform various operations on them:
-- Start, stop, and restart a virtual machine.
-- Control access and add Azure tags.
-- Add, remove, and update network interfaces.
-- Add, remove, and update disks and update VM size (CPU cores and memory).
-- Enable guest management.
-- Install extensions.
-
-### Regional resiliency
-While Azure has a number of redundancy features at every level of failure, if a service impacting event occurs, the current release of Azure Arc resource bridge does not support cross-region failover or other resiliency capabilities. In the event of the service becoming unavailable, the on-premises VMs continue to operate unaffected.
-Management from Azure is unavailable during that service outage.
-
-### Supported versions
-Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays might occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. To learn more about upgrade options, visit [Upgrade Arc resource bridge](https://learn.microsoft.com/azure/azure-arc/resource-bridge/upgrade).
-
-## Next steps
-Learn more about [Arc-enabled SCVMM](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/overview)
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/custom-locations.md
- Title: Custom locations for Arc-enabled SCVMM
-description: Learn about Custom locations.
- Previously updated : 10/20/2023
--
-#Customer intent: As an IT infrastructure admin, I want to know about the concepts behind Azure Arc
--
-# Custom locations for Arc-enabled SCVMM
-
-As an extension of the Azure location construct, a *custom location* provides a reference as a deployment target that administrators can set up, and users can point to, when creating an Azure resource. It abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization.
-
-## Custom location for on-premises SCVMM management server
-
-Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](https://learn.microsoft.com/azure/role-based-access-control/overview), an administrator or operator can determine which users have access to create resource instances on the compute, storage, networking, and other SCVMM resources to deploy and manage VMs.
--
-For example, an IT administrator could create a custom location **Contoso-vmm** representing the SCVMM management server in your organization's Data Center. The operator can then assign Azure RBAC permissions to application developers on this custom location so that they can deploy virtual machines. The developers can then deploy these virtual machines without having to know details of the SCVMM management server.
--
-Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the SCVMM resources.
-
-## Next steps
-[Connect your SCVMM Server to Azure Arc](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc)
-
azure-arc Azure Arc Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md
- Title: Azure Arc agent
-description: Learn about Azure Arc agent
- Previously updated : 10/23/2023
-# Azure Arc agent
-
-The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
-
-## Agent components
--
-The Azure Connected Machine agent package contains several logical components bundled together:
-
-* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
-
-* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance.
-
- Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine:
-
- * An Azure Policy assignment that targets disconnected machines is unaffected.
- * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied.
- * Assignments are deleted after 14 days and aren't reassigned to the machine after the 14-day period.
-
-* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`.
-
->[!NOTE]
-> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines.
-
-## Agent resources
-
-The following information describes the directories and user accounts used by the Azure Connected Machine agent.
-
-### Windows agent installation details
-
-The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent).
-Installing the Connected Machine agent for Windows applies the following system-wide configuration changes:
-
-* The installation process creates the following folders during setup.
-
- | Directory | Description |
- |--|-|
- | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.|
- | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
- | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.|
- | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
- | %SYSTEMDRIVE%\packages | Extension package executables. |
-
-* Installing the agent creates the following Windows services on the target machine.
-
- | Service name | Display name | Process name | Description |
- |--|--|--|-|
- | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens |
- | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. |
- | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. |
-
-* Agent installation creates the following virtual service account.
-
- | Virtual Account | Description |
- ||-|
- | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
-
- > [!TIP]
- > This account requires the *Log on as a service* right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to **NT SERVICE\\himds** or **NT SERVICE\\ALL SERVICES** to allow the agent to function.
-
-* Agent installation creates the following local security group.
-
- | Security group name | Description |
- ||-|
- | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity |
-
-* Agent installation creates the following environment variables
-
- | Name | Default value | Description |
- ||||
- | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` |
- | IMDS_ENDPOINT | `http://localhost:40342` |
-
-* There are several log files available for troubleshooting, described in the following table.
-
- | Log | Description |
- |--|-|
- | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. |
- | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. |
- | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. |
- | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
- | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. |
-
-* The process creates the local security group **Hybrid agent extension applications**.
-
-* After uninstalling the agent, the following artifacts remain:
-
- * %ProgramData%\AzureConnectedMachineAgent\Log
- * %ProgramData%\AzureConnectedMachineAgent
- * %ProgramData%\GuestConfig
- * %SystemDrive%\packages
-
-### Linux agent installation details
-
-The preferred package format for the distribution (`.rpm` or `.deb`) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/) provides the Connected Machine agent for Linux. The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent.
-
-Installing, upgrading, or removing the Connected Machine agent doesn't require a server restart.
-
-Installing the Connected Machine agent for Linux applies the following system-wide configuration changes.
-
-* Setup creates the following installation folders.
-
- | Directory | Description |
- |--|-|
- | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. |
- | /opt/GC_Ext/ | Extension service executables. |
- | /opt/GC_Service/ | Guest configuration (policy) service executables. |
- | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* Installing the agent creates the following daemons.
-
- | Service name | Display name | Process name | Description |
- |--|--|--|-|
- | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. |
- | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. |
-
-* There are several log files available for troubleshooting, described in the following table.
-
- | Log | Description |
- |--|-|
- | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. |
- | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. |
- | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. |
- | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
- | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. |
-
-* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`.
-
- | Name | Default value | Description |
- |||-|
- | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` |
- | IMDS_ENDPOINT | `http://localhost:40342` |
-
-* After uninstalling the agent, the following artifacts remain:
-
- * /var/opt/azcmagent
- * /var/lib/GuestConfig
-
-## Agent resource governance
-
-The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
-
-* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies.
-* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply:
-
- | Extension type | Operating system | CPU limit |
- | -- | - | |
- | AzureMonitorLinuxAgent | Linux | 60% |
- | AzureMonitorWindowsAgent | Windows | 100% |
- | AzureSecurityLinuxAgent | Linux | 30% |
- | LinuxOsUpdateExtension | Linux | 60% |
- | MDE.Linux | Linux | 60% |
- | MicrosoftDnsAgent | Windows | 100% |
- | MicrosoftMonitoringAgent | Windows | 60% |
- | OmsAgentForLinux | Linux | 60%|
-
-During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources:
-
-| | Windows | Linux |
-| | - | -- |
-| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% |
-| **Memory usage** | 57 MB | 42 MB |
-
-The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. The actual agent performance and resource consumption vary based on the hardware and software configuration of your servers.
-
-## Instance metadata
-
-Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers, specifically:
-
-* Operating system name, type, and version
-* Computer name
-* Computer manufacturer and model
-* Computer fully qualified domain name (FQDN)
-* Domain name (if joined to an Active Directory domain)
-* Active Directory and DNS fully qualified domain name (FQDN)
-* UUID (BIOS ID)
-* Connected Machine agent heartbeat
-* Connected Machine agent version
-* Public key for managed identity
-* Policy compliance status and details (if using guest configuration policies)
-* SQL Server installed (Boolean value)
-* Cluster resource ID (for Azure Stack HCI nodes)
-* Hardware manufacturer
-* Hardware model
-* CPU family, socket, physical core and logical core counts
-* Total physical memory
-* Serial number
-* SMBIOS asset tag
-* Cloud provider
-* Amazon Web Services (AWS) metadata, when running in AWS:
- * Account ID
- * Instance ID
- * Region
-* Google Cloud Platform (GCP) metadata, when running in GCP:
- * Instance ID
- * Image
- * Machine type
- * Project ID
- * Project number
- * Service accounts
- * Zone
-
-The agent requests the following metadata information from Azure:
-
-* Resource location (region)
-* Virtual machine ID
-* Tags
-* Microsoft Entra managed identity certificate
-* Guest configuration policy assignments
-* Extension requests - install, update, and delete.
-
-> [!NOTE]
-> Azure Arc-enabled servers don't store/process customer data outside the region the customer deploys the service instance in.
-
-## Next steps
-- [Connect VMware vCenter Server to Azure Arc](quick-start-connect-vcenter-to-arc-using-script.md).
-- [Install Arc agent at scale for your VMware VMs](enable-guest-management-at-scale.md).
azure-arc Azure Arc Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-resource-bridge.md
- Title: Azure Arc resource bridge (preview)
-description: Learn about Azure Arc resource bridge (preview)
- Previously updated : 10/23/2023
-#Customer intent: As an IT infrastructure admin, I want to know about the Azure Arc resource bridge (preview) that facilitates the Arc connection between vCenter server and Azure.
--
-# Azure Arc resource bridge (preview)
-
-Azure Arc resource bridge (preview) is a Microsoft managed product that is part of the core Azure Arc platform. It's designed to host other Azure Arc services. The resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware [(Arc-enabled VMware vSphere)](/azure/azure-arc/vmware-vsphere), and System Center Virtual Machine Manager (SCVMM) [(Arc-enabled SCVMM preview)](/azure/azure-arc/system-center-virtual-machine-manager).
-
-Azure Arc resource bridge (preview) is a Kubernetes management cluster installed on the customer's on-premises infrastructure. The resource bridge is provided with the credentials to the infrastructure control plane, which allows it to apply guest management services to the on-premises resources. Arc resource bridge enables you to project on-premises resources into Azure Resource Manager (ARM) and manage them from ARM as **Arc-enabled** Azure resources.
-
-Arc resource bridge delivers the following benefits:
-
-- Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster.
-- Fully supported by Microsoft, including updates to core components.
-- Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command Line Interface (CLI).
-
-## Overview
-
-Azure Arc resource bridge (preview) hosts other components such as [custom locations](custom-locations.md), cluster extensions, and other Azure Arc agents to deliver functionality for the private cloud infrastructures it supports.
-
-This complex system is composed of three layers:
-
-- The base layer, which represents the resource bridge and the Arc agents.
-- The platform layer, which includes the custom location and cluster extension.
-- The solution layer for each service supported by Arc resource bridge (that is, the different types of VMs).
-
-Azure Arc resource bridge (preview) can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge (preview):
-
-- Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports three cluster extensions:
- - Azure Arc-enabled VMware
- - Azure Arc-enabled Azure Stack HCI
- - Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)
-
-- Custom locations: A deployment target where you can create Azure resources. It maps to different resources for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Arc-enabled Azure Stack HCI, it maps to an HCI cluster instance.
-
-Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes that *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM.
-
-Some resources are unique to the infrastructure. For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network, and template to create a VM.
-
-To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource isn't healthy, it can affect the health of the related resources projected in Azure. For example, if the resource bridge is deleted by accident, all the resources projected in Azure by the resource bridge are affected. The VMs in your on-premises private cloud aren't affected, because they run on vCenter, but you can't start or stop them from Azure. We don't recommend directly managing or modifying the resource bridge using any on-premises applications.
-
-## Benefits of Azure Arc resource bridge (preview)
-
-Through Azure Arc resource bridge (preview), you can represent a subset of your vCenter resources in Azure to enable self-service by registering resource pools, networks, and VM templates. Integration with Azure allows you to manage access to your vCenter resources in Azure to maintain a secure environment. You can also perform various operations on the VMware virtual machines that are enabled by Arc-enabled VMware vSphere:
-
-- Start, stop, and restart a virtual machine
-- Control access and add Azure tags
-- Add, remove, and update network interfaces
-- Add, remove, and update disks and update VM size (CPU cores and memory)
-- Enable guest management
-- Install extensions
-
-## Regional resiliency
-
-While Azure has many redundancy features at every level of failure, if a service-impacting event occurs, the current release of Azure Arc resource bridge (preview) doesn't support cross-region failover or other resiliency capabilities. If the service becomes unavailable, the on-premises VMs continue to operate unaffected.
-Management from Azure is unavailable during that service outage.
-
-## Supported versions
-
-Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays can occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. To learn more about upgrade options, visit [Upgrade Arc resource bridge](../resource-bridge/upgrade.md).
-
-## Next steps
-
-[Learn more about Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere).
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/custom-locations.md
- Title: Custom locations for VMware vSphere
-description: Learn about custom locations for VMware vSphere
- Previously updated : 10/23/2023
-#Customer intent: As an IT infrastructure admin, I want to know about the concepts behind Azure Arc.
--
-# Custom locations for VMware vSphere
-
-As an extension of the Azure location construct, a *custom location* provides a reference to a deployment target that administrators can set up and users can point to when creating an Azure resource. It abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization.
-
-## Custom location for on-premises vCenter server
-
-Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview), an administrator or operator can determine which users have access to create resource instances on the compute, storage, networking, and other vCenter resources to deploy and manage VMs.
-
-For example, an IT administrator could create a custom location **Contoso-vCenter** representing the vCenter server in the organization's data center. The operator can then assign Azure RBAC permissions to application developers on this custom location so that they can deploy virtual machines. The developers can then deploy these virtual machines without having to know details of the vCenter management server.
-
-Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the VMware resources.
-
-## Next steps
-
-[Connect VMware vCenter Server to Azure Arc](./quick-start-connect-vcenter-to-arc-using-script.md).
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
When custom application logs are sent directly, the host no longer emits them
## Configure categories
-The Azure Functions logger includes a *category* for every log. The category indicates which part of the runtime code or your function code wrote the log. Categories differ between version 1.x and later versions. The following chart describes the main categories of logs that the runtime creates:
+The Azure Functions logger includes a *category* for every log. The category indicates which part of the runtime code or your function code wrote the log. Categories differ between version 1.x and later versions.
+
+Category names are assigned differently in Functions compared to other .NET frameworks. For example, when you use `ILogger<T>` in ASP.NET, the category is the name of the generic type. C# functions also use `ILogger<T>`, but instead of setting the generic type name as a category, the runtime assigns categories based on the source. For example:
+
+
++ Entries related to running a function are assigned a category of `Function.<FUNCTION_NAME>`.
++ Entries created by user code inside the function, such as when calling `logger.LogInformation()`, are assigned a category of `Function.<FUNCTION_NAME>.User`.
+
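For illustration, here's a minimal in-process sketch of how these categories get assigned (the function name `Hello` and its HTTP trigger are illustrative, not from this article):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Written by user code, so the runtime assigns this entry the
        // category "Function.Hello.User". Entries the runtime itself
        // writes about this execution use the category "Function.Hello".
        log.LogInformation("Handling a request.");
        return new OkResult();
    }
}
```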
+The following chart describes the main categories of logs that the runtime creates:
# [v2.x+](#tab/v2)
The **Table** column indicates to which table in Application Insights the log is
For each category, you indicate the minimum log level to send. The *host.json* settings vary depending on the [Functions runtime version](functions-versions.md).
-The example below defines logging based on the following rules:
-
-+ For logs of `Host.Results` or `Function`, only log events at `Error` or a higher level.
-+ For logs of `Host.Aggregator`, log all generated metrics (`Trace`).
-+ For all other logs, including user logs, log only `Information` level and higher events.
-+ For `fileLoggingMode` the default is `debugOnly`. The value `always` should only be used for short periods of time to review logs in the filesystem. Revert this setting when you are done debugging.
+The examples below define logging based on the following rules:
+
++ The default logging level is set to `Warning` to prevent [excessive logging](#solutions-with-high-volume-of-telemetry) for unanticipated categories.
++ `Host.Aggregator` and `Host.Results` are set to lower levels. Setting these to too high a level (especially higher than `Information`) can result in loss of metrics and performance data.
++ Logging for function runs is set to `Information`. This can be [overridden](functions-host-json.md#override-hostjson-values) in local development to `Debug` or `Trace`, when needed.
The example below defines logging based on the following rules:
"logging": { "fileLoggingMode": "debugOnly", "logLevel": {
- "default": "Information",
- "Host.Results": "Error",
- "Function": "Error",
- "Host.Aggregator": "Trace"
+ "default": "Warning",
+ "Host.Aggregator": "Trace",
+ "Host.Results": "Information",
+ "Function": "Information"
} } }
The example below defines logging based on the following rules:
{ "logger": { "categoryFilter": {
- "defaultLevel": "Information",
+ "defaultLevel": "Warning",
"categoryLevels": {
- "Host.Results": "Error",
- "Function": "Error",
- "Host.Aggregator": "Trace"
+ "Host.Results": "Information",
+ "Host.Aggregator": "Trace",
+ "Function": "Information"
} } }
azure-functions Durable Functions Dotnet Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-dotnet-entities.md
Title: Developer's Guide to Durable Entities in .NET - Azure Functions
description: How to work with durable entities in .NET with the Durable Functions extension for Azure Functions.
- Previously updated : 06/30/2021
+ Last updated : 10/24/2023
ms.devlang: csharp
public class Counter
The `Run` function contains the boilerplate required for using the class-based syntax. It must be a *static* Azure Function. It executes once for each operation message that is processed by the entity. When `DispatchAsync<T>` is called and the entity isn't already in memory, it constructs an object of type `T` and populates its fields from the last persisted JSON found in storage (if any). Then it invokes the method with the matching name.
-The `EntityTrigger` Function, `Run` in this sample, does not need to reside within the Entity class itself. It can reside within any valid location for an Azure Function: inside the top-level namespace, or inside a top-level class. However, if nested deeper (e.g, the Function is declared inside a *nested* class), then this Function will not be recognized by the latest runtime.
+The `EntityTrigger` Function, `Run` in this sample, doesn't need to reside within the Entity class itself. It can reside within any valid location for an Azure Function: inside the top-level namespace, or inside a top-level class. However, if nested deeper (e.g., the Function is declared inside a *nested* class), then this Function won't be recognized by the latest runtime.
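As a rough sketch of that boilerplate in the in-process model (the `Counter` state and operations are condensed for brevity):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public class Counter
{
    public int CurrentValue { get; set; }

    public void Add(int amount) => this.CurrentValue += amount;

    // Entry point: runs once per operation message, rehydrates the
    // state from storage, and dispatches to the matching method.
    [FunctionName(nameof(Counter))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<Counter>();
}
```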
> [!NOTE] > The state of a class-based entity is **created implicitly** before the entity processes an operation, and can be **deleted explicitly** in an operation by calling `Entity.Current.DeleteState()`.
Deleting an entity in the isolated model is accomplished by setting the entity s
- When deriving from `ITaskEntity` or using [function based syntax](#function-based-syntax), delete is accomplished by calling `TaskEntityOperation.State.SetState(null)`. - When deriving from `TaskEntity<TState>`, delete is implicitly defined. However, it can be overridden by defining a method `Delete` on the entity. State can also be deleted from any operation via `this.State = null`.
- - To delete via setting state to null will require `TState` to be nullable.
- - The implicitly defined delete operation will delete non-nullable `TState`.
-- When using a POCO as your state (not deriving from `TaskEntity<TState>`), delete is implicitly defined. It is possible to override the delete operation by defining a method `Delete` on the POCO. However, there is no way to set state to `null` in the POCO route so the implicitly defined delete operation is the only true delete.
+ - To delete by setting state to null requires `TState` to be nullable.
+ - The implicitly defined delete operation deletes non-nullable `TState`.
+- When using a POCO as your state (not deriving from `TaskEntity<TState>`), delete is implicitly defined. It's possible to override the delete operation by defining a method `Delete` on the POCO. However, there's no way to set state to `null` in the POCO route so the implicitly defined delete operation is the only true delete.
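To make the `TaskEntity<TState>` route concrete, here's a hedged sketch that assumes a nullable state type so that explicit deletion is possible (the `Counter` shape is illustrative):

```csharp
using Microsoft.DurableTask.Entities;

public class Counter : TaskEntity<int?>
{
    public void Add(int amount) => this.State = (this.State ?? 0) + amount;

    // Overrides the implicitly defined delete operation; assigning
    // null to a nullable TState deletes the entity state.
    public void Delete() => this.State = null;
}
```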
Entity classes are POCOs (plain old CLR objects) that require no special supercl
- The class must be constructible (see [Entity construction](#entity-construction)). - The class must be JSON-serializable (see [Entity serialization](#entity-serialization)).
-Also, any method that is intended to be invoked as an operation must satisfy additional requirements:
+Also, any method that is intended to be invoked as an operation must satisfy other requirements:
- An operation must have at most one argument, and must not have any overloads or generic type arguments. - An operation meant to be called from an orchestration using an interface must return `Task` or `Task<T>`.
Operations also have access to functionality provided by the `Entity.Current` co
For example, we can modify the counter entity so it starts an orchestration when the counter reaches 100 and passes the entity ID as an input argument:
-#### [In-Process](#tab/in-process)
+#### [In-process](#tab/in-process)
```csharp public void Add(int amount) {
public void Add(int amount, TaskEntityContext context)
## Accessing entities directly
-Class-based entities can be accessed directly, using explicit string names for the entity and its operations. We provide some examples below; for a deeper explanation of the underlying concepts (such as signals vs. calls) see the discussion in [Access entities](durable-functions-entities.md#access-entities).
+Class-based entities can be accessed directly, using explicit string names for the entity and its operations. This section provides examples. For a deeper explanation of the underlying concepts (such as signals vs. calls), see the discussion in [Access entities](durable-functions-entities.md#access-entities).
> [!NOTE]
-> Where possible, we recommend [Accessing entities through interfaces](#accessing-entities-through-interfaces), because it provides more type checking.
+> Where possible, you should [access entities through interfaces](#accessing-entities-through-interfaces), because interfaces provide more type checking.
### Example: client signals entity
public static async Task<HttpResponseData> GetCounter(
```
-### Example: orchestration first signals, then calls entity
+### Example: orchestration first signals then calls entity
The following orchestration signals a counter entity to increment it, and then calls the same entity to read its latest value.
Besides providing type checking, interfaces are useful for a better separation o
### Example: client signals entity through interface
-#### [In-Process](#tab/in-process)
+#### [In-process](#tab/in-process)
Client code can use `SignalEntityAsync<TEntityInterface>` to send signals to entities that implement `TEntityInterface`. For example: ```csharp
This is currently not supported in the .NET isolated worker.
-### Example: orchestration first signals, then calls entity through proxy
+### Example: orchestration first signals then calls entity through proxy
-#### [In-Process](#tab/in-process)
+#### [In-process](#tab/in-process)
To call or signal an entity from within an orchestration, `CreateEntityProxy` can be used, along with the interface type, to generate a proxy for the entity. This proxy can then be used to call or signal operations:
If only the entity key is specified and a unique implementation can't be found a
As usual, all parameter and return types must be JSON-serializable. Otherwise, serialization exceptions are thrown at runtime.
-We also enforce some additional rules:
+We also enforce the following rules:
* Entity interfaces must be defined in the same assembly as the entity class. * Entity interfaces must only define methods. * Entity interfaces must not contain generic parameters.
If any of these rules are violated, an `InvalidOperationException` is thrown at
## Entity serialization
-Since the state of an entity is durably persisted, the entity class must be serializable. The Durable Functions runtime uses the [Json.NET](https://www.newtonsoft.com/json) library for this purpose, which supports a number of policies and attributes to control the serialization and deserialization process. Most commonly used C# data types (including arrays and collection types) are already serializable, and can easily be used for defining the state of durable entities.
+Since the state of an entity is durably persisted, the entity class must be serializable. The Durable Functions runtime uses the [Json.NET](https://www.newtonsoft.com/json) library for this purpose, which supports policies and attributes to control the serialization and deserialization process. Most commonly used C# data types (including arrays and collection types) are already serializable, and can easily be used for defining the state of durable entities.
For example, Json.NET can easily serialize and deserialize the following class:
public class Counter
} ```
-By default, the name of the class is *not* stored as part of the JSON representation: that is, we use `TypeNameHandling.None` as the default setting. This default behavior can be overridden using `JsonObject` or `JsonProperty` attributes.
+By default, the name of the class *isn't* stored as part of the JSON representation: that is, we use `TypeNameHandling.None` as the default setting. This default behavior can be overridden using `JsonObject` or `JsonProperty` attributes.
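For example, a sketch of overriding that default for a single polymorphic property (the `LastCommand` property is hypothetical):

```csharp
using Newtonsoft.Json;

public class Counter
{
    public int CurrentValue { get; set; }

    // Opts this property out of the TypeNameHandling.None default so
    // that the runtime type name is persisted alongside the value.
    [JsonProperty(TypeNameHandling = TypeNameHandling.Auto)]
    public object LastCommand { get; set; }
}
```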
### Making changes to class definitions
-Some care is required when making changes to a class definition after an application has been run, because the stored JSON object can no longer match the new class definition. Still, it is often possible to deal correctly with changing data formats as long as one understands the deserialization process used by `JsonConvert.PopulateObject`.
+Some care is required when making changes to a class definition after an application has been run, because the stored JSON object can no longer match the new class definition. Still, it's often possible to deal correctly with changing data formats as long as one understands the deserialization process used by `JsonConvert.PopulateObject`.
For example, here are some examples of changes and their effect:
-1. If a new property is added, which is not present in the stored JSON, it assumes its default value.
-1. If a property is removed, which is present in the stored JSON, the previous content is lost.
-1. If a property is renamed, the effect is as if removing the old one and adding a new one.
-1. If the type of a property is changed so it can no longer be deserialized from the stored JSON, an exception is thrown.
-1. If the type of a property is changed, but it can still be deserialized from the stored JSON, it will do so.
+* When a new property is added, which isn't present in the stored JSON, it assumes its default value.
+* When a property is removed, which is present in the stored JSON, the previous content is lost.
+* When a property is renamed, the effect is as if removing the old one and adding a new one.
+* When the type of a property is changed so it can no longer be deserialized from the stored JSON, an exception is thrown.
+* When the type of a property is changed, but it can still be deserialized from the stored JSON, the stored value is deserialized into the new type.
-There are many options available for customizing the behavior of Json.NET. For example, to force an exception if the stored JSON contains a field that is not present in the class, specify the attribute `JsonObject(MissingMemberHandling = MissingMemberHandling.Error)`. It is also possible to write custom code for deserialization that can read JSON stored in arbitrary formats.
+There are many options available for customizing the behavior of Json.NET. For example, to force an exception if the stored JSON contains a field that isn't present in the class, specify the attribute `JsonObject(MissingMemberHandling = MissingMemberHandling.Error)`. It's also possible to write custom code for deserialization that can read JSON stored in arbitrary formats.
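A minimal sketch of that attribute applied to the earlier `Counter` class:

```csharp
using Newtonsoft.Json;

// Deserialization now throws if the stored JSON contains a member
// that's no longer defined on the class, instead of silently
// dropping the previous content.
[JsonObject(MissingMemberHandling = MissingMemberHandling.Error)]
public class Counter
{
    public int CurrentValue { get; set; }
}
```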
## Entity construction
public class Counter : TaskEntity<int>
### Bindings in entity classes
-Unlike regular functions, entity class methods don't have direct access to input and output bindings. Instead, binding data must be captured in the entry-point function declaration and then passed to the `DispatchAsync<T>` method. Any objects passed to `DispatchAsync<T>` will be automatically passed into the entity class constructor as an argument.
+Unlike regular functions, entity class methods don't have direct access to input and output bindings. Instead, binding data must be captured in the entry-point function declaration and then passed to the `DispatchAsync<T>` method. Any objects passed to `DispatchAsync<T>` are automatically passed to the entity class constructor as arguments.
The following example shows how a `CloudBlobContainer` reference from the [blob input binding](../functions-bindings-storage-blob-input.md) can be made available to a class-based entity.
The following members provide information about the current operation, and allow
The following members manage the state of the entity (create, read, update, delete). * `HasState`: whether the entity exists, that is, has some state.
-* `GetState<TState>()`: gets the current state of the entity. If it does not already exist, it is created.
+* `GetState<TState>()`: gets the current state of the entity. If it doesn't already exist, it's created.
* `SetState(arg)`: creates or updates the state of the entity. * `DeleteState()`: deletes the state of the entity, if it exists.
-If the state returned by `GetState` is an object, it can be directly modified by the application code. There is no need to call `SetState` again at the end (but also no harm). If `GetState<TState>` is called multiple times, the same type must be used.
+If the state returned by `GetState` is an object, it can be directly modified by the application code. There's no need to call `SetState` again at the end (but also no harm). If `GetState<TState>` is called multiple times, the same type must be used.
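Pieced together, a function-based entity might use these members as follows (a sketch for the in-process model; the operation names are illustrative):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class CounterEntity
{
    [FunctionName("Counter")]
    public static void Run([EntityTrigger] IDurableEntityContext ctx)
    {
        switch (ctx.OperationName.ToLowerInvariant())
        {
            case "add":
                // GetState implicitly creates the state (default 0)
                // if the entity doesn't exist yet.
                ctx.SetState(ctx.GetState<int>() + ctx.GetInput<int>());
                break;
            case "delete":
                ctx.DeleteState();
                break;
        }
    }
}
```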
Finally, the following members are used to signal other entities, or start new orchestrations:
azure-functions Durable Functions Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-entities.md
Title: Durable entities - Azure Functions
description: Learn what durable entities are and how to use them in the Durable Functions extension for Azure Functions.
- Previously updated : 05/10/2022
+ Last updated : 10/24/2023
ms.devlang: csharp, java, javascript, python
+zone_pivot_groups: df-languages
#Customer intent: As a developer, I want to learn what durable entities are and how to use them to solve distributed, stateful problems in my applications.
Entity functions define operations for reading and updating small pieces of state, known as *durable entities*. Like orchestrator functions, entity functions are functions with a special trigger type, the *entity trigger*. Unlike orchestrator functions, entity functions manage the state of an entity explicitly, rather than implicitly representing state via control flow. Entities provide a means for scaling out applications by distributing the work across many entities, each with a modestly sized state.
-
> [!NOTE]
> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET in-proc, .NET isolated worker ([preview](durable-functions-dotnet-entities.md)), JavaScript, and Python, but not in PowerShell or Java.
+>[!IMPORTANT]
+>Entity functions aren't currently supported in PowerShell and Java.
## General concepts

Entities behave a bit like tiny services that communicate via messages. Each entity has a unique identity and an internal state (if it exists). Like services or objects, entities perform operations when prompted to do so. When an operation executes, it might update the internal state of the entity. It might also call external services and wait for a response. Entities communicate with other entities, orchestrations, and clients by using messages that are implicitly sent via reliable queues.
-
To prevent conflicts, all operations on a single entity are guaranteed to execute serially, that is, one after another.

> [!NOTE]
Entities are accessed via a unique identifier, the *entity ID*. An entity ID is
For example, a `Counter` entity function might be used for keeping score in an online game. Each instance of the game has a unique entity ID, such as `@Counter@Game1` and `@Counter@Game2`. All operations that target a particular entity require specifying an entity ID as a parameter.
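In .NET, for example, such an identifier is built from the entity name and entity key (a fragment; the keys are illustrative):

```csharp
// Addresses the Counter entity for two separate game instances.
var game1 = new EntityId("Counter", "Game1");
var game2 = new EntityId("Counter", "Game2");
```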
-### Entity operations ###
+### Entity operations
To invoke an operation on an entity, specify the:
* **Operation input**, which is an optional input parameter for the operation. For example, the add operation can take an integer amount as the input.
* **Scheduled time**, which is an optional parameter for specifying the delivery time of the operation. For example, an operation can be reliably scheduled to run several days in the future.
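For example, in the .NET in-process model, a client function might signal the `add` operation with an input of 1 and a delivery time three days out (a fragment; `client` is assumed to be an injected `IDurableEntityClient`):

```csharp
var entityId = new EntityId("Counter", "Game1");

// Schedule the "add" operation (input: 1) for delivery in three days.
await client.SignalEntityAsync(
    entityId,
    DateTime.UtcNow.AddDays(3),
    "add",
    1);
```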
-Operations can return a result value or an error result, such as a JavaScript error or a .NET exception. This result or error can be observed by orchestrations that called the operation.
+Operations can return a result value or an error result, such as a JavaScript error or a .NET exception. This result or error can be observed by orchestrations that called the operation.
An entity operation can also create, read, update, and delete the state of the entity. The state of the entity is always durably persisted in storage.

## Define entities
+You define entities using a function-based syntax, where entities are represented as functions and operations are explicitly dispatched by the application.
+Currently, there are two distinct APIs for defining entities in .NET:
+
+### [Function-based syntax](#tab/function-based)
-Currently, the two distinct APIs for defining entities are:
+When you use a function-based syntax, entities are represented as functions and operations are explicitly dispatched by the application. This syntax works well for entities with simple state, few operations, or a dynamic set of operations like in application frameworks. This syntax can be tedious to maintain because it doesn't catch type errors at compile time.
-**Function-based syntax**, where entities are represented as functions and operations are explicitly dispatched by the application. This syntax works well for entities with simple state, few operations, or a dynamic set of operations like in application frameworks. This syntax can be tedious to maintain because it doesn't catch type errors at compile time.
+### [Class-based syntax](#tab/class-based)
-**Class-based syntax (.NET only)**, where entities and operations are represented by classes and methods. This syntax produces more easily readable code and allows operations to be invoked in a type-safe way. The class-based syntax is a thin layer on top of the function-based syntax, so both variants can be used interchangeably in the same application.
+When you use a class-based syntax, .NET classes and methods represent entities and operations. This syntax produces more easily readable code and allows operations to be invoked in a type-safe way. The class-based syntax is a thin layer on top of the function-based syntax, so both variants can be used interchangeably in the same application.
-# [C# (In-proc)](#tab/in-process)
+
-### Example: Function-based syntax - C#
+The specific APIs depend on whether your C# functions run in an _isolated worker process_ (recommended) or in the same process as the host.
+
+### [In-process](#tab/in-process/function-based)
The following code is an example of a simple `Counter` entity implemented as a durable function. This function defines three operations, `add`, `reset`, and `get`, each of which operates on an integer state.
public static void Counter([EntityTrigger] IDurableEntityContext ctx)
For more information on the function-based syntax and how to use it, see [Function-based syntax](durable-functions-dotnet-entities.md#function-based-syntax).
-### Example: Class-based syntax - C#
+### [In-process](#tab/in-process/class-based)
The following example is an equivalent implementation of the `Counter` entity using classes and methods.
The state of this entity is an object of type `Counter`, which contains a field
For more information on the class-based syntax and how to use it, see [Defining entity classes](durable-functions-dotnet-entities.md#defining-entity-classes).
-# [C# (Isolated)](#tab/isolated-process)
-### Example: Function-based syntax - C#
+### [Isolated worker process](#tab/isolated-process/function-based)
```csharp [Function(nameof(Counter))]
public static Task DispatchAsync([EntityTrigger] TaskEntityDispatcher dispatcher
} ```
-### Example: Class-based syntax - C#
+### [Isolated worker process](#tab/isolated-process/class-based)
+ The following example shows the implementation of the `Counter` entity using classes and methods. ```csharp public class Counter
public static Task RunEntityStaticAsync([EntityTrigger] TaskEntityDispatcher dis
return dispatcher.DispatchAsync<Counter>(); } ```-
-# [JavaScript](#tab/javascript)
-
-### Example: JavaScript entity
+
Durable entities are available in JavaScript starting with version **1.3.0** of the `durable-functions` npm package. The following code is the `Counter` entity implemented as a durable function written in JavaScript.
module.exports = df.entity(function(context) {
} }); ```
-# [Python](#tab/python)
-
-### Example: Python entity
The following code is the `Counter` entity implemented as a durable function written in Python.

**Counter/function.json**
def entity_function(context: df.DurableEntityContext):
main = df.Entity.create(entity_function)
```

## Access entities

Entities can be accessed using one-way or two-way communication. The following terminology distinguishes the two forms of communication:
The following examples illustrate these various ways of accessing entities.
### Example: Client signals an entity To access entities from an ordinary Azure Function, which is also known as a client function, use the [entity client binding](durable-functions-bindings.md#entity-client). The following example shows a queue-triggered function signaling an entity using this binding.-
-# [C# (In-proc)](#tab/in-process)
+#### [In-process](#tab/in-process)
> [!NOTE] > For simplicity, the following examples show the loosely typed syntax for accessing entities. In general, we recommend that you [access entities through interfaces](durable-functions-dotnet-entities.md#accessing-entities-through-interfaces) because it provides more type checking.
public static Task Run(
} ```
-# [C# (Isolated)](#tab/isolated-process)
-
+#### [Isolated worker process](#tab/isolated-process)
```csharp [Function("AddFromQueue")] public static Task Run(
public static Task Run(
} ```
-# [JavaScript](#tab/javascript)
-
+
```javascript const df = require("durable-functions");
module.exports = async function (context) {
await client.signalEntity(entityId, "add", 1); }; ```-
-# [Python](#tab/python)
- ```Python from azure.durable_functions import DurableOrchestrationClient import azure.functions as func
async def main(req: func.HttpRequest, starter: str, message):
    await client.signal_entity(entityId, "add", 1)
```

The term *signal* means that the entity API invocation is one-way and asynchronous. It's not possible for a client function to know when the entity has processed the operation. Also, the client function can't observe any result values or exceptions.

### Example: Client reads an entity state

Client functions can also query the state of an entity, as shown in the following example:
-# [C# (In-proc)](#tab/in-process)
-
+#### [In-process](#tab/in-process)
```csharp [FunctionName("QueryCounter")] public static async Task<HttpResponseMessage> Run(
public static async Task<HttpResponseMessage> Run(
return req.CreateResponse(HttpStatusCode.OK, stateResponse.EntityState); } ```
-# [C# (Isolated)](#tab/isolated-process)
+#### [Isolated worker process](#tab/isolated-process)
```csharp [Function("QueryCounter")] public static async Task<HttpResponseData> Run(
public static async Task<HttpResponseData> Run(
} ```
-# [JavaScript](#tab/javascript)
-
+
```javascript const df = require("durable-functions");
module.exports = async function (context) {
return stateResponse.entityState; }; ```-
-# [Python](#tab/python)
- ```python from azure.durable_functions import DurableOrchestrationClient import azure.functions as func
async def main(req: func.HttpRequest, starter: str, message):
    entity_state = str(entity_state_result.entity_state)
    return func.HttpResponse(entity_state)
```

Entity state queries are sent to the Durable tracking store and return the entity's most recently persisted state. This state is always a "committed" state, that is, it's never a temporary intermediate state assumed in the middle of executing an operation. However, it's possible that this state is stale compared to the entity's in-memory state. Only orchestrations can read an entity's in-memory state, as described in the following section.

### Example: Orchestration signals and calls an entity

Orchestrator functions can access entities by using APIs on the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger). The following example code shows an orchestrator function calling and signaling a `Counter` entity.
-# [C# (In-proc)](#tab/in-process)
-
+#### [In-process](#tab/in-process)
```csharp [FunctionName("CounterOrchestration")] public static async Task Run(
public static async Task Run(
} ```
-# [C# (Isolated)](#tab/isolated-process)
+#### [Isolated worker process](#tab/isolated-process)
```csharp [Function("CounterOrchestration")]
public static async Task Run([OrchestrationTrigger] TaskOrchestrationContext con
} ```
-# [JavaScript](#tab/javascript)
-
+
```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context){
> [!NOTE] > JavaScript does not currently support signaling an entity from an orchestrator. Use `callEntity` instead.-
-# [Python](#tab/python)
- ```Python import azure.functions as func import azure.durable_functions as df
def orchestrator_function(context: df.DurableOrchestrationContext):
    context.signal_entity(entityId, "add", 1)
    return state
```

Only orchestrations are capable of calling entities and getting a response, which could be either a return value or an exception. Client functions that use the [client binding](durable-functions-bindings.md#entity-client) can only signal entities.

> [!NOTE]
Only orchestrations are capable of calling entities and getting a response, whic
An entity function can send signals to other entities, or even itself, while it executes an operation. For example, we can modify the previous `Counter` entity example so that it sends a "milestone-reached" signal to some monitor entity when the counter reaches the value 100.-
-# [C# (In-proc)](#tab/in-process)
-
+#### [In-process](#tab/in-process)
```csharp case "add": var currentValue = ctx.GetState<int>();
For example, we can modify the previous `Counter` entity example so that it send
break; ```
-# [C# (Isolated)](#tab/isolated-process)
-
+#### [Isolated worker process](#tab/isolated-process)
```csharp case "add": var currentValue = operation.State.GetState<int>();
case "add":
break; ```
-# [JavaScript](#tab/javascript)
-
+
```javascript case "add": const amount = context.df.getInput();
case "add":
context.df.setState(currentValue + amount); break; ```-
-# [Python](#tab/python)
- > [!NOTE]
-> Python does not support entity-to-entity signals yet. Please use an orchestrator for signaling entities instead.
--
+> Python doesn't support entity-to-entity signals yet. Please use an orchestrator for signaling entities instead.
-## <a name="entity-coordination"></a>Entity coordination (currently .NET only)
+## <a name="entity-coordination"></a>Entity coordination
There might be times when you need to coordinate operations across multiple entities. For example, in a banking application, you might have entities that represent individual bank accounts. When you transfer funds from one account to another, you must ensure that the source account has sufficient funds. You also must ensure that updates to both the source and destination accounts are done in a transactionally consistent way.
-### Example: Transfer funds (C#)
+### Example: Transfer funds
The following example code transfers funds between two account entities by using an orchestrator function. Coordinating entity updates requires using the `LockAsync` method to create a _critical section_ in the orchestration.
public static async Task<bool> TransferFundsAsync(
In .NET, `LockAsync` returns `IDisposable`, which ends the critical section when disposed. This `IDisposable` result can be used together with a `using` block to get a syntactic representation of the critical section.
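Condensed, the pattern looks roughly like this (a sketch for the in-process model; the entity names, keys, and operation names are illustrative):

```csharp
var source = new EntityId("Account", "source-account");
var destination = new EntityId("Account", "destination-account");

// Lock both accounts; the critical section ends when the
// IDisposable returned by LockAsync is disposed.
using (await context.LockAsync(source, destination))
{
    var balance = await context.CallEntityAsync<int>(source, "get");
    if (balance >= amount)
    {
        await context.CallEntityAsync(source, "withdraw", amount);
        await context.CallEntityAsync(destination, "deposit", amount);
    }
}
```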
-In the preceding example, an orchestrator function transferred funds from a source entity to a destination entity. The `LockAsync` method locked both the source and destination account entities. This locking ensured that no other client could query or modify the state of either account until the orchestration logic exited the critical section at the end of the `using` statement. This behavior prevents the possibility of overdrafting from the source account.
+In the preceding example, an orchestrator function transfers funds from a source entity to a destination entity. The `LockAsync` method locks both the source and destination account entities. This locking ensures that no other client can query or modify the state of either account until the orchestration logic exits the critical section at the end of the `using` statement. This behavior prevents the possibility of overdrafting from the source account.
> [!NOTE] > When an orchestration terminates, either normally or with an error, any critical sections in progress are implicitly ended and all locks are released.
No operations from other clients are allowed on an entity while it's in a locked
Locks on entities are durable, so they persist even if the executing process is recycled. Locks are internally persisted as part of an entity's durable state.
-Unlike transactions, critical sections don't automatically roll back changes in the case of errors. Instead, any error handling, such as roll-back or retry, must be explicitly coded, for example by catching errors or exceptions. This design choice is intentional. Automatically rolling back all the effects of an orchestration is difficult or impossible in general, because orchestrations might run activities and make calls to external services that can't be rolled back. Also, attempts to roll back might themselves fail and require further error handling.
+Unlike transactions, critical sections don't automatically roll back changes when errors occur. Instead, any error handling, such as roll-back or retry, must be explicitly coded, for example by catching errors or exceptions. This design choice is intentional. Automatically rolling back all the effects of an orchestration is difficult or impossible in general, because orchestrations might run activities and make calls to external services that can't be rolled back. Also, attempts to roll back might themselves fail and require further error handling.
### Critical section rules
Unlike low-level locking primitives in most programming languages, critical sect
* Critical sections can signal only entities they haven't locked. Any violations of these rules cause a runtime error, such as `LockingRulesViolationException` in .NET, which includes a message that explains what rule was broken.- ## Comparison with virtual actors
-Many of the durable entities features are inspired by the [actor model](https://en.wikipedia.org/wiki/Actor_model). If you're already familiar with actors, you might recognize many of the concepts described in this article. Durable entities are particularly similar to [virtual actors](https://research.microsoft.com/projects/orleans/), or grains, as popularized by the [Orleans project](http://dotnet.github.io/orleans/). For example:
+Many of the durable entities features are inspired by the [actor model](https://en.wikipedia.org/wiki/Actor_model). If you're already familiar with actors, you might recognize many of the concepts described in this article. Durable entities are similar to [virtual actors](https://research.microsoft.com/projects/orleans/), or grains, as popularized by the [Orleans project](http://dotnet.github.io/orleans/). For example:
* Durable entities are addressable via an entity ID. * Durable entity operations execute serially, one at a time, to prevent race conditions. * Durable entities are created implicitly when they're called or signaled.
-* When not executing operations, durable entities are silently unloaded from memory.
+* Durable entities are silently unloaded from memory when not executing operations.
There are some important differences that are worth noting:
There are some important differences that are worth noting:
* Messages sent between entities are delivered reliably and in order. In Orleans, reliable or ordered delivery is supported for content sent through streams, but isn't guaranteed for all messages between grains. * Request-response patterns in entities are limited to orchestrations. From within entities, only one-way messaging (also known as signaling) is permitted, as in the original actor model, and unlike grains in Orleans. * Durable entities don't deadlock. In Orleans, deadlocks can occur and don't resolve until messages time out.
-* Durable entities can be used in conjunction with durable orchestrations and support distributed locking mechanisms.
-
+* Durable entities can be used with durable orchestrations and support distributed locking mechanisms.
## Next steps

> [!div class="nextstepaction"]
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
The following considerations apply when using remote builds during deployment:
+ Remote builds are supported for function apps running on Linux in the Consumption plan; however, they don't have an SCM/Kudu site, which limits deployment options.
+ Function apps running on Linux in a [Premium plan](functions-premium-plan.md) or in a [Dedicated (App Service) plan](dedicated-plan.md) do have an SCM/Kudu site, but it's limited compared to Windows.
+ Remote builds aren't performed when an app has previously been set to run in [run-from-package](run-functions-from-deployment-package.md) mode. To learn how to use remote build in these cases, see [Zip deploy](#zip-deploy).
-+ You may have issues with remote build when your app was created before the feature was made available (August 1, 2019). For older apps, either create a new function app or run `az functionapp update -resource-group <RESOURCE_GROUP_NAME> -name <APP_NAME>` to update your function app. This command might take two tries to succeed.
++ You may have issues with remote build when your app was created before the feature was made available (August 1, 2019). For older apps, either create a new function app or run `az functionapp update --resource-group <RESOURCE_GROUP_NAME> --name <APP_NAME>` to update your function app. This command might take two tries to succeed.

### App content storage
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
There are cases in which the best approach is to create one popup and reuse it.
For a fully functional sample that shows how to create one popup and reuse it rather than creating a popup for each point feature, see [Reusing Popup with Multiple Pins] in the [Azure Maps Samples]. For the source code for this sample, see [Reusing Popup with Multiple Pins source code].
By default, the popup has a white background, a pointer arrow on the bottom, and
For a fully functional sample that shows how to customize the look of a popup, see [Customize a popup] in the [Azure Maps Samples]. For the source code for this sample, see [Customize a popup source code].
function InitMap()
}
```
Similar to reusing a popup, you can reuse popup templates. This approach is usef
For a fully functional sample that shows how to reuse a single popup template with multiple features that share a common set of property fields, see [Reuse a popup template] in the [Azure Maps Samples]. For the source code for this sample, see [Reuse a popup template source code].
Popups can be opened, closed, and dragged. The popup class provides events to he
For a fully functional sample that shows how to add events to popups, see [Popup events] in the [Azure Maps Samples]. For the source code for this sample, see [Popup events source code].
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md
function getDrawnShapes() {
The [Get drawn shapes from drawing manager] code sample allows you to draw a shape on a map and then get the code used to create those drawings by using the drawing manager's `drawingManager.getSource()` function. For the source code for this sample, see [Get drawn shapes from drawing manager sample code].
The [Get drawn shapes from drawing manager] code sample allows you to draw a sha
## Next steps
-Learn how to use additional features of the drawing tools module:
+Learn how to use other features of the drawing tools module:
> [!div class="nextstepaction"] > [React to drawing events](drawing-tools-events.md)
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Perform the following steps to configure the Log Analytics agent for Linux to re
`sudo /opt/omi/bin/service_control restart`
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How do I stop the Log Analytics agent from communicating with Azure Monitor?
+
+For agents connected to Log Analytics directly, open Control Panel and select **Microsoft Monitoring Agent**. Under the **Azure Log Analytics (OMS)** tab, remove all workspaces listed. In System Center Operations Manager, remove the computer from the Log Analytics managed computers list. Operations Manager updates the configuration of the agent to no longer report to Log Analytics.
+ ## Next steps - Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while you install or manage the Linux agent.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
On the roadmap
<sup>1</sup> Supports only the above distros and versions
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Does Azure Monitor require an agent?
+
+An agent is only required to collect data from the operating system and workloads in virtual machines. The virtual machines can be located in Azure, another cloud environment, or on-premises. See [Azure Monitor Agent overview](./agents-overview.md).
+
+### How can I be notified when data collection from the Log Analytics agent stops?
+
+Use the steps described in [Create a new log alert](../alerts/alerts-metric.md) to be notified when data collection stops. Use the following settings for the alert rule:
+
+- **Define alert condition**: Specify your Log Analytics workspace as the resource target.
+- **Alert criteria**:
+ - **Signal Name**: *Custom log search*.
+ - **Search query**: `Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)`.
+ - **Alert logic**: **Based on** *number of results*, **Condition** *Greater than*, **Threshold value** *0*.
+ - **Evaluated based on**: **Period (in minutes)** *30*, **Frequency (in minutes)** *10*.
+- **Define alert details**:
+ - **Name**: *Data collection stopped*.
+ - **Severity**: *Warning*.
+
+Specify an existing or new [action group](../alerts/action-groups.md) so that when the log alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes.
+
+### Will Azure Monitor Agent support data collection for the various Log Analytics solutions and Azure services like Microsoft Defender for Cloud and Microsoft Sentinel?
+
+Review the list of [Azure Monitor Agent extensions currently available in preview](#supported-services-and-features). These extensions are the same solutions and services now available by using the new Azure Monitor Agent instead.
+
+You might see more extensions getting installed for the solution or service to collect extra data or perform transformation or processing as required for the solution or service. Then use Azure Monitor Agent to route the final data to Azure Monitor.
+
+The following diagram explains the new extensibility architecture.
+
+![Diagram that shows extensions architecture.](./media/azure-monitor-agent/extensibility-arch-new.png)
+
+### Is Azure Monitor Agent at parity with the Log Analytics agents?
+
+Review the [current limitations](./azure-monitor-agent-overview.md#current-limitations) of Azure Monitor Agent when compared with Log Analytics agents.
+
+### Does Azure Monitor Agent support non-Azure environments like other clouds or on-premises?
+
+Both on-premises machines and machines connected to other clouds are supported for servers today, after you have the Azure Arc agent installed. For purposes of running Azure Monitor Agent and data collection rules, the Azure Arc requirement comes at *no extra cost or resource consumption*. The Azure Arc agent is only used as an installation mechanism. You don't need to enable the paid management features if you don't want to use them.
+
+### Does Azure Monitor Agent support auditd logs on Linux or AUOMS?
+
+Yes, but you need to [onboard to Defender for Cloud](./azure-monitor-agent-overview.md#supported-services-and-features) (previously Azure Security Center). It's available as an extension to Azure Monitor Agent, which collects Linux auditd logs via AUOMS.
+
+### Why do I need to install the Azure Arc Connected Machine agent to use Azure Monitor Agent?
+
+Azure Monitor Agent authenticates to your workspace via managed identity, which is created when you install the Connected Machine agent. Managed Identity is a more secure and manageable authentication solution from Azure. The legacy Log Analytics agent authenticated by using the workspace ID and key instead, so it didn't need Azure Arc.
+
+### Does the new Azure Monitor Agent have hardening support for Linux?
+
+Hardening support for Linux isn't available yet.
+ ## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
When you create the assignment by using the Azure portal, you have the option of
<!-- convertborder later --> :::image type="content" source="media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png" lightbox="media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png" alt-text="Screenshot that shows initiative remediation for Azure Monitor Agent." border="false":::
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### What impact does installing the Azure Arc Connected Machine agent have on my non-Azure machine?
+
+There's no impact to the machine after the Azure Arc Connected Machine agent is installed. The agent uses minimal system and network resources and is designed to have a low footprint on the host where it runs.
+ ## Next steps [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The following features and services now have an Azure Monitor Agent version (som
| [Container insights](../containers/container-insights-overview.md) | Public preview | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) | | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Moving to an agentless solution | | Many features available now all will be available by April 2024| | [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [GA](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | See [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. |
-| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Moving to an agentless solution | | Available Novermber 2023 |
+| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Moving to an agentless solution | | Available November 2023 |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | New service called Connection Monitor: Public preview with Azure Monitor Agent | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | | Azure Stack HCI Insights | Private preview | | [Sign up here](https://aka.ms/amadcr-privatepreviews) | | Azure Virtual Desktop (AVD) Insights | Generally Available | | |
When you migrate the following services, which currently use Log Analytics agent
| [Update Management](../../automation/update-management/overview.md) | Update Manager - Public preview (no dependency on Log Analytics agents or Azure Monitor Agent) | None | [Update Manager (Public preview with Azure Monitor Agent) documentation](../../update-center/index.yml) | | [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension - Generally available (no dependency on Log Analytics agents or Azure Monitor Agent) | None | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Can Azure Monitor Agent and the Log Analytics agent coexist side by side?
+
+Yes. If you're migrating to Azure Monitor Agent, you might consider installing Azure Monitor Agent together with a legacy agent for a transition period, but you must be mindful of certain considerations. Read more about agent coexistence considerations in the [Azure Monitor Agent migration guidance](./azure-monitor-agent-migration.md#migration-guidance).
+ ## Next steps For more information, see: - [Azure Monitor Agent overview](agents-overview.md) - [Azure Monitor Agent migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)-- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)
+- [Frequently asked questions for Azure Monitor Agent](agents-overview.md#frequently-asked-questions)
azure-monitor Azure Monitor Agent Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-performance.md
The benchmarks are run on an Azure VM Standard_F8s_v2 system using AMA Linux ver
| Network KBps | 338 (18,033) |
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How much data is sent per agent?
+
+The amount of data sent per agent depends on:
+
+* The solutions you've enabled.
+* The number of logs and performance counters being collected.
+* The volume of data in the logs.
+
+See [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).
+
+For computers that are able to run the WireData agent, use the following query to see how much data is being sent:
+
+```kusto
+WireData
+| where ProcessName == "C:\\Program Files\\Microsoft Monitoring Agent\\Agent\\MonitoringHost.exe"
+| where Direction == "Outbound"
+| summarize sum(TotalBytes) by Computer
+```
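+
+For a workspace-level view of ingested volume by solution, rather than per computer, you can query the `Usage` table. The following Azure CLI sketch is illustrative only: the workspace GUID is a placeholder, and the `az monitor log-analytics query` command might require the `log-analytics` CLI extension, depending on your CLI version:
+
+```azurecli
+# Summarize billable data volume (in GB) per solution over the last day.
+# Quantity in the Usage table is reported in MB.
+az monitor log-analytics query \
+  --workspace <workspace-guid> \
+  --analytics-query "Usage | where IsBillable == true | summarize TotalGB = sum(Quantity) / 1000. by Solution | sort by TotalGB desc" \
+  --timespan P1D
+```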
+
+### How much network bandwidth is used by the Microsoft Monitoring Agent when it sends data to Azure Monitor?
+
+Bandwidth is a function of the amount of data sent. Data is compressed as it's sent over the network.
+ ## Next steps - [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](gateway.md)
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
1. The machine must be running Windows client OS version 10 RS4 or higher. 2. To download the installer, the machine should have [C++ Redistributable version 2015](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) or higher. 3. The machine must be domain joined to a Microsoft Entra tenant (AADj or Hybrid AADj machines), which enables the agent to fetch Microsoft Entra device tokens used to authenticate and fetch data collection rules from Azure.
-4. You may need tenant admin permissions on the Microsoft Entra tenant.
+4. You might need tenant admin permissions on the Microsoft Entra tenant.
5. The device must have access to the following HTTPS endpoints: - global.handler.control.monitor.azure.com - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
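As a quick, non-authoritative reachability check, you can test the TLS connection to these endpoints from the device. This sketch assumes a machine in the West US region and uses `curl`, which ships with recent Windows builds; an HTTP error response still demonstrates that the connection itself succeeded:

```bash
# Verify that the control-plane endpoints accept TLS connections.
curl --verbose --max-time 10 https://global.handler.control.monitor.azure.com
curl --verbose --max-time 10 https://westus.handler.control.monitor.azure.com
```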
The following image demonstrates how this works:
Then, proceed with the following instructions to create and associate them to a Monitored Object, using REST APIs or PowerShell commands. ### Permissions required
-Since MO is a tenant level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin may be needed to perform this step. [Follow these steps to elevate Microsoft Entra tenant admin as Azure Tenant Admin](../../role-based-access-control/elevate-access-global-admin.md). It gives the Microsoft Entra admin 'owner' permissions at the root scope. This is needed for all methods described in the following section.
+Since MO is a tenant level resource, the scope of the permission would be higher than a subscription scope. Therefore, an Azure tenant admin might be needed to perform this step. [Follow these steps to elevate Microsoft Entra tenant admin as Azure Tenant Admin](../../role-based-access-control/elevate-access-global-admin.md). It gives the Microsoft Entra admin 'owner' permissions at the root scope. This is needed for all methods described in the following section.
### Using REST APIs
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
| Name | Description | |:|:| | roleDefinitionId | Fixed value: Role definition ID of the 'Monitored Objects Contributor' role: `/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b` |
-| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It may be the user who elevated at the beginning of step 1, or another user or group who will perform later steps. |
+| principalId | Provide the `Object Id` of the identity of the user to which the role needs to be assigned. It might be the user who elevated at the beginning of step 1, or another user or group who will perform later steps. |
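As an illustrative sketch only, the same role assignment can be submitted with `az rest`. The API version and exact URL should be verified against the full request shown in the article; the GUID placeholders are hypothetical:

```azurecli
# Assign the 'Monitored Objects Contributor' role at the Monitored Object scope.
# <assignment-guid> is a new GUID you generate; <object-id> is the user's Object Id.
az rest --method put \
  --url "https://management.azure.com/providers/microsoft.insights/providers/microsoft.authorization/roleassignments/<assignment-guid>?api-version=2021-04-01-preview" \
  --body '{
    "properties": {
      "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
      "principalId": "<object-id>"
    }
  }'
```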
After this step is complete, **reauthenticate** your session and **reacquire** your ARM bearer token.
Make sure to start the installer on administrator command prompt. Silent install
### Post installation/Operational issues If the agent is installed successfully (that is, the agent service is running) but you don't see data as expected, you can follow the standard troubleshooting steps listed for [Windows VM](./azure-monitor-agent-troubleshoot-windows-vm.md) and [Windows Arc-enabled server](azure-monitor-agent-troubleshoot-windows-arc.md), respectively.
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Is Azure Arc required for Microsoft Entra joined machines?
+
+No. Microsoft Entra joined (or Microsoft Entra hybrid joined) machines running Windows 10 or 11 (client OS) **do not require Azure Arc** to be installed. Instead, you can use the Windows MSI installer for Azure Monitor Agent, which is [currently available in preview](https://aka.ms/amadcr-privatepreviews).
+ ## Questions and feedback Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the client installer.
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
The Internet Information Service (IIS) logs data to the local disk of Windows ma
To complete this procedure, you need: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).
+- One or two [data collection endpoints](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint), depending on whether your virtual machine and Log Analytics workspace are in the same region.
+
+ For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
+ - [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. - A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that runs IIS. - An IIS log file in W3C format must be stored on the local drive of the machine on which Azure Monitor Agent is running.
To create the data collection rule in the Azure portal:
<!-- convertborder later --> :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png" alt-text="Screenshot that shows the Create button on the Data Collection Rules screen." border="false":::
-1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**:
+1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, **Platform Type**, and **Data collection endpoint**:
- **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant. - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.
+ - **Data Collection Endpoint** specifies the data collection endpoint used to collect data. This data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" alt-text="Screenshot that shows the Basics tab of the Data Collection Rule screen.":::
To create the data collection rule in the Azure portal:
> [!IMPORTANT] > The portal enables system-assigned managed identity on the target resources, along with existing user-assigned identities, if there are any. For existing applications, unless you specify the user-assigned identity in the request, the machine defaults to using system-assigned identity instead.
- If you need network isolation using private links, select existing endpoints from the same region for the respective resources or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
- 1. Select **Enable Data Collection Endpoints**.
- 1. Select a data collection endpoint for each of the resources associate to the data collection rule.
+ 1. Select a data collection endpoint for each of the virtual machines associated with the data collection rule.
+
+ This data collection endpoint sends configuration files to the virtual machine and must be in the same region as the virtual machine. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" alt-text="Screenshot that shows the Resources tab of the Data Collection Rule screen.":::
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
When you paste the XPath query into the field on the **Add data source** screen,
> [!TIP]
-> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPath query locally on your machine first. The following script shows an example:
+> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPath query locally on your machine first. For more information, see the tip provided in the [Windows agent-based connections](../../sentinel/connect-services-windows-based.md) instructions. The [`Get-WinEvent`](/powershell/module/microsoft.powershell.diagnostics/get-winevent) PowerShell cmdlet supports up to 23 expressions. Azure Monitor data collection rules support up to 20. Also, `>` and `<` characters must be encoded as `&gt;` and `&lt;` in your data collection rule. The following script shows an example:
> 
> ```powershell
> $XPath = '*[System[EventID=1035]]'
> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
> ```
Examples of using a custom XPath to filter events:
> For a list of limitations in the XPath supported by Windows event log, see [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations). > For instance, you can use the "position", "Band", and "timediff" functions within the query but other functions like "starts-with" and "contains" are not currently supported.
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How can I collect Windows security events by using the new Azure Monitor Agent?
+
+There are two ways you can collect Security events using the new agent, when sending to a Log Analytics workspace:
+- You can use AMA to natively collect Security Events, the same as other Windows Events. These flow to the ['Event'](/azure/azure-monitor/reference/tables/Event) table in your Log Analytics workspace. If you want Security Events to flow into the ['SecurityEvent'](/azure/azure-monitor/reference/tables/SecurityEvent) table instead, you can [create the required DCR with PowerShell or with Azure Policy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-configure-security-events-collection-with-azure-monitor/ba-p/3770719), as sketched after this list.
+- If you have Microsoft Sentinel enabled on the workspace, the security events flow via Azure Monitor Agent into the [`SecurityEvent`](/azure/azure-monitor/reference/tables/SecurityEvent) table instead (the same as using the Log Analytics agent). This scenario always requires the solution to be enabled first.
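+
+For the first option, a minimal Azure CLI sketch follows. All names are placeholders, the command requires the `monitor-control-service` extension, and the rule file contents are an assumption; the linked article shows the authoritative DCR definition:
+
+```azurecli
+# Create a DCR that collects Security events through an XPath query.
+# security-events-dcr.json is a hypothetical rule file whose windowsEventLogs
+# data source contains an XPath query such as:
+#   "Security!*[System[(band(Keywords,13510798882111488))]]"
+az monitor data-collection rule create \
+  --resource-group myResourceGroup \
+  --location eastus \
+  --name SecurityEventsDCR \
+  --rule-file security-events-dcr.json
+```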
+
+### Will I duplicate events if I use Azure Monitor Agent and the Log Analytics agent on the same machine?
+
+If you're collecting the same events with both agents, duplication occurs. For example, the legacy agent might collect data defined in the [workspace configuration](./agent-data-sources.md) that a data collection rule also collects. Or you might collect security events with the legacy agent while also enabling Windows security events with Azure Monitor Agent connectors in Microsoft Sentinel.
+
+Limit duplicate events to the transition period between agents. After you've fully tested the data collection rule and verified its data collection, disable collection for the workspace and disconnect any Microsoft Monitoring Agent data connectors.
+
+### Besides XPath queries and specifying performance counters, is more granular event filtering possible by using the new Azure Monitor Agent?
+
+For Syslog on Linux, you can choose facilities and the log level for each facility to collect.
+
+### If I create data collection rules that contain the same event ID and associate them with the same VM, will the events be duplicated?
+
+Yes. To avoid duplication, make sure the event selection you make in your data collection rules doesn't contain duplicate events.
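+
+To review which rules already apply to a machine before adding another, you can list its associations. A sketch with the Azure CLI (`monitor-control-service` extension; the resource ID is a placeholder):
+
+```azurecli
+# List all data collection rules associated with a virtual machine
+# so you can check them for overlapping event selections.
+az monitor data-collection rule association list \
+  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
+```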
+ ## Next steps - [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Many applications log information to text files instead of standard logging serv
To complete this procedure, you need: - Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).-- [Data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).
+- One or two [data collection endpoints](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint), depending on whether your virtual machine and Log Analytics workspace are in the same region.
+
+ For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
+ - [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.+ - A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file. Text file requirements and best practices:
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourc
Press return to execute the code. You should see a 200 response, and details about the table you just created will show up. To validate that the table was created, go to your workspace and select **Tables** on the left blade. You should see your table in the list. > [!Note]
-> The column names are case sensitive. For example Rawdata will not correcly collect the event data. It must be RawData.
+> The column names are case sensitive. For example `Rawdata` will not correctly collect the event data. It must be `RawData`.
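+
+As an alternative sketch to the `Invoke-AzRestMethod` approach, the same custom table can be created with the Azure CLI; the workspace and table names are placeholders:
+
+```azurecli
+# Create a custom table for text logs. Note the exact casing of the
+# column names: TimeGenerated and RawData.
+az monitor log-analytics workspace table create \
+  --resource-group myResourceGroup \
+  --workspace-name myWorkspace \
+  --name MyTable_CL \
+  --columns TimeGenerated=datetime RawData=string
+```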
## Create data collection rule to collect text logs
To create the data collection rule in the Azure portal:
<!-- convertborder later --> :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png" alt-text="Screenshot that shows the Create button on the Data Collection Rules screen." border="false":::
-1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, **Platform Type**, and **Data Collection Endpoint**:
+1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, **Platform Type**, and **Data collection endpoint**:
- **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant. - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.
- - **Data Collection Endpoint** is required to collect custom logs.
+ - **Data Collection Endpoint** specifies the data collection endpoint used to collect data. This data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" alt-text="Screenshot that shows the Basics tab of the Data Collection Rule screen.":::
To create the data collection rule in the Azure portal:
> [!IMPORTANT] > The portal enables system-assigned managed identity on the target resources, along with existing user-assigned identities, if there are any. For existing applications, unless you specify the user-assigned identity in the request, the machine defaults to using system-assigned identity instead.
- If you need network isolation using private links, select existing endpoints from the same region for the respective resources or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
- 1. Select **Enable Data Collection Endpoints**.
- 1. Select a data collection endpoint for each of the resources associate to the data collection rule.
+ 1. Select a data collection endpoint for each of the virtual machines associated with the data collection rule.
+
+ This data collection endpoint sends configuration files to the virtual machine and must be in the same region as the virtual machine. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" alt-text="Screenshot that shows the Resources tab of the Data Collection Rule screen.":::
azure-monitor Use Azure Monitor Agent Troubleshooter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/use-azure-monitor-agent-troubleshooter.md
# customer-intent: As an IT manager, I want to investigate agent issue on a particular virtual machine and determine if I can resolve the issue on my own. # Use the Azure Monitor Agent Troubleshooter
-The Azure Monitor Agent isn't a service that runs in the context of an Azure Resource Provider. It may even be running in on premise machines within a customer network boundary. The Azure Monitor Agent Troubleshooter is designed to help diagnose issues with the agent, and general agent health checks. It can run checks to verify agent installation, connection, general heartbeat, and collect AMA-related logs automatically from the affected Windows or Linux VM. More scenarios will be added over time to increase the number of issues that can be diagnosed.
+The Azure Monitor Agent isn't a service that runs in the context of an Azure Resource Provider. It might even be running on on-premises machines within a customer network boundary. The Azure Monitor Agent Troubleshooter is designed to help diagnose issues with the agent and run general agent health checks. It can verify agent installation, connection, and general heartbeat, and collect AMA-related logs automatically from the affected Windows or Linux VM. More scenarios will be added over time to increase the number of issues that can be diagnosed.
> [!Note] > Note: The Troubleshooter is a command-line executable that ships with the agent for all versions newer than **1.12.0.0** for Windows and **1.25.1** for Linux. > If you have an older version of the agent, you can't copy the Troubleshooter onto a VM to diagnose the older agent.
The Troubleshooter runs two tests and collects several diagnostic logs.
### Share the Windows Results
-The detailed data collected by the troubleshooter include system configuration, network configuration, environment variables, and agent configuration that can aid the customer in finding any issues. The troubleshooter make is easy to send this data to customer support by creating a Zip file that should be attached to any customer support request. The file is located in C:/Packages/Plugins/Microsoft.Azure.Monitor.AzureMonitorWindowsAgent/{version}/Troubleshooter. The agent logs can be cryptic but they can give you insight into problems that you may be experiencing.
+The detailed data collected by the troubleshooter includes system configuration, network configuration, environment variables, and agent configuration that can aid in finding any issues. The troubleshooter makes it easy to send this data to customer support by creating a Zip file that you should attach to any customer support request. The file is located in C:/Packages/Plugins/Microsoft.Azure.Monitor.AzureMonitorWindowsAgent/{version}/Troubleshooter. The agent logs can be cryptic, but they can give you insight into problems that you might be experiencing.
|Logfile | Contents| |:|:|
The details for the covered scenarios are below:
### Share Linux Logs To create a zip file, run the troubleshooter with `sudo sh ama_troubleshooter.sh -A L`. You'll be asked for a file location to create the zip file.
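A sketch of the full sequence, assuming the agent's default installation path (the version segment of the directory name varies by release):

```bash
# Run the troubleshooter from the agent's installation directory;
# -A runs all scenarios and L writes the collected logs to a zip file.
cd /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-*/ama_tst
sudo sh ama_troubleshooter.sh -A L
```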
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How can I confirm that the Log Analytics agent can communicate with Azure Monitor?
+
+From Control Panel on the agent computer, select **Security & Settings** > **Microsoft Monitoring Agent**. Under the **Azure Log Analytics (OMS)** tab, a green check mark icon confirms that the agent can communicate with Azure Monitor. A yellow warning icon means the agent is having issues. One common reason is the **Microsoft Monitoring Agent** service has stopped. Use service control manager to restart the service.
+ ## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines. - [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
|Threshold value| A number value for the threshold. | |Frequency of evaluation|How often the query is run. Can be set from a minute to a day.|
+ > [!NOTE]
+ > One-minute alert rule frequency is supported only for queries that can pass an internal optimization manipulation. When you write a query that can't run at one-minute frequency, you'll see an error message such as: "Couldn't optimize the query because …".
+ > The following are the main reasons why a query isn't supported for one-minute frequency:
+ > * The query contains `search`, `union *`, or `take` (limit)
+ > * The query contains the `ingestion_time()` function
+ > * The query uses the `adx` pattern
+ > * The query calls a function that calls other tables
+ 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting.
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
Previously updated : 03/05/2023 Last updated : 10/25/2023 # Manage your alert rules
The system compiles a list of recommended alert rules based on:
> - AKS resources > - Log Analytics workspaces To enable recommended alert rules:
-1. On the **Alerts** page, select **Enable recommended alert rules**. The **Enable recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
-1. In the **Alert me if** section, select all of the rules you want to enable. The rules are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like.
+1. In the left pane, select **Alerts**.
+1. Select **View + enable**. The **Set up recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
+1. In the **Alert me if** section, all recommended alerts are enabled by default. The rules are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like, or turn off an alert.
1. In the **Notify me by** section, select the way you want to be notified if an alert is fired. 1. Select **Use an existing action group**, and enter the details of the existing action group if you want to use an action group that already exists.
-1. Select **Enable**.
+1. Select **Save**.
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-enable-recommended-alert-rule-pane.png" alt-text="Screenshot of recommended alert rules pane.":::
+ :::image type="content" source="media/alerts-managing-alert-instances/set-up-recommended-alerts.png" alt-text="Screenshot of recommended alert rules pane.":::
## Manage metric alert rules with the Azure CLI
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
The ApplicationInsights Java Agent monitors CPU, memory, and request duration su
#### Profile now
-Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button immediately requests a profile in all agents that are attached to the Application Insights instance.
+Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button immediately requests a profile in all agents that are attached to the Application Insights instance. The default profiling duration is two minutes. You can change it by overriding `periodicRecordingDurationSeconds` (see [Configuration file](#configuration-file)).
> [!WARNING] > Invoking Profile now will enable the profiler feature, and Application Insights will apply default CPU and memory SLA triggers. When your application breaches those SLAs, Application Insights will gather Java profiles. If you wish to disable profiling later on, you can do so within the trigger menu shown in [Installation](#installation).
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
Title: Data collection endpoints in Azure Monitor
-description: Overview of data collection endpoints (DCEs) in Azure Monitor, including their contents and structure and how you can create and work with them.
+description: Overview of how data collection endpoints work and how to create and set them up based on your deployment.
ms.reviwer: nikeist
# Data collection endpoints in Azure Monitor
-Data collection endpoints (DCEs) provide a connection for certain data sources of Azure Monitor. This article provides an overview of DCEs, including their contents and structure and how you can create and work with them.
-## Data sources that use DCEs
-The following data sources currently use DCEs:
+A data collection endpoint (DCE) is a connection that the [Logs ingestion API](../logs/logs-ingestion-api-overview.md) uses to send collected data for processing and ingestion into Azure Monitor. [Azure Monitor Agent](../agents/agents-overview.md) also uses data collection endpoints to receive configuration files from Azure Monitor and to send collected log data for processing and ingestion.
-- [Azure Monitor Agent when network isolation is required](../agents/azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-azure-monitor-agent)-- [Logs ingestion API](../logs/logs-ingestion-api-overview.md)
+This article provides an overview of data collection endpoints and explains how to create and set them up based on your deployment.
## Components of a data collection endpoint
-A DCE includes the following components:
-| Component | Description |
-|:|:|
-| Configuration access endpoint | The endpoint used to access the configuration service to fetch associated data collection rules (DCRs) for Azure Monitor Agent.<br>Example: `<unique-dce-identifier>.<regionname>-1.handler.control`. |
-| Logs ingestion endpoint | The endpoint used to ingest logs to Log Analytics workspaces.<br>Example: `<unique-dce-identifier>.<regionname>-1.ingest`. |
-| Network access control lists | Network access control rules for the endpoints.
+A data collection endpoint includes components required to ingest data into Azure Monitor and send configuration files to Azure Monitor Agent.
-## Regionality
-Data collection endpoints are Azure Resource Manager resources created within specific regions. An endpoint in a given region *can only be associated with machines in the same region*. However, you can have more than one endpoint within the same region according to your needs.
+[How you set up endpoints for your deployment](#how-to-set-up-data-collection-endpoints-based-on-your-deployment) depends on whether your monitored resources and Log Analytics workspaces are in one or more regions.
-## Limitations
-Data collection endpoints only support Log Analytics workspaces as a destination for collected data. [Custom metrics (preview)](../essentials/metrics-custom-overview.md) collected and uploaded via Azure Monitor Agent aren't currently controlled by DCEs. Data collection endpoints also can't be configured over private links.
+This table describes the components of a data collection endpoint, related regionality considerations, and how to set up the data collection endpoint when you create a data collection rule using the portal:
-## Create a data collection endpoint
+| Component | Description | Regionality considerations |Data collection rule configuration |
+|:|:|:|:|
+| Configuration access endpoint | The endpoint from which Azure Monitor Agent retrieves data collection rules (DCRs).<br>Example: `<unique-dce-identifier>.<regionname>-1.handler.control`. | Same region as the monitored resources. | Set on the **Basics** tab when you create a data collection rule using the portal. |
+| Logs ingestion endpoint | The endpoint that ingests logs into the data ingestion pipeline. Azure Monitor transforms the data and sends it to the defined destination Log Analytics workspace and table based on a DCR ID sent with the collected data.<br>Example: `<unique-dce-identifier>.<regionname>-1.ingest`. |Same region as the destination Log Analytics workspace. |Set on the **Resources** tab when you create a data collection rule using the portal.|
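+
+As an illustrative sketch, you can create a DCE and read back both endpoint URLs with the Azure CLI. The commands come from the `monitor-control-service` extension, and the property paths in the `--query` expression are assumptions that can vary by CLI version:
+
+```azurecli
+# Create a data collection endpoint in the region of your choice.
+az monitor data-collection endpoint create \
+  --name myDCE \
+  --resource-group myResourceGroup \
+  --location eastus \
+  --public-network-access Enabled
+
+# Show the configuration access and logs ingestion endpoint URLs.
+az monitor data-collection endpoint show \
+  --name myDCE \
+  --resource-group myResourceGroup \
+  --query "{configurationAccess:configurationAccess.endpoint, logsIngestion:logsIngestion.endpoint}"
+```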
++
+## How to set up data collection endpoints based on your deployment
+
+- **Scenario: All monitored resources are in the same region as the destination Log Analytics workspace**
+
+ Set up one data collection endpoint to send configuration files and receive collected data.
+
+ :::image type="content" source="media/data-collection-endpoint-overview/data-collection-endpoint-one-region.png" alt-text="A diagram that shows resources in a single region sending data and receiving configuration files using a data collection endpoint." lightbox="media/data-collection-endpoint-overview/data-collection-endpoint-one-region.png":::
+
+- **Scenario: Monitored resources send data to a Log Analytics workspace in a different region**
+
+ - Create a data collection endpoint in each region where you have Azure Monitor Agent deployed to send configuration files to the agents in that region.
+
+ - Send data from all resources to a data collection endpoint in the region where your destination Log Analytics workspaces are located.
+
+ :::image type="content" source="media/data-collection-endpoint-overview/data-collection-endpoint-regionality.png" alt-text="A diagram that shows resources in two regions sending data and receiving configuration files using data collection endpoints." lightbox="media/data-collection-endpoint-overview/data-collection-endpoint-regionality.png":::
-> [!IMPORTANT]
-> If agents will connect to your DCE, it must be created in the same region. If you have agents in different regions, you'll need multiple DCEs.
+- **Scenario: Monitored resources in one or more regions send data to multiple Log Analytics workspaces in different regions**
+
+ - Create a data collection endpoint in each region where you have Azure Monitor Agent deployed to send configuration files to the agents in that region.
+
+ - Create a data collection endpoint in each region with a destination Log Analytics workspace to send data to the Log Analytics workspaces in that region.
+
+ - Send data from each monitored resource to the data collection endpoint in the region where the destination Log Analytics workspace is located.
+
+ :::image type="content" source="media/data-collection-endpoint-overview/data-collection-endpoint-regionality-multiple-workspaces.png" alt-text="A diagram that shows monitored resources in multiple regions sending data to multiple Log Analytics workspaces in different regions using data collection endpoints." lightbox="media/data-collection-endpoint-overview/data-collection-endpoint-regionality-multiple-workspaces.png":::
+
+## Create a data collection endpoint
# [Azure portal](#tab/portal)
Data collection endpoints only support Log Analytics workspaces as a destination
# [REST API](#tab/restapi)
-Create DCRs by using the [DCE REST APIs](/cli/azure/monitor/data-collection/endpoint).
+Create DCEs by using the [DCE REST APIs](/cli/azure/monitor/data-collection/endpoint).
Create associations between endpoints to your target machines or resources by using the [DCRA REST APIs](/rest/api/monitor/datacollectionruleassociations/create#examples).
Create associations between endpoints to your target machines or resources by us
## Sample data collection endpoint For a sample DCE, see [Sample data collection endpoint](data-collection-endpoint-sample.md). +
+## Limitations
+- Data collection endpoints only support Log Analytics workspaces as a destination for collected data. [Custom metrics (preview)](../essentials/metrics-custom-overview.md) collected and uploaded via Azure Monitor Agent aren't currently controlled by DCEs. Data collection endpoints also can't be configured over private links.
+
+- Data collection endpoints are where [Logs ingestion API ingestion limits](../service-limits.md#logs-ingestion-api) are applied.
+ ## Next steps - [Associate endpoints to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule) - [Add an endpoint to an Azure Monitor Private Link Scope resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-security.md
To ensure the security of data in transit to Azure Monitor, we strongly encourag
The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents can't communicate over at least TLS 1.2 you won't be able to send data to Azure Monitor Logs.
-We recommend you do NOT explicit set your agent to only use TLS 1.2 unless necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise you may miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
+We recommend you do NOT explicitly set your agent to use only TLS 1.2 unless necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise you might miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
### Platform-specific guidance
Azure Monitor has an incident management process that all Microsoft services adh
* Use a shared responsibility model where a portion of security responsibility belongs to Microsoft and a portion belongs to the customer * Manage Azure security incidents: * Start an investigation upon detection of an incident
- * Assess the impact and severity of an incident by an on-call incident response team member. Based on evidence, the assessment may or may not result in further escalation to the security response team.
- * Diagnose an incident by security response experts to conduct the technical or forensic investigation, identify containment, mitigation, and work around strategies. If the security team believes that customer data may have become exposed to an unlawful or unauthorized individual, parallel execution of the Customer Incident Notification process begins in parallel.
- * Stabilize and recover from the incident. The incident response team creates a recovery plan to mitigate the issue. Crisis containment steps such as quarantining impacted systems may occur immediately and in parallel with diagnosis. Longer term mitigations may be planned which occur after the immediate risk has passed.
+ * Assess the impact and severity of an incident by an on-call incident response team member. Based on evidence, the assessment might or might not result in further escalation to the security response team.
+ * Diagnose an incident by having security response experts conduct the technical or forensic investigation and identify containment, mitigation, and workaround strategies. If the security team believes that customer data could have become exposed to an unlawful or unauthorized individual, the Customer Incident Notification process begins in parallel.
+ * Stabilize and recover from the incident. The incident response team creates a recovery plan to mitigate the issue. Crisis containment steps such as quarantining impacted systems can occur immediately and in parallel with diagnosis. Longer term mitigations can be planned which occur after the immediate risk has passed.
* Close the incident and conduct a post-mortem. The incident response team creates a post-mortem that outlines the details of the incident, with the intention to revise policies, procedures, and processes to prevent a recurrence of the event. * Notify customers of security incidents: * Determine the scope of impacted customers and to provide anybody who is impacted as detailed a notice as possible
The Log Analytics software development and service team are actively working wit
## Certifications and attestations Azure Log Analytics meets the following requirements:
-* [ISO/IEC 27001](https://www.iso.org/iso/home/standards/management-standards/iso27001.htm)
+* [ISO/IEC 27001](https://www.iso.org/standard/27001)
* [ISO/IEC 27018:2014](https://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=61498) * [ISO 22301](https://azure.microsoft.com/blog/iso22301/) * [Payment Card Industry (PCI Compliant) Data Security Standard (PCI DSS)](https://www.microsoft.com/en-us/TrustCenter/Compliance/PCI) by the PCI Security Standards Council.
With any agent reporting to an Operations Manager management group that is integ
The Windows or management server agent cached data is protected by the operating system's credential store. If the service cannot process the data after two hours, the agents will queue the data. If the queue becomes full, the agent starts dropping data types, starting with performance data. The agent queue limit is a registry key so you can modify it, if necessary. Collected data is compressed and sent to the service, bypassing the Operations Manager management group databases, so it does not add any load to them. After the collected data is sent, it is removed from the cache.
-As described above, data from the management server or direct-connected agents is sent over TLS to Microsoft Azure datacenters. Optionally, you can use ExpressRoute to provide extra security for the data. ExpressRoute is a way to directly connect to Azure from your existing WAN network, such as a multi-protocol label switching (MPLS) VPN, provided by a network service provider. For more information, see [ExpressRoute](https://azure.microsoft.com/services/expressroute/).
+As described above, data from the management server or direct-connected agents is sent over TLS to Microsoft Azure datacenters. Optionally, you can use ExpressRoute to provide extra security for the data. ExpressRoute is a way to directly connect to Azure from your existing WAN network, such as a multi-protocol label switching (MPLS) VPN, provided by a network service provider. For more information, see [ExpressRoute](https://azure.microsoft.com/services/expressroute/) and [Does my agent traffic use my Azure ExpressRoute connection?](#does-my-agent-traffic-use-my-azure-expressroute-connection).
### 3. The Azure Monitor service receives and processes data The Azure Monitor service ensures that incoming data is from a trusted source by validating certificates and the data integrity with Azure authentication. The unprocessed raw data is then stored in an Azure Event Hubs in the region the data will eventually be stored at rest. The type of data that is stored depends on the types of solutions that were imported and used to collect data. Then, the Azure Monitor service processes the raw data and ingests it into the database.
Azure Monitor is an append-only data platform, but includes provisions to delete
To fully tamper-proof your monitoring solution, we recommend you [export your data to an immutable storage solution](../../storage/blobs/immutable-storage-overview.md).
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Does my agent traffic use my Azure ExpressRoute connection?
+
+Traffic to Azure Monitor uses the Microsoft peering ExpressRoute circuit. See [ExpressRoute documentation](../../expressroute/expressroute-faqs.md#supported-services) for a description of the different types of ExpressRoute traffic.
## Next steps * [See the different kinds of data that you can collect in Azure Monitor](../monitor-reference.md).
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Azure Monitor stores data in data stores for each of the three pillars of observ
:::image type="content" source="media/overview/data-platform-box-opt.svg" alt-text="Diagram that shows an overview of Azure Monitor data platform." border="false" lightbox="media/overview/data-platform-blowup-type-2-opt.svg":::
-Click on the picture above for a to see the Data Platform in the context of the whole of Azure Monitor.
+Select the preceding diagram to see the Data Platform in the context of the whole of Azure Monitor.
|Pillar of Observability/<br>Data Store|Description| |||
azure-monitor Scom Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/scom-managed-instance-overview.md
Title: Azure Monitor SCOM Managed Instance overview
-description: Azure Monitor SCOM Managed Instance allows you maintain your investment in your existing System Center Operations Manager (SCOM) environment while moving your monitoring infrastructure into the Azure cloud.
+description: Azure Monitor SCOM Managed Instance allows you to maintain your investment in your existing System Center Operations Manager (SCOM) environment while moving your monitoring infrastructure into the Azure cloud.
Last updated 09/28/2023
## Overview
-While Azure Monitor can use the [Azure Monitor agent](../agents/agents-overview.md) to collect telemetry from a virtual machine, it isn't able to replicate the extensive monitoring provided by management packs written for SCOM, including any management packs that you may have written for your custom applications.
+While Azure Monitor can use the [Azure Monitor agent](../agents/agents-overview.md) to collect telemetry from a virtual machine, it isn't able to replicate the extensive monitoring provided by management packs written for SCOM, including any management packs that you might have written for your custom applications.
-You may have an eventual goal to move your monitoring completely to Azure Monitor, but you must maintain SCOM functionality until you no longer rely on management packs for monitoring your virtual machine workloads. SCOM Managed Instance (preview) is compatible with all existing management packs and provides migration from your existing on-premises SCOM infrastructure.
+You might have an eventual goal to move your monitoring completely to Azure Monitor, but you must maintain SCOM functionality until you no longer rely on management packs for monitoring your virtual machine workloads. SCOM Managed Instance (preview) is compatible with all existing management packs and provides migration from your existing on-premises SCOM infrastructure.
SCOM Managed Instance (preview) allows you to take a step toward an eventual migration to Azure Monitor. You can move your backend SCOM infrastructure into the cloud saving you the complexity of maintaining these components. Then you can manage the configuration in the Azure portal along with the rest of your Azure Monitor configuration and monitoring tasks.
The documentation for SCOM Managed Instance (preview) is maintained with the [ot
| Overview | [About Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/operations-manager-managed-instance-overview) | | Get started | [Migrate from Operations Manager on-premises](/system-center/scom/migrate-to-operations-manager-managed-instance) | | Manage | [Create an Azure Monitor SCOM Managed Instance](/system-center/scom/create-operations-manager-managed-instance)<br>[Scale Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/scale-scom-managed-instance)<br>[Patch Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/patch-scom-managed-instance)<br>[Create reports on Power BI](/system-center/scom/operations-manager-managed-instance-create-reports-on-power-bi)<br>[Azure Monitor SCOM Managed Instance (preview) monitoring scenarios](/system-center/scom/scom-managed-instance-monitoring-scenarios)<br>[Azure Monitor SCOM Managed Instance (preview) Agents](/system-center/scom/plan-planning-agent-deployment-scom-managed-instance)<br>[Install Windows Agent Manually Using MOMAgent.msi - Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/manage-deploy-windows-agent-manually-scom-managed-instance)<br>[Connect the Azure Monitor SCOM Managed Instance (preview) to Ops console](/system-center/scom/connect-managed-instance-ops-console)<br>[Azure Monitor SCOM Managed Instance (preview) activity log](/system-center/scom/scom-mi-activity-log)<br>[Azure Monitor SCOM Managed Instance (preview) frequently asked questions](/system-center/scom/operations-manager-managed-instance-common-questions)<br>[Troubleshoot issues with Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/troubleshoot-scom-managed-instance)
-| Security | [Use Managed identities for Azure with Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/use-managed-identities-with-scom-mi)<br>[Azure Monitor SCOM Managed Instance (preview) Data Encryption at Rest](/system-center/scom/scom-mi-data-encryption-at-rest) |
+| Security | [Use Managed identities for Azure with Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/use-managed-identities-with-scom-mi)<br>[Azure Monitor SCOM Managed Instance (preview) Data Encryption at Rest](/system-center/scom/scom-mi-data-encryption-at-rest) |
+
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### What's the upgrade path from the Log Analytics agent to Azure Monitor Agent for monitoring System Center Operations Manager? Can we use Azure Monitor Agent for System Center Operations Manager scenarios?
+
+Here's how Azure Monitor Agent affects the two System Center Operations Manager monitoring scenarios:
+- **Scenario 1**: Monitoring the Windows operating system of System Center Operations Manager. The upgrade path is the same as for any other machine. You can migrate from the Microsoft Monitoring Agent (versions 2016 and 2019) to Azure Monitor Agent as soon as your required parity features are available on Azure Monitor Agent.
+- **Scenario 2**: Onboarding or connecting System Center Operations Manager to Log Analytics workspaces. Use a System Center Operations Manager connector for Log Analytics/Azure Monitor. Neither the Microsoft Monitoring Agent nor Azure Monitor Agent is required to be installed on the Operations Manager management server. As a result, there's no impact to this use case from an Azure Monitor Agent perspective.
+
+
azure-monitor Tutorial Monitor Vm Alert Recommended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert-recommended.md
From the menu for the VM, select **Alerts** in the **Monitoring** section. Selec
A list of recommended alert rules is displayed. You can select which ones to create and change their recommended threshold if you want. Ensure that **Email** is enabled and provide an email address to be notified when any of the alerts fire. An [action group](../alerts/action-groups.md) will be created with this address. If you already have an action group that you want to use, you can specify it instead. Expand each of the alert rules to inspect its details. By default, the severity for each is **Informational**. You might want to change to another severity such as **Error**.
-Click **Enable** to create the alert rules.
+Select **Save** to create the alert rules.
## View created alert rules
azure-monitor Vminsights Migrate From Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-from-service-map.md
> [!NOTE] > Service Map will be retired on 30 September 2025. Be sure to migrate to VM insights before this date to continue monitoring processes and dependencies for your virtual machines.
-The map feature of VM insights visualizes virtual machine dependencies by discovering running processes that have active network connection between servers, inbound and outbound connection latency, or ports across any TCP-connected architecture over a specified time range. For more information about the benefits of the VM insights map feature over Service Map, see [How is VM insights Map feature different from Service Map?](/azure/azure-monitor/faq#how-is-vm-insights-map-feature-different-from-service-map-).
+The map feature of VM insights visualizes virtual machine dependencies by discovering running processes that have active network connection between servers, inbound and outbound connection latency, or ports across any TCP-connected architecture over a specified time range. For more information about the benefits of the VM insights map feature over Service Map, see [How is VM insights Map feature different from Service Map?](/azure/azure-monitor/faq#how-is-the-vm-insights-map-feature-different-from-service-map-).
## Enable VM insights using Azure Monitor Agent
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
na Previously updated : 06/14/2023 Last updated : 10/25/2023 # Configure policy-based backups for Azure NetApp Files
Every Azure NetApp Files volume must have the backup functionality enabled befor
After you enable the backup functionality, you need to assign a backup policy to a volume for policy-based backups to take effect. (For manual backups, a backup policy is optional.)
+>[!NOTE]
+>The active and most current snapshot is required for transferring the backup. As a result, you might see one extra snapshot beyond the number of snapshots to keep per the backup policy configuration. For example, if the number of daily backups to keep is set to two, you might see three snapshots related to the backup in the volumes the policy is applied to.
+ To enable the backup functionality for a volume: 1. Go to **Volumes** and select the volume for which you want to enable backup.
backup Offline Backup Azure Data Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/offline-backup-azure-data-box.md
The process to seed data from the MARS Agent by using Azure Data Box is supporte
| Windows 8 64 bit | Enterprise, Pro | | Windows 7 64 bit | Ultimate, Enterprise, Professional, Home Premium, Home Basic, Starter | | **Server** | |
+| Windows Server 2022 64 bit | Standard, Datacenter, Essentials |
| Windows Server 2019 64 bit | Standard, Datacenter, Essentials | | Windows Server 2016 64 bit | Standard, Datacenter, Essentials | | Windows Server 2012 R2 64 bit | Standard, Datacenter, Foundation |
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/selective-disk-backup-restore.md
This solution is useful particularly in the following scenarios:
1. If you have critical data to back up on only one disk, or a subset of the disks, and don't want to back up the rest of the disks attached to a VM, to minimize the backup storage costs. 2. If you have other backup solutions for part of your VM or data. For example, if you back up your databases or data using a different workload backup solution and you want to use Azure VM level backup for the rest of the data or disks to build an efficient and robust system using the best capabilities available.
-3. If you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md), you can use this solution to exclude unsupported disks (Shared Disks) and configure a VM for backup.
+3. If you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md), you can use this solution to exclude unsupported disk types and configure a VM for backup. For Shared Disks in a VM, you can exclude the disk from VM backup and use [Azure Disk Backup](disk-backup-overview.md) to take a crash-consistent backup of the Shared Disk.
Using PowerShell, Azure CLI, or the Azure portal, you can configure selective disk backup of the Azure VM. Using a script, you can include or exclude data disks using their *LUN numbers*. The ability to configure selective disk backup via the Azure portal is limited to the *Backup OS Disk* only for the Standard policy, but can be configured for all data disks for the Enhanced policy.
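
For example, a minimal Azure CLI sketch that enables backup for a VM while including only the data disks at LUNs 0 and 1 (the resource group, vault, VM, and policy names here are placeholders):

```azurecli
# Enable backup for the VM, protecting only the data disks at LUNs 0 and 1
az backup protection enable-for-vm \
    --resource-group MyResourceGroup \
    --vault-name MyRecoveryServicesVault \
    --vm MyVM \
    --policy-name EnhancedPolicy \
    --disk-list-setting include \
    --diskslist 0 1
```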
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
Azure Bastion offers multiple SKU tiers. The following table shows features and
[!INCLUDE [Azure Bastion SKUs](../../includes/bastion-sku.md)]
-For more information about SKUs, including how to upgrade a SKU and information about the new Developer SKU, see the [Configuration settings](configuration-settings.md#skus) article.
+For more information about SKUs, including how to upgrade a SKU and information about the new Developer SKU (currently in Preview), see the [Configuration settings](configuration-settings.md#skus) article.
## <a name="architecture"></a>Architecture
For frequently asked questions, see the Bastion [FAQ](bastion-faq.md).
## Next steps
-* [Quickstart: Quickstart: Deploy Bastion automatically - Basic SKU](quickstart-host-portal.md)
+* [Quickstart: Deploy Bastion automatically - Basic SKU](quickstart-host-portal.md)
* [Quickstart: Deploy Bastion automatically - Developer SKU](quickstart-developer-sku.md)
* [Tutorial: Deploy Bastion using specified settings](tutorial-create-host-portal.md)
* [Learn module: Introduction to Azure Bastion](/training/modules/intro-to-azure-bastion/)
cloud-shell Quickstart Deploy Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-deploy-vnet.md
This document guides you through the process to complete the configuration.
This article walks you through the following steps to deploy Azure Cloud Shell in a virtual network:
+1. Register resource providers
1. Collect the required information
1. Create the virtual networks using the **Azure Cloud Shell - VNet** ARM template
1. Create the virtual network storage account using the **Azure Cloud Shell - VNet storage** ARM template
1. Configure and use Azure Cloud Shell in a virtual network
-## 1. Collect the required information
+## 1. Register resource providers
+
+Azure Cloud Shell needs access to certain Azure resources. That access is made available through
+resource providers. The following resource providers must be registered in your subscription:
+
+- **Microsoft.CloudShell**
+- **Microsoft.ContainerInstance**
+- **Microsoft.Relay**
+
+Depending on when your tenant was created, some of these providers might already be registered.
+
+To see all resource providers and their registration status for your subscription, follow these steps (a CLI alternative is shown after the list):
+
+1. Sign in to the [Azure portal][04].
+1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options.
+1. Select the subscription you want to view.
+1. On the left menu, under **Settings**, select **Resource providers**.
+1. In the search box, enter `cloudshell` to search for the resource provider.
+1. Select the **Microsoft.CloudShell** resource provider from the provider list.
+1. Select **Register** to change the status from **unregistered** to **Registered**.
+1. Repeat the previous steps for the **Microsoft.ContainerInstance** and **Microsoft.Relay**
+ resource providers.
+
+ [![Screenshot of selecting resource providers in the Azure portal.][98]][98a]
+
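If you prefer to script the registration, a minimal sketch with the Azure CLI (assuming you've already selected the target subscription):

```azurecli
# Register each provider required by Cloud Shell in a virtual network
az provider register --namespace Microsoft.CloudShell
az provider register --namespace Microsoft.ContainerInstance
az provider register --namespace Microsoft.Relay

# Confirm the registration state for a provider
az provider show --namespace Microsoft.ContainerInstance --query registrationState --output tsv
```
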
+## 2. Collect the required information
There are several pieces of information that you need to collect before you can deploy Azure Cloud Shell. You can use the default Azure Cloud Shell instance to gather the required information and create the
information, see the following articles:
> needs. For more information, see the _Change Network Settings_ section of > [Add, change, or delete a virtual network subnet][07]
-### Register the resource provider
-
-Azure Cloud Shell runs in a container. The **Microsoft.ContainerInstances** resource provider needs
-to be registered in the subscription that holds the virtual network for your deployment. Depending
-when your tenant was created, the provider may already be registered.
-
-Use the following commands to check the registration status.
-
-```powershell
-Set-AzContext -Subscription MySubscriptionName
-Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance |
- Select-Object ResourceTypes, RegistrationState
-```
-
-```Output
-ResourceTypes RegistrationState
-- --
-{containerGroups} Registered
-{serviceAssociationLinks} Registered
-{locations} Registered
-{locations/capabilities} Registered
-{locations/usages} Registered
-{locations/operations} Registered
-{locations/operationresults} Registered
-{operations} Registered
-{locations/cachedImages} Registered
-{locations/validateDeleteVirtualNetworkOrSubnets} Registered
-{locations/deleteVirtualNetworkOrSubnets} Registered
-```
-
-If **RegistrationState** for `{containerGroups}` is `NotRegistered`, run the following command to
-register the provider:
-
-```powershell
-Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
-```
- ### Azure Container Instance ID
-To configure the virtual network for Cloud Shell using the quickstarts, retrieve the `Azure Container Instance`
-ID for your organization.
+The **Azure Container Instance ID** is a unique value for every tenant. You use this identifier in
+the [quickstart templates][07] to configure the virtual network for Cloud Shell. A command-line lookup sketch follows the portal steps below.
+
+1. Sign in to the [Azure portal][09]. From the **Home** screen, select **Microsoft Entra ID**. If
+ the icon isn't displayed, enter `Microsoft Entra ID` in the top search bar.
+1. In the left menu, select **Overview** and enter `azure container instance service` into the
+ search bar.
-```powershell
-Get-AzADServicePrincipal -DisplayNameBeginsWith 'Azure Container Instance'
-```
+ [![Screenshot of searching for Azure Container Instance Service.][95]][95a]
-```Output
-DisplayName Id AppId
-- --
-Azure Container Instance Service 8fe7fd25-33fe-4f89-ade3-0e705fcf4370 34fbe509-d6cb-4813-99df-52d944bfd95a
-```
+1. In the results under **Enterprise applications**, select the **Azure Container Instance Service**.
+1. Find **ObjectID** listed as a property on the **Overview** page for **Azure Container Instance
+ Service**.
+1. You use this ID in the quickstart template for the virtual network.
-Take note of the **Id** value for the `Azure Container Instance` service principal. It's needed for
-the **Azure Cloud Shell - VNet storage** template.
+ [![Screenshot of Azure Container Instance Service details.][96]][96a]
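
If you'd rather look up the same value from the command line, a sketch using the Azure CLI (assuming the enterprise application is named exactly `Azure Container Instance Service` in your tenant):

```azurecli
# Returns the object ID of the Azure Container Instance Service principal
az ad sp list --display-name "Azure Container Instance Service" --query "[].id" --output tsv
```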
-## 2. Create the virtual network using the ARM template
+## 3. Create the virtual network using the ARM template
Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual
-network. The template creates three subnets under the virtual network created earlier. You may
+network. The template creates three subnets under the virtual network created earlier. You might
choose to change the supplied names of the subnets or use the defaults. The virtual network, along with the subnets, requires valid IP address assignments. You need at least one IP address for the Relay subnet and enough IP addresses in the container subnet to support the number of concurrent
Fill out the form with the following information:
| Project details | Value | | | -- |
-| Subscription | Defaults to the current subscription context.<br>For this example, we're using `MyCompany Subscription` |
+| Subscription | Defaults to the current subscription context.<br>For this example, we're using `Contoso (carolb)` |
| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. |

| Instance details | Value |
Fill out the form with the following information:
Once the form is complete, select **Review + Create** and deploy the network ARM template to your subscription.
-## 3. Create the virtual network storage using the ARM template
+## 4. Create the virtual network storage using the ARM template
Use the [Azure Cloud Shell - VNet storage][09] template to create Cloud Shell resources in a virtual network. The template creates the storage account and assigns it to the private virtual network.
Fill out the form with the following information:
| Project details | Value | | | -- |
-| Subscription | Defaults to the current subscription context.<br>For this example, we're using `MyCompany Subscription` |
+| Subscription | Defaults to the current subscription context.<br>For this example, we're using `Contoso (carolb)` |
| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. |

| Instance details | Value |
Fill out the form with the following information:
Once the form is complete, select **Review + Create** and deploy the network ARM template to your subscription.
-## 4. Configuring Cloud Shell to use a virtual network
+## 5. Configuring Cloud Shell to use a virtual network
After you have deployed your private Cloud Shell instance, each Cloud Shell user must change their configuration to use the new private instance.
user settings.
Resetting the user settings triggers the first-time user experience the next time you start Cloud Shell.
-[ ![Screenshot of Cloud Shell storage dialog box.](media/quickstart-deploy-vnet/setup-cloud-shell-storage.png) ](media/quickstart-deploy-vnet/setup-cloud-shell-storage.png#lightbox)
+[![Screenshot of Cloud Shell storage dialog box.][97]][97a]
1. Choose your preferred shell experience (Bash or PowerShell)
1. Select **Show advanced settings**
private Cloud Shell instance.
[07]: /azure/virtual-network/virtual-network-manage-subnet?tabs=azure-portal#change-subnet-settings
[08]: https://aka.ms/cloudshell/docs/vnet/template
[09]: https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/
+[95]: media/quickstart-deploy-vnet/container-service-search.png
+[95a]: media/quickstart-deploy-vnet/container-service-search.png#lightbox
+[96]: media/quickstart-deploy-vnet/container-service-details.png
+[96a]: media/quickstart-deploy-vnet/container-service-details.png#lightbox
+[97]: media/quickstart-deploy-vnet/setup-cloud-shell-storage.png
+[97a]: media/quickstart-deploy-vnet/setup-cloud-shell-storage.png#lightbox
+[98]: media/quickstart-deploy-vnet/resource-provider.png
+[98a]: media/quickstart-deploy-vnet/resource-provider.png#lightbox
communication-services Reactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/reactions.md
+
+ Title: Reactions
+
+description: Use Azure Communication Services SDKs to send and receive reactions.
++++++ Last updated : 10/20/2023+++
+# Reactions
+In this article, you learn how to implement the reactions capability with Azure Communication Services Calling SDKs. This capability allows users in a group call or meeting to send and receive reactions with participants in Azure Communication Services and Microsoft Teams. Reactions for users in Microsoft Teams are controlled by the configuration and policy settings in Teams. Additional information is available in [Manage reactions in Teams meetings and webinars](/microsoftteams/manage-reactions-meetings) and [Meeting options in Microsoft Teams](https://support.microsoft.com/en-us/office/meeting-options-in-microsoft-teams-53261366-dbd5-45f9-aae9-a70e6354f88e).
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
++
+## Next steps
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to manage video](./manage-video.md)
communication-services Connect Whatsapp Business Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/advanced-messaging/whatsapp/connect-whatsapp-business-account.md
Get started with the Azure Communication Services Advanced Messaging, which exte
- [Set-up Event Grid viewer](/samples/azure-samples/azure-event-grid-viewer/azure-event-grid-viewer/).
- [Set-up Event subscription for SMS received and SMS delivery events.](../../telephony/get-phone-number.md?tabs=windows&pivots=platform-azp)
- [Facebook login account](https://www.facebook.com/index.php)
-- Phone number using [Azure Communication Service Phonenumber](../..//telephony/get-phone-number.md?tabs=windows&pivots=platform-azp) **or** bring your own phone number with the given capabilities:
+- Phone number using [Azure Communication Services phone number](../..//telephony/get-phone-number.md?tabs=windows&pivots=platform-azp) **or** bring your own phone number with the given capabilities:
- Able to send and receive SMS messages.
- - Phonenumber isn't associated with a WhatsApp Business Account.
+ - Phone number isn't associated with a WhatsApp Business Account.
- [Active Meta Business Account](https://www.facebook.com/business/tools/meta-business-suite)
Get started with the Azure Communication Services Advanced Messaging, which exte
1. Now that you have selected Meta Business Account, you need to **create/select** a WhatsApp Business profile. Fill out the required information.
+> [!NOTE]
+> A WhatsApp Business Account can only be registered with Advanced Messaging one time. Selecting a WhatsApp Business Account already in use will result in an error when trying to create the channel.
+ :::image type="content" source="./media/register-whatsapp-account/whatsapp-business-account-details.png" alt-text="Screenshot that shows WhatsApp Business account details.":::

2. Once you have completed the form, click **Next** to continue.
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
ms.suite: integration
Previously updated : 10/08/2023 Last updated : 10/24/2023 # Schedule and run recurring workflows with the Recurrence trigger in Azure Logic Apps
Based on whether your workflow is [Consumption or Standard](../logic-apps/logic-
| **Time zone** | `timeZone` | No | String | Applies only when you specify a start time because this trigger doesn't accept [UTC offset](https://en.wikipedia.org/wiki/UTC_offset). Select the time zone that you want to apply. |
| **Start time** | `startTime` | No | String | Provide a start date and time, which has a maximum of 49 years in the future and must follow the [ISO 8601 date time specification](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) in [UTC date time format](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), but without a [UTC offset](https://en.wikipedia.org/wiki/UTC_offset): <br><br>YYYY-MM-DDThh:mm:ss if you select a time zone <br><br>-or- <br><br>YYYY-MM-DDThh:mm:ssZ if you don't select a time zone <br><br>So for example, if you want September 18, 2020 at 2:00 PM, then specify "2020-09-18T14:00:00" and select a time zone such as Pacific Standard Time. Or, specify "2020-09-18T14:00:00Z" without a time zone. <br><br>**Important:** If you don't select a time zone, you must add the letter "Z" at the end without any spaces. This "Z" refers to the equivalent [nautical time](https://en.wikipedia.org/wiki/Nautical_time). If you select a time zone value, you don't need to add a "Z" to the end of your **Start time** value. If you do, Logic Apps ignores the time zone value because the "Z" signifies a UTC time format. <br><br>For simple schedules, the start time is the first occurrence, while for complex schedules, the trigger doesn't fire any sooner than the start time. [*What are the ways that I can use the start date and time?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time) |
| **On these days** | `weekDays` | No | String or string array | If you select "Week", you can select one or more days when you want to run the workflow: **Monday**, **Tuesday**, **Wednesday**, **Thursday**, **Friday**, **Saturday**, and **Sunday** |
- | **At these hours** | `hours` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 23 as the hours of the day for when you want to run the workflow. <br><br>For example, if you specify "10", "12" and "14", you get 10 AM, 12 PM, and 2 PM for the hours of the day, but the minutes of the day are calculated based on when the recurrence starts. To set specific minutes of the day, for example, 10:00 AM, 12:00 PM, and 2:00 PM, specify those values by using the property named **At these minutes**. |
+ | **At these hours** | `hours` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 23 as the hours of the day for when you want to run the workflow. For example, if you specify "10", "12" and "14", you get 10 AM, 12 PM, and 2 PM for the hours of the day. <br><br>**Note**: By default, the minutes of the day are calculated based on when the recurrence starts. To set specific minutes of the day, for example, 10:00 AM, 12:00 PM, and 2:00 PM, specify those values by using the property named **At these minutes**. |
| **At these minutes** | `minutes` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 59 as the minutes of the hour when you want to run the workflow. <br><br>For example, you can specify "30" as the minute mark and using the previous example for hours of the day, you get 10:30 AM, 12:30 PM, and 2:30 PM. <br><br>**Note**: Sometimes, the timestamp for the triggered run might vary up to 1 minute from the scheduled time. If you need to pass the timestamp exactly as scheduled to subsequent actions, you can use template expressions to change the timestamp accordingly. For more information, see [Date and time functions for expressions](../logic-apps/workflow-definition-language-functions-reference.md#date-time-functions). |

![Screenshot for Consumption workflow designer and Recurrence trigger with advanced scheduling options.](./media/connectors-native-recurrence/recurrence-trigger-advanced-consumption.png)
container-apps Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md
Settings relevant to the Azure Container Apps environment API resource.
| `properties.appLogsConfiguration` | Used for configuring the Log Analytics workspace where logs for all apps in the environment are published. | | `properties.containerAppsConfiguration.daprAIInstrumentationKey` | App Insights instrumentation key provided to Dapr for tracing |
+## Policies
+
+Azure Container Apps environments are automatically deleted if one of the following conditions is detected for longer than 90 days (a status check sketch follows this list):
+
+- In an idle state
+- In a failed state due to VNet or Azure Policy configuration
+- Blocking infrastructure updates due to VNet or Azure Policy configuration
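
One quick way to spot an environment in a failed state is to inspect the resource itself; a minimal Azure CLI sketch (the environment and resource group names are placeholders):

```azurecli
# Show the provisioning state of a Container Apps environment
az containerapp env show \
    --name my-environment \
    --resource-group my-resource-group \
    --query properties.provisioningState
```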
+ ## Next steps > [!div class="nextstepaction"]
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
Previously updated : 09/29/2022 Last updated : 10/25/2023
With managed identities:
- You can use role-based access control to grant specific permissions to a managed identity.
- System-assigned identities are automatically created and managed. They're deleted when your container app is deleted.
- You can add and delete user-assigned identities and assign them to multiple resources. They're independent of your container app's life cycle.
-- You can use managed identity to [authenticate with a private Azure Container Registry](containers.md#container-registries) without a username and password to pull containers for your Container App.
+- You can use managed identity to [authenticate with a private Azure Container Registry](./managed-identity-image-pull.md) and pull containers for your Container App without a username and password.
- You can use [managed identity to create connections for Dapr-enabled applications via Dapr components](./dapr-overview.md) ### Common use cases
An ARM template can be used to automate deployment of your container app and res
Adding the system-assigned type tells Azure to create and manage the identity for your application. For a complete ARM template example, see [ARM API Specification](azure-resource-manager-api-spec.md?tabs=arm-template#container-app-examples).
+# [YAML](#tab/yaml)
+
+Some Azure CLI commands, including `az containerapp create` and `az containerapp job create`, support YAML files for input. To add a system-assigned identity, add an `identity` section to your YAML file.
+
+```yaml
+identity:
+ type: SystemAssigned
+```
+
+Adding the system-assigned type tells Azure to create and manage the identity for your application. For a complete YAML template example, see [ARM API Specification](azure-resource-manager-api-spec.md?tabs=yaml#container-app-examples).
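
For an existing container app, the Azure CLI can apply the same setting without editing YAML; a minimal sketch (the app and resource group names are placeholders):

```azurecli
# Assign a system-assigned identity to an existing container app
az containerapp identity assign \
    --name my-container-app \
    --resource-group my-resource-group \
    --system-assigned
```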
+ ### Add a user-assigned identity
For a complete ARM template example, see [ARM API Specification](azure-resource-
> [!NOTE] > An application can have both system-assigned and user-assigned identities at the same time. In this case, the type property would be `SystemAssigned,UserAssigned`.
+# [YAML](#tab/yaml)
+
+To add one or more user-assigned identities, add an `identity` section to your YAML configuration file. Replace `<IDENTITY1_RESOURCE_ID>` and `<IDENTITY2_RESOURCE_ID>` with the resource identifiers of the identities you want to add.
+
+Specify each user-assigned identity by adding an item to the `userAssignedIdentities` object with the identity's resource identifier as the key. Use an empty object as the value.
+
+```yaml
+identity:
+ type: UserAssigned
+ userAssignedIdentities:
+ <IDENTITY1_RESOURCE_ID>: {}
+ <IDENTITY2_RESOURCE_ID>: {}
+```
+
+For a complete YAML template example, see [ARM API Specification](azure-resource-manager-api-spec.md?tabs=yaml#container-app-examples).
+
+> [!NOTE]
+> An application can have both system-assigned and user-assigned identities at the same time. In this case, the type property would be `SystemAssigned,UserAssigned`.
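
The equivalent imperative step for an existing container app, sketched with the Azure CLI (the app name, resource group, and identity resource ID are placeholders):

```azurecli
# Assign a user-assigned identity to an existing container app
az containerapp identity assign \
    --name my-container-app \
    --resource-group my-resource-group \
    --user-assigned /subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCE_GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<IDENTITY_NAME>
```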
+ ## Configure a target resource
To remove all identities, set the `type` of the container app's identity to `Non
} ```
+# [YAML](#tab/yaml)
+
+To remove all identities, set the `type` of the container app's identity to `None` in the YAML configuration file:
+
+```yaml
+identity:
+ type: None
+```
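
For an existing app, a comparable Azure CLI sketch removes the identities directly (names are placeholders):

```azurecli
# Remove the system-assigned identity from an existing container app
az containerapp identity remove \
    --name my-container-app \
    --resource-group my-resource-group \
    --system-assigned
```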
+ ## Next steps
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
After a container app is successfully provisioned, a revision enters its operati
| Status | Description |
|---|---|
+| Provisioning | The revision is in the verification process. |
| Scale to 0 | Zero running replicas, and not provisioning any new replicas. The container app can create new replicas if scale rules are triggered. |
| Activating | Zero running replicas, one replica being provisioned. |
-| Processing | Scaling in or out is occurring. One or more running replicas, while other replicas are being provisioned. |
-| Running | One or more replicas running. There are no issues to report. |
-| Degraded | At least one replica in the revision is failed. View running state details for specific issues. |
+| Activation failed | The first replica failed to provision. |
+| Scaling / Processing | Scaling in or out is occurring. One or more replicas are running, while other replicas are being provisioned. |
+| Running | One or more replicas are running. There are no issues to report. |
+| Running (at max) | The maximum number of replicas (according to the scale rules of the revision) are running. There are no issues to report. |
+| Deprovisioning | The revision is transitioning from active to inactive, and is removing any resources it has created. |
+| Degraded | At least one replica in the revision is in a failed state. View running state details for specific issues. |
| Failed | Critical errors caused revisions to fail. The *running state* provides details. Common causes include:<br>• Termination<br>• Exit code `137` |
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for NoSQL client library for .NET
-description: Learn how to build a .NET app to manage Azure Cosmos DB for NoSQL account resources in this quickstart.
+ Title: Quickstart - Client library for .NET
+
+description: Deploy a .NET web application to manage Azure Cosmos DB for NoSQL account resources in this quickstart.
ms.devlang: csharp Previously updated : 11/07/2022- Last updated : 10/24/2023+
+zone_pivot_groups: azure-cosmos-db-quickstart-path
+# CustomerIntent: As a developer, I want to learn the basics of the .NET client library so that I can build applications with Azure Cosmos DB for NoSQL.
# Quickstart: Azure Cosmos DB for NoSQL client library for .NET
[!INCLUDE[Quickstart selector](includes/quickstart-selector.md)]
-Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
+Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to deploy a sample application and explore the code. In this quickstart, you use the Azure Developer CLI (`azd`) and the `Microsoft.Azure.Cosmos` library to connect to a newly created Azure Cosmos DB for NoSQL account.
-> [!NOTE]
-> The [example code snippets](https://github.com/azure-samples/cosmos-db-nosql-dotnet-samples) are available on GitHub as a .NET project.
-
-[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Samples](samples-dotnet.md)
+[API reference documentation](/dotnet/api/microsoft.azure.cosmos) | [Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | [Azure Developer CLI](/azure/developer/azure-developer-cli/overview)
## Prerequisites

- An Azure account with an active subscription.
  - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
- [.NET 6.0 or later](https://dotnet.microsoft.com/download)
- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+- [.NET 8.0](https://dotnet.microsoft.com/download/dotnet/8.0)
-### Prerequisite check
- In a terminal or command window, run ``dotnet --version`` to check that the .NET SDK is version 6.0 or later.
- Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
-## Setting up
+## Deploy the Azure Developer CLI template
-This section walks you through creating an Azure Cosmos DB account and setting up a project that uses Azure Cosmos DB for NoSQL client library for .NET to manage resources.
+Use the Azure Developer CLI (`azd`) to create an Azure Cosmos DB for NoSQL account and set up an Azure Container Apps web application. The sample application uses the client library for .NET to manage resources.
-### <a id="create-account"></a>Create an Azure Cosmos DB account
+1. Start in an empty directory in the Azure Cloud Shell.
-> [!TIP]
-> No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new .NET app](#create-a-new-net-app) section.
+ > [!TIP]
+ > We recommend creating a new uniquely named directory within the fileshare folder (`~/clouddrive`).
+ >
+ > For example, this command will create a new directory and navigate to that directory:
+ >
+ > ```azurecli-interactive
+ > mkdir ~/clouddrive/cosmos-db-nosql-dotnet-quickstart
+ >
+ > cd ~/clouddrive/cosmos-db-nosql-dotnet-quickstart
+ > ```
+1. Initialize the Azure Developer CLI using `azd init` and the `cosmos-db-nosql-dotnet-quickstart` template.
-### Create a new .NET app
+ ```azurecli-interactive
+ azd init --template cosmos-db-nosql-dotnet-quickstart
+ ```
-Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new``](/dotnet/core/tools/dotnet-new) command specifying the **console** template.
+1. During initialization, configure a unique environment name.
-```dotnetcli
-dotnet new console
-```
+ > [!NOTE]
+ > The environment name will also be used as the target resource group name.
-### Install the package
+1. Deploy the Azure Cosmos DB account and other resources for this quickstart with `azd provision`.
-Add the [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) NuGet package to the .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
+ ```azurecli-interactive
+ azd provision
+ ```
-```dotnetcli
-dotnet add package Microsoft.Azure.Cosmos
-```
+1. During the provisioning process, select your subscription and desired location. Wait for the provisioning process to complete. The process can take **approximately five minutes**.
-Build the project with the [``dotnet build``](/dotnet/core/tools/dotnet-build) command.
+1. Once the provisioning of your Azure resources is done, a link to the running web application is included in the output.
-```dotnetcli
-dotnet build
-```
+ ```output
+ View the running web application in Azure Container Apps:
+ <https://container-app-39423723798.redforest-xz89v7c.eastus.azurecontainerapps.io>
+
+ SUCCESS: Your application was provisioned in Azure in 5 minutes 0 seconds.
+ ```
-Make sure that the build was successful with no errors. The expected output from the build should look something like this:
+1. Use the link in the console to navigate to your web application in the browser.
-```output
- Determining projects to restore...
- All projects are up-to-date for restore.
- dslkajfjlksd -> C:\Users\sidandrews\Demos\dslkajfjlksd\bin\Debug\net6.0\dslkajfjlksd.dll
+ :::image type="content" source="media/quickstart-dotnet/web-application.png" alt-text="Screenshot of the running web application.":::
-Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-```
-### Configure environment variables
+## Get the application code
-## Object model
+Use the Azure Developer CLI (`azd`) to get the application code. The sample application uses the client library for .NET to manage resources.
+1. Start in an empty directory.
-You'll use the following .NET classes to interact with these resources:
+1. Initialize the Azure Developer CLI using `azd init` and the `cosmos-db-nosql-dotnet-quickstart` template.
-- [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-- [``Database``](/dotnet/api/microsoft.azure.cosmos.database) - This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.
-- [``Container``](/dotnet/api/microsoft.azure.cosmos.container) - This class is a reference to a container that also may not exist in the service yet. The container is validated server-side when you attempt to work with it.
-- [``QueryDefinition``](/dotnet/api/microsoft.azure.cosmos.querydefinition) - This class represents a SQL query and any query parameters.
-- [``FeedIterator<>``](/dotnet/api/microsoft.azure.cosmos.feediterator-1) - This class represents an iterator that can track the current page of results and get a new page of results.
-- [``FeedResponse<>``](/dotnet/api/microsoft.azure.cosmos.feedresponse-1) - This class represents a single page of responses from the iterator. This type can be iterated over using a ``foreach`` loop.
+ ```azurecli
+ azd init --template cosmos-db-nosql-dotnet-quickstart
+ ```
-## Code examples
+1. During initialization, configure a unique environment name.
-- [Authenticate the client](#authenticate-the-client)
-- [Create a database](#create-a-database)
-- [Create a container](#create-a-container)
-- [Create an item](#create-an-item)
-- [Get an item](#get-an-item)
-- [Query items](#query-items)
+ > [!NOTE]
+ > If you decide to deploy this application to Azure in the future, the environment name will also be used as the target resource group name.
-The sample code described in this article creates a database named ``cosmicworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+## Create the API for NoSQL account
-For this sample code, the container will use the category as a logical partition key.
+Use the Azure CLI (`az`) to create an API for NoSQL account. You can choose to create an account in your existing subscription, or try a free Azure Cosmos DB account.
-### Authenticate the client
+### [Try Azure Cosmos DB free](#tab/try-free)
+1. Navigate to the **Try Azure Cosmos DB free** homepage: <https://cosmos.azure.com/try/>
-## [Passwordless (Recommended)](#tab/passwordless)
+1. Sign-in using your Microsoft account.
+1. In the list of APIs, select the **Create** button for the **API for NoSQL**.
+1. Navigate to the newly created account by selecting **Open in portal**.
-## Authenticate using DefaultAzureCredential
+1. Record the account and resource group names for the API for NoSQL account. You use these values in later steps.
+> [!IMPORTANT]
+> If you are using a free account, you might need to change the default subscription in Azure CLI to the subscription ID used for the free account.
+>
+> ```azurecli
+> az account set --subscription <subscription-id>
+> ```
-You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `Azure.Identity` NuGet package to your application. `DefaultAzureCredential` will automatically discover and use the account you signed-in with in the previous step.
+### [Azure subscription](#tab/azure-subscription)
-```dotnetcli
-dotnet add package Azure.Identity
-```
+1. If you haven't already, sign in to the Azure CLI using the `az login` command.
-From the project directory, open the `Program.cs` file. In your editor, add using directives for the ``Microsoft.Azure.Cosmos`` and `Azure.Identity` namespaces.
+1. Use `az group create` to create a new resource group in your subscription.
+ ```azurecli
+ az group create \
+ --name <resource-group-name> \
+ --location <location>
+ ```
-Define a new instance of the ``CosmosClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariable) to read the `COSMOS_ENDPOINT` environment variable you created earlier.
+1. Use the `az cosmosdb create` command to create a new API for NoSQL account with default settings.
+ ```azurecli
+ az cosmosdb create \
+ --resource-group <resource-group-name> \
+ --name <account-name> \
+ --locations regionName=<location>
+ ```
-For more information on different ways to create a ``CosmosClient`` instance, see [Get started with Azure Cosmos DB for NoSQL and .NET](how-to-dotnet-get-started.md#connect-to-azure-cosmos-db-sql-api).
+
-## [Connection String](#tab/connection-string)
+## Create the database and container
+
+Use the Azure CLI to create the `cosmicworks` database and `products` container for the quickstart.
+
+1. Create a new database with `az cosmosdb sql database create`. Set the name of the database to `cosmicworks` and use autoscale throughput with a maximum of **1,000** RU/s.
+
+ ```azurecli
+ az cosmosdb sql database create \
+ --resource-group <resource-group-name> \
+ --account-name <account-name> \
+ --name "cosmicworks" \
+ --max-throughput 1000
+ ```
+
+1. Create a container named `products` within the `cosmicworks` database using `az cosmosdb sql container create`. Set the partition key path to `/category`.
+
+ ```azurecli
+ az cosmosdb sql container create \
+ --resource-group <resource-group-name> \
+ --account-name <account-name> \
+ --database-name "cosmicworks" \
+ --name "products" \
+ --partition-key-path "/category"
+ ```
-From the project directory, open the `Program.cs` file. In your editor, add a using directive for ``Microsoft.Azure.Cosmos``.
+## Configure passwordless authentication
+When developing locally with passwordless authentication, make sure the user account that connects to Cosmos DB is assigned a role with the correct permissions to perform data operations. Currently, Azure Cosmos DB for NoSQL doesn't include built-in roles for data operations, but you can create your own using the Azure CLI or PowerShell.
-Define a new instance of the ``CosmosClient`` class using the constructor, and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariable) to read the two environment variables you created earlier.
+1. Get the API for NoSQL endpoint for the account using `az cosmosdb show`. You'll use this value in the next step.
+ ```azurecli
+ az cosmosdb show \
+ --resource-group <resource-group-name> \
+ --name <account-name> \
+ --query "documentEndpoint"
+ ```
-For more information on different ways to create a ``CosmosClient`` instance, see [Get started with Azure Cosmos DB for NoSQL and .NET](how-to-dotnet-get-started.md#connect-to-azure-cosmos-db-sql-api).
+1. Set the `AZURE_COSMOS_DB_NOSQL_ENDPOINT` environment variable using the .NET secret manager (`dotnet user-secrets`). Set the value to the API for NoSQL account endpoint recorded in the previous step.
-
+ ```bash
+ dotnet user-secrets set "AZURE_COSMOS_DB_NOSQL_ENDPOINT" "<cosmos-db-nosql-endpoint>" --project ./src/web/Cosmos.Samples.NoSQL.Quickstart.Web.csproj
+ ```
-### Create and query the database
+1. Create a JSON file named `role-definition.json`. Use this content to configure the role with the following permissions:
-Next you'll create a database and container to store products, and perform queries to insert and read those items.
+ - `Microsoft.DocumentDB/databaseAccounts/readMetadata`
+ - `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*`
+ - `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*`
-## [Passwordless (Recommended)](#tab/passwordless)
+ ```json
+ {
+ "RoleName": "Write to Azure Cosmos DB for NoSQL data plane",
+ "Type": "CustomRole",
+ "AssignableScopes": [
+ "/"
+ ],
+ "Permissions": [
+ {
+ "DataActions": [
+ "Microsoft.DocumentDB/databaseAccounts/readMetadata",
+ "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*",
+ "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*"
+ ]
+ }
+ ]
+ }
+ ```
-The `Microsoft.Azure.Cosmos` client libraries enable you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations such as creating and deleting databases you must use RBAC through one of the following options:
+1. Create a role using the `az cosmosdb sql role definition create` command. Name the role `Write to Azure Cosmos DB for NoSQL data plane` and ensure the role is scoped to the account level using `/`. Use the `role-definition.json` file you created in the previous step.
-> - [Azure CLI scripts](manage-with-cli.md)
-> - [Azure PowerShell scripts](manage-with-powershell.md)
-> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)
-> - [Azure Resource Manager .NET client library](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDB/)
+ ```azurecli
+ az cosmosdb sql role definition create \
+ --resource-group <resource-group-name> \
+ --account-name <account-name> \
+ --body @role-definition.json
+ ```
-The Azure CLI approach is used in this example. Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
+1. When the command is finished, it outputs an object that includes an `id` field. Record the value from the `id` field. You use this value in an upcoming step.
-```azurecli
-# Create a SQL API database
-az cosmosdb sql database create
- --account-name msdocs-cosmos-nosql
- --resource-group msdocs
- --name cosmicworks
-
-# Create a SQL API container
-az cosmosdb sql container create
- --account-name msdocs-cosmos-nosql
- --resource-group msdocs
- --database-name cosmicworks
- --name products
-```
+ > [!TIP]
+ > If you need to get the `id` again, you can use the `az cosmosdb sql role definition list` command:
+ >
+ > ```azurecli
+ > az cosmosdb sql role definition list \
+ > --resource-group <resource-group-name> \
+ > --account-name <account-name> \
+ > --query "[?roleName == 'Write to Azure Cosmos DB for NoSQL data plane'].id"
+ > ```
+ >
-After the resources have been created, use classes from the `Microsoft.Azure.Cosmos` client libraries to connect to and query the database.
+1. For local development, get the **principal ID** of your currently signed-in user. Record this value, as you'll also use it in the next step.
-### Get the database
+ ```azurecli
+ az ad signed-in-user show --query id
+ ```
-Use the [``CosmosClient.GetDatabase``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.getdatabase) method will return a reference to the specified database.
+1. Assign the role definition to your currently logged in user using `az cosmosdb sql role assignment create`.
+ ```azurecli
+ az cosmosdb sql role assignment create \
+ --resource-group <resource-group-name> \
+ --account-name <account-name> \
+ --scope "/" \
+ --role-definition-id "<your-custom-role-definition-id>" \
+ --principal-id "<your-service-principal-id>"
+ ```
+
+1. Run the .NET web application.
+
+ ```bash
+ dotnet run --project ./src/web/Cosmos.Samples.NoSQL.Quickstart.Web.csproj
+ ```
+
+1. Use the link in the console to navigate to your web application in the browser.
+
+ :::image type="content" source="media/quickstart-dotnet/web-application.png" alt-text="Screenshot of the running web application.":::
++
+## Walk through the .NET library code
-### Get the container
+- [Authenticate the client](#authenticate-the-client)
+- [Get a database](#get-a-database)
+- [Get a container](#get-a-container)
+- [Create an item](#create-an-item)
+- [Read an item](#read-an-item)
+- [Query items](#query-items)
-The [``Database.GetContainer``](/dotnet/api/microsoft.azure.cosmos.database.getcontainer) will return a reference to the specified container.
+The sample code in the Azure Developer CLI template creates a database named `cosmicworks` with a container named `products`. The `products` container is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
+For this sample, the container uses the `/category` property as a logical partition key.
-## [Connection String](#tab/connection-string)
+The code blocks used to perform these operations in this sample are included in this section. You can also [browse the entire template's source](https://vscode.dev/github/azure-samples/cosmos-db-nosql-dotnet-quickstart) using Visual Studio Code for the Web.
-### Create a database
+### Authenticate the client
-Use the [``CosmosClient.CreateDatabaseIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.cosmosclient.createdatabaseifnotexistsasync) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database.
+Application requests to most Azure services must be authorized. Using the <xref:Azure.Identity.DefaultAzureCredential> class provided by the <xref:Azure.Identity> client library and namespace is the recommended approach for implementing passwordless connections to Azure services in your code.
+> [!IMPORTANT]
+> You can also authorize requests to Azure services using passwords, connection strings, or other credentials directly. However, this approach should be used with caution. Developers must be diligent to never expose these secrets in an unsecure location. Anyone who gains access to the password or secret key is able to authenticate. `DefaultAzureCredential` offers improved management and security benefits over the account key to allow passwordless authentication.
-For more information on creating a database, see [Create a database in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-database.md).
+`DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime.
+The client authentication code for this project is in the `src/web/Program.cs` file.
-### Create a container
+For example, your app can authenticate using your Visual Studio sign-in credentials when developing locally, and then use a system-assigned managed identity once it has been deployed to Azure. No code changes are required for this transition between environments.
-The [``Database.CreateContainerIfNotExistsAsync``](/dotnet/api/microsoft.azure.cosmos.database.createcontainerifnotexistsasync) will create a new container if it doesn't already exist. This method will also return a reference to the container.
+Alternatively, your app can specify a `clientId` with the <xref:Azure.Identity.DefaultAzureCredentialOptions> class to use a user-assigned managed identity locally or in Azure.
-For more information on creating a container, see [Create a container in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-container.md).
-
+### Get a database
-### Create an item
+The code to access database resources is in the `GenerateQueryDataAsync` method of the `src/web/Pages/Index.razor` file.
-The easiest way to create a new item in a container is to first build a C# [class](/dotnet/csharp/language-reference/keywords/class) or [record](/dotnet/csharp/language-reference/builtin-types/record) type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a *categoryId* field for the partition key, and extra *categoryName*, *name*, *quantity*, and *sale* fields.
+Use the <xref:Microsoft.Azure.Cosmos.CosmosClient.GetDatabase%2A> method to return a reference to the specified database.
-Create an item in the container by calling [``Container.CreateItemAsync``](/dotnet/api/microsoft.azure.cosmos.container.createitemasync).
+### Get a container
+The code to access container resources is also in the `GenerateQueryDataAsync` method.
-For more information on creating, upserting, or replacing items, see [Create an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-create-item.md).
+The <xref:Microsoft.Azure.Cosmos.Database.GetContainer%2A> method returns a reference to the specified container.
-### Get an item
-In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [``Container.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) passing in both values to return a deserialized instance of your C# type.
+### Create an item
+
+The easiest way to create a new item in a container is to first build a C# class or record type with all of the members you want to serialize into JSON. In this example, the C# record has a unique identifier, a `category` field for the partition key, name, quantity, price, and clearance fields.
++
+In the `GenerateQueryDataAsync` method, create an item in the container by calling <xref:Microsoft.Azure.Cosmos.Container.UpsertItemAsync%2A>.
+
+### Read an item
-For more information about reading items and parsing the response, see [Read an item in Azure Cosmos DB for NoSQL using .NET](how-to-dotnet-read-item.md).
+In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (`id`) and partition key fields. In the SDK, call <xref:Microsoft.Azure.Cosmos.Container.ReadItemAsync%2A> passing in both values to return a deserialized instance of your C# type.
+Still in the `GenerateQueryDataAsync` method, use `ReadItemAsync<Product>` to deserialize the item using the `Product` type.
+ ### Query items
-After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: ``SELECT * FROM products p WHERE p.categoryId = "61dba35b-4f02-45c5-b648-c6badc0cbd79"``. This example uses the **QueryDefinition** type and a parameterized query expression for the partition key filter. Once the query is defined, call [``Container.GetItemQueryIterator<>``](/dotnet/api/microsoft.azure.cosmos.container.getitemqueryiterator) to get a result iterator that will manage the pages of results. Then, use a combination of ``while`` and ``foreach`` loops to retrieve pages of results and then iterate over the individual items.
+After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: `SELECT * FROM products p WHERE p.category = "gear-surf-surfboards"`. This example uses the QueryDefinition type and a parameterized query expression for the partition key filter. Once the query is defined, call <xref:Microsoft.Azure.Cosmos.Container.GetItemQueryIterator%2A> to get a result iterator that manages the pages of results. In the example, the query logic is also in the `GenerateQueryDataAsync` method.
-## Run the code
+Then, use a combination of `while` and `foreach` loops to retrieve pages of results and then iterate over the individual items.
-This app creates an API for NoSQL database and container. The example then creates an item and then reads the exact same item back. Finally, the example issues a query that should only return that single item. With each step, the example outputs metadata to the console about the steps it has performed.
-To run the app, use a terminal to navigate to the application directory and run the application.
+## Clean up resources
-```dotnetcli
-dotnet run
-```
-The output of the app should be similar to this example:
+When you no longer need the sample application or resources, remove the corresponding deployment and all resources.
-```output
-New database: adventureworks
-New container: products
-Created item: 68719518391 [gear-surf-surfboards]
+```azurecli-interactive
+azd down
```
-## Clean up resources
+
+### [Try Azure Cosmos DB free](#tab/try-free)
+
+1. Navigate to the **Try Azure Cosmos DB free** homepage again: <https://cosmos.azure.com/try/>
+
+1. Sign-in using your Microsoft account.
+
+1. Select **Delete your account**.
+
+### [Azure subscription](#tab/azure-subscription)
+
+When you no longer need the API for NoSQL account, you can delete the corresponding resource group. Use the `az group delete` command to delete the resource group.
+
+```azurecli
+az group delete --name <resource-group-name>
+```
++
-## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a database, and create a container using the .NET SDK. You can now dive deeper into a tutorial where you manage your Azure Cosmos DB for NoSQL resources and data using a .NET console application.
+## Next step
> [!div class="nextstepaction"] > [Tutorial: Develop a .NET console application with Azure Cosmos DB for NoSQL](tutorial-dotnet-console-app.md)
cosmos-db Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-create-users.md
- Title: Create users - Azure Cosmos DB for PostgreSQL
-description: See how you can create new user accounts to interact with an Azure Cosmos DB for PostgreSQL cluster.
------ Previously updated : 09/21/2022--
-# Create users in Azure Cosmos DB for PostgreSQL
--
-The PostgreSQL engine uses
-[roles](https://www.postgresql.org/docs/current/sql-createrole.html) to control
-access to database objects, and a newly created cluster
-comes with several roles pre-defined:
-
-* The [default PostgreSQL roles](https://www.postgresql.org/docs/current/default-roles.html)
-* `azure_pg_admin`
-* `postgres`
-* `citus`
-
-Since Azure Cosmos DB for PostgreSQL is a managed PaaS service, only Microsoft can sign in with the
-`postgres` superuser role. For limited administrative access, Azure Cosmos DB for PostgreSQL
-provides the `citus` role.
-
-## The Citus role
-
-Permissions for the `citus` role:
-
-* Read all configuration variables, even variables normally visible only to
- superusers.
-* Read all pg\_stat\_\* views and use various statistics-related
- extensions--even views or extensions normally visible only to superusers.
-* Execute monitoring functions that may take ACCESS SHARE locks on tables,
- potentially for a long time.
-* [Create PostgreSQL extensions](reference-extensions.md), because
- the role is a member of `azure_pg_admin`.
-
-Notably, the `citus` role has some restrictions:
-
-* Can't create roles
-* Can't create databases
-
-## How to create user roles
-
-As mentioned, the `citus` admin account lacks permission to create user roles. To add a user role, use the Azure portal interface.
-
-1. On your cluster page, select the **Roles** menu item, and on the **Roles** page, select **Add**.
-
- :::image type="content" source="media/howto-create-users/1-role-page.png" alt-text="Screenshot that shows the Roles page.":::
-
-2. Enter the role name and password. Select **Save**.
-
- :::image type="content" source="media/howto-create-users/2-add-user-fields.png" alt-text="Screenshot that shows the Add role page.":::
-
-The user will be created on the coordinator node of the cluster,
-and propagated to all the worker nodes. Roles created through the Azure
-portal have the `LOGIN` attribute, which means they're true users who
-can sign in to the database.
-
-## How to modify privileges for user roles
-
-New user roles are commonly used to provide database access with restricted
-privileges. To modify user privileges, use standard PostgreSQL commands, using
-a tool such as PgAdmin or psql. For more information, see [Connect to a cluster](quickstart-connect-psql.md).
-
-For example, to allow `db_user` to read `mytable`, grant the permission:
-
-```sql
-GRANT SELECT ON mytable TO db_user;
-```
-
-Azure Cosmos DB for PostgreSQL propagates single-table GRANT statements through the entire
-cluster, applying them on all worker nodes. It also propagates GRANTs that are
-system-wide (for example, for all tables in a schema):
-
-```sql
--- applies to the coordinator node and propagates to workers
-GRANT SELECT ON ALL TABLES IN SCHEMA public TO db_user;
-```
-
-## How to delete a user role or change their password
-
-To update a user, visit the **Roles** page for your cluster,
-and select the ellipses **...** next to the user. The ellipses will open a menu
-to delete the user or reset their password.
-
- :::image type="content" source="media/howto-create-users/edit-role.png" alt-text="Edit a role":::
-
-The `citus` role is privileged and can't be deleted.
-
-## Next steps
-
-Open the firewall for the IP addresses of the new users' machines to enable
-them to connect: [Create and manage firewall rules using
-the Azure portal](howto-manage-firewall-using-portal.md).
-
-For more information about database user management, see PostgreSQL
-product documentation:
-
-* [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html)
-* [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html)
-* [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html)
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
Occasionally Microsoft needs legal documentation if the information you provided
* Name difference between Account name and Company name * Change in name
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+ ## Next steps * If needed, update your billing contact information at the [Azure portal](https://portal.azure.com).
cost-management-billing Understand Suse Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-suse-reservation-charges.md
Title: Software plan discount - Azure description: Learn how software plan discounts are applied to software on virtual machines. -+ Previously updated : 12/06/2022 Last updated : 10/25/2023
To buy the right plan, you need to understand your VM usage and the number of vC
## How reservation discount is applied
-A reservation discount is "*use-it-or-lose-it*". So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+A reservation discount is "*use-it-or-lose-it*." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*.
For example, if your usage is for product **SUSE Linux Enterprise Server Priorit
## Discount applies to different VM sizes for SUSE plans
-Like Reserved VM Instances, SUSE plan purchases offer instance size flexibility. This means that your discount applies even when you deploy a VM with a different vCPU count. The discount applies to different VM sizes within the software plan.
+Like Reserved VM Instances, SUSE plan purchases offer instance size flexibility. That means that your discount applies even when you deploy a VM with a different vCPU count. The discount applies to different VM sizes within the software plan.
The discount amount depends on the ratio listed in the following tables. The ratio compares the relative footprint for each meter in that group. The ratio depends on the VM vCPUs. Use the ratio value to calculate how many VM instances get the SUSE Linux plan discount.
For example, if you buy a plan for SUSE Linux Enterprise Server for HPC Priority
The ratio for 5 or more vCPUs is 2.6. So a reservation for SUSE with a VM with 5 or more vCPUs covers only a portion of the software cost, which is about 77%.
+The ratios are based on prices. For example, the 2.6 ratio means that a purchase quantity of 1 covers a 1 vCPU VM, while a VM with 5 or more vCPUs consumes 2.6 of the purchased quantity.
+ The following tables show the software plans you can buy a reservation for, their associated usage meters, and the ratios for each. ### SUSE Linux Enterprise Server for HPC
cost-management-billing Calculate Ea Savings Plan Savings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/calculate-ea-savings-plan-savings.md
Previously updated : 04/03/2023 Last updated : 10/25/2023
This article helps Enterprise Agreement (EA) users manually calculate their savi
> [!NOTE] > The prices shown in this article are for example purposes only.
-This article is specific to EA users. Microsoft Customer Agreement (MCA) users can use similar steps to calculate their savings plan savings through invoices. However, the MCA amortized usage file doesn't contain UnitPrice (on-demand pricing) for savings plans. Other resources in the file do. For more information, see [Download usage for your Microsoft Customer Agreement](../savings-plan/utilization-cost-reports.md).
+This article is specific to EA users.
+
+However, Microsoft Customer Agreement (MCA) users can use similar steps to calculate their savings plan savings through invoices. The MCA amortized usage file doesn't contain UnitPrice (on-demand pricing) for savings plans. You can get unit prices from your [MCA price sheet](download-savings-plan-price-sheet.md#download-mca-price-sheet).
## Required permissions
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
Previously updated : 10/24/2023 Last updated : 10/25/2023
For more information about how savings plan scope works, see [Saving plan scopes
Usage from [savings plan-eligible resources](https://azure.microsoft.com/pricing/offers/savings-plan-compute/#how-it-works) is eligible for savings plan benefits.
-In addition, virtual machines used with the [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/products/kubernetes-service/) and [Azure Virtual Desktop (AVD)](https://azure.microsoft.com/products/virtual-desktop/) are eligible for the savings plan.
+In addition, virtual machines used with the [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/products/kubernetes-service/), [Azure Virtual Desktop (AVD)](https://azure.microsoft.com/products/virtual-desktop/), and [Azure Red Hat OpenShift (ARO)](https://azure.microsoft.com/products/openshift/) are eligible for the savings plan.
It's important to consider your hourly spend when you determine your hourly commitment. Azure provides commitment recommendations based on usage from your last 30 days. The recommendations are found in:
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
[Further details and notes](defender-for-resource-manager-introduction.md)
-| Alert (alertype) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
|-|| | **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium | | **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Prerequisite: [Enable Defender for DevOps](defender-for-devops-introduction.md).
|--|--| | Internet exposed GitHub repository with plaintext secret is publicly accessible (Preview) | A GitHub repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. |
+### APIs
+
+Prerequisite: [Enable Defender for APIs](defender-for-apis-deploy.md).
+
+| Attack path display name | Attack path description |
+|--|--|
+| Internet exposed APIs that are unauthenticated carry sensitive data | Azure API Management API is reachable from the internet, contains sensitive data, and has no authentication enabled, resulting in attackers exploiting APIs for data exfiltration. |
+ ## Cloud security graph components list This section lists all of the cloud security graph components (connections and insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 10/18/2023 Last updated : 10/25/2023 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
|Date |Update | |-|-|
+| October 25 | [Offline Azure API Management revisions removed from Defender for APIs](#offline-azure-api-management-revisions-removed-from-defender-for-apis) |
| October 19 |[DevOps security posture management recommendations available in public preview](#devops-security-posture-management-recommendations-available-in-public-preview)
-| October 18 | [Releasing CIS Azure Foundations Benchmark v2.0.0 in Regulatory Compliance dashboard](#releasing-cis-azure-foundations-benchmark-v200-in-regulatory-compliance-dashboard)
+| October 18 | [Releasing CIS Azure Foundations Benchmark v2.0.0 in Regulatory Compliance dashboard](#releasing-cis-azure-foundations-benchmark-v200-in-regulatory-compliance-dashboard) |
+
+## Offline Azure API Management revisions removed from Defender for APIs
+
+October 25, 2023
+
+Defender for APIs has updated its support for Azure API Management API revisions. Offline revisions no longer appear in the Defender for APIs inventory and are no longer onboarded to Defender for APIs. Offline revisions don't allow any traffic to be sent to them and pose no risk from a security perspective.
## DevOps security posture management recommendations available in public preview
New DevOps posture management recommendations are now available in public previe
October 18, 2023
-Microsoft Defender for Cloud now supports the latest [CIS Azure Security Foundations Benchmark - version 2.0.0](https://www.cisecurity.org/benchmark/azure) in the Regulatory Compliance [dashboard](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/22), as well as a built-in policy initiative in Azure Policy. The release of version 2.0.0 in Microsoft Defender for Cloud is a joint collaborative effort between Microsoft, the Center for Internet Security (CIS), and the user communities. The version 2.0.0 significantly expands assessment scope which now includes 90+ built-in Azure policies and will succeed the prior versions 1.4.0 and 1.3.0 and 1.0 in Microsoft Defender for Cloud and Azure Policy. Please refer to this [blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-cloud-now-supports-cis-azure-security/ba-p/3944860) for more details.
+Microsoft Defender for Cloud now supports the latest [CIS Azure Security Foundations Benchmark - version 2.0.0](https://www.cisecurity.org/benchmark/azure) in the Regulatory Compliance [dashboard](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/22), and a built-in policy initiative in Azure Policy. The release of version 2.0.0 in Microsoft Defender for Cloud is a joint collaborative effort between Microsoft, the Center for Internet Security (CIS), and the user communities. Version 2.0.0 significantly expands assessment scope, which now includes 90+ built-in Azure policies, and succeeds the prior versions 1.4.0, 1.3.0, and 1.0 in Microsoft Defender for Cloud and Azure Policy. For more information, you can check out this [blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-cloud-now-supports-cis-azure-security/ba-p/3944860).
## September 2023
For more information, see [Migrate to SQL server-targeted Azure Monitoring Agent
September 20, 2023
-You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results will be displayed in the DevOps blade and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud.
+You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results are displayed in the DevOps blade and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud.
Learn more about [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Last updated 10/09/2023
> [!IMPORTANT] > The information on this page relates to pre-release products or features, which might be substantially modified before they are commercially released, if ever. Microsoft makes no commitments or warranties, express or implied, with respect to the information provided here.
-[Defender for Servers](#defender-for-servers)
+<!-- Please don't adjust this next line without getting approval from the Defender for Cloud documentation team. It is necessary for proper RSS functionality. -->
+ On this page, you can learn about changes that are planned for Defender for Cloud. It describes planned modifications to the product that might affect things like your secure score or workflows. > [!TIP]
If you're looking for the latest release notes, you can find them in the [What's
## Four alerts are set to be deprecated
-Announcement date: October 23, 2023
-Estimated date for change: November 23, 2023
+**Announcement date: October 23, 2023**
+
+**Estimated date for change: November 23, 2023**
As part of our quality improvement process, the following security alerts are set to be deprecated:
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
This article describes the HPE Edgeline EL300 appliance for OT sensors or on-pre
| Appliance characteristic |Details | |||
-|**Hardware profile** | L500 |
+|**Hardware profile** | L100 |
|**Performance** |Max bandwidth: 100 Mbps<br>Max devices: 800 | |**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45| |**Status** | Supported, Not available pre-configured|
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Title: Get Azure recommendations for your SQL Server migration description: Discover how to utilize the Azure SQL Migration extension in Azure Data Studio for obtaining Azure recommendations while migrating SQL Server databases to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database.--++ Last updated 05/09/2022
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
Title: Create instance of DMS (Bicep) description: Learn how to create Database Migration Service by using Bicep.--++ Last updated 03/21/2022
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
Title: Create instance of DMS (Azure Resource Manager template) description: Learn how to create Database Migration Service by using Azure Resource Manager template (ARM template).--++ Last updated 06/29/2020
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
Title: Migrate SSIS packages to SQL Managed Instance description: Learn how to migrate SQL Server Integration Services (SSIS) packages and projects to an Azure SQL Managed Instance using the Azure Database Migration Service or the Data Migration Assistant.--++ Last updated 02/20/2020
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages.md
Title: Redeploy SSIS packages to SQL single database description: Learn how to migrate or redeploy SQL Server Integration Services packages and projects to Azure SQL Database single database using the Azure Database Migration Service and Data Migration Assistant.--++ Last updated 02/20/2020
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
Title: Monitor migration activity - Azure Database Migration Service description: Learn to use the Azure Database Migration Service to monitor migration activity.--++ Last updated 02/20/2020
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance offline" description: Learn to offline migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.--++ Last updated 12/16/2020
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
Title: "PowerShell: Migrate SQL Server to SQL Managed Instance online" description: Learn to online migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.--++ Last updated 12/16/2020
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
Title: "PowerShell: Migrate SQL Server to SQL Database" description: Learn to migrate a database from SQL Server to Azure SQL Database by using Azure PowerShell with the Azure Database Migration Service.--++ Last updated 02/20/2020
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Title: Known issues and limitations with online migrations to Azure SQL Managed Instance description: Learn about known issues/migration limitations associated with online migrations to Azure SQL Managed Instance.--++ Last updated 02/20/2020
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Title: "Known issues, limitations, and troubleshooting" description: Known issues, limitations and troubleshooting guide for Azure SQL Migration extension for Azure Data Studio--++ Last updated 04/21/2023
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
Title: "Known issues: Migrate from MongoDB to Azure Cosmos DB" description: Learn about known issues and migration limitations with migrations from MongoDB to Azure Cosmos DB using the Azure Database Migration Service.--++ Last updated 05/18/2022
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
Title: "PowerShell: Run offline migration from MySQL database to Azure Database for MySQL using DMS" description: Learn to migrate an on-premises MySQL database to Azure Database for MySQL by using Azure Database Migration Service through PowerShell script.--++ Last updated 04/11/2021
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Title: Migrate databases at scale using Azure PowerShell / CLI (Preview) description: Learn how to use Azure PowerShell or CLI to migrate databases at scale with the Azure SQL migration extension in Azure Data Studio--++ Last updated 04/26/2022
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Title: Migrate databases by using the Azure SQL Migration extension for Azure Data Studio description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service.--++ Last updated 10/10/2023
dms Resource Custom Roles Sql Database Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-database-ads.md
Title: "Custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio" description: Learn how to use custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio.--++ Last updated 09/28/2022
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations" description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance online migrations.--++ Last updated 02/08/2021
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
Title: Supported database migration scenarios description: Learn which migration scenarios are currently supported for Azure Database Migration Service and their availability status.--++ Last updated 04/27/2022
dms Tutorial Login Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-login-migration-ads.md
Title: "Tutorial: Migrate SQL Server logins (preview) to Azure SQL in Azure Data Studio" description: Learn how to migrate on-premises SQL Server logins (preview) to Azure SQL by using Azure Data Studio and Azure Database Migration Service.--++ Last updated 10/10/2023
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
Title: "Tutorial: Migrate MongoDB online to Azure Cosmos DB for MongoDB" description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB online by using Azure Database Migration Service.--++ Last updated 09/21/2021
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
Title: "Tutorial: Migrate MongoDB offline to Azure Cosmos DB for MongoDB" description: Migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB offline via Azure Database Migration Service.--++ Last updated 09/21/2021
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio" description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance offline by using Azure Data Studio and Azure Database Migration Service.--++ Last updated 06/07/2023
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online by using Azure Data Studio" description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance only by using Azure Data Studio and Azure Database Migration Service.--++ Last updated 06/07/2023
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance" description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic)--++ Last updated 06/07/2023
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio" description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines offline by using Azure Data Studio and Azure Database Migration Service.--++ Last updated 06/07/2023
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio" description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines online by using Azure Data Studio and Azure Database Migration Service.--++ Last updated 06/07/2023
dms Tutorial Transparent Data Encryption Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md
Title: "Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio" description: Learn how to migrate on-premises SQL Server TDE-enabled databases (preview) to Azure SQL by using Azure Data Studio and Azure Database Migration Service.--++ Last updated 10/10/2023
event-grid Mqtt Client Azure Ad Token And Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-azure-ad-token-and-rbac.md
Title: Microsoft Entra JWT authentication and RBAC authorization for clients with Microsoft Entra identity description: Describes JWT authentication and RBAC roles to authorize clients with Microsoft Entra identity to publish or subscribe MQTT messages Previously updated : 8/11/2023 Last updated : 10/24/2023 # Microsoft Entra JWT authentication and Azure RBAC authorization to publish or subscribe MQTT messages+ You can authenticate MQTT clients with Microsoft Entra JWT to connect to Event Grid namespace. You can use Azure role-based access control (Azure RBAC) to enable MQTT clients, with Microsoft Entra identity, to publish or subscribe access to specific topic spaces. + > [!IMPORTANT]
-> This feature is supported only when using MQTT v5
+> This feature is supported only when using the MQTT v5 protocol version.
## Prerequisites - You need an Event Grid namespace with MQTT enabled. Learn about [creating Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace)-- Review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal)- <a name='authentication-using-azure-ad-jwt'></a>
Authenticate Reason Code with value 25 signifies reauthentication.
> Audience: "aud" claim must be set to "https://eventgrid.azure.net/". ## Authorization to grant access permissions
-A client using Microsoft Entra ID based JWT authentication needs to be authorized to communicate with the Event Grid namespace. You can create custom roles to enable the client to communicate with Event Grid instances in your resource group, and then assign the roles to the client. You can use following two data actions to provide publish or subscribe permissions, to clients with Microsoft Entra identities, on specific topic spaces.
+A client using Microsoft Entra ID based JWT authentication needs to be authorized to communicate with the Event Grid namespace. You can assign the following two built-in roles to provide publish or subscribe permissions to clients with Microsoft Entra identities.
-**Topic spaces publish** data action
-Microsoft.EventGrid/topicSpaces/publish/action
+- Use **EventGrid TopicSpaces Publisher** role to provide MQTT message publisher access
+- Use **EventGrid TopicSpaces Subscriber** role to provide MQTT message subscriber access
-**Topic spaces subscribe** data action
-Microsoft.EventGrid/topicSpaces/subscribe/action
+You can use these roles to provide permissions at subscription, resource group, Event Grid namespace, or Event Grid topicspace scope.
+
+## Assigning the publisher role to your Microsoft Entra identity at topicspace scope
-> [!NOTE]
-> Currently, we recommend using custom roles with the actions provided.
-
-### Custom roles
-
-You can create custom roles using the publish and subscribe actions.
-
-The following are sample role definitions that allow you to publish and subscribe to MQTT messages. These custom roles give permissions at topic space scope. You can also create roles to provide permissions at subscription, resource group scope.
-
-**EventGridMQTTPublisherRole.json**: MQTT messages publish operation.
-
-```json
-{
- "roleName": "Event Grid namespace MQTT publisher",
- "description": "Event Grid namespace MQTT message publisher role",
- "assignableScopes": [
- "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
- ],
- "permissions": [
- {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.EventGrid/topicSpaces/publish/action"
- ],
- "notDataActions": []
- }
- ]
-}
-```
-
-**EventGridMQTTSubscriberRole.json**: MQTT messages subscribe operation.
-
-```json
-{
- "roleName": "Event Grid namespace MQTT subscriber",
- "description": "Event Grid namespace MQTT message subscriber role",
- "assignableScopes": [
- "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
- ],
- "permissions": [
- {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.EventGrid/topicSpaces/subscribe/action"
- ],
- "notDataActions": []
- }
- ]
-}
-```
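If you save these sample definitions to local files, one way to create the roles is with the Azure CLI. This is a sketch that assumes the file names used above; depending on your CLI version, you may need to adapt the sample JSON to the flat property format that `az role definition create` expects:

```azurecli-interactive
# Create the custom roles from the sample definition files above.
az role definition create --role-definition @EventGridMQTTPublisherRole.json
az role definition create --role-definition @EventGridMQTTSubscriberRole.json
```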
-
-## Create custom roles
-1. Navigate to topic spaces page in your Event Grid namespace
-1. Select the topic space for which the custom RBAC role needs to be created
-1. Navigate to the Access control (IAM) page within the topic space
-1. In the Roles tab, right select any of the roles to clone a new custom role. Provide the custom role name.
-1. Switch the Baseline permissions to **Start from scratch**
-1. On the Permissions tab, select **Add permissions**
-1. In the selection page, find and select Microsoft Event Grid
- :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" alt-text="Screenshot showing the Microsoft Event Grid option to find the permissions.":::
-1. Navigate to Data Actions
-1. Select **Topic spaces publish** data action and select **Add**
- :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" alt-text="Screenshot showing the data action selection.":::
-1. Select Next to see the topic space in the Assignable scopes tab. You can add other assignable scopes if needed.
-1. Select **Create** in Review + create tab to create the custom role.
-1. Once the custom role is created, you can assign the role to an identity to provide the publish permission on the topic space. You can learn how to assign roles [here](/azure/role-based-access-control/role-assignments-portal).
-
-<a name='assign-the-custom-role-to-your-azure-ad-identity'></a>
-
-## Assign the custom role to your Microsoft Entra identity
1. In the Azure portal, navigate to your Event Grid namespace
-1. Navigate to the topic space to which you want to authorize access.
-1. Go to the Access control (IAM) page of the topic space
+1. Navigate to the topicspace to which you want to authorize access.
+1. Go to the Access control (IAM) page of the topicspace
1. Select the **Role assignments** tab to view the role assignments at this scope. 1. Select **+ Add** and Add role assignment.
-1. On the Role tab, select the role that you created in the previous step.
-1. On the Members tab, select User, group, or service principal to assign the selected role to one or more service principals (applications).
+1. On the Role tab, select the "EventGrid TopicSpaces Publisher" role.
+1. On the Members tab, for **Assign access to**, select User, group, or service principal option to assign the selected role to one or more service principals (applications).
- Users and groups work when user/group belong to fewer than 200 groups.
-1. Select **Select members**.
+1. Select **+ Select members**.
1. Find and select the users, groups, or service principals.
+1. Select **Next**.
1. Select **Review + assign** on the Review + assign tab. > [!NOTE]
-> You can follow similar steps to create and assign a custom Event Grid MQTT subscriber permission to a topic space.
+> You can follow similar steps to assign the built-in EventGrid TopicSpaces Subscriber role at topicspace scope.
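If you prefer to script the assignment, a minimal Azure CLI sketch of the same operation (the subscription ID, resource group, namespace, topicspace, and principal ID are placeholders) could look like this:

```azurecli-interactive
# Assign the built-in publisher role to a service principal at topicspace scope.
az role assignment create \
    --role "EventGrid TopicSpaces Publisher" \
    --assignee <service-principal-object-id> \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/namespaces/<namespace>/topicSpaces/<topicspace>"
```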
## Next steps - See [Publish and subscribe to MQTT message using Event Grid](mqtt-publish-and-subscribe-portal.md)
The following are sample role definitions that allow you to publish and subscrib
- To learn more about Azure Identity client library, you can refer to [using Azure Identity client library](/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-the-azure-identity-client-library) - To learn more about implementing an interface for credentials that can provide a token, you can refer to [TokenCredential Interface](/java/api/com.azure.core.credential.tokencredential) - To learn more about how to authenticate using Azure Identity, you can refer to [examples](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples)
+- If you prefer to use custom roles, you can review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal)
event-grid Mqtt Routing Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-filtering.md
If you send a non-JSON payload that is still UTF-8, it will be serialized as a J
You can use the following filter to filter all the messages that include the word ΓÇ£ContosoΓÇ¥: ```azurecli-interactive "advancedFilters": [{
- "operatorType": "`StringContains` ",
+ "operatorType": "StringContains",
"key": "data",
- "value": ΓÇ£ContosoΓÇ¥
+ "value": "Contoso"
}] ```
You can use the following filter to filter all the messages coming from your cli
```azurecli-interactive "advancedFilters": [{
- operatorType": "`StringContains` ",
- "key": "`clienttype`",
- "value": “sensor”
+ "operatorType": "StringContains",
+ "key": "clienttype",
+ "value": "sensor"
}] ```
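As an illustration, a comparable `StringContains` advanced filter can be attached when you create an event subscription with the Azure CLI. This is a hedged sketch against a custom topic; the resource ID and endpoint are placeholders, and your routing configuration may set the filter elsewhere:

```azurecli-interactive
# Create an event subscription that keeps only events whose data contains "Contoso".
az eventgrid event-subscription create \
    --name contoso-filtered-sub \
    --source-resource-id <event-grid-topic-resource-id> \
    --endpoint <webhook-endpoint-url> \
    --advanced-filter data StringContains Contoso
```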
expressroute Site To Site Vpn Over Microsoft Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/site-to-site-vpn-over-microsoft-peering.md
Total number of prefixes 2
* [Configure Network Performance Monitor for ExpressRoute](how-to-npm.md)
-* [Add a site-to-site connection to a VNet with an existing VPN gateway connection](../vpn-gateway/vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)
+* [Add a site-to-site connection to a VNet with an existing VPN gateway connection](../vpn-gateway/add-remove-site-to-site-connections.md)
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
Previously updated : 02/10/2022 Last updated : 10/25/2023 # Scale SNAT ports with Azure NAT Gateway
-Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale set instance (Minimum of 2 instances), and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 1,248,000 available SNAT ports with this configuration. For example, when you use it to protect large [Azure Virtual Desktop deployments](./protect-azure-virtual-desktop.md) that integrate with Microsoft 365 Apps.
+Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale set instance (minimum of two instances), and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 1,248,000 SNAT ports (2,496 ports x 250 IP addresses x 2 instances) available with this configuration, for example, when you use it to protect large [Azure Virtual Desktop deployments](./protect-azure-virtual-desktop.md) that integrate with Microsoft 365 Apps.
One of the challenges with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes.
When a NAT gateway resource is associated with an Azure Firewall subnet, all out
There's no double NAT with this architecture. Azure Firewall instances send the traffic to the NAT gateway using their private IP address rather than the Azure Firewall public IP address. A CLI sketch of the association follows the note below. > [!NOTE]
-> Deploying NAT gateway with a [zone redundant firewall](deploy-availability-zone-powershell.md) is not recommended deployment option, as the NAT gateway does not support zonal deployment at this time. In order to use NAT gateway with Azure Firewall, a zonal Firewall deployment is required.
+> Deploying NAT gateway with a [zone redundant firewall](deploy-availability-zone-powershell.md) is not a recommended deployment option, as the NAT gateway does not support zone-redundant deployment at this time. In order to use NAT gateway with Azure Firewall, a zonal Firewall deployment is required.
> > In addition, Azure NAT Gateway integration is not currently supported in secured virtual hub network (vWAN) architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture refer to the [NAT gateway and Azure Firewall integration tutorial](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
firewall Policy Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-analytics.md
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
1. Select **Policy analytics** in the table of contents. 2. Next, select **Configure Workspaces**. 3. In the pane that opens, select the **Enable Policy Analytics** checkbox.
-4. Next, choose a log analytics workspace. The log analytics workspace should be the same as the Firewall attached to the policy.
+4. Next, choose a log analytics workspace. The log analytics workspace should be the same workspace configured in the firewall Diagnostic settings.
5. Select **Save** after you choose the log analytics workspace. > [!TIP]
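Because policy analytics reads from the workspace configured in the firewall's Diagnostic settings, you may need to wire those up first. A hedged Azure CLI sketch (resource names are placeholders, and the `categoryGroup` syntax assumes a recent CLI version):

```azurecli-interactive
# Send firewall resource logs to the Log Analytics workspace that
# policy analytics reads from.
fwId=$(az network firewall show --resource-group myResourceGroup --name myFirewall --query id -o tsv)
wsId=$(az monitor log-analytics workspace show --resource-group myResourceGroup --workspace-name myWorkspace --query id -o tsv)
az monitor diagnostic-settings create --name fw-to-workspace \
    --resource $fwId --workspace $wsId \
    --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```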
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
This table lists certain HDInsight 4.0 cluster types that have retired or will b
| HDInsight 4.0 Kafka | 1.1 | Dec 31, 2020 | Dec 31, 2020 | | HDInsight 4.0 Kafka | 2.1.0 | Sep 30, 2022 | Oct 1, 2022 |
-## Spark versions supported in Azure HDInsight
-
-Apache Spark versions supported in Azure HDIinsight
-
-|Apache Spark version on HDInsight|Release date|Release stage|End of life announcement date|End of standard support|End of basic support|
-|--|--|--|--|--|--|
-|2.4|July 8, 2019|End of Life Announced (EOLA)| Feb10,2023| Aug 10,2023|Feb 10,2024|
-|3.1|March 11,2022|GA |-|-|-|
-|3.3|To be announced for Public Preview|-|-|-|-|
## Apache Spark 2.4 to Spark 3.x Migration Guides
hdinsight Hdinsight 5X Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-5x-component-versioning.md
Title: Open-source components and versions - Azure HDInsight 5.x
description: Learn about the open-source components and versions in Azure HDInsight 5.x. Previously updated : 08/29/2023 Last updated : 10/26/2023 # HDInsight 5.x component versions In this article, you learn about the open-source components and their versions in Azure HDInsight 5.x.
-## Preview
-
-On February 27, 2023, we started rolling out a new version of HDInsight: version 5.1. This version is backward compatible with HDInsight 4.0. and 5.0. All new open-source releases will be added as incremental releases on HDInsight 5.1.
-
-All upgraded cluster shapes are supported as part of HDInsight 5.1.
- ## Open-source components available with HDInsight 5.x The following table lists the versions of open-source components that are associated with HDInsight 5.x. | Component | HDInsight 5.1 |HDInsight 5.0| |||-|
-| Apache Spark | 3.3.1 ** | 3.1.3 |
-| Apache Hive | 3.1.2 ** | 3.1.2 |
-| Apache Kafka | 3.2.0 ** | 2.4.1 |
-| Apache Hadoop | 3.3.4 ** | 3.1.1 |
-| Apache Tez | 0.9.1 ** | 0.9.1 |
-| Apache Ranger | 2.3.0 ** | 1.1.0 |
-| Apache HBase | 2.4.11 ** | 2.1.6 |
-| Apache Oozie | 5.2.1 ** | 4.3.1 |
-| Apache ZooKeeper | 3.6.3 ** | 3.4.6 |
-| Apache Livy | 0.5. ** | 0.5 |
-| Apache Ambari | 2.7.3 ** | 2.7.3 |
-| Apache Zeppelin | 0.10.1 ** | 0.8.0 |
-| Apache Phoenix | 5.1.2 ** | - |
-
-** Preview
+| Apache Spark | 3.3.1 | 3.1.3 |
+| Apache Hive | 3.1.2 | 3.1.2 |
+| Apache Kafka | 3.2.0 | 2.4.1 |
+| Apache Hadoop | 3.3.4 | 3.1.1 |
+| Apache Tez | 0.9.1 | 0.9.1 |
+| Apache Ranger | 2.3.0 | 1.1.0 |
+| Apache HBase | 2.4.11 | 2.1.6 |
+| Apache Oozie | 5.2.1 | 4.3.1 |
+| Apache ZooKeeper | 3.6.3 | 3.4.6 |
+| Apache Livy | 0.5 | 0.5 |
+| Apache Ambari | 2.7.3 | 2.7.3 |
+| Apache Zeppelin | 0.10.1 | 0.8.0 |
+| Apache Phoenix | 5.1.2 | - |
> [!NOTE] > We have discontinued Sqoop and Pig add-ons from HDInsight 5.1 version.
-### Spark versions supported in Azure HDInsight
-
-Azure HDInsight supports the following Apache Spark versions.
-
-|Apache Spark version on HDInsight|Release date|Release stage|End-of-life announcement date|End of standard support|End of basic support|
-|--|--|--|--|--|--|
-|2.4|July 8, 2019|End of life announced (EOLA)| February 10, 2023| August 10, 2023|February 10, 2024|
-|3.1|March 11, 2022|General availability |-|-|-|
-|3.3|Available for preview|-|-|-|-|
- ### Guide for migrating from Apache Spark 2.4 to Spark 3.x To learn how to migrate from Spark 2.4 to Spark 3.x, see the [migration guide on the Spark website](https://spark.apache.org/docs/latest/migration-guide.html).
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
Title: Open-source components and versions - Azure HDInsight
description: Learn about the open-source components and versions in Azure HDInsight. Previously updated : 07/27/2023 Last updated : 10/25/2023 # Azure HDInsight versions
This table lists the versions of HDInsight that are available in the Azure porta
| [HDInsight 5.1](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |Feb 27, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes | | [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced | Not announced |Yes |
-**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You may not be able to create clusters from the Azure portal.
+**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You might not be able to create clusters from the Azure portal.
**Retirement** means that existing clusters of an HDInsight version continue to run as is. You can't create new clusters of this version through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, aren't guaranteed to work after the retirement date. Support isn't available for retired versions.
+### Spark versions supported in Azure HDInsight
+
+Azure HDInsight supports the following Apache Spark versions.
+
+| HDInsight versions | Apache Spark version on HDInsight | Release date | Release stage |End-of-life announcement date|End of standard support|End of basic support|
+| -- | -- |--|--|--|--|--|
+| 4.0 | 2.4 | July 8, 2019 | End of life announced (EOLA)| February 10, 2023| August 10, 2023 | February 10, 2024 |
+| 5.0 | 3.1 | March 11, 2022 | General availability |-|-|-|
+| 5.1 | 3.3 | October 26, 2023 | General availability |-|-|-|
+ ## Support options for HDInsight versions Support is defined as a time period during which an HDInsight version is supported by Microsoft Customer Service and Support. HDInsight offers two types of support:
Microsoft doesn't encourage creating analytics pipelines or solutions on cluster
For extra release notes on the latest versions of HDInsight, see [HDInsight release notes](hdinsight-release-notes.md). ## Versioning considerations-- Once a cluster deployed with an image, that cluster can't automatically upgrade to newer image version. When you create new clusters, most recent image version deployed.
+- Once a cluster is deployed with an image, that cluster can't automatically upgrade to a newer image version. When you create new clusters, the most recent image version is deployed.
- Customers should test and validate that applications run properly when using new HDInsight version. - HDInsight reserves the right to change the default version without prior notice. If you have a version dependency, specify the HDInsight version when you create your clusters.-- HDInsight may retire an OSS component version before retiring the HDInsight version.
+- HDInsight might retire an OSS component version before retiring the HDInsight version.
## Next steps
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-ver
![Icon showing end of support with text.](media/hdinsight-release-notes/new-icon-for-end-of-support.png)
-End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md#spark-versions-supported-in-azure-hdinsight)
+End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md)
## What's next
hdinsight Selective Logging Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis.md
For instructions on how to create an HDInsight cluster, see [Get started with Az
## Enable or disable logs by using a script action for multiple tables and log types
-1. Go to **Script actions** in your cluster and select **Submit now** to start the process of creating a script action.
+1. Go to **Script actions** in your cluster and select **Submit new** to start the process of creating a script action.
:::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-submit-script-action.png" alt-text="Screenshot that shows the button for starting the process of creating a script action.":::
healthcare-apis Bulk Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/bulk-delete-operation.md
+
+ Title: Bulk-delete operation for Azure API for FHIR
+description: This article describes the bulk-delete operation for Azure API for FHIR.
++++ Last updated : 10/22/2022+++
+# Bulk Delete operation
++
+## Next steps
+
+In this article, you learned how to bulk delete resources in the FHIR service. For information about supported FHIR features, see
+
+>[!div class="nextstepaction"]
+>[Supported FHIR features](fhir-features-supported.md)
+
+>[!div class="nextstepaction"]
+>[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-bulk-delete.md
+
+ Title: Bulk-delete operation for Azure Health Data Services FHIR service.
+description: This article describes the bulk-delete operation for the AHDS FHIR service.
++++ Last updated : 10/22/2022+++
+# Bulk Delete
++
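As an illustrative sketch only: the service URL format and the `$bulk-delete` endpoint shape below are assumptions drawn from FHIR service conventions, and the resource type is a placeholder.

```azurecli-interactive
# Acquire a token for the FHIR service, then request an asynchronous bulk
# delete of Patient resources. Poll the Content-Location header returned
# with the 202 response to track progress.
fhirUrl="https://<workspace>-<fhirservice>.fhir.azurehealthcareapis.com"
token=$(az account get-access-token --resource $fhirUrl --query accessToken -o tsv)
curl -X DELETE "$fhirUrl/Patient/\$bulk-delete" -H "Authorization: Bearer $token"
```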
+## Next steps
+
+In this article, you learned how to bulk delete resources in the FHIR service. For information about supported FHIR features, see
+
+>[!div class="nextstepaction"]
+>[Supported FHIR features](fhir-features-supported.md)
+
+>[!div class="nextstepaction"]
+>[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
For details on Incremental Import, visit [Import Documentation](./../healthcare-
**Batch-Bundle parallelization capability available in Public Preview** Batch bundles are executed serially in FHIR service by default. To improve throughput with bundle calls, we're enabling parallel processing of batch bundles. For details, visit [Batch Bundle Parallelization](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
-> [!IMPORTANT]
-> Bundle parallel processing is currently in public preview. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review Supplemental Terms of Use for Microsoft Azure Previews
+
+Batch-bundle parallelization capability is in public preview. Review the disclaimer for more details.
**Decimal value precision in FHIR service is updated per FHIR specification**
key-vault Rbac Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-migration.md
Access policy predefined permission templates:
| Azure Information BYOK | Keys: get, decrypt, sign | N/A<br>Custom role required| > [!NOTE]
-> Azure App Service certificate configuration through Azure Portal does not support Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignemnts for 'Microsoft Azure App Service' global indentity.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, or ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments for the 'Microsoft Azure App Service' global identity.
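For example, a sketch of those role assignments with the Azure CLI (the vault name and the App Service principal ID are placeholders you'd look up in your tenant):

```azurecli-interactive
# Grant the App Service global identity access to read certificates stored as secrets.
kvId=$(az keyvault show --name <vault-name> --query id -o tsv)
az role assignment create --role "Key Vault Secrets User" --assignee <app-service-principal-id> --scope $kvId
az role assignment create --role "Key Vault Reader" --assignee <app-service-principal-id> --scope $kvId
```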
## Assignment scopes mapping
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
A public IP assigned to a VM is a 1:1 relationship (rather than 1: many) and imp
In Azure, virtual machines created in a virtual network without explicit outbound connectivity defined are assigned a default outbound public IP address. This IP address enables outbound connectivity from the resources to the Internet. This access is referred to as [default outbound access](../virtual-network/ip-services/default-outbound-access.md). This method of access is **not recommended** as it is insecure and the IP addresses are subject to change. >[!Important]
->On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). It is reccomended to use one the explict forms of connectivity as shown in options 1-3 above.
+>On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). It is recommended to use one of the explicit forms of connectivity as shown in options 1-3 above.
### What are SNAT ports?
load-balancer Tutorial Multi Availability Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-multi-availability-sets-portal.md
Title: 'Tutorial: Create a load balancer with more than one availability set in the backend pool - Azure portal'
-description: In this tutorial, deploy an Azure Load Balancer with more than one availability set in the backend pool.
+description: Learn to deploy Azure Load Balancer with multiple availability sets and virtual machines in a backend pool using the Azure portal.
Previously updated : 07/05/2023 Last updated : 10/24/2023
Load Balancer supports more than one availability set with virtual machines in t
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a virtual network and a network security group
> * Create a NAT gateway for outbound connectivity
+> * Create a virtual network and a network security group
> * Create a standard SKU Azure Load Balancer > * Create four virtual machines and two availability sets > * Add virtual machines in availability sets to backend pool of load balancer
In this tutorial, you learn how to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create a virtual network
-
-In this section, you'll create a virtual network for the load balancer and the other resources used in the tutorial.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box at the top of the portal, enter **Virtual network**.
-
-1. In the search results, select **Virtual networks**.
-
-1. Select **+ Create**.
-
-1. In the **Basics** tab of the **Create virtual network**, enter, or select the following information:
-
- | Setting | Value |
- | - | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. |
- | **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **(US) West US 2**. |
-
-1. Select the **IP addresses** tab, or the **Next: Security** and **Next: IP Addresses** buttons at the bottom of the page.
-
-1. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.0.0.0** and choose **/16 (65,536 addresses)** |
-
-1. Select **default** under **Subnets**.
-1. Under **Subnet details**, enter **myBackendSubnet** for **Name**.
-1. In **Add subnet**, enter this information:
-
- | Setting | Value |
- | -- | - |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-1. Select **Save**.
-1. Select the **Review + create** tab, or the blue **Review + create** button at the bottom of the page.
-1. Select **Create**.
-
-## Create a network security group
-
-In this section, you'll create a network security group for the virtual machines in the backend pool of the load balancer. The NSG will allow inbound traffic on port 80.
-
-1. In the search box at the top of the portal, enter **Network security group**.
-1. Select **Network security groups** in the search results.
-1. Select **+ Create** or **Create network security group** button.
-1. On the **Basics** tab, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myNSG**. |
- | Region | Select **(US) West US 2**. |
-
-1. Select *Review + create* tab, or select the blue **Review + create** button at the bottom of the page.
-1. Select **Create**.
-1. When deployment is complete, select **Go to resource**.
-1. In the **Settings** section of the **myNSG** page, select **Inbound security rules**.
-1. Select **+ Add**.
-1. In the **Add inbound security rule** window, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Source | Select **Any**. |
- | Source port ranges | Enter **\***. |
- | Destination | Select **Any**. |
- | Service | Select **HTTP**. |
- | Action | Select **Allow**. |
- | Priority | Enter **100**. |
- | Name | Enter **allowHTTPrule**. |
-1. Select **Add**.
-## Create NAT gateway
-In this section, you'll create a NAT gateway for outbound connectivity of the virtual machines.
-
-1. In the search box at the top of the portal, enter **NAT gateway**.
-1. Select **NAT gateway** in the search results.
-1. Select **+ Create** or **Create NAT Gateway** button.
-1. In the **Basics** tab of **Create network address translation (NAT) gateway**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Region | Select **(US) West US 2**. |
- | Availability zone | Select **No Zone**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-1. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
-1. Select **Create a new public IP address** next to **Public IP addresses** in the **Outbound IP** tab.
-1. Enter **myNATgatewayIP** in **Name**.
-1. Select **OK**.
-1. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
-1. Select **myVNet** in the pull-down menu under **Virtual network**.
-1. Select the check box next to **myBackendSubnet**.
-1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-1. Select **Create**.
-
-## Create load balancer
-
-In this section, you'll create a load balancer for the virtual machines.
-
-1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-1. In the **Load balancer** page, select **Create** or the **Create load balancer** button.
-1. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **(US) West US 2**. |
- | SKU | Leave the default **Standard**. |
- | Type | Select **Public**. |
- | Tier | Leave the default **Regional**. |
-
-1. Select the **Frontend IP configuration** tab, or select the **Next: Frontend IP configuration** button at the bottom of the page.
-1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-1. Enter **myLoadBalancerFrontEnd** in **Name**.
-1. Select **IPv4** or **IPv6** for the **IP version**.
-
- > [!NOTE]
- > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-1. Select **IP address** for the **IP type**.
-
- > [!NOTE]
- > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md).
-1. Select **Create new** in **Public IP address**.
-1. In **Add a public IP address**, enter **myPublicIP-lb** for **Name**.
-1. Select **Zone-redundant** in **Availability zone**.
-
- > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
-
-1. Select **OK**.
-1. Select **Add**.
-1. Select the **Next: Backend pools>** button at the bottom of the page.
-1. In the **Backend pools** tab, select **+ Add a backend pool**.
-1. Enter **myBackendPool** for **Name** in **Add backend pool**.
-1. Select **myVNet** in **Virtual network**.
-1. Select **IP Address** for **Backend Pool Configuration** and select **Save**.
-1. Select the **Inbound rules** tab, or select the **Next: Inbound rules** button at the bottom of the page.
-1. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-1. In **Add load balancing rule**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPRule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **myLoadBalancerFrontEnd**. |
- | Backend pool | Select **myBackendPool**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. |
- | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
- | Enable TCP reset | Select checkbox. |
- | Enable Floating IP | Select checkbox. |
- | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-
-1. Select **Save**.
-1. Select the blue **Review + create** button at the bottom of the page.
-1. Select **Create**.
- > [!NOTE]
- > In this example we created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
- > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
## Create virtual machines
-In this section, you'll create two availability groups with two virtual machines per group. These machines will be added to the backend pool of the load balancer during creation.
+In this section, you create two availability sets with two virtual machines per set. These machines are added to the backend pool of the load balancer during creation.
### Create first set of VMs
In this section, you'll create two availability groups with two virtual machines
| - | -- |
| **Project details** |  |
| Subscription | Select your subscription |
- | Resource group | Select **myResourceGroup**. |
+ | Resource group | Select **lb-resource-group**. |
| **Instance details** | |
- | Virtual machine name | Enter **myVM1**. |
- | Region | Select **(US) West US 2**. |
+ | Virtual machine name | Enter **lb-VM1**. |
+ | Region | Select **(US) East US**. |
| Availability options | Select **Availability set**. |
- | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet1** in **Name**. </br> Select **OK**. |
+ | Availability set | Select **Create new**. </br> Enter **lb-availability-set1** in **Name**. </br> Select **OK**. |
| Security type | Select **Trusted launch virtual machines**. |
| Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. |
| Azure Spot instance | Leave the default of unchecked. |
In this section, you'll create two availability groups with two virtual machines
| Setting | Value |
| - | -- |
| **Network interface** |  |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **myBackendSubnet**. |
+ | Virtual network | Select **lb-VNet**. |
+ | Subnet | Select **backend-subnet**. |
| Public IP | Select **None**. |
| NIC network security group | Select **Advanced**. |
| Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**. |
| **Load balancing** |  |
| Load-balancing options | Select **Azure load balancer**. |
- | Select a load balancer | Select **myLoadBalancer**. |
- | Select a backend pool | Select **myBackendPool**. |
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | Select a load balancer | Select **load-balancer**. |
+ | Select a backend pool | Select **lb-backend-pool**. |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **lb-NSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **lb-NSG-rule** </br> Select **Add** </br> Select **OK** |
1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
1. Select **Create**.
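If you prefer scripting this step, a rough Azure CLI equivalent for the first VM might look like the sketch below. It assumes the virtual network, subnet, NSG, and availability set shown above already exist (for example, created via `az vm availability-set create`), and it doesn't add the VM to the backend pool, which the portal flow above does during creation:

```azurecli
az vm create \
  --resource-group lb-resource-group \
  --name lb-VM1 \
  --image Win2022Datacenter \
  --admin-username azureuser \
  --admin-password "<password>" \
  --availability-set lb-availability-set1 \
  --vnet-name lb-VNet \
  --subnet backend-subnet \
  --nsg lb-NSG \
  --public-ip-address ""
```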
In this section, you'll create two availability groups with two virtual machines
| Setting | Value |
| - | -- |
- | Name | Enter **myVM2**. |
- | Availability set | Select **myAvailabilitySet1**. |
- | Virtual Network | Select **myVNet**. |
- | Subnet | Select **myBackendSubnet**. |
+ | Name | Enter **lb-VM2**. |
+ | Availability set | Select **lb-availability-set1**. |
+ | Virtual Network | Select **lb-VNet**. |
+ | Subnet | Select **backend-subnet**. |
| Public IP | Select **None**. |
| NIC network security group | Select **Advanced**. |
| Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**. |
| Load-balancing options | Select **Azure load balancer**. |
- | Select a load balancer | Select **myLoadBalancer**. |
- | Select a backend pool | Select **myBackendPool**. |
- | Configure network security group | Select **myNSG**. |
+ | Select a load balancer | Select **load-balancer**. |
+ | Select a backend pool | Select **lb-backend-pool**. |
+ | Configure network security group | Select **lb-NSG**. |
### Create second set of VMs
In this section, you'll create two availability groups with two virtual machines
| - | -- |
| **Project details** |  |
| Subscription | Select your subscription |
- | Resource group | Select **myResourceGroup**. |
+ | Resource group | Select **lb-resource-group**. |
| **Instance details** | |
- | Virtual machine name | Enter **myVM3**. |
- | Region | Select **(US) West US 2**. |
+ | Virtual machine name | Enter **lb-VM3**. |
+ | Region | Select **(US) East US**. |
| Availability options | Select **Availability set**. |
- | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet2** in **Name**. </br> Select **OK**. |
+ | Availability set | Select **Create new**. </br> Enter **lb-availability-set2** in **Name**. </br> Select **OK**. |
| Security type | Select **Trusted launch virtual machines**. |
| Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. |
| Azure Spot instance | Leave the default of unchecked. |
In this section, you'll create two availability groups with two virtual machines
| Setting | Value |
| - | -- |
| **Network interface** |  |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **myBackendSubnet**. |
+ | Virtual network | Select **lb-VNet**. |
+ | Subnet | Select **backend-subnet**. |
| Public IP | Select **None**. |
| NIC network security group | Select **Advanced**. |
| Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**. |
| **Load balancing** |  |
| Load-balancing options | Select **Azure load balancer**. |
- | Select a load balancer | Select **myLoadBalancer**. |
- | Select a backend pool | Select **myBackendPool**. |
- | Configure network security group | Select **myNSG**. |
+ | Select a load balancer | Select **load-balancer**. |
+ | Select a backend pool | Select **lb-backend-pool**. |
+ | Configure network security group | Select **lb-NSG**. |
6. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
In this section, you'll create two availability groups with two virtual machines
| Setting | Value |
| - | -- |
- | Name | Enter **myVM4**. |
- | Availability set | Select **myAvailabilitySet2**. |
- | Virtual Network | Select **myVM3**. |
+ | Name | Enter **lb-VM4**. |
+ | Availability set | Select **lb-availability-set2**. |
+ | Virtual Network | Select **lb-VNet**. |
| NIC network security group | Select **Advanced**. |
| Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**. |
| Load-balancing options | Select **Azure load balancer**. |
- | Select a load balancer | Select **myLoadBalancer**. |
- | Select a backend pool | Select **myBackendPool**. |
- | Configure network security group | Select **myNSG**. |
+ | Select a load balancer | Select **load-balancer**. |
+ | Select a backend pool | Select **lb-backend-pool**. |
+ | Configure network security group | Select **lb-NSG**. |
## Install IIS
-In this section, you'll use the Azure Bastion host you created previously to connect to the virtual machines and install IIS.
+In this section, you use the Azure Bastion host you created previously to connect to the virtual machines and install IIS.
1. In the search box at the top of the portal, enter **Virtual machine**.
1. Select **Virtual machines** in the search results.
-1. Select **myVM1**.
-1. Under **Operations** in the left-side menu, select **Run command > RunPowerShellScript**.
+1. Select **lb-VM1**.
+1. Under **Payload** in the left-side menu, select **Run command > RunPowerShellScript**.
1. In the PowerShell Script window, add the following commands to:

    * Install the IIS server
In this section, you'll use the Azure Bastion host you created previously to con
:::image type="content" source="media/tutorial-multi-availability-sets-portal/run-command-script.png" alt-text="Screenshot of Run Command Script window with PowerShell code and output.":::
-1. Repeat steps 1 through 8 for **myVM2**, **myVM3**, and **myVM4**.
+1. Repeat steps 1 through 8 for **lb-VM2**, **lb-VM3**, and **lb-VM4**.
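As an alternative to the portal's Run command blade, the same step can be scripted with the Azure CLI. A sketch, assuming the tutorial's resource names; the exact PowerShell contents of this tutorial's script may differ:

```azurecli
az vm run-command invoke \
  --resource-group lb-resource-group \
  --name lb-VM1 \
  --command-id RunPowerShellScript \
  --scripts 'Install-WindowsFeature -Name Web-Server -IncludeManagementTools' \
            'Set-Content -Path C:\inetpub\wwwroot\iisstart.htm -Value $env:computername'
```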
## Test the load balancer
-In this section, you'll discover the public IP address of the load balancer. You'll use the IP address to test the operation of the load balancer.
+In this section, you discover the public IP address of the load balancer. You use the IP address to test the operation of the load balancer.
1. In the search box at the top of the portal, enter **Public IP**.
1. Select **Public IP addresses** in the search results.
-1. Select **myPublicIP-lb**.
-1. Note the public IP address listed in **IP address** in the **Overview** page of **myPublicIP-lb**:
+1. Select **lb-Public-IP**.
+1. Note the public IP address listed in **IP address** in the **Overview** page of **lb-Public-IP**:
:::image type="content" source="./media/tutorial-multi-availability-sets-portal/find-public-ip.png" alt-text="Find the public IP address of the load balancer." border="true":::
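You can also retrieve the address from the Azure CLI instead of the portal; a quick sketch, assuming the resource names used in this tutorial:

```azurecli
# Print the frontend public IP of the load balancer
az network public-ip show \
  --resource-group lb-resource-group \
  --name lb-Public-IP \
  --query ipAddress \
  --output tsv

# Browse (or curl) that address; the IIS page shows which backend VM answered
curl http://<public-ip-address>
```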
the load balancer and the supporting resources with the following steps:
1. In the search box at the top of the portal, enter **Resource group**.
1. Select **Resource groups** in the search results.
-1. Select **myResourceGroup**.
-1. In the overview page of **myResourceGroup**, select **Delete resource group**.
-1.Select **Apply force delete for selected Virtual Machines and Virtual machine scale sets**.
-1. Enter **myResourceGroup** in **Enter resource group name to confirm deletion**.
+1. Select **lb-resource-group**.
+1. In the overview page of **lb-resource-group**, select **Delete resource group**.
+1. Select **Apply force delete for selected Virtual Machines and Virtual machine scale sets**.
+1. Enter **lb-resource-group** in **Enter resource group name to confirm deletion**.
1. Select **Delete**.

## Next steps
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
In this case, we want to execute a batch endpoint using a service principal alre
# [Azure CLI](#tab/cli)
-1. Create a secret to use for authentication as explained at [Option 32: Create a new client secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret).
+1. Create a secret to use for authentication as explained at [Option 3: Create a new client secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret).
1. To authenticate using a service principal, use the following command. For more details, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).

```azurecli
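# A minimal sketch; the placeholder values are assumptions, not values from this article
az login --service-principal --username "<app-id>" --password "<client-secret>" --tenant "<tenant-id>"
```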
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Azure Machine Learning managed online endpoints have limits described in the fol
| Number of deployments per subscription | 200 | Yes |
| Number of deployments per endpoint | 20 | Yes |
| Number of instances per deployment | 20 <sup>2</sup> | Yes |
-| Max request time-out at endpoint level | 90 seconds | - |
+| Max request time-out at endpoint level | 180 seconds | - |
| Total requests per second at endpoint level for all deployments | 500 <sup>3</sup> | Yes |
| Total connections per second at endpoint level for all deployments | 500 <sup>3</sup> | Yes |
| Total connections active at endpoint level for all deployments | 500 <sup>3</sup> | Yes |
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
You can choose to save the MLTable yaml file to a cloud storage, or you can also
```python
# save the data loading steps in an MLTable file to a cloud storage
# NOTE: the tbl object was defined in the previous snippet.
-tbl.save(save_path_dirc= "azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/titanic", collocated=True, show_progress=True, allow_copy_errors=False, overwrite=True)
+tbl.save(path="azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/titanic", colocated=True, show_progress=True, overwrite=True)
```

```python
tbl.save("./titanic")
```

> [!IMPORTANT]
-> - If collocated == True, then we will copy the data to the same folder with MLTable yaml file if they are not currently collocated, and we will use relative paths in MLTable yaml.
-> - If collocated == False, we will not move the data and we will use absolute paths for cloud data and use relative paths for local data.
-> - We don't support this parameter combination: data is in local, collocated == False, `save_path_dirc` is a cloud directory. Please upload your local data to cloud and use the cloud data paths for MLTable instead.
-> - Parameters `show_progress` (default as True), `allow_copy_errors` (default as False), `overwrite`(default as True) are optional.
+> - If colocated == True, then we copy the data to the same folder as the MLTable yaml file if they aren't currently colocated, and we use relative paths in the MLTable yaml.
+> - If colocated == False, we don't move the data; we use absolute paths for cloud data and relative paths for local data.
+> - We don't support this parameter combination: the data is local, colocated == False, and `path` targets a cloud directory. Please upload your local data to the cloud and use the cloud data paths for MLTable instead.
>
machine-learning Community Ecosystem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/community-ecosystem.md
Title: Prompt Flow community ecosystem (preview)
+ Title: Prompt flow ecosystem (preview)
-description: Introduction to the Prompt flow community ecosystem, which includes the SDK and VS Code extension.
+description: Introduction to the Prompt flow ecosystem, which includes the Prompt flow open source project, tutorials, SDK, CLI and VS Code extension.
Last updated 09/12/2023
-# Prompt Flow community ecosystem (preview)
+# Prompt flow ecosystem (preview)
-The Prompt Flow community ecosystem aims to provide a comprehensive set of tools and resources for developers who want to leverage the power of Prompt Flow to experimentally tune their prompts and develop their LLM-based application in a local environment. This article goes through the key components of the ecosystem, including the **Prompt Flow SDK** and the **VS Code extension**.
+The Prompt flow ecosystem aims to provide a comprehensive set of tutorials, tools, and resources for developers who want to leverage the power of Prompt flow to experimentally tune their prompts and develop their LLM-based applications in a purely local environment, without any dependency on Azure resources. This article provides an overview of the key components within the ecosystem, which include:
+ - **Prompt flow open source project** in GitHub.
+ - **Prompt flow SDK and CLI** for seamless flow execution and integration with CI/CD pipeline.
+ - **VS Code extension** for convenient flow authoring and development within a local environment.
> [!IMPORTANT]
> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

## Prompt flow SDK/CLI
-The Prompt Flow SDK/CLI empowers developers to use code manage credentials, initialize flows, develop flows, and execute batch testing and evaluation of prompt flows locally.
+The Prompt flow SDK/CLI empowers developers to use code to manage credentials, initialize flows, develop flows, and execute batch testing and evaluation of prompt flows locally.
It's designed for efficiency, allowing simultaneous trigger of large dataset-based flow tests and metric evaluations. Additionally, the SDK/CLI can be easily integrated into your CI/CD pipeline, automating the testing process.
-To get started with the Prompt Flow SDK, explore and follow the [SDK quick start notebook](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart.ipynb) in steps.
+To get started with the Prompt flow SDK, explore and follow the [SDK quick start notebook](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart.ipynb) step by step.
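As a rough sketch of that local workflow (the package names match the open-source project; the flow path is a placeholder, and exact CLI flags can differ between versions):

```bash
# Install the Prompt flow SDK/CLI from PyPI
pip install promptflow promptflow-tools

# Run a flow directory locally against its sample inputs
pf flow test --flow ./my-flow
```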
## VS Code extension
The ecosystem also provides a powerful VS Code extension designed for enabling y
:::image type="content" source="./media/community-ecosystem/prompt-flow-vs-code-extension-flatten.png" alt-text="Screenshot of the Prompt flow extension in the VS Code showing the UI. "lightbox = "./media/community-ecosystem/prompt-flow-vs-code-extension-flatten.png":::
-To get started with the Prompt Flow VS Code extension, navigate to the extension marketplace to install and read the details tab.
+To get started with the Prompt flow VS Code extension, navigate to the extension marketplace to install the extension and read the details tab.
:::image type="content" source="./media/community-ecosystem/prompt-flow-vs-code-extension.png" alt-text="Screenshot of the Prompt flow extension in the VS Code marketplace." lightbox="./media/community-ecosystem/prompt-flow-vs-code-extension.png":::

## Transition to production in cloud
-After successful development and testing of your prompt flow within our community ecosystem, the subsequent step you're considering may involve transitioning to a production-grade LLM application. We recommend Azure Machine Learning for this phase to ensure security, efficiency, and scalability.
+After successful development and testing of your prompt flow within our community ecosystem, the subsequent step you're considering might involve transitioning to a production-grade LLM application. We recommend Azure Machine Learning for this phase to ensure security, efficiency, and scalability.
You can seamlessly shift your local flow to your Azure resource to leverage large-scale execution and management in the cloud. To achieve this, see [Integration with LLMOps](how-to-integrate-with-llm-app-devops.md#go-back-to-studio-ui-for-continuous-development).
The community ecosystem thrives on collaboration and support. Join the active co
For questions or feedback, you can [open a GitHub issue directly](https://github.com/microsoft/promptflow/issues/new) or reach out to pf-feedback@microsoft.com.

## Next steps
-The prompt flow community ecosystem empowers developers to build interactive and dynamic prompts with ease. By using the Prompt Flow SDK and the VS Code extension, you can create compelling user experiences and fine-tune your prompts in a local environment.
+The prompt flow community ecosystem empowers developers to build interactive and dynamic prompts with ease. By using the Prompt flow SDK and the VS Code extension, you can create compelling user experiences and fine-tune your prompts in a local environment.
- Join the [Prompt flow community on GitHub](https://github.com/microsoft/promptflow).
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
Previously updated : 01/24/2023 Last updated : 10/19/2023
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Default value |
| | - | -- | - |
-| `request_timeout_ms` | integer | The scoring timeout in milliseconds. Note that the maximum value allowed is `90000` milliseconds. See [Managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) for more. | `5000` |
+| `request_timeout_ms` | integer | The scoring timeout in milliseconds. Note that the maximum value allowed is `180000` milliseconds. See [Managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) for more. | `5000` |
| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> **Note:** If you're using [Azure Machine Learning Inference Server](how-to-inference-server-http.md) or [Azure Machine Learning Inference Images](concept-prebuilt-docker-images-inference.md), your model must be configured to handle concurrent requests. To do so, pass `WORKER_COUNT: <int>` as an environment variable. For more information about `WORKER_COUNT`, see [Azure Machine Learning Inference Server Parameters](how-to-inference-server-http.md#server-parameters) <br><br> **Note:** Set to the number of requests that your model can process concurrently on a single node. Setting this value higher than your model's actual concurrency can lead to higher latencies. Setting it too low may lead to underutilized nodes, and may also result in requests being rejected with a 429 HTTP status code, as the system opts to fail fast. For more information, see [Troubleshooting online endpoints: HTTP status codes](how-to-troubleshoot-online-endpoints.md#http-status-codes). | `1` |
| `max_queue_wait_ms` | integer | The maximum amount of time in milliseconds a request will stay in the queue. | `500` |
machine-learning Tutorial Enable Materialization Backfill Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-materialization-backfill-data.md
You can create a new notebook and execute the instructions in this tutorial step
3. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
4. Increase the session time-out (idle time) to avoid frequent prerequisite reruns.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=start-spark-session)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=start-spark-session)]
### Set up the root directory for the samples
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=root-dir)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=root-dir)]
1. Set up the CLI.
You can create a new notebook and execute the instructions in this tutorial step
1. Install the Azure Machine Learning extension.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)]
1. Authenticate.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=auth-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)]
1. Set the default subscription.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)]
You can create a new notebook and execute the instructions in this tutorial step
This is the current workspace. You'll run the tutorial notebook from this workspace.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=init-ws-crud-client)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-ws-crud-client)]
1. Initialize the feature store properties. Be sure to update the `featurestore_name` and `featurestore_location` values to reflect what you created in the first tutorial.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=init-fs-crud-client)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-crud-client)]
1. Initialize the feature store core SDK client.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=init-fs-core-sdk)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-core-sdk)]
1. Set up the offline materialization store.
You can create a new notebook and execute the instructions in this tutorial step
You can optionally override the default settings.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=setup-utility-fns)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=setup-utility-fns)]
# [Azure CLI](#tab/cli)
You can create a new notebook and execute the instructions in this tutorial step
The materialization store uses these values. You can optionally override the default settings.
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
1. Create storage containers.
The materialization store uses these values. You can optionally override the def
# [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=create-new-storage)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage)]
# [Azure CLI](#tab/cli)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=create-new-storage)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=create-new-storage-container)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage-container)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=set-container-arm-id-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-container-arm-id-cli)]
The materialization store uses these values. You can optionally override the def
# [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
# [Azure CLI](#tab/cli)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
The materialization store uses these values. You can optionally override the def
### Set the UAI values
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
### Set up a UAI The first option is to create a new managed identity.
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
The second option is to reuse an existing managed identity.
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
### Retrieve UAI properties Run this code sample in the SDK to retrieve the UAI properties.
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)]
The next CLI commands assign the first two roles to the UAI. In this example, th
# [Python SDK](#tab/python)
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
# [Azure CLI](#tab/cli)
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)]
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)]
Obtain your Microsoft Entra object ID value from the Azure portal, as described
To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
The following steps grant the Storage Blob Data Reader role access to your user account:
The following steps grant the Storage Blob Data Reader role access to your user
# [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=enable-offline-store)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-store)]
# [Azure CLI](#tab/cli) Inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=enable-offline-store)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-store)]
The following steps grant the Storage Blob Data Reader role access to your user
# [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
# [Azure CLI](#tab/cli)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
The following steps grant the Storage Blob Data Reader role access to your user
# [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
# [Azure CLI](#tab/cli)
The following steps grant the Storage Blob Data Reader role access to your user
> [!NOTE] > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two-year window.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=backfill-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=backfill-txns-fset)]
Next, print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method retrieved the training and inference data. It also uses the materialization store by default.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=sample-txns-fset-data)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=sample-txns-fset-data)]
## Clean up
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
This tutorial uses an Azure Machine Learning Spark notebook for development.
## Start the Spark session
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=start-spark-session)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=start-spark-session)]
## Set up the root directory for the samples
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=root-dir)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=root-dir)]
### [SDK track](#tab/SDK-track)
Not applicable.
1. Install the Azure Machine Learning extension.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
1. Authenticate.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)]
1. Set the default subscription.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)]
This tutorial doesn't need explicit installation of these resources, because the
1. Set feature store parameters, including name, location, and other values.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=fs-params)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=fs-params)]
1. Create the feature store. ### [SDK track](#tab/SDK-track)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs)]
### [SDK and CLI track](#tab/SDK-and-CLI-track)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)]
1. Initialize a feature store core SDK client for Azure Machine Learning. As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fs-core-sdk)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fs-core-sdk)]
## Prototype and develop a feature set
In the following steps, you build a feature set named `transactions` that has ro
This notebook uses sample data hosted in a publicly accessible blob container. It can be read into Spark only through a `wasbs` driver. When you create feature sets by using your own source data, host them in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=explore-txn-src-data)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=explore-txn-src-data)]
1. Locally develop the feature set.
In the following steps, you build a feature set named `transactions` that has ro
To learn more about the feature set and transformations, see [What is managed feature store?](./concept-what-is-managed-feature-store.md).
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=develop-txn-fset-locally)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=develop-txn-fset-locally)]
1. Export as a feature set specification.
In the following steps, you build a feature set named `transactions` that has ro
Persisting the feature set specification offers another benefit: the feature set specification can be source controlled.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=dump-transactions-fs-spec)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=dump-transactions-fs-spec)]
## Register a feature store entity
As a best practice, entities help enforce use of the same join key definition ac
In this code sample, the client is scoped at feature store level.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
1. Register the `account` entity with the feature store. Create an `account` entity that has the join key `accountID` of type `string`.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
### [SDK and CLI track](#tab/SDK-and-CLI-track)
As a best practice, entities help enforce use of the same join key definition ac
In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID` of type `string`.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
Use the following code to register a feature set asset with the feature store. Y
### [SDK track](#tab/SDK-track)
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
### [SDK and CLI track](#tab/SDK-and-CLI-track)
-[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
Feature store asset creation and updates can happen only through the SDK and CLI
Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Because you use it for training, it also has an appended target variable (**is_fraud**).
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=load-obs-data)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=load-obs-data)]
1. Get the registered feature set, and list its features.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=get-txn-fset)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=get-txn-fset)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=print-txn-fset-sample-values)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=print-txn-fset-sample-values)]
1. Select the features that become part of the training data. Then, use the feature store SDK to generate the training data itself.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=select-features-and-gen-training-data)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=select-features-and-gen-training-data)]
A point-in-time join appends the features to the training data.
operator-nexus Howto Cluster Runtime Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-runtime-upgrade.md
This how-to guide explains the steps for installing the required Azure CLI and e
## Prerequisites

1. The [Azure CLI][installation-instruction] must be installed.
-2. The `networkcloud` cli extension is required. If the `networkcloud` extension isn't installed, it can be installed following the steps listed [here](https://github.com/MicrosoftDocs/azure-docs-pr/blob/main/articles/operator-nexus/howto-install-cli-extensions.md).
+2. The `networkcloud` CLI extension is required. If the `networkcloud` extension isn't installed, it can be installed following the steps listed [here](https://github.com/MicrosoftDocs/azure-docs-pr/blob/main/articles/operator-nexus/howto-install-cli-extensions.md).
3. Access to the Azure portal for the target cluster to be upgraded. 4. You must be signed in to the same subscription as your target cluster via `az login`. 5. The target cluster must be in a running state, with all control plane nodes healthy and at least 80% of compute nodes in a running and healthy state.
This how-to guide explains the steps for installing the required Azure CLI and e
To find available upgradeable runtime versions, navigate to the target cluster in the Azure portal. In the cluster's overview pane, navigate to the ***Available upgrade versions*** tab. From the **available upgrade versions** tab, you can see the different cluster versions that are currently available to upgrade. The operator can select from the listed target runtime versions. Once selected, proceed to upgrade the cluster.
az networkcloud cluster update-version --cluster-name "clusterName" --target-clu
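The full command shape is sketched below; the `--target-cluster-version` parameter name and the version value are assumptions for illustration:

```azurecli
# Upgrade the cluster runtime to the selected target version (version value is illustrative)
az networkcloud cluster update-version \
    --cluster-name "clusterName" \
    --resource-group "resourceGroupName" \
    --target-cluster-version "3.0.0"
```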
The runtime upgrade is a long process. The upgrade first upgrades the management nodes and then upgrades the worker nodes sequentially, rack by rack. The upgrade is considered finished when 80% of worker nodes per rack and 100% of management nodes have been successfully upgraded.
-Workloads may be impacted while the worker nodes in a rack is in the process of being upgraded, however workloads in all other racks will not be impacted. Consideration of workload placement in light of this implementation design is encouraged.
+Workloads might be impacted while the worker nodes in a rack are in the process of being upgraded, however workloads in all other racks won't be impacted. Consideration of workload placement in light of this implementation design is encouraged.
Upgrading all the nodes takes multiple hours but can take longer if other processes, like firmware updates, are also part of the upgrade. Due to the length of the upgrade process, it's advised to check the Cluster's detail status periodically for the current state of the upgrade.
az networkcloud cluster show --cluster-name "clusterName" --resource-group "reso
The output should be the target cluster's information and the cluster's detailed status and detail status message should be present.
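For periodic checks, a JMESPath query can narrow the output to just those fields; this sketch assumes the cluster resource exposes `detailedStatus` and `detailedStatusMessage` properties:

```azurecli
# Show only the detail status fields of the cluster (property names assumed)
az networkcloud cluster show \
    --cluster-name "clusterName" \
    --resource-group "resourceGroupName" \
    --query "{status:detailedStatus, message:detailedStatusMessage}" \
    --output table
```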
+## Configure compute threshold parameters for runtime upgrade using cluster updateStrategy
+The following Azure CLI command is used to configure the compute threshold parameters for a runtime upgrade:
+
+```azurecli
+az networkcloud cluster update --name "<clusterName>" --resource-group "<resourceGroup>" --update-strategy strategy-type="Rack" threshold-type="PercentSuccess" threshold-value="<thresholdValue>" max-unavailable=<maxNodesOffline> wait-time-minutes=<waitTimeBetweenRacks>
+```
+
+Required arguments:
+- strategy-type: Defines the update strategy. In this case, "Rack" means updates occur rack by rack. The default value is "Rack".
+- threshold-type: Determines how the threshold should be evaluated, applied in the units defined by the strategy. The default value is "PercentSuccess".
+- threshold-value: The numeric threshold value used to evaluate an update. The default value is 80.
+
+Optional arguments:
+- max-unavailable: The maximum number of worker nodes that can be offline (that is, being upgraded) in a rack at a time. The default value is 32767.
+- wait-time-minutes: The delay or waiting period before updating a rack. The default value is 15.
+
+An example of the command usage is shown below:
+```azurecli
+az networkcloud cluster update --name "cluster01" --resource-group "cluster01-rg" --update-strategy strategy-type="Rack" threshold-type="PercentSuccess" threshold-value=70 max-unavailable=16 wait-time-minutes=15
+```
+Upon successful execution of the command, the specified updateStrategy values are applied to the cluster:
+```
+ "updateStrategy": {
+ "maxUnavailable": 16,
+ "strategyType": "Rack",
+ "thresholdType": "PercentSuccess",
+ "thresholdValue": 70,
+ "waitTimeMinutes": 15,
+ },
+```
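To confirm the strategy in effect later, you can read the `updateStrategy` object back; a sketch, assuming the property name matches the output above:

```azurecli
# Read back the configured update strategy (property name assumed)
az networkcloud cluster show \
    --cluster-name "cluster01" \
    --resource-group "cluster01-rg" \
    --query "updateStrategy"
```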
+ ## Frequently Asked Questions
+
+ ### Identifying Cluster Upgrade Stalled/Stuck
-During a runtime upgrade it's possible that the upgrade fails to move forward but the detail status reflects that the upgrade is still ongoing. **Because the runtime upgrade may take a very long time to successfully finish, there is no set timeout length currently specified**.
+During a runtime upgrade, it's possible that the upgrade fails to move forward but the detail status reflects that the upgrade is still ongoing. **Because the runtime upgrade can take a very long time to successfully finish, there's no set timeout length currently specified**.
Hence, it's advisable to periodically check your cluster's detail status and logs to determine whether your upgrade is indefinitely attempting to upgrade. We can identify when this is the case by looking at the Cluster's logs, detailed message, and detailed status message. If a timeout has occurred, we would observe that the Cluster is continuously reconciling over the same operations indefinitely and not moving forward. The Cluster's detailed status message would reflect `"Cluster is in the process of being updated."`
-From here, we recommend checking Cluster logs or configured LAW, to see if there is a failure, or a specific upgrade that is causing the lack of progress.
+From here, we recommend checking the Cluster logs or the configured LAW to see if there's a failure or a specific upgrade that's causing the lack of progress.
### Hardware Failure doesn't require Upgrade re-execution If a hardware failure occurs during an upgrade, the runtime upgrade continues as long as the set thresholds are met for the compute and management/control nodes. Once the machine is fixed or replaced, it gets provisioned with the current platform runtime's OS, which contains the targeted version of the runtime.
-If a hardware failure occurs, and the runtime upgrade has failed because thresholds weren't met for compute and control nodes, re-execution of the runtime upgrade may be needed depending on when the failure occurred and the state of the individual servers in a rack. If a rack was updated before a failure, then the upgraded runtime version would be used when the nodes are reprovisioned.
+If a hardware failure occurs, and the runtime upgrade has failed because thresholds weren't met for compute and control nodes, re-execution of the runtime upgrade might be needed depending on when the failure occurred and the state of the individual servers in a rack. If a rack was updated before a failure, then the upgraded runtime version would be used when the nodes are reprovisioned.
If the rack's spec wasn't updated to the upgraded runtime version before the hardware failure, the machine would be provisioned with the previous runtime version. To upgrade to the new runtime version, submit a new cluster upgrade request; only the nodes with the previous runtime version will upgrade. Hosts that were successfully upgraded in the previous upgrade action won't be upgraded again.
-### After a runtime upgrade the cluster shows "Failed" Provisioning State
+### After a runtime upgrade, the cluster shows "Failed" Provisioning State
-During a runtime upgrade the cluster will enter a state of `Upgrading` In the event of a failure of the runtime upgrade, for reasons related to the resources, the cluster will go into a `Failed` provisioning state. This state could be linked to the lifecycle of the components related to the cluster (e.g StorageAppliance) and may be necessary to diagnose the failure with Microsoft support.
+During a runtime upgrade, the cluster enters an `Upgrading` state. If the runtime upgrade fails for reasons related to the resources, the cluster goes into a `Failed` provisioning state. This state could be linked to the lifecycle of the components related to the cluster (for example, StorageAppliance), and it might be necessary to diagnose the failure with Microsoft support.
<!-- LINKS - External --> [installation-instruction]: https://aka.ms/azcli
operator-nexus Howto Kubernetes Cluster Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-agent-pools.md
Before proceeding with this how-to guide, it's recommended that you:
* System pools must contain at least one node. * You can't change the VM size of a node pool after you create it. * Each Nexus Kubernetes cluster requires at least one system node pool.
+ * Don't run application workloads on Kubernetes control plane nodes, as they're designed only for managing the cluster, and doing so can harm its performance and stability.
## System pool For a system node pool, Nexus Kubernetes automatically assigns the label `kubernetes.azure.com/mode: system` to its nodes. This label causes Nexus Kubernetes to prefer scheduling system pods on node pools that contain this label. This label doesn't prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
-You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools. If you intend to use the system pool for application pods (not dedicated), do not apply any application specific taints to the pool, as this can cause cluster creation to fail.
+You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools. If you intend to use the system pool for application pods (not dedicated), don't apply any application specific taints to the pool, as applying such taints can lead to cluster creation failures.
> [!IMPORTANT] > If you run a single system node pool for your Nexus Kubernetes cluster in a production environment, we recommend you use at least three nodes for the node pool.
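A sketch of creating such a dedicated system pool follows; the `--taints` and `--vm-sku-name` parameter names on `az networkcloud kubernetescluster agentpool create` and their values are assumptions here:

```azurecli
# Create a system-mode agent pool that repels application pods (parameter names assumed)
az networkcloud kubernetescluster agentpool create \
    --kubernetes-cluster-name "myNexusK8sCluster" \
    --resource-group "myResourceGroup" \
    --name "systempool1" \
    --count 3 \
    --mode "System" \
    --vm-sku-name "<vmSkuName>" \
    --taints "CriticalAddonsOnly=true:NoSchedule"
```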
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
The _default_ maximum number of connections per pricing tier and vCores are show
Customers can change the maximum number of connections value using either of the following methods:
-* Change the default value for the `max_connections` parameter using server parameter. This parameter is static and will require an instance restart.
+* Change the default value for the `max_connections` parameter using server parameter. This parameter is static and requires an instance restart.
> [!CAUTION] > While it is possible to increase the value of "max_connections" beyond the default setting, it is not advisable. The rationale behind this recommendation is that instances may encounter difficulties when the workload expands and demands more memory. As the number of connections increases, memory usage also rises. Instances with limited memory may face issues such as crashes or high latency. Although a higher value for "max_connections" might be acceptable when most connections are idle, it can lead to significant performance problems once they become active. Instead, if you require additional connections, we suggest utilizing pgBouncer, Azure's built-in connection pool management solution, in transaction mode. To start, it is recommended to use conservative values by multiplying the vCores within the range of 2 to 5. Afterward, carefully monitor resource utilization and application performance to ensure smooth operation. For detailed information on pgBouncer, please refer to the [PgBouncer in Azure Database for PostgreSQL - Flexible Server](concepts-pgbouncer.md) documentation.
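If you do need to change this static parameter, a minimal sketch with the Azure CLI follows (server names and the value are illustrative):

```azurecli
# Set max_connections on a flexible server (value is illustrative)
az postgres flexible-server parameter set \
    --resource-group "myResourceGroup" \
    --server-name "mydemoserver" \
    --name max_connections \
    --value 150

# Static parameters require a restart to apply
az postgres flexible-server restart \
    --resource-group "myResourceGroup" \
    --name "mydemoserver"
```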
When connections exceed the limit, you may receive the following error:
`FATAL: sorry, too many clients already.`
-When using PostgreSQL for a busy database with a large number of concurrent connections, there may be a significant strain on resources. This strain can result in high CPU utilization, particularly when many connections are established simultaneously and when connections have short durations (less than 60 seconds). These factors can negatively impact overall database performance by increasing the time spent on processing connections and disconnections. It's important to note that each connection in Postgres, regardless of whether it is idle or active, consumes a significant amount of resources from your database. This can lead to performance issues beyond high CPU utilization, such as disk and lock contention, which are discussed in more detail in the PostgreSQL Wiki article on the [Number of Database Connections](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections). To learn more about identifying and solving connection performance issues in Azure Database for Postgres, visit our [Identify and solve connection performance in Azure Postgres](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/identify-and-solve-connection-performance-in-azure-postgres/ba-p/3698375).
+When using PostgreSQL for a busy database with a large number of concurrent connections, there may be a significant strain on resources. This strain can result in high CPU utilization, particularly when many connections are established simultaneously and when connections have short durations (less than 60 seconds). These factors can negatively impact overall database performance by increasing the time spent on processing connections and disconnections. It's important to note that each connection in Postgres, regardless of whether it is idle or active, consumes a significant amount of resources from your database. This consumption can lead to performance issues beyond high CPU utilization, such as disk and lock contention. The topic is discussed in more detail in the PostgreSQL Wiki article on the [Number of Database Connections](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections). To learn more, visit [Identify and solve connection performance in Azure Postgres](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/identify-and-solve-connection-performance-in-azure-postgres/ba-p/3698375).
## Functional limitations
When using PostgreSQL for a busy database with a large number of concurrent conn
- Once configured, storage size can't be reduced. You have to create a new server with the desired storage size, perform a manual [dump and restore](../howto-migrate-using-dump-and-restore.md), and migrate your database(s) to the new server. - Currently, storage auto-grow feature isn't available. You can monitor the usage and increase the storage to a higher size. - When the storage usage reaches 95%, or if the available capacity is less than 5 GiB, whichever is more, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. In rare cases, if the rate of data growth outpaces the time it takes to switch to read-only mode, your server may still run out of storage.-- We recommend to set alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action such as increasing the storage size. For example, you can set an alert if the storage percent exceeds 80% usage.
+- We recommend setting alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action such as increasing the storage size. For example, you can set an alert if the storage percent exceeds 80% usage.
- If you're using logical replication, then you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise, the WAL files start to accumulate in the primary, filling up the storage. If the storage usage exceeds a certain threshold and the logical replication slot isn't in use (due to an unavailable subscriber), Flexible Server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to storage filling up. ### Networking
When using PostgreSQL for a busy database with a large number of concurrent conn
### Availability zones -- Manually moving servers to a different availability zone is currently not supported. However, you can enable HA using the preferred AZ as the standby zone. Once established, you can fail over to the standby and subsequently disable HA.
+- Manually moving servers to a different availability zone is currently not supported. However, you can enable HA using the preferred AZ as the standby zone. Once established, you can fail over to the standby and then disable HA.
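A sketch of that sequence with the Azure CLI; the `--standby-zone` parameter on `update` and the zone values are assumptions:

```azurecli
# Enable zone-redundant HA with the preferred zone as the standby (parameters assumed)
az postgres flexible-server update \
    --resource-group "myResourceGroup" \
    --name "mydemoserver" \
    --high-availability ZoneRedundant \
    --standby-zone 2

# Trigger a planned failover to the standby zone
az postgres flexible-server restart \
    --resource-group "myResourceGroup" \
    --name "mydemoserver" \
    --failover Planned

# Disable HA once the server runs in the desired zone
az postgres flexible-server update \
    --resource-group "myResourceGroup" \
    --name "mydemoserver" \
    --high-availability Disabled
```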
### Postgres engine, extensions, and PgBouncer -- Postgres 10 and older aren't supported as those are already retired by the open-source community. If you must use one of these versions, you'll need to use the [Single Server](../overview-single-server.md) option which supports the older major versions 95, 96 and 10.
+- Postgres 10 and older aren't supported as those are already retired by the open-source community. If you must use one of these versions, you need to use the [Single Server](../overview-single-server.md) option, which supports the older major versions 9.5, 9.6, and 10.
- Flexible Server supports all `contrib` extensions and more. Please refer to [PostgreSQL extensions](/azure/postgresql/flexible-server/concepts-extensions). - Built-in PgBouncer connection pooler is currently not available for Burstable servers. - SCRAM authentication isn't supported with connectivity using built-in PgBouncer.
When using PostgreSQL for a busy database with a large number of concurrent conn
### Backing up a server - Backups are managed by the system; there's currently no way to run these backups manually. We recommend using `pg_dump` instead.-- The first snapshot is a full backup and consecutive snapshots are differential backups. The differential backups only back up the changed data since the last snapshot backup. For example, if the size of your database is 40GB and your provisioned storage is 64GB, the first snapshot backup will be 40GB. Now, if you change 4GB of data, then the next differential snapshot backup size will only be 4GB. The transaction logs (write ahead logs - WAL) are separate from the full/differential backups, and are archived continuously.
+- The first snapshot is a full backup and consecutive snapshots are differential backups. The differential backups only back up the changed data since the last snapshot backup. For example, if the size of your database is 40 GB and your provisioned storage is 64 GB, the first snapshot backup will be 40 GB. Now, if you change 4 GB of data, then the next differential snapshot backup size will only be 4 GB. The transaction logs (write ahead logs - WAL) are separate from the full/differential backups, and are archived continuously.
### Restoring a server -- When using the Point-in-time-Restore feature, the new server is created with the same compute and storage configurations as the server isn't based on.
+- When using the Point-in-time-Restore feature, the new server is created with the same compute and storage configurations as the server it's based on (see the example after this list).
- VNET based database servers are restored into the same VNET when you restore from a backup. - The new server created during a restore doesn't have the firewall rules that existed on the original server. Firewall rules need to be created separately for the new server. - Restoring a deleted server isn't supported. - Cross region restore isn't supported.-- Restore to a different subscription is not supported but as a workaround, you can restore the server within the same subscription and then migrate the restored server to a different subscription.
+- Restore to a different subscription isn't supported, but as a workaround, you can restore the server within the same subscription and then migrate the restored server to a different subscription.
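For the point-in-time restore scenario noted above, a minimal sketch (server names and the timestamp are illustrative):

```azurecli
# Restore a new server from a source server to a specific point in time
az postgres flexible-server restore \
    --resource-group "myResourceGroup" \
    --name "myrestoredserver" \
    --source-server "mysourceserver" \
    --restore-time "2023-10-20T02:17:00+00:00"
```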
## Next steps
postgresql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-dropped-server.md
Last updated 06/15/2023
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-When a server is dropped, the database server backup is retained for five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. The following recommended steps can be followed to recover a dropped PostgreSQL server resource within five days from the time of server deletion. The recommended steps work only if the backup for the server is still available and not deleted from the system.
+When a server is dropped, the database server backup is retained for five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a dropped PostgreSQL server resource within five days from the time of server deletion. The recommended steps work only if the backup for the server is still available and not deleted from the system. While restoring a deleted server often succeeds, it isn't guaranteed, because recovery depends on several other factors.
## Prerequisites
postgresql How To Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-from-oracle.md
ora2pg -p -t VIEW -o views.sql -b %namespace%/schema/views -c %namespace%/config
To extract the data, use the following command. ```
-ora2pg -t COPY -o data.sql -b %namespace/data -c %namespace/config/ora2pg.conf
+ora2pg -t COPY -o data.sql -b %namespace%/data -c %namespace%/config/ora2pg.conf
``` #### Compile files
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Disk Snapshot Contributor](#disk-snapshot-contributor) | Provides permission to backup vault to manage disk snapshots. | 7efff54f-a5b4-42b5-a1c5-5411624893ce | > | [Virtual Machine Administrator Login](#virtual-machine-administrator-login) | View Virtual Machines in the portal and login as administrator | 1c0163c0-47e6-4577-8991-ea5c82e286e4 | > | [Virtual Machine Contributor](#virtual-machine-contributor) | Create and manage virtual machines, manage disks, install and run software, reset password of the root user of the virtual machine using VM extensions, and manage local user accounts using VM extensions. This role does not grant you management access to the virtual network or storage account the virtual machines are connected to. This role does not allow you to assign roles in Azure RBAC. | 9980e02c-c2be-4d73-94e8-173b1dc7cf3c |
+> | [Virtual Machine Data Access Administrator (preview)](#virtual-machine-data-access-administrator-preview) | Add or remove virtual machine data plane role assignments. Includes an ABAC condition to constrain role assignments. | 66f75aeb-eabe-4b70-9f1e-c350c4c9ad04 |
> | [Virtual Machine User Login](#virtual-machine-user-login) | View Virtual Machines in the portal and login as a regular user. | fb879df8-f326-4884-b1cf-06f3ad86be52 | > | [Windows Admin Center Administrator Login](#windows-admin-center-administrator-login) | Let's you manage the OS of your resource via Windows Admin Center as an administrator. | a6333a3e-0164-44c3-b281-7a577aff287f | > | **Networking** | | |
The following table provides a brief description of each built-in role. Click th
> | [Key Vault Crypto Officer](#key-vault-crypto-officer) | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 | > | [Key Vault Crypto Service Encryption User](#key-vault-crypto-service-encryption-user) | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 | > | [Key Vault Crypto User](#key-vault-crypto-user) | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
-> | [Key Vault Data Access Administrator (preview)](#key-vault-data-access-administrator-preview) | Add or remove key vault data plane role assignments and read resources of all types, except secrets. Includes an ABAC condition to constrain role assignments. | 8b54135c-b56d-4d72-a534-26097cfdc8d8 |
+> | [Key Vault Data Access Administrator (preview)](#key-vault-data-access-administrator-preview) | Manage access to Azure Key Vault by adding or removing role assignments for the Key Vault Administrator, Key Vault Certificates Officer, Key Vault Crypto Officer, Key Vault Crypto Service Encryption User, Key Vault Crypto User, Key Vault Reader, Key Vault Secrets Officer, or Key Vault Secrets User roles. Includes an ABAC condition to constrain role assignments. | 8b54135c-b56d-4d72-a534-26097cfdc8d8 |
> | [Key Vault Reader](#key-vault-reader) | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 | > | [Key Vault Secrets Officer](#key-vault-secrets-officer) | Perform any action on the secrets of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | b86a8fe4-44ce-4948-aee5-eccb2c155cd7 | > | [Key Vault Secrets User](#key-vault-secrets-user) | Read secret contents. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |
Create and manage virtual machines, manage disks, install and run software, rese
} ```
+### Virtual Machine Data Access Administrator (preview)
+
+Add or remove virtual machine data plane role assignments. Includes an ABAC condition to constrain role assignments.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | |
+> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/publicIPAddresses/read | Gets a public ip address definition. |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/loadBalancers/read | Gets a load balancer definition |
+> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/*/read | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/*/read | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+> | **Condition** | |
+> | ((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{1c0163c0-47e6-4577-8991-ea5c82e286e4, fb879df8-f326-4884-b1cf-06f3ad86be52})) AND ((!(ActionMatches{'Microsoft.Authorization/roleAssignments/delete'})) OR (@Resource[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{1c0163c0-47e6-4577-8991-ea5c82e286e4, fb879df8-f326-4884-b1cf-06f3ad86be52})) | Add or remove role assignments for the following roles:<br/>Virtual Machine Administrator Login<br/>Virtual Machine User Login |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/66f75aeb-eabe-4b70-9f1e-c350c4c9ad04",
+ "properties": {
+ "roleName": "Virtual Machine Data Access Administrator (preview)",
+ "description": "Add or remove virtual machine data plane role assignments. Includes an ABAC condition to constrain role assignments.",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/roleAssignments/write",
+ "Microsoft.Authorization/roleAssignments/delete",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Management/managementGroups/read",
+ "Microsoft.Network/publicIPAddresses/read",
+ "Microsoft.Network/virtualNetworks/read",
+ "Microsoft.Network/loadBalancers/read",
+ "Microsoft.Network/networkInterfaces/read",
+ "Microsoft.Compute/virtualMachines/*/read",
+ "Microsoft.HybridCompute/machines/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Support/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": [],
+ "conditionVersion": "2.0",
+ "condition": "((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{1c0163c0-47e6-4577-8991-ea5c82e286e4, fb879df8-f326-4884-b1cf-06f3ad86be52})) AND ((!(ActionMatches{'Microsoft.Authorization/roleAssignments/delete'})) OR (@Resource[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{1c0163c0-47e6-4577-8991-ea5c82e286e4, fb879df8-f326-4884-b1cf-06f3ad86be52}))"
+ }
+ ]
+ }
+}
+```
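As a usage sketch, assigning this preview role at resource group scope might look like the following; the principal and scope values are placeholders:

```azurecli
# Assign the preview role so the principal can manage VM login role assignments
az role assignment create \
    --assignee "<userOrGroupObjectId>" \
    --role "Virtual Machine Data Access Administrator (preview)" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>"
```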
+ ### Virtual Machine User Login View Virtual Machines in the portal and login as a regular user. [Learn more](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md)
Perform cryptographic operations using keys. Only works for key vaults that use
### Key Vault Data Access Administrator (preview)
-Add or remove key vault data plane role assignments and read resources of all types, except secrets. Includes an ABAC condition to constrain role assignments.
+Manage access to Azure Key Vault by adding or removing role assignments for the Key Vault Administrator, Key Vault Certificates Officer, Key Vault Crypto Officer, Key Vault Crypto Service Encryption User, Key Vault Crypto User, Key Vault Reader, Key Vault Secrets Officer, or Key Vault Secrets User roles. Includes an ABAC condition to constrain role assignments.
> [!div class="mx-tableFixed"] > | Actions | Description |
Add or remove key vault data plane role assignments and read resources of all ty
> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.KeyVault](resource-provider-operations.md#microsoftkeyvault)/vaults/*/read | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Add or remove key vault data plane role assignments and read resources of all ty
"id": "/providers/Microsoft.Authorization/roleDefinitions/8b54135c-b56d-4d72-a534-26097cfdc8d8", "properties": { "roleName": "Key Vault Data Access Administrator (preview)",
- "description": "Add or remove key vault data plane role assignments and read resources of all types, except secrets. Includes an ABAC condition to constrain role assignments.",
+ "description": "Manage access to Azure Key Vault by adding or removing role assignments for the Key Vault Administrator, Key Vault Certificates Officer, Key Vault Crypto Officer, Key Vault Crypto Service Encryption User, Key Vault Crypto User, Key Vault Reader, Key Vault Secrets Officer, or Key Vault Secrets User roles. Includes an ABAC condition to constrain role assignments.",
"assignableScopes": [ "/" ],
Add or remove key vault data plane role assignments and read resources of all ty
"Microsoft.Resources/subscriptions/read", "Microsoft.Management/managementGroups/read", "Microsoft.Resources/deployments/*",
- "Microsoft.Support/*"
+ "Microsoft.Support/*",
+ "Microsoft.KeyVault/vaults/*/read"
], "notActions": [], "dataActions": [],
sap Sap On Azure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/sap-on-azure-overview.md
Azure Center for SAP solutions is a service that makes SAP a top-level workload
For more information, see the [Azure Center for SAP solutions](center-sap-solutions/overview.md) documentation. - ## SAP on Azure deployment automation framework The SAP on Azure deployment automation framework is an open-source orchestration tool for deploying, installing and maintaining SAP environments.
search Cognitive Search Skill Textsplit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textsplit.md
Parameters are case-sensitive.
| Parameter name | Description | |--|-| | `textSplitMode` | Either `pages` or `sentences` |
-| `maximumPageLength` | Only applies if `textSplitMode` is set to `pages`. This parameter refers to the maximum page length in characters as measured by `String.Length`. The minimum value is 300, the maximum is 50000, and the default value is 5000. The algorithm does its best to break the text on sentence boundaries, so the size of each chunk may be slightly less than `maximumPageLength`. |
-| `pageOverlapLength` | Only applies if `textSplitMode` is set to `pages`. If it's specificied (needs to be >= 0), (n+1)th page starts with this number of characters from the end of the nth page. If it's set to 0, it should behave the same as if this value isn't set. |
-| `maximumPagesToTake` | Only applies if `textSplitMode` is set to `pages`. Number of pages to return. Default (0) to all pages. It can be used if only a partial number of pages is needed.
-| `defaultLanguageCode` | (optional) One of the following language codes: `am, bs, cs, da, de, en, es, et, fr, he, hi, hr, hu, fi, id, is, it, ja, ko, lv, no, nl, pl, pt-PT, pt-BR, ru, sk, sl, sr, sv, tr, ur, zh-Hans`. Default is English (en). Few things to consider:<ul><li>Providing a language code is useful to avoid cutting a word in half for nonwhitespace languages such as Chinese, Japanese, and Korean.</li><li>If you don't know the language (that is, you need to split the text for input into the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md)), the default of English (en) should be sufficient. </li></ul> |
+| `maximumPageLength` | Only applies if `textSplitMode` is set to `pages`. This refers to the maximum page length in characters as measured by `String.Length`. The minimum value is 300, the maximum is 100000, and the default value is 5000. The algorithm will do its best to break the text on sentence boundaries, so the size of each chunk may be slightly less than `maximumPageLength`. |
+| `defaultLanguageCode` | (optional) One of the following language codes: `am, bs, cs, da, de, en, es, et, fr, he, hi, hr, hu, fi, id, is, it, ja, ko, lv, no, nl, pl, pt-PT, pt-BR, ru, sk, sl, sr, sv, tr, ur, zh-Hans`. Default is English (en). Few things to consider:<ul><li>Providing a language code is useful to avoid cutting a word in half for non-whitespace languages such as Chinese, Japanese, and Korean.</li><li>If you do not know the language (i.e. you need to split the text for input into the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md)), the default of English (en) should be sufficient. </li></ul> |
## Skill Inputs
Parameters are case-sensitive.
| Parameter name | Description | |-|| | `text` | The text to split into substring. |
-| `languageCode` | (Optional) Language code for the document. If you don't know the language (that is, you need to split the text for input into the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md)), it's safe to remove this input. If the language isn't in the supported list for the `defaultLanguageCode` parameter above, a warning is emitted and the text won't be split. |
+| `languageCode` | (Optional) Language code for the document. If you do not know the language (i.e. you need to split the text for input into the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md)), it is safe to remove this input. If the language is not in the supported list for the `defaultLanguageCode` parameter above, a warning will be emitted and the text will not be split. |
## Skill Outputs
Parameters are case-sensitive.
"@odata.type": "#Microsoft.Skills.Text.SplitSkill", "textSplitMode" : "pages", "maximumPageLength": 1000,
- "pageOverlapLength": 100,
- "maximumPagesToTake": 1,
"defaultLanguageCode": "en", "inputs": [ {
Parameters are case-sensitive.
"recordId": "1", "data": { "textItems": [
- "This is the loan...Here is the overlap part...",
- "Here is the overlap part...On the second page we..."
+ "This is the loan…",
+ "On the second page we…"
] } },
Parameters are case-sensitive.
"recordId": "2", "data": { "textItems": [
- "This is the second document...Here is the overlap part...",
- "Here is the overlap part...On the second page of the second doc..."
+ "This is the second document...",
+ "On the second page of the second doc…"
] } }
Parameters are case-sensitive.
``` ## Error cases
-+ If a language isn't supported, a warning is generated.
+If a language is not supported, a warning is generated.
## See also
security Identity Management Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-best-practices.md
Organizations that don't integrate their on-premises identity with their cloud
> You need to choose which directories critical accounts will reside in and whether the admin workstation used is managed by new cloud services or existing processes. Using existing management and identity provisioning processes can decrease some risks but can also create the risk of an attacker compromising an on-premises account and pivoting to the cloud. You might want to use a different strategy for different roles (for example, IT admins vs. business unit admins). You have two options. The first option is to create Microsoft Entra accounts that aren't synchronized with your on-premises Active Directory instance. Join your admin workstation to Microsoft Entra ID, which you can manage and patch by using Microsoft Intune. The second option is to use existing admin accounts by synchronizing to your on-premises Active Directory instance. Use existing workstations in your Active Directory domain for management and security. ## Manage connected tenants
-Your security organization needs visibility to assess risk and to determine whether the policies of your organization, and any regulatory requirements, are being followed. You should ensure that your security organization has visibility into all subscriptions connected to your production environment and network (via [Azure ExpressRoute](../../expressroute/expressroute-introduction.md) or [site-to-site VPN](../../vpn-gateway/vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)). A [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator) in Microsoft Entra ID can elevate their access to the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role and see all subscriptions and managed groups connected to your environment.
+Your security organization needs visibility to assess risk and to determine whether the policies of your organization, and any regulatory requirements, are being followed. You should ensure that your security organization has visibility into all subscriptions connected to your production environment and network (via [Azure ExpressRoute](../../expressroute/expressroute-introduction.md) or [site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md)). A [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator) in Microsoft Entra ID can elevate their access to the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role and see all subscriptions and managed groups connected to your environment.
See [elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md) to ensure that you and your security group can view all subscriptions or management groups connected to your environment. You should remove this elevated access after you've assessed risks.
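The elevation itself can be performed in the portal or through the REST API; a sketch using `az rest` against the documented `elevateAccess` endpoint:

```azurecli
# Elevate a Global Administrator to User Access Administrator at root scope
az rest --method post --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
```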
If you have multiple tenants or you want to enable users to [reset their own pas
We recommend that you require two-step verification for all of your users. This includes administrators and others in your organization who can have a significant impact if their account is compromised (for example, financial officers).
-There are multiple options for requiring two-step verification. The best option for you depends on your goals, the Microsoft Entra edition youΓÇÖre running, and your licensing program. See [How to require two-step verification for a user](../../active-directory/authentication/howto-mfa-userstates.md) to determine the best option for you. See the [Microsoft Entra ID](https://azure.microsoft.com/pricing/details/active-directory/) and [Microsoft Entra multifactor Authentication](https://azure.microsoft.com/pricing/details/multi-factor-authentication/) pricing pages for more information about licenses and pricing.
+There are multiple options for requiring two-step verification. The best option for you depends on your goals, the Microsoft Entra edition you're running, and your licensing program. See [How to require two-step verification for a user](../../active-directory/authentication/howto-mfa-userstates.md) to determine the best option for you. See the [Microsoft Entra ID](https://azure.microsoft.com/pricing/details/active-directory/) and [Microsoft Entra multifactor authentication](https://azure.microsoft.com/pricing/details/multi-factor-authentication/) pricing pages for more information about licenses and pricing.
Following are options and benefits for enabling two-step verification:
security Network Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-best-practices.md
Many organizations have chosen the hybrid IT route. With hybrid IT, some of the
In a hybrid IT scenario, there's usually some type of cross-premises connectivity. Cross-premises connectivity allows the company to connect its on-premises networks to Azure virtual networks. Two cross-premises connectivity solutions are available:
-* [Site-to-site VPN](../../vpn-gateway/vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md). It's a trusted, reliable, and established technology, but the connection takes place over the internet. Bandwidth is constrained to a maximum of about 1.25 Gbps. Site-to-site VPN is a desirable option in some scenarios.
+* [Site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md). It's a trusted, reliable, and established technology, but the connection takes place over the internet. Bandwidth is constrained to a maximum of about 1.25 Gbps. Site-to-site VPN is a desirable option in some scenarios.
* **Azure ExpressRoute**. We recommend that you use [ExpressRoute](../../expressroute/expressroute-introduction.md) for your cross-premises connectivity. ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services like Azure, Microsoft 365, and Dynamics 365. ExpressRoute is a dedicated WAN link from your on-premises location or a Microsoft Exchange hosting provider to the Microsoft cloud. Because this is a telco connection, your data doesn't travel over the internet, so it isn't exposed to the potential risks of internet communications. The location of your ExpressRoute connection can affect firewall capacity, scalability, reliability, and network traffic visibility. You'll need to identify where to terminate ExpressRoute in existing (on-premises) networks. You can:
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 08/28/2023 Last updated : 10/23/2023
Data connectors are available as part of the following offerings:
## Akamai -- [Akamai Security Events](data-connectors/akamai-security-events.md)
+- [[Deprecated] Akamai Security Events via Legacy Agent](data-connectors/deprecated-akamai-security-events-via-legacy-agent.md)
+- [[Recommended] Akamai Security Events via AMA](data-connectors/recommended-akamai-security-events-via-ama.md)
## AliCloud
Data connectors are available as part of the following offerings:
## Amazon Web Services - [Amazon Web Services](data-connectors/amazon-web-services.md)-- [Amazon Web Services S3](data-connectors/amazon-web-services-s3.md)
+- [Amazon Web Services S3 (preview)](data-connectors/amazon-web-services-s3.md)
## Apache
Data connectors are available as part of the following offerings:
## Aruba -- [Aruba ClearPass](data-connectors/aruba-clearpass.md)
+- [[Deprecated] Aruba ClearPass via Legacy Agent](data-connectors/deprecated-aruba-clearpass-via-legacy-agent.md)
+- [[Recommended] Aruba ClearPass via AMA](data-connectors/recommended-aruba-clearpass-via-ama.md)
## Atlassian
Data connectors are available as part of the following offerings:
## Broadcom -- [Broadcom Symantec DLP](data-connectors/braodcom-symantec-dlp.md)
+- [[Deprecated] Broadcom Symantec DLP via Legacy Agent](data-connectors/deprecated-broadcom-symantec-dlp-via-legacy-agent.md)
+- [[Recommended] Broadcom Symantec DLP via AMA](data-connectors/recommended-broadcom-symantec-dlp-via-ama.md)
## Cisco
+- [[Deprecated] Cisco Secure Email Gateway via Legacy Agent](data-connectors/deprecated-cisco-secure-email-gateway-via-legacy-agent.md)
+- [[Recommended] Cisco Secure Email Gateway via AMA](data-connectors/recommended-cisco-secure-email-gateway-via-ama.md)
- [Cisco Application Centric Infrastructure](data-connectors/cisco-application-centric-infrastructure.md) - [Cisco ASA](data-connectors/cisco-asa.md)-- [Cisco AS) - [Cisco Duo Security (using Azure Functions)](data-connectors/cisco-duo-security-using-azure-functions.md) - [Cisco Identity Services Engine](data-connectors/cisco-identity-services-engine.md) - [Cisco Meraki](data-connectors/cisco-meraki.md)-- [Cisco Secure Email Gateway](data-connectors/cisco-secure-email-gateway.md) - [Cisco Secure Endpoint (AMP) (using Azure Functions)](data-connectors/cisco-secure-endpoint-amp-using-azure-functions.md) - [Cisco Stealthwatch](data-connectors/cisco-stealthwatch.md) - [Cisco UCS](data-connectors/cisco-ucs.md)-- [Cisco Umbrella (using Azure Function)](data-connectors/cisco-umbrella-using-azure-function.md)
+- [Cisco Umbrella (using Azure Functions)](data-connectors/cisco-umbrella-using-azure-functions.md)
- [Cisco Web Security Appliance](data-connectors/cisco-web-security-appliance.md) ## Cisco Systems, Inc. -- [Cisco Firepower eStreamer](data-connectors/cisco-firepower-estreamer.md)
+- [[Deprecated] Cisco Firepower eStreamer via Legacy Agent](data-connectors/deprecated-cisco-firepower-estreamer-via-legacy-agent.md)
+- [[Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA](data-connectors/recommended-cisco-firepower-estreamer-via-legacy-agent-via-ama.md)
- [Cisco Software Defined WAN](data-connectors/cisco-software-defined-wan.md) ## Citrix
Data connectors are available as part of the following offerings:
## Claroty -- [Claroty](data-connectors/claroty.md)
+- [[Deprecated] Claroty via Legacy Agent](data-connectors/deprecated-claroty-via-legacy-agent.md)
+- [[Recommended] Claroty via AMA](data-connectors/recommended-claroty-via-ama.md)
## Cloud Software Group
+- [[Deprecated] Citrix WAF (Web App Firewall) via Legacy Agent](data-connectors/deprecated-citrix-waf-web-app-firewall-via-legacy-agent.md)
+- [[Recommended] Citrix WAF (Web App Firewall) via AMA](data-connectors/recommended-citrix-waf-web-app-firewall-via-ama.md)
- [CITRIX SECURITY ANALYTICS](data-connectors/citrix-security-analytics.md)-- [Citrix WAF (Web App Firewall)](data-connectors/citrix-waf-web-app-firewall.md)- ## Cloudflare - [Cloudflare (Preview) (using Azure Functions)](data-connectors/cloudflare-using-azure-functions.md)- ## Cognni - [Cognni](data-connectors/cognni.md)
Data connectors are available as part of the following offerings:
## Contrast Security -- [Contrast Protect](data-connectors/contrast-protect.md)
+- [[Deprecated] Contrast Protect via Legacy Agent](data-connectors/deprecated-contrast-protect-via-legacy-agent.md)
+- [[Recommended] Contrast Protect via AMA](data-connectors/recommended-contrast-protect-via-ama.md)
## Corelight Inc.
Data connectors are available as part of the following offerings:
## CyberArk -- [CyberArk Enterprise Password Vault (EPV) Events](data-connectors/cyberark-enterprise-password-vault-epv-events.md)
+- [[Deprecated] CyberArk Enterprise Password Vault (EPV) Events via Legacy Agent](data-connectors/deprecated-cyberark-enterprise-password-vault-epv-events-via-legacy-agent.md)
+- [[Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA](data-connectors/recommended-cyberark-enterprise-password-vault-epv-events-via-ama.md)
- [CyberArkEPM (using Azure Functions)](data-connectors/cyberarkepm-using-azure-functions.md)
-## CyberPion
--- [Cyberpion Security Logs](data-connectors/cyberpion-security-logs.md)- ## Cybersixgill - [Cybersixgill Actionable Alerts (using Azure Functions)](data-connectors/cybersixgill-actionable-alerts-using-azure-functions.md)
Data connectors are available as part of the following offerings:
## Darktrace -- [AI Analyst Darktrace](data-connectors/ai-analyst-darktrace.md) - [Darktrace Connector for Microsoft Sentinel REST API](data-connectors/darktrace-connector-for-microsoft-sentinel-rest-api.md)
+## Darktrace plc
+
+- [[Deprecated] AI Analyst Darktrace via Legacy Agent](data-connectors/deprecated-ai-analyst-darktrace-via-legacy-agent.md)
+- [[Recommended] AI Analyst Darktrace via AMA](data-connectors/recommended-ai-analyst-darktrace-via-ama.md)
+ ## Defend Limited - [Cortex XDR - Incidents](data-connectors/cortex-xdr-incidents.md) ## Delinea Inc. -- [Delinea Secret Server](data-connectors/delinea-secret-server.md)
+- [[Deprecated] Delinea Secret Server via Legacy Agent](data-connectors/deprecated-delinea-secret-server-via-legacy-agent.md)
+- [[Recommended] Delinea Secret Server via AMA](data-connectors/recommended-delinea-secret-server-via-ama.md)
## Derdack
Data connectors are available as part of the following offerings:
## ExtraHop Networks, Inc. -- [ExtraHop Reveal(x)](data-connectors/extrahop-reveal-x.md)
+- [[Deprecated] ExtraHop Reveal(x) via Legacy Agent](data-connectors/deprecated-extrahop-reveal-x-via-legacy-agent.md)
+- [[Recommended] ExtraHop Reveal(x) via AMA](data-connectors/recommended-extrahop-reveal-x-via-ama.md)
## F5, Inc.
+- [[Deprecated] F5 Networks via Legacy Agent](data-connectors/deprecated-f5-networks-via-legacy-agent.md)
+- [[Recommended] F5 Networks via AMA](data-connectors/recommended-f5-networks-via-ama.md)
- [F5 BIG-IP](data-connectors/f5-big-ip.md)-- [F5 Networks](data-connectors/f5-networks.md) ## Facebook -- [Workplace from Facebook (using Azure Functions)](data-connectors/workplace-from-facebook-using-azure-function.md)
+- [Workplace from Facebook (using Azure Functions)](data-connectors/workplace-from-facebook-using-azure-functions.md)
+
+## Feedly, Inc.
+
+- [Feedly](data-connectors/feedly.md)
## Fireeye -- [FireEye Network Security (NX)](data-connectors/fireeye-network-security-nx.md)
+- [[Deprecated] FireEye Network Security (NX) via Legacy Agent](data-connectors/deprecated-fireeye-network-security-nx-via-legacy-agent.md)
+- [[Recommended] FireEye Network Security (NX) via AMA](data-connectors/recommended-fireeye-network-security-nx-via-ama.md)
## Flare Systems
Data connectors are available as part of the following offerings:
## iboss inc -- [iboss](data-connectors/iboss.md)
+- [[Deprecated] iboss via Legacy Agent](data-connectors/deprecated-iboss-via-legacy-agent.md)
+- [[Recommended] iboss via AMA](data-connectors/recommended-iboss-via-ama.md)
## Illumio -- [Illumio Core](data-connectors/illumio-core.md)
+- [[Deprecated] Illumio Core via Legacy Agent](data-connectors/deprecated-illumio-core-via-legacy-agent.md)
+- [[Recommended] Illumio Core via AMA](data-connectors/recommended-illumio-core-via-ama.md)
## Illusive Networks -- [Illusive Platform](data-connectors/illusive-platform.md)
+- [[Deprecated] Illusive Platform via Legacy Agent](data-connectors/deprecated-illusive-platform-via-legacy-agent.md)
+- [[Recommended] Illusive Platform via AMA](data-connectors/recommended-illusive-platform-via-ama.md)
## Imperva
Data connectors are available as part of the following offerings:
## Kaspersky -- [Kaspersky Security Center](data-connectors/kaspersky-security-center.md)
+- [[Deprecated] Kaspersky Security Center via Legacy Agent](data-connectors/deprecated-kaspersky-security-center-via-legacy-agent.md)
+- [[Recommended] Kaspersky Security Center via AMA](data-connectors/recommended-kaspersky-security-center-via-ama.md)
## Linux
Data connectors are available as part of the following offerings:
- [Lookout (using Azure Functions)](data-connectors/lookout-using-azure-function.md) - [Lookout Cloud Security for Microsoft Sentinel (using Azure Functions)](data-connectors/lookout-cloud-security-for-microsoft-sentinel-using-azure-function.md)
+## MailGuard Pty Limited
+
+- [MailGuard 365](data-connectors/mailguard-365.md)
+ ## MarkLogic - [MarkLogic Audit](data-connectors/marklogic-audit.md)
Data connectors are available as part of the following offerings:
- [Microsoft Defender for Endpoint](data-connectors/microsoft-defender-for-endpoint.md) - [Microsoft Defender for Identity](data-connectors/microsoft-defender-for-identity.md) - [Microsoft Defender for IoT](data-connectors/microsoft-defender-for-iot.md)-- [Microsoft Defender for Office 365](data-connectors/microsoft-defender-for-office-365.md)
+- [Microsoft Defender for Office 365 (preview)](data-connectors/microsoft-defender-for-office-365.md)
- [Microsoft Defender Threat Intelligence](data-connectors/microsoft-defender-threat-intelligence.md)-- [Microsoft PowerBI](data-connectors/microsoft-powerbi.md)-- [Microsoft Project](data-connectors/microsoft-project.md)-- [Microsoft Purview (Preview)](data-connectors/microsoft-purview.md)
+- [Microsoft PowerBI (preview)](data-connectors/microsoft-powerbi.md)
+- [Microsoft Project (preview)](data-connectors/microsoft-project.md)
+- [Microsoft Purview (preview)](data-connectors/microsoft-purview.md)
- [Microsoft Purview Information Protection](data-connectors/microsoft-purview-information-protection.md) - [Network Security Groups](data-connectors/network-security-groups.md) - [Security Events via Legacy Agent](data-connectors/security-events-via-legacy-agent.md)
Data connectors are available as part of the following offerings:
## Microsoft Sentinel Community, Microsoft Corporation
+- [[Deprecated] Forcepoint CASB via Legacy Agent](data-connectors/deprecated-forcepoint-casb-via-legacy-agent.md)
+- [[Deprecated] Forcepoint CSG via Legacy Agent](data-connectors/deprecated-forcepoint-csg-via-legacy-agent.md)
+- [[Deprecated] Forcepoint NGFW via Legacy Agent](data-connectors/deprecated-forcepoint-ngfw-via-legacy-agent.md)
+- [[Recommended] Forcepoint CASB via AMA](data-connectors/recommended-forcepoint-casb-via-ama.md)
+- [[Recommended] Forcepoint CSG via AMA](data-connectors/recommended-forcepoint-csg-via-ama.md)
+- [[Recommended] Forcepoint NGFW via AMA](data-connectors/recommended-forcepoint-ngfw-via-ama.md)
- [Exchange Security Insights Online Collector (using Azure Functions)](data-connectors/exchange-security-insights-online-collector-using-azure-functions.md)
-- [Forcepoint CASB](data-connectors/forcepoint-casb.md)
-- [Forcepoint CSG](data-connectors/forcepoint-csg.md)
- [Forcepoint DLP](data-connectors/forcepoint-dlp.md)
-- [Forcepoint NGFW](data-connectors/forcepoint-ngfw.md)
- [MISP2Sentinel](data-connectors/misp2sentinel.md)
## MongoDB
Data connectors are available as part of the following offerings:
## Morphisec
-- [Morphisec UTPP](data-connectors/morphisec-utpp.md)
+- [[Deprecated] Morphisec UTPP via Legacy Agent](data-connectors/deprecated-morphisec-utpp-via-legacy-agent.md)
+- [[Recommended] Morphisec UTPP via AMA](data-connectors/recommended-morphisec-utpp-via-ama.md)
## MuleSoft
Data connectors are available as part of the following offerings:
## Netwrix
-- [Netwrix Auditor (formerly Stealthbits Privileged Activity Manager)](data-connectors/netwrix-auditor-formerly-stealthbits-privileged-activity-manager.md)
+- [[Deprecated] Netwrix Auditor via Legacy Agent](data-connectors/deprecated-netwrix-auditor-via-legacy-agent.md)
+- [[Recommended] Netwrix Auditor via AMA](data-connectors/recommended-netwrix-auditor-via-ama.md)
## Nginx
Data connectors are available as part of the following offerings:
## Nozomi Networks
-- [Nozomi Networks N2OS](data-connectors/nozomi-networks-n2os.md)
+- [[Deprecated] Nozomi Networks N2OS via Legacy Agent](data-connectors/deprecated-nozomi-networks-n2os-via-legacy-agent.md)
+- [[Recommended] Nozomi Networks N2OS via AMA](data-connectors/recommended-nozomi-networks-n2os-via-ama.md)
## NXLog Ltd.
Data connectors are available as part of the following offerings:
## Okta
-- [Okta Single Sign-On (using Azure Functions)](data-connectors/okta-single-sign-on-using-azure-function.md)
+- [Okta Single Sign-On (using Azure Functions)](data-connectors/okta-single-sign-on-using-azure-functions.md)
## OneLogin
Data connectors are available as part of the following offerings:
## OSSEC
-- [OSSEC](data-connectors/ossec.md)
+- [[Deprecated] OSSEC via Legacy Agent](data-connectors/deprecated-ossec-via-legacy-agent.md)
+- [[Recommended] OSSEC via AMA](data-connectors/recommended-ossec-via-ama.md)
## Palo Alto Networks
+- [[Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent](data-connectors/deprecated-palo-alto-networks-cortex-data-lake-cdl-via-legacy-agent.md)
+- [[Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA](data-connectors/recommended-palo-alto-networks-cortex-data-lake-cdl-via-ama.md)
- [Palo Alto Networks (Firewall)](data-connectors/palo-alto-networks-firewall.md)
-- [Palo Alto Networks Cortex Data Lake (CDL)](data-connectors/palo-alto-networks-cortex-data-lake-cdl.md)
- [Palo Alto Prisma Cloud CSPM (using Azure Functions)](data-connectors/palo-alto-prisma-cloud-cspm-using-azure-functions.md)
## Perimeter 81
Data connectors are available as part of the following offerings:
## Ping Identity
-- [PingFederate](data-connectors/pingfederate.md)
+- [[Deprecated] PingFederate via Legacy Agent](data-connectors/deprecated-pingfederate-via-legacy-agent.md)
+- [[Recommended] PingFederate via AMA](data-connectors/recommended-pingfederate-via-ama.md)
## PostgreSQL
Data connectors are available as part of the following offerings:
## Qualys
-- [Qualys VM KnowledgeBase (using Azure Functions)](data-connectors/qualys-vm-knowledgebase-using-azure-function.md)
+- [Qualys VM KnowledgeBase (using Azure Functions)](data-connectors/qualys-vm-knowledgebase-using-azure-functions.md)
- [Qualys Vulnerability Management (using Azure Functions)](data-connectors/qualys-vulnerability-management-using-azure-functions.md)
## RedHat
Data connectors are available as part of the following offerings:
## Salesforce
-- [Salesforce Service Cloud (using Azure Functions)](data-connectors/salesforce-service-cloud-using-azure-function.md)
+- [Salesforce Service Cloud (using Azure Functions)](data-connectors/salesforce-service-cloud-using-azure-functions.md)
## Secure Practice
Data connectors are available as part of the following offerings:
## Snowflake
-- [Snowflake (using Azure Functions)](data-connectors/snowflake-using-azure-function.md)
+- [Snowflake (using Azure Functions)](data-connectors/snowflake-using-azure-functions.md)
## SonicWall Inc
-- [SonicWall Firewall](data-connectors/sonicwall-firewall.md)
+- [[Deprecated] SonicWall Firewall via Legacy Agent](data-connectors/deprecated-sonicwall-firewall-via-legacy-agent.md)
+- [[Recommended] SonicWall Firewall via AMA](data-connectors/recommended-sonicwall-firewall-via-ama.md)
## Sonrai Security
Data connectors are available as part of the following offerings:
## TheHive
-- [TheHive Project - TheHive (using Azure Functions)](data-connectors/thehive-project-thehive-using-azure-function.md)
+- [TheHive Project - TheHive (using Azure Functions)](data-connectors/thehive-project-thehive-using-azure-functions.md)
## Theom, Inc.
Data connectors are available as part of the following offerings:
## TrendMicro
-- [Trend Micro Apex One](data-connectors/trend-micro-apex-one.md)
+- [[Deprecated] Trend Micro Apex One via Legacy Agent](data-connectors/deprecated-trend-micro-apex-one-via-legacy-agent.md)
+- [[Recommended] Trend Micro Apex One via AMA](data-connectors/recommended-trend-micro-apex-one-via-ama.md)
## Ubiquiti
Data connectors are available as part of the following offerings:
## vArmour Networks
-- [vArmour Application Controller](data-connectors/varmour-application-controller.md)
+- [[Deprecated] vArmour Application Controller via Legacy Agent](data-connectors/deprecated-varmour-application-controller-via-legacy-agent.md)
+- [[Recommended] vArmour Application Controller via AMA](data-connectors/recommended-varmour-application-controller-via-ama.md)
## Vectra AI, Inc
Data connectors are available as part of the following offerings:
## WireX Systems
-- [WireX Network Forensics Platform](data-connectors/wirex-network-forensics-platform.md)
+- [[Deprecated] WireX Network Forensics Platform via Legacy Agent](data-connectors/deprecated-wirex-network-forensics-platform-via-legacy-agent.md)
+- [[Recommended] WireX Network Forensics Platform via AMA](data-connectors/recommended-wirex-network-forensics-platform-via-ama.md)
## WithSecure
sentinel Cisco Asa Ftd Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-asa-ftd-via-ama.md
- Title: "Cisco ASA/FTD via AMA (Preview) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Cisco ASA/FTD via AMA (Preview) to connect your data source to Microsoft Sentinel."
- Previously updated : 07/26/2023
-# Cisco ASA/FTD via AMA (Preview) connector for Microsoft Sentinel
-
-The Cisco ASA firewall connector allows you to easily connect your Cisco ASA logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CommonSecurityLog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**All logs**
- ```kusto
-CommonSecurityLog
-
- | where DeviceVendor == "Cisco"
-
- | where DeviceProduct == "ASA"
-
- | sort by TimeGenerated
- ```
-
-## Prerequisites
-
-To integrate with Cisco ASA/FTD via AMA (Preview), make sure you have:
-- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
-## Vendor installation instructions
-
-Enable data collection rule
-
-> Cisco ASA/FTD event logs are collected only from **Linux** agents.
-
-Run the following command to install and apply the Cisco ASA/FTD collector:
-
-```
- sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
-```
-
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoasa?tab=Overview) in the Azure Marketplace.
sentinel Cisco Umbrella Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-umbrella-using-azure-functions.md
+
+ Title: "Cisco Umbrella (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cisco Umbrella (using Azure Functions) to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# Cisco Umbrella (using Azure Functions) connector for Microsoft Sentinel
+
+The Cisco Umbrella data connector provides the capability to ingest [Cisco Umbrella](https://docs.umbrella.com/) events stored in Amazon S3 into Microsoft Sentinel using the Amazon S3 REST API. Refer to [Cisco Umbrella log management documentation](https://docs.umbrella.com/deployment-umbrella/docs/log-management) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function alias** | Cisco_Umbrella |
+| **Kusto function url** | https://aka.ms/sentinel-ciscoumbrella-function |
+| **Log Analytics table(s)** | Cisco_Umbrella_dns_CL<br/> Cisco_Umbrella_proxy_CL<br/> Cisco_Umbrella_ip_CL<br/> Cisco_Umbrella_cloudfirewall_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**All Cisco Umbrella Logs**
+ ```kusto
+Cisco_Umbrella
+
+ | sort by TimeGenerated desc
+ ```
+
+**Cisco Umbrella DNS Logs**
+ ```kusto
+Cisco_Umbrella
+
+ | where EventType == 'dnslogs'
+
+ | sort by TimeGenerated desc
+ ```
+
+**Cisco Umbrella Proxy Logs**
+ ```kusto
+Cisco_Umbrella
+
+ | where EventType == 'proxylogs'
+
+ | sort by TimeGenerated desc
+ ```
+
+**Cisco Umbrella IP Logs**
+ ```kusto
+Cisco_Umbrella
+
+ | where EventType == 'iplogs'
+
+ | sort by TimeGenerated desc
+ ```
+
+**Cisco Umbrella Cloud Firewall Logs**
+ ```kusto
+Cisco_Umbrella
+
+ | where EventType == 'cloudfirewalllogs'
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Cisco Umbrella (using Azure Functions), make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Amazon S3 REST API Credentials/permissions**: **AWS Access Key Id**, **AWS Secret Access Key**, **AWS S3 Bucket Name** are required for Amazon S3 REST API.
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Amazon S3 REST API to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+> [!NOTE]
+ > This connector has been updated to support [Cisco Umbrella version 5 and version 6](https://docs.umbrella.com/deployment-umbrella/docs/log-formats-and-versioning).
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Functions App.
++
+> [!NOTE]
+ > This connector uses a parser based on a Kusto Function to normalize fields. [Follow these steps](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Parsers/CiscoUmbrella/Cisco_Umbrella) to create the Kusto function alias **Cisco_Umbrella**.
++
+**STEP 1 - Configuration of the Cisco Umbrella logs collection**
+
+[See documentation](https://docs.umbrella.com/deployment-umbrella/docs/log-management#section-logging-to-amazon-s-3) and follow the instructions to set up logging and obtain credentials.
++
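+Before moving on to STEP 2, it can help to confirm that the credentials obtained in STEP 1 can actually read the log bucket. The following is a minimal sketch of such a check, not part of the official connector; the placeholder credentials, bucket name, and region are assumptions to replace with your own values.
+
+ ```python
+# Minimal sketch: verify the Amazon S3 credentials from STEP 1 can read the
+# Umbrella log bucket. All <...> placeholders and the region are assumptions.
+import boto3
+
+s3 = boto3.client(
+    "s3",
+    aws_access_key_id="<AWS Access Key Id>",
+    aws_secret_access_key="<AWS Secret Access Key>",
+    region_name="us-east-1",  # adjust to your bucket's region
+)
+
+# List a few objects to confirm the key pair can reach the Umbrella logs.
+response = s3.list_objects_v2(Bucket="<AWS S3 Bucket Name>", MaxKeys=5)
+for obj in response.get("Contents", []):
+    print(obj["Key"], obj["Size"])
+ ```
+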
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Functions**
+
+>**IMPORTANT:** Before deploying the Cisco Umbrella data connector, have the Workspace ID and Workspace Primary Key, as well as the Amazon S3 REST API authorization credentials, readily available.
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoumbrella?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Ai Analyst Darktrace Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-ai-analyst-darktrace-via-legacy-agent.md
+
+ Title: "[Deprecated] AI Analyst Darktrace via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] AI Analyst Darktrace via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] AI Analyst Darktrace via Legacy Agent connector for Microsoft Sentinel
+
+The Darktrace connector lets users connect Darktrace Model Breaches in real-time with Microsoft Sentinel, allowing creation of custom Dashboards, Workbooks, Notebooks and Custom Alerts to improve investigation. Microsoft Sentinel's enhanced visibility into Darktrace logs enables monitoring and mitigation of security threats.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (Darktrace)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Darktrace](https://www.darktrace.com/en/contact/) |
+
+## Query samples
+
+**First 10 most recent model breaches**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Darktrace"
+
+ | order by TimeGenerated desc
+
+ | limit 10
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure Darktrace to forward Syslog messages in CEF format to your Azure workspace via the Syslog agent.
+
+ 1) Within the Darktrace Threat Visualizer, navigate to the System Config page in the main menu under Admin.
+
+ 2) From the left-hand menu, select Modules and choose Microsoft Sentinel from the available Workflow Integrations.
+
+ 3) A configuration window will open. Locate Microsoft Sentinel Syslog CEF and click New to reveal the configuration settings, unless already exposed.
+
+ 4) In the Server configuration field, enter the location of the log forwarder and optionally modify the communication port. Ensure that the port selected is set to 514 and is allowed by any intermediary firewalls.
+
+ 5) Configure any alert thresholds, time offsets or additional settings as required.
+
+ 6) Review any additional configuration options you may wish to enable that alter the Syslog syntax.
+
+ 7) Enable Send Alerts and save your changes.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
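+As an additional check, you can exercise the full pipeline by hand-crafting a single test event. The snippet below is a minimal sketch, not part of the official tooling; the collector address and every CEF field value are illustrative assumptions, not real Darktrace output.
+
+ ```python
+# Minimal sketch: send one syslog-framed CEF test message over TCP 514 to the
+# CEF collector. The collector IP and all CEF fields below are placeholders.
+import socket
+
+COLLECTOR = ("192.0.2.10", 514)  # assumption: your Linux agent's IP address
+
+message = (
+    "<134>Oct 23 12:00:00 testhost "
+    "CEF:0|Test|TestProduct|1.0|100|Connectivity check|3|src=10.0.0.1\n"
+)
+
+with socket.create_connection(COLLECTOR, timeout=5) as sock:
+    sock.sendall(message.encode("utf-8"))
+print("Test message sent; allow up to 20 minutes before checking CommonSecurityLog.")
+ ```
+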
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/darktrace1655286944672.darktrace_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Akamai Security Events Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-akamai-security-events-via-legacy-agent.md
+
+ Title: "[Deprecated] Akamai Security Events via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Akamai Security Events via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Akamai Security Events via Legacy Agent connector for Microsoft Sentinel
+
+Akamai Solution for Microsoft Sentinel provides the capability to ingest [Akamai Security Events](https://www.akamai.com/us/en/products/security/) into Microsoft Sentinel. Refer to [Akamai SIEM Integration documentation](https://developer.akamai.com/tools/integrations/siem) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (AkamaiSecurityEvents)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Countries**
+ ```kusto
+AkamaiSIEMEvent
+
+ | summarize count() by SrcGeoCountry
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select Functions, search for the alias Akamai Security Events, and load the function code, or click [here](https://aka.ms/sentinel-akamaisecurityevents-parser). On the second line of the query, enter the hostname(s) of your Akamai Security Events device(s) and any other unique identifiers for the log stream. The function usually takes 10-15 minutes to activate after solution installation/update.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+[Follow these steps](https://developer.akamai.com/tools/integrations/siem) to configure the Akamai CEF connector to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-akamai?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Aruba Clearpass Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-aruba-clearpass-via-legacy-agent.md
+
+ Title: "[Deprecated] Aruba ClearPass via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Aruba ClearPass via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Aruba ClearPass via Legacy Agent connector for Microsoft Sentinel
+
+The [Aruba ClearPass](https://www.arubanetworks.com/products/security/network-access-control/secure-access/) connector allows you to easily connect your Aruba ClearPass with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ArubaClearPass)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**Top 10 Events by Username**
+ ```kusto
+ArubaClearPass
+
+ | summarize count() by UserName
+
+ | top 10 by count_
+ ```
+
+**Top 10 Error Codes**
+ ```kusto
+ArubaClearPass
+
+ | summarize count() by ErrorCode
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select Functions, search for the alias ArubaClearPass, and load the function code, or click [here](https://aka.ms/sentinel-arubaclearpass-parser). The function usually takes 10-15 minutes to activate after solution installation/update.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Aruba ClearPass logs to a Syslog agent
+
+Configure Aruba ClearPass to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.
+1. [Follow these instructions](https://www.arubanetworks.com/techdocs/ClearPass/6.7/PolicyManager/Content/CPPM_UserGuide/Admin/syslogExportFilters_add_syslog_filter_general.htm) to configure Aruba ClearPass to forward syslog.
+2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Broadcom Symantec Dlp Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-broadcom-symantec-dlp-via-legacy-agent.md
+
+ Title: "[Deprecated] Broadcom Symantec DLP via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Broadcom Symantec DLP via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Broadcom Symantec DLP via Legacy Agent connector for Microsoft Sentinel
+
+The [Broadcom Symantec Data Loss Prevention (DLP)](https://www.broadcom.com/products/cyber-security/information-protection/data-loss-prevention) connector allows you to easily connect your Symantec DLP with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's information, where it travels, and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (SymantecDLP)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Triggered Activities**
+ ```kusto
+SymantecDLP
+
+ | summarize count() by Activity
+
+ | top 10 by count_
+ ```
+
+**Top 10 Filenames**
+ ```kusto
+SymantecDLP
+
+ | summarize count() by FileName
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select Functions, search for the alias SymantecDLP, and load the function code, or click [here](https://aka.ms/sentinel-symantecdlp-parser). The function usually takes 10-15 minutes to activate after solution installation/update.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Symantec DLP logs to a Syslog agent
+
+Configure Symantec DLP to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.
+1. [Follow these instructions](https://knowledge.broadcom.com/external/article/159509/generating-syslog-messages-from-data-los.html) to configure Symantec DLP to forward syslog.
+2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-broadcomsymantecdlp?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Cisco Firepower Estreamer Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-cisco-firepower-estreamer-via-legacy-agent.md
+
+ Title: "[Deprecated] Cisco Firepower eStreamer via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Cisco Firepower eStreamer via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Cisco Firepower eStreamer via Legacy Agent connector for Microsoft Sentinel
+
+eStreamer is a Client Server API designed for the Cisco Firepower NGFW Solution. The eStreamer client requests detailed event data on behalf of the SIEM or logging solution in the Common Event Format (CEF).
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CiscoFirepowerEstreamerCEF)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cisco](https://www.cisco.com/c/en_in/support/index.html) |
+
+## Query samples
+
+**Firewall Blocked Events**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where DeviceAction != "Allow"
+ ```
+
+**File Malware Events**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where Activity == "File Malware Event"
+ ```
+
+**Outbound Web Traffic Port 80**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where DestinationPort == "80"
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 25226 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Install the Firepower eNcore client
+
+Install and configure the Firepower eNcore eStreamer client. For more details, see the full install [guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html).
+
+2.1 Download the Firepower Connector from github
+
+Download the latest version of the Firepower eNcore connector for Microsoft Sentinel [here](https://github.com/CiscoSecurity/fp-05-microsoft-sentinel-connector). If you plan on using python3, use the [python3 eStreamer connector](https://github.com/CiscoSecurity/fp-05-microsoft-sentinel-connector/tree/python3).
+
+2.2 Create a pkcs12 file using the Azure/VM Ip Address
+
+Create a pkcs12 certificate using the public IP of the VM instance in Firepower under System->Integration->eStreamer. For more information, see the install [guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html#_Toc527049443).
+
+2.3 Test Connectivity between the Azure/VM Client and the FMC
+
+Copy the pkcs12 file from the FMC to the Azure/VM instance and run the test utility (./encore.sh test) to ensure a connection can be established. For more details, see the setup [guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html#_Toc527049430).
+
+2.4 Configure encore to stream data to the agent
+
+Configure eNcore to stream data via TCP to the Microsoft agent. This should be enabled by default; however, additional ports and streaming protocols can be configured depending on your network security posture. It is also possible to save the data to the file system. For more information, see [Configure Encore](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html#_Toc527049433).
+
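+Because this collector listens on TCP 25226 rather than the usual 514, it is worth confirming from the eNcore host that the collector machine accepts connections on that port before streaming events to it. A minimal sketch, with the collector IP as a placeholder:
+
+ ```python
+# Minimal sketch: check that the collector machine accepts connections on
+# TCP 25226 before streaming eStreamer events to it. The IP is a placeholder.
+import socket
+
+collector_ip = "192.0.2.10"  # assumption: your collector VM's IP address
+
+try:
+    with socket.create_connection((collector_ip, 25226), timeout=5):
+        print("TCP 25226 is reachable - the agent appears to be listening.")
+except OSError as err:
+    print(f"Cannot reach {collector_ip}:25226 - {err}")
+ ```
+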
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Cisco Secure Email Gateway Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-cisco-secure-email-gateway-via-legacy-agent.md
+
+ Title: "[Deprecated] Cisco Secure Email Gateway via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Cisco Secure Email Gateway via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Cisco Secure Email Gateway via Legacy Agent connector for Microsoft Sentinel
+
+The [Cisco Secure Email Gateway (SEG)](https://www.cisco.com/c/en/us/products/security/email-security/index.html) data connector provides the capability to ingest [Cisco SEG Consolidated Event Logs](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1061902) into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CiscoSEG)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Senders**
+ ```kusto
+CiscoSEGEvent
+
+ | where isnotempty(SrcUserName)
+
+ | summarize count() by SrcUserName
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected [**CiscoSEGEvent**](https://aka.ms/sentinel-CiscoSEG-parser) which is deployed with the Microsoft Sentinel Solution.
++
+> [!NOTE]
+ > This data connector has been developed using AsyncOS 14.0 for Cisco Secure Email Gateway.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Follow these steps to configure Cisco Secure Email Gateway to forward logs via syslog:
+
+2.1. Configure [Log Subscription](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1134718)
+
+>**NOTE:** Select **Consolidated Event Logs** in Log Type field.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoseg?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Citrix Waf Web App Firewall Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-citrix-waf-web-app-firewall-via-legacy-agent.md
+
+ Title: "[Deprecated] Citrix WAF (Web App Firewall) via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Citrix WAF (Web App Firewall) via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Citrix WAF (Web App Firewall) via Legacy Agent connector for Microsoft Sentinel
+
+Citrix WAF (Web App Firewall) is an industry-leading, enterprise-grade WAF solution. Citrix WAF mitigates threats against your public-facing assets, including websites, apps, and APIs. From layer 3 to layer 7, Citrix WAF includes protections such as IP reputation, bot mitigation, defense against the OWASP Top 10 application threats, built-in signatures to protect against application stack vulnerabilities, and more.
+
+Citrix WAF supports Common Event Format (CEF), an industry-standard format on top of Syslog messages. By connecting Citrix WAF CEF logs to Microsoft Sentinel, you can take advantage of search and correlation, alerting, and threat intelligence enrichment for each log.
+
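+To make the queries below easier to read, it helps to see how a CEF record decomposes: seven pipe-delimited header fields after the CEF: prefix, followed by space-separated key=value extensions. The following is a minimal sketch using a made-up, Citrix-style line; real extension values can contain escaped spaces, which this naive split ignores.
+
+ ```python
+# Minimal sketch of CEF structure; the sample line is illustrative, not real
+# NetScaler output. Header = 7 pipe-delimited fields; rest = key=value pairs.
+sample = (
+    "CEF:0|Citrix|NetScaler|13.0|APPFW_XSS|Cross-site scripting|6|"
+    "src=203.0.113.7 spt=443 act=blocked"
+)
+
+version, vendor, product, dev_version, sig_id, name, severity, extension = (
+    sample.split("|", 7)
+)
+# Naive extension parsing; real CEF values may contain escaped spaces.
+extensions = dict(pair.split("=", 1) for pair in extension.split())
+print(vendor, product, sig_id, severity, extensions)
+ ```
+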
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CitrixWAFLogs)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Citrix Systems](https://www.citrix.com/support/) |
+
+## Query samples
+
+**Citrix WAF Logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ ```
+
+**Citrix WAF logs for cross-site scripting**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_XSS"
+
+ ```
+
+**Citrix WAF logs for SQL injection**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_SQL"
+
+ ```
+
+**Citrix WAF logs for buffer overflow**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_STARTURL"
+
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure Citrix WAF to send Syslog messages in CEF format to the proxy machine using the steps below.
+
+1. Follow [this guide](https://support.citrix.com/article/CTX234174) to configure WAF.
+
+2. Follow [this guide](https://support.citrix.com/article/CTX136146) to configure CEF logs.
+
+3. Follow [this guide](https://docs.citrix.com/en-us/citrix-adc/13/system/audit-logging/configuring-audit-logging.html) to forward the logs to the proxy. Make sure to send the logs to port 514 TCP on the Linux machine's IP address.
+++
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/citrix.citrix_waf_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Claroty Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-claroty-via-legacy-agent.md
+
+ Title: "[Deprecated] Claroty via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Claroty via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Claroty via Legacy Agent connector for Microsoft Sentinel
+
+The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (Claroty)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Destinations**
+ ```kusto
+ClarotyEvent
+
+ | where isnotempty(DstIpAddr)
+
+ | summarize count() by DstIpAddr
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected [**ClarotyEvent**](https://aka.ms/sentinel-claroty-parser) which is deployed with the Microsoft Sentinel Solution.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure Claroty to send logs using CEF
+
+Configure log forwarding using CEF:
+
+1. Navigate to the **Syslog** section of the Configuration menu.
+
+2. Select **+Add**.
+
+3. In the **Add New Syslog Dialog** specify Remote Server **IP**, **Port**, **Protocol** and select **Message Format** - **CEF**.
+
+4. Choose **Save** to exit the **Add Syslog dialog**.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-claroty?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Contrast Protect Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-contrast-protect-via-legacy-agent.md
+
+ Title: "[Deprecated] Contrast Protect via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Contrast Protect via Legacy Agent to connect your data source to Microsoft Sentinel."
+ Last updated : 10/23/2023
+# [Deprecated] Contrast Protect via Legacy Agent connector for Microsoft Sentinel
+
+Contrast Protect mitigates security threats in production applications with runtime protection and observability. Attack event results (blocked, probed, suspicious...) and other information can be sent to Microsoft Sentinel to blend with security information from other systems.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ContrastProtect)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Contrast Protect](https://docs.contrastsecurity.com/) |
+
+## Query samples
+
+**All attacks**
+ ```kusto
+let extract_data=(a:string, k:string) { parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k] }; CommonSecurityLog
+ | where DeviceVendor == 'Contrast Security'
+ | extend Outcome = replace(@'INEFFECTIVE', @'PROBED', tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), "")))
+ | where Outcome != 'success'
+ | extend Rule = extract_data(AdditionalExtensions, 'pri')
+ | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
+ | order by TimeGenerated desc
+ ```
+
+**Effective attacks**
+ ```kusto
+let extract_data=(a:string, k:string) {
+ parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k]
+};
+CommonSecurityLog
+
+ | where DeviceVendor == 'Contrast Security'
+
+ | extend Outcome = tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), ""))
+
+ | where Outcome in ('EXPLOITED','BLOCKED','SUSPICIOUS')
+
+ | extend Rule = extract_data(AdditionalExtensions, 'pri')
+
+ | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
+
+ | order by TimeGenerated desc
+
+ ```
+++
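+The extract_data helper in the queries above works by treating the semicolon-delimited AdditionalExtensions string as if it were a URL query string. The same idea expressed in Python, as a minimal sketch (the sample value is made up, not actual Contrast Protect output):
+
+ ```python
+# Minimal sketch of the extract_data helper's logic: swap ';' for '&' and
+# parse the result as a query string. The sample value is illustrative only.
+from urllib.parse import parse_qs
+
+def extract_data(a: str, k: str) -> str:
+    return parse_qs(a.replace(";", "&")).get(k, [""])[0]
+
+additional_extensions = "pri=sql-injection;outcome=BLOCKED;requestId=abc123"
+print(extract_data(additional_extensions, "outcome"))  # -> BLOCKED
+ ```
+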
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure the Contrast Protect agent to forward events to syslog as described here: https://docs.contrastsecurity.com/en/output-to-syslog.html. Generate some attack events for your application.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
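+
+For example, a minimal sanity check (a sketch using the vendor string from the query samples above) confirms that recent Contrast Protect events are arriving:
+
+ ```kusto
+// Count Contrast Protect CEF events received in the last hour
+CommonSecurityLog
+ | where DeviceVendor == 'Contrast Security'
+ | where TimeGenerated > ago(1h)
+ | summarize EventCount = count()
+ ```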
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/contrast_security.contrast_protect_azure_sentinel_solution?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Cyberark Enterprise Password Vault Epv Events Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-cyberark-enterprise-password-vault-epv-events-via-legacy-agent.md
+
+ Title: "[Deprecated] CyberArk Enterprise Password Vault (EPV) Events via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] CyberArk Enterprise Password Vault (EPV) Events via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] CyberArk Enterprise Password Vault (EPV) Events via Legacy Agent connector for Microsoft Sentinel
+
+CyberArk Enterprise Password Vault generates an XML Syslog message for every action taken against the Vault. The EPV sends the XML messages through the Microsoft Sentinel.xsl translator to be converted into CEF standard format and forwarded to a syslog staging server of your choice (syslog-ng, rsyslog). The Log Analytics agent installed on your syslog staging server imports the messages into Microsoft Log Analytics. Refer to the [CyberArk documentation](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) for more guidance on SIEM integrations.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CyberArk)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cyberark](https://www.cyberark.com/services-support/technical-support/) |
+
+## Query samples
+
+**CyberArk Alerts**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cyber-Ark"
+
+ | where DeviceProduct == "Vault"
+
+ | where LogSeverity == "7" or LogSeverity == "10"
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python installed on your machine.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+On the EPV, configure dbparm.ini to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
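+
+As a quick check (a minimal sketch based on the vendor and product strings from the query sample above), list the most recent Vault events:
+
+ ```kusto
+// Most recent CyberArk Vault events, newest first
+CommonSecurityLog
+ | where DeviceVendor == "Cyber-Ark" and DeviceProduct == "Vault"
+ | top 10 by TimeGenerated desc
+ ```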
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python installed on your machine using the following command: python --version
+
+> 2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberark.cyberark_epv_events_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Delinea Secret Server Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-delinea-secret-server-via-legacy-agent.md
+
+ Title: "[Deprecated] Delinea Secret Server via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Delinea Secret Server via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Delinea Secret Server via Legacy Agent connector for Microsoft Sentinel
+
+Common Event Format (CEF) from Delinea Secret Server
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog(DelineaSecretServer)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Delinea](https://delinea.com/support/) |
+
+## Query samples
+
+**Get records where a new secret is created**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
+
+ | where DeviceProduct == "Secret Server"
+
+ | where Activity has "SECRET - CREATE"
+ ```
+
+**Get records where a secret is viewed**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
+
+ | where DeviceProduct == "Secret Server"
+
+ | where Activity has "SECRET - VIEW"
+ ```
+++
+## Prerequisites
+
+To integrate with [Deprecated] Delinea Secret Server via Legacy Agent, make sure you have:
+
+- **Delinea Secret Server**: must be configured to export logs via Syslog
+
+ [Learn more about configuring Secret Server](https://thy.center/ss/link/syslog)
++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
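+
+For example, this minimal check (a sketch using the vendor strings from the query samples above) shows how many Secret Server events arrived per hour:
+
+ ```kusto
+// Hourly count of Secret Server CEF events over the last day
+CommonSecurityLog
+ | where DeviceVendor in ("Delinea Software", "Thycotic Software")
+ | where DeviceProduct == "Secret Server"
+ | where TimeGenerated > ago(1d)
+ | summarize count() by bin(TimeGenerated, 1h)
+ ```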
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/delineainc1653506022260.delinea_secret_server_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Extrahop Reveal X Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-extrahop-reveal-x-via-legacy-agent.md
+
+ Title: "[Deprecated] ExtraHop Reveal(x) via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] ExtraHop Reveal(x) via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] ExtraHop Reveal(x) via Legacy Agent connector for Microsoft Sentinel
+
+The ExtraHop Reveal(x) data connector enables you to easily connect your Reveal(x) system with Microsoft Sentinel to view dashboards, create custom alerts, and improve investigation. This integration gives you the ability to gain insight into your organization's network and improve your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog ('ExtraHop')<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [ExtraHop](https://www.extrahop.com/support/) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "ExtraHop"
+
+
+ | sort by TimeGenerated
+ ```
+
+**All detections, de-duplicated**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "ExtraHop"
+ | extend categories = iif(DeviceCustomString2 != "", split(DeviceCustomString2, ","), dynamic(null))
+ | extend StartTime = extract("start=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
+ | extend EndTime = extract("end=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
+ | project
+     DeviceEventClassID="ExtraHop Detection",
+     Title=Activity,
+     Description=Message,
+     riskScore=DeviceCustomNumber2,
+     SourceIP,
+     DestinationIP,
+     detectionID=tostring(DeviceCustomNumber1),
+     updateTime=todatetime(ReceiptTime),
+     StartTime,
+     EndTime,
+     detectionURI=DeviceCustomString1,
+     categories,
+     Computer
+ | summarize arg_max(updateTime, *) by detectionID
+ | sort by detectionID desc
+ ```
+++
+## Prerequisites
+
+To integrate with [Deprecated] ExtraHop Reveal(x) via Legacy Agent, make sure you have:
+
+- **ExtraHop**: ExtraHop Discover or Command appliance with firmware version 7.8 or later with a user account that has Unlimited (administrator) privileges.
++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward ExtraHop Networks logs to Syslog agent
+
+1. Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+2. Follow the directions to install the [ExtraHop Detection SIEM Connector bundle](https://aka.ms/asi-syslog-extrahop-forwarding) on your Reveal(x) system. The SIEM Connector is required for this integration.
+3. Enable the trigger for **ExtraHop Detection SIEM Connector - CEF**.
+4. Update the trigger with the ODS syslog targets you created.
+5. The Reveal(x) system formats syslog messages in Common Event Format (CEF) and then sends data to Microsoft Sentinel.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
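+
+As a quick sanity check (a minimal sketch based on the vendor string used in the query samples above):
+
+ ```kusto
+// Latest ExtraHop events received, newest first
+CommonSecurityLog
+ | where DeviceVendor == "ExtraHop"
+ | project TimeGenerated, Activity, SourceIP, DestinationIP
+ | top 10 by TimeGenerated desc
+ ```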
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/extrahop.extrahop_revealx_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated F5 Networks Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-f5-networks-via-legacy-agent.md
+
+ Title: "[Deprecated] F5 Networks via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] F5 Networks via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] F5 Networks via Legacy Agent connector for Microsoft Sentinel
+
+The F5 firewall connector allows you to easily connect your F5 logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (F5)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [F5](https://www.f5.com/services/support) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "F5"
+
+
+ | sort by TimeGenerated
+ ```
+
+**Summarize by time**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "F5"
+
+
+ | summarize count() by TimeGenerated
+
+ | sort by TimeGenerated
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Configure F5 to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.
+
+Go to [F5 Configuring Application Security Event Logging](https://aka.ms/asi-syslog-f5-forwarding), follow the instructions to set up remote logging, using the following guidelines:
+
+1. Set the **Remote storage type** to CEF.
+2. Set the **Protocol setting** to UDP.
+3. Set the **IP address** to the Syslog server IP address.
+4. Set the **port number** to 514, or the port your agent uses.
+5. Set the **facility** to the one that you configured in the Syslog agent (by default, the agent sets this to local4).
+6. You can set the **Maximum Query String Size** to be the same as you configured.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
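+
+For example, a minimal check (a sketch using the vendor string from the query samples above) confirms F5 events are arriving and shows which collector hosts forward them:
+
+ ```kusto
+// Count F5 events per forwarding host over the last hour
+CommonSecurityLog
+ | where DeviceVendor == "F5"
+ | where TimeGenerated > ago(1h)
+ | summarize count() by Computer
+ ```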
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5_networks_data_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Fireeye Network Security Nx Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-fireeye-network-security-nx-via-legacy-agent.md
+
+ Title: "[Deprecated] FireEye Network Security (NX) via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] FireEye Network Security (NX) via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] FireEye Network Security (NX) via Legacy Agent connector for Microsoft Sentinel
+
+The [FireEye Network Security (NX)](https://www.fireeye.com/products/network-security.html) data connector provides the capability to ingest FireEye Network Security logs into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (FireEyeNX)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Sources**
+ ```kusto
+FireEyeNXEvent
+
+ | where isnotempty(SrcIpAddr)
+
+ | summarize count() by SrcIpAddr
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, [**FireEyeNXEvent**](https://aka.ms/sentinel-FireEyeNX-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
++
+> [!NOTE]
+ > This data connector has been developed using FEOS release v9.0
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure FireEye NX to send logs using CEF
+
+Complete the following steps to send data using CEF:
+
+2.1. Log into the FireEye appliance with an administrator account
+
+2.2. Click **Settings**
+
+2.3. Click **Notifications**, and then click **rsyslog**
+
+2.4. Check the **Event type** check box
+
+2.5. Make sure Rsyslog settings are:
+
+- Default format: CEF
+
+- Default delivery: Per event
+
+- Default send as: Alert
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
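+
+As a quick check (a minimal sketch; **FireEyeNXEvent** is the parser deployed with the solution, as noted above), verify that the parser returns recent events:
+
+ ```kusto
+// Sample recent events returned by the FireEyeNXEvent parser
+FireEyeNXEvent
+ | where TimeGenerated > ago(1h)
+ | take 10
+ ```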
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fireeyenx?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Forcepoint Casb Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-casb-via-legacy-agent.md
+
+ Title: "[Deprecated] Forcepoint CASB via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Forcepoint CASB via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Forcepoint CASB via Legacy Agent connector for Microsoft Sentinel
+
+The Forcepoint CASB (Cloud Access Security Broker) Connector allows you to automatically export CASB logs and events into Microsoft Sentinel in real-time. This enriches visibility into user activities across locations and cloud applications, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ForcepointCASB)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**Top 5 Users With The Highest Number Of Logs**
+ ```kusto
+CommonSecurityLog
+
+ | summarize Count = count() by DestinationUserName
+
+ | top 5 by Count
+
+ | render barchart
+ ```
+
+**Top 5 Users by Number of Failed Attempts**
+ ```kusto
+CommonSecurityLog
+
+ | extend outcome = coalesce(column_ifexists("EventOutcome", ""), tostring(split(split(AdditionalExtensions, ";", 2)[0], "=", 1)[0]), "")
+
+ | extend reason = coalesce(column_ifexists("Reason", ""), tostring(split(split(AdditionalExtensions, ";", 3)[0], "=", 1)[0]), "")
+
+ | where outcome =="Failure"
+
+ | summarize Count= count() by DestinationUserName
+
+ | render barchart
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be on your on-prem environment, Azure or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
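+
+For example, a generic check (a sketch, since the query samples above don't filter on a vendor string) shows which CEF sources are currently reporting:
+
+ ```kusto
+// List CEF sources seen in the last hour
+CommonSecurityLog
+ | where TimeGenerated > ago(1h)
+ | summarize count() by DeviceVendor, DeviceProduct
+ ```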
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+
+5. Forcepoint integration installation guide
+
+To complete the installation of this Forcepoint product integration, follow the guide linked below.
+
+[Installation Guide >](https://frcpnt.com/casb-sentinel)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-casb?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Forcepoint Csg Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-csg-via-legacy-agent.md
+
+ Title: "[Deprecated] Forcepoint CSG via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Forcepoint CSG via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Forcepoint CSG via Legacy Agent connector for Microsoft Sentinel
+
+Forcepoint Cloud Security Gateway is a converged cloud security service that provides visibility, control, and threat protection for users and data, wherever they are. For more information, visit: https://www.forcepoint.com/product/cloud-security-gateway
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (Forcepoint CSG)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**Top 5 Web requested Domains with log severity equal to 6 (Medium)**
+ ```kusto
+CommonSecurityLog
+
+ | where TimeGenerated <= ago(0m)
+
+ | where DeviceVendor == "Forcepoint CSG"
+
+ | where DeviceProduct == "Web"
+
+ | where LogSeverity == 6
+
+ | where DeviceCustomString2 != ""
+
+ | summarize Count=count() by DeviceCustomString2
+
+ | top 5 by Count
+
+ | render piechart
+ ```
+
+**Top 5 Web Users with 'Action' equal to 'Blocked'**
+ ```kusto
+CommonSecurityLog
+
+ | where TimeGenerated <= ago(0m)
+
+ | where DeviceVendor == "Forcepoint CSG"
+
+ | where DeviceProduct == "Web"
+
+ | where Activity == "Blocked"
+
+ | where SourceUserID != "Not available"
+
+ | summarize Count=count() by SourceUserID
+
+ | top 5 by Count
+
+ | render piechart
+ ```
+
+**Top 5 Sender Email Addresses Where Spam Score Greater Than 10.0**
+ ```kusto
+CommonSecurityLog
+
+ | where TimeGenerated <= ago(0m)
+
+ | where DeviceVendor == "Forcepoint CSG"
+
+ | where DeviceProduct == "Email"
+
+ | where DeviceCustomFloatingPoint1 > 10.0
+
+ | summarize Count=count() by SourceUserName
+
+ | top 5 by Count
+
+ | render barchart
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+This integration requires the Linux Syslog agent to collect your Forcepoint Cloud Security Gateway Web/Email logs on port 514 TCP as Common Event Format (CEF) and forward them to Microsoft Sentinel.
+
+ Your Data Connector Syslog Agent Installation Command is:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Implementation options
+
+The integration is available with two implementation options.
+
+2.1 Docker Implementation
+
+Uses Docker images in which the integration component is preinstalled with all necessary dependencies.
+
+Follow the instructions provided in the Integration Guide linked below.
+
+[Integration Guide >](https://frcpnt.com/csg-sentinel)
+
+2.2 Traditional Implementation
+
+Requires the manual deployment of the integration component inside a clean Linux machine.
+
+Follow the instructions provided in the Integration Guide linked below.
+
+[Integration Guide >](https://frcpnt.com/csg-sentinel)
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
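+
+As a quick check (a minimal sketch using the vendor string from the query samples above), count Web and Email events received in the last hour:
+
+ ```kusto
+// Count Forcepoint CSG events by product over the last hour
+CommonSecurityLog
+ | where DeviceVendor == "Forcepoint CSG"
+ | where TimeGenerated > ago(1h)
+ | summarize count() by DeviceProduct
+ ```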
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-csg?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Forcepoint Ngfw Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-ngfw-via-legacy-agent.md
+
+ Title: "[Deprecated] Forcepoint NGFW via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Forcepoint NGFW via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Forcepoint NGFW via Legacy Agent connector for Microsoft Sentinel
+
+The Forcepoint NGFW (Next Generation Firewall) connector allows you to automatically export user-defined Forcepoint NGFW logs into Microsoft Sentinel in real-time. This enriches visibility into user activities recorded by NGFW, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ForcePointNGFW)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**Show all terminated actions from the Forcepoint NGFW**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Forcepoint"
+
+ | where DeviceProduct == "NGFW"
+
+ | where DeviceAction == "Terminate"
+
+ ```
+
+**Show all Forcepoint NGFW events with suspected compromise behavior**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Forcepoint"
+
+ | where DeviceProduct == "NGFW"
+
+ | where Activity contains "compromise"
+
+ ```
+
+**Show chart grouping all Forcepoint NGFW events by Activity type**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Forcepoint"
+
+ | where DeviceProduct == "NGFW"
+
+ | summarize count=count() by Activity
+
+ | render barchart
+
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
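+
+For example, a minimal check (a sketch based on the vendor and product strings from the query samples above):
+
+ ```kusto
+// Most recent Forcepoint NGFW events, newest first
+CommonSecurityLog
+ | where DeviceVendor == "Forcepoint" and DeviceProduct == "NGFW"
+ | top 10 by TimeGenerated desc
+ ```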
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+
+5. Forcepoint integration installation guide
+
+To complete the installation of this Forcepoint product integration, follow the guide linked below.
+
+[Installation Guide >](https://frcpnt.com/ngfw-sentinel)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-ngfw?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Iboss Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-iboss-via-legacy-agent.md
+
+ Title: "[Deprecated] iboss via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] iboss via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] iboss via Legacy Agent connector for Microsoft Sentinel
+
+The [iboss](https://www.iboss.com) data connector enables you to seamlessly connect your Threat Console to Microsoft Sentinel and enrich your instance with iboss URL event logs. Our logs are forwarded in Common Event Format (CEF) over Syslog and the configuration required can be completed on the iboss platform without the use of a proxy. Take advantage of our connector to garner critical data points and gain insight into security threats.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | ibossUrlEvent<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [iboss](https://www.iboss.com/contact-us/) |
+
+## Query samples
+
+**Logs Received from the past week**
+ ```kusto
+ibossUrlEvent
+ | where TimeGenerated > ago(7d)
+ ```
+++
+## Vendor installation instructions
+
+1. Configure a dedicated proxy Linux machine
+
+If you are using the iboss gov environment, or if you prefer to forward the logs to a dedicated proxy Linux machine, proceed with this step. In all other cases, advance to step 2.
+
+1.1 Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.2 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the dedicated proxy Linux machine between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.3 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+> 2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs
+
+Set your Threat Console to send Syslog messages in CEF format to your Azure workspace. Make a note of your Workspace ID and Primary Key within your Log Analytics workspace (select the workspace from the **Log Analytics workspaces** menu in the Azure portal, and then select **Agents management** in the **Settings** section).
+
+>1. Navigate to Reporting & Analytics inside your iboss Console
+
+>2. Select Log Forwarding -> Forward From Reporter
+
+>3. Select Actions -> Add Service
+
+>4. Toggle to Microsoft Sentinel as a Service Type and input your Workspace ID/Primary Key along with other criteria. If a dedicated proxy Linux machine has been configured, toggle to Syslog as a Service Type and configure the settings to point to your dedicated proxy Linux machine
+
+>5. Wait one to two minutes for the setup to complete
+
+>6. Select your Microsoft Sentinel Service and verify the Microsoft Sentinel Setup Status is Successful. If a dedicated proxy Linux machine has been configured, you may proceed with validating your connection
+
+3. Validate connection
+
+Open Log Analytics to check if the logs are received in the ibossUrlEvent table.
+
+>It may take about 20 minutes until the connection streams data to your workspace
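+
+For example, a minimal check against the ibossUrlEvent table listed in the connector attributes above (a sketch, not part of the vendor instructions):
+
+ ```kusto
+// Sample recent iboss URL events
+ibossUrlEvent
+ | where TimeGenerated > ago(1h)
+ | take 10
+ ```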
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy (Only applicable if a dedicated proxy Linux machine has been configured).
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/iboss.iboss-sentinel-connector?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Illumio Core Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-illumio-core-via-legacy-agent.md
+
+ Title: "[Deprecated] Illumio Core via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Illumio Core via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Illumio Core via Legacy Agent connector for Microsoft Sentinel
+
+The [Illumio Core](https://www.illumio.com/products/) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (IllumioCore)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Event Types**
+ ```kusto
+IllumioCoreEvent
+
+ | where isnotempty(EventType)
+
+ | summarize count() by EventType
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **IllumioCoreEvent**, and load the function code, or click [here](https://aka.ms/sentinel-IllumioCore-parser). The function usually takes 10-15 minutes to activate after solution installation/update and maps Illumio Core events to the Microsoft Sentinel Information Model (ASIM).
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure Illumio Core to send logs using CEF
+
+2.1 Configure Event Format
+
+ 1. From the PCE web console menu, choose **Settings > Event Settings** to view your current settings.
+
+ 2. Click **Edit** to change the settings.
+
+ 3. Set **Event Format** to CEF.
+
+ 4. (Optional) Configure **Event Severity** and **Retention Period**.
+
+2.2 Configure event forwarding to an external syslog server
+
+ 1. From the PCE web console menu, choose **Settings > Event Settings**.
+
+ 2. Click **Add**.
+
+ 3. Click **Add Repository**.
+
+ 4. Complete the **Add Repository** dialog.
+
+ 5. Click **OK** to save the event forwarding configuration.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
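+
+As a quick check (a minimal sketch; **IllumioCoreEvent** is the parser deployed with the solution, as noted above), verify that parsed events are arriving:
+
+ ```kusto
+// Count recent Illumio Core events by event type
+IllumioCoreEvent
+ | where TimeGenerated > ago(1h)
+ | summarize count() by EventType
+ ```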
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-illumiocore?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Illusive Platform Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-illusive-platform-via-legacy-agent.md
+
+ Title: "[Deprecated] Illusive Platform via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Illusive Platform via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Illusive Platform via Legacy Agent connector for Microsoft Sentinel
+
+The Illusive Platform Connector allows you to share Illusive's attack surface analysis data and incident logs with Microsoft Sentinel and view this information in dedicated dashboards that offer insight into your organization's attack surface risk (ASM Dashboard) and track unauthorized lateral movement in your organization's network (ADS Dashboard).
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (illusive)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Illusive Networks](https://illusive.com/support) |
+
+## Query samples
+
+**Number of Incidents in the last 30 days in which Trigger Type is found**
+ ```kusto
+union CommonSecurityLog
+ | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
+ | where Message !contains "hasForensics"
+ | where TimeGenerated > ago(30d)
+ | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
+ | summarize by DestinationServiceName, DeviceCustomNumber2
+ | summarize incident_count=count() by DestinationServiceName
+ ```
+
+**Top 10 alerting hosts in the last 30 days**
+ ```kusto
+union CommonSecurityLog
+ | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
+ | where Message !contains "hasForensics"
+ | where TimeGenerated > ago(30d)
+ | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
+ | summarize by AlertingHost=iff(SourceHostName != "" and SourceHostName != "Failed to obtain", SourceHostName, SourceIP) ,DeviceCustomNumber2
+ | where AlertingHost != "" and AlertingHost != "Failed to obtain"
+ | summarize incident_count=count() by AlertingHost
+ | order by incident_count
+ | limit 10
+ ```
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Illusive Common Event Format (CEF) logs to Syslog agent
+
+1. Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+2. Log on to the Illusive Console, and navigate to Settings > Reporting.
+3. Find Syslog Servers.
+4. Supply the following information:
+   1. Host name: Linux Syslog agent IP address or FQDN host name
+   2. Port: 514
+   3. Protocol: TCP
+   4. Audit messages: Send audit messages to server
+5. To add the syslog server, click Add.
+6. For more information about how to add a new syslog server in the Illusive platform, see the Illusive Networks Admin Guide here: https://support.illusivenetworks.com/hc/en-us/sections/360002292119-Documentation-by-Version
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
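+
+For example, a minimal check (a sketch based on the event class IDs used in the query samples above):
+
+ ```kusto
+// Count recent Illusive events by event class
+CommonSecurityLog
+ | where DeviceEventClassID startswith "illusive:"
+ | where TimeGenerated > ago(1h)
+ | summarize count() by DeviceEventClassID
+ ```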
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/illusivenetworks.illusive_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Kaspersky Security Center Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-kaspersky-security-center-via-legacy-agent.md
+
+ Title: "[Deprecated] Kaspersky Security Center via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Kaspersky Security Center via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Kaspersky Security Center via Legacy Agent connector for Microsoft Sentinel
+
+The [Kaspersky Security Center](https://support.kaspersky.com/KSC/13/en-US/3396.htm) data connector provides the capability to ingest [Kaspersky Security Center logs](https://support.kaspersky.com/KSC/13/en-US/151336.htm) into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (KasperskySC)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Destinations**
+ ```kusto
+KasperskySCEvent
+
+ | where isnotempty(DstIpAddr)
+
+ | summarize count() by DstIpAddr
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, [**KasperskySCEvent**](https://aka.ms/sentinel-kasperskysc-parser), which is deployed with the Microsoft Sentinel solution and is required for the connector to work as expected.
++
+> [!NOTE]
+ > This data connector has been developed using Kaspersky Security Center 13.1.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure Kaspersky Security Center to send logs using CEF
+
+[Follow the instructions](https://support.kaspersky.com/KSC/13/en-US/89277.htm) to configure event export from Kaspersky Security Center.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-kasperskysc?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Morphisec Utpp Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-morphisec-utpp-via-legacy-agent.md
+
+ Title: "[Deprecated] Morphisec UTPP via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Morphisec UTPP via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Morphisec UTPP via Legacy Agent connector for Microsoft Sentinel
+
+Integrate vital insights from your security products with the Morphisec Data Connector for Microsoft Sentinel and expand your analytical capabilities with search and correlation, threat intelligence, and customized alerts. Morphisec's Data Connector provides visibility into today's most advanced threats, including sophisticated fileless attacks, in-memory exploits, and zero-days. With a single, cross-product view, you can make real-time, data-backed decisions to protect your most important assets.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser |
+| **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) |
+
+## Query samples
+
+**Threats count by host**
+ ```kusto
+
+Morphisec
+
+
+ | summarize Times_Attacked=count() by SourceHostName
+ ```
+
+**Threats count by username**
+ ```kusto
+
+Morphisec
+
+
+ | summarize Times_Attacked=count() by SourceUserName
+ ```
+
+**Threats with high severity**
+ ```kusto
+
+Morphisec
+
+
+ | where toint(LogSeverity) > 7
+ | order by TimeGenerated
+ ```
+++
+## Vendor installation instructions
++
+These queries and workbooks depend on a Kusto function to work as expected. Follow the steps to use the Kusto function alias **Morphisec**
+in queries and workbooks. [Follow steps to get this Kusto function.](https://aka.ms/sentinel-morphisecutpp-parser)
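+
+As a hedged example of using the alias once it is deployed, the following sketch trends total and high-severity threats per day, reusing the `LogSeverity` convention from the samples above:
+
+ ```kusto
+Morphisec
+ | where TimeGenerated > ago(7d)
+ | summarize Total = count(), HighSeverity = countif(toint(LogSeverity) > 7) by bin(TimeGenerated, 1d)
+ ```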
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/morphisec.morphisec_utpp_mss?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Netwrix Auditor Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-netwrix-auditor-via-legacy-agent.md
+
+ Title: "[Deprecated] Netwrix Auditor via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Netwrix Auditor via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Netwrix Auditor via Legacy Agent connector for Microsoft Sentinel
+
+Netwrix Auditor data connector provides the capability to ingest [Netwrix Auditor (formerly Stealthbits Privileged Activity Manager)](https://www.netwrix.com/auditor.html) events into Microsoft Sentinel. Refer to [Netwrix documentation](https://helpcenter.netwrix.com/) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function alias** | NetwrixAuditor |
+| **Kusto function url** | https://aka.ms/sentinel-netwrixauditor-parser |
+| **Log Analytics table(s)** | CommonSecurityLog<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Netwrix Auditor Events - All Activities.**
+ ```kusto
+NetwrixAuditor
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on the NetwrixAuditor parser, based on a Kusto Function, to work as expected. This parser is installed along with the solution.
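+
+After installation, a sketch like the following can verify the parser returns data; the `Activity` column is an assumption carried over from the underlying CommonSecurityLog schema:
+
+ ```kusto
+NetwrixAuditor
+ | where TimeGenerated > ago(24h)
+ | summarize count() by Activity
+ | top 10 by count_
+ ```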
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure Netwrix Auditor to send logs using CEF
+
+[Follow the instructions](https://www.netwrix.com/download/QuickStart/Netwrix_Auditor_Add-on_for_HPE_ArcSight_Quick_Start_Guide.pdf) to configure event export from Netwrix Auditor.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-netwrixauditor?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Nozomi Networks N2os Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-nozomi-networks-n2os-via-legacy-agent.md
+
+ Title: "[Deprecated] Nozomi Networks N2OS via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Nozomi Networks N2OS via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Nozomi Networks N2OS via Legacy Agent connector for Microsoft Sentinel
+
+The [Nozomi Networks](https://www.nozominetworks.com/) data connector provides the capability to ingest Nozomi Networks Events into Microsoft Sentinel. Refer to the Nozomi Networks [PDF documentation](https://www.nozominetworks.com/resources/data-sheets-brochures-learning-guides/) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (NozomiNetworks)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Devices**
+ ```kusto
+NozomiNetworksEvents
+
+ | summarize count() by DvcHostname
+
+ | top 10 by count_
+ ```
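+
+An additional sample in the same spirit (a sketch, reusing the `DvcHostname` column from the query above) breaks events down per device and per day:
+
+ ```kusto
+NozomiNetworksEvents
+ | where TimeGenerated > ago(7d)
+ | summarize Events = count() by DvcHostname, bin(TimeGenerated, 1d)
+ | sort by TimeGenerated asc
+ ```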
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected, [**NozomiNetworksEvents**](https://aka.ms/sentinel-NozomiNetworks-parser), which is deployed with the Microsoft Sentinel solution.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Follow these steps to configure the Nozomi Networks device to send Alerts, Audit Logs, and Health Logs via syslog in CEF format:
+
+> 1. Log in to the Guardian console.
+
+> 2. Navigate to Administration -> Data Integration, press +Add, and select Common Event Format (CEF) from the drop-down.
+
+> 3. Create a New Endpoint using the appropriate host information and enable Alerts, Audit Logs, and Health Logs for sending.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-nozominetworks?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Ossec Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-ossec-via-legacy-agent.md
+
+ Title: "[Deprecated] OSSEC via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] OSSEC via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] OSSEC via Legacy Agent connector for Microsoft Sentinel
+
+OSSEC data connector provides the capability to ingest [OSSEC](https://www.ossec.net/) events into Microsoft Sentinel. Refer to [OSSEC documentation](https://www.ossec.net/docs) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (OSSEC)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**Top 10 Rules**
+ ```kusto
+OSSECEvent
+
+ | summarize count() by RuleName
+
+ | top 10 by count_
+ ```
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias OSSEC, and load the function code, or click [here](https://aka.ms/sentinel-OSSECEvent-parser). On the second line of the query, enter the hostname(s) of your OSSEC device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation or update.
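+
+Once the function is active, a minimal sanity check such as the following confirms the alias resolves and rows are flowing:
+
+ ```kusto
+OSSECEvent
+ | where TimeGenerated > ago(1d)
+ | take 10
+ ```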
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+[Follow these steps](https://www.ossec.net/docs/docs/manual/output/syslog-output.html) to configure OSSEC sending alerts via syslog.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ossec?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Palo Alto Networks Cortex Data Lake Cdl Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-palo-alto-networks-cortex-data-lake-cdl-via-legacy-agent.md
+
+ Title: "[Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent connector for Microsoft Sentinel
+
+The [Palo Alto Networks CDL](https://www.paloaltonetworks.com/cortex/cortex-data-lake) data connector provides the capability to ingest [CDL logs](https://docs.paloaltonetworks.com/cortex/cortex-data-lake/log-forwarding-schema-reference/log-forwarding-schema-overview.html) into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (PaloAltoNetworksCDL)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Destinations**
+ ```kusto
+PaloAltoCDLEvent
+
+ | where isnotempty(DstIpAddr)
+
+ | summarize count() by DstIpAddr
+
+ | top 10 by count_
+ ```
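+
+A further sketch along the same lines (reusing `DstIpAddr` from the sample above) counts distinct destinations per day:
+
+ ```kusto
+PaloAltoCDLEvent
+ | where isnotempty(DstIpAddr)
+ | summarize UniqueDestinations = dcount(DstIpAddr) by bin(TimeGenerated, 1d)
+ ```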
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected, [**PaloAltoCDLEvent**](https://aka.ms/sentinel-paloaltocdl-parser), which is deployed with the Microsoft Sentinel solution.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure Cortex Data Lake to forward logs to a Syslog Server using CEF
+
+[Follow the instructions](https://docs.paloaltonetworks.com/cortex/cortex-data-lake/cortex-data-lake-getting-started/get-started-with-log-forwarding-app/forward-logs-from-logging-service-to-syslog-server.html) to configure logs forwarding from Cortex Data Lake to a Syslog Server.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Pingfederate Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-pingfederate-via-legacy-agent.md
+
+ Title: "[Deprecated] PingFederate via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] PingFederate via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] PingFederate via Legacy Agent connector for Microsoft Sentinel
+
+The [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) data connector provides the capability to ingest [PingFederate events](https://docs.pingidentity.com/bundle/pingfederate-102/page/lly1564002980532.html) into Microsoft Sentinel. Refer to [PingFederate documentation](https://docs.pingidentity.com/bundle/pingfederate-102/page/tle1564002955874.html) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (PingFederate)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Devices**
+ ```kusto
+PingFederateEvent
+
+ | summarize count() by DvcHostname
+
+ | top 10 by count_
+ ```
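+
+To visualize ingestion over time, a sketch like the following renders a daily trend (assuming the **PingFederateEvent** parser noted below is deployed):
+
+ ```kusto
+PingFederateEvent
+ | where TimeGenerated > ago(14d)
+ | summarize Events = count() by bin(TimeGenerated, 1d)
+ | render timechart
+ ```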
+++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected, [**PingFederateEvent**](https://aka.ms/sentinel-PingFederate-parser), which is deployed with the Microsoft Sentinel solution.
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+[Follow these steps](https://docs.pingidentity.com/bundle/pingfederate-102/page/gsn1564002980953.html) to configure PingFederate sending audit log via syslog in CEF format.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+>It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python -version
+
+>2. You must have elevated permissions (sudo) on your machine
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pingfederate?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Sonicwall Firewall Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-sonicwall-firewall-via-legacy-agent.md
+
+ Title: "[Deprecated] SonicWall Firewall via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] SonicWall Firewall via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] SonicWall Firewall via Legacy Agent connector for Microsoft Sentinel
+
+Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by SonicWall to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (SonicWall)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [SonicWall](https://www.sonicwall.com/support/) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "SonicWall"
+
+ | sort by TimeGenerated desc
+ ```
+
+**Summarize by destination IP and port**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "SonicWall"
+
+ | summarize count() by DestinationIP, DestinationPort, TimeGenerated
+
+ | sort by TimeGenerated desc
+ ```
+
+**Show all dropped traffic from the SonicWall Firewall**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "SonicWall"
+
+ | where AdditionalExtensions contains "fw_action='drop'"
+ ```
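+
+Building on the dropped-traffic sample above, this sketch parses the `fw_action` value out of `AdditionalExtensions` to summarize all firewall actions (the field format is assumed to match the `fw_action='drop'` pattern shown):
+
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "SonicWall"
+ | extend FwAction = extract(@"fw_action='(\w+)'", 1, AdditionalExtensions)
+ | where isnotempty(FwAction)
+ | summarize count() by FwAction
+ ```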
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward SonicWall Firewall Common Event Format (CEF) logs to Syslog agent
+
+Set your SonicWall Firewall to send Syslog messages in CEF format to the proxy machine. Make sure you send the logs to port 514 TCP on the machine's IP address.
+
+ Follow the instructions. Make sure you select local use 4 as the facility, and then select ArcSight as the Syslog format.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sonicwall-inc.sonicwall-networksecurity-azure-sentinal?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Trend Micro Apex One Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-trend-micro-apex-one-via-legacy-agent.md
+
+ Title: "[Deprecated] Trend Micro Apex One via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] Trend Micro Apex One via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] Trend Micro Apex One via Legacy Agent connector for Microsoft Sentinel
+
+The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/appendices/syslog-mapping-cef.aspx) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/preface_001.aspx) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (TrendMicroApexOne)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+
+TMApexOneEvent
+
+ | sort by TimeGenerated
+ ```
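+
+A second sketch trends ingestion volume hourly, assuming the **TMApexOneEvent** parser noted below is deployed:
+
+ ```kusto
+TMApexOneEvent
+ | where TimeGenerated > ago(7d)
+ | summarize Events = count() by bin(TimeGenerated, 1h)
+ ```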
+++
+## Vendor installation instructions
++
+> This data connector depends on a parser based on a Kusto Function to work as expected, [**TMApexOneEvent**](https://aka.ms/sentinel-TMApexOneEvent-parser), which is deployed with the Microsoft Sentinel solution.
++
+> [!NOTE]
+ > This data connector has been developed using Trend Micro Apex Central 2019
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+[Follow these steps](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/detections/logs_001/syslog-forwarding.aspx) to configure Apex Central sending alerts via syslog. While configuring, on step 6, select the log format **CEF**.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-trendmicroapexone?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Varmour Application Controller Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-varmour-application-controller-via-legacy-agent.md
+
+ Title: "[Deprecated] vArmour Application Controller via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] vArmour Application Controller via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] vArmour Application Controller via Legacy Agent connector for Microsoft Sentinel
+
+vArmour reduces operational risk and increases cyber resiliency by visualizing and controlling application relationships across the enterprise. This vArmour connector enables streaming of Application Controller Violation Alerts into Microsoft Sentinel, so you can take advantage of search & correlation, alerting, & threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (vArmour)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [vArmour Networks](https://www.varmour.com/contact-us/) |
+
+## Query samples
+
+**Top 10 App to App violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | extend AppNameSrcDstPair = extract_all("AppName=;(\\w+)", AdditionalExtensions)
+
+ | summarize count() by tostring(AppNameSrcDstPair)
+
+ | top 10 by count_
+
+ ```
+
+**Top 10 Policy names matching violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by DeviceCustomString1
+
+ | top 10 by count_ desc
+
+ ```
+
+**Top 10 Source IPs generating violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by SourceIP
+
+ | top 10 by count_
+
+ ```
+
+**Top 10 Destination IPs generating violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by DestinationIP
+
+ | top 10 by count_
+
+ ```
+
+**Top 10 Application Protocols matching violations**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "vArmour"
+
+ | where DeviceProduct == "AC"
+
+ | where Activity == "POLICY_VIOLATION"
+
+ | summarize count() by ApplicationProtocol
+
+ | top 10 by count_
+
+ ```
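+
+Combining the filters used throughout these samples, the following sketch trends policy violations per hour:
+
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "vArmour"
+ | where DeviceProduct == "AC"
+ | where Activity == "POLICY_VIOLATION"
+ | summarize Violations = count() by bin(TimeGenerated, 1h)
+ ```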
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Configure the vArmour Application Controller to forward Common Event Format (CEF) logs to the Syslog agent
+
+Send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address.
+
+2.1 Download the vArmour Application Controller user guide
+
+Download the user guide from https://support.varmour.com/hc/en-us/articles/360057444831-vArmour-Application-Controller-6-0-User-Guide.
+
+2.2 Configure the Application Controller to Send Policy Violations
+
+In the user guide - refer to "Configuring Syslog for Monitoring and Violations" and follow steps 1 to 3.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/varmournetworks.varmour_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Deprecated Wirex Network Forensics Platform Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-wirex-network-forensics-platform-via-legacy-agent.md
+
+ Title: "[Deprecated] WireX Network Forensics Platform via Legacy Agent connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Deprecated] WireX Network Forensics Platform via Legacy Agent to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Deprecated] WireX Network Forensics Platform via Legacy Agent connector for Microsoft Sentinel
+
+The WireX Systems data connector allows security professionals to integrate with Microsoft Sentinel to further enrich their forensics investigations: not only to encompass the contextual content offered by WireX, but also to analyze data from other sources, and to create custom dashboards that give the most complete picture during a forensic investigation and support custom workflows.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (WireXNFPevents)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [WireX Systems](https://wirexsystems.com/contact-us/) |
+
+## Query samples
+
+**All Imported Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+
+ ```
+
+**Imported DNS Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "DNS"
+
+ ```
+
+**Imported HTTP Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "HTTP"
+
+ ```
+
+**Imported TDS Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "TDS"
+
+ ```
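+
+The three protocol-specific queries above can also be combined into a single breakdown, as in this sketch:
+
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ | summarize count() by ApplicationProtocol
+ | sort by count_ desc
+ ```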
+++
+## Vendor installation instructions
+
+1. Linux Syslog agent configuration
+
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
+
+1.1 Select or create a Linux machine
+
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
+
+1.2 Install the CEF collector on the Linux machine
+
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to install and apply the CEF collector:
+
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
+
+2. Forward Common Event Format (CEF) logs to Syslog agent
+
+Contact WireX support (https://wirexsystems.com/contact-us/) in order to configure your NFP solution to send Syslog messages in CEF format to the proxy machine. Make sure that the central manager can send the logs to port 514 TCP on the machine's IP address.
+
+3. Validate connection
+
+Follow the instructions to validate your connectivity:
+
+Open Log Analytics to check if the logs are received using the CommonSecurityLog schema.
+
+> It may take about 20 minutes until the connection streams data to your workspace.
+
+If the logs are not received, run the following connectivity validation script:
+
+> 1. Make sure that you have Python on your machine using the following command: python --version.
+
+> 2. You must have elevated permissions (sudo) on your machine.
+
+ Run the following command to validate your connectivity:
+
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
+
+4. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wirexsystems1584682625009.wirex_network_forensics_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Exchange Security Insights Online Collector Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/exchange-security-insights-online-collector-using-azure-functions.md
Title: "Exchange Security Insights Online Collector (using Azure Functions) conn
description: "Learn how to install the connector Exchange Security Insights Online Collector (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 10/23/2023
To integrate with Exchange Security Insights Online Collector (using Azure Funct
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. Follow the steps for each Parser to create the Kusto Functions alias : [**ExchangeConfiguration**](https://aka.ms/sentinel-ESI-ExchangeConfiguration-Online-parser) and [**ESI_ExchConfigAvailableEnvironments**](https://aka.ms/sentinel-ESI-ExchangeEnvironmentList-Online-parser)
+ > This data connector depends on a parser based on a Kusto Function to work as expected. Follow the steps for each Parser to create the Kusto Functions alias : [**ExchangeConfiguration**](https://aka.ms/sentinel-ESI-ExchangeConfiguration-Online-parser) and [**ExchangeEnvironmentList**](https://aka.ms/sentinel-ESI-ExchangeEnvironmentList-Online-parser)
**STEP 1 - Parsers deployment**
sentinel Feedly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/feedly.md
+
+ Title: "Feedly connector for Microsoft Sentinel"
+description: "Learn how to install the connector Feedly to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# Feedly connector for Microsoft Sentinel
+
+This connector allows you to ingest IoCs from Feedly.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | feedly_indicators_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Feedly Inc](https://feedly.com/i/support/contactUs) |
+
+## Query samples
+
+**All IoCs collected**
+ ```kusto
+feedly_indicators_CL
+
+ | sort by TimeGenerated desc
+ ```
+
+**IP addresses**
+ ```kusto
+feedly_indicators_CL
+
+ | where type_s == "ip"
+
+ | sort by TimeGenerated desc
+ ```
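+
+A further sketch summarizes the ingested indicators by type, reusing the `type_s` column from the sample above:
+
+ ```kusto
+feedly_indicators_CL
+ | summarize count() by type_s
+ | sort by count_ desc
+ ```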
+++
+## Prerequisites
+
+To integrate with Feedly make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to Feedly to pull IoCs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+
+(Optional Step) Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault.
+
+Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+
+Step 1 - Get your Feedly API token
+
+Go to https://feedly.com/i/team/api and generate a new API token for the connector.
+
+Step 2 - Deploy the connector
+
+Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function.
+
+>**IMPORTANT:** Before deploying the Feedly connector, have the Workspace ID and Workspace Primary Key, as well as the Feedly API Token, readily available.
+++
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the Feedly connector.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-Feedly-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the SentinelWorkspaceId, SentinelWorkspaceKey, FeedlyApiKey, FeedlyStreamIds, DaysToBackfill.
+>Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+Option 2 - Manual Deployment of Azure Functions
+
+Use the following step-by-step instructions to deploy the Feedly connector manually with Azure Functions (Deployment via Visual Studio Code).
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/feedlyinc1693853810319.azure-sentinel-solution-feedly?tab=Overview) in the Azure Marketplace.
sentinel Holm Security Asset Data Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/holm-security-asset-data-using-azure-functions.md
Title: "Holm Security Asset Data (using Azure Functions) connector for Microsoft
description: "Learn how to install the connector Holm Security Asset Data (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 08/28/2023 Last updated : 10/23/2023
The connector provides the capability to poll data from Holm Security Center int
| | | | **Log Analytics table(s)** | net_assets_CL<br/> web_assets_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [Holm Security](https://support.holmsecurity.com/hc/en-us) |
+| **Supported by** | [Holm Security](https://support.holmsecurity.com/) |
## Query samples **All low net assets** ```kusto
-net_assets_Cl
+net_assets_CL
| where severity_s == 'low' ``` **All low web assets** ```kusto
-web_assets_Cl
+web_assets_CL
| where severity_s == 'low' ```
web_assets_Cl
To integrate with Holm Security Asset Data (using Azure Functions) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **Holm Security API Token**: Holm Security API Token is required. [Holm Security API Token](https://support.holmsecurity.com/hc/en-us)
+- **Holm Security API Token**: Holm Security API Token is required. [Holm Security API Token](https://support.holmsecurity.com/)
## Vendor installation instructions
To integrate with Holm Security Asset Data (using Azure Functions) make sure you
**STEP 1 - Configuration steps for the Holm Security API**
- [Follow these instructions](https://support.holmsecurity.com/hc/en-us/articles/360027651591-How-do-I-set-up-an-API-token-) to create an API authentication token.
+ [Follow these instructions](https://support.holmsecurity.com/knowledge/how-do-i-set-up-an-api-token) to create an API authentication token.
**STEP 2 - Use the below deployment option to deploy the connector and the associated Azure Function**
sentinel Isc Bind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/isc-bind.md
Title: "ISC Bind connector for Microsoft Sentinel"
description: "Learn how to install the connector ISC Bind to connect your data source to Microsoft Sentinel." Previously updated : 03/25/2023 Last updated : 10/23/2023
To integrate with ISC Bind make sure you have:
## Vendor installation instructions
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias ISCBind and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/ISC%20Bind/Parsers/ISCBind.txt).The function usually takes 10-15 minutes to activate after solution installation/update.
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias ISCBind, and load the function code, or click [here](https://aka.ms/sentinel-iscbind-parser). The function usually takes 10-15 minutes to activate after solution installation/update.
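+
+Once the solution is installed and the function has activated, a quick smoke test is to invoke the parser alias directly; a minimal sketch (the output columns depend on the deployed parser):
+
+ ```kusto
+ISCBind
+ | where TimeGenerated > ago(1h)
+ | take 10
+ ```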
1. Install and onboard the agent for Linux
sentinel Mailguard 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mailguard-365.md
+
+ Title: "MailGuard 365 connector for Microsoft Sentinel"
+description: "Learn how to install the connector MailGuard 365 to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# MailGuard 365 connector for Microsoft Sentinel
+
+MailGuard 365 provides enhanced email security for Microsoft 365. Exclusive to the Microsoft marketplace, MailGuard 365 is integrated with Microsoft 365 security (including Defender) for enhanced protection against advanced email threats like phishing, ransomware, and sophisticated BEC attacks.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | MailGuard365_Threats_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [MailGuard 365](https://www.mailguard365.com/support/) |
+
+## Query samples
+
+**All phishing threats stopped by MailGuard 365**
+ ```kusto
+MailGuard365_Threats_CL
+
+ | where Category == "Phishing"
+ ```
+
+**All threats summarized by sender email address**
+ ```kusto
+MailGuard365_Threats_CL
+
+ | summarize count() by Sender_Email_s
+ ```
+
+**All threats summarized by category**
+ ```kusto
+MailGuard365_Threats_CL
+
+ | summarize count() by Category
+ ```
+++
+## Vendor installation instructions
+
+Configure and connect MailGuard 365
+
+1. In the MailGuard 365 Console, click **Settings** on the navigation bar.
+2. Click the **Integrations** tab.
+3. Click **Enable Microsoft Sentinel**.
+4. Enter your workspace ID and primary key from the fields below, then click **Finish**.
+5. For additional instructions, please contact MailGuard 365 support.
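+
+Once the integration is enabled, a quick verification query can confirm that events are reaching the workspace; a minimal sketch, assuming threats land in the `MailGuard365_Threats_CL` table with the `Category` field shown above:
+
+ ```kusto
+MailGuard365_Threats_CL
+ | where TimeGenerated > ago(24h)
+ | summarize Threats = count() by bin(TimeGenerated, 1h), Category
+ ```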
+++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mailguardptylimited.microsoft-sentinel-solution-mailguard365?tab=Overview) in the Azure Marketplace.
sentinel Netskope Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-functions.md
Title: "Netskope (using Azure Functions) connector for Microsoft Sentinel"
description: "Learn how to install the connector Netskope (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 08/28/2023 Last updated : 10/23/2023
Netskope
To integrate with Netskope (using Azure Functions) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **Netskope API Token**: A Netskope account and API Token are required.
+- **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html). **Note:** A Netskope account is required.
+ ## Vendor installation instructions
sentinel Nxlog Dns Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-dns-logs.md
Title: "NXLog DNS Logs connector for Microsoft Sentinel"
description: "Learn how to install the connector NXLog DNS Logs to connect your data source to Microsoft Sentinel." Previously updated : 08/28/2023 Last updated : 10/23/2023 # NXLog DNS Logs connector for Microsoft Sentinel
-The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows/apps/trace-processing/overview)) for collecting both Audit and Analytical DNS Server events. The [NXLog *im_etw* module](https://nxlog.co/documentation/nxlog-user-guide/im_etw.html) reads event tracing data directly for maximum efficiency, without the need to capture the event trace into an .etl file. This REST API connector can forward DNS Server events to Microsoft Sentinel in real time.
+The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows/apps/trace-processing/overview)) for collecting both Audit and Analytical DNS Server events. The [NXLog *im_etw* module](https://docs.nxlog.co/refman/current/im/etw.html) reads event tracing data directly for maximum efficiency, without the need to capture the event trace into an .etl file. This REST API connector can forward DNS Server events to Microsoft Sentinel in real time.
## Connector attributes
The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows
| | | | **Log Analytics table(s)** | NXLog_DNS_Server_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/user?destination=node/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
## Query samples
ASimDnsMicrosoftNXLog
> This data connector depends on parsers based on Kusto functions deployed with the Microsoft Sentinel Solution to work as expected. The [**ASimDnsMicrosoftNXLog**](https://aka.ms/sentinel-nxlogdnslogs-parser) is designed to leverage Microsoft Sentinel's built-in DNS-related analytics capabilities.
-Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic [Microsoft Sentinel](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) to configure this connector.
+Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic [Microsoft Sentinel](https://docs.nxlog.co/userguide/integrate/microsoft-azure-sentinel.html) to configure this connector.
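+
+After configuration, a quick way to confirm DNS events are flowing is to query the normalized parser; a minimal sketch, assuming the `ASimDnsMicrosoftNXLog` function has activated and using the ASim DNS `EventResult` field:
+
+ ```kusto
+ASimDnsMicrosoftNXLog
+ | where TimeGenerated > ago(1h)
+ | summarize count() by EventResult
+ ```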
sentinel Okta Single Sign On Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/okta-single-sign-on-using-azure-function.md
- Title: "Okta Single Sign-On (using Azure Functions) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Okta Single Sign-On (using Azure Functions) to connect your data source to Microsoft Sentinel."
-- Previously updated : 02/23/2023----
-# Okta Single Sign-On (using Azure Functions) connector for Microsoft Sentinel
-
-The [Okta Single Sign-On (SSO)](https://www.okta.com/products/single-sign-on/) connector provides the capability to ingest audit and event logs from the Okta API into Microsoft Sentinel. The connector provides visibility into these log types in Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Azure functions app code** | https://aka.ms/sentineloktaazurefunctioncodev2 |
-| **Log Analytics table(s)** | Okta_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Active Applications**
-
- ```kusto
- Okta_CL
- | mv-expand todynamic(target_s)
- | where target_s.type == "AppInstance"
- | summarize count() by tostring(target_s.alternateId)
- | top 10 by count_
- ```
-
-**Top 10 Client IP Addresses**
- ```kusto
- Okta_CL
- | summarize count() by client_ipAddress_s
- | top 10 by count_
- ```
-## Prerequisites
-
-To integrate with Okta Single Sign-On (using Azure Function), make sure you have the following prerequisites:
--- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).-- **Okta API Token**: An Okta API Token is required. See the documentation to learn more about the [Okta System Log API](https://developer.okta.com/docs/reference/api/system-log/).-
-## Vendor installation instructions
-
-> [!NOTE]
-> This connector uses Azure Functions to connect to Okta SSO to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
--
-> [!NOTE]
-> This connector has been updated. If you have previously deployed an earlier version, and want to update, please delete the existing Okta Azure Function before redeploying this version.
--
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
--
-**STEP 1 - Configuration steps for the Okta SSO API**
-
- [Follow these instructions](https://developer.okta.com/docs/guides/create-an-api-token/create-the-token/) to create an API Token.
-
- > [!NOTE]
- > For more information on the rate limit restrictions enforced by Okta, see **[OKTA developer reference documentation](https://developer.okta.com/docs/reference/rl-global-mgmt/)**.
-
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
-> [!IMPORTANT]
-> Before deploying the Okta SSO connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Okta SSO API Authorization Token, readily available.
-
-### Option 1 - Azure Resource Manager (ARM) Template
-
-This method provides an automated deployment of the Okta SSO connector using an ARM Template.
-
-1. Select the following **Deploy to Azure** button.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentineloktaazuredeployv2-solution)
-
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-
-3. Enter the **Workspace ID**, **Workspace Key**, **API Token** and **URI**.
-
- Use the following schema for the `uri` value: `https://<OktaDomain>/api/v1/logs?since=` Replace `<OktaDomain>` with your domain. [Click here](https://developer.okta.com/docs/reference/api-overview/#url-namespace) for further details on how to identify your Okta domain namespace. There's no need to add a time value to the URI. The Function App will dynamically append the initial start time of logs to UTC 0:00 for the current UTC date as a time value to the URI in the proper format.
-
- > [!NOTE]
- > If using Azure Key Vault secrets for any of the preceding values, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-
-5. Select **Purchase** to deploy.
-
-### Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Okta SSO connector manually with Azure Functions.
-
-**1. Create a Function App**
-
-1. From the Azure portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**.
-2. In the **Basics** tab, ensure Runtime stack is set to **Powershell Core**.
-3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected.
-4. Make other preferable configuration changes, if needed, then click **Create**.
-
-**2. Import Function App Code**
-
-1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**.
-2. Select **Timer Trigger**.
-3. Enter a unique Function **Name** and change the default cron schedule to every 10 minutes, then click **Create**.
-4. Click on **Code + Test** on the left pane.
-5. Copy the [Function App Code](https://aka.ms/sentineloktaazurefunctioncodev2) and paste into the Function App `run.ps1` editor.
-5. Click **Save**.
--
-**3. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-
-2. In the **Application settings** tab, select **+ New application setting**.
-
-3. Add each of the following five (5) application settings individually, with their respective string values (case-sensitive):
- - apiToken
- - workspaceID
- - workspaceKey
- - uri
- - logAnalyticsUri (optional)
-
- Use the following schema for the `uri` value: `https://<OktaDomain>/api/v1/logs?since=` Replace `<OktaDomain>` with your domain. For more information on how to identify your Okta domain namespace, see the [Okta Developer reference](https://developer.okta.com/docs/reference/api-overview/#url-namespace). There's no need to add a time value to the URI. The Function App dynamically appends the initial start time of logs to UTC 0:00 (for the current UTC date) as a time value to the URI in the proper format.
-
- > [!NOTE]
- > If using Azure Key Vault secrets for any of the preceding values, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-
- Use logAnalyticsUri to override the log analytics API endpoint for a dedicated cloud. For example, for the public cloud, leave the value empty; for the Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-
-5. Once all application settings have been entered, click **Save**.
-
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-okta?tab=Overview) in the Azure Marketplace.
sentinel Okta Single Sign On Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/okta-single-sign-on-using-azure-functions.md
+
+ Title: "Okta Single Sign-On (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Okta Single Sign-On (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# Okta Single Sign-On (using Azure Functions) connector for Microsoft Sentinel
+
+The [Okta Single Sign-On (SSO)](https://www.okta.com/products/single-sign-on/) connector provides the capability to ingest audit and event logs from the Okta API into Microsoft Sentinel. The connector provides visibility into these log types in Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Okta_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Active Applications**
+ ```kusto
+Okta_CL
+
+ | mv-expand todynamic(target_s)
+
+ | where target_s.type == "AppInstance"
+
+ | summarize count() by tostring(target_s.alternateId)
+
+ | top 10 by count_
+ ```
+
+**Top 10 Client IP Addresses**
+ ```kusto
+Okta_CL
+
+ | summarize count() by client_ipAddress_s
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with Okta Single Sign-On (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Okta API Token**: An Okta API Token is required. See the documentation to learn more about the [Okta System Log API](https://developer.okta.com/docs/reference/api/system-log/).
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to Okta SSO to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
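+To keep an eye on those ingestion costs, the built-in `Usage` table can be queried for this connector's custom table; a minimal sketch, assuming logs land in `Okta_CL` as listed above (the `Quantity` column is reported in MB):
+
+ ```kusto
+Usage
+ | where DataType == "Okta_CL"
+ | where TimeGenerated > ago(30d)
+ | summarize IngestedGB = sum(Quantity) / 1024.0 by bin(TimeGenerated, 1d)
+ ```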
+> [!NOTE]
+ > This connector has been updated. If you have previously deployed an earlier version and want to update, delete the existing Okta Azure Function before redeploying this version.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+**STEP 1 - Configuration steps for the Okta SSO API**
+
+ [Follow these instructions](https://developer.okta.com/docs/guides/create-an-api-token/create-the-token/) to create an API Token.
++
+**Note** - For more information on the rate limit restrictions enforced by Okta, please refer to the **[documentation](https://developer.okta.com/docs/reference/rl-global-mgmt/)**.
++
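+If rate limiting is a concern, checking how far ingestion lags behind real time can help; a minimal sketch against the `Okta_CL` table:
+
+ ```kusto
+Okta_CL
+ | summarize LastEvent = max(TimeGenerated)
+ | extend LagMinutes = datetime_diff('minute', now(), LastEvent)
+ ```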
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Okta SSO connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Okta SSO API Authorization Token, readily available.
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-okta?tab=Overview) in the Azure Marketplace.
sentinel Onelogin Iam Platform Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/onelogin-iam-platform-using-azure-functions.md
Title: "OneLogin IAM Platform(using Azure Functions) connector for Microsoft Sen
description: "Learn how to install the connector OneLogin IAM Platform(using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 10/23/2023
The [OneLogin](https://www.onelogin.com/) data connector provides the capability
| Connector attribute | Description | | | |
-| **Application settings** | OneLoginBearerToken<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) |
-| **Azure function app code** | https://aka.ms/sentinel-OneLogin-functionapp |
| **Log Analytics table(s)** | OneLogin_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
To integrate with OneLogin IAM Platform(using Azure Functions) make sure you hav
-Option 1 - Azure Resource Manager (ARM) Template
-Use this method for automated deployment of the OneLogin data connector using an ARM Template.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-OneLogin-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-> **NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select existing resource group without Windows apps in it or create new resource group.
-3. Enter the **OneLoginBearerToken** and deploy.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-6. After deploying open Function App page, select your app, go to the **Functions** and click **Get Function Url** copy it and follow p.7 from STEP 1.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the OneLogin data connector manually with Azure Functions (Deployment via Visual Studio Code).
--
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-OneLogin-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. OneLoginXXXXX).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
--
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select ** New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- OneLoginBearerToken
- WorkspaceID
- WorkspaceKey
- logAnalyticsUri (optional)
-> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-4. Once all application settings have been entered, click **Save**.
sentinel Oracle Cloud Infrastructure Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/oracle-cloud-infrastructure-using-azure-functions.md
Title: "Oracle Cloud Infrastructure (using Azure Functions) connector for Micros
description: "Learn how to install the connector Oracle Cloud Infrastructure (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 10/23/2023
The Oracle Cloud Infrastructure (OCI) data connector provides the capability to
| Connector attribute | Description | | | |
-| **Azure function app code** | https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-functionapp |
| **Log Analytics table(s)** | OCI_Logs_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
Follow the documentation to [create Private Key and API Key Configuration File.]
-Option 1 - Azure Resource Manager (ARM) Template
-Use this method for automated deployment of the OCI data connector using an ARM Template.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key**, **User**, **Key_content**, **Pass_phrase**, **Fingerprint**, **Tenancy**, **Region**, **Message Endpoint**, **Stream Ocid**
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the OCI data connector manually with Azure Functions (Deployment via Visual Studio Code).
--
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-OracleCloudInfrastructureLogsConnector-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. OciAuditXX).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
--
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- AzureSentinelWorkspaceId
- AzureSentinelSharedKey
- user
- key_content
- pass_phrase (Optional)
- fingerprint
- tenancy
- region
- Message Endpoint
- StreamOcid
- logAnalyticsUri (Optional)
-4. Once all application settings have been entered, click **Save**.
sentinel Proofpoint Tap Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/proofpoint-tap-using-azure-functions.md
Title: "Proofpoint TAP (using Azure Functions) connector for Microsoft Sentinel"
description: "Learn how to install the connector Proofpoint TAP (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 08/28/2023 Last updated : 10/23/2023
The [Proofpoint Targeted Attack Protection (TAP)](https://www.proofpoint.com/us/
| Connector attribute | Description | | | |
-| **Application settings** | apiUsername<br/>apipassword<br/>workspaceID<br/>workspaceKey<br/>uri<br/>logAnalyticsUri (optional) |
-| **Azure function app code** | https://aka.ms/sentinelproofpointtapazurefunctioncode |
| **Log Analytics table(s)** | ProofPointTAPClicksPermitted_CL<br/> ProofPointTAPClicksBlocked_CL<br/> ProofPointTAPMessagesDelivered_CL<br/> ProofPointTAPMessagesBlocked_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
To integrate with Proofpoint TAP (using Azure Functions) make sure you have:
-Option 1 - Azure Resource Manager (ARM) Template
-Use this method for automated deployment of the Proofpoint TAP connector.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinelproofpointtapazuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password**, and validate the **Uri**.
-> - The default URI is pulling data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly (in the function.json file, post deployment) to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-This method provides the step-by-step instructions to deploy the Proofpoint TAP connector manually with Azure Function.
--
-**1. Create a Function App**
-
-1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**.
-2. In the **Basics** tab, ensure Runtime stack is set to **Powershell Core**.
-3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected.
-4. Make other preferrable configuration changes, if needed, then click **Create**.
--
-**2. Import Function App Code**
-
-1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**.
-2. Select **Timer Trigger**.
-3. Enter a unique Function **Name** and modify the cron schedule, if needed. The default value is set to run the Function App every 5 minutes. (Note: the Timer trigger should match the `timeInterval` value below to prevent overlapping data), click **Create**.
-4. Click on **Code + Test** on the left pane.
-5. Copy the [Function App Code](https://aka.ms/sentinelproofpointtapazurefunctioncode) and paste into the Function App `run.ps1` editor.
-5. Click **Save**.
--
-**3. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following six (6) application settings individually, with their respective string values (case-sensitive):
- apiUsername
- apipassword
- workspaceID
- workspaceKey
- uri
- logAnalyticsUri (optional)
-> - Set the `uri` value to: `https://tap-api-v2.proofpoint.com/v2/siem/all?format=json&sinceSeconds=300`
-> - The default URI is pulling data for the last 300 seconds (5 minutes) to correspond with the default Function App Timer trigger of 5 minutes. If the time interval needs to be modified, it is recommended to change the Function App Timer Trigger accordingly to prevent overlapping data ingestion.
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-> - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`
-4. Once all application settings have been entered, click **Save**.
sentinel Qualys Vm Knowledgebase Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/qualys-vm-knowledgebase-using-azure-function.md
- Title: "Qualys VM KnowledgeBase (using Azure Function) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Qualys VM KnowledgeBase (using Azure Function) to connect your data source to Microsoft Sentinel."
-- Previously updated : 06/22/2023----
-# Qualys VM KnowledgeBase (using Azure Function) connector for Microsoft Sentinel
-
-The [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerability-management/) KnowledgeBase (KB) connector provides the capability to ingest the latest vulnerability data from the Qualys KB into Microsoft Sentinel.
-
- This data can used to correlate and enrich vulnerability detections found by the [Qualys Vulnerability Management (VM)](/azure/sentinel/data-connectors-reference#qualys) data connector.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Application settings** | apiUsername<br/>apiPassword<br/>workspaceID<br/>workspaceKey<br/>uri<br/>filterParameters<br/>logAnalyticsUri (optional) |
-| **Azure function app code** | https://aka.ms/sentinel-qualyskb-functioncode |
-| **Log Analytics table(s)** | QualysKB_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Vulnerabilities by Category**
- ```kusto
-QualysKB
-
- | summarize count() by Category
- ```
-
-**Top 10 Software Vendors**
- ```kusto
-QualysKB
-
- | summarize count() by SoftwareVendor
-
- | top 10 by count_
- ```
---
-## Prerequisites
-
-To integrate with Qualys VM KnowledgeBase (using Azure Function) make sure you have:
--- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).-- **Qualys API Key**: A Qualys VM API username and password is required. [See the documentation to learn more about Qualys VM API](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf).--
-## Vendor installation instructions
--
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias QualysVM Knowledgebase and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/CrowdStrike%20Falcon%20Endpoint%20Protection/Parsers/CrowdstrikeFalconEventStream.txt), on the second line of the query, enter the hostname(s) of your QualysVM Knowledgebase device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
--
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-qualyskb-parser) to use the Kusto function alias, **QualysKB**
--
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
--
-**STEP 1 - Configuration steps for the Qualys API**
-
-1. Log into the Qualys Vulnerability Management console with an administrator account, select the **Users** tab and the **Users** subtab.
-2. Click on the **New** drop-down menu and select **Users**.
-3. Create a username and password for the API account.
-4. In the **User Roles** tab, ensure the account role is set to **Manager** and access is allowed to **GUI** and **API**
-4. Log out of the administrator account and log into the console with the new API credentials for validation, then log out of the API account.
-5. Log back into the console using an administrator account and modify the API accounts User Roles, removing access to **GUI**.
-6. Save all changes.
--
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the Qualys KB connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Qualys API username and password, readily available.
---
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Qualys KB connector using an ARM Tempate.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-qualyskb-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Workspace ID**, **Workspace Key**, **API Username**, **API Password** , update the **URI**, and any additional URI **Filter Parameters** (This value should include a "&" symbol between each parameter and should not include any spaces)
-> - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348)
-> - Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-This method provides the step-by-step instructions to deploy the Qualys KB connector manually with Azure Function.
--
-**1. Create a Function App**
-
-1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp), and select **+ Add**.
-2. In the **Basics** tab, ensure Runtime stack is set to **Powershell Core**.
-3. In the **Hosting** tab, ensure the **Consumption (Serverless)** plan type is selected.
-4. Make other preferrable configuration changes, if needed, then click **Create**.
--
-**2. Import Function App Code**
-
-1. In the newly created Function App, select **Functions** on the left pane and click **+ Add**.
-2. Select **Timer Trigger**.
-3. Enter a unique Function **Name** and modify the cron schedule, if needed. Leave the default **Schedule** value, which will run the Function App every 5 minutes and click **Create**.
-4. Click on **Code + Test** on the left pane.
-5. Copy the [Function App Code](https://aka.ms/sentinel-qualyskb-functioncode) and paste into the Function App `run.ps1` editor.
-5. Click **Save**.
--
-**3. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following seven (7) application settings individually, with their respective string values (case-sensitive):
- apiUsername
- apiPassword
- workspaceID
- workspaceKey
- uri
- filterParameters
- logAnalyticsUri (optional)
-> - Enter the URI that corresponds to your region. The complete list of API Server URLs can be [found here](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348). The `uri` value must follow the following schema: `https://<API Server>/api/2.0`
-> - Add any additional filter parameters, for the `filterParameters` variable, that need to be appended to the URI. The `filterParameter` value should include a "&" symbol between each parameter and should not include any spaces.
-> - Note: If using Azure Key Vault, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
-4. Once all application settings have been entered, click **Save**.
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-qualysvmknowledgebase?tab=Overview) in the Azure Marketplace.
sentinel Qualys Vm Knowledgebase Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/qualys-vm-knowledgebase-using-azure-functions.md
+
+ Title: "Qualys VM KnowledgeBase (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Qualys VM KnowledgeBase (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# Qualys VM KnowledgeBase (using Azure Functions) connector for Microsoft Sentinel
+
+The [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerability-management/) KnowledgeBase (KB) connector provides the capability to ingest the latest vulnerability data from the Qualys KB into Microsoft Sentinel.
+
+ This data can be used to correlate and enrich vulnerability detections found by the [Qualys Vulnerability Management (VM)](/azure/sentinel/connect-qualys-vm) data connector.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | QualysKB_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**Vulnerabilities by Category**
+ ```kusto
+QualysKB
+
+ | summarize count() by Category
+ ```
+
+**Top 10 Software Vendors**
+ ```kusto
+QualysKB
+
+ | summarize count() by SoftwareVendor
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with Qualys VM KnowledgeBase (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Qualys API Key**: A Qualys VM API username and password are required. [See the documentation to learn more about Qualys VM API](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf).
++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias QualysKB, and load the function code, or click [here](https://aka.ms/sentinel-qualyskb-parser). The function usually takes 10-15 minutes to activate after solution installation/update.
++
+>This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-qualyskb-parser) to use the Kusto function alias, **QualysKB**
++
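+Once the parser is active, it can be queried like any table; a minimal sketch, assuming the `QualysKB` alias and the `Category` and `SoftwareVendor` fields used in the samples above:
+
+ ```kusto
+QualysKB
+ | where TimeGenerated > ago(7d)
+ | summarize count() by Category, SoftwareVendor
+ ```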
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+**STEP 1 - Configuration steps for the Qualys API**
+
+1. Log into the Qualys Vulnerability Management console with an administrator account, select the **Users** tab and the **Users** subtab.
+2. Click on the **New** drop-down menu and select **Users**.
+3. Create a username and password for the API account.
+4. In the **User Roles** tab, ensure the account role is set to **Manager** and access is allowed to **GUI** and **API**.
+5. Log out of the administrator account and log into the console with the new API credentials for validation, then log out of the API account.
+6. Log back into the console using an administrator account and modify the API account's User Roles, removing access to **GUI**.
+7. Save all changes.
++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Qualys KB connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Qualys API username and password, readily available.
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-qualysvmknowledgebase?tab=Overview) in the Azure Marketplace.
sentinel Recommended Ai Analyst Darktrace Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-ai-analyst-darktrace-via-ama.md
+
+ Title: "[Recommended] AI Analyst Darktrace via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] AI Analyst Darktrace via AMA to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Recommended] AI Analyst Darktrace via AMA connector for Microsoft Sentinel
+
+The Darktrace connector lets users connect Darktrace Model Breaches in real-time with Microsoft Sentinel, allowing creation of custom Dashboards, Workbooks, Notebooks and Custom Alerts to improve investigation. Microsoft Sentinel's enhanced visibility into Darktrace logs enables monitoring and mitigation of security threats.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (Darktrace)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Darktrace](https://www.darktrace.com/en/contact/) |
+
+## Query samples
+
+**First 10 most recent model breaches**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Darktrace"
+
+ | order by TimeGenerated desc
+
+ | limit 10
+ ```
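+
+**Model breaches by severity (example)**
+
+Model breaches can also be broken down by severity; a minimal sketch, assuming the standard CommonSecurityLog `LogSeverity` column is populated by Darktrace:
+
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "Darktrace"
+ | summarize count() by LogSeverity
+ | order by count_ desc
+ ```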
+++
+## Prerequisites
+
+To integrate with [Recommended] AI Analyst Darktrace via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/darktrace1655286944672.darktrace_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Akamai Security Events Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-akamai-security-events-via-ama.md
+
+ Title: "[Recommended] Akamai Security Events via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Akamai Security Events via AMA to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Recommended] Akamai Security Events via AMA connector for Microsoft Sentinel
+
+Akamai Solution for Microsoft Sentinel provides the capability to ingest [Akamai Security Events](https://www.akamai.com/us/en/products/security/) into Microsoft Sentinel. Refer to [Akamai SIEM Integration documentation](https://developer.akamai.com/tools/integrations/siem) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (AkamaiSecurityEvents)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Countries**
+ ```kusto
+AkamaiSIEMEvent
+
+ | summarize count() by SrcGeoCountry
+
+ | top 10 by count_
+ ```
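+
+**Event volume over time (example)**
+
+Event volume over time is another useful view; a minimal sketch, assuming the `AkamaiSIEMEvent` parser used in the sample above:
+
+ ```kusto
+AkamaiSIEMEvent
+ | where TimeGenerated > ago(7d)
+ | summarize count() by bin(TimeGenerated, 1h)
+ ```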
+++
+## Prerequisites
+
+To integrate with [Recommended] Akamai Security Events via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias Akamai Security Events, and load the function code, or click [here](https://aka.ms/sentinel-akamaisecurityevents-parser). On the second line of the query, enter the hostname(s) of your Akamai Security Events device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-akamai?tab=Overview) in the Azure Marketplace.
sentinel Recommended Aruba Clearpass Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-aruba-clearpass-via-ama.md
+
+ Title: "[Recommended] Aruba ClearPass via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Aruba ClearPass via AMA to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Recommended] Aruba ClearPass via AMA connector for Microsoft Sentinel
+
+The [Aruba ClearPass](https://www.arubanetworks.com/products/security/network-access-control/secure-access/) connector allows you to easily connect your Aruba ClearPass with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ArubaClearPass)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**Top 10 Events by Username**
+ ```kusto
+ArubaClearPass
+
+ | summarize count() by UserName
+
+ | top 10 by count_
+ ```
+
+**Top 10 Error Codes**
+ ```kusto
+ArubaClearPass
+
+ | summarize count() by ErrorCode
+
+ | top 10 by count_
+ ```
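+
+**Top users by error code (example)**
+
+The two dimensions above can also be combined to spot users generating repeated errors; a minimal sketch, assuming the `ArubaClearPass` parser with the `UserName` and `ErrorCode` fields from the samples:
+
+ ```kusto
+ArubaClearPass
+ | summarize count() by UserName, ErrorCode
+ | top 10 by count_
+ ```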
+++
+## Prerequisites
+
+To integrate with [Recommended] Aruba ClearPass via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias ArubaClearPass, and load the function code, or click [here](https://aka.ms/sentinel-arubaclearpass-parser). The function usually takes 10-15 minutes to activate after solution installation/update.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview) in the Azure Marketplace.
sentinel Recommended Broadcom Symantec Dlp Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-broadcom-symantec-dlp-via-ama.md
+
+ Title: "[Recommended] Broadcom Symantec DLP via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Broadcom Symantec DLP via AMA to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# [Recommended] Broadcom Symantec DLP via AMA connector for Microsoft Sentinel
+
+The [Broadcom Symantec Data Loss Prevention (DLP)](https://www.broadcom.com/products/cyber-security/information-protection/data-loss-prevention) connector allows you to easily connect your Symantec DLP with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's information, where it travels, and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (SymantecDLP)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Triggered Activities**
+ ```kusto
+SymantecDLP
+
+ | summarize count() by Activity
+
+ | top 10 by count_
+ ```
+
+**Top 10 Filenames**
+ ```kusto
+SymantecDLP
+
+ | summarize count() by FileName
+
+ | top 10 by count_
+ ```
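+
+**Daily activity trend (example)**
+
+A time-binned view can reveal spikes in DLP activity; a minimal sketch, assuming the `SymantecDLP` parser with the `Activity` field from the samples:
+
+ ```kusto
+SymantecDLP
+ | where TimeGenerated > ago(7d)
+ | summarize count() by bin(TimeGenerated, 1d), Activity
+ ```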
+++
+## Prerequisites
+
+To integrate with [Recommended] Broadcom Symantec DLP via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias SymantecDLP, and load the function code, or click [here](https://aka.ms/sentinel-symantecdlp-parser). The function usually takes 10-15 minutes to activate after solution installation/update.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-broadcomsymantecdlp?tab=Overview) in the Azure Marketplace.
sentinel Recommended Cisco Firepower Estreamer Via Legacy Agent Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-cisco-firepower-estreamer-via-legacy-agent-via-ama.md
+
+ Title: "[Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA connector for Microsoft Sentinel
+
+eStreamer is a Client Server API designed for the Cisco Firepower NGFW Solution. The eStreamer client requests detailed event data on behalf of the SIEM or logging solution in the Common Event Format (CEF).
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CiscoFirepowerEstreamerCEF)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cisco](https://www.cisco.com/c/en_in/support/index.html) |
+
+## Query samples
+
+**Firewall Blocked Events**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where DeviceAction != "Allow"
+ ```
+
+**File Malware Events**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where Activity == "File Malware Event"
+ ```
+
+**Outbound Web Traffic Port 80**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cisco"
+
+ | where DeviceProduct == "Firepower"
+ | where DestinationPort == "80"
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Cisco Firepower eStreamer via Legacy Agent via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cisco.cisco-firepower-estreamer?tab=Overview) in the Azure Marketplace.
sentinel Recommended Cisco Secure Email Gateway Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-cisco-secure-email-gateway-via-ama.md
+
+ Title: "[Recommended] Cisco Secure Email Gateway via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Cisco Secure Email Gateway via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Cisco Secure Email Gateway via AMA connector for Microsoft Sentinel
+
+The [Cisco Secure Email Gateway (SEG)](https://www.cisco.com/c/en/us/products/security/email-security/index.html) data connector provides the capability to ingest [Cisco SEG Consolidated Event Logs](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1061902) into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CiscoSEG)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Senders**
+ ```kusto
+CiscoSEGEvent
+
+ | where isnotempty(SrcUserName)
+
+ | summarize count() by SrcUserName
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Cisco Secure Email Gateway via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on the [**CiscoSEGEvent**](https://aka.ms/sentinel-CiscoSEG-parser) parser, which is based on a Kusto function and deployed with the Microsoft Sentinel solution.
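+As a rough ingestion check once the parser is deployed, a sketch like this (the 24-hour window is illustrative) charts parsed event volume:
+
+ ```kusto
+// Sketch: hourly volume of parsed Cisco SEG events.
+CiscoSEGEvent
+ | where TimeGenerated > ago(24h)
+ | summarize Events = count() by bin(TimeGenerated, 1h)
+ ```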
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoseg?tab=Overview) in the Azure Marketplace.
sentinel Recommended Citrix Waf Web App Firewall Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-citrix-waf-web-app-firewall-via-ama.md
+
+ Title: "[Recommended] Citrix WAF (Web App Firewall) via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Citrix WAF (Web App Firewall) via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Citrix WAF (Web App Firewall) via AMA connector for Microsoft Sentinel
+
+Citrix WAF (Web App Firewall) is an industry-leading, enterprise-grade WAF solution. Citrix WAF mitigates threats against your public-facing assets, including websites, apps, and APIs. From layer 3 to layer 7, Citrix WAF includes protections such as IP reputation, bot mitigation, defense against the OWASP Top 10 application threats, built-in signatures to protect against application stack vulnerabilities, and more.
+
+Citrix WAF supports Common Event Format (CEF), an industry-standard format built on top of Syslog messages. By connecting Citrix WAF CEF logs to Microsoft Sentinel, you can take advantage of search and correlation, alerting, and threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CitrixWAFLogs)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Citrix Systems](https://www.citrix.com/support/) |
+
+## Query samples
+
+**Citrix WAF Logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ ```
+
+**Citrix WAF logs for cross-site scripting**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_XSS"
+
+ ```
+
+**Citrix WAF logs for SQL injection**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_SQL"
+
+ ```
+
+**Citrix WAF logs for start URL violations**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Citrix"
+
+ | where DeviceProduct == "NetScaler"
+
+ | where Activity == "APPFW_STARTURL"
+
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Citrix WAF (Web App Firewall) via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/citrix.citrix_waf_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Claroty Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-claroty-via-ama.md
+
+ Title: "[Recommended] Claroty via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Claroty via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Claroty via AMA connector for Microsoft Sentinel
+
+The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (Claroty)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Destinations**
+ ```kusto
+ClarotyEvent
+
+ | where isnotempty(DstIpAddr)
+
+ | summarize count() by DstIpAddr
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Claroty via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on the [**ClarotyEvent**](https://aka.ms/sentinel-claroty-parser) parser, which is based on a Kusto function and deployed with the Microsoft Sentinel solution.
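+For a quick smoke test once the parser is active, a minimal sketch such as the following (the one-day window is illustrative) shows the latest parsed events:
+
+ ```kusto
+// Sketch: inspect the most recent parsed Claroty events.
+ClarotyEvent
+ | where TimeGenerated > ago(1d)
+ | take 10
+ ```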
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-claroty?tab=Overview) in the Azure Marketplace.
sentinel Recommended Contrast Protect Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-contrast-protect-via-ama.md
+
+ Title: "[Recommended] Contrast Protect via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Contrast Protect via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Contrast Protect via AMA connector for Microsoft Sentinel
+
+Contrast Protect mitigates security threats in production applications with runtime protection and observability. Attack event results (blocked, probed, suspicious, and so on) and other information can be sent to Microsoft Sentinel to blend with security information from other systems.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ContrastProtect)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Contrast Protect](https://docs.contrastsecurity.com/) |
+
+## Query samples
+
+**All attacks**
+ ```kusto
+let extract_data=(a:string, k:string) { parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k] }; CommonSecurityLog
+ | where DeviceVendor == 'Contrast Security'
+ | extend Outcome = replace(@'INEFFECTIVE', @'PROBED', tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), "")))
+ | where Outcome != 'success'
+ | extend Rule = extract_data(AdditionalExtensions, 'pri')
+ | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
+ | order by TimeGenerated desc
+ ```
+
+**Effective attacks**
+ ```kusto
+let extract_data=(a:string, k:string) {
+ parse_urlquery(replace(@';', @'&', a))["Query Parameters"][k]
+};
+CommonSecurityLog
+
+ | where DeviceVendor == 'Contrast Security'
+
+ | extend Outcome = tostring(coalesce(column_ifexists("EventOutcome", ""), extract_data(AdditionalExtensions, 'outcome'), ""))
+
+ | where Outcome in ('EXPLOITED','BLOCKED','SUSPICIOUS')
+
+ | extend Rule = extract_data(AdditionalExtensions, 'pri')
+
+ | project TimeGenerated, ApplicationProtocol, Rule, Activity, Outcome, RequestURL, SourceIP
+
+ | order by TimeGenerated desc
+
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Contrast Protect via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/contrast_security.contrast_protect_azure_sentinel_solution?tab=Overview) in the Azure Marketplace.
sentinel Recommended Cyberark Enterprise Password Vault Epv Events Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-cyberark-enterprise-password-vault-epv-events-via-ama.md
+
+ Title: "[Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA connector for Microsoft Sentinel
+
+CyberArk Enterprise Password Vault generates an XML Syslog message for every action taken against the Vault. The EPV sends the XML messages through the Microsoft Sentinel.xsl translator to be converted into CEF standard format and forwarded to a syslog staging server of your choice (syslog-ng, rsyslog). The Log Analytics agent installed on your syslog staging server imports the messages into Microsoft Log Analytics. Refer to the [CyberArk documentation](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) for more guidance on SIEM integrations.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (CyberArk)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cyberark](https://www.cyberark.com/services-support/technical-support/) |
+
+## Query samples
+
+**CyberArk Alerts**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Cyber-Ark"
+
+ | where DeviceProduct == "Vault"
+
+ | where LogSeverity == "7" or LogSeverity == "10"
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] CyberArk Enterprise Password Vault (EPV) Events via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberark.cyberark_epv_events_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Delinea Secret Server Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-delinea-secret-server-via-ama.md
+
+ Title: "[Recommended] Delinea Secret Server via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Delinea Secret Server via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Delinea Secret Server via AMA connector for Microsoft Sentinel
+
+Common Event Format (CEF) from Delinea Secret Server
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (DelineaSecretServer)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Delinea](https://delinea.com/support/) |
+
+## Query samples
+
+**Get records where a new secret was created**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
+
+ | where DeviceProduct == "Secret Server"
+
+ | where Activity has "SECRET - CREATE"
+ ```
+
+**Get records where a secret was viewed**
+ ```kusto
+CommonSecurityLog
+
+ | where DeviceVendor == "Delinea Software" or DeviceVendor == "Thycotic Software"
+
+ | where DeviceProduct == "Secret Server"
+
+ | where Activity has "SECRET - VIEW"
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Delinea Secret Server via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/delineainc1653506022260.delinea_secret_server_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Extrahop Reveal X Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-extrahop-reveal-x-via-ama.md
+
+ Title: "[Recommended] ExtraHop Reveal(x) via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] ExtraHop Reveal(x) via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] ExtraHop Reveal(x) via AMA connector for Microsoft Sentinel
+
+The ExtraHop Reveal(x) data connector enables you to easily connect your Reveal(x) system with Microsoft Sentinel to view dashboards, create custom alerts, and improve investigation. This integration gives you the ability to gain insight into your organization's network and improve your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog ('ExtraHop')<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [ExtraHop](https://www.extrahop.com/support/) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "ExtraHop"
+
+
+ | sort by TimeGenerated
+ ```
+
+**All detections, de-duplicated**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "ExtraHop"
+ | extend categories = iif(DeviceCustomString2 != "", split(DeviceCustomString2, ","), dynamic(null))
+ | extend StartTime = extract("start=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
+ | extend EndTime = extract("end=([0-9-]+T[0-9:.]+Z)", 1, AdditionalExtensions, typeof(datetime))
+ | project
+     DeviceEventClassID="ExtraHop Detection",
+     Title=Activity,
+     Description=Message,
+     riskScore=DeviceCustomNumber2,
+     SourceIP,
+     DestinationIP,
+     detectionID=tostring(DeviceCustomNumber1),
+     updateTime=todatetime(ReceiptTime),
+     StartTime,
+     EndTime,
+     detectionURI=DeviceCustomString1,
+     categories,
+     Computer
+ | summarize arg_max(updateTime, *) by detectionID
+ | sort by detectionID desc
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] ExtraHop Reveal(x) via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/extrahop.extrahop_revealx_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended F5 Networks Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-f5-networks-via-ama.md
+
+ Title: "[Recommended] F5 Networks via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] F5 Networks via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] F5 Networks via AMA connector for Microsoft Sentinel
+
+The F5 firewall connector allows you to easily connect your F5 logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (F5)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [F5](https://www.f5.com/services/support) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "F5"
+
+
+ | sort by TimeGenerated
+ ```
+
+**Summarize by time**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "F5"
+
+
+ | summarize count() by TimeGenerated
+
+ | sort by TimeGenerated
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] F5 Networks via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5_networks_data_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Fireeye Network Security Nx Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-fireeye-network-security-nx-via-ama.md
+
+ Title: "[Recommended] FireEye Network Security (NX) via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] FireEye Network Security (NX) via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] FireEye Network Security (NX) via AMA connector for Microsoft Sentinel
+
+The [FireEye Network Security (NX)](https://www.fireeye.com/products/network-security.html) data connector provides the capability to ingest FireEye Network Security logs into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (FireEyeNX)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Sources**
+ ```kusto
+FireEyeNXEvent
+
+ | where isnotempty(SrcIpAddr)
+
+ | summarize count() by SrcIpAddr
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] FireEye Network Security (NX) via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on the [**FireEyeNXEvent**](https://aka.ms/sentinel-FireEyeNX-parser) parser, which is based on a Kusto function and deployed with the Microsoft Sentinel solution.
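+Once the parser is active, a hedged sketch like the following (the time window is illustrative) can serve as an ingestion check:
+
+ ```kusto
+// Sketch: hourly FireEye NX event volume over the last day.
+FireEyeNXEvent
+ | where TimeGenerated > ago(24h)
+ | summarize Events = count() by bin(TimeGenerated, 1h)
+ | sort by TimeGenerated asc
+ ```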
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fireeyenx?tab=Overview) in the Azure Marketplace.
sentinel Recommended Forcepoint Casb Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-forcepoint-casb-via-ama.md
+
+ Title: "[Recommended] Forcepoint CASB via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Forcepoint CASB via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Forcepoint CASB via AMA connector for Microsoft Sentinel
+
+The Forcepoint CASB (Cloud Access Security Broker) Connector allows you to automatically export CASB logs and events into Microsoft Sentinel in real-time. This enriches visibility into user activities across locations and cloud applications, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ForcepointCASB)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**Top 5 Users With The Highest Number Of Logs**
+ ```kusto
+CommonSecurityLog
+
+ | summarize Count = count() by DestinationUserName
+
+ | top 5 by Count
+
+ | render barchart
+ ```
+
+**Top 5 Users by Number of Failed Attempts**
+ ```kusto
+CommonSecurityLog
+
+ | extend outcome = coalesce(column_ifexists("EventOutcome", ""), tostring(split(split(AdditionalExtensions, ";", 2)[0], "=", 1)[0]), "")
+
+ | extend reason = coalesce(column_ifexists("Reason", ""), tostring(split(split(AdditionalExtensions, ";", 3)[0], "=", 1)[0]), "")
+
+ | where outcome =="Failure"
+
+ | summarize Count= count() by DestinationUserName
+
+ | render barchart
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Forcepoint CASB via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+
+3. Forcepoint integration installation guide
+
+To complete the installation of this Forcepoint product integration, follow the guide linked below.
+
+[Installation Guide >](https://frcpnt.com/casb-sentinel)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-casb?tab=Overview) in the Azure Marketplace.
sentinel Recommended Forcepoint Csg Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-forcepoint-csg-via-ama.md
+
+ Title: "[Recommended] Forcepoint CSG via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Forcepoint CSG via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Forcepoint CSG via AMA connector for Microsoft Sentinel
+
+Forcepoint Cloud Security Gateway is a converged cloud security service that provides visibility, control, and threat protection for users and data, wherever they are. For more information, visit https://www.forcepoint.com/product/cloud-security-gateway.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (Forcepoint CSG)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**Top 5 Web requested Domains with log severity equal to 6 (Medium)**
+ ```kusto
+CommonSecurityLog
+
+ | where TimeGenerated <= ago(0m)
+
+ | where DeviceVendor == "Forcepoint CSG"
+
+ | where DeviceProduct == "Web"
+
+ | where LogSeverity == 6
+
+ | where DeviceCustomString2 != ""
+
+ | summarize Count=count() by DeviceCustomString2
+
+ | top 5 by Count
+
+ | render piechart
+ ```
+
+**Top 5 Web Users with 'Action' equal to 'Blocked'**
+ ```kusto
+CommonSecurityLog
+
+ | where TimeGenerated <= ago(0m)
+
+ | where DeviceVendor == "Forcepoint CSG"
+
+ | where DeviceProduct == "Web"
+
+ | where Activity == "Blocked"
+
+ | where SourceUserID != "Not available"
+
+ | summarize Count=count() by SourceUserID
+
+ | top 5 by Count
+
+ | render piechart
+ ```
+
+**Top 5 Sender Email Addresses Where Spam Score Greater Than 10.0**
+ ```kusto
+CommonSecurityLog
+
+ | where TimeGenerated <= ago(0m)
+
+ | where DeviceVendor == "Forcepoint CSG"
+
+ | where DeviceProduct == "Email"
+
+ | where DeviceCustomFloatingPoint1 > 10.0
+
+ | summarize Count=count() by SourceUserName
+
+ | top 5 by Count
+
+ | render barchart
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Forcepoint CSG via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF).
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-csg?tab=Overview) in the Azure Marketplace.
sentinel Recommended Forcepoint Ngfw Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-forcepoint-ngfw-via-ama.md
+
+ Title: "[Recommended] Forcepoint NGFW via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Forcepoint NGFW via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Forcepoint NGFW via AMA connector for Microsoft Sentinel
+
+The Forcepoint NGFW (Next Generation Firewall) connector allows you to automatically export user-defined Forcepoint NGFW logs into Microsoft Sentinel in real-time. This enriches visibility into user activities recorded by NGFW, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (ForcePointNGFW)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**Show all terminated actions from the Forcepoint NGFW**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Forcepoint"
+
+ | where DeviceProduct == "NGFW"
+
+ | where DeviceAction == "Terminate"
+
+ ```
+
+**Show all Forcepoint NGFW events with suspected compromise behavior**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Forcepoint"
+
+ | where DeviceProduct == "NGFW"
+
+ | where Activity contains "compromise"
+
+ ```
+
+**Show chart grouping all Forcepoint NGFW events by Activity type**
+ ```kusto
+
+CommonSecurityLog
+
+ | where DeviceVendor == "Forcepoint"
+
+ | where DeviceProduct == "NGFW"
+
+ | summarize count=count() by Activity
+
+ | render barchart
+
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Forcepoint NGFW via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+
+3. Forcepoint integration installation guide
+
+To complete the installation of this Forcepoint product integration, follow the guide linked below.
+
+[Installation Guide >](https://frcpnt.com/ngfw-sentinel)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-ngfw?tab=Overview) in the Azure Marketplace.
sentinel Recommended Iboss Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-iboss-via-ama.md
+
+ Title: "[Recommended] iboss via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] iboss via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] iboss via AMA connector for Microsoft Sentinel
+
+The [iboss](https://www.iboss.com) data connector enables you to seamlessly connect your Threat Console to Microsoft Sentinel and enrich your instance with iboss URL event logs. Our logs are forwarded in Common Event Format (CEF) over Syslog and the configuration required can be completed on the iboss platform without the use of a proxy. Take advantage of our connector to garner critical data points and gain insight into security threats.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | ibossUrlEvent<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [iboss](https://www.iboss.com/contact-us/) |
+
+## Query samples
+
+**Logs Received from the past week**
+ ```kusto
+ibossUrlEvent
+ | where TimeGenerated > ago(7d)
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] iboss via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy (Only applicable if a dedicated proxy Linux machine has been configured).
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/iboss.iboss-sentinel-connector?tab=Overview) in the Azure Marketplace.
sentinel Recommended Illumio Core Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-illumio-core-via-ama.md
+
+ Title: "[Recommended] Illumio Core via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Illumio Core via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Illumio Core via AMA connector for Microsoft Sentinel
+
+The [Illumio Core](https://www.illumio.com/products/) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (IllumioCore)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Event Types**
+ ```kusto
+IllumioCoreEvent
+
+ | where isnotempty(EventType)
+
+ | summarize count() by EventType
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Illumio Core via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel **Logs** blade, select **Functions**, and search for the alias IllumioCoreEvent to load the function code, or use [this link](https://aka.ms/sentinel-IllumioCore-parser). The function usually takes 10-15 minutes to activate after solution installation or update, and it maps Illumio Core events to the Microsoft Sentinel Information Model (ASIM).
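+Once the function is active, a minimal sketch like this (the window is illustrative; EventType comes from the sample above) verifies that mapped events are flowing:
+
+ ```kusto
+// Sketch: recent Illumio Core events grouped by type.
+IllumioCoreEvent
+ | where TimeGenerated > ago(24h)
+ | summarize count() by EventType
+ ```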
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-illumiocore?tab=Overview) in the Azure Marketplace.
sentinel Recommended Illusive Platform Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-illusive-platform-via-ama.md
+
+ Title: "[Recommended] Illusive Platform via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Illusive Platform via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Illusive Platform via AMA connector for Microsoft Sentinel
+
+The Illusive Platform Connector allows you to share Illusive's attack surface analysis data and incident logs with Microsoft Sentinel and view this information in dedicated dashboards that offer insight into your organization's attack surface risk (ASM Dashboard) and track unauthorized lateral movement in your organization's network (ADS Dashboard).
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (illusive)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Illusive Networks](https://illusive.com/support) |
+
+## Query samples
+
+**Number of Incidents in the last 30 days in which Trigger Type is found**
+ ```kusto
+union CommonSecurityLog
+ | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
+ | where Message !contains "hasForensics"
+ | where TimeGenerated > ago(30d)
+ | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
+ | summarize by DestinationServiceName, DeviceCustomNumber2
+ | summarize incident_count=count() by DestinationServiceName
+ ```
+
+**Top 10 alerting hosts in the last 30 days**
+ ```kusto
+union CommonSecurityLog
+ | where (DeviceEventClassID == "illusive:login" or DeviceEventClassID == "illusive:access" or DeviceEventClassID == "illusive:suspicious")
+ | where Message !contains "hasForensics"
+ | where TimeGenerated > ago(30d)
+ | extend DeviceCustomNumber2 = coalesce(column_ifexists("FieldDeviceCustomNumber2", long(null)), DeviceCustomNumber2, long(null))
+ | summarize by AlertingHost=iff(SourceHostName != "" and SourceHostName != "Failed to obtain", SourceHostName, SourceIP) ,DeviceCustomNumber2
+ | where AlertingHost != "" and AlertingHost != "Failed to obtain"
+ | summarize incident_count=count() by AlertingHost
+ | order by incident_count
+ | limit 10
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Illusive Platform via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/illusivenetworks.illusive_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Kaspersky Security Center Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-kaspersky-security-center-via-ama.md
+
+ Title: "[Recommended] Kaspersky Security Center via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Kaspersky Security Center via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Kaspersky Security Center via AMA connector for Microsoft Sentinel
+
+The [Kaspersky Security Center](https://support.kaspersky.com/KSC/13/en-US/3396.htm) data connector provides the capability to ingest [Kaspersky Security Center logs](https://support.kaspersky.com/KSC/13/en-US/151336.htm) into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (KasperskySC)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Destinations**
+ ```kusto
+KasperskySCEvent
+
+ | where isnotempty(DstIpAddr)
+
+ | summarize count() by DstIpAddr
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Kaspersky Security Center via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on the [**KasperskySCEvent**](https://aka.ms/sentinel-kasperskysc-parser) parser, which is based on a Kusto function and deployed with the Microsoft Sentinel solution.
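+As an illustrative post-install check, assuming the parser alias is active (the one-hour window is arbitrary):
+
+ ```kusto
+// Sketch: confirm Kaspersky Security Center events are arriving.
+KasperskySCEvent
+ | where TimeGenerated > ago(1h)
+ | take 10
+ ```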
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-kasperskysc?tab=Overview) in the Azure Marketplace.
sentinel Recommended Morphisec Utpp Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-morphisec-utpp-via-ama.md
+
+ Title: "[Recommended] Morphisec UTPP via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Morphisec UTPP via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Morphisec UTPP via AMA connector for Microsoft Sentinel
+
+Integrate vital insights from your security products with the Morphisec Data Connector for Microsoft Sentinel and expand your analytical capabilities with search and correlation, threat intelligence, and customized alerts. Morphisec's Data Connector provides visibility into today's most advanced threats, including sophisticated fileless attacks, in-memory exploits, and zero-days. With a single, cross-product view, you can make real-time, data-backed decisions to protect your most important assets.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser |
+| **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) |
+
+## Query samples
+
+**Threats count by host**
+ ```kusto
+
+Morphisec
+
+
+ | summarize Times_Attacked=count() by SourceHostName
+ ```
+
+**Threats count by username**
+ ```kusto
+
+Morphisec
+
+
+ | summarize Times_Attacked=count() by SourceUserName
+ ```
+
+**Threats with high severity**
+ ```kusto
+
+Morphisec
+
+
+ | where toint(LogSeverity) > 7
+ | order by TimeGenerated
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Morphisec UTPP via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+These queries and workbooks depend on a Kusto function to work as expected. Follow the steps to use the Kusto function alias "Morphisec" in queries and workbooks. [Follow the steps to get this Kusto function.](https://aka.ms/sentinel-morphisecutpp-parser)
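+As an illustrative example of the alias in use (the seven-day window is arbitrary; LogSeverity appears in the samples above):
+
+ ```kusto
+// Sketch: daily Morphisec event counts by severity.
+Morphisec
+ | where TimeGenerated > ago(7d)
+ | summarize Events = count() by LogSeverity, bin(TimeGenerated, 1d)
+ ```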
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/morphisec.morphisec_utpp_mss?tab=Overview) in the Azure Marketplace.
sentinel Recommended Netwrix Auditor Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-netwrix-auditor-via-ama.md
+
+ Title: "[Recommended] Netwrix Auditor via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Netwrix Auditor via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Netwrix Auditor via AMA connector for Microsoft Sentinel
+
+The Netwrix Auditor data connector provides the capability to ingest [Netwrix Auditor (formerly Stealthbits Privileged Activity Manager)](https://www.netwrix.com/auditor.html) events into Microsoft Sentinel. Refer to the [Netwrix documentation](https://helpcenter.netwrix.com/) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Kusto function alias** | NetwrixAuditor |
+| **Kusto function url** | https://aka.ms/sentinel-netwrixauditor-parser |
+| **Log Analytics table(s)** | CommonSecurityLog<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Netwrix Auditor Events - All Activities.**
+ ```kusto
+NetwrixAuditor
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Netwrix Auditor via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on the NetwrixAuditor parser, which is based on a Kusto function, to work as expected. The parser is installed as part of the solution.
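+For illustration, once the parser is installed, a sketch like the following (the window is arbitrary) charts activity volume over time:
+
+ ```kusto
+// Sketch: daily Netwrix Auditor activity volume over the past week.
+NetwrixAuditor
+ | where TimeGenerated > ago(7d)
+ | summarize Activities = count() by bin(TimeGenerated, 1d)
+ ```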
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-netwrixauditor?tab=Overview) in the Azure Marketplace.
sentinel Recommended Nozomi Networks N2os Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-nozomi-networks-n2os-via-ama.md
+
+ Title: "[Recommended] Nozomi Networks N2OS via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Nozomi Networks N2OS via AMA to connect your data source to Microsoft Sentinel."
+ Last updated: 10/23/2023
+# [Recommended] Nozomi Networks N2OS via AMA connector for Microsoft Sentinel
+
+The [Nozomi Networks](https://www.nozominetworks.com/) data connector provides the capability to ingest Nozomi Networks Events into Microsoft Sentinel. Refer to the Nozomi Networks [PDF documentation](https://www.nozominetworks.com/resources/data-sheets-brochures-learning-guides/) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (NozomiNetworks)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Devices**
+ ```kusto
+NozomiNetworksEvents
+
+ | summarize count() by DvcHostname
+
+ | top 10 by count_
+ ```
+++
+## Prerequisites
+
+To integrate with [Recommended] Nozomi Networks N2OS via AMA make sure you have:
+
+- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector depends on the [**NozomiNetworksEvents**](https://aka.ms/sentinel-NozomiNetworks-parser) parser, which is based on a Kusto function and deployed with the Microsoft Sentinel solution.
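+A minimal smoke test once the parser is active (the one-day window is illustrative):
+
+ ```kusto
+// Sketch: inspect the most recent parsed Nozomi Networks events.
+NozomiNetworksEvents
+ | where TimeGenerated > ago(1d)
+ | sort by TimeGenerated desc
+ | take 10
+ ```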
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-nozominetworks?tab=Overview) in the Azure Marketplace.
sentinel Recommended Ossec Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-ossec-via-ama.md
+
+ Title: "[Recommended] OSSEC via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] OSSEC via AMA to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# [Recommended] OSSEC via AMA connector for Microsoft Sentinel
+
+OSSEC data connector provides the capability to ingest [OSSEC](https://www.ossec.net/) events into Microsoft Sentinel. Refer to [OSSEC documentation](https://www.ossec.net/docs) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (OSSEC)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**Top 10 Rules**
+ ```kusto
+OSSECEvent
+ | summarize count() by RuleName
+ | top 10 by count_
+ ```
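+
+Building on the sample above, the same `RuleName` field can be bucketed by hour to spot rule-firing spikes; a minimal sketch:
+
+ ```kusto
+OSSECEvent
+ | where TimeGenerated > ago(24h)
+ | summarize count() by RuleName, bin(TimeGenerated, 1h)
+ ```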
+++
+## Prerequisites
+
+To integrate with [Recommended] OSSEC via AMA, make sure you have:
+
+- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- **Data connectors**: The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
 > This data connector depends on a parser based on a Kusto function, deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, and search for the alias OSSEC to load the function code, or click [here](https://aka.ms/sentinel-OSSECEvent-parser). On the second line of the query, enter the hostname(s) of your OSSEC device(s) and any other unique identifiers for the log stream. The function usually takes 10-15 minutes to activate after solution installation or update.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ossec?tab=Overview) in the Azure Marketplace.
sentinel Recommended Palo Alto Networks Cortex Data Lake Cdl Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-palo-alto-networks-cortex-data-lake-cdl-via-ama.md
+
+ Title: "[Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# [Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA connector for Microsoft Sentinel
+
+The [Palo Alto Networks CDL](https://www.paloaltonetworks.com/cortex/cortex-data-lake) data connector provides the capability to ingest [CDL logs](https://docs.paloaltonetworks.com/cortex/cortex-data-lake/log-forwarding-schema-reference/log-forwarding-schema-overview.html) into Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (PaloAltoNetworksCDL)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Destinations**
+ ```kusto
+PaloAltoCDLEvent
+ | where isnotempty(DstIpAddr)
+ | summarize count() by DstIpAddr
+ | top 10 by count_
+ ```
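+
+A variation on the sample above keeps only public destinations. It assumes `DstIpAddr` holds IPv4 addresses, which is what the built-in `ipv4_is_private()` function expects:
+
+ ```kusto
+PaloAltoCDLEvent
+ | where isnotempty(DstIpAddr) and not(ipv4_is_private(DstIpAddr))
+ | summarize count() by DstIpAddr
+ | top 10 by count_
+ ```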
+++
+## Prerequisites
+
+To integrate with [Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA, make sure you have:
+
+- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- **Data connectors**: The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
 > This data connector depends on the [**PaloAltoCDLEvent**](https://aka.ms/sentinel-paloaltocdl-parser) parser, which is based on a Kusto function, to work as expected. The parser is deployed with the Microsoft Sentinel solution.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) in the Azure Marketplace.
sentinel Recommended Pingfederate Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-pingfederate-via-ama.md
+
+ Title: "[Recommended] PingFederate via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] PingFederate via AMA to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# [Recommended] PingFederate via AMA connector for Microsoft Sentinel
+
+The [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) data connector provides the capability to ingest [PingFederate events](https://docs.pingidentity.com/bundle/pingfederate-102/page/lly1564002980532.html) into Microsoft Sentinel. Refer to [PingFederate documentation](https://docs.pingidentity.com/bundle/pingfederate-102/page/tle1564002955874.html) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (PingFederate)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Top 10 Devices**
+ ```kusto
+PingFederateEvent
+ | summarize count() by DvcHostname
+ | top 10 by count_
+ ```
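+
+To see how authentication traffic is distributed over the day, the sample above can be extended with an hourly bucket (again using only the `DvcHostname` and `TimeGenerated` columns):
+
+ ```kusto
+PingFederateEvent
+ | where TimeGenerated > ago(24h)
+ | summarize count() by DvcHostname, bin(TimeGenerated, 1h)
+ ```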
+++
+## Prerequisites
+
+To integrate with [Recommended] PingFederate via AMA, make sure you have:
+
+- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- **Data connectors**: The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
 > This data connector depends on the [**PingFederateEvent**](https://aka.ms/sentinel-PingFederate-parser) parser, which is based on a Kusto function, to work as expected. The parser is deployed with the Microsoft Sentinel solution.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pingfederate?tab=Overview) in the Azure Marketplace.
sentinel Recommended Sonicwall Firewall Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-sonicwall-firewall-via-ama.md
+
+ Title: "[Recommended] SonicWall Firewall via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] SonicWall Firewall via AMA to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# [Recommended] SonicWall Firewall via AMA connector for Microsoft Sentinel
+
+Common Event Format (CEF) is an industry standard format on top of Syslog messages, used by SonicWall to allow event interoperability among different platforms. By connecting your CEF logs to Microsoft Sentinel, you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (SonicWall)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [SonicWall](https://www.sonicwall.com/support/) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "SonicWall"
+ | sort by TimeGenerated desc
+ ```
+
+**Summarize by destination IP and port**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "SonicWall"
+ | summarize count() by DestinationIP, DestinationPort, TimeGenerated
+ | sort by TimeGenerated desc
+ ```
+
+**Show all dropped traffic from the SonicWall Firewall**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "SonicWall"
+ | where AdditionalExtensions contains "fw_action='drop'"
+ ```
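+
+Building on the dropped-traffic sample, the key-value pairs in `AdditionalExtensions` can be parsed into a proper column. This sketch assumes the `fw_action='...'` format shown above:
+
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "SonicWall"
+ | extend FwAction = extract(@"fw_action='(\w+)'", 1, AdditionalExtensions)
+ | summarize count() by FwAction
+ ```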
+++
+## Prerequisites
+
+To integrate with [Recommended] SonicWall Firewall via AMA, make sure you have:
+
+- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- **Data connectors**: The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
+++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sonicwall-inc.sonicwall-networksecurity-azure-sentinal?tab=Overview) in the Azure Marketplace.
sentinel Recommended Trend Micro Apex One Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-trend-micro-apex-one-via-ama.md
+
+ Title: "[Recommended] Trend Micro Apex One via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] Trend Micro Apex One via AMA to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# [Recommended] Trend Micro Apex One via AMA connector for Microsoft Sentinel
+
+The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/appendices/syslog-mapping-cef.aspx) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/preface_001.aspx) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (TrendMicroApexOne)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+TMApexOneEvent
+ | sort by TimeGenerated
+ ```
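+
+To confirm events are still flowing without scanning the whole table, a quick freshness check returns only the timestamp of the most recent event:
+
+ ```kusto
+TMApexOneEvent
+ | summarize LastEvent = max(TimeGenerated)
+ ```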
+++
+## Prerequisites
+
+To integrate with [Recommended] Trend Micro Apex One via AMA, make sure you have:
+
+- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- **Data connectors**: The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+> [!NOTE]
+> This data connector depends on the [**TMApexOneEvent**](https://aka.ms/sentinel-TMApexOneEvent-parser) parser, which is based on a Kusto function, to work as expected. The parser is deployed with the Microsoft Sentinel solution.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-trendmicroapexone?tab=Overview) in the Azure Marketplace.
sentinel Recommended Varmour Application Controller Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-varmour-application-controller-via-ama.md
+
+ Title: "[Recommended] vArmour Application Controller via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] vArmour Application Controller via AMA to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# [Recommended] vArmour Application Controller via AMA connector for Microsoft Sentinel
+
+vArmour reduces operational risk and increases cyber resiliency by visualizing and controlling application relationships across the enterprise. This vArmour connector enables streaming of Application Controller Violation Alerts into Microsoft Sentinel, so you can take advantage of search & correlation, alerting, and threat intelligence enrichment for each log.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (vArmour)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [vArmour Networks](https://www.varmour.com/contact-us/) |
+
+## Query samples
+
+**Top 10 App to App violations**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "vArmour"
+ | where DeviceProduct == "AC"
+ | where Activity == "POLICY_VIOLATION"
+ | extend AppNameSrcDstPair = extract_all("AppName=;(\\w+)", AdditionalExtensions)
+ | summarize count() by tostring(AppNameSrcDstPair)
+ | top 10 by count_
+ ```
+
+**Top 10 Policy names matching violations**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "vArmour"
+ | where DeviceProduct == "AC"
+ | where Activity == "POLICY_VIOLATION"
+ | summarize count() by DeviceCustomString1
+ | top 10 by count_ desc
+ ```
+
+**Top 10 Source IPs generating violations**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "vArmour"
+ | where DeviceProduct == "AC"
+ | where Activity == "POLICY_VIOLATION"
+ | summarize count() by SourceIP
+ | top 10 by count_
+ ```
+
+**Top 10 Destination IPs generating violations**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "vArmour"
+ | where DeviceProduct == "AC"
+ | where Activity == "POLICY_VIOLATION"
+ | summarize count() by DestinationIP
+ | top 10 by count_
+ ```
+
+**Top 10 Application Protocols matching violations**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "vArmour"
+ | where DeviceProduct == "AC"
+ | where Activity == "POLICY_VIOLATION"
+ | summarize count() by ApplicationProtocol
+ | top 10 by count_
+ ```
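+
+The samples above share the same three filters, so they combine naturally into a trend view. This sketch charts violations per hour:
+
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "vArmour"
+ | where DeviceProduct == "AC"
+ | where Activity == "POLICY_VIOLATION"
+ | summarize Violations = count() by bin(TimeGenerated, 1h)
+ | render timechart
+ ```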
+++
+## Prerequisites
+
+To integrate with [Recommended] vArmour Application Controller via AMA, make sure you have:
+
+- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- **Data connectors**: The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/varmournetworks.varmour_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Recommended Wirex Network Forensics Platform Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-wirex-network-forensics-platform-via-ama.md
+
+ Title: "[Recommended] WireX Network Forensics Platform via AMA connector for Microsoft Sentinel"
+description: "Learn how to install the connector [Recommended] WireX Network Forensics Platform via AMA to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# [Recommended] WireX Network Forensics Platform via AMA connector for Microsoft Sentinel
+
+The WireX Systems data connector allows security professionals to integrate with Microsoft Sentinel to further enrich their forensics investigations: not only to encompass the contextual content offered by WireX, but also to analyze data from other sources, create custom dashboards that give the most complete picture during a forensic investigation, and create custom workflows.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | CommonSecurityLog (WireXNFPevents)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [WireX Systems](https://wirexsystems.com/contact-us/) |
+
+## Query samples
+
+**All Imported Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ ```
+
+**Imported DNS Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "DNS"
+ ```
+
+**Imported HTTP Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "HTTP"
+ ```
+
+**Imported TDS Events from WireX**
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ and ApplicationProtocol == "TDS"
+ ```
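+
+Since the samples above each filter on a single `ApplicationProtocol`, a quick way to see which protocols WireX is actually forwarding is to summarize across all of them:
+
+ ```kusto
+CommonSecurityLog
+ | where DeviceVendor == "WireX"
+ | summarize count() by ApplicationProtocol
+ | sort by count_ desc
+ ```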
+++
+## Prerequisites
+
+To integrate with [Recommended] WireX Network Forensics Platform via AMA, make sure you have:
+
+- **Azure Arc**: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)
+- **Data connectors**: The Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed. [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)
++
+## Vendor installation instructions
++
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
+
+> Notice that the data from all regions will be stored in the selected workspace.
++
+2. Secure your machine
+
+Make sure to configure the machine's security according to your organization's security policy.
++
+[Learn more >](https://aka.ms/SecureCEF)
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wirexsystems1584682625009.wirex_network_forensics_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Salesforce Service Cloud Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/salesforce-service-cloud-using-azure-function.md
- Title: "Salesforce Service Cloud (using Azure Function) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Salesforce Service Cloud (using Azure Function) to connect your data source to Microsoft Sentinel."
-Previously updated: 05/22/2023
-# Salesforce Service Cloud (using Azure Function) connector for Microsoft Sentinel
-
-The Salesforce Service Cloud data connector provides the capability to ingest information about your Salesforce operational events into Microsoft Sentinel through the REST API. The connector provides ability to review events in your org on an accelerated basis, get [event log files](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/event_log_file_hourly_overview.htm) in hourly increments for recent activity.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Azure function app code** | https://aka.ms/sentinel-SalesforceServiceCloud-functionapp |
-| **Log Analytics table(s)** | SalesforceServiceCloud_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
-
-## Query samples
-
-**Last Salesforce Service Cloud EventLogFile Events**
- ```kusto
-SalesforceServiceCloud
- | sort by TimeGenerated desc
- ```
---
-## Prerequisites
-
-To integrate with Salesforce Service Cloud (using Azure Function) make sure you have:
-- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
-- **REST API Credentials/permissions**: **Salesforce API Username**, **Salesforce API Password**, **Salesforce Security Token**, **Salesforce Consumer Key**, and **Salesforce Consumer Secret** are required for the REST API. [See the documentation to learn more about API](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm).
-## Vendor installation instructions
--
-> [!NOTE]
- > This connector uses Azure Functions to connect to the Salesforce Lightning Platform REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
--
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
--
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias SalesforceServiceCloud and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Salesforce%20Service%20Cloud/Parsers/SalesforceServiceCloud.txt). The function usually takes 10-15 minutes to activate after solution installation/update.
--
-**STEP 1 - Configuration steps for the Salesforce Lightning Platform REST API**
-
-1. See the [link](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm) and follow the instructions for obtaining Salesforce API Authorization credentials.
-2. On the **Set Up Authorization** step choose **Session ID Authorization** method.
-3. You must provide your client id, client secret, username, and password with user security token.
--
-> [!NOTE]
- > Ingesting data from on an hourly interval may require additional licensing based on the edition of the Salesforce Service Cloud being used. Please refer to [Salesforce documentation](https://www.salesforce.com/editions-pricing/service-cloud/) and/or support for more details.
--
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the Salesforce Service Cloud data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Salesforce API Authorization credentials, readily available.
---
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the Salesforce Service Cloud data connector using an ARM Template.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-SalesforceServiceCloud-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Workspace ID**, **Workspace Key**, **Salesforce API Username**, **Salesforce API Password**, **Salesforce Security Token**, **Salesforce Consumer Key**, **Salesforce Consumer Secret** and deploy.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the Salesforce Service Cloud data connector manually with Azure Functions (Deployment via Visual Studio Code).
--
-**1. Deploy a Function App**
-
-> NOTE:You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-SalesforceServiceCloud-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. SalesforceXXXXX).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
--
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- SalesforceUser
- SalesforcePass
- SalesforceSecurityToken
- SalesforceConsumerKey
- SalesforceConsumerSecret
- WorkspaceID
- WorkspaceKey
- logAnalyticsUri (optional)
-3. Once all application settings have been entered, click **Save**.
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-salesforceservicecloud?tab=Overview) in the Azure Marketplace.
sentinel Salesforce Service Cloud Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/salesforce-service-cloud-using-azure-functions.md
+
+ Title: "Salesforce Service Cloud (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Salesforce Service Cloud (using Azure Functions) to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# Salesforce Service Cloud (using Azure Functions) connector for Microsoft Sentinel
+
+The Salesforce Service Cloud data connector provides the capability to ingest information about your Salesforce operational events into Microsoft Sentinel through the REST API. The connector provides the ability to review events in your org on an accelerated basis and to get [event log files](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/event_log_file_hourly_overview.htm) in hourly increments for recent activity.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | SalesforceServiceCloud_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
+
+## Query samples
+
+**Last Salesforce Service Cloud EventLogFile Events**
+ ```kusto
+SalesforceServiceCloud
+ | sort by TimeGenerated desc
+ ```
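+
+If the parser hasn't activated yet (it can take 10-15 minutes, as noted below), raw ingestion can be confirmed directly against the `SalesforceServiceCloud_CL` table listed in the connector attributes:
+
+ ```kusto
+SalesforceServiceCloud_CL
+ | where TimeGenerated > ago(1h)
+ | take 10
+ ```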
+++
+## Prerequisites
+
+To integrate with Salesforce Service Cloud (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **REST API Credentials/permissions**: **Salesforce API Username**, **Salesforce API Password**, **Salesforce Security Token**, **Salesforce Consumer Key**, and **Salesforce Consumer Secret** are required for the REST API. [See the documentation to learn more about API](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm).
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the Salesforce Lightning Platform REST API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+**NOTE:** This data connector depends on a parser based on a Kusto function, deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, and search for the alias SalesforceServiceCloud to load the function code, or click [here](https://aka.ms/sentinel-SalesforceServiceCloud-parser). The function usually takes 10-15 minutes to activate after solution installation or update.
++
+**STEP 1 - Configuration steps for the Salesforce Lightning Platform REST API**
+
+1. See the [link](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/quickstart.htm) and follow the instructions for obtaining Salesforce API Authorization credentials.
+2. On the **Set Up Authorization** step choose **Session ID Authorization** method.
+3. You must provide your client id, client secret, username, and password with user security token.
++
+> [!NOTE]
 > Ingesting data on an hourly interval may require additional licensing based on the edition of the Salesforce Service Cloud being used. Please refer to [Salesforce documentation](https://www.salesforce.com/editions-pricing/service-cloud/) and/or support for more details.
++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the Salesforce Service Cloud data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Salesforce API Authorization credentials, readily available.
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-salesforceservicecloud?tab=Overview) in the Azure Marketplace.
sentinel Snowflake Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/snowflake-using-azure-function.md
- Title: "Snowflake (using Azure Function) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Snowflake (using Azure Function) to connect your data source to Microsoft Sentinel."
-Previously updated: 02/23/2023
-# Snowflake (using Azure Functions) connector for Microsoft Sentinel
-
-The Snowflake data connector provides the capability to ingest Snowflake [login logs](https://docs.snowflake.com/en/sql-reference/account-usage/login_history.html) and [query logs](https://docs.snowflake.com/en/sql-reference/account-usage/query_history.html) into Microsoft Sentinel using the Snowflake Python Connector. Refer to [Snowflake documentation](https://docs.snowflake.com/en/user-guide/python-connector.html) for more information.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Azure functions app code** | https://aka.ms/sentinel-SnowflakeDataConnector-functionapp |
-| **Log Analytics table(s)** | Snowflake_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**All Snowflake Events**
- ```kusto
-Snowflake_CL
- | sort by TimeGenerated desc
- ```
---
-## Prerequisites
-
-To integrate with Snowflake (using Azure Functions) make sure you have:
-- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
-- **Snowflake Credentials**: **Snowflake Account Identifier**, **Snowflake User** and **Snowflake Password** are required for connection. See the documentation to learn more about [Snowflake Account Identifier](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#). Instructions on how to create a user for this connector can be found below.
-## Vendor installation instructions
--
-> [!NOTE]
- > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
--
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected [**Snowflake**](https://aka.ms/sentinel-SnowflakeDataConnector-parser) which is deployed with the Microsoft Sentinel Solution.
--
-**STEP 1 - Creating user in Snowflake**
-
-To query data from Snowflake you need a user that is assigned to a role with sufficient privileges and a virtual warehouse cluster. The initial size of this cluster will be set to small but if it is insufficient, the cluster size can be increased as necessary.
-
-1. Enter the Snowflake console.
-2. Switch role to SECURITYADMIN and [create a new role](https://docs.snowflake.com/en/sql-reference/sql/create-role.html):
-
-USE ROLE SECURITYADMIN;
-CREATE OR REPLACE ROLE EXAMPLE_ROLE_NAME;
-
-3. Switch role to SYSADMIN and [create a warehouse](https://docs.snowflake.com/en/sql-reference/sql/create-warehouse.html) and [grant access](https://docs.snowflake.com/en/sql-reference/sql/grant-privilege.html) to it:
-
-USE ROLE SYSADMIN;
-CREATE OR REPLACE WAREHOUSE EXAMPLE_WAREHOUSE_NAME
- WAREHOUSE_SIZE = 'SMALL'
- AUTO_SUSPEND = 5
- AUTO_RESUME = true
- INITIALLY_SUSPENDED = true;
-GRANT USAGE, OPERATE ON WAREHOUSE EXAMPLE_WAREHOUSE_NAME TO ROLE EXAMPLE_ROLE_NAME;
-
-4. Switch role to SECURITYADMIN and [create a new user](https://docs.snowflake.com/en/sql-reference/sql/create-user.html):
-
-USE ROLE SECURITYADMIN;
-CREATE OR REPLACE USER EXAMPLE_USER_NAME
- PASSWORD = 'example_password'
- DEFAULT_ROLE = EXAMPLE_ROLE_NAME
- DEFAULT_WAREHOUSE = EXAMPLE_WAREHOUSE_NAME
-;
-
-5. Switch role to ACCOUNTADMIN and [grant access to snowflake database](https://docs.snowflake.com/en/sql-reference/account-usage.html#enabling-account-usage-for-other-roles) for role.
-
-USE ROLE ACCOUNTADMIN;
-GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE EXAMPLE_ROLE_NAME;
-
-6. Switch role to SECURITYADMIN and [assign role](https://docs.snowflake.com/en/sql-reference/sql/grant-role.html) to user:
-
-USE ROLE SECURITYADMIN;
-GRANT ROLE EXAMPLE_ROLE_NAME TO USER EXAMPLE_USER_NAME;
-
->**IMPORTANT:** Save user and API password created during this step as they will be used during deployment step.
--
-**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
-
->**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Snowflake credentials, readily available.
---
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the data connector using an ARM Template.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-SnowflakeDataConnector-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Snowflake Account Identifier**, **Snowflake User**, **Snowflake Password**, **Microsoft Sentinel Workspace Id**, **Microsoft Sentinel Shared Key**
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the data connector manually with Azure Functions (Deployment via Visual Studio Code).
--
-**1. Deploy a Function App**
-
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/create-first-function-vs-code-python) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-SnowflakeDataConnector-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. Snowflake12).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
--
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- SNOWFLAKE_ACCOUNT
- SNOWFLAKE_USER
- SNOWFLAKE_PASSWORD
- WORKSPACE_ID
- SHARED_KEY
- logAnalyticsUri (optional)
-4. Once all application settings have been entered, click **Save**.
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-snowflake?tab=Overview) in the Azure Marketplace.
sentinel Snowflake Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/snowflake-using-azure-functions.md
+
+ Title: "Snowflake (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Snowflake (using Azure Functions) to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# Snowflake (using Azure Functions) connector for Microsoft Sentinel
+
+The Snowflake data connector provides the capability to ingest Snowflake [login logs](https://docs.snowflake.com/en/sql-reference/account-usage/login_history.html) and [query logs](https://docs.snowflake.com/en/sql-reference/account-usage/query_history.html) into Microsoft Sentinel using the Snowflake Python Connector. Refer to [Snowflake documentation](https://docs.snowflake.com/en/user-guide/python-connector.html) for more information.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Snowflake_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**All Snowflake Events**
+ ```kusto
+Snowflake_CL
+ | sort by TimeGenerated desc
+ ```
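+
+Once the function app is running, an hourly breakdown helps verify that the polling cadence looks right. A minimal sketch using only the standard `TimeGenerated` column:
+
+ ```kusto
+Snowflake_CL
+ | where TimeGenerated > ago(24h)
+ | summarize Events = count() by bin(TimeGenerated, 1h)
+ ```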
+++
+## Prerequisites
+
+To integrate with Snowflake (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Snowflake Credentials**: **Snowflake Account Identifier**, **Snowflake User** and **Snowflake Password** are required for connection. See the documentation to learn more about [Snowflake Account Identifier](https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#). Instructions on how to create a user for this connector can be found below.
++
+## Vendor installation instructions
++
+> [!NOTE]
 > This connector uses Azure Functions to connect to the Azure Blob Storage API to pull logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+> [!NOTE]
 > This data connector depends on the [**Snowflake**](https://aka.ms/sentinel-SnowflakeDataConnector-parser) parser, which is based on a Kusto function, to work as expected. The parser is deployed with the Microsoft Sentinel solution.
++
+**STEP 1 - Creating user in Snowflake**
+
+To query data from Snowflake, you need a user that is assigned to a role with sufficient privileges and a virtual warehouse cluster. The initial size of this cluster is set to small, but if it is insufficient, the cluster size can be increased as necessary.
+
+1. Enter the Snowflake console.
+1. Switch role to SECURITYADMIN and [create a new role](https://docs.snowflake.com/en/sql-reference/sql/create-role.html):
+
+   ```sql
+   USE ROLE SECURITYADMIN;
+   CREATE OR REPLACE ROLE EXAMPLE_ROLE_NAME;
+   ```
+
+1. Switch role to SYSADMIN and [create a warehouse](https://docs.snowflake.com/en/sql-reference/sql/create-warehouse.html) and [grant access](https://docs.snowflake.com/en/sql-reference/sql/grant-privilege.html) to it:
+
+   ```sql
+   USE ROLE SYSADMIN;
+   CREATE OR REPLACE WAREHOUSE EXAMPLE_WAREHOUSE_NAME
+     WAREHOUSE_SIZE = 'SMALL'
+     AUTO_SUSPEND = 5
+     AUTO_RESUME = true
+     INITIALLY_SUSPENDED = true;
+   GRANT USAGE, OPERATE ON WAREHOUSE EXAMPLE_WAREHOUSE_NAME TO ROLE EXAMPLE_ROLE_NAME;
+   ```
+
+1. Switch role to SECURITYADMIN and [create a new user](https://docs.snowflake.com/en/sql-reference/sql/create-user.html):
+
+   ```sql
+   USE ROLE SECURITYADMIN;
+   CREATE OR REPLACE USER EXAMPLE_USER_NAME
+     PASSWORD = 'example_password'
+     DEFAULT_ROLE = EXAMPLE_ROLE_NAME
+     DEFAULT_WAREHOUSE = EXAMPLE_WAREHOUSE_NAME;
+   ```
+
+1. Switch role to ACCOUNTADMIN and [grant access to the snowflake database](https://docs.snowflake.com/en/sql-reference/account-usage.html#enabling-account-usage-for-other-roles) for the role:
+
+   ```sql
+   USE ROLE ACCOUNTADMIN;
+   GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE EXAMPLE_ROLE_NAME;
+   ```
+
+1. Switch role to SECURITYADMIN and [assign the role](https://docs.snowflake.com/en/sql-reference/sql/grant-role.html) to the user:
+
+   ```sql
+   USE ROLE SECURITYADMIN;
+   GRANT ROLE EXAMPLE_ROLE_NAME TO USER EXAMPLE_USER_NAME;
+   ```
+
+>**IMPORTANT:** Save the user name and password created during this step, as they will be used during the deployment step.
++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as Snowflake credentials, readily available.
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-snowflake?tab=Overview) in the Azure Marketplace.
sentinel Symantec Vip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-vip.md
Title: "Symantec VIP connector for Microsoft Sentinel"
description: "Learn how to install the connector Symantec VIP to connect your data source to Microsoft Sentinel." Previously updated : 08/28/2023 Last updated : 10/23/2023
SymantecVIP
 | top 10 by count_
 ```

## Prerequisites

To integrate with Symantec VIP, make sure you have:

- **Symantec VIP**: must be configured to export logs via Syslog

## Vendor installation instructions

> [!NOTE]
> This data connector depends on a parser based on a Kusto function, deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, and search for the alias Symantec VIP to load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Symantec%20VIP/Parsers/SymantecVIP.txt). On the second line of the query, enter the hostname(s) of your Symantec VIP device(s) and any other unique identifiers for the log stream. The function usually takes 10-15 minutes to activate after solution installation/update.
Typically, you should install the agent on a different computer from the one on
> Syslog logs are collected only from **Linux** agents.

2. Configure the logs to be collected
Configure the facilities you want to collect and their severities.
2. Select **Apply below configuration to my machines** and select the facilities and severities.
3. Click **Save**.
-3. Connect the Symantec VIP
-Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
+3. Configure and connect the Symantec VIP
+
+[Follow these instructions](https://help.symantec.com/cs/VIP_EG_INSTALL_CONFIG/VIP/v134652108_v128483142/Configuring-syslog) to configure the Symantec VIP Enterprise Gateway to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
## Next steps
sentinel Tenable Io Vulnerability Management Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/tenable-io-vulnerability-management-using-azure-function.md
Title: "Tenable.io Vulnerability Management (using Azure Functions) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Tenable.io Vulnerability Management (using Azure Functions) to connect your data source to Microsoft Sentinel."
+description: "Learn how to install the connector Tenable.io Vulnerability Management (using Azure Function) to connect your data source to Microsoft Sentinel."
Previously updated: 07/26/2023. Last updated: 10/23/2023.
Tenable_IO_Assets_CL
To integrate with Tenable.io Vulnerability Management (using Azure Function), make sure you have:

- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
-- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** are required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/nessus/Content/GenerateAnAPIKey.htm) for obtaining credentials.
+- **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** are required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) for obtaining credentials.
## Vendor installation instructions
To integrate with Tenable.io Vulnerability Management (using Azure Function) mak
**STEP 1 - Configuration steps for Tenable.io**
- [Follow the instructions](https://docs.tenable.com/nessus/Content/GenerateAnAPIKey.htm) to obtain the required API credentials.
+ [Follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) to obtain the required API credentials.
sentinel Thehive Project Thehive Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/thehive-project-thehive-using-azure-functions.md
+
+ Title: "TheHive Project - TheHive (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector TheHive Project - TheHive (using Azure Functions) to connect your data source to Microsoft Sentinel."
+Last updated: 10/23/2023
+# TheHive Project - TheHive (using Azure Functions) connector for Microsoft Sentinel
+
+The [TheHive](http://thehive-project.org/) data connector provides the capability to ingest common TheHive events into Microsoft Sentinel through webhooks. TheHive can notify external systems of modification events (case creation, alert update, task assignment) in real time. When a change occurs in TheHive, an HTTPS POST request with event information is sent to a callback data connector URL. Refer to the [Webhooks documentation](https://docs.thehive-project.org/thehive/legacy/thehive3/admin/webhooks/) for more information. The connector provides the ability to get events, which helps you examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, and more.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | TheHive_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**TheHive Events - All Activities.**
+ ```kusto
+TheHive_CL
+ | sort by TimeGenerated desc
+ ```
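+
+To spot-check the most recent webhook deliveries rather than the full stream, the sample above can be narrowed with `top`:
+
+ ```kusto
+TheHive_CL
+ | top 10 by TimeGenerated desc
+ ```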
+++
+## Prerequisites
+
+To integrate with TheHive Project - TheHive (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Webhooks Credentials/permissions**: **TheHiveBearerToken**, **Callback URL** are required for working Webhooks. See the documentation to learn more about [configuring Webhooks](https://docs.thehive-project.org/thehive/installation-and-configuration/configuration/webhooks/).
++
+## Vendor installation instructions
++
+> [!NOTE]
 > This data connector uses Azure Functions with an HTTP trigger to listen for POST requests containing logs and pull them into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+> [!NOTE]
 > This data connector depends on the [**TheHive**](https://aka.ms/sentinel-TheHive-parser) parser, which is based on a Kusto function, to work as expected. The parser is deployed with the Microsoft Sentinel solution.
++
+**STEP 1 - Configuration steps for the TheHive**
+
+ Follow the [instructions](https://docs.thehive-project.org/thehive/installation-and-configuration/configuration/webhooks/) to configure Webhooks.
+
+1. The authentication method is *Bearer Auth*.
+2. Generate the **TheHiveBearerToken** according to your password policy.
+3. Set up Webhook notifications in the *application.conf* file, including the **TheHiveBearerToken** parameter.
++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+>**IMPORTANT:** Before deploying the TheHive data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following).
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-thehive?tab=Overview) in the Azure Marketplace.
sentinel Workplace From Facebook Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/workplace-from-facebook-using-azure-functions.md
+
+ Title: "Workplace from Facebook (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Workplace from Facebook (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 10/23/2023++++
+# Workplace from Facebook (using Azure Functions) connector for Microsoft Sentinel
+
+The [Workplace](https://www.workplace.com/) data connector provides the capability to ingest common Workplace events into Microsoft Sentinel through Webhooks. Webhooks enable custom integration apps to subscribe to events in Workplace and receive updates in real time. When a change occurs in Workplace, an HTTPS POST request with event information is sent to a callback data connector URL. Refer to the [Webhooks documentation](https://developers.facebook.com/docs/workplace/reference/webhooks) for more information. The connector provides the ability to get events, which helps you examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, and more.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Workplace_Facebook_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Workplace Events - All Activities.**
+ ```kusto
+Workplace_Facebook_CL
+| sort by TimeGenerated desc
+ ```
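+
+To get a quick sense of event volume, you can bin the results by hour. This sketch relies only on the standard `TimeGenerated` column of `Workplace_Facebook_CL`.
+
+ ```kusto
+// Count ingested Workplace events per hour over the last day
+Workplace_Facebook_CL
+| where TimeGenerated > ago(1d)
+| summarize EventCount = count() by bin(TimeGenerated, 1h)
+| sort by TimeGenerated desc
+ ```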
+++
+## Prerequisites
+
+To integrate with Workplace from Facebook (using Azure Functions), make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Webhooks Credentials/permissions**: **WorkplaceAppSecret**, **WorkplaceVerifyToken**, and **Callback URL** are required for working Webhooks. See the documentation to learn more about [configuring Webhooks](https://developers.facebook.com/docs/workplace/reference/webhooks) and [configuring permissions](https://developers.facebook.com/docs/workplace/reference/permissions).
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This data connector uses an Azure Function based on an HTTP trigger that waits for POST requests with logs and pulls them into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **WorkplaceFacebook**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Workplace%20from%20Facebook/Parsers/Workplace_Facebook.txt). On the second line of the query, enter the hostname(s) of your Workplace Facebook device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
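+
+Once the parser is active, you can query through the **WorkplaceFacebook** alias instead of the raw table, for example:
+
+ ```kusto
+// Query Workplace events through the solution's parser function
+WorkplaceFacebook
+| sort by TimeGenerated desc
+| take 50
+ ```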
++
+**STEP 1 - Configuration steps for the Workplace**
+
+ Follow the instructions to configure Webhooks.
+
+1. Log in to Workplace with Admin user credentials.
+2. In the Admin panel, click **Integrations**.
+3. In the **All integrations** view, click **Create custom integration**.
+4. Enter the name and description and click **Create**.
+5. In the **Integration details** panel, show the **App secret** and copy it.
+6. In the **Integration permissions** panel, set all read permissions. Refer to the [permissions page](https://developers.facebook.com/docs/workplace/reference/permissions) for details.
+7. Now proceed to STEP 2 and follow the steps (listed in Option 1 or 2) to deploy the Azure Function.
+8. Enter the requested parameters, and also enter a token of your choice. Copy this token / note it for the upcoming step.
+9. After the deployment of the Azure Function completes successfully, open the Function App page, select your app, go to **Functions**, click **Get Function URL**, and copy it / note it for the upcoming step.
+10. Go back to Workplace from Facebook. In the **Configure webhooks** panel, on each tab, set **Callback URL** to the value you copied in step 9 above, and **Verify token** to the value you copied in step 8 above, which was obtained during STEP 2 of the Azure Functions deployment.
+11. Click **Save**.
++
+**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Functions**
+
+>**IMPORTANT:** Before deploying the Workplace data connector, make sure you have the Workspace ID and Workspace Primary Key (which can be copied from the following).
+++++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-workplacefromfacebook?tab=Overview) in the Azure Marketplace.
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn about Cybersixgill integration with Microsoft Sentinel @Cybersixgill](https://www.cybersixgill.com/partners/azure-sentinel/) - To connect Microsoft Sentinel to Cybersixgill TAXII Server and get access to Darkfeed, [contact Cybersixgill](mailto:azuresentinel@cybersixgill.com) to obtain the API Root, Collection ID, Username and Password.
+### ESET
+
+- [Learn about ESET's threat intelligence offering](https://www.eset.com/int/business/services/threat-intelligence/).
+- To connect Microsoft Sentinel to the ESET TAXII server, obtain the API root URL, Collection ID, Username and Password from your ESET account. Then follow the [general instructions](connect-threat-intelligence-taxii.md) and [ESET's knowledge base article](https://support.eset.com/en/kb8314-eset-threat-intelligence-with-ms-azure-sentinel).
+ ### Financial Services Information Sharing and Analysis Center (FS-ISAC) - Join [FS-ISAC](https://www.fsisac.com/membership?utm_campaign=ThirdParty&utm_source=MSFT&utm_medium=ThreatFeed-Join) to get the credentials to access this feed.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 09/11/2023 Last updated : 10/25/2023 # What's new in Microsoft Sentinel
This article lists recent features added for Microsoft Sentinel, and new feature
The listed features were released in the last three months. For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/azure-sentinel/bg-p/AzureSentinelBlog/label-name/What's%20New).
-See these [important announcements](#announcements) about recent changes to features and services.
> [!TIP] > Get notified when this page is updated by copying and pasting the following URL into your feed reader:
See these [important announcements](#announcements) about recent changes to feat
> `https://aka.ms/sentinel/rss` [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+## October 2023
+
+- [Changes to the documentation table of contents](#changes-to-the-documentation-table-of-contents)
+
+### Changes to the documentation table of contents
+
+We've made some significant changes in how the Microsoft Sentinel documentation is organized in the table of contents on the left-hand side of the library. Two important things to know:
+
+- Bookmarked links persist. Unless we retire an article, your saved and shared links to Microsoft Sentinel articles still work.
+- Articles used to be divided by concepts, how-tos, and tutorials. Now, the articles are organized by lifecycle or scenario with the related concepts, how-tos, and tutorials in those buckets.
+
+We hope these changes to the organization make your exploration of Microsoft Sentinel documentation more intuitive!
## September 2023
Also generally available are the similar incidents widget and the ability to add
### Updated MISP2Sentinel solution
-The open source threat intelligence sharing platform, MISP, has an updated solution to push indicators to Microsoft Sentinel. This notable solution utilizes the new [upload indicators API](#connect-threat-intelligence-with-the-upload-indicators-api) to take advantage of workspace granularity and align the MISP ingested TI to STIX-based properties.
+The open source threat intelligence sharing platform, MISP, has an updated solution to push indicators to Microsoft Sentinel. This notable solution utilizes the new upload indicators API to take advantage of workspace granularity and align the MISP ingested TI to STIX-based properties.
Learn more about the implementation details from the [MISP blog entry for MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/).
Microsoft Sentinel now provides you enhanced and enriched entity pages and panel
- Learn more about [entities in Microsoft Sentinel](entities.md).
-## July 2023
--- [Higher limits for entities in alerts and entity mappings in analytics rules](#higher-limits-for-entities-in-alerts-and-entity-mappings-in-analytics-rules)-- Announcement: [Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting](#changes-to-microsoft-defender-for-office-365-connector-alerts-that-apply-when-disconnecting-and-reconnecting)-- [Content Hub generally available and centralization changes released](#content-hub-generally-available-and-centralization-changes-released)-- [Deploy incident response playbooks for SAP](#deploy-incident-response-playbooks-for-sap)-- [Microsoft Sentinel solution for Dynamics 365 Finance and Operations (Preview)](#microsoft-sentinel-solution-for-dynamics-365-finance-and-operations-preview)-- [Simplified pricing tiers](#simplified-pricing-tiers) in [Announcements](#announcements) section below-- [Monitor and optimize the execution of your scheduled analytics rules (Preview)](#monitor-and-optimize-the-execution-of-your-scheduled-analytics-rules-preview)-
-### Higher limits for entities in alerts and entity mappings in analytics rules
-
-The following limits on entities in alerts and entity mappings in analytics rules have been raised:
-- You can now define **up to ten entity mappings** in an analytics rule (up from five).-- A single alert can now contain **up to 500 identified entities** in total, divided equally amongst the mapped entities.-- The *Entities* field in the alert has a **size limit of 64 KB**. (This size limit previously applied to the entire alert record.)-
-Learn more about entity mapping, and see a full description of these limits, in [Map data fields to entities in Microsoft Sentinel](map-data-fields-to-entities.md).
-
-Learn about other [service limits in Microsoft Sentinel](sentinel-service-limits.md).
-
-### Content Hub generally available and centralization changes released
-
-Content hub is now generally available (GA)! The [content hub centralization changes announced in February](#out-of-the-box-content-centralization-changes) have also been released. For more information on these changes and their impact, including more details about the tool provided to reinstate **IN USE** gallery templates, see [Out-of-the-box (OOTB) content centralization changes](sentinel-content-centralize.md).
-
-As part of the deployment for GA, the default view of the content hub is now the **List view**. The install process is streamlined as well. When selecting **Install** or **Install/Update**, the experience behaves like bulk installation. See our featured [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-microsoft-sentinel-content-hub-ga-and-ootb-content/ba-p/3854807) for more information.
-
-### Deploy incident response playbooks for SAP
-
-Take advantage of Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities in conjunction with SAP. Microsoft Sentinel presents purpose-built playbooks included in the [Microsoft Sentinel solution for SAP® applications](sap/solution-overview.md). You can use these playbooks to respond automatically to suspicious user activity in SAP systems, automating remedial actions in SAP RISE, SAP ERP, SAP Business Technology Platform (BTP) as well as in Azure Active Directory.
-
-Learn more about [Microsoft Sentinel incident response playbooks for SAP](sap/sap-incident-response-playbooks.md).
-
-### Microsoft Sentinel solution for Dynamics 365 Finance and Operations (Preview)
-
-The Microsoft Sentinel Solution for Dynamics 365 Finance and Operations monitors and protects your Dynamics 365 Finance and Operations system: It collects audits and activity logs from the Dynamics 365 Finance and Operations environment, and detects threats, suspicious activities, illegitimate activities, and more.
-
-The solution includes the **Dynamics 365 Finance and Operations** connector and [built-in analytics rules](dynamics-365/dynamics-365-finance-operations-security-content.md#built-in-analytics-rules) to detect suspicious activity in your Dynamics 365 Finance and Operations environment.
-
-[Learn more about the solution](dynamics-365/dynamics-365-finance-operations-solution-overview.md).
-
-### Monitor and optimize the execution of your scheduled analytics rules (Preview)
-
-To ensure that Microsoft Sentinel's threat detection provides complete coverage in your environment, take advantage of its execution management tools. These tools consist of [insights](monitor-optimize-analytics-rule-execution.md#view-analytics-rule-insights) on your [scheduled analytics rules'](detect-threats-built-in.md#scheduled) execution, based on Microsoft Sentinel's [health and audit data](monitor-analytics-rule-integrity.md), and a facility to [manually rerun previous executions of rules](monitor-optimize-analytics-rule-execution.md#rerun-analytics-rules) on specific time windows, for testing and optimization purposes.
-
-[Learn more about monitoring and optimizing analytics rules](monitor-optimize-analytics-rule-execution.md).
-
-## June 2023
--- [Windows Forwarded Events connector is now generally available](#windows-forwarded-events-connector-is-now-generally-available)-- [Connect multiple SAP System Identifiers via the UI](#connect-multiple-sap-system-identifiers-via-the-ui-preview)-- [Classic alert automation due for deprecation](#classic-alert-automation-due-for-deprecation) (see Announcements)-- [Microsoft Sentinel solution for SAP® applications: new systemconfig.json file](#microsoft-sentinel-solution-for-sap-applications-new-systemconfigjson-file)-
-### Windows Forwarded Events connector is now generally available
-
-The Windows Forwarded Events connector is now generally available. The connector is available in both the Azure Commercial and Azure Government clouds. Review the [connector information](data-connectors/windows-forwarded-events.md).
-
-### Connect multiple SAP System Identifiers via the UI (Preview)
-
-You can now connect multiple SAP System Identifiers (SID) via the connector page in the UI, and gain insights to the connectivity health status of each. To gain access to this feature, **first [complete the sign-up form](https://aka.ms/SentinelSAPMultiSIDUX)**.
-
-Learn more about how to [deploy the container and SAP systems via the UI](sap/deploy-data-connector-agent-container.md) and how to [monitor the health of your SAP systems](monitor-sap-system-health.md).
-
-### Microsoft Sentinel solution for SAP® applications: new systemconfig.json file
-
-Microsoft Sentinel solution for SAP® applications uses the new *[systemconfig.json](sap/reference-systemconfig-json.md)* file from agent versions deployed on June 22 and later. For previous agent versions, you must still use the *[systemconfig.ini file](sap/reference-systemconfig.md)*.
-
-## May 2023
--- Connect your threat intelligence platform or custom solution to Microsoft Sentinel with the new [upload indicators API](#connect-threat-intelligence-with-the-upload-indicators-api)-- [Use Hunts to conduct end-to-end proactive threat hunting in Microsoft Sentinel](#use-hunts-to-conduct-end-to-end-proactive-threat-hunting)-- [Audit and track incident task activity](#audit-and-track-incident-task-activity)-- Updated the announcement for [Out-of-the-box content centralization changes](#out-of-the-box-content-centralization-changes) to include information on the **Next Steps** tab in data connectors that's deprecated.-
-### Connect threat intelligence with the upload indicators API
-
-There's a new and improved API for connecting your threat intelligence platform or custom solution to add Indicators of Compromise (IOCs) into Microsoft Sentinel. The data connector and the API it relies on offer the following improvements:
-- The threat indicator fields use the standardized format of the STIX specification.-- The Azure Active Directory (Azure AD) application registration only requires the Microsoft Sentinel Contributor role.-- The API request endpoint is scoped to the workspace level and the Azure AD application permissions required allow granular assignment at the workspace level.-
-Learn more about the [TI upload indicators API data connector](connect-threat-intelligence-upload-api.md).
-Learn more about the underlying [TI upload indicators API](upload-indicators-api.md).
-
-The [Threat Intelligence Platform data connector](connect-threat-intelligence-tip.md) is now on a path for deprecation. More details will be published on the precise timeline. New Microsoft Sentinel solutions should use the upload indicators API instead of the Microsoft Graph threat intelligence indicator API.
-
-### Use Hunts to conduct end-to-end proactive threat hunting
-
-Take your hunts to the next level. Stay organized and keep track of new, active and closed hunts. Don't wait to react, proactively track down detection gaps on specific MITRE ATT&CK techniques or your own hypotheses. Collect evidence, investigate entities, annotate your findings and share them with your team all in one screen.
-
-Learn more about [Hunts (Preview)](hunts.md).
-
-### Audit and track incident task activity
-
-Thanks to newly available information in the *SecurityIncident* table, you can now inspect the history and status of open tasks in your incidents, even on incidents that have been closed. Use the information to ensure your SOC's efficient and proper functioning.
-
-Learn more about [auditing and tracking incident tasks](audit-track-tasks.md).
-
-## April 2023
--- [RSA announcements](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/rsac-2023-microsoft-sentinel-empowering-the-soc-with-next-gen/ba-p/3803613)-- [Manage multiple workspaces with workspace manager](#manage-multiple-workspaces-with-workspace-manager-preview)-
-### Manage multiple workspaces with workspace manager (Preview)
-
-Centrally manage Microsoft Sentinel at scale with Workspace Manager. Whether you're working across workspaces or Azure AD tenants, workspace manager reduces the complexity.
-
-Learn more about [Microsoft Sentinel workspace manager](workspace-manager.md).
--
-## Announcements
--- [Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting](#changes-to-microsoft-defender-for-office-365-connector-alerts-that-apply-when-disconnecting-and-reconnecting)-- [Simplified pricing tiers](#simplified-pricing-tiers)-- [Classic alert automation due for deprecation](#classic-alert-automation-due-for-deprecation)-- [When disconnecting and connecting the MDI alerts connector - UniqueExternalId field is not populated (use the AlertName field)](#when-disconnecting-and-connecting-the-mdi-alerts-connectoruniqueexternalid-field-is-not-populated-use-the-alertname-field)-- [Microsoft Defender for Identity alerts will no longer refer to the MDA policies in the Alert ExternalLinks properties](#microsoft-defender-for-identity-alerts-will-no-longer-refer-to-the-mda-policies-in-the-alert-externallinks-properties)-- [WindowsEvent table enhancements](#windowsevent-table-enhancements)-- [Out-of-the-box content centralization changes](#out-of-the-box-content-centralization-changes)-- [New behavior for alert grouping in analytics rules](#new-behavior-for-alert-grouping-in-analytics-rules)-- [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip)-- [Account enrichment fields removed from Azure AD Identity Protection connector](#account-enrichment-fields-removed-from-azure-ad-identity-protection-connector)-- [Name fields removed from UEBA UserPeerAnalytics table](#name-fields-removed-from-ueba-userpeeranalytics-table)--
-### Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting
-
-To improve the overall experience for Microsoft Defender for Office 365 alerts, we've improved the sync between alerts and incidents, and increased the number of alerts that flow through the connector.
-
-To benefit from this change, disconnect and reconnect the Microsoft Defender for Office 365 connector. However, by taking this action, some fields are no longer populated:
--- `ExtendedProperties["InvestigationName"]`-- `ExtendedProperties["Status"]`-- `ExtendedLinks`-- `AdditionalActionsAndResults` located inside the `Entities` field-
-To retrieve the information that was previously retrieved by these fields, in the Microsoft 365 Defender portal, on the left, select **Alerts**, and locate the following information:
--- `ExtendedProperties["InvestigationName"]`: Under **Investigation ID**:
-
- :::image type="content" source="medio-connector-fields-investigation-id.png":::
--- `ExtendedProperties["Status"]`: Under **Investigation status**:
-
- :::image type="content" source="medio-connector-fields-investigation-status.png" alt-text="Screenshot showing the Microsoft Defender for Office 365 alerts Investigation status field in the Microsoft 365 Defender portal.":::
--- `ExtendedLinks`: Select **Investigation ID**, which opens the relevant **Investigation** page. -
-### Simplified pricing tiers
-Microsoft Sentinel is billed for the volume of data *analyzed* in Microsoft Sentinel and *stored* in Azure Monitor Log Analytics. So far, there have been two sets of pricing tiers, one for each product. Two things are happening:
--- Starting July 1, 2023, the separate Microsoft Sentinel pricing tiers are prefixed as *Classic* when viewing meters in Microsoft Cost Management invoice details. -- New, simplified pricing tiers are rolling out a unifying the billing experience for Microsoft Sentinel customers starting July 5, 2023.-
-##### Switch to new pricing
-Combining the pricing tiers offers a simplification to the overall billing and cost management experience, including visualization in the pricing page, and fewer steps estimating costs in the Azure calculator. To add further value to the new simplified tiers, the current [Microsoft Defender for Servers P2 benefit granting 500 MB/VM/day](../defender-for-cloud/faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) security data ingestion into Log Analytics has been extended to the simplified pricing tiers. This greatly increases the financial benefit of bringing eligible data ingested into Microsoft Sentinel for each VM protected in this manner.
-
-##### Free trial changes
-A slight change to how free trials are offered was made to provide further simplification. There used to be a free trial option that waived Microsoft Sentinel costs and charged Log Analytics costs regularly, this will no longer be offered as an option. Starting July 5, 2023 all new Microsoft Sentinel workspaces will result in a 31 day free trial of 10 GB/day for the combined ingestion and analysis costs on Microsoft Sentinel and Log Analytics.
-
-##### How do I get started with the simplified pricing tier?
-All new Microsoft Sentinel workspaces will automatically default to the simplified pricing tiers. Existing workspaces will have the choice to switch to the new pricing from Microsoft Sentinel settings. For more information, see the [simplified pricing tiers](billing.md#simplified-pricing-tiers) section of our cost planning documentation and this featured [blog post](https://aka.ms/SentinelSimplifiedPricing).
-
-### Classic alert automation due for deprecation
-
-Automated responses to alerts, in the form of playbooks, can be run in one of two ways:
-- **Classic:** adding the playbook to the list under **Alert automation (classic)** in the **Automated response** tab of the analytics rule that produced the alert.--- **Automation rule:** creating an automation rule to run in response to the creation of alerts, and the automation rule will run the playbook. This method has [several advantages, as described here](migrate-playbooks-to-automation-rules.md).-
-As of **June 2023**, you can no longer add playbooks to be run using the **Classic** method; rather, you must use [automation rules](automate-incident-handling-with-automation-rules.md).
-
-Playbooks in the existing **Classic** lists will continue to run until this method's scheduled deprecation in **March 2026**.
-
-We strongly encourage you to migrate any remaining playbooks in your **Classic** lists to run from automation rules instead. [Learn how to migrate playbooks to automation rules](migrate-playbooks-to-automation-rules.md).
-
-### When disconnecting and connecting the MDI alerts connector - UniqueExternalId field is not populated (use the AlertName field) 
-
-The Microsoft Defender for Identity alerts now support the Government Community Cloud (GCC). To enable this support, there is a change to the way alerts are sent to Microsoft Sentinel. 
-
-For customers connecting and disconnecting the MDI alerts connector, the `UniqueExternalId` field is no longer populated. The `UniqueExternalId` represents the alert, and was formerly located in the `ExtendedProperties` field. You can now obtain the ID through the `AlertName` field, which contains the alert’s name.
-
-Review the [complete mapping between the alert names and unique external IDs](/defender-for-identity/alerts-overview#security-alert-name-mapping-and-unique-external-ids).
-
-### Microsoft Defender for Identity alerts will no longer refer to the MDA policies in the Alert ExternalLinks properties
-
-Microsoft Defender for Identity alerts will no longer refer to the MDA policies in the Alert ExternalLinks properties due to a change in infrastructure performed on MDIs. Alerts will no longer contain any MDA links under **ExtendedLinks** with a **Label** that starts with **Defender for Cloud Apps**. This change will take effect April 30th, 2023. [Read more about this change](/defender-for-identity/whats-new#defender-for-identity-release-2198). 
-
-### WindowsEvent table enhancements
-
-The WindowsEvent schema has been expanded to include new fields, such as `Keywords`, `Version`, `Opcode`, `Correlation`, `SystemProcessId`, `SystemThreadId` and `EventRecordId`.
-
-These additions allow for more comprehensive analysis and for more information to be extracted and parsed from the event.
-
-If you aren't interested in ingesting the new fields, use ingest-time transformation in the AMA DCR to filter and drop the fields, while still ingesting the events. To ingest the events without the new fields, add the following to your DCRs: 
-
-```kusto
-"transformKql": "source | project-away TimeCreated, SystemThreadId, EventRecordId, SystemProcessId, Correlation, Keywords, Opcode, SystemUserId, Version"
-```
-Learn more about [ingest-time transformations](../azure-monitor/essentials/data-collection-transformations.md).
-
-### Out-of-the-box content centralization changes
-A new banner has appeared in Microsoft Sentinel gallery pages! This informational banner has rolled out to all tenants to explain upcoming changes regarding out-of-the-box (OOTB) content. In short, the **Content hub** will be the central source whether you're looking for standalone content or packaged solutions. Banners appear in the templates section of **Workbooks**, **Hunting**, **Automation**, **Analytics** and **Data connectors** galleries. Here's an example of the banner in the **Workbooks** gallery.
-
-The banner reads, 'All Workbook templates, and additional out-of-the-box (OOTB) content are now centrally available in Content hub. Starting Q2 2023, only Workbook templates installed from the content hub will be available in this gallery. Learn more about the OOTB content centralization changes.'
-
-As part of this centralization change, the **Next Steps** tab on data connector pages [has been deprecated](sentinel-content-centralize.md#data-connector-page-change).
-
-For all the details on what these upcoming changes will mean for you, see [Microsoft Sentinel out-of-the-box content centralization changes](sentinel-content-centralize.md).
-
-### New behavior for alert grouping in analytics rules
-
-Starting **February 6, 2023** and continuing through the end of February, Microsoft Sentinel is rolling out a change in the way that incidents are created from analytics rules with certain event and alert grouping settings, and also the way that such incidents are updated by automation rules. This change is being made in order to produce incidents with more complete information and to simplify automation triggered by the creating and updating of incidents.
-
-The affected analytics rules are those with both of the following two settings:
-- **Event grouping** is set to **Trigger an alert for each event** (sometimes referred to as "alert per row" or "alert per result").-- **Alert grouping** is enabled, in any one of the [three possible configurations](detect-threats-custom.md#alert-grouping).-
-#### The problem
-
-Rules with these two settings generate unique alerts for each event (result) returned by the query. These alerts are then all grouped together into a single incident (or a small number of incidents, depending on the alert grouping configuration choice).
-
-The problem is that the incident is created as soon as the first alert is generated, so at that point the incident contains only the first alert. The remaining alerts are joined to the incident, one after the other, as they are generated. So you end up with a *single running of an analytics rule* resulting in:
-- One incident creation event *and*-- Up to 149 incident update events-
-These circumstances result in unpredictable behavior when evaluating the conditions defined in automation rules or populating the incident schema in playbooks:
--- **Incorrect evaluation of an incident's conditions by automation rules:**-
- Automation rules on this incident will run immediately on its creation, even with just the one alert included. So the automation rule will only consider the incident's status as containing the first alert, even though other alerts are being created nearly simultaneously (by the same running of the analytics rule) and will continue being added while the automation rule is running. So you end up with a situation where the automation rule's evaluation of the incident is incomplete and likely incorrect.
-
- If there are automation rules defined to run when the incident is *updated*, they will run again and again as each subsequent alert is added to the incident (even though the alerts were all generated by the same running of the analytics rule). So you'll have alerts being added and automation rules running, each time possibly incorrectly evaluating the conditions of the incident.
-
- Automation rules' conditions might ignore entities that only later become part of the incident but weren't included in the first alert/creation of the incident.
-
- In these cases, incorrect evaluation of an incident's condition may cause automation rules to run when they shouldn't, or not to run when they should. The result of this would be that the wrong actions would be taken on an incident, or that the right actions would not be taken.
--- **Information in later alerts being unavailable to playbooks run on the incident:**-
- When an automation rule calls a playbook, it passes the incident's detailed information to the playbook. Because of the behavior mentioned above, a playbook might only receive the details (entities, custom details, and so on) of the first alert in an incident, but not those from subsequent alerts. This means that the playbook's actions would not have access to all the information in the incident.
-
-#### The solution
-
-Going forward, instead of creating the incident as soon as the first alert is generated, Microsoft Sentinel will wait until a single running of an analytics rule has generated all of its alerts, and then create the incident, adding all the alerts to it at once. So instead of an incident creation event and a whole bunch of incident update events, you have only the incident creation event.
-
-Now, automation rules that run on the creation of an incident can evaluate the complete incident with all of its alerts (as well as entities and custom details) and its most updated properties, and any playbooks that run will similarly have the complete details of the incident.
---
-The following table describes the change in the incident creation and automation behaviors:
-
-| When incident created/updated with multiple alerts | Before the change | After the change |
-| -- | -- | -- |
-| **Automation rule** conditions are evaluated based on... | The first alert generated by the current running of the analytics rule. | All alerts and entities resulting from the current running of the analytics rule. |
-| **Playbook input** includes... | - Alerts list containing only the first alert of the incident.<br>- Entities list containing only entities from the first alert of the incident. | - Alerts list containing all the alerts triggered by this rule execution and grouped to this incident.<br>- Entities list containing the entities from all the alerts triggered by this rule execution and grouped to this incident. |
-| **SecurityIncident** table in Log Analytics shows... | -&nbsp;One&nbsp;row&nbsp;for&nbsp;*incident&nbsp;created*&nbsp;with&nbsp;one&nbsp;alert.<br>- Multiple&nbsp;events&nbsp;of&nbsp;*alert&nbsp;added*. | One row for *incident created* only after all alerts triggered by this rule execution have been added and grouped to this incident. |
-
-### Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
-
-As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) integrates [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between three levels of integration:
--- **Show high-impact alerts only (Default)** includes only alerts about known malicious or highly suspicious activities that might require attention. These alerts are chosen by Microsoft security researchers and are mostly of Medium and High severities.-- **Show all alerts** includes all AADIP alerts, including activity that might not be unwanted or malicious.-- **Turn off all alerts** disables any AADIP alerts from appearing in your Microsoft 365 Defender incidents.-
-Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 365 Defender integration](microsoft-365-defender-sentinel-integration.md) enabled now automatically receive AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
--- If you already have your AADIP connector enabled in Microsoft Sentinel, and you've enabled incident creation, you may receive duplicate incidents. To avoid this, you have a few choices, listed here in descending order of preference:-
- | Preference | Action in Microsoft 365 Defender | Action in Microsoft Sentinel |
- | - | - | - |
- | **1** | Keep the default AADIP integration of **Show high-impact alerts only**. | Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
- | **2** | Choose the **Show all alerts** AADIP integration. | Create automation rules to automatically close incidents with unwanted alerts.<br><br>Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
- | **3** | Don't use Microsoft 365 Defender for AADIP alerts:<br>Choose the **Turn off all alerts** option for AADIP integration. | Leave enabled those [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. |
-
- See the [Microsoft 365 Defender documentation for instructions](/microsoft-365/security/defender/investigate-alerts#configure-aad-ip-alert-service) on how to take the prescribed actions in Microsoft 365 Defender.
--- If you don't have your [AADIP connector](data-connectors/azure-active-directory-identity-protection.md) enabled, you must enable it. Be sure **not** to enable incident creation on the connector page. If you don't enable the connector, you may receive AADIP incidents without any data in them.--- If you're first enabling your Microsoft 365 Defender connector now, the AADIP connection was made automatically behind the scenes. You won't need to do anything else.-
-### Account enrichment fields removed from Azure AD Identity Protection connector
-
-As of **September 30, 2022**, alerts coming from the **Azure Active Directory Identity Protection connector** no longer contain the following fields:
-- CompromisedEntity-- ExtendedProperties["User Account"]-- ExtendedProperties["User Name"]-
-We are working to adapt Microsoft Sentinel's built-in queries and other operations affected by this change to look up these values in other ways (using the *IdentityInfo* table).
-
-In the meantime, or if you've built any custom queries or rules directly referencing these fields, you'll need another way to get this information. Use the following two-step process to have your queries look up these values in the *IdentityInfo* table:
-
-1. If you haven't already, **enable the UEBA solution** to sync the *IdentityInfo* table with your Azure AD logs. Follow the instructions in [this document](enable-entity-behavior-analytics.md).
-(If you don't intend to use UEBA in general, you can ignore the last instruction about selecting data sources on which to enable entity behavior analytics.)
-
-1. Incorporate the query below in your existing queries or rules to look up this data by joining the *SecurityAlert* table with the *IdentityInfo* table.
-
- ```kusto
- SecurityAlert
- | where TimeGenerated > ago(7d)
- | where ProductName == "Azure Active Directory Identity Protection"
- | mv-expand Entity = todynamic(Entities)
- | where Entity.Type == "account"
- | extend AadTenantId = tostring(Entity.AadTenantId)
- | extend AadUserId = tostring(Entity.AadUserId)
- | join kind=inner (
- IdentityInfo
- | where TimeGenerated > ago(14d)
- | distinct AccountTenantId, AccountObjectId, AccountUPN, AccountDisplayName
- | extend UserAccount = AccountUPN
- | extend UserName = AccountDisplayName
- | where isnotempty(AccountDisplayName) and isnotempty(UserAccount)
- | project AccountTenantId, AccountObjectId, UserAccount, UserName
- )
- on
- $left.AadTenantId == $right.AccountTenantId,
- $left.AadUserId == $right.AccountObjectId
- | extend CompromisedEntity = iff(CompromisedEntity == "N/A" or isempty(CompromisedEntity), UserAccount, CompromisedEntity)
- | project-away AadTenantId, AadUserId, AccountTenantId, AccountObjectId
- ```
-
-For information on looking up data to replace enrichment fields removed from the UEBA UserPeerAnalytics table, See [Name fields removed from UEBA UserPeerAnalytics table](#name-fields-removed-from-ueba-userpeeranalytics-table) for a sample query.
-
-### Name fields removed from UEBA UserPeerAnalytics table
-
-As of **September 30, 2022**, the UEBA engine no longer performs automatic lookups of user IDs and resolves them into names. This change resulted in the removal of four name fields from the *UserPeerAnalytics* table:
--- UserName-- UserPrincipalName-- PeerUserName-- PeerUserPrincipalName -
-The corresponding ID fields remain part of the table, and any built-in queries and other operations execute the appropriate name lookups in other ways (using the IdentityInfo table), so you shouldn't be affected by this change in nearly all circumstances.
-
-The only exception to this is if you've built custom queries or rules directly referencing any of these name fields. In this scenario, you can incorporate the following lookup queries into your own, so you can access the values that would have been in these name fields.
-
-The following query resolves **user** and **peer identifier fields**:
-
-```kusto
-UserPeerAnalytics
-| where TimeGenerated > ago(24h)
-// join to resolve user identifier fields
-| join kind=inner (
- IdentityInfo
- | where TimeGenerated > ago(14d)
- | distinct AccountTenantId, AccountObjectId, AccountUPN, AccountDisplayName
- | extend UserPrincipalNameIdentityInfo = AccountUPN
- | extend UserNameIdentityInfo = AccountDisplayName
- | project AccountTenantId, AccountObjectId, UserPrincipalNameIdentityInfo, UserNameIdentityInfo
-) on $left.AADTenantId == $right.AccountTenantId, $left.UserId == $right.AccountObjectId
-// join to resolve peer identifier fields
-| join kind=inner (
- IdentityInfo
- | where TimeGenerated > ago(14d)
- | distinct AccountTenantId, AccountObjectId, AccountUPN, AccountDisplayName
- | extend PeerUserPrincipalNameIdentityInfo = AccountUPN
- | extend PeerUserNameIdentityInfo = AccountDisplayName
- | project AccountTenantId, AccountObjectId, PeerUserPrincipalNameIdentityInfo, PeerUserNameIdentityInfo
-) on $left.AADTenantId == $right.AccountTenantId, $left.PeerUserId == $right.AccountObjectId
-```
-If your original query referenced the user or peer names (not just their IDs), substitute this query in its entirety for the table name ("UserPeerAnalytics") in your original query.
- ## Next steps > [!div class="nextstepaction"]
service-connector How To Provide Correct Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-provide-correct-parameters.md
+
+ Title: Provide correct parameters to Service Connector
+description: Learn how to pass correct parameters to Service Connector.
+++ Last updated : 09/11/2023++
+# Provide correct parameters to Service Connector
+
+If you're using a CLI tool to manage connections, it's crucial to understand how to pass correct parameters to Service Connector. In this guide, you gain insights into the fundamental properties and their proper value formats.
+
+## Prerequisites
+
+- This guide assumes that you already know the [basic concepts of Service Connector](concept-service-connector-internals.md).
+
+## Source service
+
+Source services are usually Azure compute services. Service Connector is an [Azure extension resource](../azure-resource-manager/management/extension-resource-types.md). When you send requests using REST tools (to create a connection, for example), the request URL should use the format `{source_resource_id}/providers/Microsoft.ServiceLinker/linkers/{linkerName}`, where `{source_resource_id}` matches one of the resource IDs listed in the table below.
+
+| Source service type | Resource ID format |
+| - | |
+| Azure App Service | `/subscriptions/{subscription}/resourceGroups/{source_resource_group}/providers/Microsoft.Web/sites/{site}` |
+| Azure App Service slot | `/subscriptions/{subscription}/resourceGroups/{source_resource_group}/providers/Microsoft.Web/sites/{site}/slots/{slot}` |
+| Azure Functions | `/subscriptions/{subscription}/resourceGroups/{source_resource_group}/providers/Microsoft.Web/sites/{site}` |
+| Azure Spring Apps | `/subscriptions/{subscription}/resourceGroups/{source_resource_group}/providers/Microsoft.AppPlatform/Spring/{spring}/apps/{app}/deployments/{deployment}` |
+| Azure Container Apps | `/subscriptions/{subscription}/resourceGroups/{source_resource_group}/providers/Microsoft.App/containerApps/{app}` |
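+
+For example, the following sketch lists the linkers on an App Service web app through the REST endpoint. The subscription, resource group, and site name are placeholders, and the API version shown is an assumption; substitute the values for your environment.
+
+ ```azurecli
+# Hypothetical sketch: list Service Connector linkers on a web app
+az rest --method get \
+    --url "https://management.azure.com/subscriptions/<subscription>/resourceGroups/<source_resource_group>/providers/Microsoft.Web/sites/<site>/providers/Microsoft.ServiceLinker/linkers?api-version=2022-05-01"
+ ```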
+
+## Target service
+
+Target services are backing services or dependency services that your compute services connect to. When you pass target resource information to Service Connector, the resource IDs aren't always top-level resources; they can also be subresources. Check the following table for the exact formats of all target services that Service Connector supports.
+
+| Target service type | Resource ID format |
+| - | -- |
+| Azure App Configuration | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.AppConfiguration/configurationStores/{config_store}` |
+| Azure Cache for Redis | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.Cache/redis/{server}/databases/{database}` |
+| Azure Cache for Redis (Enterprise) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.Cache/redisEnterprise/{server}/databases/{database}` |
+| Azure Cosmos DB (NoSQL) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.DocumentDB/databaseAccounts/{account}/sqlDatabases/{database}` |
+| Azure Cosmos DB (MongoDB) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.DocumentDB/databaseAccounts/{account}/mongodbDatabases/{database}` |
+| Azure Cosmos DB (Gremlin) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.DocumentDB/databaseAccounts/{account}/gremlinDatabases/{database}/graphs/{graph}` |
+| Azure Cosmos DB (Cassandra) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.DocumentDB/databaseAccounts/{account}/cassandraKeyspaces/{key_space}` |
+| Azure Cosmos DB (Table) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.DocumentDB/databaseAccounts/{account}/tables/{table}` |
+| Azure Database for MySQL | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.DBforMySQL/flexibleServers/{server}/databases/{database}` |
+| Azure Database for PostgreSQL | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{server}/databases/{database}` |
+| Azure Event Hubs | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.EventHub/namespaces/{namespace}` |
+| Azure Key Vault | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.KeyVault/vaults/{vault}` |
+| Azure Service Bus | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.ServiceBus/namespaces/{namespace}` |
+| Azure SQL Database | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.Sql/servers/{server}/databases/{database}` |
+| Azure SignalR Service | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.SignalRService/SignalR/{signalr}` |
+| Azure Storage (Blob) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.Storage/storageAccounts/{account}/blobServices/default` |
+| Azure Storage (Queue) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.Storage/storageAccounts/{account}/queueServices/default` |
+| Azure Storage (File) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.Storage/storageAccounts/{account}/fileServices/default` |
+| Azure Storage (Table) | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.Storage/storageAccounts/{account}/tableServices/default` |
+| Azure Web PubSub | `/subscriptions/{subscription}/resourceGroups/{target_resource_group}/providers/Microsoft.SignalRService/WebPubSub/{webpubsub}` |
+
+## Authentication type
+
+The authentication type refers to the authentication method used by the connection. The following authentication types are supported:
+
+* system managed identity
+* user managed identity
+* service principal
+* secret/connection string/access key
+
+A different subset of the authentication types can be used when specifying a different target service and a different client type. Check [how to integrate with target services](./how-to-integrate-postgres.md) for the supported combinations.
+
+## Client type
+
+Client type refers to your compute service's runtime stack or development framework. The client type often affects the connection string format of a database. The possible client types are:
+
+* `dapr`
+* `django`
+* `dotnet`
+* `go`
+* `java`
+* `kafka-springBoot`
+* `nodejs`
+* `none`
+* `php`
+* `python`
+* `ruby`
+* `springBoot`
+
+A different subset of the client types can be used when specifying a different target service and a different authentication type. Check [how to integrate with target services](./how-to-integrate-postgres.md) for the supported combinations.
+
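+Putting these parameters together, the following sketch creates a connection from a web app to Azure Blob Storage using a system-assigned managed identity; all resource names and IDs are placeholders.
+
+ ```azurecli
+# Hypothetical sketch: connect a web app to Azure Blob Storage
+# --source-id and --target-id follow the formats from the tables above
+az webapp connection create storage-blob \
+    --source-id "/subscriptions/<subscription>/resourceGroups/<source_resource_group>/providers/Microsoft.Web/sites/<site>" \
+    --target-id "/subscriptions/<subscription>/resourceGroups/<target_resource_group>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default" \
+    --system-identity \
+    --client-type dotnet
+ ```
+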
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to integrate target services](./how-to-integrate-postgres.md)
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
Last updated 10/19/2023 - # What is Service Connector? Service Connector helps you connect Azure compute services to other backing services. Service Connector configures the network settings and connection information (for example, generating environment variables) between compute services and target backing services in the management plane. Developers use their preferred SDK or library that consumes the connection information to do data plane operations against the target backing service.
See [what services are supported in Service Connector](#what-services-are-suppor
**Connect to a target backing service with just a single command or a few clicks:**
-Service Connector is designed for your ease of use. To create a connection, you'll need three required parameters: a target service instance, an authentication type between the compute service and the target service, and your application client type. Developers can use the Azure CLI or the guided Azure portal experience to create connections.
+Service Connector is designed for your ease of use. To create a connection, you need three required parameters: a target service instance, an authentication type between the compute service and the target service, and your application client type. Developers can use the Azure CLI or the guided Azure portal experience to create connections.
**Use Connection Status to monitor or identify connection issue:**
There are two major ways to use Service Connector for your Azure application:
* **Azure CLI:** Create, list, validate, and delete service-to-service connections with connection commands in the Azure CLI, as shown in the sketch after this list. * **Azure portal:** Use the guided portal experience to create service-to-service connections and manage connections with a hierarchy list.
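
For instance, a minimal sketch of the CLI experience, assuming a hypothetical web app `MyWebApp` and connection `MyConnection`:

```azurecli
# Hypothetical sketch: list and validate the connections of a web app
az webapp connection list --resource-group MyRG --name MyWebApp --output table
az webapp connection validate --resource-group MyRG --name MyWebApp --connection MyConnection
```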
+What's more, Service Connector's most fundamental features are also supported in the following client tools:
+
+* **Azure PowerShell:** manage connections with commands in Azure PowerShell.
+* **Terraform:** create and delete connections with the infrastructure-as-code tool (be aware of the [limitations](known-limitations.md)).
+* **Visual Studio:** manage connections of a project by integrating with the [Connected Services](/visualstudio/azure/overview-connected-services) feature in Visual Studio.
+* **IntelliJ:** list connections of Azure compute services in the [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
+
+Finally, you can also use Azure SDKs and API calls to interact with Service Connector. If you use these methods, we recommend reading [how to provide correct parameters](how-to-provide-correct-parameters.md) before you start.
+ ## Next steps Follow the tutorials listed below to start building your own application with Service Connector.
site-recovery Monitoring Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitoring-common-questions.md
Title: Common questions about Azure Site Recovery monitoring
description: Get answers to common questions about Azure Site Recovery monitoring, using inbuilt monitoring and Azure Monitor (Log Analytics) Previously updated : 07/31/2019 Last updated : 10/13/2023
By default, retention is for 31 days. You can increase the period in the **Usage
Typically the size of a log is 15-20 KB.
+## Built-in Azure Monitor alerts for Azure Site Recovery
+
+### Is there any cost for using built-in Azure Monitor alerts for Azure Site Recovery?
+
+With built-in Azure Monitor alerts, alerts for critical operations/failures are generated by default (you can view them in the portal or via non-portal interfaces) at no additional cost. However, routing these alerts to a notification channel (such as email) incurs a minor cost for notifications beyond the free tier (of 1,000 emails per month). [Learn more about Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+### Will the current email notification solution for Azure Site Recovery in Recovery Services vault continue to work?
+
+As of today, the current email notification solution coexists in parallel with the new built-in Azure Monitor alerts solution. We recommend that you try out the Azure Monitor based alerting to familiarize yourself with the new experience and take advantage of its capabilities.
+
+### What is the difference between an alert rule, an alert processing rule, and an action group?
+
+- Alert rule: Refers to a user-created rule that specifies the condition on which an alert should be fired.
+- Alert processing rule (earlier called Action rule): Refers to a user-created rule that specifies the notification channels a particular fired alert should be routed to. You can also use alert processing rules to suppress notifications for a period of time.
+- Action group: Refers to the notification channel (such as email, ITSM endpoint, logic app, webhook, and so on) that a fired alert can be routed to.
+
+In the case of built-in Azure Monitor alerts, alerts are already generated by default, so you don't need to create an alert rule. To route these alerts to a notification channel, create an alert processing rule and an action group. [Learn more](site-recovery-monitor-and-troubleshoot.md#configure-email-notifications-for-alerts)
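+
+As a minimal sketch (assuming a hypothetical resource group `MyRG`, Recovery Services vault, and email receiver; the `alert-processing-rule` commands may require a recent Azure CLI version), you could wire up notifications like this:
+
+ ```azurecli
+# Hypothetical sketch: route fired Site Recovery alerts to an email address
+az monitor action-group create --name AsrEmailGroup --resource-group MyRG \
+    --action email AdminEmail admin@contoso.com
+az monitor alert-processing-rule create --name RouteAsrAlerts --resource-group MyRG \
+    --rule-type AddActionGroups \
+    --action-groups "/subscriptions/<subscription>/resourceGroups/MyRG/providers/microsoft.insights/actionGroups/AsrEmailGroup" \
+    --scopes "/subscriptions/<subscription>/resourceGroups/MyRG/providers/Microsoft.RecoveryServices/vaults/<vault>"
+ ```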
+++++ ## Next steps
site-recovery Site Recovery Monitor And Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-monitor-and-troubleshoot.md
Title: Monitor Azure Site Recovery | Microsoft Docs
description: Monitor and troubleshoot Azure Site Recovery replication issues and operations using the portal Previously updated : 07/30/2019 Last updated : 10/25/2023
You might want to review [common monitoring questions](monitoring-common-questio
## Monitor in the dashboard
-1. In the vault, click **Overview**. The Recovery Services dashboard consolidates all monitoring information for the vault in a single location. There are pages for both Site Recovery and the Azure Backup service, and you can switch between them.
+1. In the vault, select **Overview**. The Recovery Services dashboard consolidates all monitoring information for the vault in a single location. There are pages for both Site Recovery and the Azure Backup service, and you can switch between them.
- ![Site Recovery dashboard](./media/site-recovery-monitor-and-troubleshoot/dashboard.png)
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/dashboard.png" alt-text="Screenshot displays Site Recovery dashboard." lightbox="./media/site-recovery-monitor-and-troubleshoot/dashboard.png":::
2. From the dashboard, drill down into different areas.
- ![Screenshot that shows the areas on the dashboard where you can drill down.](./media/site-recovery-monitor-and-troubleshoot/site-recovery-overview-page.png).
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/site-recovery-overview-page.png" alt-text="Screenshot displays the areas on the dashboard where you can drill down." lightbox="./media/site-recovery-monitor-and-troubleshoot/site-recovery-overview-page.png":::
-3. In **Replicated items**, click **View All** to see all the servers in the vault.
-4. Click the status details in each section to drill down.
+3. In **Replicated items**, select **View All** to see all the servers in the vault.
+4. Select the status details in each section to drill down.
5. In **Infrastructure view**, sort monitoring information by the type of machines you're replicating. ## Monitor replicated items
Not applicable | Machines that aren't currently eligible for a test failover. Fo
In **Configuration issues**, monitor any issues that might impact your ability to fail over successfully. -- Configuration issues (except for software update availability), are detected by a periodic validator operation that runs every 12 hours by default. You can force the validator operation to run immediately by clicking the refresh icon next to the **Configuration issues** section heading.-- Click the links to get more details. For issues impacting specific machines, click **needs attention** in the **Target configurations** column. Details include remediation recommendations.
+- Configuration issues (except for software update availability), are detected by a periodic validator operation that runs every 12 hours by default. You can force the validator operation to run immediately by selecting the refresh icon next to the **Configuration issues** section heading.
+- Select the links to get more details. For issues impacting specific machines, select **needs attention** in the **Target configurations** column. Details include remediation recommendations.
**State** | **Details** |
Missing resources | A specified resource can't be found or isn't available in th
Subscription quota | The available subscription resource quota balance is compared against the balance needed to fail over all of the machines in the vault.<br/><br/> If there aren't enough resources, an insufficient quota balance is reported.<br/><br/> Quotas are monitored for VM core count, VM family core count, and network interface card (NIC) count. Software updates | The availability of new software updates, and information about expiring software versions. - ## Monitor errors In **Error summary**, monitor currently active error symptoms that might impact replication of servers in the vault, and monitor the number of impacted machines. -- Errors impacting on-premises infrastructure components are shown are the beginning of the section. For example, non-receipt of a heartbeat from the Azure Site Recovery Provider on the on-premises configuration server, or Hyper-V host.
+- Errors impacting on-premises infrastructure components are shown at the beginning of the section. For example, non-receipt of a heartbeat from the Azure Site Recovery Provider on the on-premises configuration server, or Hyper-V host.
- Next, replication error symptoms impacting replicated servers are shown. - The table entries are sorted by decreasing order of the error severity, and then by decreasing count order of the impacted machines. - The impacted server count is a useful way to understand whether a single underlying issue might impact multiple machines. For example, a network glitch could potentially impact all machines that replicate to Azure.
In **Infrastructure view**, monitor the infrastructure components involved in re
- A green line indicates that connection is healthy. - A red line with the overlaid error icon indicates the existence of one or more error symptoms that impact connectivity.-- Hover the mouse pointer over the error icon to show the error and the number of impacted entities. Click the icon for a filtered list of impacted entities.
+- Hover the mouse pointer over the error icon to show the error and the number of impacted entities, and select the icon for a filtered list of impacted entities.
- ![Site Recovery infrastructure view (vault)](./media/site-recovery-monitor-and-troubleshoot/site-recovery-vault-infra-view.png)
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/site-recovery-vault-infra-view.png" alt-text="Screenshot displays Site Recovery infrastructure view (vault)." lightbox="./media/site-recovery-monitor-and-troubleshoot/site-recovery-vault-infra-view.png":::
### Tips for monitoring the infrastructure
In **Infrastructure view**, monitor the infrastructure components involved in re
**VMware replication to Azure** | Failed over/failed back | No **Hyper-V replication to Azure** | Failed over/failed back | No -- To see the infrastructure view for a single replicating machine, in the vault menu, click **Replicated items**, and select a server. ---
+- To see the infrastructure view for a single replicating machine, in the vault menu, select **Replicated items**, and select a server.
## Monitor recovery plans
In **Jobs**, monitor the status of Site Recovery operations.
Monitor jobs as follows:
-1. In the dashboard > **Jobs** section, you can see a summary of jobs that have completed, are in progress, or waiting for input, in the last 24 hours. You can click on any state to get more information about the relevant jobs.
-2. Click **View all** to see all jobs in the last 24 hours.
+1. In the dashboard > **Jobs** section, you can see a summary of jobs that have completed, are in progress, or are waiting for input, in the last 24 hours. You can select any state to get more information about the relevant jobs.
+2. Select **View all** to see all jobs in the last 24 hours.
> [!NOTE] > You can also access job information from the vault menu > **Site Recovery Jobs**.
-2. In the **Site Recovery Jobs** list, a list of jobs is displayed. On the top menu you can get error details for a specific jobs, filter the jobs list based on specific criteria, and export selected job details to Excel.
-3. You can drill into a job by clicking it.
+2. In the **Site Recovery Jobs** list, a list of jobs is displayed. On the top menu you can get error details for a specific job, filter the jobs list based on specific criteria, and export selected job details to Excel.
+3. You can drill into a job by selecting it.
## Monitor virtual machines
-In **Replicated items**, get a list of replicated machines.
- ![Site Recovery replicated items list view](./media/site-recovery-monitor-and-troubleshoot/site-recovery-virtual-machine-list-view.png)
+1. In **Replicated items**, get a list of replicated machines.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/site-recovery-virtual-machine-list-view.png" alt-text="Screenshot displays Site Recovery replicated items list view." lightbox="./media/site-recovery-monitor-and-troubleshoot/site-recovery-virtual-machine-list-view.png":::
2. You can view and filter information. On the action menu at the top, you can perform actions for a particular machine, including running a test failover, or viewing specific errors.
-3. Click **Columns** to show additional columns, For example to show RPO, target configuration issues, and replication errors.
-4. Click **Filter** to view information based on specific parameters such as replication health, or a particular replication policy.
-5. Right-click a machine to initiate operations such as test failover for it, or to view specific error details associated with it.
-6. Click a machine to drill into more details for it. Details include:
+3. Select **Columns** to show additional columns. For example, show RPO, target configuration issues, and replication errors.
+4. Select **Filter** to view information based on specific parameters such as replication health, or a particular replication policy.
+5. Select a machine to initiate operations such as test failover for it, or to view specific error details associated with it.
+6. Select a machine to drill into more details for it. Details include:
- **Replication information**: Current status and health of the machine. - **RPO** (recovery point objective): Current RPO for the virtual machine and the time at which the RPO was last computed. - **Recovery points**: Latest available recovery points for the machine. - **Failover readiness**: Indicates whether a test failover was run for the machine, the agent version running on the machine (for machines running the Mobility service), and any configuration issues. - **Errors**: List of replication error symptoms currently observed on the machine, and possible causes/actions.
- - **Events**: A chronological list of recent events impacting the machine. Error details shows the currently observable error symptoms, while events is a historical record of issues that have impacted the machine.
+   - **Events**: A chronological list of recent events impacting the machine. Error details show the currently observable error symptoms, while events are a historical record of issues that have impacted the machine.
- **Infrastructure view**: Shows state of infrastructure for the scenario when machines are replicating to Azure.
- ![Site Recovery replicated item details/overview](./media/site-recovery-monitor-and-troubleshoot/site-recovery-virtual-machine-details.png)
+   :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/site-recovery-virtual-machine-details.png" alt-text="Screenshot displays Site Recovery replicated item details view." lightbox="./media/site-recovery-monitor-and-troubleshoot/site-recovery-virtual-machine-details.png":::
## Subscribe to email notifications
You can subscribe to receive email notifications for these critical events:
Subscribe as follows:
-In the vault > **Monitoring** section, click **Site Recovery Events**.
-1. Click **Email notifications**.
+In the vault > **Monitoring** section, select **Site Recovery Events**.
+1. Select **Email notifications**.
1. In **Email notification**, turn on notifications and specify who to send to. You can send notifications to all subscription admins, and optionally to specific email addresses.
- ![Email notifications](./media/site-recovery-monitor-and-troubleshoot/email.png)
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/email.png" alt-text="Screenshot displays Email notifications view." lightbox="./media/site-recovery-monitor-and-troubleshoot/email.png":::
+
+## Built-in Azure Monitor alerts for Azure Site Recovery (preview)
+
+Azure Site Recovery also provides default alerts via Azure Monitor, which enables you to have a consistent experience for alert management across different Azure services. With Azure Monitor based alerts, you can route alerts to any notification channel supported by Azure Monitor, such as email, Webhook, Logic app, and more. You can also use other alert management capabilities offered by Azure Monitor, for example, suppressing notifications during a planned maintenance window.
+
+### Enable built-in Azure Monitor alerts
+
+To enable built-in Azure Monitor alerts for Azure Site Recovery, for a particular subscription, navigate to **Preview Features** in the [Azure portal](https://ms.portal.azure.com) and register the feature flag **EnableAzureSiteRecoveryAlertToAzureMonitor** for the selected subscription.
+
+> [!NOTE]
+> We recommend that you wait for 24 hours for the registration to take effect before testing out the feature.
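+
+If you prefer the command line, a minimal Azure CLI sketch follows. It assumes the flag is registered under the *Microsoft.RecoveryServices* resource provider namespace; verify the namespace for your environment before relying on it.
+
+```azurecli
+# Assumption: the flag lives under the Microsoft.RecoveryServices namespace
+az feature register \
+    --namespace Microsoft.RecoveryServices \
+    --name EnableAzureSiteRecoveryAlertToAzureMonitor
+
+# Check the registration state; it shows "Registered" once complete
+az feature show \
+    --namespace Microsoft.RecoveryServices \
+    --name EnableAzureSiteRecoveryAlertToAzureMonitor \
+    --query properties.state -o tsv
+```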
++
+### Alerts scenarios
+
+Once you register this feature, Azure Site Recovery generates the following default alerts (surfaced via Azure Monitor):
+
+- Enable disaster recovery failure alerts for Azure VM, Hyper-V, and VMware replication.
+- Replication health critical alerts for Azure VM, Hyper-V, and VMware replication.
+- Azure Site Recovery agent version expiry alerts for Azure VM and Hyper-V replication.
+- Azure Site Recovery agent not reachable alerts for Hyper-V replication.
+- Failover failure alerts for Azure VM, Hyper-V, and VMware replication.
+- Auto certification expiry alerts for Azure VM replication.
+
+To test how the alerts work for a test VM protected with Azure Site Recovery, you can disable public network access for the cache storage account so that a **Replication Health turned to critical** alert is generated. *Alerts* are generated by default, without any need for rule configuration. However, to enable *notifications* (for example, email notifications) for these generated alerts, you must create an alert processing rule as described in the following sections.
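+
+As an illustration, the following Azure CLI sketch toggles public network access on a hypothetical cache storage account; `myCacheStorageAccount` and `myResourceGroup` are placeholder names, and you should restore access once you've observed the test alert.
+
+```azurecli
+# Block public network access to the replication cache storage account.
+# Replication degrades, which should raise a "Replication Health turned
+# to critical" alert for the test VM.
+az storage account update \
+    --name myCacheStorageAccount \
+    --resource-group myResourceGroup \
+    --public-network-access Disabled
+
+# Restore access after the alert is observed
+az storage account update \
+    --name myCacheStorageAccount \
+    --resource-group myResourceGroup \
+    --public-network-access Enabled
+```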
+
+### View the generated Azure Site Recovery alerts in Azure Monitor
+
+Once alerts are generated, you can view and manage them from the Azure Monitor portal. Follow these steps:
+
+1. On the [Azure portal](https://ms.portal.azure.com), go to **Azure Monitor** > **Alerts**.
+2. Set the filter for **Monitor Service** = **Azure Site Recovery** to see Azure Site Recovery specific alerts.
+   You can also customize the values of other filters to see alerts for a specific time range of up to 30 days, or for vaults, subscriptions, severity, and alert state (user response).
+3. Select any alert of your interest to see further details. For example, the affected VM, possible causes, recommended action, etc.
+4. Once the event is mitigated, you can modify its state to **Closed** or **Acknowledged**.
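+
+Outside the portal, one hedged way to list these alerts is an Azure Resource Graph query; the sketch below assumes the `resource-graph` CLI extension is installed and that fired alerts are exposed through the `alertsmanagementresources` table.
+
+```azurecli
+# Requires: az extension add --name resource-graph
+# List fired alerts raised by Azure Site Recovery
+az graph query -q "alertsmanagementresources | where properties.essentials.monitorService == 'Azure Site Recovery' | project name, properties.essentials.severity, properties.essentials.alertState"
+```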
++
+### View the generated Azure Site Recovery alerts in Recovery Services vault
+
+Follow these steps to view the alerts generated for a particular vault via the vault experience:
+
+1. On the [Azure portal](https://ms.portal.azure.com), go to the Recovery Services vault that you are using.
+2. Select the **Alerts** section and filter for **Monitor Service** = **Azure Site Recovery** to see Azure Site Recovery specific alerts. You can customize the values of the other filters to see alerts for a specific time range of up to 30 days, for vaults, subscriptions, severity, and alert state (user response).
+3. Select any alert of your interest to see further details such as the affected VM, possible causes, recommended action, etc.
+4. Once the event is mitigated, you can modify its state to **Closed** or **Acknowledged**.
++
+### Configure email notifications for alerts
+
+To configure email notifications for built-in Azure Monitor alerts for Azure Site Recovery, you must create an alert processing rule in Azure Monitor. The alert processing rule specifies which alerts are routed to a particular notification channel (action group).
+
+**Follow these steps to create an alert processing rule:**
+
+1. Go to **Azure Monitor** > **Alerts** and select **Alert processing rules** on the top pane.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-site-recovery-button.png" alt-text="Screenshot displays alert processing rules option in Azure Monitor." lightbox="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-site-recovery-button.png":::
+
+2. Select **Create**.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-create-button.png" alt-text="Screenshot displays create new alert processing rule." lightbox="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-create-button.png":::
+
+3. Under **Scope** > **Select scope** of the alert processing rule, you can apply the rule to all the resources within a subscription. You can customize the scope further by applying filters, for example, to generate notifications only for alerts of a certain severity.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-scope-inline.png" alt-text="Screenshot displays select scope for the alert processing rule." lightbox="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-scope-inline.png":::
++
+4. In **Rule settings**, select **Apply action group** and **Create action group** (or use an existing one). An action group is the destination to which the notification for an alert is sent, for example, an email address.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/create-action-group.png" alt-text="Screenshot displays the Create new action group option." lightbox="./media/site-recovery-monitor-and-troubleshoot/create-action-group.png":::
+
+5. To create an action group, on the **Basics** tab, enter the name of the action group and select the subscription and resource group under which it must be created.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/azure-monitor-action-groups-basic.png" alt-text="Screenshot displays Configure notifications by creating action group." lightbox="./media/site-recovery-monitor-and-troubleshoot/azure-monitor-action-groups-basic.png":::
+
+6. Under the **Notifications** tab, select the destination of the notification **Email/ SMS Message/ Push/ Voice** and enter the recipient's email ID and other details as necessary.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/azure-monitor-email.png" alt-text="Screenshot displays the select required notification channel option." lightbox="./media/site-recovery-monitor-and-troubleshoot/azure-monitor-email.png":::
+
+7. Select **Review+Create** > **Create** to deploy the action group. The creation of the action group leads you back to the alert processing rule creation.
+ > [!NOTE]
+ > The created action group appears in the **Rule settings** page.
+
+8. On the **Scheduling** tab, select **Always**.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-scheduling.png" alt-text="Screenshot displays Scheduling options for alert processing rule." lightbox="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-scheduling.png":::
+
+9. On the **Details** tab, specify the subscription, resource group, and name of the alert processing rule being created.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-details.png" alt-text="Screenshot displays Save the alert processing rule in any subscription." lightbox="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-details.png":::
+
+10. Add Tags if needed and select **Review+Create** > **Create**. The alert processing rule will be active in a few minutes.
+
+### Configure notifications to non-email channels
+
+With Azure Monitor action groups, you can route alerts to other notification channels like webhooks, logic apps, functions, etc. [Learn more about supported action groups in Azure Monitor](../azure-monitor/alerts/action-groups.md).
++
+### Configure notifications through programmatic interfaces
+
+You can use the following interfaces supported by Azure Monitor to manage action groups and alert processing rules:
+
+- [Azure Monitor REST API reference](https://learn.microsoft.com/rest/api/monitor/)
+- [Azure Monitor PowerShell reference](https://learn.microsoft.com/powershell/module/az.monitor)
+- [Azure Monitor CLI reference](https://learn.microsoft.com/cli/azure/monitor)
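+
+As a hedged end-to-end sketch, the following Azure CLI commands create an email action group and an alert processing rule that routes Azure Site Recovery alerts to it. All resource names are placeholders, and `az monitor alert-processing-rule` requires the `alertsmanagement` CLI extension.
+
+```azurecli
+# Create an action group that emails an operator (placeholder names)
+az monitor action-group create \
+    --name asr-email-ag \
+    --resource-group myResourceGroup \
+    --action email asr-admin admin@contoso.com
+
+# Requires: az extension add --name alertsmanagement
+# Route Azure Site Recovery alerts in the subscription to the action group
+az monitor alert-processing-rule create \
+    --name asr-email-rule \
+    --resource-group myResourceGroup \
+    --scopes "/subscriptions/<subscription-id>" \
+    --rule-type AddActionGroups \
+    --action-groups "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/asr-email-ag" \
+    --filter-monitor-service Equals "Azure Site Recovery"
+```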
+
+### Suppress notifications during a planned maintenance window
+
+There might be scenarios, such as planned maintenance windows, during which Azure Site Recovery operations are expected to fail. If you need to suppress notifications during such periods, you can set up a suppression alert processing rule to run for a specific period.
+
+To create a suppression alert processing rule, use the same process followed for creating a notification-based alert processing rule described in the earlier section, with the following differences:
+
+1. Under **Rule Settings**, select **Suppress notifications**. If there is both a suppression alert processing rule and an action group alert processing rule applied on the same scope, the suppression rule takes precedence.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-suppression.png" alt-text="Screenshot displays Enable notification suppression." lightbox="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-suppression.png":::
+
+2. Under **Scheduling**, enter the window of time for which you want the alerts to be suppressed.
+
+ :::image type="content" source="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-schedule-window.png" alt-text="Screenshot displays Schedule time window for notification suppression." lightbox="./media/site-recovery-monitor-and-troubleshoot/alert-processing-rule-schedule-window.png":::
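+
+For reference, a CLI sketch of the same suppression rule (same placeholder names and `alertsmanagement` extension assumption as the earlier sketch) might look like this:
+
+```azurecli
+# Suppress all notifications in the scope during the maintenance window
+az monitor alert-processing-rule create \
+    --name asr-maintenance-suppression \
+    --resource-group myResourceGroup \
+    --scopes "/subscriptions/<subscription-id>" \
+    --rule-type RemoveAllActionGroups \
+    --schedule-start-datetime "2023-11-04 22:00:00" \
+    --schedule-end-datetime "2023-11-05 02:00:00"
+```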
+
+### Pricing
+
+With built-in Azure Monitor alerts, alerts for critical operations or failures are generated by default. You can view these alerts in the portal or via non-portal interfaces at no extra cost. However, to route these alerts to a notification channel (such as email), you incur a minor cost for notifications beyond the free tier (of 1000 emails per month). [Learn more about Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+ ## Next steps
-[Learn about](monitor-log-analytics.md) monitoring Site Recovery with Azure Monitor.
+Learn about [monitoring Site Recovery with Azure Monitor](monitor-log-analytics.md).
spring-apps How To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-domain.md
az spring certificate list \
### Auto sync certificate
-A certificate stored in Azure Key Vault sometimes gets renewed before it expires. Similarly, your organization's security policies for certificate management might require your DevOps team to replace certificates with new ones regularly. After you enable auto sync for a certificate, Azure Spring Apps starts to sync your key vault for a new version regularly - usually every 24 hours. If a new version is available, Azure Spring Apps imports it, and then reloads it for various components using the certificate without causing any downtime. The following list shows the affected components:
--- App custom domain-- [VMware Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md) custom domain-- [API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md) custom domain-- [VMware Tanzu Application Accelerator](./how-to-use-accelerator.md) custom domain
+A certificate stored in Azure Key Vault sometimes gets renewed before it expires. Similarly, your organization's security policies for certificate management might require your DevOps team to replace certificates with new ones regularly. After you enable auto sync for a certificate, Azure Spring Apps starts to sync your key vault for a new version regularly - usually every 24 hours. If a new version is available, Azure Spring Apps imports it, and then reloads it for various components using the certificate without causing any downtime. The following list shows the affected components and relevant scenarios:
+
+- App
+ - Custom domain
+- [VMware Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md)
+ - Custom domain
+- [API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md)
+ - Custom domain
+- [VMware Tanzu Application Accelerator](./how-to-use-accelerator.md)
+ - Connecting to a Git repository with a self-signed certificate.
- [Application Configuration Service for Tanzu](./how-to-enterprise-application-configuration-service.md)
+ - Connecting to a Git repository with a self-signed certificate.
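+
+As a minimal sketch, enabling auto sync when importing a Key Vault certificate might look like the following Azure CLI command; the `--enable-auto-sync` flag and all resource names are assumptions for illustration, so check `az spring certificate add --help` for the exact syntax.
+
+```azurecli
+# Import a Key Vault certificate with auto sync enabled
+# (--enable-auto-sync is assumed; resource names are placeholders)
+az spring certificate add \
+    --resource-group myResourceGroup \
+    --service myAzureSpringAppsInstance \
+    --name mycert \
+    --vault-uri https://myvault.vault.azure.net \
+    --vault-certificate-name mycert \
+    --enable-auto-sync
+```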
When Azure Spring Apps imports or reloads a certificate, an activity log is generated. To see the activity logs, navigate to your Azure Spring Apps instance in the Azure portal and select **Activity log** in the navigation pane.
virtual-desktop Multimedia Redirection Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection-intro.md
The following sites work with video playback redirection:
The following websites work with call redirection: - [WebRTC Sample Site](https://webrtc.github.io/samples)
+- [Content Guru Storm App](https://www.contentguru.com/en-us/news/content-guru-announces-its-storm-ccaas-solution-is-now-compatible-with-microsoft-azure-virtual-desktop/)
Microsoft Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365 when using the native Teams app. However, if you use Teams live events with a browser that supports Teams live events and multimedia redirection, multimedia redirection is a workaround that provides smoother Teams live events playback on Azure Virtual Desktop. Multimedia redirection supports Enterprise Content Delivery Network (ECDN) for Teams live events.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
At a high level, you'll need:
## Azure account with an active subscription
-You'll need an Azure account with an active subscription to deploy Azure Virtual Desktop. If you don't have one already, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Your account must be assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
+You need an Azure account with an active subscription to deploy Azure Virtual Desktop. If you don't have one already, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-You also need to make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription. To check the status of the resource provider and register if needed, select the relevant tab for your scenario and follow the steps.
+To deploy Azure Virtual Desktop, you need to assign the relevant Azure role-based access control (RBAC) roles. The specific role requirements are covered in each related article for deploying Azure Virtual Desktop, which are listed in the [Next steps](#next-steps) section.
+
+Also make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription. To check the status of the resource provider and register if needed, select the relevant tab for your scenario and follow the steps.
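+
+For example, if you use the Azure CLI, the check-and-register flow is a short sketch like this:
+
+```azurecli
+# Check whether the resource provider is registered
+az provider show \
+    --namespace Microsoft.DesktopVirtualization \
+    --query registrationState -o tsv
+
+# Register it if the previous command returns "NotRegistered"
+az provider register --namespace Microsoft.DesktopVirtualization
+```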
> [!IMPORTANT] > You must have permission to register a resource provider, which requires the `*/register/action` operation. This is included if your account is assigned the [contributor or owner role](../role-based-access-control/built-in-roles.md) on your subscription.
You also need to make sure you've registered the *Microsoft.DesktopVirtualizatio
## Identity
-To access desktops and applications from your session hosts, your users need to be able to authenticate. [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) is Microsoft's centralized cloud identity service that enables this capability. Microsoft Entra ID is always used to authenticate users for Azure Virtual Desktop. Session hosts can be joined to the same Microsoft Entra tenant, or to an Active Directory domain using [Active Directory Domain Services](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (AD DS) or [Microsoft Entra Domain Services](../active-directory-domain-services/overview.md) (Microsoft Entra Domain Services), providing you with a choice of flexible configuration options.
+To access desktops and applications from your session hosts, your users need to be able to authenticate. [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) is Microsoft's centralized cloud identity service that enables this capability. Microsoft Entra ID is always used to authenticate users for Azure Virtual Desktop. Session hosts can be joined to the same Microsoft Entra tenant, or to an Active Directory domain using [Active Directory Domain Services](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) (AD DS) or [Microsoft Entra Domain Services](../active-directory-domain-services/overview.md), providing you with a choice of flexible configuration options.
### Session hosts
To learn which URLs clients use to connect and that you must allow through firew
## Next steps
-Get started with Azure Virtual Desktop by creating a host pool. Head to the following tutorial to find out more.
+- For a simple way to get started with Azure Virtual Desktop by creating a sample infrastructure, see [Tutorial: Try Azure Virtual Desktop with a Windows 11 desktop](tutorial-create-connect-personal-desktop.md).
-> [!div class="nextstepaction"]
-> [Create and connect to a Windows 11 desktop with Azure Virtual Desktop](tutorial-create-connect-personal-desktop.md)
+- For a more in-depth and adaptable approach to deploying Azure Virtual Desktop, see [Create a host pool in Azure Virtual Desktop](create-host-pool.md).
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 09/13/2023 Last updated : 10/24/2023 ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [virtual-machines-disks-incremental-snapshots-restrictions](../../includes/virtual-machines-disks-incremental-snapshots-restrictions.md)]
+## Create incremental snapshots
+ # [Azure CLI](#tab/azure-cli) You can use the Azure CLI to create an incremental snapshot. You need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI.
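+
+For example, a minimal sketch (placeholder disk and snapshot names) that creates an incremental snapshot of an existing managed disk:
+
+```azurecli
+# Get the ID of the source managed disk
+diskId=$(az disk show -n myDisk -g myResourceGroup --query id -o tsv)
+
+# Create an incremental snapshot of that disk
+az snapshot create \
+    -g myResourceGroup \
+    -n mySnapshot \
+    --source $diskId \
+    --incremental true
+```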
You can also use Azure Resource Manager templates to create an incremental snaps
## Check snapshot status
-Incremental snapshots of Premium SSD v2 or Ultra Disks can't be used to create new disks until the background process copying the data into the snapshot has completed.
+Incremental snapshots of Premium SSD v2 or Ultra Disks can't be used to create new disks until the background process copying the data into the snapshot has completed.
You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot.
+> [!IMPORTANT]
+> You can't use the following sections to get the status of the background copy process for disk types other than Ultra Disk or Premium SSD v2. Snapshots of other disk types always report 100%.
+ ### CLI You have two options for getting the status of snapshots. You can either get a [list of all incremental snapshots associated with a specific disk](#clilist-incremental-snapshots), and their respective status, or you can get the [status of an individual snapshot](#cliindividual-snapshot).
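+
+As a quick sketch of the individual-snapshot case (placeholder names), the `completionPercent` property reports the progress of the background copy:
+
+```azurecli
+# Reports the background copy progress; 100.0 means the copy is complete
+az snapshot show \
+    -n mySnapshot \
+    -g myResourceGroup \
+    --query completionPercent -o tsv
+```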
az snapshot show -g resourcegroupname -n snapshotname --query [creationData.logi
## Next steps
+See the following articles to create disks from your snapshots using the [Azure CLI](scripts/create-managed-disk-from-snapshot.md) or [Azure PowerShell module](scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot.md).
+ See [Copy an incremental snapshot to a new region](disks-copy-incremental-snapshot-across-regions.md) to learn how to copy an incremental snapshot across regions. If you have more questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ.
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
This article focuses on general guidance for running your Linux distribution on
* The maximum size allowed for the VHD is 1,023 GB.
-* When you're installing the Linux system, we recommend that you use standard partitions rather than Logical Volume Manager (LVM). LMV is the default for many installations.
+* When you're installing the Linux system, we recommend that you use standard partitions rather than Logical Volume Manager (LVM). LVM is the default for many installations.
Using standard partitions will avoid LVM name conflicts with cloned VMs, particularly if an OS disk is ever attached to another identical VM for troubleshooting. You can use [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) on data disks.
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
The following are the recommended limits for the mentioned indicators
| Total number of Resource associations to a schedule | 3000 | | Resource associations on each dynamic scope | 1000 | | Number of dynamic scopes per Resource Group or Subscription per Region | 250 |
+| Number of dynamic scopes per Maintenance Configuration | 50 |
The following are the Dynamic Scope Limits for **each dynamic scope**
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
ms.devlang: azurecli
vm-linux Previously updated : 02/22/2023 Last updated : 10/24/2023
diskEncryptionSetId=$(az disk-encryption-set show --name $diskEncryptionSetName
az disk create -g $resourceGroupName -n $diskName --source $snapshotId --disk-encryption-set $diskEncryptionSetID --location eastus2euap ```
+## Check disk status
+
+When you create a managed disk from a snapshot, it starts a background copy process. You can attach a disk to a VM while this process is running, but you'll experience a performance impact (4k disks experience read impact; 512e disks experience both read and write impact). For Ultra Disks and Premium SSD v2, you can check the status of the background copy process with the following commands:
+
+> [!IMPORTANT]
+> You can't use the following commands to get the status of the background copy process for disk types other than Ultra Disk or Premium SSD v2. Other disk types always report 100%.
+
+```azurecli
+# Placeholder subscription, resource group, and disk names
+subscriptionId=yourSubscriptionID
+resourceGroupName=yourResourceGroupName
+diskName=yourDiskName
+
+# Select the subscription, then report the disk's background copy progress
+az account set --subscription $subscriptionId
+az disk show -n $diskName -g $resourceGroupName --query [completionPercent] -o tsv
+```
+ ## Clean up resources Run the following command to remove the resource group, VM, and all related resources.
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot.md
vm-windows Previously updated : 06/05/2017 Last updated : 10/24/2023
This script creates a managed disk from a snapshot. Use it to restore a virtual
[!code-powershell[main](../../../powershell_scripts/virtual-machine/create-managed-disk-from-snapshot/create-managed-disk-from-snapshot.ps1 "Create managed disk from snapshot")]
+## Check disk status
+
+When you create a managed disk from a snapshot, it starts a background copy process. You can attach a disk to a VM while this process is running, but you'll experience a performance impact (4k disks experience read impact; 512e disks experience both read and write impact). For Ultra Disks and Premium SSD v2, you can check the status of the background copy process with the [Azure CLI](create-managed-disk-from-snapshot.md#check-disk-status). This isn't currently supported with the Azure PowerShell module.
+
+> [!IMPORTANT]
+> You can't get the status of the background copy process for disk types other than Ultra Disk or Premium SSD v2. Other disk types always report 100%.
## Script explanation
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-vhd.md
Previously updated : 06/05/2017 Last updated : 10/24/2023
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Azure offers trusted launch as a seamless way to improve the security of [genera
|: |: | | Alma Linux | 8.4, 8.5, 8.6, 8.7, 8.8, 9.0, 9.1, 9.2 | | Azure Linux | 1.0, 2.0 |
-| CentOS | 8.3, 8.4 |
-| Debian |11 |
-| Oracle Linux |8.3, 8.4, 8.5, 8.6, 9.0 LVM |
+| Debian |11, 12 |
+| Oracle Linux |8.3, 8.4, 8.5, 8.6, 9.0, 9.1 LVM |
| RedHat Enterprise Linux |8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 9.0, 9.1 LVM |
-| SUSE Enterprise Linux |15SP3, 15SP4 |
+| SUSE Enterprise Linux |15SP3, 15SP4, 15SP5 |
| Ubuntu Server |18.04 LTS, 20.04 LTS, 22.04 LTS | | Windows 10 |Pro, Enterprise, Enterprise Multi-Session &#42; | | Windows 11 |Pro, Enterprise, Enterprise Multi-Session &#42; |
virtual-machines Oracle Weblogic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-weblogic.md
Previously updated : 12/22/2021 Last updated : 10/24/2023
You can also run WLS on the Azure Kubernetes Service. The solutions to do so are
WLS is a leading Java application server running some of the most mission-critical enterprise Java applications across the globe. WLS forms the middleware foundation for the Oracle software suite. Oracle and Microsoft are committed to empowering WLS customers with choice and flexibility to run workloads on Azure as a leading cloud platform.
-The Azure WLS solutions are aimed at making it as easy as possible to migrate your Java applications to Azure virtual machines. The solutions do so by generating deployed resources for most common cloud provisioning scenarios. The solutions automatically provision virtual network, storage, Java, WLS, and Linux resources. With minimal effort, WebLogic Server is installed. The solutions can set up security with a network security group, load balancing with Azure App Gateway or Oracle HTTP Server, authentication with Microsoft Entra ID, centralized logging using ELK and distributed caching with Oracle Coherence. You can also automatically connect to your existing database including Azure PostgreSQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure.
+The Azure WLS solutions are aimed at making it as easy as possible to migrate your Java applications to Azure virtual machines. The solutions do so by generating deployed resources for most common cloud provisioning scenarios. The solutions automatically provision virtual network, storage, Java, WLS, and Linux resources. With minimal effort, WebLogic Server is provisioned. The solutions can set up security with a network security group, load balancing with Azure App Gateway or Oracle HTTP Server, and distributed caching with Oracle Coherence. You can also automatically connect to your existing database including Azure PostgreSQL, Azure MySQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure.
:::image type="content" source="media/oracle-weblogic/wls-on-azure.gif" alt-text="You can use the Azure portal to deploy WebLogic Server on Azure":::
-There are four offers available to meet different scenarios: [single node without an admin server](https://portal.azure.com/#create/oracle.20191001-arm-oraclelinux-wls20191001-arm-oraclelinux-wls), [single node with an admin server](https://portal.azure.com/#create/oracle.20191009-arm-oraclelinux-wls-admin20191009-arm-oraclelinux-wls-admin), [cluster](https://portal.azure.com/#create/oracle.20191007-arm-oraclelinux-wls-cluster20191007-arm-oraclelinux-wls-cluster), and [dynamic cluster](https://portal.azure.com/#create/oracle.20191021-arm-oraclelinux-wls-dynamic-cluster20191021-arm-oraclelinux-wls-dynamic-cluster). The offers are available free of charge. These offers are described and linked below. You can find detailed documentation on the offers [here](https://wls-eng.github.io/arm-oraclelinux-wls/).
+There are solution templates available to meet different scenarios such as [single instance with an admin server](https://portal.azure.com/#create/oracle.20191009-arm-oraclelinux-wls-admin20191009-arm-oraclelinux-wls-admin), and [cluster](https://portal.azure.com/#create/oracle.20191007-arm-oraclelinux-wls-cluster20191007-arm-oraclelinux-wls-cluster). The solutions are available free of charge. These solutions are described and linked below. You can find detailed documentation on the solutions [here](https://wls-eng.github.io/arm-oraclelinux-wls/).
_These offers are Bring-Your-Own-License_. They assume you already have the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
-The offers support a range of operating system, Java, and WLS versions through base images (such as WebLogic Server 14 and Java 11 on Oracle Linux 7.6). These base images are also available on Azure on their own. The base images are suitable for customers that require complex, customized Azure deployments. The current set of base images is available in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=WebLogic%20Server%20Base%20Image&page=1).
+The solution templates support a range of operating system, Java, and WLS versions through base images (such as WebLogic Server 14 and Java 11 on Red Hat Enterprise Linux 8). These base images are also available on Azure Marketplace on their own. The base images are suitable for customers that require complex, customized Azure deployments.
-If you prefer step-by-step guidance for going from zero to a full three-node WLS cluster with database and message queue on Windows or GNU/Linux Azure Virtual machines, see [Install Oracle WebLogic Server on Azure Virtual Machines manually](/azure/developer/java/migration/migrate-weblogic-to-azure-vm-manually?toc=/azure/virtual-machines/workloads/oracle/toc.json&bc=/azure/virtual-machines/workloads/oracle/breadcrumb/toc.json).
+If you prefer step-by-step guidance for going from zero to a WLS cluster without any solution templates or base images, see [Install Oracle WebLogic Server on Azure Virtual Machines manually](/azure/developer/java/migration/migrate-weblogic-to-azure-vm-manually?toc=/azure/virtual-machines/workloads/oracle/toc.json&bc=/azure/virtual-machines/workloads/oracle/breadcrumb/toc.json).
-_If you are interested in working closely on your migration scenarios with the engineering team developing these offers, select the [CONTACT ME](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) button_ on the [marketplace offer overview page](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview). Program managers, architects, and engineers will reach back out to you shortly and start close collaboration.
-
-## Oracle WebLogic Server Single Node
-
-[This offer](https://portal.azure.com/#create/oracle.20191001-arm-oraclelinux-wls20191001-arm-oraclelinux-wls) provisions a single virtual machine and installs WLS on it. It does not create a domain or start the administration server. The single node offer is useful for scenarios with highly customized domain configuration.
+_If you're interested in working closely on your migration scenarios with the engineering team developing these offers, select the [CONTACT ME](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) button_ on the [marketplace offer overview page](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview). Program managers, architects, and engineers will reach back out to you shortly and start close collaboration.
## Oracle WebLogic Server with Admin Server
-[This offer](https://portal.azure.com/#create/oracle.20191009-arm-oraclelinux-wls-admin20191009-arm-oraclelinux-wls-admin) provisions a single virtual machine and installs WLS on it. It creates a domain and starts up the administration server. You can manage the domain and get started with application deployments right away.
+[This solution template](https://portal.azure.com/#create/oracle.20191009-arm-oraclelinux-wls-admin20191009-arm-oraclelinux-wls-admin) provisions a single virtual machine and installs WLS on it. It creates a domain and starts up the administration server. You can manage the domain and get started with application deployments right away.
## Oracle WebLogic Server Cluster
-[This offer](https://portal.azure.com/#create/oracle.20191007-arm-oraclelinux-wls-cluster20191007-arm-oraclelinux-wls-cluster) creates a highly available cluster of WLS virtual machines. The administration server and all managed servers are started by default. You can manage the cluster and get started with highly available applications right away.
-
-## Oracle WebLogic Server Dynamic Cluster
-
-[This offer](https://portal.azure.com/#create/oracle.20191021-arm-oraclelinux-wls-dynamic-cluster20191021-arm-oraclelinux-wls-dynamic-cluster) creates a highly available and scalable dynamic cluster of WLS virtual machines. The administration server and all managed servers are started by default.
+[This solution template](https://portal.azure.com/#create/oracle.20191007-arm-oraclelinux-wls-cluster20191007-arm-oraclelinux-wls-cluster) creates a highly available cluster of WLS virtual machines. The administration server and all managed servers are started by default. You can manage the cluster and get started with highly available applications right away.
-The solutions will enable a wide range of production-ready deployment architectures with relative ease. You can meet most migration cases in the most productive way possible by allowing a focus on business application development.
+The solutions enable a wide range of production-ready deployment architectures with relative ease. You can meet most migration cases in the most productive way possible by allowing a focus on business application development.
:::image type="content" source="media/oracle-weblogic/weblogic-architecture-vms.png" alt-text="Complex WebLogic Server deployments are enabled on Azure":::
-Beyond what is automatically provisioned by the solutions, customers have complete flexibility to customize their deployments further. It is likely on top of deploying applications customers will integrate further Azure resources with their deployments. Customers are encouraged to [connect with the development team](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) and provide feedback on further improving the solutions.
+After resources are automatically provisioned by the solutions, you have complete flexibility to customize your deployments further. It's likely on top of deploying applications you'll integrate further Azure resources with your deployments. You're encouraged to [connect with the development team](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) and provide feedback on further improving the solutions.
## Next steps Explore the offers on Azure.
-> [!div class="nextstepaction"]
-> [Oracle WebLogic Server Single Node](https://portal.azure.com/#create/oracle.20191001-arm-oraclelinux-wls20191001-arm-oraclelinux-wls)
- > [!div class="nextstepaction"] > [Oracle WebLogic Server with Admin Server](https://portal.azure.com/#create/oracle.20191009-arm-oraclelinux-wls-admin20191009-arm-oraclelinux-wls-admin) > [!div class="nextstepaction"] > [Oracle WebLogic Server Cluster](https://portal.azure.com/#create/oracle.20191007-arm-oraclelinux-wls-cluster20191007-arm-oraclelinux-wls-cluster)-
-> [!div class="nextstepaction"]
-> [Oracle WebLogic Server Dynamic Cluster](https://portal.azure.com/#create/oracle.20191021-arm-oraclelinux-wls-dynamic-cluster20191021-arm-oraclelinux-wls-dynamic-cluster)
virtual-machines Weblogic Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/weblogic-aks.md
Previously updated : 07/12/2023 Last updated : 10/24/2023
This page describes the solutions for running Oracle WebLogic Server (WLS) on the Azure Kubernetes Service (AKS). These solutions are jointly developed and supported by Oracle and Microsoft.
-It is also possible to run WebLogic Server on Azure Virtual Machines. The solutions to do so are described in [this Microsoft article](./oracle-weblogic.md).
+It's also possible to run WebLogic Server on Azure Virtual Machines. The solutions to do so are described in [this Microsoft article](./oracle-weblogic.md).
WebLogic Server is a leading Java application server running some of the most mission-critical enterprise Java applications across the globe. WebLogic Server forms the middleware foundation for the Oracle software suite. Oracle and Microsoft are committed to empowering WebLogic Server customers with choice and flexibility to run workloads on Azure as a leading cloud platform. ## WLS on AKS certified and supported
-WebLogic Server is certified by Oracle and Microsoft to run well on AKS. The WLS on AKS solutions are aimed at making it as easy as possible to run your containerized and orchestrated Java applications on Docker and Kubernetes infrastructure. The solutions are focused on reliability, scalability, manageability, and enterprise support.
+WebLogic Server is certified by Oracle and Microsoft to run well on AKS. The WLS on AKS solutions are aimed at making it as easy as possible to run your containerized and orchestrated Java applications on Kubernetes. The solutions are focused on reliability, scalability, manageability, and enterprise support.
-WLS clusters are fully enabled to run on Kubernetes via the WebLogic Kubernetes Operator (referred to simply as the 'Operator' from here onward). The Operator follows the standard Kubernetes Operator pattern. It simplifies the management and operation of WebLogic domains and deployments on Kubernetes by automating otherwise manual tasks and adding extra operational reliability features. The Operator supports Oracle WebLogic Server 12c, Oracle Fusion Middleware Infrastructure 12c and beyond. We have tested the official Docker images for WebLogic Server 12.2.1.4 and 14.1.1 with the Operator. For details on the Operator, refer to the [official documentation from Oracle](https://oracle.github.io/weblogic-kubernetes-operator/).
+WLS clusters are fully enabled to run on Kubernetes via the WebLogic Kubernetes Operator (referred to simply as the 'Operator' from here onward). The Operator follows the standard Kubernetes Operator pattern. It simplifies the management and operation of WebLogic domains on Kubernetes by automating otherwise manual tasks and adding extra operational reliability features. The Operator supports Oracle WebLogic Server 12c, Oracle Fusion Middleware Infrastructure 12c and beyond. For details on the Operator, refer to the [official documentation from Oracle](https://oracle.github.io/weblogic-kubernetes-operator/).
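+
+As an illustrative sketch only, installing the Operator with Helm (chart location as published in the Operator's documentation; the namespace name is a placeholder) looks roughly like this:
+
+```bash
+# Add the WebLogic Kubernetes Operator Helm repository
+helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts --force-update
+
+# Install the Operator into its own namespace
+kubectl create namespace weblogic-operator-ns
+helm install weblogic-operator weblogic-operator/weblogic-operator \
+    --namespace weblogic-operator-ns \
+    --wait
+```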
## WLS on AKS marketplace solution template
-Beyond certifying WLS on AKS, Oracle and Microsoft jointly provide a [marketplace solution template](https://portal.azure.com/#create/oracle.20210620-wls-on-aks20210620-wls-on-aks) with the goal of making it as quick and easy as possible to migrate WLS workloads to AKS. The offer does so by automating the provisioning of a number of Java and Azure resources. The automatically provisioned resources include an AKS cluster, the WebLogic Kubernetes Operator, WLS Docker images, and the Azure Container Registry (ACR). It is possible to use an existing AKS cluster or ACR instance with the offer if desired. The offer also supports configuring load balancing with Azure App Gateway or the Azure Load Balancer, easing database connectivity, publishing metrics to Azure Monitor as well as mounting Azure Files as Kubernetes Persistent Volumes. The currently supported database integrations include Azure PostgreSQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure.
+Beyond certifying WLS on AKS, Oracle and Microsoft jointly provide a [marketplace solution template](https://portal.azure.com/#create/oracle.20210620-wls-on-aks20210620-wls-on-aks) with the goal of making it as quick and easy as possible to migrate WLS workloads to AKS. The offer does so by automating the provisioning of a number of Java and Azure resources. The automatically provisioned resources include an AKS cluster, the WebLogic Kubernetes Operator, WLS Docker images, and the Azure Container Registry (ACR). It's possible to use an existing AKS cluster or ACR instance with the offer. The offer also supports configuring load balancing with Azure App Gateway or the Azure Load Balancer, easing database connectivity, publishing metrics to Azure Monitor and mounting Azure Files as Kubernetes Persistent Volumes. The currently supported database integrations include Azure PostgreSQL, Azure MySQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure.
:::image type="content" source="media/oracle-weblogic/wls-aks-demo.gif" alt-text="You can use the marketplace solution to deploy WebLogic Server on AKS":::
-After the offer performs most boilerplate resource provisioning and configuration, you can focus on deploying your WLS application to AKS, typically through a DevOps tool such as GitHub Actions and tools from WebLogic Kubernetes tooling such as the WebLogic Image Tool and WebLogic Deploy Tooling. You are completely free to customize the deployment further.
+After the solution template performs most boilerplate resource provisioning and configuration, you can focus on deploying your WLS application to AKS, typically through a DevOps tool such as GitHub Actions and tools from WebLogic Kubernetes tooling such as the WebLogic Image Tool and WebLogic Deploy Tooling. You're completely free to customize the deployment further.
You can find detailed documentation on the solution template [here](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster). ## Guidance, scripts, and samples for WLS on AKS
-Oracle and Microsoft also provide basic step-by-step guidance, scripts, and samples for running WebLogic Server on AKS. The guidance is suitable for customers that wish to remain as close as possible to a native Kubernetes manual deployment experience as an alternative to using a solution template. The guidance is incorporated into the Azure Kubernetes Service sample section of the [Operator documentation](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/). The guidance uses official WebLogic Server Docker images provided by Oracle. Failover is achieved via Azure Files accessed through Kubernetes Persistent Volume Claims. Azure Load Balancers are supported when provisioned using a Kubernetes Service of type 'LoadBalancer'. The Azure Container Registry (ACR) is supported for deploying WLS domains inside custom Docker images. The guidance allows a very high degree of configuration and customization.
+Oracle and Microsoft also provide basic step-by-step guidance, scripts, and samples for running WebLogic Server on AKS. The guidance is suitable for customers that wish to remain as close as possible to a native Kubernetes manual deployment experience as an alternative to using a solution template. The guidance is incorporated into the Azure Kubernetes Service sample section of the [Operator documentation](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/). The guidance allows a very high degree of configuration and customization.
-The guidance supports two ways of deploying WLS domains to AKS. Domains can be deployed directly to Kubernetes Persistent Volumes. This deployment option is good if you want to migrate to AKS but still want to administer WLS using the Admin Console or the WebLogic Scripting Tool (WLST). The option also allows you to move to AKS without adopting Docker development. The more Kubernetes native way of deploying WLS domains to AKS is to build custom Docker images based on official WLS images from the Oracle Container Registry, publish the custom images to ACR and deploy the domain to AKS using the Operator. This option in the solution also allows you to update the domain through Kubernetes ConfigMaps after the deployment is done.
+The guidance supports two ways of deploying WLS domains to AKS. Domains can be deployed directly to Kubernetes Persistent Volumes. This deployment option is good if you want to migrate to AKS but still want to administer WLS using the Admin Console or the WebLogic Scripting Tool (WLST). The option also allows you to move to AKS without adopting Docker development. The more Kubernetes native way of deploying WLS domains to AKS is to build custom container images based on official WLS images from the Oracle Container Registry, publish the custom images to ACR and deploy the domain to AKS using the Operator.
-_These solutions are all Bring-Your-Own-License_. They assume you have already got the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
+_These solutions are all Bring-Your-Own-License_. They assume you already have the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
-_If you are interested in working closely on your migration scenarios with the engineering team developing these solutions, fill out [this short survey](https://aka.ms/wls-on-azure-survey) and include your contact information_. Program managers, architects, and engineers will reach back out to you shortly and start close collaboration.
+_If you're interested in working closely on your migration scenarios with the engineering team developing these solutions, fill out [this short survey](https://aka.ms/wls-on-azure-survey) and include your contact information_. Program managers, architects, and engineers will reach back out to you shortly and start close collaboration.
## Deployment architectures
-The solutions for running Oracle WebLogic Server on the Azure Kubernetes Service will enable a wide range of production-ready deployment architectures with relative ease.
+The solutions for running Oracle WebLogic Server on the Azure Kubernetes Service enable a wide range of production-ready deployment architectures with relative ease.
:::image type="content" source="media/oracle-weblogic/wls-aks-architecture.jpg" alt-text="Complex WebLogic Server deployments are enabled on AKS":::
-Beyond what the solutions provide you have complete flexibility to customize your deployments further. It is likely on top of deploying applications you will integrate further Azure resources with your deployments or tune the deployments to your specific applications. You are encouraged to provide feedback in the [survey](https://aka.ms/wls-on-azure-survey) on further improving the solutions.
+Beyond what the solutions provide, you have complete flexibility to customize your deployments further. On top of deploying applications, you'll likely integrate other Azure resources with your deployments or tune the deployments to your specific applications. You're encouraged to provide feedback in the [survey](https://aka.ms/wls-on-azure-survey) on further improving the solutions.
## Next steps
virtual-network-manager Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/common-issues.md
In this article, we cover common issues you may face when using Azure Virtual Network Manager and provide some possible solutions.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Why isn't my configuration getting applied?
virtual-network-manager Concept Azure Policy Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-azure-policy-integration.md
In this article, you learn how [Azure Policy](../governance/policy/overview.md) is used in Azure Virtual Network Manager to define dynamic network group membership. Dynamic network groups allow you to create scalable and dynamically adapting virtual network environments in your organization.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Azure Policy overview
virtual-network-manager Concept Connectivity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-connectivity-configuration.md
In this article, you learn about the different types of configurations you can create and deploy using Azure Virtual Network Manager. There are two types of configurations currently available: *Connectivity* and *Security Admins*.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Connectivity configuration
virtual-network-manager Concept Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-cross-tenant.md
In this article, you'll learn about cross-tenant support in Azure Virtual Network Manager. Cross-tenant support allows organizations to use a central Network Manager instance for managing virtual networks across different tenants and subscriptions.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Overview of Cross-tenant
virtual-network-manager Concept Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-deployments.md
In this article, you learn about how configurations are applied to your network resources. Also, you explore how updating a configuration deployment is different for each membership type. Then we go into details about *Deployment status* and *Goal state model*.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Deployment
virtual-network-manager Concept Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-enforcement.md
Last updated 3/22/2023
In this article, you'll learn how [security admin rules](concept-security-admins.md) provide flexible and scalable enforcement of security policies over tools like [network security groups](../virtual-network/network-security-groups-overview.md). First, you learn the different models of virtual network enforcement. Then, you learn the general steps for enforcing security with security admin rules.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Virtual network enforcement
virtual-network-manager Concept Event Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-event-logs.md
Last updated 04/13/2023
Azure Virtual Network Manager uses Azure Monitor for data collection and analysis like many other Azure services. Azure Virtual Network Manager provides event logs for each network manager. You can store and view event logs with Azure Monitor's Log Analytics tool in the Azure portal, and through a storage account. You may also send these logs to an event hub or partner solution.
+
## Supported log categories

Azure Virtual Network Manager currently provides the following log categories:
virtual-network-manager Concept Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-limitations.md
This article provides an overview of the current limitations when using [Azure Virtual Network Manager](overview.md) to manage virtual networks. As a network administrator, it's important to understand these limitations to properly deploy an Azure Virtual Network Manager instance in your environment. The article covers various limitations related to Azure Virtual Network Manager, including the maximum number of virtual networks, overlapping IP spaces, and policy compliance evaluation cycle.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## General limitations
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Yes, Azure Virtual Network Manager is fully compatible with pre-existing hub and
### Can I migrate an existing hub and spoke topology to Azure Virtual Network Manager?
-Yes,
+Yes, migrating existing virtual networks to AVNM's hub and spoke topology is straightforward and requires no downtime. Customers can [create a hub and spoke connectivity configuration](how-to-create-hub-and-spoke.md) that describes the desired topology. When this configuration is deployed, Virtual Network Manager automatically creates the necessary peerings. Any pre-existing peerings set up by users remain intact, ensuring there's no downtime.
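As a hedged sketch only (resource names and IDs are placeholders, and the flag shapes should be verified against your installed `virtual-network-manager` CLI extension), creating such a configuration from the CLI might look like:

```azurecli-interactive
# Sketch: create a hub and spoke connectivity configuration in AVNM.
# All names, IDs, and flag values below are placeholders.
az network manager connect-config create \
  --resource-group myRG \
  --network-manager-name myAVNM \
  --configuration-name myHubAndSpokeConfig \
  --connectivity-topology "HubAndSpoke" \
  --hub resource-id="/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myHubVNet" \
  --applies-to-groups network-group-id="<network-group-resource-id>" group-connectivity="None" is-global="False" use-hub-gateway="False"
```

The configuration takes effect only after it's deployed to the target regions.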
### How do connected groups differ from virtual network peering regarding establishing connectivity between virtual networks?
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
The public IPv4 address used for the access is called the default outbound acces
If you deploy a virtual machine in Azure and it doesn't have explicit outbound connectivity, it's assigned a default outbound access IP.

>[!Important]
>On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/default-outbound-access-for-vms-in-azure-will-be-retired-transition-to-a-new-method-of-internet-access/). We recommend that you use one of the explicit forms of connectivity discussed in the following section.
If you deploy a virtual machine in Azure and it doesn't have explicit outbound c
* Loss of IP address
- * Customers don't own the default outbound access IP. This IP may change, and any dependency on it could cause issues in the future.
+ * Customers don't own the default outbound access IP. This IP might change, and any dependency on it could cause issues in the future.
## How can I transition to an explicit method of public connectivity (and disable default outbound access)?
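One explicit method is to associate a NAT gateway with the subnet. A minimal hedged sketch, assuming placeholder resource names (`myRG`, `myVNet`, `mySubnet`):

```azurecli-interactive
# Sketch: give a subnet explicit outbound connectivity with a NAT gateway.
# Resource names are placeholders.
az network public-ip create --resource-group myRG --name myNatIP --sku Standard --allocation-method static
az network nat gateway create --resource-group myRG --name myNatGateway --public-ip-addresses myNatIP
az network vnet subnet update --resource-group myRG --vnet-name myVNet --name mySubnet --nat-gateway myNatGateway
```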
virtual-network Virtual Network Configure Vnet Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-configure-vnet-connections.md
You can add a site-to-site (*S2S* in the following diagram) connection to a virt
Azure currently works with two deployment models: Resource Manager and classic. The two models aren't completely compatible with each other. To configure a multisite connection with different models, see the following articles:
-* [Add a site-to-site connection to a virtual network with an existing VPN gateway connection](../vpn-gateway/vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)
+* [Add a site-to-site connection to a virtual network with an existing VPN gateway connection](../vpn-gateway/add-remove-site-to-site-connections.md)
* [Add a site-to-site connection to a virtual network with an existing VPN gateway connection (classic)](../vpn-gateway/vpn-gateway-multi-site.md) > [!Note]
virtual-network Virtual Network For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-for-azure-services.md
Deploying services within a virtual network provides the following capabilities:
|Category|Service| Dedicated<sup>1</sup> Subnet |
|-|-|-|
-| Compute | Virtual machines: [Linux](/previous-versions/azure/virtual-machines/linux/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows](/previous-versions/azure/virtual-machines/windows/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-existing-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Cloud Service](/previous-versions/azure/reference/jj156091(v=azure.100)): Virtual network (classic) only<br/> [Azure Batch](../batch/nodes-and-pools.md?toc=%2fazure%2fvirtual-network%2ftoc.json#virtual-network-vnet-and-firewall-configuration)| No <br/> No <br/> No <br/> No<sup>2</sup>
-| Network | [Application Gateway - WAF](../application-gateway/application-gateway-ilb-arm.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Bastion](../bastion/bastion-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure Route Server](../route-server/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[ExpressRoute Gateway](../expressroute/expressroute-about-virtual-network-gateways.md)<br/>[Network Virtual Appliances](/windows-server/networking/sdn/manage/use-network-virtual-appliances-on-a-vn)<br/>[VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json) | Yes <br/> Yes <br/> Yes <br/> Yes <br/> Yes <br/> No <br/> Yes
+| Compute | Virtual machines: [Linux](/previous-versions/azure/virtual-machines/linux/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows](/previous-versions/azure/virtual-machines/windows/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-existing-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Cloud Service](/previous-versions/azure/reference/jj156091(v=azure.100)): Virtual network (classic) only <br/> [Azure Batch](../batch/nodes-and-pools.md?toc=%2fazure%2fvirtual-network%2ftoc.json#virtual-network-vnet-and-firewall-configuration) <br/> [Azure BareMetal Infrastructure](../baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)| No <br/> No <br/> No <br/> No<sup>2</sup> <br/> No |
+| Network | [Application Gateway - WAF](../application-gateway/application-gateway-ilb-arm.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Bastion](../bastion/bastion-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure Route Server](../route-server/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[ExpressRoute Gateway](../expressroute/expressroute-about-virtual-network-gateways.md)<br/>[Network Virtual Appliances](/windows-server/networking/sdn/manage/use-network-virtual-appliances-on-a-vn)<br/>[VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)| Yes <br/> Yes <br/> Yes <br/> Yes <br/> Yes <br/> No <br/> Yes <br/> No |
|Data|[RedisCache](../azure-cache-for-redis/cache-how-to-premium-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure SQL Managed Instance](/azure/azure-sql/managed-instance/connectivity-architecture-overview?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/> [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-networking-vnet.md) <br/> [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/concepts-networking.md#private-access-vnet-integration)| Yes <br/> Yes <br/> Yes <br/> Yes |
|Analytics | [Azure HDInsight](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks?toc=%2fazure%2fvirtual-network%2ftoc.json) |No<sup>2</sup> <br/> No<sup>2</sup> <br/>
| Identity | [Microsoft Entra Domain Services](../active-directory-domain-services/tutorial-create-instance.md?toc=%2fazure%2fvirtual-network%2ftoc.json) |No <br/>
vpn-gateway Add Remove Site To Site Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/add-remove-site-to-site-connections.md
+
+Title: 'Add or remove site-to-site connections'
+description: Learn how to add or remove site-to-site connections from a VPN gateway.
+Last updated: 10/25/2023
+
+# Add or remove VPN Gateway site-to-site connections
+
+This article helps you add or remove site-to-site (S2S) connections for a VPN gateway. You can also add S2S connections to a VPN gateway that already has an S2S connection, point-to-site connection, or VNet-to-VNet connection. There are some limitations when adding connections. Check the [Prerequisites](#before) section in this article to verify before you start your configuration.
++
+**About ExpressRoute/site-to-site coexisting connections**
+
+* You can use the steps in this article to add a new VPN connection to an already existing ExpressRoute/site-to-site coexisting connection.
+* You can't use the steps in this article to configure a new ExpressRoute/site-to-site coexisting connection. To create a new coexisting connection, see: [ExpressRoute/S2S coexisting connections](../expressroute/expressroute-howto-coexist-resource-manager.md).
+
+## <a name="before"></a>Prerequisites
+
+Verify the following items:
+
+* You're NOT configuring a new coexisting ExpressRoute and VPN Gateway site-to-site connection.
+* You have a virtual network that was created using the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) with an existing connection.
+* The virtual network gateway for your virtual network is RouteBased (you can verify this as shown in the sketch after this list). If you have a PolicyBased VPN gateway, you must delete the virtual network gateway and create a new VPN gateway as RouteBased.
+* None of the address ranges overlap for any of the virtual networks that this virtual network is connecting to.
+* You have a compatible VPN device and someone who is able to configure it. See [About VPN Devices](vpn-gateway-about-vpn-devices.md). If you aren't familiar with configuring your VPN device, or are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you.
+* You have an externally facing public IP address for your VPN device.
+
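For the RouteBased check above, a hedged sketch (gateway and resource group names are placeholders):

```azurecli-interactive
# Sketch: confirm the gateway's VPN type is RouteBased.
# "VNet1GW" and "TestRG1" are placeholder names.
az network vnet-gateway show --name VNet1GW --resource-group TestRG1 --query vpnType -o tsv
```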
+## <a name="local"></a>Create a local network gateway
+
+Create a local network gateway that represents the branch or location you want to connect to.
+
+The local network gateway is a specific object that represents your on-premises location (the site) for routing purposes. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you'll create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes or you need to change the public IP address for the VPN device, you can easily update the values later.
+
+In this example, we create a local network gateway using the following values; a CLI sketch follows the list.
+
+* **Name:** Site1
+* **Resource Group:** TestRG1
+* **Location:** East US
++
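A hedged CLI sketch using these values (the gateway IP address and address prefixes are placeholders you must replace with your on-premises values):

```azurecli-interactive
# Sketch: create the local network gateway that represents Site1.
# 203.0.113.5 and 10.3.0.0/16 are placeholder on-premises values.
az network local-gateway create \
  --name Site1 \
  --resource-group TestRG1 \
  --location eastus \
  --gateway-ip-address 203.0.113.5 \
  --local-address-prefixes 10.3.0.0/16
```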
+## <a name="VPNDevice"></a>Configure your VPN device
+
+Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following values:
+
+* A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
+* The public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your VPN gateway using the Azure portal, go to **Virtual network gateways**, then select the name of your gateway. A CLI sketch follows this list.
++
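As referenced in the list above, a hedged CLI sketch for viewing the address (the public IP resource name `VNet1GWpip` is a placeholder for the public IP that was supplied when the gateway was created):

```azurecli-interactive
# Sketch: print the address of the gateway's public IP resource.
# "VNet1GWpip" and "TestRG1" are placeholder names.
az network public-ip show --name VNet1GWpip --resource-group TestRG1 --query ipAddress -o tsv
```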
+## <a name="configure"></a>Configure a connection
+
+Create a site-to-site VPN connection between your virtual network gateway and your on-premises VPN device.
+
+Create a connection using the following values (a CLI sketch follows the list):
+
+* **Local network gateway name:** Site1
+* **Connection name:** VNet1toSite1
+* **Shared key:** For this example, we use abc123, but you can use whatever is compatible with your VPN hardware. The important thing is that the values match on both sides of the connection.
++
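A hedged CLI sketch using these values (the virtual network gateway name `VNet1GW` is a placeholder for your existing gateway):

```azurecli-interactive
# Sketch: create the site-to-site connection between the gateway and Site1.
# "VNet1GW" is a placeholder for your existing virtual network gateway.
az network vpn-connection create \
  --name VNet1toSite1 \
  --resource-group TestRG1 \
  --vnet-gateway1 VNet1GW \
  --local-gateway2 Site1 \
  --shared-key abc123
```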
+## <a name="verify"></a>View and verify the VPN connection
++
+## Remove a connection
++
+## Next steps
+
+For more information about site-to-site VPN gateway configurations, see [Tutorial: Configure a site-to-site VPN gateway configuration](tutorial-site-to-site-portal.md).
vpn-gateway Point To Site About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md
description: Learn about Point-to-Site VPN.
Previously updated : 09/26/2023 Last updated : 10/25/2023
The following table shows gateway SKUs by tunnel, connection, and throughput. Fo
## <a name="IKE/IPsec policies"></a>What IKE/IPsec policies are configured on VPN gateways for P2S?
+The tables in this section show the values for the default policies. However, they don't reflect the available supported values for custom policies. For custom policies, see the **Accepted values** listed in the [New-AzVpnClientIpsecParameter](/powershell/module/az.network/new-azvpnclientipsecparameter) PowerShell cmdlet.
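If you work in the CLI instead, a comparable hedged sketch (this assumes the `az network vnet-gateway vpn-client ipsec-policy set` command is available in your CLI version; names and values are examples only):

```azurecli-interactive
# Sketch: set a custom point-to-site IPsec policy on an existing gateway.
# Gateway and resource group names are placeholders; policy values are
# examples and must come from the accepted values for your gateway SKU.
az network vnet-gateway vpn-client ipsec-policy set \
  --resource-group TestRG1 \
  --gateway-name VNet1GW \
  --ike-encryption AES256 \
  --ike-integrity SHA384 \
  --dh-group DHGroup24 \
  --ipsec-encryption GCMAES256 \
  --ipsec-integrity GCMAES256 \
  --pfs-group PFS24 \
  --sa-lifetime 27000 \
  --sa-max-size 102400000
```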
+ **IKEv2** | **Cipher** | **Integrity** | **PRF** | **DH Group** |
vpn-gateway Vpn Gateway Howto Multi Site To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md
- Title: 'Add multiple VPN Gateway site-to-site connections to a VNet: Azure portal'
-description: Learn how to add additional site-to-site connections to a VPN gateway.
---- Previously updated : 04/10/2023---
-# Add additional S2S connections to a VNet: Azure portal
-
-> [!div class="op_single_selector"]
-> * [Azure portal](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)
-> * [PowerShell (classic)](vpn-gateway-multi-site.md)
->
-
-This article helps you add additional site-to-site (S2S) connections to a VPN gateway that has an existing connection. This architecture is often referred to as a "multi-site" configuration. You can add a S2S connection to a VNet that already has a S2S connection, point-to-site connection, or VNet-to-VNet connection. There are some limitations when adding connections. Check the [Prerequisites](#before) section in this article to verify before you start your configuration.
--
-**About ExpressRoute/site-to-site coexisting connections**
-
-* You can use the steps in this article to add a new VPN connection to an already existing ExpressRoute/site-to-site coexisting connection.
-* You can't use the steps in this article to configure a new ExpressRoute/site-to-site coexisting connection. To create a new coexisting connection see: [ExpressRoute/S2S coexisting connections](../expressroute/expressroute-howto-coexist-resource-manager.md).
-
-## <a name="before"></a>Prerequisites
-
-Verify the following items:
-
-* You're NOT configuring a new coexisting ExpressRoute and VPN Gateway site-to-site connection.
-* You have a virtual network that was created using the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) with an existing connection.
-* The virtual network gateway for your VNet is RouteBased. If you have a PolicyBased VPN gateway, you must delete the virtual network gateway and create a new VPN gateway as RouteBased.
-* None of the address ranges overlap for any of the VNets that this VNet is connecting to.
-* You have compatible VPN device and someone who is able to configure it. See [About VPN Devices](vpn-gateway-about-vpn-devices.md). If you aren't familiar with configuring your VPN device, or are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you.
-* You have an externally facing public IP address for your VPN device.
-
-## <a name="configure"></a>Configure a connection
-
-1. From a browser, navigate to the [Azure portal](https://portal.azure.com) and, if necessary, sign in with your Azure account.
-1. Select **All resources** and locate your **virtual network gateway** from the list of resources and select it.
-1. On the **Virtual network gateway** page, select **Connections**.
-
- :::image type="content" source="./media/vpn-gateway-howto-multi-site-to-site-resource-manager-portal/connections.png" alt-text="VPN gateway connections":::
-1. On the **Connections** page, select **+Add**.
-1. This opens the **Add connection** page.
-
- :::image type="content" source="./media/vpn-gateway-howto-multi-site-to-site-resource-manager-portal/add-connection.png" alt-text="Add connection page":::
-1. On the **Add connection** page, fill out the following fields:
-
- * **Name:** The name you want to give to the site you're creating the connection to.
- * **Connection type:** Select **Site-to-site (IPsec)**.
-
-## <a name="local"></a>Add a local network gateway
-
-1. For the **Local network gateway** field, select ***Choose a local network gateway***. This opens the **Choose local network gateway** page.
-1. Select **+ Create new** to open the **Create local network gateway** page.
-
- :::image type="content" source="./media/vpn-gateway-howto-multi-site-to-site-resource-manager-portal/create-local-network-gateway.png" alt-text="Create local network gateway page":::
-1. On the **Create local network gateway** page, fill out the following fields:
-
- * **Name:** The name you want to give to the local network gateway resource.
- * **Endpoint:** The public IP address of the VPN device on the site that you want to connect to, or the FQDN of the endpoint. If you want to create a connection to another VPN gateway, you can use the IP address of the other gateway in this field.
- * **Address space:** The address space that you want to be routed to the new local network site.
-1. Select **OK** on the **Create local network gateway** page to save the changes.
-
-## <a name="part3"></a>Add the shared key
-
-1. After creating the local network gateway, return to the **Add connection** page.
-1. Complete the remaining fields. For the **Shared key (PSK)**, you can either get the shared key from your VPN device, or make one up here and then configure your VPN device to use the same shared key. The important thing is that the keys are exactly the same.
-
-## <a name="create"></a>Create the connection
-
-1. At the bottom of the page, select **OK** to create the connection. The connection begins creating immediately.
-1. Once the connection completes, you can view and verify it.
-
-## <a name="verify"></a>View and verify the VPN connection
--
-## Next steps
-
-Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual machines learning paths](/training/paths/deploy-a-website-with-azure-virtual-machines/).
vpn-gateway Vpn Gateway Multi Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-multi-site.md
This article walks you through using PowerShell to add Site-to-Site (S2S) connections to a VPN gateway that has an existing connection using the classic (legacy) deployment model. This type of connection is sometimes referred to as a "multi-site" configuration. These steps don't apply to ExpressRoute/Site-to-Site coexisting connection configurations.
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)**.
+The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](add-remove-site-to-site-connections.md)**.
[!INCLUDE [deployment models](../../includes/vpn-gateway-classic-deployment-model-include.md)]
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
The following cross-premises virtual network gateway connections are supported:
* **Site-to-site:** VPN connection over IPsec (IKE v1 and IKE v2). This type of connection requires a VPN device or RRAS. For more information, see [Site-to-site](./tutorial-site-to-site-portal.md).
* **Point-to-site:** VPN connection over SSTP (Secure Socket Tunneling Protocol) or IKE v2. This connection doesn't require a VPN device. For more information, see [Point-to-site](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
* **VNet-to-VNet:** This type of connection is the same as a site-to-site configuration. VNet to VNet is a VPN connection over IPsec (IKE v1 and IKE v2). It doesn't require a VPN device. For more information, see [VNet-to-VNet](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
-* **Multi-Site:** This is a variation of a site-to-site configuration that allows you to connect multiple on-premises sites to a virtual network. For more information, see [Multi-Site](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md).
* **ExpressRoute:** ExpressRoute is a private connection to Azure from your WAN, not a VPN connection over the public Internet. For more information, see the [ExpressRoute Technical Overview](../expressroute/expressroute-introduction.md) and the [ExpressRoute FAQ](../expressroute/expressroute-faqs.md).

For more information about VPN Gateway connections, see [About VPN Gateway](vpn-gateway-about-vpngateways.md).