Updates from: 04/04/2024 01:16:34
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Groundedness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/groundedness.md
The maximum character limit for the grounding sources is 55,000 characters per A
To use this API, you must create your Azure AI Content Safety resource in the supported regions. Currently, it's available in the following Azure regions:
- East US 2
-- East US (only for non-reasoning)
+- East US
- West US
- Sweden Central
If you need a higher rate, [contact us](mailto:contentsafetysupport@microsoft.co
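For context, a minimal sketch of a groundedness detection request is shown below; the endpoint path, `api-version`, and request fields are assumptions drawn from the preview quickstart rather than confirmed by this change, and the resource and key values are placeholders.

```bash
# Hypothetical groundedness detection request (preview); endpoint path,
# api-version, and field names are assumptions, not confirmed by this change.
curl --request POST \
  "https://<your-resource>.cognitiveservices.azure.com/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview" \
  --header "Ocp-Apim-Subscription-Key: <your-key>" \
  --header "Content-Type: application/json" \
  --data '{
    "domain": "Generic",
    "task": "QnA",
    "qna": { "query": "How much does the subscription cost?" },
    "text": "The subscription costs 12 USD per month.",
    "groundingSources": [ "Our subscription costs 10 USD per month." ],
    "reasoning": false
  }'
```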
Follow the quickstart to get started using Azure AI Content Safety to detect groundedness.
> [!div class="nextstepaction"]
-> [Groundedness detection quickstart](../quickstart-groundedness.md)
+> [Groundedness detection quickstart](../quickstart-groundedness.md)
ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md
With the introduction of [**custom classification models**](./concept-custom-cla
* Models composed using v2.1 of the API continue to be supported, requiring no updates.
-* For custom models, the maximum number that can be composed is 100.
+* For custom models, the maximum number that can be composed is 200.
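For context, a hypothetical sketch of composing custom models with the v3 REST API follows; the `documentModels:compose` path, `api-version`, and body fields are assumptions, and all model IDs are placeholders.

```bash
# Hypothetical compose request; path, api-version, and body fields are
# assumptions based on the v3 REST API, and all IDs are placeholders.
curl --request POST \
  "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels:compose?api-version=2023-07-31" \
  --header "Ocp-Apim-Subscription-Key: <your-key>" \
  --header "Content-Type: application/json" \
  --data '{
    "modelId": "my-composed-model",
    "componentModels": [
      { "modelId": "custom-model-a" },
      { "modelId": "custom-model-b" }
    ]
  }'
```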
::: moniker-end
ai-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/managed-identity.md
description: Provides guidance on how to set managed identity with Microsoft Entra ID Previously updated : 02/29/2024 Last updated : 04/03/2024 recommendations: false
In the following sections, you'll use the Azure CLI to sign in, and obtain a bea
## Assign yourself to the Cognitive Services User role
-Assign yourself the [Cognitive Services User](role-based-access-control.md#cognitive-services-contributor) role to allow you to use your account to make Azure OpenAI API calls rather than having to use key-based auth. After you make this change it can take up to 5 minutes before the change takes effect.
+Assign yourself either the [Cognitive Services OpenAI User](role-based-access-control.md#cognitive-services-openai-user) or [Cognitive Services OpenAI Contributor](role-based-access-control.md#cognitive-services-openai-contributor) role to allow you to use your account to make Azure OpenAI inference API calls rather than having to use key-based auth. After you make this change it can take up to 5 minutes before the change takes effect.
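A minimal sketch of that role assignment with the Azure CLI (subscription, resource group, resource, and user values are placeholders):

```bash
# Sketch: assign the Cognitive Services OpenAI User role on an Azure OpenAI
# resource; all names and IDs below are placeholders.
az role assignment create \
  --assignee "<user@contoso.com>" \
  --role "Cognitive Services OpenAI User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<aoai-resource>"
```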
## Sign into the Azure CLI
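A minimal sketch of the sign-in and token flow described above (the token scope is an assumption):

```bash
# Sign in, then fetch a bearer token for the Cognitive Services resource scope.
az login
az account get-access-token --resource https://cognitiveservices.azure.com
```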
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/role-based-access-control.md
Previously updated : 11/15/2023 Last updated : 04/03/2024 recommendations: false
If a user were granted role-based access to only this role for an Azure OpenAI r
✅ Ability to view the resource and associated model deployments in Azure OpenAI Studio. <br>
✅ Ability to view what models are available for deployment in Azure OpenAI Studio. <br>
✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource. <br>
+✅ Make inference API calls with Microsoft Entra ID.
A user with only this role assigned would be unable to:
This role is typically granted access at the resource group level for a user in
A user with only this role assigned would be unable to:
❌ Access quota <br>
+❌ Make inference API calls with Microsoft Entra ID.
### Cognitive Services Usages Reader
All the capabilities of Cognitive Services Contributor plus the ability to:
|Create customized content filters|❌|❌|✅| ➖ |
|Add a data source for the “on your data” feature|❌|❌|✅| ➖ |
|Access quota|❌|❌|❌|✅|-
+|Make inference API calls with Microsoft Entra ID| ✅ | ✅ | ❌ | ➖ |
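For context, a minimal sketch of an inference call that authenticates with a Microsoft Entra ID token instead of an API key; the resource, deployment, and `api-version` values are placeholders:

```bash
# Sketch: Azure OpenAI chat completions call using an Entra ID bearer token.
# Resource, deployment, and api-version values are placeholders.
TOKEN=$(az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken -o tsv)
curl "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-02-01" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"messages":[{"role":"user","content":"Hello"}]}'
```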
## Common Issues
### Unable to view Azure Cognitive Search option in Azure OpenAI Studio
ai-services Speech Synthesis Markup Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-pronunciation.md
The speech synthesis engine speaks the following example as "World Wide Web Cons
The Mathematical Markup Language (MathML) is an XML-compliant markup language that describes mathematical content and structure. The Speech service can use MathML as input text to properly pronounce mathematical notations in the output audio.
> [!NOTE]
-> The MathML elements (tags) are currently supported by all neural voices in the `en-US` and `en-AU` locales.
+> The MathML elements (tags) are currently supported in the following locales: `de-DE`, `en-AU`, `en-GB`, `en-US`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, and `zh-CN`.
All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3.0](https://www.w3.org/TR/MathML3/) specifications are supported, except the MathML 3.0 [Elementary Math](https://www.w3.org/TR/MathML3/chapter3.html#presm.elementary) elements.
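For context, a minimal SSML sketch embedding MathML (the voice name is a placeholder; any neural voice in a supported locale should work):

```xml
<!-- Sketch: MathML inside SSML; spoken roughly as "the square root of 9". -->
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <msqrt><mn>9</mn></msqrt>
    </math>
  </voice>
</speak>
```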
ai-services What Is Text To Speech Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md
Azure AI text to speech avatar feature capabilities include:
With text to speech avatar's advanced neural network models, the feature empowers you to deliver lifelike and high-quality synthetic talking avatar videos for various applications while adhering to responsible AI practices.
> [!TIP]
-> To convert text to speech with a no-code approach, try the [Text to speech avatar tool in Speech Studio](https://aka.ms/speechstudio/talkingavatar).
+> To convert text to speech with a no-code approach, try the [Text to speech avatar tool in Speech Studio](https://speech.microsoft.com/portal/talkingavatar).
## Avatar voice and language
-You can choose from a range of prebuilt voices for the avatar. The language support for text to speech avatar is the same as the language support for text to speech. For details, see [Language and voice support for the Speech service](../language-support.md?tabs=tts). Prebuilt text to speech avatars can be accessed through the [Speech Studio portal](https://aka.ms/speechstudio/talkingavatar) or via API.
+You can choose from a range of prebuilt voices for the avatar. The language support for text to speech avatar is the same as the language support for text to speech. For details, see [Language and voice support for the Speech service](../language-support.md?tabs=tts). Prebuilt text to speech avatars can be accessed through the [Speech Studio portal](https://speech.microsoft.com/portal/talkingavatar) or via API.
The voice in the synthetic video could be a prebuilt neural voice available on Azure AI Speech or the [custom neural voice](../custom-neural-voice.md) of voice talent selected by you.
aks Access Control Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/access-control-managed-azure-ad.md
description: Learn how to access clusters when integrating Microsoft Entra ID in
Last updated 04/20/2023+++
Make sure the admin of the security group has given your account an *Active* ass
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create [aad-assignments]: ../active-directory/privileged-identity-management/groups-assign-member-owner.md#assign-an-owner-or-member-of-a-group [az-aks-create]: /cli/azure/aks#az_aks_create+
aks Access Private Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/access-private-cluster.md
Last updated 09/15/2023+++ # Access a private Azure Kubernetes Service (AKS) cluster
In this article, you learned how to access a private cluster and run commands on
<!-- links - internal --> [command-invoke-troubleshoot]: /troubleshoot/azure/azure-kubernetes/resolve-az-aks-command-invoke-failures+
aks Active Active Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/active-active-solution.md
If you're considering a different solution, see the following articles:
- [Active passive disaster recovery solution overview for Azure Kubernetes Service (AKS)](./active-passive-solution.md) - [Passive cold solution overview for Azure Kubernetes Service (AKS)](./passive-cold-solution.md)+
aks Active Passive Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/active-passive-solution.md
If you're considering a different solution, see the following articles:
- [Active active high availability solution overview for Azure Kubernetes Service (AKS)](./active-active-solution.md) - [Passive cold solution overview for Azure Kubernetes Service (AKS)](./passive-cold-solution.md)+
aks Ai Toolchain Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ai-toolchain-operator.md
description: Learn how to enable the AI toolchain operator add-on on Azure Kuber
Last updated 02/28/2024+++ # Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (preview)
For more inference model options, see the [KAITO GitHub repository](https://gith
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-show]: /cli/azure/feature#az_feature_show [az-provider-register]: /cli/azure/provider#az_provider_register+
aks Aks Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-diagnostics.md
Title: Azure Kubernetes Service (AKS) Diagnose and Solve Problems Overview description: Learn about self-diagnosing clusters in Azure Kubernetes Service. -++ Last updated 03/10/2023
Deploying applications on AKS requires adherence to best practices to guarantee
* Read the [triage practices section](/azure/architecture/operator-guides/aks/aks-triage-practices) of the AKS day-2 operations guide. * Post your questions or feedback at [UserVoice](https://feedback.azure.com/d365community/forum/aabe212a-f724-ec11-b6e6-000d3a4f0da0) by adding "[Diag]" in the title.+
aks Aks Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md
Title: Support and troubleshooting for Azure Kubernetes Service (AKS)
description: This article provides support and troubleshooting options for Azure Kubernetes Service (AKS). Last updated 09/27/2023+++
Learn about important product updates, roadmap, and announcements in [Azure Upda
## Next steps Visit the [Azure Kubernetes Service (AKS) documentation](./index.yml).+
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
For associated best practices, see [Best practices for network connectivity and
[az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create [ref-support-levels]: /cli/azure/reference-types-and-status [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials+
aks App Routing Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-migration.md
After migrating to the application routing add-on, learn how to [monitor Ingress
<!-- EXTERNAL LINKS --> [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete+
aks App Routing Nginx Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-nginx-configuration.md
The application routing add-on uses a Kubernetes [custom resource definition (CR
When you enable the application routing add-on with NGINX, it creates an ingress controller called `default` in the `app-routing-namespace` configured with a public facing Azure load balancer. That ingress controller uses an ingress class name of `webapprouting.kubernetes.azure.com`.
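For context, a minimal sketch of an Ingress that targets the default controller through that ingress class (host, service name, and port are placeholders):

```bash
# Sketch: route traffic through the add-on's default ingress controller.
# Host, service name, and port are placeholders.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aks-helloworld
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - host: example.contoso.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld
            port:
              number: 80
EOF
```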
-You can modify the configuration of the default ingress controller by editing its configuration.
-
-```bash
-kubectl edit nginxingresscontroller default -n app-routing-system
-```
### Create another public facing NGINX ingress controller
To create another NGINX ingress controller with a public facing Azure Load Balancer:
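A minimal sketch, assuming the add-on's `NginxIngressController` custom resource and `v1alpha1` API version (names are placeholders):

```bash
# Sketch: request a second public NGINX ingress controller from the add-on.
# The apiVersion and spec fields are assumptions; names are placeholders.
kubectl apply -f - <<EOF
apiVersion: approuting.kubernetes.azure.com/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx-public
spec:
  ingressClassName: nginx-public
  controllerNamePrefix: nginx-public
EOF
```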
aks App Routing Nginx Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-nginx-prometheus.md
Then upload the desired dashboard file and click on **Load**.
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [grafana-nginx-dashboard]: https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json [grafana-nginx-request-performance-dashboard]: https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/request-handling-performance.json+
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
For other configurations, see:
* [Application routing add-on configuration][custom-ingress-configurations]
* [Configure internal NGINX ingress controller for Azure private DNS zone][create-nginx-private-controller].
-With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the Cloud Native Computing Foundation (CNCF), using the application routing add-on is the default method for all AKS clusters.
+With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the Cloud Native Computing Foundation (CNCF), using the application routing add-on with OSM is not recommended.
## Prerequisites
With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the
- The application routing add-on supports up to five Azure DNS zones.
- All global Azure DNS zones integrated with the add-on have to be in the same resource group.
- All private Azure DNS zones integrated with the add-on have to be in the same resource group.
-- Editing any resources in the `app-routing-system` namespace, including the Ingress-nginx ConfigMap, isn't supported.
+- Editing the ingress-nginx `ConfigMap` in the `app-routing-system` namespace isn't supported.
## Enable application routing using Azure CLI
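A minimal sketch of enabling the add-on, assuming a recent Azure CLI with the `az aks approuting` command group (names are placeholders):

```bash
# Sketch: enable the application routing add-on on an existing AKS cluster.
az aks approuting enable --resource-group <resource-group> --name <cluster-name>
```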
When the application routing add-on is disabled, some Kubernetes resources might
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [ingress-backend]: https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api+
aks Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/artifact-streaming.md
This article described how to enable Artifact Streaming on your AKS node pools t
[az-acr-artifact-streaming-create]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-create [az-acr-manifest-list-referrers]: /cli/azure/acr/manifest#az-acr-manifest-list-referrers [az-aks-nodepool-show]: /cli/azure/aks/nodepool#az-aks-nodepool-show+
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
For a detailed discussion of upgrade best practices and other considerations, se
[pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ [release-tracker]: release-tracker.md [k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
-[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
+[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates
aks Auto Upgrade Node Os Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-os-image.md
For a detailed discussion of upgrade best practices and other considerations, se
[az-aks-update]: /cli/azure/aks#az-aks-update <!-- LINKS - external -->
-[Blog]: https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/increased-security-and-resiliency-of-canonical-workloads-on/ba-p/3970623
+[Blog]: https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/increased-security-and-resiliency-of-canonical-workloads-on/ba-p/3970623
aks Automated Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/automated-deployments.md
Learn more about [GitHub Actions for Kubernetes][kubernetes-action].
<!-- LINKS --> [kubernetes-action]: kubernetes-action.md+
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
description: Learn how to create a cluster that distributes nodes across availab
Last updated 12/06/2023+++ # Create an Azure Kubernetes Service (AKS) cluster that uses availability zones
This article described how to create an AKS cluster using availability zones. Fo
<!-- LINKS - external --> [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe [kubectl-well_known_labels]: https://kubernetes.io/docs/reference/labels-annotations-taints/+
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
Title: Integrate Microsoft Entra ID with Azure Kubernetes Service (AKS) (legacy) description: Learn how to use the Azure CLI to create and Microsoft Entra ID-enabled Azure Kubernetes Service (AKS) cluster (legacy)-+
For best practices on identity and resource control, see [Best practices for aut
[managed-aad]: managed-azure-ad.md [managed-aad-migrate]: managed-azure-ad.md#migrate-a-legacy-azure-ad-cluster-to-integration [az-aks-show]: /cli/azure/aks#az_aks_show+
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Last updated 11/24/2023+++ # Use Azure Blob storage Container Storage Interface (CSI) driver
To have a storage volume persist for your workload, you can use a StatefulSet. T
[azure-disk-csi-driver]: azure-disk-csi.md [azure-files-csi-driver]: azure-files-csi.md [install-azure-cli]: /cli/azure/install-azure-cli+
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
To learn how to utilize AKS with your own Container Network Interface (CNI) plug
[az-aks-update]: /cli/azure/aks#az-aks-update [az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update+
aks Azure Cni Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overview.md
Learn more about networking in AKS in the following articles:
[azure-cni-overlay]: azure-cni-overlay.md [configure-azure-cni-dynamic-ip-allocation]: configure-azure-cni-dynamic-ip-allocation.md [configure-azure-cni-static-block-allocation]: configure-azure-cni-static-block-allocation.md+
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Learn more about networking in AKS in the following articles:
<!-- LINKS - Internal --> [aks-ingress-basic]: ingress-basic.md+
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
Last updated 03/05/2024+++ # Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS)
kubectl delete -f azure-pvc.yaml
[azure-disk-write-accelerator]: ../virtual-machines/windows/how-to-enable-write-accelerator.md [on-demand-bursting]: ../virtual-machines/disk-bursting.md [customer-usage-attribution]: ../marketplace/azure-partner-customer-usage-attribution.md+
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
Last updated 03/05/2024+++ # Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
For associated best practices, see [Best practices for storage and backups in AK
[tag-resources]: ../azure-resource-manager/management/tag-resources.md [azure-files-usage]: ../storage/files/understand-performance.md#choosing-a-performance-tier-based-on-usage-patterns [az-storage-account-create]: /cli/azure/storage/account#az-storage-account-create+
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Last updated 04/19/2023+++ # Use the Azure Disk Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
The output of the command resembles the following example:
[az-premium-ssd]: ../virtual-machines/disks-types.md#premium-ssds [general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md [disk-based-solutions]: /azure/cloud-adoption-framework/scenarios/app-platform/aks/storage#disk-based-solutions+
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Last updated 02/01/2024+++ # Bring your own keys (BYOK) with Azure managed disks in Azure Kubernetes Service (AKS)
Review [best practices for AKS cluster security][best-practices-security]
[customer-managed-keys-windows]: ../virtual-machines/disk-encryption.md#customer-managed-keys [customer-managed-keys-linux]: ../virtual-machines/disk-encryption.md#customer-managed-keys [key-vault-generate]: ../key-vault/general/manage-with-cli2.md+
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Last updated 01/11/2024+++ # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
The output of the commands resembles the following example:
[azure-private-endpoint-dns]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration [azure-netapp-files-mount-options-best-practices]: ../azure-netapp-files/performance-linux-mount-options.md#rsize-and-wsize [nfs-file-share-mount-options]: ../storage/files/storage-files-how-to-mount-nfs-shares.md#mount-options+
aks Azure Hpc Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hpc-cache.md
az feature show --namespace "Microsoft.StorageCache"
[az-hpc-cache-blob-storage-target-add]: /cli/azure/hpc-cache/blob-storage-target#az_hpc_cache_blob_storage_target_add [az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone#az_network_private_dns_zone_create [az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create
-[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create
+[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create
aks Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hybrid-benefit.md
To learn more about Windows containers on AKS, see the following resources:
* [Learn how to deploy, manage, and monitor Windows containers on AKS](/training/paths/deploy-manage-monitor-wincontainers-aks). * Open an issue or provide feedback in the [Windows containers GitHub repository](https://github.com/microsoft/Windows-Containers/issues). * Review the [third-party partner solutions for Windows on AKS](windows-aks-partner-solutions.md).+
aks Azure Linux Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-linux-aks-partner-solutions.md
For more information, see [CloudCasa by Catalogic Solutions](https://cloudcasa.i
## Next steps [Learn more about the Azure Linux Container Host on AKS](../azure-linux/intro-azure-linux.md).+
aks Azure Netapp Files Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files-dual-protocol.md
Last updated 02/26/2024+++ # Provision Azure NetApp Files dual-protocol volumes for Azure Kubernetes Service
Astra Trident supports many features with Azure NetApp Files. For more informati
[azure-netapp-smb]: azure-netapp-files-smb.md [azure-netapp-files]: azure-netapp-files.md [azure-netapp-files-volume-dual-protocol]: ../azure-netapp-files/create-volumes-dual-protocol.md+
aks Azure Netapp Files Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files-nfs.md
Last updated 05/08/2023+++ # Provision Azure NetApp Files NFS volumes for Azure Kubernetes Service
Astra Trident supports many features with Azure NetApp Files. For more informati
[install-azure-cli]: /cli/azure/install-azure-cli [use-tags]: use-tags.md [azure-ad-app-registration]: ../active-directory/develop/howto-create-service-principal-portal.md+
aks Azure Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files-smb.md
Last updated 05/08/2023+++ # Provision Azure NetApp Files SMB volumes for Azure Kubernetes Service
Astra Trident supports many features with Azure NetApp Files. For more informati
[install-azure-cli]: /cli/azure/install-azure-cli [use-tags]: use-tags.md [azure-ad-app-registration]: ../active-directory/develop/howto-create-service-principal-portal.md+
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Last updated 05/08/2023+++ # Configure Azure NetApp Files for Azure Kubernetes Service
Astra Trident supports many features with Azure NetApp Files. For more informati
[install-azure-cli]: /cli/azure/install-azure-cli [use-tags]: use-tags.md [azure-ad-app-registration]: ../active-directory/develop/howto-create-service-principal-portal.md+
aks Azure Nfs Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-nfs-volume.md
ls -l
[azure-linux-vm]: ../virtual-machines/linux/endorsed-distros.md [linux-create]: ../virtual-machines/linux/tutorial-manage-vm.md [azure-files-overview]: ../storage/files/storage-files-introduction.md+
aks Best Practices App Cluster Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-app-cluster-reliability.md
description: Learn the best practices for deployment and cluster reliability for Azure Kubernetes Service (AKS) workloads. Last updated 03/11/2024+++ # Deployment and cluster reliability best practices for Azure Kubernetes Service (AKS)
This article focused on best practices for deployment and cluster reliability fo
* [High availability and disaster recovery overview for AKS](./ha-dr-overview.md) * [Run AKS clusters at scale](./best-practices-performance-scale-large.md) * [Baseline architecture for an AKS cluster](/azure/architecture/reference-architectures/containers/aks/baseline-aks)+
aks Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-cost.md
description: Recommendations and best practices for optimizing costs in Azure Kubernetes Service (AKS). Last updated 02/21/2024+++ # Optimize costs in Azure Kubernetes Service (AKS)
Cost optimization is an ongoing and iterative effort. Learn more by reviewing th
* [Optimize Compute Costs on AKS](/training/modules/aks-optimize-compute-costs/) * [AKS Cost Optimization Techniques](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-kubernetes-service-aks-cost-optimization-techniques/ba-p/3652908) * [What is FinOps?](/azure/cost-management-billing/finops/)+
aks Best Practices Performance Scale Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale-large.md
description: Learn the best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS). Last updated 01/18/2024+++ # Best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS)
As you scale your AKS clusters to larger scale points, keep the following node p
<!-- LINKS - External --> [throttling-policies]: https://azure.microsoft.com/blog/api-management-advanced-caching-and-throttling-policies/+
aks Best Practices Performance Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale.md
description: Learn the best practices for performance and scaling for small to medium workloads in Azure Kubernetes Service (AKS). Last updated 11/03/2023+++ # Best practices for performance and scaling for small to medium workloads in Azure Kubernetes Service (AKS)
Ephemeral OS disks can provide dynamic IOPS and throughput for your application,
### Pod scheduling
The memory and CPU resources allocated to a VM have a direct impact on the performance of the pods running on the VM. When a pod is created, it's assigned a certain amount of memory and CPU resources, which are used to run the application. If the VM doesn't have enough memory or CPU resources available, it can cause the pods to slow down or even crash. If the VM has too much memory or CPU resources available, it can cause the pods to run inefficiently, wasting resources and increasing costs. We recommend monitoring the total pod requests across your workloads against the total allocatable resources for best scheduling predictability and performance. You can also set the maximum pods per node based on your capacity planning using `--max-pods`.
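A minimal sketch of setting that cap when adding a node pool (names and the pod count are placeholders):

```bash
# Sketch: cap pods per node at node pool creation time; values are placeholders.
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name <nodepool-name> \
  --max-pods 60
```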
aks Cis Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-azure-linux.md
For more information about Azure Linux Container Host security, see the followin
[cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark [linux-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-linux.md [linux-container-host-aks]: ../azure-linux/intro-azure-linux.md+
aks Cis Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-windows.md
description: Learn how AKS applies the CIS benchmark to Windows Server 2022 imag
Last updated 09/27/2023+++ # Azure Kubernetes Service (AKS) Windows image alignment with Center for Internet Security (CIS) benchmark
For more information about AKS security, see the following articles:
<!-- INTERNAL LINKS --> [cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark [security-concepts-aks-apps-clusters]: concepts-security.md
-[windows-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-windows.md
+[windows-security-baseline]: ../governance/policy/samples/guest-configuration-baseline-windows.md
aks Cluster Autoscaler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler-overview.md
description: Learn about cluster autoscaling in Azure Kubernetes Service (AKS) using the cluster autoscaler. Last updated 01/05/2024+++ # Cluster autoscaling in Azure Kubernetes Service (AKS) overview
Depending on how long the scaling operations have been experiencing failures, it
<!-- LINKS > [vertical-pod-autoscaler]: vertical-pod-autoscaler.md [horizontal-pod-autoscaler]:concepts-scale.md#horizontal-pod-autoscaler+
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
description: Learn how to use the cluster autoscaler to automatically scale your
Last updated 01/11/2024+++ # Use the cluster autoscaler in Azure Kubernetes Service (AKS)
To further help improve cluster resource utilization and free up CPU and memory
[az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool [kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why [kubernetes-cluster-autoscaler]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler+
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS)
Last updated 06/20/2023+++ # Configure an AKS cluster
az aks update -n aksTest -g aksTest --nrg-lockdown-restriction-level Unrestricte
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show [az-vm-list]: /cli/azure/vm#az_vm_list+
aks Cluster Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md
You can also [select and deploy Kubernetes applications available through Market
<!-- EXTERNAL --> [arc-k8s-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc&regions=all+
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
description: Learn about the core components that make up workloads and clusters
Last updated 01/16/2024+++ # Core Kubernetes concepts for Azure Kubernetes Service
This article covers some of the core Kubernetes components and how they apply to
[aks-tags]: use-tags.md [aks-support]: support-policies.md#user-customization-of-agent-nodes [intro-azure-linux]: ../azure-linux/intro-azure-linux.md+
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-identity.md
For more information on core Kubernetes and AKS concepts, see the following arti
[aks-concepts-network]: concepts-network.md [operator-best-practices-identity]: operator-best-practices-identity.md [upgrade-per-cluster]: ../azure-monitor/containers/container-insights-update-metrics.md#upgrade-per-cluster-using-azure-cli+
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
Title: Concepts - Networking in Azure Kubernetes Services (AKS)
description: Learn about networking in Azure Kubernetes Service (AKS), including kubenet and Azure CNI networking, ingress controllers, load balancers, and static IP addresses. Last updated 03/26/2024+++
For more information on core Kubernetes and AKS concepts, see the following arti
[azure-cni-powered-by-cilium]: azure-cni-powered-by-cilium.md [azure-cni-powered-by-cilium-limitations]: azure-cni-powered-by-cilium.md#limitations [use-byo-cni]: use-byo-cni.md+
aks Concepts Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-scale.md
Title: Concepts - Scale applications in Azure Kubernetes Services (AKS)
description: Learn about scaling in Azure Kubernetes Service (AKS), including the horizontal pod autoscaler, cluster autoscaler, and Azure Container Instances. Last updated 03/18/2024+++ # Scaling options for applications in Azure Kubernetes Service (AKS)
For more information on core Kubernetes and AKS concepts, see the following arti
[aks-concepts-identity]: concepts-identity.md [aks-concepts-network]: concepts-network.md [virtual-nodes-cli]: virtual-nodes-cli.md
-[keda-overview]: keda-about.md
+[keda-overview]: keda-about.md
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
For more information on core Kubernetes and AKS concepts, see:
[microsoft-vulnerability-management-aks]: concepts-vulnerability-management.md [aks-vulnerability-management-nodes]: concepts-vulnerability-management.md#worker-nodes [manage-ssh-access]: manage-ssh-node-access.md
-[trusted-launch]: use-trusted-launch.md
+[trusted-launch]: use-trusted-launch.md
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS)
description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims. Last updated 03/19/2024+++
For more information on core Kubernetes and AKS concepts, see the following arti
[azure-disk-customer-managed-key]: azure-disk-customer-managed-keys.md [azure-aks-storage-considerations]: /azure/cloud-adoption-framework/scenarios/app-platform/aks/storage [azure-container-storage]: ../storage/container-storage/container-storage-introduction.md+
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Title: Concepts - Sustainable software engineering in Azure Kubernetes Services
description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS). Last updated 06/20/2023+++ # Sustainable software engineering practices in Azure Kubernetes Service (AKS)
Many attacks on cloud infrastructure seek to misuse deployed resources for the a
> [!div class="nextstepaction"] > [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)+
aks Confidential Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/confidential-containers-overview.md
description: Learn about Confidential Containers (preview) on an Azure Kubernete
Last updated 03/18/2024+++ # Confidential Containers (preview) with Azure Kubernetes Service (AKS)
With the local container filesystem backed by VM memory, writing to the containe
[azure-dedicated-hosts]: ../virtual-machines/dedicated-hosts.md [deploy-confidential-containers-default-aks]: deploy-confidential-containers-default-policy.md [confidential-containers-security-policy]: ../confidential-computing/confidential-containers-aks-security-policy.md+
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
Learn more about networking in AKS in the following articles:
[azure-cni-prereq]: ./configure-azure-cni.md#prerequisites [azure-cni-deployment-parameters]: ./azure-cni-overview.md#deployment-parameters [az-aks-enable-addons]: /cli/azure/aks#az_aks_enable_addons+
aks Configure Azure Cni Static Block Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-static-block-allocation.md
Learn more about networking in AKS in the following articles:
[azure-cni-prereq]: ./configure-azure-cni.md#prerequisites [azure-cni-deployment-parameters]: ./azure-cni-overview.md#deployment-parameters [az-aks-enable-addons]: /cli/azure/aks#az_aks_enable_addons+
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
az aks create \
To configure Azure CNI networking with dynamic IP allocation and enhanced subnet support, see [Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS](configure-azure-cni-dynamic-ip-allocation.md). +
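A minimal sketch of creating a cluster with Azure CNI networking (resource names and the subnet ID are placeholders):

```bash
# Sketch: create an AKS cluster using Azure CNI; all values are placeholders.
az aks create \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --network-plugin azure \
  --vnet-subnet-id <subnet-resource-id> \
  --generate-ssh-keys
```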
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
This article covered how to configure `kube-proxy` in Azure Kubernetes Service (
[az-extension-update]: /cli/azure/extension#az-extension-update [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-update]: /cli/azure/aks#az-aks-update+
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
Once the cluster has been created, you can deploy your workloads. This article w
[az-group-create]: /cli/azure/group#az_group_create [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials+
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
This article showed you how to deploy your AKS cluster into your existing virtua
[custom-route-table]: ../virtual-network/manage-route-table.md [Create an AKS cluster with user-assigned managed identity]: configure-kubenet.md#create-an-aks-cluster-with-user-assigned-managed-identity [bring-your-own-control-plane-managed-identity]: ../aks/use-managed-identity.md#bring-your-own-managed-identity+
aks Control Plane Metrics Default List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-plane-metrics-default-list.md
description: This article describes the minimal ingestion profile metrics for Az
Last updated 01/31/2024+++
The following are metrics that are allow-listed with `minimalingestionprofile=tr
<!-- INTERNAL LINKS --> [azure-monitor-prometheus-metrics-scrape-config-minimal]: ../azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md+
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
To learn more about core network concepts, see [Network concepts for application
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md+
aks Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cost-analysis.md
See the following guide to troubleshoot [AKS cost analysis add-on issues](/troub
## Learn more
-Visibility is one element of cost management. Refer to [Optimize Costs in Azure Kubernetes Service (AKS)](./best-practices-cost.md) for other best practices on how to gain control over your kubernetes cost.
+Visibility is one element of cost management. Refer to [Optimize Costs in Azure Kubernetes Service (AKS)](./best-practices-cost.md) for other best practices on how to gain control over your kubernetes cost.
aks Create Nginx Ingress Private Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-nginx-ingress-private-controller.md
For other configuration information related to SSL encryption other advanced NGI
[az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone?#az-network-private-dns-zone-create [az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az-network-private-dns-link-vnet-create [az-network-private-dns-record-set-a-list]: /cli/azure/network/private-dns/record-set/a#az-network-private-dns-record-set-a-list+
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
description: Learn how to create multiple node pools for a cluster in Azure Kube
Last updated 12/08/2023+++
In this article, you learned how to create multiple node pools in an AKS cluster
[use-system-pool]: use-system-pools.md [restricted-vm-sizes]: ../virtual-machines/sizes.md [aks-taints]: manage-node-pools.md#set-node-pool-taints+
aks Csi Secrets Store Configuration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-configuration-options.md
To learn more about the Azure Key Vault provider for Secrets Store CSI Driver, s
<!-- LINKS EXTERNAL --> [reloader]: https://github.com/stakater/Reloader+
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
In this article, you learned how to use the Azure Key Vault provider for Secrets
<!-- LINKS EXTERNAL --> [kube-csi]: https://kubernetes-csi.github.io/docs/ [kubernetes-version-support]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy+
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
In this article, you learned how to create and provide an identity to access you
[az-identity-create]: /cli/azure/identity#az-identity-create [az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create [az-aks-disable-addons]: /cli/azure/aks#az-aks-disable-addons+
aks Csi Secrets Store Nginx Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-nginx-tls.md
We can now deploy a Kubernetes ingress resource referencing the secret.
<!-- LINKS EXTERNAL --> [kubernetes-ingress-tls]: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls+
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AK
description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Last updated 03/14/2024+++
To review the migration options for your storage classes and upgrade your cluste
[azure-policy-aks-definition]: ../governance/policy/samples/built-in-policies.md#kubernetes [encrypt-managed-disks-customer-managed-keys]: ../virtual-machines/disks-cross-tenant-customer-managed-keys.md [azure-disk-customer-managed-keys]: azure-disk-customer-managed-keys.md+
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
For more information on AKS security best practices, see [Best practices for clu
[az-feature-show]: /cli/azure/feature#az-feature-show [az-feature-register]: /cli/azure/feature#az-feature-register [az-provider-register]: /cli/azure/provider#az-provider-register+
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
The settings below can be used to tune the operation of the virtual memory (VM)
[az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show [az-provider-register]: /cli/azure/provider#az-provider-register+
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
Learn more about [Dapr][dapr-overview] and [how to use it][dapr-howto].
<!-- LINKS EXTERNAL --> [dapr-prod-guidelines]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production/#enabling-high-availability-in-an-existing-dapr-deployment+
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
Once you have successfully provisioned Dapr in your AKS cluster, try deploying a
[dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/ [supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc [dapr-mariner]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-deploy/#using-mariner-based-images+
aks Dapr Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-workflow.md
Notice that the workflow status is marked as completed.
[deployment-yaml]: https://github.com/Azure/dapr-workflows-aks-sample/blob/main/Deploy/deployment.yaml [docker]: https://docs.docker.com/get-docker/ [helm]: https://helm.sh/docs/intro/install/+
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions [dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/ [supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc+
aks Deploy Application Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-application-az-cli.md
To deploy the application (extension) through Azure CLI, follow the steps outlin
- Learn about [Kubernetes applications available through Marketplace](deploy-marketplace.md). - Learn about [cluster extensions](cluster-extensions.md).+
aks Deploy Application Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-application-template.md
Once you've accepted the terms, you can deploy your ARM template. For instructio
- Learn about [Kubernetes applications available through Marketplace](deploy-marketplace.md). - Learn about [cluster extensions](cluster-extensions.md).+
aks Deploy Confidential Containers Default Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-confidential-containers-default-policy.md
Title: Deploy an AKS cluster with Confidential Containers (preview)
description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Confidential Containers (preview) and a default security policy by using the Azure CLI. Last updated 01/10/2024+++
kubectl delete pod pod-name
[az-attestation-show]: /cli/azure/attestation#az-attestation-show [attestation-quickstart-azure-cli]: ../attestation/quickstart-azure-cli.md [symptom-role-assignment-changes-are-not-being-detected]: ../role-based-access-control/troubleshooting.md#symptomrole-assignment-changes-are-not-being-detected+
aks Deploy Extensions Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-extensions-az-cli.md
az k8s-extension delete --name azureml --cluster-name <clusterName> --resource-g
[use-managed-identity]: ./use-managed-identity.md [workload-identity-overview]: workload-identity-overview.md [use-azure-ad-pod-identity]: use-azure-ad-pod-identity.md+
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
If you experience issues, see the [troubleshooting checklist for failed deployme
[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer +
aks Deployment Safeguards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-safeguards.md
To learn more, see [workload validation in Gatekeeper](https://open-policy-agent
[Azure-Policy-built-in-definition-docs]: /azure/aks/policy-reference#policy-definitions [Azure-Policy-compliance-portal]: https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyMenuBlade/~/Compliance [Azure-Policy-RBAC-permissions]: /azure/governance/policy/overview#azure-rbac-permissions-in-azure-policy+
aks Developer Best Practices Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-resource-management.md
description: Learn the application developer best practices for resource management in Azure Kubernetes Service (AKS). Last updated 05/25/2023+++ # Best practices for application developers to manage resources in Azure Kubernetes Service (AKS)
To implement some of these best practices, see [Develop with Bridge to Kubernete
[btk]: /visualstudio/containers/overview-bridge-to-kubernetes [operator-best-practices-isolation]: operator-best-practices-cluster-isolation.md [resource-quotas]: operator-best-practices-scheduler.md#enforce-resource-quotas+
aks Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/devops-pipeline.md
You're now ready to create a release, which means to start the process of runnin
1. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output. ::: zone-end+
aks Draft Devx Extension Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft-devx-extension-aks.md
In this article, you learned how to use Draft and the DevX extension for Visual
[aks-acr-authenticate]: ../aks/cluster-container-registry-integration.md [devx-extension]: https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.aks-devx-tools [draft]: https://github.com/Azure/draft+
aks Draft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft.md
After you create your artifacts and set up GitHub OIDC, you can use `draft gener
[az-aks-draft-create]: /cli/azure/aks/draft#az-aks-draft-create [az-aks-draft-setup-gh]: /cli/azure/aks/draft#az-aks-draft-setup-gh [az-aks-draft-generate-workflow]: /cli/azure/aks/draft#az-aks-draft-generate-workflow+
aks Edge Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/edge-zones.md
After deploying your AKS cluster in an Edge Zone, learn about how you can [confi
[az-aks-create]: /cli/azure/aks#az_aks_create [preset-config]: ./quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal+
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
az aks update -g <resourceGroup> -n <clusterName> --outbound-type userAssignedNA
[az-feature-show]: /cli/azure/feature#az_feature_show [az-provider-register]: /cli/azure/provider#az_provider_register [az-aks-update]: /cli/azure/aks#az_aks_update+
aks Egress Udr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-udr.md
For more information on user-defined routes and Azure networking, see:
* [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md) * [How to create, change, or delete a route table](../virtual-network/manage-route-table.md).+
aks Enable Fips Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-fips-nodes.md
To learn more about AKS security, see [Best practices for cluster security and u
[install-azure-cli]: /cli/azure/install-azure-cli [node-image-upgrade]: node-image-upgrade.md [errors-mount-file-share-fips]: /troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#fipsnodepool+
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
description: Learn how to configure a host-based encryption in an Azure Kubernet
Last updated 07/17/2023 +++ ms.devlang: azurecli
Before you begin, review the following prerequisites and limitations.
[akv-built-in-roles]: ../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add+
aks Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/events.md
Now that you understand Kubernetes events, you can continue your monitoring and
[aks-azure-monitor]: ./monitor-aks.md [container-insights]: ../azure-monitor/containers/container-insights-enable-aks.md [k8s-events]: https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/+
aks Free Standard Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/free-standard-pricing-tiers.md
Title: Azure Kubernetes Service (AKS) Free, Standard and Premium pricing tiers f
description: Learn about the Azure Kubernetes Service (AKS) Free, Standard, and Premium pricing plans and what features, deployment patterns, and recommendations to consider between each plan. Last updated 04/07/2023+++
This process takes several minutes to complete. You shouldn't experience any dow
[long-term-support]: long-term-support.md [long-term-support-update]: long-term-support.md#enable-lts-on-an-existing-cluster [install-azure-cli]: /cli/azure/install-azure-cli+
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Last updated 04/10/2023+++ #Customer intent: As a cluster administrator or developer, I want to create an AKS cluster that can use high-performance GPU-based VMs for compute-intensive workloads.
To see the GPU in action, you can schedule a GPU-enabled workload with the appro
[az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update [NVadsA10]: /azure/virtual-machines/nva10v5-series+
aks Ha Dr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ha-dr-overview.md
For more information, see the following articles:
- [About AKS backup using Azure Backup (preview)](../backup/azure-kubernetes-service-backup-overview.md) - [Back up AKS using Azure Backup (preview)](../backup/azure-kubernetes-service-cluster-backup.md)+
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Title: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster recommendations: false description: Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster-+ Previously updated : 04/02/2024 Last updated : 01/16/2024 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
The Open Liberty Operator simplifies the deployment and management of applicatio
For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
-This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions a number of Azure resources including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operators, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
+This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to AKS. The offer automatically provisions several Azure resources, including an Azure Container Registry (ACR) instance, an AKS cluster, an Azure App Gateway Ingress Controller (AGIC) instance, the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aks). If you prefer manual step-by-step guidance for running Liberty on AKS that doesn't utilize the automation enabled by the offer, see [Manually deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster](/azure/developer/java/ee/howto-deploy-java-liberty-app-manual).
This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* You can use Azure Cloud Shell or a local terminal.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
+* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+ > [!NOTE] > You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed, with the exception of Docker. > > :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
-* Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, Windows Subsystem for Linux).
-* This article requires at least version 2.31.0 of Azure CLI.
-* Install a Java SE implementation, version 17 or later. (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
-* Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
-* Install [Docker](https://docs.docker.com/get-docker/) for your OS.
+* If running the commands in this guide locally (instead of Azure Cloud Shell):
+ * Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
+ * Install a Java SE implementation, version 17 or later (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
+ * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
+ * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
* Make sure you're assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following the steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group). ## Create a Liberty on AKS deployment using the portal
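One way to verify those role assignments from the command line is sketched below; it assumes a recent Azure CLI and that you're already signed in, and checks your own assignments at the current subscription scope:

```bash
# List role assignments for the signed-in user; look for Owner, or for
# both Contributor and User Access Administrator, in the Role column.
az role assignment list \
    --assignee "$(az ad signed-in-user show --query id --output tsv)" \
    --output table
```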
You can learn more from the following references:
* [Open Liberty](https://openliberty.io/) * [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator) * [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)+
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
Title: "Deploy Quarkus on Azure Kubernetes Service" description: Shows how to quickly stand up Quarkus on Azure Kubernetes Service.-+
You may also want to use `docker rmi` to delete the container images `postgres`
- [Deploy serverless Java apps with Quarkus on Azure Functions](/azure/azure-functions/functions-create-first-quarkus) - [Quarkus](https://quarkus.io/) - [Jakarta EE on Azure](/azure/developer/java/ee)+
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
Title: "Deploy WebLogic Server on Azure Kubernetes Service using the Azure portal" description: Shows how to quickly stand up WebLogic Server on Azure Kubernetes Service.-+ Last updated 02/09/2024
Learn more about running WLS on AKS or virtual machines by following these links
> [!div class="nextstepaction"] > [WLS on virtual machines](/azure/virtual-machines/workloads/oracle/oracle-weblogic)+
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
For information on how to install an HTTPS-secured ingress controller in AKS, se
[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs [ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/ [ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource+
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
For more information regarding the network requirements of AKS clusters, see [co
[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az-extension-update [install-azure-cli]: /cli/azure/install-azure-cli+
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
The `eraser-aks-xxxxx` pod is deleted within 10 minutes after work completion. You
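To observe this behavior, you can watch the worker pods while a cleanup job runs; a sketch, assuming the default deployment in the `kube-system` namespace:

```bash
# The eraser-aks-* worker pods appear while images are being scanned
# and removed, then disappear within about 10 minutes of completion.
kubectl get pods --namespace kube-system | grep eraser
```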
[az-aks-update]: /cli/azure/aks#az_aks_update [trivy]: https://github.com/aquasecurity/trivy [az-aks-show]: /cli/azure/aks#az_aks_show+
aks Image Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-integrity.md
In this article, you learned how to use Image Integrity to validate signed image
<! External links -> [ratify]: https://github.com/deislabs/ratify [image-integrity-policy]: https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf426bb8-b320-4321-8545-1b784a5df3a4+
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
This article included some external components to AKS. To learn more about these
[acr-helm]: ../container-registry/container-registry-helm-repos.md [azure-powershell-install]: /powershell/azure/install-az-ps [aks-app-add-on]: app-routing.md+
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
You can also:
[new-az-public-ip-address]: /powershell/module/az.network/new-azpublicipaddress [aks-app-add-on]: app-routing.md [parameter-targettag]: /powershell/module/az.containerregistry/import-azcontainerregistryimage+
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
description: Learn about the add-ons, extensions, and open-source integrations y
Last updated 05/22/2023+++ # Add-ons, extensions, and other integrations with Azure Kubernetes Service (AKS)
For more information, see [Windows AKS partner solutions][windows-aks-partner-so
[github-actions-aks]: kubernetes-action.md [az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons [windows-aks-partner-solutions]: windows-aks-partner-solutions.md+
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
To learn more about Kubernetes services, see the [Kubernetes services documentat
[get-azvirtualnetworksubnetconfig]: /powershell/module/az.network/get-azvirtualnetworksubnetconfig [az-network-private-link-service-list]: /cli/azure/network/private-link-service#az_network_private_link_service_list [az-network-private-endpoint-create]: /cli/azure/network/private-endpoint#az_network_private_endpoint_create+
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
description: Learn the features and benefits of Azure Kubernetes Service to depl
Last updated 05/02/2023+++ # What is Azure Kubernetes Service?
Learn more about deploying and managing AKS.
[helm]: quickstart-helm.md [aks-best-practices]: best-practices.md [intro-azure-linux]: ../azure-linux/intro-azure-linux.md+
aks Istio About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md
Istio-based service mesh add-on for AKS has the following limitations:
[azure-cni-cilium]: azure-cni-powered-by-cilium.md [open-service-mesh-about]: open-service-mesh-about.md
-[istio-deploy-addon]: istio-deploy-addon.md
+[istio-deploy-addon]: istio-deploy-addon.md
aks Istio Deploy Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md
az group delete --name ${RESOURCE_GROUP} --yes --no-wait
[istio-deploy-ingress]: istio-deploy-ingress.md [az-aks-mesh-get-revisions]: /cli/azure/aks/mesh#az-aks-mesh-get-revisions(aks-preview) [bicep-aks-resource-definition]: /azure/templates/microsoft.containerservice/managedclusters+
aks Istio Deploy Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md
az group delete --name ${RESOURCE_GROUP} --yes --no-wait
``` [istio-deploy-addon]: istio-deploy-addon.md+
aks Istio Meshconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-meshconfig.md
Fields present in [open source MeshConfig reference documentation][istio-meshcon
[istio-meshconfig]: https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/ [istio-sidecar-race-condition]: https://istio.io/latest/docs/ops/common-problems/injection/#pod-or-containers-start-with-network-issues-if-istio-proxy-is-not-ready+
aks Istio Plugin Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md
You may need to periodically rotate the certificate authorities for security or
[az-aks-mesh-disable]: /cli/azure/aks/mesh#az-aks-mesh-disable [istio-generate-certs]: https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster [istio-mtls-reference]: https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication+
aks Istio Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md
The following example illustrates how to upgrade from revision `asm-1-18` to `as
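The article's full example is truncated in this excerpt. A minimal sketch of the canary flow with the mesh add-on follows; the cluster names and target revision here are illustrative, not taken from the article:

```bash
# Start a canary upgrade to a newer revision, verify workloads against
# the new control plane, then finalize the upgrade.
az aks mesh upgrade start \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --revision asm-1-19
az aks mesh upgrade complete \
    --resource-group myResourceGroup \
    --name myAKSCluster
```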
[istio-canary-upstream]: https://istio.io/latest/docs/setup/upgrade/canary/ [meshconfig]: ./istio-meshconfig.md [meshconfig-canary-upgrade]: ./istio-meshconfig.md#mesh-configuration-and-upgrades+
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
For GA Kubernetes versions, AKS offers full support of the corresponding KEDA mi
[keda-scalers]: https://keda.sh/docs/scalers/ [keda-http-add-on]: https://github.com/kedacore/http-add-on [keda-cosmos-db-scaler]: https://github.com/kedacore/external-scaler-azure-cosmos-db
-[azure-support-faq]: https://azure.microsoft.com/support/legal/faq/
+[azure-support-faq]: https://azure.microsoft.com/support/legal/faq/
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
To learn more, view the [upstream KEDA docs][keda].
[keda-scalers]: https://keda.sh/docs/scalers/ [keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue [keda]: https://keda.sh/docs/2.12/+
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
To learn more, view the [upstream KEDA docs][keda].
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl [keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue [keda]: https://keda.sh/docs/2.12/+
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
You can also install external scalers to autoscale on other Azure
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue [prometheus-scaler]: https://keda.sh/docs/2.11/scalers/prometheus/ [keda]: https://keda.sh/docs/2.12/+
aks Kubelet Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelet-logs.md
description: Learn how to view troubleshooting information in the kubelet logs f
Last updated 05/09/2023+++ #Customer intent: As a cluster operator, I want to view the logs for the kubelet that runs on each node in an AKS cluster to troubleshoot problems.
If you need more troubleshooting information for the Kubernetes main node, see [view
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [azure-container-logs]: ../azure-monitor/containers/container-insights-overview.md+
aks Kubernetes Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-action.md
Title: Build, test, and deploy containers to Azure Kubernetes Service (AKS) usin
description: Learn how to use GitHub Actions to build, test, and deploy containers to Azure Kubernetes Service (AKS). Last updated 09/12/2023+++
Review the following starter workflows for AKS. For more information, see [Using
[gh-azure-vote]: https://github.com/Azure-Samples/azure-voting-app-redis [actions/checkout]: https://github.com/actions/checkout [az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az-ad-sp-create-for-rbac+
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-helm.md
Title: Install existing applications with Helm in Azure Kubernetes Service (AKS) description: Learn how to use the Helm packaging tool to deploy containers in an Azure Kubernetes Service (AKS) cluster-+ Last updated 05/09/2023-+ #Customer intent: As a cluster operator or developer, I want to learn how to deploy Helm into an AKS cluster and then install and manage applications using Helm charts.
For more information about managing Kubernetes application deployments with Helm
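For quick reference, the basic Helm workflow the article builds on looks like this sketch; the Bitnami repository and nginx chart are illustrative examples, not the article's sample application:

```bash
# Register a chart repository, refresh the local index, and install a
# chart as a named release into the current cluster context.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx
```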
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [taints]: operator-best-practices-advanced-scheduler.md+
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-service-principal.md
description: Learn how to create and manage a Microsoft Entra service principal
Last updated 06/27/2023+++ #Customer intent: As a cluster operator, I want to understand how to create a service principal and delegate permissions for AKS to access required resources. In large enterprise environments, the user that deploys the cluster (or CI/CD system), may not have permissions to create this service principal automatically when the cluster is created.
For information on how to update the credentials, see [Update or rotate the cred
[remove-azadserviceprincipal]: /powershell/module/az.resources/remove-azadserviceprincipal [use-managed-identity]: use-managed-identity.md [managed-identity-resources-overview]: ..//active-directory/managed-identities-azure-resources/overview.md+
aks Quick Kubernetes Deploy Azd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-azd.md
To learn more about AKS and walk through a complete code-to-deployment example,
[kubernetes-concepts]: ../concepts-clusters-workloads.md [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+
aks Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md
description: Learn how to quickly deploy a Kubernetes cluster using the Bicep ex
Last updated 01/11/2024+++ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To learn more about AKS and walk through a complete code-to-deployment example,
[az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Bicep
description: Learn how to quickly deploy a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS). Last updated 12/27/2023+++ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To learn more about AKS and walk through a complete code-to-deployment example,
[az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using Azure CLI. Last updated 01/10/2024+++ #Customer intent: As a developer or cluster operator, I want to deploy an AKS cluster and deploy an application so I can see how to run applications using the managed Kubernetes service in Azure.
To learn more about AKS and walk through a complete code-to-deployment example,
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell. Last updated 01/11/2024+++ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To learn more about AKS and walk through a complete code-to-deployment example,
[azure-resource-group]: ../../azure-resource-manager/management/overview.md [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an AR
description: Learn how to quickly deploy a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS). Last updated 01/12/2024+++ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To learn more about AKS and walk through a complete code-to-deployment example,
[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+
aks Quick Kubernetes Deploy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-terraform.md
To learn more about AKS and walk through a complete code-to-deployment example,
[azd-hooks]: /azure/developer/azure-developer-cli/reference#azd-hooks [azd-overview]: /azure/developer/azure-developer-cli [aks-home]: /azure/aks+
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
description: Learn how to quickly deploy a Kubernetes cluster and deploy an appl
Last updated 01/11/2024+++ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
To learn more about AKS, and to walk through a complete code-to-deployment examp
[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference [win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json+
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Title: Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cl
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell. Last updated 01/11/2024+++ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
To learn more about AKS, and to walk through a complete code-to-deployment examp
[new-azaksnodepool]: /powershell/module/az.aks/new-azaksnodepool [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster+
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
Previously updated : 04/02/2024 Last updated : 12/05/2023 #Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
For information on how to override Azure's default system routes or add addition
This section covers five network rules and an application rule you can use to configure your firewall. You may need to adapt these rules based on your deployment. * The first network rule allows access to port 9000 via TCP.
-* The second network rule allows access to port 1194 via UDP. If you're deploying to Microsoft Azure operated by 21Vianet, see the [Azure operated by 21Vianet required network rules](./outbound-rules-control-egress.md#microsoft-azure-operated-by-21vianet-required-network-rules). Both these rules will only allow traffic destined to the Azure Region CIDR in this article, which is East US.
+* The second network rule allows access to ports 1194 and 123 via UDP. If you're deploying to Microsoft Azure operated by 21Vianet, see the [Azure operated by 21Vianet required network rules](./outbound-rules-control-egress.md#microsoft-azure-operated-by-21vianet-required-network-rules). Both of these rules only allow traffic destined to the Azure region CIDR used in this article, which is East US.
+* The third network rule opens port 123 to the `ntp.ubuntu.com` FQDN via UDP. FQDN-based network rules are a feature specific to Azure Firewall, so you'll need to adapt this rule if you use a different firewall.
* The fourth and fifth network rules allow access to pull containers from GitHub Container Registry (ghcr.io) and Docker Hub (docker.io). 1. Create the network rules using the [`az network firewall network-rule create`][az-network-firewall-network-rule-create] command.
This section covers five network rules and an application rule you can use to c
az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apitcp' --protocols 'TCP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 9000
+ az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'time' --protocols 'UDP' --source-addresses '*' --destination-fqdns 'ntp.ubuntu.com' --destination-ports 123
+ az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'ghcr' --protocols 'TCP' --source-addresses '*' --destination-fqdns ghcr.io pkg-containers.githubusercontent.com --destination-ports '443'
 az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'docker' --protocols 'TCP' --source-addresses '*' --destination-fqdns docker.io registry-1.docker.io production.cloudflare.docker.com --destination-ports '443'
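The second network rule described above isn't shown in this excerpt. As a sketch, reusing the same `$RG`, `$FWNAME`, and `$LOC` variables from the commands above, it would look like the following:

```bash
# Allow UDP 1194 and 123 to the regional AzureCloud address range, per
# the updated rule text above.
az network firewall network-rule create -g $RG -f $FWNAME \
    --collection-name 'aksfwnr' -n 'apiudp' --protocols 'UDP' \
    --source-addresses '*' \
    --destination-addresses "AzureCloud.$LOC" \
    --destination-ports 1194 123
```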
In this article, you learned how to secure your outbound traffic using Azure Fir
[Use a pre-created kubelet managed identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity [az-identity-create]: /cli/azure/identity#az_identity_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials+
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
To learn more about using internal load balancer for inbound traffic, see the [A
[maxsurge]: ./upgrade-aks-cluster.md#customize-node-surge-upgrade [az-lb]: ../load-balancer/load-balancer-overview.md [alb-outbound-rules]: ../load-balancer/outbound-rules.md+
aks Long Term Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/long-term-support.md
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes
> [!NOTE] > Kubernetes 1.30.2 is used as an example version in this article. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.+
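Putting that note together with the truncated command above, a complete invocation would look roughly like this; the resource group and cluster names are placeholders:

```bash
# Upgrade an LTS-enabled cluster to the example Kubernetes version.
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.30.2
```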
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Title: Abort an Azure Kubernetes Service (AKS) long-running operation
description: Learn how to terminate a long-running operation on an Azure Kubernetes Service cluster at the node pool or cluster level. Last updated 3/23/2023+++
Learn more about [Container insights](../azure-monitor/containers/container-insi
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli+
aks Manage Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md
To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azur
[az-role-definition-create]: /cli/azure/role/definition#az-role-definition-create [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [kubernetes-rbac]: /azure/aks/concepts-identity#azure-rbac-for-kubernetes-authorization+
aks Manage Local Accounts Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-local-accounts-managed-azure-ad.md
description: Learn how to manage local accounts when integrating Microsoft Entr
Last updated 04/20/2023+++
You can disable local accounts using the parameter `disable-local-accounts`. The
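A sketch of the disable operation on an existing cluster, with placeholder names:

```bash
# Turn off local accounts so only Microsoft Entra ID identities can
# obtain cluster credentials.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --disable-local-accounts
```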
[az-aks-update]: /cli/azure/aks#az_aks_update [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [azure-rbac-integration]: manage-azure-rbac.md+
aks Manage Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-node-pools.md
description: Learn how to manage node pools for a cluster in Azure Kubernetes Se
Last updated 07/19/2023+++
When you use an Azure Resource Manager template to create and manage resources,
[use-tags]: use-tags.md [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update+
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
Last updated 02/12/2024+++ # Manage SSH for secure access to Azure Kubernetes Service (AKS) nodes
To help troubleshoot any issues with SSH connectivity to your cluster's nodes, yo
[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az-aks-nodepool-upgrade [network-security-group-rules-overview]: concepts-security.md#azure-network-security-groups [kubelet-debug-node-access]: node-access.md
-[run-command-invoke]: /cli/azure/vmss/run-command#az-vmss-run-command-invoke
+[run-command-invoke]: /cli/azure/vmss/run-command#az-vmss-run-command-invoke
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
Title: Monitor AKS data reference
description: Important reference material needed when you monitor AKS Last updated 08/01/2023+++
For more information on the schema of Activity Log entries, see [Activity Log s
- See [Monitoring Azure AKS](monitor-aks.md) for a description of monitoring Azure AKS. - See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Title: Monitor Azure Kubernetes Service (AKS) description: Start here to learn how to monitor Azure Kubernetes Service (AKS).-+
When the [Network Observability](/azure/aks/network-observability-overview) add-
<!-- Add additional links. You can change the wording of these and add more if useful. --> - See [Monitoring AKS data reference](monitor-aks-reference.md) for a reference of the metrics, logs, and other important values created by AKS.+
aks Monitor Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md
After evaluating this preview feature, [share your feedback][share-feedback]. We
[list-of-default-metrics-aks-control-plane]: control-plane-metrics-default-list.md [az-feature-unregister]: /cli/azure/feature#az-feature-unregister [release-tracker]: https://releases.aks.azure.com/#tabversion+
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
For more information on Azure NAT Gateway, see [Azure NAT Gateway][nat-docs].
[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-provider-register]: /cli/azure/provider#az_provider_register+
aks Network Observability Byo Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-byo-cli.md
In this how-to article, you learned how to install and enable AKS Network Observ
- For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md). - To create an AKS cluster with Network Observability and managed Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) Azure managed Prometheus and Grafana](network-observability-managed-cli.md).+
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
In this how-to article, you learned how to install and enable AKS Network Observ
- For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md). - To create an AKS cluster with Network Observability and BYO Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) BYO Prometheus and Grafana](network-observability-byo-cli.md).+
aks Network Observability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-overview.md
Certain scale limitations apply when you use Azure managed Prometheus and Grafan
- To create an AKS cluster with Network Observability and Azure managed Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) Azure managed Prometheus and Grafana](network-observability-managed-cli.md). - To create an AKS cluster with Network Observability and BYO Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) BYO Prometheus and Grafana](network-observability-byo-cli.md).+
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster node
Last updated 01/08/2024+++ #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
To learn about managing your SSH keys, see [Manage SSH configuration][manage-ssh
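For quick reference, the kubectl-based path onto a node looks like this sketch; the node name is illustrative, and the image is the one commonly used in AKS guidance:

```bash
# Start a debugging pod on a specific node; the node's filesystem is
# mounted at /host inside the pod (chroot /host to inspect it).
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it \
    --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
```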
[agent-pool-rest-api]: /rest/api/aks/agent-pools/get#agentpool [manage-ssh-node-access]: manage-ssh-node-access.md [azure-bastion-linux]:../bastion/bastion-connect-vm-ssh-linux.md+
aks Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-auto-repair.md
Title: Automatically repair Azure Kubernetes Service (AKS) nodes
description: Learn about node auto-repair functionality and how AKS fixes broken worker nodes. Last updated 05/30/2023+++ # Azure Kubernetes Service (AKS) node auto-repair
Use [availability zones][availability-zones] to increase high availability with
[vm-updates]: ../virtual-machines/maintenance-and-updates.md [scheduled-events]: ../virtual-machines/linux/scheduled-events.md [spot-node-pools]: spot-node-pool.md+
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
Last updated 03/28/2023+++ # Upgrade Azure Kubernetes Service (AKS) node images
az aks nodepool show \
[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade [az-aks-show]: /cli/azure/aks#az_aks_show [upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices+
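For comparison with the node pool-level command above, refreshing every node image in the cluster without changing the Kubernetes version is a single call; a sketch with placeholder names:

```bash
# Upgrade only the node images, leaving the Kubernetes version as-is.
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-image-only
```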
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
az aks create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-i
[az-feature-register]: /cli/azure/feature#az_feature_register [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli [az-provider-register]: /cli/azure/provider#az_provider_register+
aks Node Problem Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-problem-detector.md
Title: Node Problem Detector (NPD) in Azure Kubernetes Service (AKS) nodes
description: Learn about how AKS uses Node Problem Detector to expose issues with the node. Last updated 05/31/2023+++ # Node Problem Detector (NPD) in Azure Kubernetes Service (AKS) nodes
problem_gauge{reason="VMEventScheduled",type="VMEventScheduled"} 0
## Next steps For more information on NPD, see [kubernetes/node-problem-detector](https://github.com/kubernetes/node-problem-detector).+
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
Last updated 04/19/2023+++ #Customer intent: As a cluster administrator, I want to know how to automatically apply Linux updates and reboot nodes in AKS for security and/or compliance
For a detailed discussion of upgrade best practices and other considerations, se
[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool [node-image-upgrade]: node-image-upgrade.md [upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices+
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
Last updated 10/05/2023+++ #Customer intent: As a cluster administrator, I want to know how to automatically apply Linux updates and reboot nodes in AKS for security and/or compliance
For a detailed discussion of upgrade best practices and other considerations, se
[azure-rbac-scope-levels]: ../role-based-access-control/scope-overview.md#scope-format [az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az-ad-sp-create-for-rbac [upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices+
aks Open Ai Secure Access Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-secure-access-quickstart.md
For more information on Microsoft Entra Workload ID, see [Microsoft Entra Worklo
[kubectl-get-pods]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs [kubectl-describe-pod]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe+
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep t
[osm-nginx]: https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx [app-routing]: app-routing.md [istio-about]: istio-about.md+
aks Open Service Mesh Istio Migration Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-istio-migration-guidance.md
You should now see both the `bookbuyer` and `bookthief` UI incrementing for the
## Summary We hope this walk-through provided the necessary guidance on how to migrate your current OSM policies to Istio policies. Take time to review the [Istio Concepts](https://istio.io/latest/docs/concepts/) and walk through [Istio's own Getting Started guide](https://istio.io/latest/docs/setup/getting-started/) to learn how to use the Istio service mesh to manage your applications.+
aks Open Service Mesh Uninstall Add On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-uninstall-add-on.md
description: How to uninstall the Open Service Mesh on Azure Kubernetes Service
Last updated 06/19/2023+++ # Uninstall the Open Service Mesh (OSM) add-on from your Azure Kubernetes Service (AKS) cluster
Learn more about [Open Service Mesh][osm].
<!-- LINKS - Internal --> [az-aks-disable-addon]: /cli/azure/aks#az_aks_disable_addons [osm]: ./open-service-mesh-about.md+
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
Continue to learn with the [OpenFaaS workshop][openfaas-workshop], which include
[az-group-create]: /cli/azure/group#az_group_create [az-cosmosdb-create]: /cli/azure/cosmosdb#az_cosmosdb_create [az-cosmosdb-list]: /cli/azure/cosmosdb#az_cosmosdb_list+
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support |
|--|--|--|--|--|--|
-| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 14, 2024 | Until 1.29 GA |
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA |
| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA |
| 1.28 | Aug 2023 | Sep 2023 | Nov 2023 | Nov 2024 | Until 1.32 GA|
| 1.29 | Dec 2023 | Feb 2024 | Mar 2024 | | Until 1.33 GA |
+| 1.30 | Apr 2024 | May 2024 | Jun 2024 | | Until 1.34 GA |
*\* Indicates the version is designated for Long Term Support*
Note the following important changes before you upgrade to any of the available
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |
|--|--|--|--|--|--|
-| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
| 1.26 | Azure policy 1.3.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.1<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0<br>azurefile-csi-driver 1.26.10<br>| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.26.10 |None
| 1.27 | Azure policy 1.3.0<br>azuredisk-csi driver v1.28.5<br>azurefile-csi driver v1.28.7<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>Keda 2.11.2<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>azurefile-csi-driver 1.28.7<br>KMS 0.5.0<br>CSI Secret store driver 1.3.4-1<br>|Cilium 1.13.10-1<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.11.2<br>Cilium 1.13.10-1<br>azurefile-csi-driver 1.28.7<br>azuredisk-csi driver v1.28.5<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2|Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
| 1.28 | Azure policy 1.3.0<br>azurefile-csi-driver 1.29.2<br>csi-node-driver-registrar v2.9.0<br>csi-livenessprobe 2.11.0<br>azuredisk-csi-linux v1.29.2<br>azuredisk-csi-windows v1.29.2<br>csi-provisioner v3.6.2<br>csi-attacher v4.5.0<br>csi-resizer v1.9.3<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>CSI Secret store driver 1.3.4-1<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.10-1<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br>Tigera-Operator 1.28.13| OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.29.2<br>csi-resizer v1.9.3<br>csi-attacher v4.4.2<br>csi-provisioner v4.4.2<br>blob-csi v1.23.2<br>azurefile-csi driver v1.29.2<br>azuredisk-csi driver v1.29.2<br>csi-livenessprobe v2.11.0<br>csi-node-driver-registrar v2.9.0|None
New Supported Version List
Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform-related issues. Any issues related to Kubernetes functionality and components aren't supported.
-Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
+Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.26 is considered platform support when v1.29 is the latest GA version. However, during the v1.30 GA release, v1.26 will then auto-upgrade to v1.27. If you're running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an open-source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since no more patches are produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't cover anything that relies on upstream Kubernetes support.
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/add-api-manually.md
# Add an API manually + This article shows the steps to add an API manually to an API Management instance. When you want to mock the API, you can create a blank API or define it manually. For details about mocking an API, see [Mock API responses](mock-api-responses.md). If you want to import an existing API, see the [related topics](#related-topics) section.
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-import-restrictions.md
# API import restrictions and known issues + When importing an API, you might encounter restrictions, or you might need to identify and rectify issues, before you can import successfully. In this article, you'll learn: * API Management's behavior during OpenAPI import.
api-management Api Management Authenticate Authorize Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-authenticate-authorize-azure-openai.md
# Authenticate and authorize access to Azure OpenAI APIs using Azure API Management + In this article, you learn about ways to authenticate and authorize access to Azure OpenAI API endpoints that are managed using Azure API Management. This article shows the following common methods: * **Authentication** - Authenticate to an Azure OpenAI API using policies that authenticate using either an API key or a Microsoft Entra ID managed identity.
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
# Capacity of an Azure API Management instance + **Capacity** is the most important [Azure Monitor metric](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) for making informed decisions whether to [scale or upgrade](upgrade-and-scale.md) an API Management instance to accommodate more load. The way the metric is calculated is complex, and that shapes how it behaves. This article explains what **capacity** is and how it behaves. It shows how to access **capacity** metrics in the Azure portal and suggests when to consider scaling or upgrading your API Management instance.
api-management Api Management Configuration Repository Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-configuration-repository-git.md
# How to save and configure your API Management service configuration using Git + Each API Management service instance maintains a configuration database that contains information about the configuration and metadata for the service instance. You can change the service instance by updating a setting in the Azure portal, by using Azure tools such as Azure PowerShell or the Azure CLI, or by making a REST API call. In addition to these methods, you can manage your service instance configuration using Git, enabling scenarios such as: * **Configuration versioning** - Download and store different versions of your service configuration
This article describes how to enable and use Git to manage your service configur
> [!IMPORTANT] > This feature is designed to work with small to medium API Management service configurations, such as those with an exported size less than 10 MB, or with fewer than 10,000 entities. Services with a large number of entities (products, APIs, operations, schemas, and so on) may experience unexpected failures when processing Git commands. If you encounter such failures, please reduce the size of your service configuration and try again. Contact Azure Support if you need assistance. --- ## Access Git configuration in your service 1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
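Once Git access is enabled and credentials are generated in the portal, working with the configuration is plain Git; a sketch, where `$APIM_NAME` stands in for your service name:

```bash
# Clone the service configuration repository over HTTPS; authenticate
# with the username and password generated in the portal.
git clone "https://${APIM_NAME}.scm.azure-api.net/" apim-configuration
```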
api-management Api Management Debug Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-debug-policies.md
Last updated 09/22/2020 + # Debug Azure API Management policies in Visual Studio Code + [Policies](api-management-policies.md) in Azure API Management provide powerful capabilities that help API publishers address cross-cutting concerns such as authentication, authorization, throttling, caching, and transformation. Policies are a collection of statements that are executed sequentially on the request or response of an API. This article describes how to debug API Management policies using the [Azure API Management Extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-apimanagement).
This article describes how to debug API Management policies using the [Azure API
## Restrictions and limitations
-* This feature is only available in the **Developer** tier of API Management. Each API Management instance supports only one concurrent debugging session.
- * This feature uses the built-in (service-level) all-access subscription (display name "Built-in all-access subscription") for debugging. The [**Allow tracing**](api-management-howto-api-inspector.md#verify-allow-tracing-setting) setting must be enabled in this subscription. [!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)]
api-management Api Management Error Handling Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-error-handling-policies.md
# Error handling in API Management policies + By providing a `ProxyError` object, Azure API Management allows publishers to respond to error conditions that may occur during the processing of requests. The `ProxyError` object is accessed through the [context.LastError](api-management-policy-expressions.md#ContextVariables) property and can be used by policies in the `on-error` policy section. This article provides a reference for the error handling capabilities in Azure API Management. ## Error handling in API Management
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Previously updated : 06/27/2023 Last updated : 03/13/2024 # Feature-based comparison of the Azure API Management tiers
-Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct set of features and per unit [capacity](api-management-capacity.md). The following table summarizes the key features available in each of the tiers. Some features might work differently or have different capabilities depending on the tier. In such cases the differences are called out in the documentation articles describing these individual features.
+
+Each API Management [pricing tier](api-management-key-concepts.md#api-management-tiers) offers a distinct set of features and per unit [capacity](api-management-capacity.md). The following table summarizes the key features available in each of the tiers. Some features might work differently or have different capabilities depending on the tier. In such cases the differences are called out in the documentation articles describing these individual features.
> [!IMPORTANT] > * The Developer tier is for non-production use cases and evaluations. It doesn't offer an SLA.
-> * The Consumption tier isn't available in the US Government cloud or the Microsoft Azure operated by 21Vianet cloud.
-> * API Management **v2 tiers** are now in preview, with updated feature availability. [Learn more](v2-service-tiers-overview.md).
--
-| Feature | Consumption | Developer | Basic | Standard | Premium |
-| -- | -- | | -- | -- | - |
-| Microsoft Entra integration<sup>1</sup> | No | Yes | No | Yes | Yes |
-| Virtual Network (VNet) support | No | Yes | No | No | Yes |
-| Private endpoint support for inbound connections | No | Yes | Yes | Yes | Yes |
-| Multi-region deployment | No | No | No | No | Yes |
-| Availability zones | No | No | No | No | Yes |
-| Multiple custom domain names | No | Yes | No | No | Yes |
-| Developer portal<sup>2</sup> | No | Yes | Yes | Yes | Yes |
-| Built-in cache | No | Yes | Yes | Yes | Yes |
-| Built-in analytics | No | Yes | Yes | Yes | Yes |
-| [Self-hosted gateway](self-hosted-gateway-overview.md)<sup>3</sup> | No | Yes | No | No | Yes |
-| [Workspaces](workspaces-overview.md) | No | No | No | No | Yes |
-| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | Yes | Yes | Yes | Yes | Yes |
-| [External cache](./api-management-howto-cache-external.md) | Yes | Yes | Yes | Yes | Yes |
-| [Client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) | Yes | Yes | Yes | Yes | Yes |
-| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes |
-| [API credentials](credentials-overview.md) | Yes | Yes | Yes | Yes | Yes |
-| [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md) | No | Yes | Yes | Yes | Yes |
-| [Management over Git](api-management-configuration-repository-git.md) | No | Yes | Yes | Yes | Yes |
-| Direct management API | No | Yes | Yes | Yes | Yes |
-| Azure Monitor metrics | Yes | Yes | Yes | Yes | Yes |
-| Azure Monitor and Log Analytics request logs | No | Yes | Yes | Yes | Yes |
-| Application Insights request logs | Yes | Yes | Yes | Yes | Yes |
-| Static IP | No | Yes | Yes | Yes | Yes |
-| [Pass-through WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes |
-| [Pass-through GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
-| [Synthetic GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
-| [Pass-through gRPC APIs](grpc-api.md) (preview) | No | Yes | No | No | Yes |
+> * The Consumption tier isn't available in the US Government cloud or the Microsoft Azure operated by 21Vianet cloud.
+> * For information about APIs supported in the API Management gateway available in different tiers, see [API Management gateways overview](api-management-gateways-overview.md#backend-apis).
++
+| Feature | Consumption | Developer | Basic | Basic v2 | Standard | Standard v2 | Premium |
+| -- | -- | -- | -- | -- | -- | -- | -- |
+| Microsoft Entra integration<sup>1</sup> | No | Yes | No | Yes | Yes | Yes | Yes |
+| Virtual Network (VNet) injection support | No | Yes | No | No | No | No | Yes |
+| Private endpoint support for inbound connections | No | Yes | Yes | No | Yes | No | Yes |
+| Outbound virtual network integration support | No | No | No | No | No | Yes | No |
+| Multi-region deployment | No | No | No | No | No | No | Yes |
+| Availability zones | No | No | No | No | No | No | Yes |
+| Multiple custom domain names for gateway | No | Yes | No | No | No | No | Yes |
+| Developer portal<sup>2</sup> | No | Yes | Yes | Yes | Yes | Yes | Yes |
+| Built-in cache | No | Yes | Yes | Yes | Yes | Yes | Yes |
+| [External cache](./api-management-howto-cache-external.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| Autoscaling | No | No | Yes | No | Yes | No | Yes |
+| API analytics | No | Yes | Yes | Yes | Yes | Yes | Yes |
+| [Self-hosted gateway](self-hosted-gateway-overview.md)<sup>3</sup> | No | Yes | No | No | No | No | Yes |
+| [Workspaces](workspaces-overview.md) | No | No | No | No | No | No | Yes |
+| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| [Client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| [Credential manager](credentials-overview.md) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md) | No | Yes | Yes | No | Yes | No | Yes |
+| [Management over Git](api-management-configuration-repository-git.md) | No | Yes | Yes | No | Yes | No | Yes |
+| Direct management API | No | Yes | Yes | No | Yes | No | Yes |
+| Azure Monitor metrics | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| Azure Monitor and Log Analytics request logs | No | Yes | Yes | Yes | Yes | Yes | Yes |
+| Application Insights request logs | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| Static IP | No | Yes | Yes | No | Yes | No | Yes |
<sup>1</sup> Enables the use of Microsoft Entra ID (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/>
-<sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the dedicated, consumption, and self-hosted gateways. <br/>
+<sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the classic, v2, consumption, and self-hosted gateways. <br/>
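For reference, creating an instance in a chosen tier is a single CLI call. A minimal sketch, assuming the resource group already exists; the service name, resource group, and publisher details below are placeholders:

```bash
# Create a Premium-tier instance with one unit of capacity.
# Provisioning a classic dedicated tier can take 30 minutes or more.
az apim create \
    --name contoso-apim \
    --resource-group apim-rg \
    --publisher-name "Contoso" \
    --publisher-email "apis@contoso.example" \
    --sku-name Premium \
    --sku-capacity 1
```

Swap `--sku-name` for `Consumption`, `Developer`, `Basic`, or `Standard` to target another tier.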
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Previously updated : 11/6/2023 Last updated : 03/28/2024 # API gateway in Azure API Management + This article provides information about the roles and features of the API Management *gateway* component and compares the gateways you can deploy. Related information: * For an overview of API Management scenarios, components, and concepts, see [What is Azure API Management?](api-management-key-concepts.md)
-* For more information about the API Management service tiers and features, see [Feature-based comparison of the Azure API Management tiers](api-management-features.md).
-
+* For more information about the API Management service tiers and features, see:
+ * [API Management tiers](api-management-key-concepts.md#api-management-tiers)
+ * [Feature-based comparison of the Azure API Management tiers](api-management-features.md).
## Role of the gateway
API Management offers both managed and self-hosted gateways:
* **Managed** - The managed gateway is the default gateway component that is deployed in Azure for every API Management instance in every service tier. With the managed gateway, all API traffic flows through Azure regardless of where backends implementing the APIs are hosted. > [!NOTE]
- > Because of differences in the underlying service architecture, the Consumption tier gateway currently lacks some capabilities of the dedicated gateway. For details, see the section [Feature comparison: Managed versus self-hosted gateways](#feature-comparison-managed-versus-self-hosted-gateways).
+ > Because of differences in the underlying service architecture, the gateways provided in the different API Management service tiers have some differences in capabilities. For details, see the section [Feature comparison: Managed versus self-hosted gateways](#feature-comparison-managed-versus-self-hosted-gateways).
>
-* **Self-hosted** - The [self-hosted gateway](self-hosted-gateway-overview.md) is an optional, containerized version of the default managed gateway. It's useful for hybrid and multicloud scenarios where there's a requirement to run the gateways off of Azure in the same environments where API backends are hosted. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
+* **Self-hosted** - The [self-hosted gateway](self-hosted-gateway-overview.md) is an optional, containerized version of the default managed gateway that is available in select service tiers. It's useful for hybrid and multicloud scenarios where there's a requirement to run the gateways off of Azure in the same environments where API backends are hosted. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
* The self-hosted gateway is [packaged](self-hosted-gateway-overview.md#packaging) as a Linux-based Docker container and is commonly deployed to Kubernetes, including to [Azure Kubernetes Service](how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md) and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
API Management offers both managed and self-hosted gateways:
## Feature comparison: Managed versus self-hosted gateways
-The following table compares features available in the managed gateway versus the features in the self-hosted gateway. Differences are also shown between the managed gateway for dedicated service tiers (Developer, Basic, Standard, Premium) and for the Consumption tier.
+The following tables compare features available in the following API Management gateways:
+
+* **Classic** - the managed gateway available in the Developer, Basic, Standard, and Premium service tiers (formerly grouped as *dedicated* tiers)
+* **V2** - the managed gateway available in the Basic v2 and Standard v2 tiers
+* **Consumption** - the managed gateway available in the Consumption tier
+* **Self-hosted** - the optional self-hosted gateway available in select service tiers
> [!NOTE] > * Some features of managed and self-hosted gateways are supported only in certain [service tiers](api-management-features.md) or with certain [deployment environments](self-hosted-gateway-overview.md#packaging) for self-hosted gateways.
The following table compares features available in the managed gateway versus th
### Infrastructure
-| Feature support | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
-| | -- | -- | - |
-| [Custom domains](configure-custom-domain.md) | ✔️ | ✔️ | ✔️ |
-| [Built-in cache](api-management-howto-cache.md) | ✔️ | ❌ | ❌ |
-| [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ | ✔️ |
-| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ✔️<sup>1,2</sup> |
-| [Private endpoints](private-endpoint.md) | ✔️ | ❌ | ❌ |
-| [Availability zones](zone-redundancy.md) | Premium | ❌ | ✔️<sup>1</sup> |
-| [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ✔️<sup>1</sup> |
-| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ❌ | ✔️<sup>3</sup> |
-| [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | ✔️ | ✔️ | ❌ |
-| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ |
-| **HTTP/2** (Client-to-gateway) | ✔️<sup>4</sup> | ❌ | ✔️ |
-| **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ✔️ |
-| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ❌ | ❌ |
+| Feature support | Classic | V2 | Consumption | Self-hosted |
+| -- | -- | -- | -- | -- |
+| [Custom domains](configure-custom-domain.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Built-in cache](api-management-howto-cache.md) | ✔️ | ✔️ | ❌ | ❌ |
+| [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ❌ | ✔️<sup>1,2</sup> |
+| [Inbound private endpoints](private-endpoint.md) | Developer, Basic, Standard, Premium | ❌ | ❌ | ❌ |
+| [Outbound virtual network integration](integrate-vnet-outbound.md) | ❌ | Standard V2 | ❌ | ❌ |
+| [Availability zones](zone-redundancy.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> |
+| [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> |
+| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ✔️ | ❌ | ✔️<sup>3</sup> |
+| [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | Developer, Basic, Standard, Premium | ✔️ | ✔️ | ❌ |
+| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| **HTTP/2** (Client-to-gateway) | ✔️<sup>4</sup> | ✔️<sup>4</sup> | ❌ | ✔️ |
+| **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ❌ | ✔️ |
+| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ✔️ | ❌ | ❌ |
<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/> <sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the endpoint hostname.<br/>
The following table compares features available in the managed gateway versus th
### Backend APIs
-| API | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
-| | -- | -- | - |
-| [OpenAPI specification](import-api-from-oas.md) | ✔️ | ✔️ | ✔️ |
-| [WSDL specification](import-soap-api.md) | ✔️ | ✔️ | ✔️ |
-| WADL specification | ✔️ | ✔️ | ✔️ |
-| [Logic App](import-logic-app-as-api.md) | ✔️ | ✔️ | ✔️ |
-| [App Service](import-app-service-as-api.md) | ✔️ | ✔️ | ✔️ |
-| [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ |
-| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ |
-| [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ |
-| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ✔️ |
-| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
-| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
-| [Pass-through gRPC](grpc-api.md) | ❌ | ❌ | ✔️ |
-| [Azure OpenAI](azure-openai-api-from-specification.md) | ✔️ | ✔️ | ✔️ |
-| [Circuit breaker in backend](backends.md#circuit-breaker-preview) | ✔️ | ❌ | ✔️ |
-| [Load-balanced backend pool](backends.md#load-balanced-pool-preview) | ✔️ | ✔️ | ✔️ |
+| Feature support | Classic | V2 | Consumption | Self-hosted |
+| -- | -- | -- | -- | -- |
+| [OpenAPI specification](import-api-from-oas.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [WSDL specification](import-soap-api.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| WADL specification | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Logic App](import-logic-app-as-api.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [App Service](import-app-service-as-api.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ | ❌ |
+| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
+| [Pass-through WebSocket](websocket-api.md) | ✔️ | ✔️ | ❌ | ✔️ |
+| [Pass-through gRPC](grpc-api.md) (preview) | ❌ | ❌ | ❌ | ✔️ |
+| [OData](import-api-from-odata.md) (preview) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Azure OpenAI](azure-openai-api-from-specification.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Circuit breaker in backend](backends.md#circuit-breaker-preview) (preview) | ✔️ | ✔️ | ❌ | ✔️ |
+| [Load-balanced backend pool](backends.md#load-balanced-pool-preview) (preview) | ✔️ | ✔️ | ✔️ | ✔️ |
<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported.
The following table compares features available in the managed gateway versus th
Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions.
-| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> |
-| | -- | -- | - |
-| [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |
-| [GraphQL resolvers](api-management-policies.md#graphql-resolver-policies) and [GraphQL validation](api-management-policies.md#validation-policies)| ✔️ | ✔️ | ❌ |
-| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ❌ |
-| [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup>
+| Feature support | Classic | V2 | Consumption | Self-hosted<sup>1</sup> |
+| -- | -- | -- | -- | -- |
+| [Dapr integration](api-management-policies.md#integration-and-external-communication) | ❌ | ❌ | ❌ | ✔️ |
+| [GraphQL resolvers](api-management-policies.md#graphql-resolvers) and [GraphQL validation](api-management-policies.md#content-validation) | ✔️ | ✔️ | ✔️ | ❌ |
+| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ | ✔️ | ❌ |
+| [Quota and rate limit](api-management-policies.md#rate-limiting-and-quotas) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup> | ✔️<sup>4</sup> |
<sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/>
+<sup>2</sup> The quota by key policy isn't available in the v2 tiers.<br/>
<sup>3</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/> <sup>4</sup> [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
Managed and self-hosted gateways support all available [policies](api-management
For details about monitoring options, see [Observability in Azure API Management](observability.md).
-| Feature | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
-| | -- | -- | - |
-| [API analytics](howto-use-analytics.md) | ✔️ | ❌ | ❌ |
-| [Application Insights](api-management-howto-app-insights.md) | ✔️ | ✔️ | ✔️ |
-| [Logging through Event Hubs](api-management-howto-log-event-hubs.md) | ✔️ | ✔️ | ✔️ |
-| [Metrics in Azure Monitor](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) | ✔️ | ✔️ | ✔️ |
-| [OpenTelemetry Collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) | ❌ | ❌ | ✔️ |
-| [Request logs in Azure Monitor and Log Analytics](api-management-howto-use-azure-monitor.md#resource-logs) | ✔️ | ❌ | ❌<sup>1</sup> |
-| [Local metrics and logs](how-to-configure-local-metrics-logs.md) | ❌ | ❌ | ✔️ |
-| [Request tracing](api-management-howto-api-inspector.md) | ✔️ | ✔️ | ✔️ |
-
-<sup>1</sup> The self-hosted gateway currently doesn't send resource logs (diagnostic logs) to Azure Monitor. Optionally [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.
+| Feature support | Classic | V2 | Consumption | Self-hosted |
+| -- | -- | -- | -- | -- |
+| [API analytics](howto-use-analytics.md) | ✔️ | ✔️<sup>1</sup> | ❌ | ❌ |
+| [Application Insights](api-management-howto-app-insights.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Logging through Event Hubs](api-management-howto-log-event-hubs.md) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Metrics in Azure Monitor](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) | ✔️ | ✔️ | ✔️ | ✔️ |
+| [OpenTelemetry Collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) | ❌ | ❌ | ❌ | ✔️ |
+| [Request logs in Azure Monitor and Log Analytics](api-management-howto-use-azure-monitor.md#resource-logs) | ✔️ | ✔️ | ❌ | ❌<sup>2</sup> |
+| [Local metrics and logs](how-to-configure-local-metrics-logs.md) | ❌ | ❌ | ❌ | ✔️ |
+| [Request tracing](api-management-howto-api-inspector.md) | ✔️ | ❌<sup>3</sup> | ✔️ | ✔️ |
+
+<sup>1</sup> The v2 tiers support Azure Monitor-based analytics.<br/>
+<sup>2</sup> The self-hosted gateway currently doesn't send resource logs (diagnostic logs) to Azure Monitor. Optionally [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.<br/>
+<sup>3</sup> Tracing is currently unavailable in the v2 tiers.
### Authentication and authorization Managed and self-hosted gateways support all available [API authentication and authorization options](authentication-authorization-overview.md) with the following exceptions.
-| Feature | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
-| | -- | -- | - |
-| [Credential manager](credentials-overview.md) | ✔️ | ✔️ | ❌ |
+| Feature support | Classic | V2 | Consumption | Self-hosted |
+| -- | -- | -- | -- | -- |
+| [Credential manager](credentials-overview.md) | ✔️ | ✔️ | ✔️ | ❌ |
## Gateway throughput and scaling
For estimated maximum gateway throughput in the API Management service tiers, se
> [!IMPORTANT] > Throughput figures are presented for information only and must not be relied upon for capacity and budget planning. See [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/) for details.
-* **Dedicated service tiers**
+* **Classic tiers**
* Scale gateway capacity by adding and removing scale [units](upgrade-and-scale.md), or upgrade the service tier. (Scaling not available in the Developer tier.)
- * In the Standard and Premium tiers, optionally configure [Azure Monitor autoscale](api-management-howto-autoscale.md).
+ * In the Basic, Standard, and Premium tiers, optionally configure [Azure Monitor autoscale](api-management-howto-autoscale.md).
* In the Premium tier, optionally add and distribute gateway capacity across multiple [regions](api-management-howto-deploy-multi-region.md).
+* **v2 tiers**
+ * Scale gateway capacity by adding and removing scale [units](upgrade-and-scale.md), or upgrade the service tier.
+ * **Consumption tier** * API Management instances in the Consumption tier scale automatically based on the traffic.
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-publish-versions.md
# Tutorial: Publish multiple versions of your API + There are times when it's impractical to have all callers to your API use exactly the same version. When callers want to upgrade to a later version, they want an approach that's easy to understand. As shown in this tutorial, it is possible to provide multiple *versions* in Azure API Management. For background, see [Versions](api-management-versions.md) & [Revisions](api-management-revisions.md).
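As a sketch of the CLI surface for the same task, the following creates a path-segment version set that new API versions can join. The instance name, resource group, and IDs are placeholders, and the exact parameters may vary by CLI version (check `az apim api versionset create --help`):

```bash
# Create a version set that versions APIs by URL path segment (e.g., /v1, /v2).
az apim api versionset create \
    --resource-group apim-rg \
    --service-name contoso-apim \
    --version-set-id demo-conference-versions \
    --display-name "Demo Conference API" \
    --versioning-scheme Segment
```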
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-revise-api.md
# Tutorial: Use revisions to make non-breaking API changes safely++ When your API is ready to go and is used by developers, you eventually need to make changes to that API and at the same time not disrupt callers of your API. It's also useful to let developers know about the changes you made. In Azure API Management, use *revisions* to make non-breaking API changes so you can model and test changes safely. When ready, you can make a revision current and replace your current API.
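For orientation, the CLI exposes the same revision workflow. A sketch with placeholder names, assuming an existing API with ID `demo-conference-api`; a new revision stays offline until you release it:

```bash
# Add revision 2 to an existing API; callers keep hitting the current revision.
az apim api revision create \
    --resource-group apim-rg \
    --service-name contoso-apim \
    --api-id demo-conference-api \
    --api-revision 2 \
    --api-revision-description "Testing a non-breaking change"
```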
api-management Api Management Howto Aad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md
# How to authorize developer accounts by using Azure Active Directory B2C in Azure API Management Azure Active Directory B2C is a cloud identity management solution for consumer-facing web and mobile applications. You can use it to manage access to your API Management developer portal.
For an overview of options to secure the developer portal, see [Secure access to
> * This article has been updated with steps to configure an Azure AD B2C app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)). > * If you previously configured an Azure AD B2C app for user sign-in using the Azure AD Authentication Library (ADAL), we recommend that you [migrate to MSAL](#migrate-to-msal). ## Prerequisites
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
Although a new account will automatically be created when a new user signs in wi
[Publish a product]: api-management-howto-add-products.md#publish-product [Get started with Azure API Management]: get-started-create-service-instance.md [API Management policy reference]: ./api-management-policies.md
-[Caching policies]: ./api-management-policies.md#caching-policies
+[Caching policies]: ./api-management-policies.md#caching
[Create an API Management service instance]: get-started-create-service-instance.md [https://oauth.net/2/]: https://oauth.net/2/
api-management Api Management Howto Add Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md
ms.devlang: azurecli
# Tutorial: Create and publish a product + In Azure API Management, a [*product*](api-management-terminology.md#term-definitions) contains one or more APIs, a usage quota, and the terms of use. After a product is published, developers can [subscribe](api-management-subscriptions.md) to the product and begin to use the product's APIs. In this tutorial, you learn how to:
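For a concrete sense of the steps, here is a CLI sketch that creates and publishes a product and attaches an API to it. All names are placeholders; the portal flow described in this tutorial achieves the same result:

```bash
# Create a product that requires a subscription and admin approval, then publish it.
az apim product create \
    --resource-group apim-rg \
    --service-name contoso-apim \
    --product-id starter \
    --product-name "Starter" \
    --subscription-required true \
    --approval-required true \
    --subscriptions-limit 5 \
    --state published

# Add an existing API (assumed to be imported already) to the product.
az apim product api add \
    --resource-group apim-rg \
    --service-name contoso-apim \
    --product-id starter \
    --api-id demo-conference-api
```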
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md
Previously updated : 08/08/2022 Last updated : 03/26/2024 # Tutorial: Debug your APIs using request tracing + This tutorial describes how to inspect (trace) request processing in Azure API Management. Tracing helps you debug and troubleshoot your API. In this tutorial, you learn how to:
In this tutorial, you learn how to:
:::image type="content" source="media/api-management-howto-api-inspector/api-inspector-002.png" alt-text="Screenshot showing the API inspector." lightbox="media/api-management-howto-api-inspector/api-inspector-002.png"::: + ## Prerequisites + Learn the [Azure API Management terminology](api-management-terminology.md).
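For reference, the classic header-based way to request a trace on a single call is to send `Ocp-Apim-Trace: true` along with a subscription key that allows tracing; the gateway returns an `Ocp-Apim-Trace-Location` response header pointing at the trace data. A sketch with placeholder host, path, and key (newer portal-based tracing may use a different mechanism):

```bash
# Request a trace for one call and print the header that links to the trace.
curl -si "https://contoso-apim.azure-api.net/demo/operation" \
    -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
    -H "Ocp-Apim-Trace: true" \
  | grep -i "ocp-apim-trace-location"
```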
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
# How to integrate Azure API Management with Azure Application Insights + You can easily integrate Azure Application Insights with Azure API Management. Azure Application Insights is an extensible service for web developers building and managing apps on multiple platforms. In this guide, you will: * Walk through Application Insights integration into API Management. * Learn strategies for reducing performance impact on your API Management service instance.
api-management Api Management Howto Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-autoscale.md
-# Automatically scale an Azure API Management instance
+# Automatically scale an Azure API Management instance
-An Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale) and is currently supported only in the **Basic**, **Standard**, and **Premium** tiers of the Azure API Management service.
+
+An Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale).
The article walks through the process of configuring autoscale and suggests optimal configuration of autoscale rules.
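As a sketch of the end state, the equivalent configuration from the CLI creates an autoscale profile on the instance and a scale-out rule keyed to the service's **Capacity** metric. Resource names are placeholders:

```bash
# Create an autoscale profile for the API Management instance (1-5 units).
az monitor autoscale create \
    --resource-group apim-rg \
    --resource contoso-apim \
    --resource-type Microsoft.ApiManagement/service \
    --name apim-autoscale \
    --min-count 1 --max-count 5 --count 1

# Scale out by one unit when average capacity exceeds 70% over 30 minutes.
az monitor autoscale rule create \
    --resource-group apim-rg \
    --autoscale-name apim-autoscale \
    --condition "Capacity > 70 avg 30m" \
    --scale out 1
```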
To follow the steps from this article, you must:
+ Understand the concept of [capacity](api-management-capacity.md) of an API Management instance. + Understand [manual scaling](upgrade-and-scale.md) of an API Management instance, including cost consequences. - ## Azure API Management autoscale limitations Certain limitations and consequences of scaling decisions need to be considered before configuring autoscale behavior.
api-management Api Management Howto Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ca-certificates.md
# How to add a custom CA certificate in Azure API Management + Azure API Management allows installing CA certificates on the machine inside the trusted root and intermediate certificate stores. This functionality should be used if your services require a custom CA certificate. The article shows how to manage CA certificates of an Azure API Management service instance in the Azure portal. For example, if you use self-signed client certificates, you can upload custom trusted root certificates to API Management.
CA certificates uploaded to API Management can only be used for certificate vali
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## <a name="step1"> </a>Upload a CA certificate
api-management Api Management Howto Cache External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache-external.md
# Use an external Redis-compatible cache in Azure API Management + In addition to utilizing the built-in cache, Azure API Management allows for caching responses in an external Redis-compatible cache, such as Azure Cache for Redis. Using an external cache allows you to overcome a few limitations of the built-in cache:
Using an external cache allows you to overcome a few limitations of the built-in
* Use caching with the Consumption tier of API Management * Enable caching in the [API Management self-hosted gateway](self-hosted-gateway-overview.md)
-For more detailed information about caching, see [API Management caching policies](api-management-caching-policies.md) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).
+For more detailed information about caching, see [API Management caching policies](api-management-policies.md#caching) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).
![Bring your own cache to APIM](media/api-management-howto-cache-external/overview.png)
The **Use from** setting in the configuration specifies the location of your API
## Use the external cache
-After adding a Redis-compatible cache, configure [caching policies](api-management-caching-policies.md) to enable response caching, or caching of values by key, in the external cache.
+After adding a Redis-compatible cache, configure [caching policies](api-management-policies.md#caching) to enable response caching, or caching of values by key, in the external cache.
For a detailed example, see [Add caching to improve performance in Azure API Management](api-management-howto-cache.md).
For a detailed example, see [Add caching to improve performance in Azure API Man
* To cache items by key using policy expressions, see [Custom caching in Azure API Management](api-management-sample-cache-by-key.md). [API Management policy reference]: ./api-management-policies.md
-[Caching policies]: ./api-management-caching-policies.md
+[Caching policies]: ./api-management-policies.md#caching
api-management Api Management Howto Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache.md
ms.assetid: 740f6a27-8323-474d-ade2-828ae0c75e7a Previously updated : 11/13/2020 Last updated : 03/20/2024 # Add caching to improve performance in Azure API Management + APIs and operations in API Management can be configured with response caching. Response caching can significantly reduce latency for API callers and backend load for API providers. > [!IMPORTANT] > Built-in cache is volatile and is shared by all units in the same region in the same API Management service. Regardless of the cache type being used (internal or external), if the cache-related operations fail to connect to the cache due to the volatility of the cache or any other reason, the API call that uses the cache related operation doesn't raise an error, and the cache operation completes successfully. In the case of a read operation, a null value is returned to the calling policy expression. Your policy code should be designed to ensure that there's a "fallback" mechanism to retrieve data not found in the cache.
-For more detailed information about caching, see [API Management caching policies](api-management-caching-policies.md) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).
+For more detailed information about caching, see [API Management caching policies](api-management-policies.md#caching) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).
![cache policies](media/api-management-howto-cache/cache-policies.png)
What you'll learn:
> * Add response caching for your API > * Verify caching in action
-## Availability
> [!NOTE]
-> Internal cache is not available in the **Consumption** tier of Azure API Management. You can [use an external Azure Cache for Redis](api-management-howto-cache-external.md) instead.
+> Internal cache is not available in the **Consumption** tier of Azure API Management. You can [use an external Azure Cache for Redis](api-management-howto-cache-external.md) instead. You can also configure an external cache in other API Management service tiers.
>
-> For feature availability in the v2 tiers (preview), see the [v2 tiers overview](v2-service-tiers-overview.md).
+ ## Prerequisites
With caching policies shown in this example, the first request to the **GetSpeak
**Duration** specifies the expiration interval of the cached responses. In this example, the interval is **20** seconds. > [!TIP]
-> If you are using an external cache, as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md), you may want to specify the `caching-type` attribute of the caching policies. See [API Management caching policies](api-management-caching-policies.md) for more details.
+> If you are using an external cache, as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md), you may want to specify the `caching-type` attribute of the caching policies. See [API Management caching policies](api-management-policies.md#caching) for more details.
## <a name="test-operation"> </a>Call an operation and test the caching To see the caching in action, call the operation from the developer portal.
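Outside the developer portal, two timed calls from the command line give a rough check: while the cached entry is fresh, the second response should come back noticeably faster. A sketch with a placeholder host, path, and key:

```bash
# Call the same operation twice; the second response should be served from cache.
for i in 1 2; do
  curl -so /dev/null \
    -w "request $i: %{time_total}s\n" \
    -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
    "https://contoso-apim.azure-api.net/conference/speakers"
done
```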
To see the caching in action, call the operation from the developer portal.
[Get started with Azure API Management]: get-started-create-service-instance.md [API Management policy reference]: ./api-management-policies.md
-[Caching policies]: ./api-management-caching-policies.md
+[Caching policies]: ./api-management-policies.md#caching
[Create an API Management service instance]: get-started-create-service-instance.md
api-management Api Management Howto Configure Custom Domain Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-custom-domain-gateway.md
# Configure a custom domain name for a self-hosted gateway
-When you provision a [self-hosted Azure API Management gateway](self-hosted-gateway-overview.md), it is not assigned a host name and has to be referenced by its IP address. This article shows how to map an existing custom DNS name (also referred to as hostname) to a self-hosted gateway.
- [!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)]
+When you provision a [self-hosted Azure API Management gateway](self-hosted-gateway-overview.md), it is not assigned a host name and has to be referenced by its IP address. This article shows how to map an existing custom DNS name (also referred to as hostname) to a self-hosted gateway.
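The DNS mapping itself is ordinary record management. A sketch using Azure DNS, assuming the zone is hosted there and the self-hosted gateway's IP address is already known (all names and the address are placeholders):

```bash
# Point gateway.contoso.example at the self-hosted gateway's IP address.
az network dns record-set a add-record \
    --resource-group dns-rg \
    --zone-name contoso.example \
    --record-set-name gateway \
    --ipv4-address 203.0.113.10
```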
+ ## Prerequisites To perform the steps described in this article, you must have:
api-management Api Management Howto Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-notifications.md
# How to configure notifications and notification templates in Azure API Management + API Management provides the ability to configure email notifications for specific events, and to configure the email templates that are used to communicate with the administrators and developers of an API Management instance. This article shows how to configure notifications for the available events, and provides an overview of configuring the email templates used for these events. ## Prerequisites If you don't have an API Management service instance, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md). - [!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)] ## <a name="publisher-notifications"> </a>Configure notifications in the portal
api-management Api Management Howto Create Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-groups.md
# How to create and use groups to manage developer accounts in Azure API Management + In API Management, groups are used to manage the visibility of products to developers. Products are first made visible to groups, and then developers in those groups can view and subscribe to the products that are associated with the groups. API Management has the following immutable system groups:
This guide shows how administrators of an API Management instance can add new gr
In addition to creating and managing groups in the Azure portal, you can create and manage your groups using the API Management REST API [Group](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-group-entity) entity. - ## Prerequisites Complete tasks in this article: [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management Api Management Howto Create Or Invite Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-or-invite-developers.md
Previously updated : 02/13/2018 Last updated : 03/20/2024 # How to manage user accounts in Azure API Management
-In API Management, developers are the users of the APIs that you expose using API Management. This guide shows how to create and invite developers to use the APIs and products that you make available to them with your API Management instance. For information on managing user accounts programmatically, see the [User entity](/rest/api/apimanagement/current-ga/user) documentation in the [API Management REST](/rest/api/apimanagement/) reference.
+In API Management, developers are the users of the APIs that you expose using API Management. This guide shows how to create and invite developers to use the APIs and products that you make available to them with your API Management instance. For information on managing user accounts programmatically, see the [User entity](/rest/api/apimanagement/current-ga/user) documentation in the [API Management REST](/rest/api/apimanagement/) reference.
## Prerequisites
api-management Api Management Howto Create Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-subscriptions.md
Previously updated : 08/03/2022 Last updated : 03/26/2024 # Create subscriptions in Azure API Management + When you publish APIs through Azure API Management, it's easy and common to secure access to those APIs by using subscription keys. Client applications that need to consume the published APIs must include a valid subscription key in HTTP requests when they make calls to those APIs. To get a subscription key for accessing APIs, a subscription is required. For more information about subscriptions, see [Subscriptions in Azure API Management](api-management-subscriptions.md). This article walks through the steps for creating subscriptions in the Azure portal.
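Once a subscription exists, its key travels with every request, either in the default `Ocp-Apim-Subscription-Key` header or in the `subscription-key` query parameter. A sketch with placeholder values:

```bash
# Pass the key in the default request header...
curl -s "https://contoso-apim.azure-api.net/demo/operation" \
    -H "Ocp-Apim-Subscription-Key: <subscription-key>"

# ...or as a query parameter.
curl -s "https://contoso-apim.azure-api.net/demo/operation?subscription-key=<subscription-key>"
```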
To take the steps in this article, the prerequisites are as follows:
1. Optionally, select **Allow tracing** to enable tracing for debugging and troubleshooting APIs. [Learn more](api-management-howto-api-inspector.md) [!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)]+
+ [!INCLUDE [api-management-availability-tracing-v2-tiers](../../includes/api-management-availability-tracing-v2-tiers.md)]
+ 1. Select a **Scope** of the subscription from the dropdown list. [Learn more](api-management-subscriptions.md#scope-of-subscriptions) 1. Optionally, choose if the subscription should be associated with a **User** and whether to send a notification for use with the developer portal. 1. Select **Create**.
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
# Deploy an Azure API Management instance to multiple Azure regions + Azure API Management supports multi-region deployment, which enables API publishers to add regional API gateways to an existing API Management instance in one or more supported Azure regions. Multi-region deployment helps reduce request latency perceived by geographically distributed API consumers and improves service availability if one region goes offline. When adding a region, you configure:
When adding a region, you configure:
>[!IMPORTANT] > The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo. For all other regions, customer data is stored in Geo. - ## About multi-region deployment [!INCLUDE [api-management-multi-region-concepts](../../includes/api-management-multi-region-concepts.md)]
api-management Api Management Howto Developer Portal Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-developer-portal-customize.md
# Tutorial: Access and customize the developer portal
-In this tutorial, you'll get started with customizing the API Management *developer portal*. The developer portal is an automatically generated, fully customizable website with the documentation of your APIs. It's where API consumers can discover your APIs, learn how to use them, and request access.
+The *developer portal* is an automatically generated, fully customizable website with the documentation of your APIs. It is where API consumers can discover your APIs, learn how to use them, and request access.
In this tutorial, you learn how to:
For more information about developer portal features and options, see [Azure API
- Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md). - [Import and publish](import-and-publish.md) an API. + ## Access the portal as an administrator
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
# How to implement disaster recovery using service backup and restore in Azure API Management + By publishing and managing your APIs via Azure API Management, you're taking advantage of fault tolerance and infrastructure capabilities that you'd otherwise design, implement, and manage manually. The Azure platform mitigates a large fraction of potential failures at a fraction of the cost. To recover from availability problems that affect your API Management service, be ready to reconstitute your service in another region at any time. Depending on your recovery time objective, you might want to keep a standby service in one or more regions. You might also try to maintain their configuration and content in sync with the active service according to your recovery point objective. The API management backup and restore capabilities provide the necessary building blocks for implementing disaster recovery strategy.
This article shows how to automate backup and restore operations of your API Man
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] - ## Prerequisites * An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md).
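For a sense of what the automated step looks like, the CLI wraps the backup operation. A sketch assuming the storage account, container, and access key already exist (all values are placeholders; confirm parameter names with `az apim backup --help`):

```bash
# Back up the instance to a blob container; this is a long-running operation.
az apim backup \
    --resource-group apim-rg \
    --name contoso-apim \
    --backup-name contoso-apim-backup \
    --storage-account-name contosostorage \
    --storage-account-container apim-backups \
    --storage-account-key "<storage-account-key>"
```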
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
# Integrate API Management in an internal virtual network with Application Gateway + You can configure Azure API Management in a [virtual network in internal mode](api-management-using-with-internal-vnet.md), which makes it accessible only within the virtual network. [Azure Application Gateway](../application-gateway/overview.md) is a platform as a service (PaaS) that acts as a Layer-7 load balancer. It acts as a reverse-proxy service and provides among its offerings Azure Web Application Firewall (WAF). By combining API Management provisioned in an internal virtual network with the Application Gateway front end, you can:
For architectural guidance, see:
> [!NOTE] > This article has been updated to use the [Application Gateway WAF_v2 SKU](../application-gateway/application-gateway-autoscaling-zone-redundant.md). - ## Prerequisites [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
# IP addresses of Azure API Management + In this article we describe how to retrieve the IP addresses of Azure API Management service. IP addresses can be public or private if the service is in a virtual network. You can use IP addresses to create firewall rules, filter the incoming traffic to the backend services, or restrict the outbound traffic. ## IP addresses of API Management service
API Management uses a public IP address for a connection outside the VNet or a p
* When API management is deployed in an external or internal virtual network and API management connects to private (intranet-facing) backends, internal IP addresses (dynamic IP, or DIP addresses) from the subnet are used for the runtime API traffic. When a request is sent from API Management to a private backend, a private IP address will be visible as the origin of the request.
- Therefore, if IP restriction lists secure resources within the VNet or a peered VNet, it is recommended to use the whole API Management [subnet range](virtual-network-concepts.md#subnet-size) with an IP rule - and (in internal mode) not just the private IP address associated with the API Management resource.
+ Therefore, if IP restriction lists secure resources within the VNet or a peered VNet, it is recommended to use the whole API Management [subnet range](virtual-network-injection-resources.md#subnet-size) with an IP rule - and (in internal mode) not just the private IP address associated with the API Management resource.
* When a request is sent from API Management to a public (internet-facing) backend, a public IP address will always be visible as the origin of the request. ## IP addresses of Consumption, Basic v2, and Standard v2 tier API Management service
-If your API Management instance is created in a service tier that runs on a shared infrastructure, it doesn't have a dedicated IP address. Currently, instances in the following service tiers run on a shared infrastructure and without a deterministic IP address: Consumption, Basic v2 (preview), Standard v2 (preview).
+If your API Management instance is created in a service tier that runs on a shared infrastructure, it doesn't have a dedicated IP address. Currently, instances in the following service tiers run on a shared infrastructure and without a deterministic IP address: Consumption, Basic v2, Standard v2.
If you need to add the outbound IP addresses used by your Consumption, Basic v2, or Standard v2 tier instance to an allowlist, you can add the instance's data center (Azure region) to an allowlist. You can [download a JSON file that lists IP addresses for all Azure data centers](https://www.microsoft.com/download/details.aspx?id=56519). Then find the JSON fragment that applies to the region that your instance runs in.
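For instances in tiers that do expose deterministic addresses, a quick way to read them is to query the service resource directly. A sketch assuming the CLI's default property names for the address lists:

```bash
# Show the instance's public and (if VNet-injected) private IP addresses.
az apim show \
    --resource-group apim-rg \
    --name contoso-apim \
    --query "{public: publicIpAddresses, private: privateIpAddresses}" \
    --output json
```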
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
# How to log events to Azure Event Hubs in Azure API Management + This article describes how to log API Management events using Azure Event Hubs. Azure Event Hubs is a highly scalable data ingress service that can ingest millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Event Hubs acts as the "front door" for an event pipeline, and once data is collected into an event hub, it can be transformed and stored using any real-time analytics provider or batching/storage adapters. Event Hubs decouples the production of a stream of events from the consumption of those events, so that event consumers can access the events on their own schedule.
api-management Api Management Howto Manage Protocols Ciphers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-manage-protocols-ciphers.md
# Manage protocols and ciphers in Azure API Management + Azure API Management supports multiple versions of Transport Layer Security (TLS) protocol to secure API traffic for: * Client side * Backend side
By default, API Management enables TLS 1.2 for client and backend connectivity a
:::image type="content" source="media/api-management-howto-manage-protocols-ciphers/api-management-protocols-ciphers.png" alt-text="Screenshot of managing protocols and ciphers in the Azure portal."::: - > [!NOTE] > * If you're using the self-hosted gateway, see [self-hosted gateway security](self-hosted-gateway-overview.md#security) to manage TLS protocols and cipher suites.
-> * Currently, API Management doesn't support TLS 1.3.
-> * The Consumption tier doesn't support changes to the default cipher configuration.
+> * The following tiers don't support changes to the default cipher configuration: **Consumption**, **Basic v2**, **Standard v2**.
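After changing protocol settings, a handshake probe confirms what the gateway actually accepts. A sketch with a placeholder hostname:

```bash
# Attempt a TLS 1.2 handshake (expected to succeed by default)...
openssl s_client -connect contoso-apim.azure-api.net:443 -tls1_2 </dev/null

# ...then a TLS 1.1 handshake (expected to fail unless explicitly enabled).
openssl s_client -connect contoso-apim.azure-api.net:443 -tls1_1 </dev/null
```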
## Prerequisites
api-management Api Management Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-migrate.md
# How to move Azure API Management across regions + This article describes how to move an API Management instance to a different Azure region. You might move your instance to another region for many reasons. For example: * Locate your instance closer to your API consumers
To move API Management instances from one Azure region to another, use the servi
> [!NOTE] > API Management also supports [multi-region deployment](api-management-howto-deploy-multi-region.md), which distributes a single Azure API management service across multiple Azure regions. Multi-region deployment helps reduce request latency perceived by geographically distributed API consumers and improves service availability if one region goes offline. - ## Considerations * Choose the same API Management pricing tier in the source and target regions.
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
# How to secure APIs using client certificate authentication in API Management + API Management provides the capability to secure access to APIs (that is, client to API Management) using client certificates and mutual TLS authentication. You can validate certificates presented by the connecting client and check certificate properties against desired values using policy expressions. For information about securing access to the backend service of an API using client certificates (that is, API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md).
Using key vault certificates is recommended because it helps improve API Managem
### Developer, Basic, Standard, or Premium tier
-To receive and verify client certificates over HTTP/2 in the Developer, Basic, Standard, or Premium tiers, you must enable the **Negotiate client certificate** setting on the **Custom domain** blade as shown below.
+To receive and verify client certificates over HTTP/2 in the Developer, Basic, Basic v2, Standard, Standard v2, or Premium tiers, you must enable the **Negotiate client certificate** setting on the **Custom domain** blade as shown below.
![Negotiate client certificate](./media/api-management-howto-mutual-certificates-for-clients/negotiate-client-certificate.png)
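From the caller's side, presenting a certificate is a transport-level concern. With curl it looks like the following sketch, assuming a PEM-encoded certificate and key pair (all paths and values are placeholders):

```bash
# Call an API while presenting a client certificate for mutual TLS.
curl -s "https://contoso-apim.azure-api.net/demo/operation" \
    --cert ./client-cert.pem \
    --key ./client-key.pem \
    -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```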
You can also create policy expressions with the [`context` variable](api-managem
> [!IMPORTANT] > * Starting May 2021, the `context.Request.Certificate` property only requests the certificate when the API Management instance's [`hostnameConfiguration`](/rest/api/apimanagement/current-ga/api-management-service/create-or-update#hostnameconfiguration) sets the `negotiateClientCertificate` property to True. By default, `negotiateClientCertificate` is set to False. > * If TLS renegotiation is disabled in your client, you may see TLS errors when requesting the certificate using the `context.Request.Certificate` property. If this occurs, enable TLS renegotiation settings in the client.
+> * Certificate renegotiation is not supported in the API Management v2 tiers.
### Checking the issuer and subject
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
# Secure backend services using client certificate authentication in Azure API Management ++ API Management allows you to secure access to the backend service of an API using client certificates and mutual TLS authentication. This guide shows how to manage certificates in an Azure API Management service instance using the Azure portal. It also explains how to configure an API to use a certificate to access a backend service. You can also manage API Management certificates using the [API Management REST API](/rest/api/apimanagement/current-ga/certificate).
To delete a certificate, select it and then select **Delete** from the context m
[Publish a product]: api-management-howto-add-products.md#publish-product [Get started with Azure API Management]: get-started-create-service-instance.md [API Management policy reference]: ./api-management-policies.md
-[Caching policies]: ./api-management-policies.md#caching-policies
+[Caching policies]: ./api-management-policies.md#caching
[Create an API Management service instance]: get-started-create-service-instance.md
-[Azure API Management REST API Certificate entity]: ./api-management-caching-policies.md
[WebApp-GraphAPI-DotNet]: https://github.com/AzureADSamples/WebApp-GraphAPI-DotNet [to configure certificate authentication in Azure WebSites refer to this article]: ../app-service/app-service-web-configure-tls-mutual-auth.md
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
# How to authorize test console of developer portal by configuring OAuth 2.0 user authorization + Many APIs support [OAuth 2.0](https://oauth.net/2/) to secure the API and ensure that only valid users have access, and they can only access resources to which they're entitled. To use Azure API Management's interactive developer console with such APIs, the service allows you to configure an external provider for OAuth 2.0 user authorization. Configuring OAuth 2.0 user authorization in the test console of the developer portal provides developers with a convenient way to acquire an OAuth 2.0 access token. From the test console, the token is then passed to the backend with the API call. Token validation must be configured separately - either using a [JWT validation policy](validate-jwt-policy.md), or in the backend service.
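Behind the test-console convenience, token acquisition is a plain OAuth 2.0 request. A sketch of the client credentials grant (the scriptable cousin of the interactive flow the test console uses) against Microsoft Entra ID, with placeholder tenant, client, and scope values; `jq` extracts the token:

```bash
# Acquire an access token with the client credentials grant...
TOKEN=$(curl -s -X POST \
  "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=api://<app-id-uri>/.default" | jq -r '.access_token')

# ...then call the API through API Management with the bearer token.
curl -s "https://contoso-apim.azure-api.net/demo/operation" \
  -H "Authorization: Bearer $TOKEN"
```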
This article shows you how to configure your API Management service instance to
If you haven't yet created an API Management service instance, see [Create an API Management service instance][Create an API Management service instance]. ## Scenario overview
For more information about using OAuth 2.0 and API Management, see [Protect a we
[Publish a product]: api-management-howto-add-products.md#publish-product [Get started with Azure API Management]: get-started-create-service-instance.md [API Management policy reference]: ./api-management-policies.md
-[Caching policies]: ./api-management-policies.md#caching-policies
+[Caching policies]: ./api-management-policies.md#caching
[Create an API Management service instance]: get-started-create-service-instance.md [https://oauth.net/2/]: https://oauth.net/2/
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md
# Policies in Azure API Management + In Azure API Management, API publishers can change API behavior through configuration using *policies*. Policies are a collection of statements that are run sequentially on the request or response of an API. API Management provides more than 50 policies out of the box that you can configure to address common API scenarios such as authentication, rate limiting, caching, and transformation of requests or responses. For a complete list, see [API Management policy reference](api-management-policies.md). Popular policies include:
Unless the policy specifies otherwise, [policy expressions](api-management-polic
Each expression has access to the implicitly provided `context` variable and an allowed subset of .NET Framework types.
-Policy expressions provide a sophisticated means to control traffic and modify API behavior without requiring you to write specialized code or modify backend services. Some policies are based on policy expressions, such as [Control flow][Control flow] and [Set variable][Set variable]. For more information, see [Advanced policies][Advanced policies].
+Policy expressions provide a sophisticated means to control traffic and modify API behavior without requiring you to write specialized code or modify backend services. Some policies are based on policy expressions, such as [Control flow][Control flow] and [Set variable][Set variable].
## Scopes
The following example uses [policy expressions][Policy expressions] and the [`se
[API]: api-management-howto-add-products.md
[Operation]: ./mock-api-responses.md
-[Advanced policies]: ./api-management-advanced-policies.md
+[Policy control and flow policies]: ./api-management-policies.md#policy-control-and-flow
[Control flow]: choose-policy.md
[Set variable]: set-variable-policy.md
[Policy expressions]: ./api-management-policy-expressions.md
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-properties.md
# Use named values in Azure API Management policies + [API Management policies](api-management-howto-policies.md) are a powerful capability of the system that allow the publisher to change the behavior of the API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. Policy statements can be constructed using literal text values, policy expressions, and named values. *Named values* are a global collection of name/value pairs in each API Management instance. There is no imposed limit on the number of items in the collection. Named values can be used to manage constant string values and secrets across all API configurations and policies.
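To sketch how a named value is referenced, the `{{property-name}}` syntax can be used anywhere a literal is allowed; `backend-api-key` here is a hypothetical named value, not one defined in the article:

```xml
<!-- Inbound section: inject a secret stored as the hypothetical named value {{backend-api-key}}. -->
<set-header name="x-api-key" exists-action="override">
    <value>{{backend-api-key}}</value>
</set-header>
```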
api-management Api Management Howto Protect Backend With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-protect-backend-with-aad.md
# Protect an API in Azure API Management using OAuth 2.0 authorization with Microsoft Entra ID + In this article, you'll learn the high-level steps to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API by using the [OAuth 2.0 protocol with Microsoft Entra ID](../active-directory/develop/active-directory-v2-protocols.md). For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
api-management Api Management Howto Provision Self Hosted Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-provision-self-hosted-gateway.md
# Provision a self-hosted gateway in Azure API Management
-Provisioning a gateway resource in your Azure API Management instance is a prerequisite for deploying a self-hosted gateway. This article walks through the steps to provision a gateway resource in API Management.
- [!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)]
+Provisioning a gateway resource in your Azure API Management instance is a prerequisite for deploying a self-hosted gateway. This article walks through the steps to provision a gateway resource in API Management.
+ ## Prerequisites Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management Api Management Howto Setup Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-setup-delegation.md
# How to delegate user registration and product subscription
-Delegation enables your website to own the user data and perform custom validation. With delegation, you can handle developer sign-in/sign-up (and related account management operations) and product subscription using your existing website, instead of the developer portal's built-in functionality.
- [!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]
+Delegation enables your website to own the user data and perform custom validation. With delegation, you can handle developer sign-in/sign-up (and related account management operations) and product subscription using your existing website, instead of the developer portal's built-in functionality.
+ ## Delegating developer sign-in and sign-up To delegate developer sign-in and sign-up and developer account management options to your existing website, create a special delegation endpoint on your site. This special delegation acts as the entry-point for any sign-in/sign-up and related requests initiated from the API Management developer portal.
api-management Api Management Howto Use Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-azure-monitor.md
# Tutorial: Monitor published APIs + With Azure Monitor, you can visualize, query, route, archive, and take actions on the metrics or logs coming from your Azure API Management service. In this tutorial, you learn how to:
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
# Use managed identities in Azure API Management + This article shows you how to create a managed identity for an Azure API Management instance and how to use it to access other resources. A managed identity generated by Microsoft Entra ID allows your API Management instance to easily and securely access other Microsoft Entra protected resources, such as Azure Key Vault. Azure manages this identity, so you don't have to provision or rotate any secrets. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). You can grant two types of identities to an API Management instance:
api-management Api Management In Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-in-workspace.md
Last updated 03/10/2023
# Manage APIs and other resources in your API Management workspace
-This article is an introduction to managing APIs, products, subscriptions, and other API Management resources in a *workspace*. A workspace is a place where a development team can own, manage, update, and productize their own APIs, while a central API platform team manages the API Management infrastructure. Learn about the [workspace features](workspaces-overview.md)
- [!INCLUDE [api-management-availability-premium](../../includes/api-management-availability-premium.md)]
+This article is an introduction to managing APIs, products, subscriptions, and other API Management resources in a *workspace*. A workspace is a place where a development team can own, manage, update, and productize their own APIs, while a central API platform team manages the API Management infrastructure. Learn about the [workspace features](workspaces-overview.md)
+ > [!NOTE] > * Workspaces are a preview feature of API Management and subject to certain [limitations](workspaces-overview.md#preview-limitations). > * Workspaces are supported in API Management REST API version 2022-09-01-preview or later.
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md
Previously updated : 12/13/2023 Last updated : 03/28/2024 # What is Azure API Management? + This article provides an overview of common scenarios and key components of Azure API Management. Azure API Management is a hybrid, multicloud management platform for APIs across all environments. As a platform-as-a-service, API Management supports the complete API lifecycle. > [!TIP]
Common scenarios include:
## API Management components
-Azure API Management is made up of an API *gateway*, a *management plane*, and a *developer portal*. These components are Azure-hosted and fully managed by default. API Management is available in various [tiers](api-management-features.md) differing in capacity and features.
+Azure API Management is made up of an API *gateway*, a *management plane*, and a *developer portal*. These components are Azure-hosted and fully managed by default. API Management is available in various [tiers](#api-management-tiers) differing in capacity and features.
:::image type="content" source="media/api-management-key-concepts-experiment/api-management-components.png" alt-text="Diagram showing key components of Azure API Management.":::
Using the developer portal, developers can:
* Download API definitions * Manage API keys
+## API Management tiers
+
+API Management is offered in a variety of pricing tiers to meet the needs of different customers. Each tier offers a distinct combination of features, performance, capacity limits, scalability, SLA, and pricing for different scenarios. The tiers are grouped as follows:
+
+* **Classic** - The original API Management offering, including the Developer, Basic, Standard, and Premium tiers. The Premium tier is designed for enterprises requiring access to private backends, enhanced security features, multi-region deployments, availability zones, and high scalability. The Developer tier is an economical option for non-production use, while Basic, Standard, and Premium are production-ready tiers.
+* **V2** - A new set of tiers that offer fast provisioning and scaling, including Basic v2 for development and testing, and Standard v2 for production workloads. Standard v2 supports simplified connection to network-isolated backends.
+* **Consumption** - The Consumption tier is a serverless gateway for managing APIs that scales based on demand and is billed per execution. It is designed for applications built with serverless compute, for microservices-based architectures, and for workloads with variable traffic patterns.
+
+**More information**:
+* [Feature-based comparison of the Azure API Management tiers](api-management-features.md)
+* [V2 service tiers](v2-service-tiers-overview.md)
+* [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/)
+ ## Integration with Azure services API Management integrates with many complementary Azure services to create enterprise solutions, including:
API Management integrates with many complementary Azure services to create enter
* [Basic enterprise integration](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) * [Landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) - ## Key concepts ### APIs
api-management Api Management Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-kubernetes.md
# Use Azure API Management with microservices deployed in Azure Kubernetes Service + Microservices are perfect for building APIs. With [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/) (AKS), you can quickly deploy and operate a [microservices-based architecture](/azure/architecture/guide/architecture-styles/microservices) in the cloud. You can then leverage [Azure API Management](https://aka.ms/apimrocks) (API Management) to publish your microservices as APIs for internal and external consumption. This article describes the options of deploying API Management with AKS. It assumes basic knowledge of Kubernetes, API Management, and Azure networking. ## Background
api-management Api Management Log To Eventhub Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-log-to-eventhub-sample.md
Last updated 01/23/2018
# Monitor your APIs with Azure API Management, Event Hubs, and Moesif++ The [API Management service](api-management-key-concepts.md) provides many capabilities to enhance the processing of HTTP requests sent to your HTTP API. However, the existence of the requests and responses is transient. The request is made and it flows through the API Management service to your backend API. Your API processes the request and a response flows back through to the API consumer. The API Management service keeps some important statistics about the APIs for display in the Azure portal dashboard, but beyond that, the details are gone. By using the log-to-eventhub policy in the API Management service, you can send any details from the request and response to an [Azure Event Hub](../event-hubs/event-hubs-about.md). There are a variety of reasons why you may want to generate events from HTTP messages being sent to your APIs. Some examples include audit trails of updates, usage analytics, exception alerting, and third-party integrations.
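A minimal sketch of the `log-to-eventhub` policy the article is built around; the logger ID is a placeholder for a Logger entity you would create first, and the fields logged are just examples:

```xml
<!-- Outbound (or inbound) section: send a CSV-style record to an event hub via a pre-created Logger. -->
<log-to-eventhub logger-id="my-eventhub-logger">
    @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress, context.Operation.Name) )
</log-to-eventhub>
```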
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
Previously updated : 03/08/2024 Last updated : 03/28/2024 # API Management policy reference
-This section provides links to reference articles for all API Management policies.
++
+This section provides brief descriptions and links to reference articles for all API Management policies. The API Management [gateways](api-management-gateways-overview.md) that support each policy are indicated. For detailed policy settings and examples, see the linked reference articles.
More information about policies:
More information about policies:
> [!IMPORTANT] > [Limit call rate by subscription](rate-limit-policy.md) and [Set usage quota by subscription](quota-policy.md) have a dependency on the subscription key. A subscription key isn't required when other policies are applied.
-## Access restriction policies
-- [Check HTTP header](check-header-policy.md) - Enforces existence and/or value of an HTTP Header.
-- [Get authorization context](get-authorization-context-policy.md) - Gets the authorization context of a specified [connection](credentials-overview.md) to a credential provider configured in the API Management instance.
-- [Limit call rate by subscription](rate-limit-policy.md) - Prevents API usage spikes by limiting call rate, on a per subscription basis.
-- [Limit call rate by key](rate-limit-by-key-policy.md) - Prevents API usage spikes by limiting call rate, on a per key basis.
-- [Restrict caller IPs](ip-filter-policy.md) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
-- [Set usage quota by subscription](quota-policy.md) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
-- [Set usage quota by key](quota-by-key-policy.md) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.
-- [Validate Microsoft Entra token](validate-azure-ad-token-policy.md) - Enforces existence and validity of a Microsoft Entra JWT extracted from either a specified HTTP header, query parameter, or token value.
-- [Validate JWT](validate-jwt-policy.md) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header, query parameter, or token value.
-- [Validate client certificate](validate-client-certificate-policy.md) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims.
-
-## Advanced policies
-- [Control flow](choose-policy.md) - Conditionally applies policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md).
-- [Emit metrics](emit-metric-policy.md) - Sends custom metrics to Application Insights at execution.
-- [Forward request](forward-request-policy.md) - Forwards the request to the backend service.
-- [Include fragment](include-fragment-policy.md) - Inserts a policy fragment in the policy definition.
-- [Limit concurrency](limit-concurrency-policy.md) - Prevents enclosed policies from executing by more than the specified number of requests at a time.
-- [Log to event hub](log-to-eventhub-policy.md) - Sends messages in the specified format to an event hub defined by a Logger entity.
-- [Mock response](mock-response-policy.md) - Aborts pipeline execution and returns a mocked response directly to the caller.
-- [Retry](retry-policy.md) - Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count.
-- [Return response](return-response-policy.md) - Aborts pipeline execution and returns the specified response directly to the caller.
-- [Send one way request](send-one-way-request-policy.md) - Sends a request to the specified URL without waiting for a response.
-- [Send request](send-request-policy.md) - Sends a request to the specified URL.
-- [Set HTTP proxy](proxy-policy.md) - Allows you to route forwarded requests via an HTTP proxy.
-- [Set request method](set-method-policy.md) - Allows you to change the HTTP method for a request.
-- [Set status code](set-status-policy.md) - Changes the HTTP status code to the specified value.
-- [Set variable](set-variable-policy.md) - Persists a value in a named [context](api-management-policy-expressions.md#ContextVariables) variable for later access.
-- [Trace](trace-policy.md) - Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs.
-- [Wait](wait-policy.md) - Waits for enclosed [Send request](send-request-policy.md), [Get value from cache](cache-lookup-value-policy.md), or [Control flow](choose-policy.md) policies to complete before proceeding.
-
-## Authentication policies
-- [Authenticate with Basic](authentication-basic-policy.md) - Authenticate with a backend service using Basic authentication.
-- [Authenticate with client certificate](authentication-certificate-policy.md) - Authenticate with a backend service using client certificates.
-- [Authenticate with managed identity](authentication-managed-identity-policy.md) - Authenticate with a backend service using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-
-## Caching policies
-- [Get from cache](cache-lookup-policy.md) - Perform cache lookup and return a valid cached response when available.
-- [Store to cache](cache-store-policy.md) - Caches response according to the specified cache control configuration.
-- [Get value from cache](cache-lookup-value-policy.md) - Retrieve a cached item by key.
-- [Store value in cache](cache-store-value-policy.md) - Store an item in the cache by key.
-- [Remove value from cache](cache-remove-value-policy.md) - Remove an item in the cache by key.
-
-## Cross-domain policies
-- [Allow cross-domain calls](cross-domain-policy.md) - Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients.
-- [CORS](cors-policy.md) - Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients.
-- [JSONP](jsonp-policy.md) - Adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients.
-
-## Dapr integration policies
-- [Send request to a service](set-backend-service-dapr-policy.md): Uses Dapr runtime to locate and reliably communicate with a Dapr microservice. To learn more about service invocation in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md#service-invocation) file.
-- [Send message to Pub/Sub topic](publish-to-dapr-policy.md): Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file.
-- [Trigger output binding](invoke-dapr-binding-policy.md): Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file.
-
-## GraphQL resolver policies
-- [Azure SQL data source for resolver](sql-data-source-policy.md) - Configures the Azure SQL request and optional response to resolve data for an object type and field in a GraphQL schema.
-- [Cosmos DB data source for resolver](cosmosdb-data-source-policy.md) - Configures the Cosmos DB request and optional response to resolve data for an object type and field in a GraphQL schema.
-- [HTTP data source for resolver](http-data-source-policy.md) - Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema.
-- [Publish event to GraphQL subscription](publish-event-policy.md) - Publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy in a GraphQL resolver for a related field in the schema for another operation type such as a mutation.
-
-## Transformation policies
-- [Convert JSON to XML](json-to-xml-policy.md) - Converts request or response body from JSON to XML.
-- [Convert XML to JSON](xml-to-json-policy.md) - Converts request or response body from XML to JSON.
-- [Find and replace string in body](find-and-replace-policy.md) - Finds a request or response substring and replaces it with a different substring.
-- [Mask URLs in content](redirect-content-urls-policy.md) - Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway.
-- [Set backend service](set-backend-service-policy.md) - Changes the backend service base URL of an incoming request to a URL or a [backend](backends.md). Referencing a backend resource allows you to manage the backend service base URL and other settings in a single place. Also implement [load balancing of traffic across a pool of backend services](backends.md#load-balanced-pool-preview) and [circuit breaker rules](backends.md#circuit-breaker-preview) to protect the backend from too many requests.
-- [Set body](set-body-policy.md) - Sets the message body for a request or response.
-- [Set HTTP header](set-header-policy.md) - Assigns a value to an existing response and/or request header or adds a new response and/or request header.
-- [Set query string parameter](set-query-parameter-policy.md) - Adds, replaces value of, or deletes request query string parameter.
-- [Rewrite URL](rewrite-uri-policy.md) - Converts a request URL from its public form to the form expected by the web service.
-- [Transform XML using an XSLT](xsl-transform-policy.md) - Applies an XSL transformation to XML in the request or response body.
-
-## Validation policies
-
-- [Validate content](validate-content-policy.md) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.
-- [Validate GraphQL request](validate-graphql-request-policy.md) - Validates and authorizes a request to a GraphQL API.
-- [Validate OData request](validate-odata-request-policy.md) - Validates a request to an OData API to ensure conformance with the OData specification.
-- [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema.
-- [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema.
-- [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in responses against the API schema.
+## Rate limiting and quotas
+
+|Policy |Description |Classic | V2 | Consumption | Self-hosted |
+|---|---|---|---|---|---|
+| [Limit call rate by subscription](rate-limit-policy.md) | Prevents API usage spikes by limiting call rate, on a per subscription basis. | Yes | Yes | Yes | Yes |
+| [Limit call rate by key](rate-limit-by-key-policy.md) | Prevents API usage spikes by limiting call rate, on a per key basis. | Yes | Yes | No | Yes |
+| [Set usage quota by subscription](quota-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. | Yes | Yes | Yes | Yes |
+| [Set usage quota by key](quota-by-key-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. | Yes | No | No | Yes |
+| [Limit concurrency](limit-concurrency-policy.md) | Prevents enclosed policies from executing by more than the specified number of requests at a time. | Yes | Yes | Yes | Yes |
+
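For illustration, the two subscription-scoped policies from the table above combine as follows in an inbound section (typically at product scope); the call counts and periods are arbitrary example values:

```xml
<!-- Inbound section: at most 20 calls per 90 seconds, and 10,000 calls per week, per subscription. -->
<rate-limit calls="20" renewal-period="90" />
<quota calls="10000" renewal-period="604800" />
```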
+## Authentication and authorization
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Check HTTP header](check-header-policy.md) | Enforces existence and/or value of an HTTP header. | Yes | Yes | Yes | Yes |
+| [Get authorization context](get-authorization-context-policy.md) | Gets the authorization context of a specified [connection](credentials-overview.md) to a credential provider configured in the API Management instance. | Yes | Yes | Yes | No |
+| [Restrict caller IPs](ip-filter-policy.md) | Filters (allows/denies) calls from specific IP addresses and/or address ranges. | Yes | Yes | Yes | Yes |
+| [Validate Microsoft Entra token](validate-azure-ad-token-policy.md) | Enforces existence and validity of a Microsoft Entra (formerly called Azure Active Directory) JWT extracted from either a specified HTTP header, query parameter, or token value. | Yes | Yes | Yes | Yes |
+| [Validate JWT](validate-jwt-policy.md) | Enforces existence and validity of a JWT extracted from either a specified HTTP header, query parameter, or token value. | Yes | Yes | Yes | Yes |
+| [Validate client certificate](validate-client-certificate-policy.md) |Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims. | Yes | Yes | Yes | Yes |
+| [Authenticate with Basic](authentication-basic-policy.md) | Authenticates with a backend service using Basic authentication. | Yes | Yes | Yes | Yes |
+| [Authenticate with client certificate](authentication-certificate-policy.md) | Authenticates with a backend service using client certificates. | Yes | Yes | Yes | Yes |
+| [Authenticate with managed identity](authentication-managed-identity-policy.md) | Authenticates with a backend service using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md). | Yes | Yes | Yes | Yes |
+
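Two of these inbound checks sketched together; the header name, error message, and IP range are illustrative placeholders:

```xml
<!-- Inbound section: require an Authorization header and allow only a sample IP range. -->
<check-header name="Authorization" failed-check-httpcode="401" failed-check-error-message="Not authorized" ignore-case="false" />
<ip-filter action="allow">
    <address-range from="20.0.0.0" to="20.255.255.255" />
</ip-filter>
```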
+## Content validation
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Validate content](validate-content-policy.md) | Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML. | Yes | Yes | Yes | Yes |
+| [Validate GraphQL request](validate-graphql-request-policy.md) | Validates and authorizes a request to a GraphQL API. | Yes | Yes | Yes | Yes |
+| [Validate OData request](validate-odata-request-policy.md) | Validates a request to an OData API to ensure conformance with the OData specification. | Yes | Yes | Yes | Yes |
+| [Validate parameters](validate-parameters-policy.md) | Validates the request header, query, or path parameters against the API schema. | Yes | Yes | Yes | Yes |
+| [Validate headers](validate-headers-policy.md) | Validates the response headers against the API schema. | Yes | Yes | Yes | Yes |
+| [Validate status code](validate-status-code-policy.md) | Validates the HTTP status codes in responses against the API schema. | Yes | Yes | Yes | Yes |
+
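A minimal `validate-content` sketch, assuming a JSON API and an arbitrary 100 KB size cap:

```xml
<!-- Inbound section: block oversized or non-JSON bodies (example limits). -->
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent">
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```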
+## Routing
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Forward request](forward-request-policy.md) | Forwards the request to the backend service. | Yes | Yes | Yes | Yes |
+| [Set backend service](set-backend-service-policy.md) | Changes the backend service base URL of an incoming request to a URL or a [backend](backends.md). Referencing a backend resource allows you to manage the backend service base URL and other settings in a single place. Also implement [load balancing of traffic across a pool of backend services](backends.md#load-balanced-pool-preview) and [circuit breaker rules](backends.md#circuit-breaker-preview) to protect the backend from too many requests. | Yes | Yes | Yes | Yes |
+| [Set HTTP proxy](proxy-policy.md) | Allows you to route forwarded requests via an HTTP proxy. | Yes | Yes | Yes | Yes |
+
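For instance, routing to a configured backend resource rather than a hard-coded URL is a one-liner; `my-backend` is a hypothetical backend ID:

```xml
<!-- Inbound section: route the request to the backend resource named "my-backend". -->
<set-backend-service backend-id="my-backend" />
```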
+## Caching
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Get from cache](cache-lookup-policy.md) | Performs a cache lookup and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
+| [Store to cache](cache-store-policy.md) | Caches response according to the specified cache control configuration. | Yes | Yes | Yes | Yes |
+| [Get value from cache](cache-lookup-value-policy.md) | Retrieves a cached item by key. | Yes | Yes | Yes | Yes |
+| [Store value in cache](cache-store-value-policy.md) | Stores an item in the cache by key. | Yes | Yes | Yes | Yes |
+| [Remove value from cache](cache-remove-value-policy.md) | Removes an item in the cache by key. | Yes | Yes | Yes | Yes |
+
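The lookup/store pair from this table is typically split across sections: the lookup runs inbound and the store runs outbound. A rough sketch, with the duration and vary-by settings as example values:

```xml
<inbound>
    <!-- Return a cached response when one exists for this URL + Accept header. -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
        <vary-by-header>Accept</vary-by-header>
    </cache-lookup>
</inbound>
<outbound>
    <!-- Cache the backend response for one hour. -->
    <cache-store duration="3600" />
</outbound>
```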
+## Transformation
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Set request method](set-method-policy.md) | Allows you to change the HTTP method for a request. | Yes | Yes | Yes | Yes |
+| [Set status code](set-status-policy.md) | Changes the HTTP status code to the specified value. | Yes | Yes | Yes | Yes |
+| [Set variable](set-variable-policy.md) | Persists a value in a named [context](api-management-policy-expressions.md#ContextVariables) variable for later access. | Yes | Yes | Yes | Yes |
+| [Set body](set-body-policy.md) | Sets the message body for a request or response. | Yes | Yes | Yes | Yes |
+| [Set HTTP header](set-header-policy.md) | Assigns a value to an existing response and/or request header or adds a new response and/or request header. | Yes | Yes | Yes | Yes |
+| [Set query string parameter](set-query-parameter-policy.md) | Adds, replaces value of, or deletes request query string parameter. | Yes | Yes | Yes | Yes |
+| [Rewrite URL](rewrite-uri-policy.md) | Converts a request URL from its public form to the form expected by the web service. | Yes | Yes | Yes | Yes |
+| [Convert JSON to XML](json-to-xml-policy.md) | Converts request or response body from JSON to XML. | Yes | Yes | Yes | Yes |
+| [Convert XML to JSON](xml-to-json-policy.md) | Converts request or response body from XML to JSON. | Yes | Yes | Yes | Yes |
+| [Find and replace string in body](find-and-replace-policy.md) | Finds a request or response substring and replaces it with a different substring. | Yes | Yes | Yes | Yes |
+| [Mask URLs in content](redirect-content-urls-policy.md) | Rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. | Yes | Yes | Yes | Yes |
+| [Transform XML using an XSLT](xsl-transform-policy.md) | Applies an XSL transformation to XML in the request or response body. | Yes | Yes | Yes | Yes |
+| [Return response](return-response-policy.md) | Aborts pipeline execution and returns the specified response directly to the caller. | Yes | Yes | Yes | Yes |
+| [Mock response](mock-response-policy.md) | Aborts pipeline execution and returns a mocked response directly to the caller. | Yes | Yes | Yes | Yes |
+
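Two of these transformations sketched together; the header value and the `apply` setting are illustrative choices, not from the source:

```xml
<!-- Outbound section: convert the backend's XML body to JSON and stamp a response header. -->
<xml-to-json kind="direct" apply="always" consider-accept-header="false" />
<set-header name="x-transformed-by" exists-action="override">
    <value>api-management</value>
</set-header>
```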
+## Cross-domain
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Allow cross-domain calls](cross-domain-policy.md) | Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients. | Yes | Yes | Yes | Yes |
+| [CORS](cors-policy.md) | Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients. | Yes | Yes | Yes | Yes |
+| [JSONP](jsonp-policy.md) | Adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients. | Yes | Yes | Yes | Yes |
+
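A minimal `cors` sketch; the origin, methods, and headers shown are placeholders:

```xml
<!-- Inbound section: allow browser calls from one example origin. -->
<cors allow-credentials="false">
    <allowed-origins>
        <origin>https://app.contoso.example</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>Content-Type</header>
    </allowed-headers>
</cors>
```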
+## Integration and external communication
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Send request](send-request-policy.md) | Sends a request to the specified URL. | Yes | Yes | Yes | Yes |
+| [Send one way request](send-one-way-request-policy.md) | Sends a request to the specified URL without waiting for a response. | Yes | Yes | Yes | Yes |
+| [Log to event hub](log-to-eventhub-policy.md) | Sends messages in the specified format to an event hub defined by a Logger entity.| Yes | Yes | Yes | Yes |
+| [Send request to a service (Dapr)](set-backend-service-dapr-policy.md)| Uses Dapr runtime to locate and reliably communicate with a Dapr microservice. To learn more about service invocation in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md#service-invocation) file. | No | No | No | Yes |
+| [Send message to Pub/Sub topic (Dapr)](publish-to-dapr-policy.md) | Uses Dapr runtime to publish a message to a Publish/Subscribe topic. To learn more about Publish/Subscribe messaging in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. | No | No | No | Yes |
+| [Trigger output binding (Dapr)](invoke-dapr-binding-policy.md) | Uses Dapr runtime to invoke an external system via output binding. To learn more about bindings in Dapr, see the description in this [README](https://github.com/dapr/docs/blob/master/README.md) file. | No | No | No | Yes |
+
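A fire-and-forget sketch using `send-one-way-request`; the webhook URL and JSON payload are hypothetical:

```xml
<!-- Outbound section: notify an external webhook without waiting for its reply. -->
<send-one-way-request mode="new">
    <set-url>https://hooks.example.com/apim-events</set-url>
    <set-method>POST</set-method>
    <set-body>@($"{{ \"operation\": \"{context.Operation.Name}\", \"status\": {context.Response.StatusCode} }}")</set-body>
</send-one-way-request>
```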
+## Logging
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Trace](trace-policy.md) | Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs. | Yes | Yes<sup>1</sup> | Yes | Yes |
+| [Emit metrics](emit-metric-policy.md) | Sends custom metrics to Application Insights at execution. | Yes | Yes | Yes | Yes |
+
+<sup>1</sup> In the V2 gateway, the `trace` policy currently does not add tracing output in the test console.
+
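A sketch of `emit-metric` with one custom dimension; the metric name and namespace are arbitrary example values:

```xml
<!-- Inbound section: count requests per API as a custom Application Insights metric. -->
<emit-metric name="RequestCount" value="1" namespace="apim-custom">
    <dimension name="API" value="@(context.Api.Name)" />
</emit-metric>
```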
+## GraphQL resolvers
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Azure SQL data source for resolver](sql-data-source-policy.md) | Configures the Azure SQL request and optional response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | No | No |
+| [Cosmos DB data source for resolver](cosmosdb-data-source-policy.md) | Configures the Cosmos DB request and optional response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | No | No |
+| [HTTP data source for resolver](http-data-source-policy.md) | Configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. | Yes | Yes | Yes | No |
+| [Publish event to GraphQL subscription](publish-event-policy.md) | Publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy in a GraphQL resolver for a related field in the schema for another operation type such as a mutation. | Yes | Yes | Yes | No |
+
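As a rough example, an HTTP data source resolver wires one GraphQL field to a REST call; the endpoint URL is a placeholder:

```xml
<!-- Resolver for a GraphQL query field: fetch data from a hypothetical REST endpoint. -->
<http-data-source>
    <http-request>
        <set-method>GET</set-method>
        <set-url>https://data.contoso.example/api/orders</set-url>
    </http-request>
</http-data-source>
```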
+## Policy control and flow
+
+|Policy |Description | Classic | V2 | Consumption |Self-hosted |
+|---|---|---|---|---|---|
+| [Control flow](choose-policy.md) | Conditionally applies policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md). | Yes | Yes | Yes | Yes |
+| [Include fragment](include-fragment-policy.md) | Inserts a policy fragment in the policy definition. | Yes | Yes | Yes | Yes |
+| [Retry](retry-policy.md) | Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count. | Yes | Yes | Yes | Yes |
+| [Wait](wait-policy.md) | Waits for enclosed [Send request](send-request-policy.md), [Get value from cache](cache-lookup-value-policy.md), or [Control flow](choose-policy.md) policies to complete before proceeding. | Yes | Yes | Yes | Yes |
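A small `choose` sketch combining two entries from this table; the status check and the substitute response are illustrative:

```xml
<!-- Outbound section: mask backend 5xx errors behind a generic response. -->
<choose>
    <when condition="@(context.Response.StatusCode >= 500)">
        <return-response>
            <set-status code="503" reason="Service Unavailable" />
        </return-response>
    </when>
</choose>
```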
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
Last updated 03/07/2023
# API Management policy expressions++ This article discusses policy expressions syntax in C# 7. Each expression has access to: * The implicitly provided [context](api-management-policy-expressions.md#ContextVariables) variable. * An allowed [subset](api-management-policy-expressions.md#CLRTypes) of .NET Framework types.
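Multi-statement expressions use the `@{ }` form and must return a value. One hedged sketch (the field being removed is hypothetical):

```xml
<!-- Multi-statement expression (@{ }): filter the response body before returning it. -->
<set-body>@{
    var body = context.Response.Body.As<JObject>();
    // Drop a hypothetical internal field before the response reaches the caller.
    body.Remove("internalId");
    return body.ToString();
}</set-body>
```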
api-management Api Management Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-revisions.md
# Revisions in Azure API Management + Revisions allow you to make changes to your APIs in a controlled and safe way. When you want to make changes, create a new revision. You can then edit and test the API without disturbing your API consumers. When you're ready, you then make your revision current. At the same time, you can optionally post an entry to the change log, to keep your API consumers up to date with what has changed. The change log is published to your developer portal.
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-role-based-access-control.md
# How to use role-based access control in Azure API Management + Azure API Management relies on Azure role-based access control (Azure RBAC) to enable fine-grained access management for API Management services and entities (for example, APIs and policies). This article gives you an overview of the built-in and custom roles in API Management. For more information on access management in the Azure portal, see [Get started with access management in the Azure portal](../role-based-access-control/overview.md). [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
api-management Api Management Sample Cache By Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-cache-by-key.md
# Custom caching in Azure API Management++ The Azure API Management service has built-in support for [HTTP response caching](api-management-howto-cache.md) using the resource URL as the key. The key can be modified by request headers using the `vary-by` properties. This is useful for caching entire HTTP responses (also known as representations), but sometimes it's helpful to cache just a portion of a representation. The [cache-lookup-value](cache-lookup-value-policy.md) and [cache-store-value](cache-store-value-policy.md) policies provide the ability to store and retrieve arbitrary pieces of data from within policy definitions. This ability also adds value to the [send-request](send-request-policy.md) policy because you can cache responses from external services. ## Architecture
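The fragment-caching approach, roughly sketched; it assumes a context variable (here `enduserid`) was set earlier in the pipeline, and the duration is an example value:

```xml
<!-- Inbound: try to fetch a cached profile fragment keyed by user ID. -->
<cache-lookup-value key="@("userprofile-" + context.Variables["enduserid"])" variable-name="userprofile" />
<!-- Later (e.g., after a send-request): store the fetched fragment for reuse. -->
<cache-store-value key="@("userprofile-" + context.Variables["enduserid"])" value="@((string)context.Variables["userprofile"])" duration="100000" />
```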
api-management Api Management Sample Flexible Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-flexible-throttling.md
# Advanced request throttling with Azure API Management++ Being able to throttle incoming requests is a key role of Azure API Management. By controlling either the rate of requests or the total requests/data transferred, API Management allows API providers to protect their APIs from abuse and create value for different API product tiers. ## Rate limits and quotas
Rate throttling capabilities that are scoped to a particular subscription are us
## Custom key-based throttling > [!NOTE]
-> The `rate-limit-by-key` and `quota-by-key` policies are not available when in the Consumption tier of Azure API Management.
+> The `rate-limit-by-key` and `quota-by-key` policies are not available in the Consumption tier of Azure API Management. The `quota-by-key` policy is also currently not available in the v2 tiers.
The [rate-limit-by-key](rate-limit-by-key-policy.md) and [quota-by-key](quota-by-key-policy.md) policies provide a more flexible solution to traffic control. These policies allow you to define expressions to identify the keys that are used to track traffic usage. How this works is most easily illustrated with an example.
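For instance, keying the limit on the caller's IP address uses an expression-based counter key; the call count and period below are arbitrary:

```xml
<!-- Inbound section: each client IP gets 10 calls per 60 seconds. -->
<rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
```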
api-management Api Management Sample Send Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-send-request.md
# Using external services from the Azure API Management service++ The policies available in the Azure API Management service can do a wide range of useful work based purely on the incoming request, the outgoing response, and basic configuration information. However, being able to interact with external services from API Management policies opens up many more opportunities. You have previously seen how to interact with the [Azure Event Hub service for logging, monitoring, and analytics](api-management-log-to-eventhub-sample.md). This article demonstrates policies that allow you to interact with any external HTTP-based service. These policies can be used for triggering remote events or for retrieving information that is used to manipulate the original request and response in some way.
There are certain tradeoffs when using a fire-and-forget style of request. If fo
The `send-request` policy enables using an external service to perform complex processing functions and return data to the API management service that can be used for further policy processing. ### Authorizing reference tokens
-A major function of API Management is protecting backend resources. If the authorization server used by your API creates [JWT tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) as part of its OAuth2 flow, as [Microsoft Entra ID](../active-directory/hybrid/whatis-hybrid-identity.md) does, then you can use the `validate-jwt` policy to verify the validity of the token. Some authorization servers create what are called [reference tokens](https://leastprivilege.com/2015/11/25/reference-tokens-and-introspection/) that cannot be verified without making a callback to the authorization server.
+A major function of API Management is protecting backend resources. If the authorization server used by your API creates [JWT tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) as part of its OAuth2 flow, as [Microsoft Entra ID](../active-directory/hybrid/whatis-hybrid-identity.md) does, then you can use the `validate-jwt` policy or `validate-azure-ad-token` policy to verify the validity of the token. Some authorization servers create what are called [reference tokens](https://leastprivilege.com/2015/11/25/reference-tokens-and-introspection/) that cannot be verified without making a callback to the authorization server.
### Standardized introspection In the past, there has been no standardized way of verifying a reference token with an authorization server. However, the IETF published [RFC 7662](https://tools.ietf.org/html/rfc7662), which defines how a resource server can verify the validity of a token.
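A hedged sketch of such a callback using `send-request`; the introspection URL is a placeholder, and the response stored in `tokenstate` would still need to be inspected by a later policy:

```xml
<!-- Inbound section: ask a hypothetical authorization server whether the presented token is active. -->
<send-request mode="new" response-variable-name="tokenstate" timeout="20" ignore-error="true">
    <set-url>https://authserver.example/introspect</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/x-www-form-urlencoded</value>
    </set-header>
    <set-body>@($"token={context.Request.Headers.GetValueOrDefault("Authorization", "").Split(' ').Last()}")</set-body>
</send-request>
```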
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
# Subscriptions in Azure API Management + In Azure API Management, *subscriptions* are the most common way for API consumers to access APIs published through an API Management instance. This article provides an overview of the concept. > [!NOTE]
api-management Api Management Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-terminology.md
# Azure API Management terminology + This article gives definitions for the terms that are specific to Azure API Management. ## Term definitions
api-management Api Management Troubleshoot Cannot Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-troubleshoot-cannot-add-custom-domain.md
# Failed to update API Management service hostnames + This article describes the "Failed to update API Management service hostnames" error that you may experience when you add a custom domain for the Azure API Management service. This article provides troubleshooting steps to help you resolve the issue. ## Symptoms
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
Previously updated : 01/03/2022 Last updated : 03/26/2024 # Deploy your Azure API Management instance to a virtual network - internal mode
-Azure API Management can be deployed (injected) inside an Azure virtual network (VNet) to access backend services within the network. For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
+
+Azure API Management can be deployed (injected) inside an Azure virtual network (VNet) to access backend services within the network. For VNet connectivity options, requirements, and considerations, see:
+
+* [Using a virtual network with Azure API Management](virtual-network-concepts.md)
+* [Network resource requirements for API Management injection into a virtual network](virtual-network-injection-resources.md)
This article explains how to set up VNet connectivity for your API Management instance in the *internal* mode. In this mode, you can only access the following API Management endpoints within a VNet whose access you control. * The API gateway
For configurations specific to the *external* mode, where the API Management end
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] - [!INCLUDE [api-management-virtual-network-prerequisites](../../includes/api-management-virtual-network-prerequisites.md)] ## Enable VNet connection
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
Previously updated : 01/03/2022 Last updated : 03/26/2024 # Deploy your Azure API Management instance to a virtual network - external mode
-Azure API Management can be deployed (injected) inside an Azure virtual network (VNet) to access backend services within the network. For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
+
+Azure API Management can be deployed (injected) inside an Azure virtual network (VNet) to access backend services within the network. For VNet connectivity options, requirements, and considerations, see:
+
+* [Using a virtual network with Azure API Management](virtual-network-concepts.md)
+* [Network resource requirements for API Management injection into a virtual network](virtual-network-injection-resources.md)
This article explains how to set up VNet connectivity for your API Management instance in the *external* mode, where the developer portal, API gateway, and other API Management endpoints are accessible from the public internet, and backend services are located in the network.
For configurations specific to the *internal* mode, where the endpoints are acce
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] - [!INCLUDE [api-management-virtual-network-prerequisites](../../includes/api-management-virtual-network-prerequisites.md)] ## Enable VNet connection
For configurations specific to the *internal* mode, where the endpoints are acce
7. In the top navigation bar, select **Save**, then select **Apply network configuration**.
-It can take 15 to 45 minutes to update the API Management instance. The Developer tier has downtime during the process. The Basic and higher SKUs don't have downtime during the process.
+It can take 15 to 45 minutes to update the API Management instance. Instances in the Developer tier have downtime during the process. Instances in the Premium tier don't have downtime during the process.
### Enable connectivity using a Resource Manager template (`stv2` compute platform)
api-management Api Management Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-versions.md
# Versions in Azure API Management + Versions allow you to present groups of related APIs to your developers. You can use versions to handle breaking changes in your API safely. Clients can choose to use your new API version when they're ready, while existing clients continue to use an older version. Versions are differentiated through a version identifier (which is any string value you choose), and a versioning scheme allows clients to identify which version of an API they want to use. For most purposes, each API version can be considered its own independent API. Two different API versions might have different sets of operations and different policies.
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
# Authentication and authorization to APIs in Azure API Management + This article is an introduction to a rich, flexible set of features in API Management that help you secure users' access to managed APIs. API authentication and authorization in API Management involve securing the end-to-end communication of client apps to the API Management gateway and through to backend APIs. In many customer environments, OAuth 2.0 is the preferred API authorization protocol. API Management supports OAuth 2.0 authorization between the client and the API Management gateway, between the gateway and the backend API, or both independently.
api-management Authentication Basic Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md
Previously updated : 12/01/2022 Last updated : 03/18/2024 # Authenticate with Basic + Use the `authentication-basic` policy to authenticate with a backend service using Basic authentication. This policy effectively sets the HTTP Authorization header to the value corresponding to the credentials provided in the policy. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
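In its simplest form, with the credentials shown as named-value placeholders rather than literals:

```xml
<!-- Inbound section: add a Basic Authorization header built from hypothetical named values. -->
<authentication-basic username="{{basic-auth-username}}" password="{{basic-auth-password}}" />
```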
Use the `authentication-basic` policy to authenticate with a backend service usi
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
Use the `authentication-basic` policy to authenticate with a backend service usi
## Related policies
-* [API Management authentication policies](api-management-authentication-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Authentication Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-certificate-policy.md
Previously updated : 12/01/2022 Last updated : 03/18/2024 # Authenticate with client certificate + Use the `authentication-certificate` policy to authenticate with a backend service using a client certificate. The certificate must first be [installed into API Management](./api-management-howto-mutual-certificates.md); identify it in the policy by its thumbprint or certificate ID (resource name). > [!CAUTION]
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Examples
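A minimal sketch by certificate ID; `my-client-cert` is a hypothetical certificate resource name:

```xml
<!-- Inbound section: present the client certificate uploaded as "my-client-cert" to the backend. -->
<authentication-certificate certificate-id="my-client-cert" />
```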
## Related policies
-* [API Management authentication policies](api-management-authentication-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Authentication Managed Identity Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-managed-identity-policy.md
Previously updated : 12/06/2022 Last updated : 03/18/2024 # Authenticate with managed identity + Use the `authentication-managed-identity` policy to authenticate with a backend service using the managed identity. This policy essentially uses the managed identity to obtain an access token from Microsoft Entra ID for accessing the specified resource. After successfully obtaining the token, the policy will set the value of the token in the `Authorization` header using the `Bearer` scheme. API Management caches the token until it expires. Both system-assigned identity and any of the multiple user-assigned identities can be used to request a token. If `client-id` is not provided, the system-assigned identity is assumed. If the `client-id` attribute is provided, a token is requested for that user-assigned identity from Microsoft Entra ID.
Both system-assigned identity and any of the multiple user-assigned identities c
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Examples
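For example, obtaining a token for Azure Key Vault with the system-assigned identity; the resource URI is the standard Key Vault audience, so swap in the target service you actually need:

```xml
<!-- Inbound section: use the system-assigned managed identity to get a Key Vault access token. -->
<authentication-managed-identity resource="https://vault.azure.net" />
```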
Both system-assigned identity and any of the multiple user-assigned identities c
## Related policies
-* [API Management authentication policies](api-management-authentication-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Automate Portal Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/automate-portal-deployments.md
# Automate developer portal deployments + The API Management developer portal supports programmatic access to content. It allows you to import data to, or export it from, an API Management service through the [content management REST API](/rest/api/apimanagement/). REST API access works for both managed and self-hosted portals. ## Automated migration script
api-management Automation Manage Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/automation-manage-api-management.md
Last updated 02/13/2018
# Managing Azure API Management using Azure Automation++ This guide introduces the Azure Automation service and how it can be used to simplify management of Azure API Management. ## What is Azure Automation?
api-management Azure Openai Api From Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-api-from-specification.md
# Import an Azure OpenAI API as a REST API + This article shows how to import an [Azure OpenAI](/azure/ai-services/openai/overview) API into an Azure API Management instance from its OpenAPI specification. After importing the API as a REST API, you can manage and secure it, and publish it to developers. ## Prerequisites
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
# Backends in API Management + A *backend* (or *API backend*) in API Management is an HTTP service that implements your front-end API and its operations. When importing certain APIs, API Management configures the API backend automatically. For example, API Management configures the backend web service when importing:
api-management Api Version Retirement Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/api-version-retirement-sep-2023.md
# API version retirements (September 2023) + Azure API Management uses Azure Resource Manager (ARM) to configure your API Management instances. The API version is embedded in your use of templates that describe your infrastructure, tools that are used to configure the service, and programs that you write to manage your Azure API Management services. On 30 September 2023, all API versions for the Azure API Management service prior to **2021-08-01** will be retired and API calls using those API versions will fail. This means you'll no longer be able to create or manage your API Management services using your existing templates, tools, scripts, and programs until they've been updated. Data operations (such as accessing the APIs or Products configured on Azure API Management) will be unaffected by this update, including after 30 September 2023.
api-management Captcha Endpoint Change Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/captcha-endpoint-change-sep-2025.md
# CAPTCHA endpoint update (September 2025) + On 30 September 2025, as part of our continuing work to increase the resiliency of API Management services, we're permanently changing the CAPTCHA endpoint used by the developer portal. This change will have no effect on the availability of your API Management service. However, you may have to take the steps described below to continue using the developer portal beyond 30 September 2025.
api-management Identity Provider Adal Retirement Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md
# ADAL-based Microsoft Entra ID or Azure AD B2C identity provider retirement (September 2025) + On 30 September 2025, as part of our continuing work to increase the resiliency of API Management services, we're removing support for the previous library for user authentication and authorization in the developer portal (Azure Active Directory Authentication Library, or ADAL). You need to migrate your Microsoft Entra ID or Azure AD B2C applications, change identity provider configuration to use the Microsoft Authentication Library (MSAL), and republish your developer portal. This change will have no effect on the availability of your API Management service. However, you have to take the steps described below to configure your API Management service if you wish to continue using Microsoft Entra ID or Azure AD B2C identity providers beyond 30 September 2025.
api-management Legacy Portal Retirement Oct 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/legacy-portal-retirement-oct-2023.md
# Legacy developer portal retirement (October 2023) + Azure API Management in the dedicated service tiers provides a customizable developer portal where API consumers can discover APIs managed in your API Management instance, learn how to use them, and request access. The current ("new") developer portal was released in October 2020 and is the successor to an earlier ("legacy") version of the developer portal. The legacy portal was deprecated with the release of the new developer portal. On 31 October 2023, the legacy portal was retired and will no longer be supported. If you want to continue using the developer portal, you must migrate to the new developer portal.
api-management Metrics Retirement Aug 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/metrics-retirement-aug-2023.md
# Metrics retirements (August 2023) + Azure API Management integrates natively with Azure Monitor and emits metrics every minute, giving customers visibility into the state and health of their APIs. The following five legacy metrics have been deprecated since May 2019 and will no longer be available after 31 August 2023: * Total Gateway Requests
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
# Upcoming breaking changes + The following table lists all the upcoming breaking changes and feature retirements for Azure API Management. | Change Title | Effective Date |
api-management Rp Source Ip Address Change Mar 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-mar-2023.md
# Resource Provider source IP address updates (March 2023) + On 31 March 2023, as part of our continuing work to increase the resiliency of API Management services, we're making the resource providers for Azure API Management zone redundant in each region. The IP address that the resource provider uses to communicate with your service will change in seven regions: | Region | Old IP Address | New IP Address |
api-management Rp Source Ip Address Change Sep 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/rp-source-ip-address-change-sep-2023.md
# Resource provider source IP address updates (September 2023) + On 30 September 2023 as part of our continuing work to increase the resiliency of API Management services, we're making the resource providers for Azure API Management zone redundant in each region. The IP address that the resource provider uses to communicate with your service will change if it's located in Switzerland North: * Old IP address: 51.107.0.91
api-management Self Hosted Gateway V0 V1 Retirement Oct 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md
# Support ending for Azure API Management self-hosted gateway version 0 and version 1 container images (October 2023) + The [self-hosted gateway](../self-hosted-gateway-overview.md) is an optional, containerized version of the default managed gateway included in every API Management service. On 1 October 2023 we're removing support for the v0 and v1 versions of the self-hosted gateway container image. If you've deployed the self-hosted gateway using either of these container images, you need to take the steps below to continue using the self-hosted gateway by migrating to the v2 container image and configuration API. ## Is my service affected by this?
api-management Stv1 Platform Retirement August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md
# stv1 platform retirement (August 2024) + As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many details of the infrastructure used to host and run your service. **The infrastructure associated with the API Management `stv1` compute platform version will be retired effective 31 August 2024.** A more current compute platform version (`stv2`) is already available, and provides enhanced service capabilities. The following table summarizes the compute platforms currently used for instances in the different API Management service tiers.
api-management Workspaces Breaking Changes June 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/workspaces-breaking-changes-june-2024.md
# Workspaces - breaking changes (June 2024) + On 14 June 2024, as part of our development of [workspaces](../workspaces-overview.md) (preview) in Azure API Management, we're introducing several breaking changes. These changes will have no effect on the availability of your API Management service. However, you may have to take action to continue using full workspaces functionality beyond 14 June 2024.
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Get from cache + Use the `cache-lookup` policy to perform cache lookup and return a valid cached response when available. This policy can be applied in cases where response content remains static over a period of time. Response caching reduces bandwidth and processing requirements imposed on the backend web server and lowers latency perceived by API consumers. > [!NOTE]
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
For more information, see [Policy expressions](api-management-policy-expressions
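As a minimal illustration of the pattern this policy enables (the vary-by header and cache duration are placeholders, not taken from this article), `cache-lookup` in the inbound section is typically paired with `cache-store` in the outbound section:

```xml
<policies>
    <inbound>
        <base />
        <!-- Serve a cached response when available; cache entries vary by the Accept header -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
            <vary-by-header>Accept</vary-by-header>
        </cache-lookup>
    </inbound>
    <outbound>
        <base />
        <!-- On a cache miss, store the backend response for 20 minutes (illustrative duration, in seconds) -->
        <cache-store duration="1200" />
    </outbound>
</policies>
```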
## Related policies
-* [API Management caching policies](api-management-caching-policies.md)
+* [Caching](api-management-policies.md#caching)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Cache Lookup Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-value-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Get value from cache++ Use the `cache-lookup-value` policy to perform cache lookup by key and return a cached value. The key can have an arbitrary string value and is typically provided using a policy expression. > [!NOTE]
Use the `cache-lookup-value` policy to perform cache lookup by key and return a
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
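A minimal sketch of the policy (the key, default value, and variable name are illustrative assumptions): the cached value, or the default when the key isn't found, is placed in a context variable for use by later policies.

```xml
<!-- Look up a cached value by key; fall back to "none" and expose the result
     as context.Variables["businessData"] (names are illustrative) -->
<cache-lookup-value key="business-data-key"
    default-value="none"
    variable-name="businessData" />
```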
For more information and examples of this policy, see [Custom caching in Azure A
## Related policies
-* [API Management caching policies](api-management-caching-policies.md)
+* [Caching](api-management-policies.md#caching)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Cache Remove Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-remove-value-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Remove value from cache++ The `cache-remove-value` policy deletes a cached item identified by its key. The key can have an arbitrary string value and is typically provided using a policy expression. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `cache-remove-value` policy deletes a cached item identified by its key. The key ca
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
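A minimal sketch (the key is an illustrative assumption), for example to invalidate an entry written earlier by `cache-store-value`:

```xml
<!-- Delete the cached item stored under this key (key name is illustrative) -->
<cache-remove-value key="business-data-key" />
```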
For more information and examples of this policy, see [Custom caching in Azure A
## Related policies
-* [API Management caching policies](api-management-caching-policies.md)
+* [Caching](api-management-policies.md#caching)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
Previously updated : 01/02/2024 Last updated : 03/18/2024 # Store to cache + The `cache-store` policy caches responses according to the specified cache settings. This policy can be applied in cases where response content remains static over a period of time. Response caching reduces bandwidth and processing requirements imposed on the backend web server and lowers latency perceived by API consumers. > [!NOTE]
The `cache-store` policy caches responses according to the specified cache setti
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
For more information, see [Policy expressions](api-management-policy-expressions
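For illustration, a minimal outbound sketch (the duration is a placeholder, not from this article); `cache-store` typically complements a `cache-lookup` policy in the inbound section:

```xml
<outbound>
    <base />
    <!-- Cache the response for one hour (illustrative duration, in seconds) -->
    <cache-store duration="3600" />
</outbound>
```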
## Related policies
-* [API Management caching policies](api-management-caching-policies.md)
+* [Caching](api-management-policies.md#caching)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Cache Store Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-value-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Store value in cache++ The `cache-store-value` policy performs cache storage by key. The key can have an arbitrary string value and is typically provided using a policy expression. > [!NOTE]
The `cache-store-value` policy performs cache storage by key. The key can have an arbit
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
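A minimal sketch of the policy (the key, value expression, and duration are illustrative assumptions):

```xml
<!-- Store a computed value under a key for one hour (names and duration are illustrative) -->
<cache-store-value key="last-request-time"
    value="@(DateTime.UtcNow.ToString())"
    duration="3600" />
```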
For more information and examples of this policy, see [Custom caching in Azure A
## Related policies
-* [API Management caching policies](api-management-caching-policies.md)
+* [Caching](api-management-policies.md#caching)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Check Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/check-header-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Check HTTP header + Use the `check-header` policy to enforce that a request has a specified HTTP header. You can optionally check to see if the header has a specific value or one of a range of allowed values. If the check fails, the policy terminates request processing and returns the HTTP status code and error message specified by the policy. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
Use the `check-header` policy to enforce that a request has a specified HTTP he
- **[Policy sections:](./api-management-howto-policies.md#sections)** inbound - **[Policy scopes:](./api-management-howto-policies.md#scopes)** global, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
Use the `check-header` policy to enforce that a request has a specified HTTP he
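A minimal sketch of the policy (the header name, allowed value, and status code are illustrative assumptions):

```xml
<!-- Require an X-Api-Version header with one allowed value; reject other requests with 400
     (header name, value, and status code are illustrative) -->
<check-header name="X-Api-Version" failed-check-httpcode="400"
    failed-check-error-message="Unsupported or missing X-Api-Version header" ignore-case="true">
    <value>2024-03-01</value>
</check-header>
```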
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Choose Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/choose-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Control flow + Use the `choose` policy to conditionally apply policy statements based on the results of the evaluation of Boolean [expressions](api-management-policy-expressions.md). Use the policy for control flow similar to an if-then-else or a switch construct in a programming language.
The `choose` policy must contain at least one `<when/>` element. The `<otherwise
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Examples
This example shows how to perform content filtering by removing data elements fr
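Separately, a minimal sketch of the when/otherwise structure (the condition and header are illustrative assumptions, not from this article):

```xml
<outbound>
    <base />
    <choose>
        <!-- Branch on the backend status code (condition is illustrative) -->
        <when condition="@(context.Response.StatusCode >= 500)">
            <set-header name="X-Backend-Degraded" exists-action="override">
                <value>true</value>
            </set-header>
        </when>
        <otherwise>
            <!-- No action for other responses -->
        </otherwise>
    </choose>
</outbound>
```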
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Policy control and flow](api-management-policies.md#policy-control-and-flow)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
Title: Azure API Management compute platform
-description: Learn about the compute platform used to host your API Management service instance. Instances in the dedicated service tiers of API Management are hosted on the stv1 or stv2 compute platform.
+description: Learn about the compute platform used to host your API Management service instance. Instances in the classic service tiers of API Management are hosted on the stv1 or stv2 compute platform.
-# Compute platform for Azure API Management
+# Compute platform for Azure API Management - Classic tiers
+ As a cloud platform-as-a-service (PaaS), Azure API Management abstracts many details of the infrastructure used to host and run your service. You can create, manage, and scale most aspects of your API Management instance without needing to know about its underlying resources.
Most new instances created in service tiers other than the Consumption tier are
## What are the compute platforms for API Management?
-The following table summarizes the compute platforms currently used in the **Consumption**, **Developer**, **Basic**, **Standard**, and **Premium** tiers of API Management. This table doesn't apply to the [v2 pricing tiers (preview)](#what-about-the-v2-pricing-tiers).
+The following table summarizes the compute platforms currently used in the **Consumption**, **Developer**, **Basic**, **Standard**, and **Premium** tiers of API Management. This table doesn't apply to the [v2 pricing tiers](#what-about-the-v2-pricing-tiers).
| Version | Description | Architecture | Tiers |
| -| -| -- | - |
Migration steps depend on features enabled in your API Management instance. If t
## What about the v2 pricing tiers?
-The v2 pricing tiers are a new set of tiers for API Management currently in preview. Hosted on a new, highly scalable and available Azure infrastructure that's different from the `stv1` and `stv2` compute platforms, the v2 tiers aren't affected by the retirement of the `stv1` platform.
+The v2 pricing tiers are a new set of tiers for API Management. Hosted on a new, highly scalable and available Azure infrastructure that's different from the `stv1` and `stv2` compute platforms, the v2 tiers aren't affected by the retirement of the `stv1` platform.
The v2 tiers are designed to make API Management accessible to a broader set of customers and offer flexible options for a wider variety of scenarios. For more information, see [v2 tiers overview](v2-service-tiers-overview.md).
api-management Configure Credential Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-credential-connection.md
# Configure multiple connections + You can configure multiple connections to a credential provider in your API Management instance. For example, if you configured Microsoft Entra ID as a credential provider, you might need to create multiple connections for different scenarios and users. In this article, you learn how to add a connection to an existing provider, using credential manager in the portal. For an overview of credential manager, see [About API credentials and credential manager](credentials-overview.md).
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
# Configure a custom domain name for your Azure API Management instance + When you create an Azure API Management service instance in the Azure cloud, Azure assigns it an `azure-api.net` subdomain (for example, `apim-service-name.azure-api.net`). You can also expose your API Management endpoints using your own custom domain name, such as **`contoso.com`**. This article shows you how to map an existing custom DNS name to endpoints exposed by an API Management instance. > [!IMPORTANT]
API Management offers a free, managed TLS certificate for your domain, if you do
* Not supported in the following Azure regions: France South and South Africa West
* Currently available only in the Azure cloud
* Does not support root domain names (for example, `contoso.com`). Requires a fully qualified name such as `api.contoso.com`.
+* Supports only public domain names
* Can only be configured when updating an existing API Management instance, not when creating an instance
api-management Configure Graphql Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md
# Configure a GraphQL resolver ++ Configure a resolver to retrieve or set data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management as a GraphQL API. Currently, API Management supports resolvers that can access the following data sources:
You can define the resolver as follows:
For more resolver examples, see:
-* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+* [GraphQL resolver policies](api-management-policies.md#graphql-resolvers)
* [Sample APIs for Azure API Management](https://github.com/Azure-Samples/api-management-sample-apis)
api-management Cors Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cors-policy.md
Previously updated : 01/02/2024 Last updated : 03/18/2024 # CORS + The `cors` policy adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients. [!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
The `cors` policy adds cross-origin resource sharing (CORS) support to an operat
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes * You may configure the `cors` policy at more than one scope (for example, at the product scope and the global scope). Ensure that the `base` element is configured at the operation, API, and product scopes to inherit needed policies at the parent scopes.
This example demonstrates how to support [preflight requests](https://developer.
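A minimal sketch of the policy (the origin, methods, and headers are illustrative assumptions for a single trusted client domain):

```xml
<cors allow-credentials="false">
    <allowed-origins>
        <!-- List the browser client origins that may call the API (illustrative) -->
        <origin>https://client.contoso.example</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>Content-Type</header>
        <header>Authorization</header>
    </allowed-headers>
</cors>
```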
## Related policies
-* [API Management cross-domain policies](api-management-cross-domain-policies.md)
+* [Cross-domain](api-management-policies.md#cross-domain)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Cosmosdb Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cosmosdb-data-source-policy.md
Previously updated : 06/07/2023 Last updated : 03/18/2024 # Cosmos DB data source for a resolver + The `cosmosdb-data-source` resolver policy resolves data for an object type and field in a GraphQL schema by using a [Cosmos DB](../cosmos-db/introduction.md) data source. The schema must be imported to API Management as a GraphQL API. Use the policy to configure a single query request, read request, delete request, or write request and an optional response from the Cosmos DB data source.
Use the policy to configure a single query request, read request, delete request
## Usage - [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver-- [**Gateways:**](api-management-gateways-overview.md) dedicated
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2
### Usage notes
type Query {
## Related policies
-* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+* [GraphQL resolvers](api-management-policies.md#graphql-resolvers)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Credentials Configure Common Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-configure-common-providers.md
# Configure common credential providers in credential manager + In this article, you learn about configuring identity providers for managed [connections](credentials-overview.md) in your API Management instance. Settings for the following common providers are shown: * Microsoft Entra provider
api-management Credentials How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-azure-ad.md
# Configure credential manager - Microsoft Graph API + This article guides you through the steps required to create a managed [connection](credentials-overview.md) to the Microsoft Graph API within Azure API Management. The authorization code grant type is used in this example. You learn how to:
The preceding policy definition consists of two parts:
## Related content
-* Learn more about [access restriction policies](api-management-access-restriction-policies.md)
+* Learn more about [authentication and authorization policies](api-management-policies.md#authentication-and-authorization) in Azure API Management.
* Learn more about [scopes and permissions](../active-directory/develop/scopes-oidc.md) in Microsoft Entra ID.
api-management Credentials How To Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-github.md
# Configure credential manager - GitHub API + In this article, you learn how to create a managed [connection](credentials-overview.md) in API Management and call a GitHub API that requires an OAuth 2.0 token. The authorization code grant type is used in this example. You learn how to:
The preceding policy definition consists of three parts:
## Related content
-* Learn more about [access restriction policies](api-management-access-restriction-policies.md).
+* Learn more about [authentication and authorization policies](api-management-policies.md#authentication-and-authorization)
* Learn more about GitHub's [REST API](https://docs.github.com/en/rest?apiVersion=2022-11-28)
api-management Credentials How To User Delegated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-user-delegated.md
# Configure credential manager - user-delegated access to backend API + This article guides you through the high level steps to configure and use a managed [connection](credentials-overview.md) that grants Microsoft Entra users or groups delegated permissions to a backend OAuth 2.0 API. Follow these steps for scenarios when a client app (or bot) needs to access backend secured online resources on behalf of an authenticated user (for example, checking emails or placing an order). ## Scenario overview
In the preceding policy definition, replace:
## Related content
-* Learn more about [access restriction policies](api-management-access-restriction-policies.md)
+* Learn more about [authentication and authorization policies](api-management-policies.md#authentication-and-authorization)
* Learn more about [scopes and permissions](../active-directory/develop/scopes-oidc.md) in Microsoft Entra ID.
api-management Credentials Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-overview.md
# About API credentials and credential manager + To help you manage access to backend APIs, your API Management instance includes a *credential manager*. Use credential manager to manage, store, and control access to API credentials from your API Management instance. > [!NOTE]
All underlying connections and access policies are also deleted.
### Are the access tokens cached by API Management?
-In the dedicated service tiers, the access token is cached by the API Management instance until 3 minutes before the token expiration time. If the access token is less than 3 minutes away from expiration, the cached time will be until the access token expires.
+In the classic and v2 service tiers, the access token is cached by the API Management instance until 3 minutes before the token expiration time. If the access token is less than 3 minutes away from expiration, the cached time will be until the access token expires.
Access tokens aren't cached in the Consumption tier.
api-management Credentials Process Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-process-flow.md
# OAuth 2.0 connections in credential manager - process details and flows ++ This article provides details about the process flows for managing OAuth 2.0 connections using credential manager in Azure API Management. The process flows are divided into two parts: **management** and **runtime**. For background about credential manager in API Management, see [About credential manager and API credentials in API Management](credentials-overview.md).
api-management Cross Domain Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cross-domain-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Allow cross-domain calls + Use the `cross-domain` policy to make the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
Child elements must conform to the [Adobe cross-domain policy file specification
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
Child elements must conform to the [Adobe cross-domain policy file specification
## Related policies
-* [API Management cross-domain policies](api-management-cross-domain-policies.md)
+* [Cross-domain](api-management-policies.md#cross-domain)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Developer Portal Alternative Processes Self Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-alternative-processes-self-host.md
# Alternative approaches to self-host developer portal + There are several alternative approaches you can explore when you [self-host a developer portal](developer-portal-self-host.md): * Use production builds of the designer and the publisher.
api-management Developer Portal Basic Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-basic-authentication.md
# Configure users of the developer portal to authenticate using usernames and passwords + In the developer portal for Azure API Management, the default authentication method for users is to provide a username and password. In this article, learn how to set up users with basic authentication credentials to the developer portal. For an overview of options to secure the developer portal, see [Secure access to the API Management developer portal](secure-developer-portal-access.md).
For an overview of options to secure the developer portal, see [Secure access to
- Complete the [Create an Azure API Management instance](get-started-create-service-instance.md) quickstart. - [!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
# Extend the developer portal with custom widgets ++ The API Management [developer portal](api-management-howto-developer-portal.md) features a visual editor and built-in widgets so that you can customize and style the portal's appearance. However, you may need to customize the developer portal further with custom functionality. For example, you might want to integrate your developer portal with a support system that involves adding a custom interface. This article explains ways to add custom functionality such as custom widgets to your API Management developer portal.
-The following table summarizes three options, with links to more detail.
+The following table summarizes two options, with links to more detail.
|Method |Description |
|---|---|
|[Custom HTML code widget](#use-custom-html-code-widget) | - Lightweight solution for API publishers to add custom logic for basic use cases<br/><br/>- Copy and paste custom HTML code into a form, and developer portal renders it in an iframe |
|[Create and upload custom widget](#create-and-upload-custom-widget) | - Developer solution for more advanced widget use cases<br/><br/>- Requires local implementation in React, Vue, or plain TypeScript<br/><br/>- Widget scaffold and tools provided to help developers create widget and upload to developer portal<br/><br/>- Widget creation, testing, and deployment can be scripted through open source [React Component Toolkit](#create-custom-widgets-using-open-source-react-component-toolkit)<br/><br/>- Supports workflows for source control, versioning, and code reuse |
-|[Self-host developer portal](developer-portal-self-host.md) | - Legacy extensibility option for customers who need to customize source code of the entire portal core<br/><br/> - Gives complete flexibility for customizing portal experience<br/><br/>- Requires advanced configuration<br/><br/>- Customer responsible for managing complete code lifecycle: fork code base, develop, deploy, host, patch, and upgrade |
+
+> [!NOTE]
+> [Self-hosting the developer portal](developer-portal-self-host.md) is an extensibility option for customers who need to customize the source code of the entire portal core. It gives complete flexibility for customizing portal experience, but requires advanced configuration. With self-hosting, you're responsible for managing complete code lifecycle: fork code base, develop, deploy, host, patch, and upgrade.
+++ ## Use Custom HTML code widget The managed developer portal includes a **Custom HTML code** widget where you can insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe).
api-management Developer Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-faq.md
# API Management developer portal - frequently asked questions
-This article provides answers to frequently asked questions about the [developer portal](developer-portal-overview.md) in Azure API Management.
## What if I need functionality that isn't supported in the portal? You have the following options:
-* For small customizations, use a built-in widget to [add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget).
+* For small customizations, use a built-in widget to [add custom HTML](developer-portal-extend-custom-functionality.md#use-custom-html-code-widget). Currently, the custom HTML code widget isn't available in the v2 tiers of API Management.
-* For larger customizations, [create and upload](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) a custom widget to the managed developer portal.
+* For larger customizations, [create and upload](developer-portal-extend-custom-functionality.md#create-and-upload-custom-widget) a custom widget to the managed developer portal. Currently, custom widgets aren't available in the v2 tiers of API Management.
* [Self-host the developer portal](developer-portal-self-host.md), only if you need to make modifications to the core of the developer portal codebase.
api-management Developer Portal Integrate Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-application-insights.md
# Integrate Application Insights to developer portal + A popular feature of Azure Monitor is Application Insights. It's an extensible Application Performance Management (APM) service for developers and DevOps professionals. Use it to monitor your developer portal and detect performance anomalies. Application Insights includes powerful analytics tools to help you learn what users actually do while visiting your developer portal. ## Add Application Insights to your portal
api-management Developer Portal Integrate Google Tag Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-integrate-google-tag-manager.md
# Integrate Google Tag Manager to API Management developer portal + [Google Tag Manager](https://developers.google.com/tag-manager) is a tag management system created by Google. You can use it to manage JavaScript and HTML tags used for tracking and analytics on websites. For example, you can use Google Tag Manager to integrate Google Analytics, heatmaps, or chatbots like LiveChat. Follow the steps in this article to plug Google Tag Manager into your managed or self-hosted developer portal in Azure API Management.
api-management Developer Portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-overview.md
# Overview of the developer portal + The API Management *developer portal* is an automatically generated, fully customizable website with the documentation of your APIs. It's where API consumers can discover your APIs, learn how to use them, request access, and try them out. This article introduces features of the developer portal, the types of content the portal presents, and options to manage and extend the developer portal for your specific users and scenarios.
api-management Developer Portal Self Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-self-host.md
description: Learn how to self-host the developer portal for Azure API Management. Previously updated : 06/07/2022 Last updated : 03/29/2024 # Self-host the API Management developer portal + This tutorial describes how to self-host the [API Management developer portal](api-management-howto-developer-portal.md). Self-hosting is one of several options to [extend the functionality](developer-portal-extend-custom-functionality.md) of the developer portal. For example, you can self-host multiple portals for your API Management instance, with different features. When you self-host a portal, you become its maintainer and you're responsible for its upgrades. > [!IMPORTANT]
This tutorial describes how to self-host the [API Management developer portal](a
If you have already uploaded or modified media files in the managed portal, see [Move from managed to self-hosted](#move-from-managed-to-self-hosted-developer-portal), later in this article. - ## Prerequisites To set up a local development environment, you need to have:
api-management Developer Portal Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-testing.md
# Test the self-hosted developer portal + This article explains how to set up unit tests and end-to-end tests for your [self-hosted portal](developer-portal-self-host.md). ## Unit tests
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
# Use DevOps and CI/CD to publish APIs + With the strategic value of APIs in the enterprise, adopting DevOps continuous integration (CI) and deployment (CD) techniques has become an important aspect of API development. This article discusses the decisions you'll need to make to adopt DevOps principles for the management of APIs. API DevOps consists of three parts:
api-management Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/diagnose-solve-problems.md
Title: Azure API Management Diagnose and solve problems description: Learn how to troubleshoot issues with your API in Azure API Management with the Diagnose and Solve tool in the Azure portal. -+ Last updated 02/05/2021-+ # Azure API Management Diagnostics overview + When you build and manage an API in Azure API Management, you want to be prepared for any issues that may arise, from 404 not found errors to 502 bad gateway errors. API Management Diagnostics is an intelligent and interactive experience to help you troubleshoot your API published in APIM with no configuration required. When you do run into issues with your published APIs, API Management Diagnostics points out what's wrong, and guides you to the right information to quickly troubleshoot and resolve the issue. Although this experience is most helpful when you're having issues with your API within the last 24 hours, all the diagnostic graphs are always available for you to analyze. - ## Open API Management Diagnostics To access API Management Diagnostics, navigate to your API Management service instance in the [Azure portal](https://portal.azure.com). In the left navigation, select **Diagnose and solve problems**.
api-management Diagnostic Logs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/diagnostic-logs-reference.md
# Diagnostics logs settings reference: API Management + This reference describes settings for API diagnostics logging from an API Management instance. To enable logging of API requests, see the following guidance: * [Collect resource logs](api-management-howto-use-azure-monitor.md#resource-logs)
api-management Edit Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/edit-api.md
# Edit an API + The steps in this tutorial show you how to use API Management to edit an API. + You can add, rename, or delete operations in the Azure portal.
api-management Emit Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md
Previously updated : 06/02/2023 Last updated : 03/18/2024 # Emit custom metrics + The `emit-metric` policy sends custom metrics in the specified format to Application Insights. > [!NOTE]
The `emit-metric` policy sends custom metrics in the specified format to Applica
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The following example sends a custom metric to count the number of API requests
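A minimal sketch along those lines (the metric name, namespace, and dimensions are illustrative; dimensions with well-known names such as API ID can be specified without an explicit value):

```xml
<!-- Count each request as a custom metric, split by API and operation
     (metric name and namespace are illustrative) -->
<emit-metric name="Request" value="1" namespace="apim-custom-metrics">
    <dimension name="API ID" />
    <dimension name="Operation ID" />
</emit-metric>
```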
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Logging](api-management-policies.md#logging)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Enable Cors Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/enable-cors-power-platform.md
# Enable CORS policies for API Management custom connector ++ Cross-origin resource sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. Customers can add a [CORS policy](cors-policy.md) to their web APIs in Azure API Management, which adds cross-origin resource sharing support to an operation or an API to allow cross-domain calls from browser-based clients. If you've exported an API from API Management as a [custom connector](export-api-power-platform.md) in the Power Platform and want to use browser-based clients including Power Apps or Power Automate to call the API, you need to configure your API to explicitly enable cross-origin requests from Power Platform applications. This article shows you how to configure the following two necessary policy settings:
api-management Export Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-postman.md
# Export API definition to Postman for API testing and monitoring + To enhance development of your APIs, you can export an API fronted in API Management to [Postman](https://www.postman.com/product/what-is-postman/). Export an API definition from API Management as a Postman [collection](https://learning.postman.com/docs/getting-started/creating-the-first-collection/) so that you can use Postman's tools to design, document, test, monitor, and collaborate on APIs. ## Prerequisites
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
# Export APIs from Azure API Management to the Power Platform + Citizen developers using the Microsoft [Power Platform](https://powerplatform.microsoft.com) often need to reach the business capabilities that are developed by professional developers and deployed in Azure. [Azure API Management](https://aka.ms/apimrocks) enables professional developers to publish their backend service as APIs, and easily export these APIs to the Power Platform ([Power Apps](/powerapps/powerapps-overview) and [Power Automate](/power-automate/getting-started)) as custom connectors for discovery and consumption by citizen developers. This article walks through the steps in the Azure portal to create a Power Platform [custom connector](/connectors/custom-connectors/) to an API in API Management. With this capability, citizen developers can use the Power Platform to create and distribute apps that are based on internal and external APIs managed by API Management.
api-management Find And Replace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/find-and-replace-policy.md
Previously updated : 12/02/2022 Last updated : 03/18/2024 # Find and replace string in body++ The `find-and-replace` policy finds a request or response substring and replaces it with a different substring. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `find-and-replace` policy finds a request or response substring and replaces
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
The `find-and-replace` policy finds a request or response substring and replaces
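A minimal sketch of the policy (the substrings are illustrative assumptions):

```xml
<!-- Replace every occurrence of one substring with another in the body (values are illustrative) -->
<find-and-replace from="notebook" to="laptop" />
```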
## Related policies
-* [API Management transformation policies](api-management-transformation-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
Previously updated : 10/19/2023 Last updated : 03/18/2024 # Forward request + The `forward-request` policy forwards the incoming request to the backend service specified in the request [context](api-management-policy-expressions.md#ContextVariables). The backend service URL is specified in the API [settings](./import-and-publish.md) and can be changed using the [set backend service](api-management-transformation-policies.md) policy. > [!IMPORTANT]
The `forward-request` policy forwards the incoming request to the backend servic
- [**Policy sections:**](./api-management-howto-policies.md#sections) backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Examples
This operation level policy doesn't forward requests to the backend service.
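By contrast, a minimal sketch that does forward requests, with an explicit timeout (attribute values are illustrative assumptions):

```xml
<backend>
    <!-- Forward the request to the configured backend; fail after 60 seconds,
         and don't follow backend redirects (values are illustrative) -->
    <forward-request timeout="60" follow-redirects="false" />
</backend>
```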
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Routing](api-management-policies.md#routing)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Front Door Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/front-door-api-management.md
# Configure Front Door Standard/Premium in front of Azure API Management + Azure Front Door is a modern application delivery network platform providing a secure, scalable content delivery network (CDN), dynamic site acceleration, and global HTTP(S) load balancing for your global web applications. When used in front of API Management, Front Door can provide TLS offloading, end-to-end TLS, load balancing, response caching of GET requests, and a web application firewall, among other capabilities. For a full list of supported features, see [What is Azure Front Door?](../frontdoor/front-door-overview.md) [!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)]
api-management Gateway Log Schema Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/gateway-log-schema-reference.md
# Reference: API Management resource log schema + This article provides a schema reference for the Azure API Management GatewayLogs resource log. Log entries also include fields in the [top-level common schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). To enable collection of the resource log in API Management, see [Monitor published APIs](api-management-howto-use-azure-monitor.md#resource-logs).
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
Previously updated : 11/15/2023 Last updated : 03/18/2024 # Get authorization context + Use the `get-authorization-context` policy to get the authorization context of a specified [connection](credentials-overview.md) (formerly called an *authorization*) to a credential provider that is configured in the API Management instance. The policy fetches and stores authorization and refresh tokens from the configured credential provider using the connection.
class Authorization
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption
### Usage notes
class Authorization
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Get Started Create Service Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance-cli.md
ms.devlang: azurecli
# Quickstart: Create a new Azure API Management instance by using the Azure CLI + This quickstart describes the steps for creating a new API Management instance by using Azure CLI commands. After creating an instance, you can use the Azure CLI for common management tasks such as importing APIs in your API Management instance. [!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)]
api-management Get Started Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance.md
# Quickstart: Create a new Azure API Management instance by using the Azure portal + This quickstart describes the steps for creating a new API Management instance using the Azure portal. After creating an instance, you can use the Azure portal for common management tasks such as importing APIs in your API Management instance. [!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)]
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
# Import a GraphQL API + [!INCLUDE [api-management-graphql-intro.md](../../includes/api-management-graphql-intro.md)] In this article, you'll:
If your GraphQL API supports a subscription, you can test it in the test consol
## Secure your GraphQL API
-Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks.
+Secure your GraphQL API by applying both existing [authentication and authorization policies](api-management-policies.md#authentication-and-authorization) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks.
[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
api-management Graphql Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-apis-overview.md
# Overview of GraphQL APIs in Azure API Management + You can use API Management to manage GraphQL APIs - APIs based on the GraphQL query language. GraphQL provides a complete and understandable description of the data in an API, giving clients the power to efficiently retrieve exactly the data they need. [Learn more about GraphQL](https://graphql.org/learn/) API Management helps you import, manage, protect, test, publish, and monitor GraphQL APIs. You can choose one of two API models:
api-management Graphql Schema Resolve Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md
Last updated 05/31/2023
# Add a synthetic GraphQL API and set up field resolvers [!INCLUDE [api-management-graphql-intro.md](../../includes/api-management-graphql-intro.md)]
type User {
## Secure your GraphQL API
-Secure your GraphQL API by applying both existing [access control policies](api-management-policies.md#access-restriction-policies) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks.
+Secure your GraphQL API by applying both existing [authentication and authorization policies](api-management-policies.md#authentication-and-authorization) and a [GraphQL validation policy](validate-graphql-request-policy.md) to protect against GraphQL-specific attacks.
[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
api-management Grpc Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/grpc-api.md
# Import a gRPC API (preview) + This article shows how to import a gRPC service definition as an API in API Management. You can then manage the API in API Management, secure access and apply other policies, and pass gRPC API requests through the gateway to the gRPC backend. To add a gRPC API to API Management, you need to:
API Management supports pass-through with the following types of gRPC service me
> * Importing a gRPC API is in preview. Currently, gRPC APIs are only supported in the self-hosted gateway, not the managed gateway for your API Management instance. > * Currently, testing gRPC APIs isn't supported in the test console of the Azure portal or in the API Management developer portal. - ## Prerequisites * An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/high-availability.md
# Ensure API Management availability and reliability This article introduces service capabilities and considerations to ensure that your API Management instance continues to serve API requests if Azure outages occur.
api-management How To Configure Cloud Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-cloud-metrics-logs.md
# Configure cloud metrics and logs for Azure API Management self-hosted gateway + This article provides details for configuring cloud metrics and logs for the [self-hosted gateway](./self-hosted-gateway-overview.md). The self-hosted gateway has to be associated with an API management service and requires outbound TCP/IP connectivity to Azure on port 443. The gateway leverages the outbound connection to send telemetry to Azure, if configured to do so. - ## Metrics By default, the self-hosted gateway emits a number of metrics through [Azure Monitor](https://azure.microsoft.com/services/monitor/), same as the managed gateway [in the cloud](api-management-howto-use-azure-monitor.md).
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
# Configure local metrics and logs for Azure API Management self-hosted gateway
-This article provides details for configuring local metrics and logs for the [self-hosted gateway](./self-hosted-gateway-overview.md) deployed on a Kubernetes cluster. For configuring cloud metrics and logs, see [this article](how-to-configure-cloud-metrics-logs.md).
- [!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)]
+This article provides details for configuring local metrics and logs for the [self-hosted gateway](./self-hosted-gateway-overview.md) deployed on a Kubernetes cluster. For configuring cloud metrics and logs, see [this article](how-to-configure-cloud-metrics-logs.md).
+ ## Metrics The self-hosted gateway supports [StatsD](https://github.com/statsd/statsd), which has become a unifying protocol for metrics collection and aggregation. This section walks through the steps for deploying StatsD to Kubernetes, configuring the gateway to emit metrics via StatsD, and using [Prometheus](https://prometheus.io/) to monitor the metrics.
api-management How To Configure Service Fabric Backend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-service-fabric-backend.md
# Set up a Service Fabric backend in API Management using the Azure portal + This article shows how to configure a [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) service as a custom API backend using the Azure portal. For demonstration purposes, it shows how to set up a basic stateless ASP.NET Core Reliable Service as the Service Fabric backend. For background, see [Backends in API Management](backends.md).
api-management How To Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-create-workspace.md
# Set up a workspace
-Set up a [workspace](workspaces-overview.md) (preview) to enable a decentralized API development team to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. After you create a workspace and assign permissions, workspace collaborators can create and manage their own APIs, products, subscriptions, and related resources.
- [!INCLUDE [api-management-availability-premium](../../includes/api-management-availability-premium.md)]
+Set up a [workspace](workspaces-overview.md) (preview) to enable a decentralized API development team to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. After you create a workspace and assign permissions, workspace collaborators can create and manage their own APIs, products, subscriptions, and related resources.
+ > [!NOTE] > * Workspaces are a preview feature of API Management and subject to certain [limitations](workspaces-overview.md#preview-limitations). > * Workspaces are supported in API Management REST API version 2022-09-01-preview or later.
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
Last updated 06/12/2023
# Deploy an Azure API Management gateway on Azure Arc (preview) + With the integration between Azure API Management and [Azure Arc on Kubernetes](../azure-arc/kubernetes/overview.md), you can deploy the API Management gateway component as an [extension in an Azure Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/extensions.md). Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster expands API Management support for hybrid and multicloud environments. Enable the deployment using a cluster extension to make managing and applying policies to your Azure Arc-enabled cluster a consistent experience.
Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster
> [!NOTE] > You can also deploy the self-hosted gateway [directly to Kubernetes](./how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md). - ## Prerequisites * [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within a supported Azure Arc region.
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
Last updated 06/11/2021
-# Deploy to Azure Kubernetes Service
+# Deploy an Azure API Management self-hosted gateway to Azure Kubernetes Service
+ This article provides the steps for deploying the self-hosted gateway component of Azure API Management to [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/). For deploying the self-hosted gateway to a Kubernetes cluster, see the how-to article for deployment by using a [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md) or [with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). - ## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-container-apps.md
# Deploy an Azure API Management self-hosted gateway to Azure Container Apps + This article provides the steps to deploy the [self-hosted gateway](self-hosted-gateway-overview.md) component of Azure API Management to [Azure Container Apps](../container-apps/overview.md). Deploy a self-hosted gateway to a container app to access APIs that are hosted in the same Azure Container Apps environment. - ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
# Deploy an Azure API Management self-hosted gateway to Docker + This article provides the steps for deploying the self-hosted gateway component of Azure API Management to a Docker environment. [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)]
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > Hosting the self-hosted gateway in Docker is best suited for evaluation and development use cases. Kubernetes is recommended for production use. To deploy the self-hosted gateway to Kubernetes, see how to [deploy with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) or by using a [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md). - ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
Last updated 12/21/2021
-# Deploy to Kubernetes with Helm
+# Deploy self-hosted gateway to Kubernetes with Helm
+ [Helm][helm] is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. It allows you to manage Kubernetes charts, which are packages of pre-configured Kubernetes resources.
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). - ## Prerequisites - Create a Kubernetes cluster, or have access to an existing one.
api-management How To Deploy Self Hosted Gateway Kubernetes Opentelemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md
Last updated 12/17/2021
# Deploy self-hosted gateway to Kubernetes with OpenTelemetry integration + This article describes the steps for deploying the self-hosted gateway component of Azure API Management to a Kubernetes cluster and automatically sending all metrics to an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/). [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-opentelemetry.md)]
You learn how to:
> * Generate metrics by consuming APIs on the self-hosted gateway. > * Use the metrics from the OpenTelemetry Collector. - ## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
Last updated 05/22/2023
# Deploy a self-hosted gateway to Kubernetes with YAML + This article describes the steps for deploying the self-hosted gateway component of Azure API Management to a Kubernetes cluster. [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)]
This article describes the steps for deploying the self-hosted gateway component
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). - ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management How To Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-event-grid.md
# Send events from API Management to Event Grid + API Management integrates with [Azure Event Grid](../event-grid/overview.md) so that you can send event notifications to other services and trigger downstream processes. Event Grid is a fully managed event routing service that uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../logic-apps/logic-apps-overview.md), and can deliver event alerts to non-Azure services using webhooks. For example, using integration with Event Grid, you can build an application that updates a database, creates a billing account, and sends an email notification each time a user is added to your API Management instance.
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
Last updated 01/17/2023
# Guidance for running self-hosted gateway on Kubernetes in production + In order to run the self-hosted gateway in production, there are various aspects to keep in mind. For example, it should be deployed in a highly available manner, use configuration backups to handle temporary disconnects, and more. This article provides guidance on how to run the [self-hosted gateway](./self-hosted-gateway-overview.md) on Kubernetes for production workloads to ensure that it runs smoothly and reliably. [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] - ## Access token Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation, before it expires.
api-management How To Server Sent Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md
Last updated 02/24/2022
# Configure API for server-sent events + This article provides guidelines for configuring an API in API Management that implements server-sent events (SSE). SSE is based on the HTML5 `EventSource` standard for streaming (pushing) data automatically to a client over HTTP after a client has established a connection. > [!TIP]
This article provides guidelines for configuring an API in API Management that i
- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md). - An API that implements SSE. [Import and publish](import-and-publish.md) the API to your API Management instance using one of the supported import methods. - ## Guidelines for SSE Follow these guidelines when using API Management to reach a backend API that implements SSE.
-* **Choose service tier for long-running HTTP connections** - SSE relies on a long-running HTTP connection. Long-running connections are supported in the dedicated API Management tiers, but not in the Consumption tier.
+* **Choose service tier for long-running HTTP connections** - SSE relies on a long-running HTTP connection that is supported in certain API Management [pricing tiers](api-management-key-concepts.md#api-management-tiers). Long-running connections are supported in the classic and v2 API Management tiers, but not in the Consumption tier.
* **Keep idle connections alive** - If a connection between client and backend could be idle for 4 minutes or longer, implement a mechanism to keep the connection alive. For example, enable a TCP keepalive signal at the backend of the connection, or send traffic from the client side at least once every 4 minutes.
Follow these guidelines when using API Management to reach a backend API that im
* **Avoid logging request/response body for Azure Monitor, Application Insights, and Event Hubs** - You can configure API request logging for Azure Monitor or Application Insights using diagnostic settings. The diagnostic settings allow you to log the request/response body at various stages of the request execution. For APIs that implement SSE, this can cause unexpected buffering which can lead to problems. Diagnostic settings for Azure Monitor and Application Insights configured at the global/All APIs scope apply to all APIs in the service. You can override the settings for individual APIs as needed. When logging to Event Hubs, you configure the scope and amount of context information for request/response logging by using the [log-to-eventhubs](api-management-howto-log-event-hubs.md#configure-log-to-eventhub-policy). For APIs that implement SSE, ensure you have disabled request/response body logging for Azure Monitor, Application Insights, and Event Hubs.
-* **Disable response caching** - To ensure that notifications to the client are timely, verify that [response caching](api-management-howto-cache.md) isn't enabled. For more information, see [API Management caching policies](api-management-caching-policies.md).
+* **Disable response caching** - To ensure that notifications to the client are timely, verify that [response caching](api-management-howto-cache.md) isn't enabled. For more information, see [API Management caching policies](api-management-policies.md#caching).
* **Test API under load** - Follow general practices to test your API under load to detect performance or configuration issues before going into production.
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
# Protect serverless APIs with Azure API Management and Azure AD B2C for consumption from a SPA + This scenario shows you how to configure your Azure API Management instance to protect an API. We'll use the Azure AD B2C SPA (Auth Code + PKCE) flow to acquire a token, alongside API Management to secure an Azure Functions backend using EasyAuth.
Open the Azure AD B2C blade in the portal and do the following steps.
> > We still have no IP security applied, if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management. >
- > If you're using the API Management Consumption, Basic v2, and Standard v2 tiers then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-basic-v2-and-standard-v2-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management dedicated tiers [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the tiers that run on shared infrastructure, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for these tiers - steps 12-17 below do not apply.
+ > If you're using the API Management Consumption, Basic v2, and Standard v2 tiers then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-basic-v2-and-standard-v2-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management classic (dedicated) tiers [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the tiers that run on shared infrastructure, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for these tiers - steps 12-17 below do not apply.
1. Close the 'Authentication' blade from the App Service / Functions portal. 1. Open the *API Management blade of the portal*, then open *your instance*.
api-management Howto Use Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-use-analytics.md
Title: Use API analytics in Azure API Management | Microsoft Docs
-description: Use analytics in Azure API Management to help you understand and categorize the usage of your APIs and API performance.
+description: Use analytics in Azure API Management to understand and categorize the usage of your APIs and API performance. Analytics is provided using an Azure workbook.
Previously updated : 02/23/2022 Last updated : 03/26/2024 + # Get API analytics in Azure API Management
-Azure API Management provides built-in analytics for your APIs. Analyze the usage and performance of the APIs in your API Management instance across several dimensions, including:
+
+Azure API Management provides analytics for your APIs so that you can analyze their usage and performance. Use analytics for high-level monitoring and troubleshooting of your APIs. For other monitoring features, including near real-time metrics and resource logs for diagnostics and auditing, see [Tutorial: Monitor published APIs](api-management-howto-use-azure-monitor.md).
+## About API analytics
+
+* API Management provides analytics using an [Azure Monitor-based dashboard](../azure-monitor/visualize/workbooks-overview.md). The dashboard aggregates data in an Azure Log Analytics workspace.
+
+* In the classic API Management service tiers, your API Management instance also includes legacy *built-in analytics* in the Azure portal, and analytics data can be accessed using the API Management REST API. Equivalent data is shown in the Azure Monitor-based dashboard and built-in analytics.
+
+> [!IMPORTANT]
+> * The Azure Monitor-based dashboard is the recommended way to access analytics data.
+> * Legacy built-in analytics isn't available in the v2 tiers.
+
+With API analytics, analyze the usage and performance of the APIs in your API Management instance across several dimensions, including:
* Time * Geography
Azure API Management provides built-in analytics for your APIs. Analyze the usag
* Requests > [!NOTE]
-> * API analytics provides data on requests (including failed and unauthorized requests) that are matched with an API and operation. Other calls aren't reported.
+> * API analytics provides data on requests, including failed and unauthorized requests.
> * Geography values are approximate based on IP address mapping.
+> * There may be a delay of 15 minutes or more in the availability of analytics data.
+## Azure Monitor-based dashboard
-Use analytics for high-level monitoring and troubleshooting of your APIs. For additional monitoring features, including near real-time metrics and resource logs for diagnostics and auditing, see [Tutorial: Monitor published APIs](api-management-howto-use-azure-monitor.md).
+To use the Azure Monitor-based dashboard, you need to configure a Log Analytics workspace as a data source for API Management gateway logs.
+If you need to configure one, follow these brief steps to send gateway logs to a Log Analytics workspace. This is a one-time setup. For more information, see [Tutorial: Monitor published APIs](api-management-howto-use-azure-monitor.md#resource-logs).
-## Analytics - portal
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left-hand menu, under **Monitoring**, select **Diagnostic settings** > **+ Add diagnostic setting**.
+1. Enter a descriptive name for the diagnostic setting.
+1. In **Logs**, select **Logs related to ApiManagement Gateway**.
+1. In **Destination details**, select **Send to Log Analytics** and select a Log Analytics workspace in the same or a different subscription. If you need to create a workspace, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
+1. Accept defaults for other settings, or customize as needed. Select **Save**.
-Use the Azure portal to review analytics data at a glance for your API Management instance.
+### Access the dashboard
-1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
-1. In the left-hand menu, under **Monitoring**, select **Analytics**.
+After a Log Analytics workspace is configured, access the Azure Monitor-based dashboard to analyze the usage and performance of your APIs.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left-hand menu, under **Monitoring**, select **Insights**. The analytics dashboard opens.
+1. Select a time range for data.
+1. Select a report category for analytics data, such as **Timeline**, **Geography**, and so on.
+
+## Legacy built-in analytics
- :::image type="content" source="media/howto-use-analytics/monitoring-menu-analytics.png" alt-text="Select analytics for API Management instance in portal":::
+In certain API Management service tiers, built-in analytics is also available in the Azure portal, and analytics data can be accessed using the API Management REST API.
+
+### Built-in analytics - portal
+
+To access the built-in analytics in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left-hand menu, under **Monitoring**, select **Analytics**.
1. Select a time range for data, or enter a custom time range. 1. Select a report category for analytics data, such as **Timeline**, **Geography**, and so on. 1. Optionally, filter the report by one or more additional categories.
-## Analytics - REST API
+### Analytics - REST API
-Use [Reports](/rest/api/apimanagement/current-ga/reports) operations in the API Management REST API to retrieve and filter analytics data for your API Management instance.
+Use [Reports](/rest/api/apimanagement/reports) operations in the API Management REST API to retrieve and filter analytics data for your API Management instance.
Available operations return report records by API, geography, API operations, product, request, subscription, time, or user.
-## Next steps
+## Related content
* For an introduction to Azure Monitor features in API Management, see [Tutorial: Monitor published APIs](api-management-howto-use-azure-monitor.md) * For detailed HTTP logging and monitoring, see [Monitor your APIs with Azure API Management, Event Hubs, and Moesif](api-management-log-to-eventhub-sample.md).
-* Learn about integrating [Azure API Management with Azure Application Insights](api-management-howto-app-insights.md).
+* Learn about integrating [Azure API Management with Azure Application Insights](api-management-howto-app-insights.md).
api-management Http Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md
# HTTP data source for a resolver + The `http-data-source` resolver policy configures the HTTP request and optionally the HTTP response to resolve data for an object type and field in a GraphQL schema. The schema must be imported to API Management as a GraphQL API. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
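As a rough sketch (the backend URL and the resolved field are hypothetical), a resolver that fetches a field's data with an HTTP GET might look like the following:

```xml
<http-data-source>
    <http-request>
        <set-method>GET</set-method>
        <!-- Hypothetical backend endpoint that returns data for the resolved field -->
        <set-url>https://data.contoso.com/api/users</set-url>
    </http-request>
</http-data-source>
```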
The `http-data-source` resolver policy configures the HTTP request and optionall
## Usage - [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption
### Usage notes
For this example, we mock the customer results from an external source, and hard
## Related policies
-* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+* [GraphQL resolvers](api-management-policies.md#graphql-resolvers)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Import And Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-and-publish.md
# Tutorial: Import and publish your first API + This tutorial shows how to import an OpenAPI specification backend API in JSON format into Azure API Management. Microsoft provides the backend API used in this example, and hosts it on Azure at `https://conferenceapi.azurewebsites.net`. Once you import the backend API into API Management, your API Management API becomes a façade for the backend API. You can customize the façade to your needs in API Management without touching the backend API. For more information, see [Transform and protect your API](transform-api.md).
api-management Import Api From Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-oas.md
# Import an OpenAPI specification + This article shows how to import an "OpenAPI specification" backend API residing at `https://conferenceapi.azurewebsites.net?format=json`. This backend API is provided by Microsoft and hosted on Azure. The article also shows how to test the APIM API. In this article, you learn how to:
After importing the API, if needed, you can update the settings by using the [Se
## Validate against an OpenAPI specification
-You can configure API Management [validation policies](api-management-policies.md#validation-policies) to validate requests and responses (or elements of them) against the schema in an OpenAPI specification. For example, use the [validate-content](validate-content-policy.md) policy to validate the size or content of a request or response body.
+You can configure API Management [validation policies](api-management-policies.md#content-validation) to validate requests and responses (or elements of them) against the schema in an OpenAPI specification. For example, use the [validate-content](validate-content-policy.md) policy to validate the size or content of a request or response body.
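For example, a minimal `validate-content` sketch that blocks JSON request bodies larger than 100 KB or not matching the imported schema (the size and variable name are illustrative):

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <!-- Validate JSON payloads against the schema from the imported OpenAPI specification -->
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```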
## Next steps
api-management Import Api From Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-odata.md
# Import an OData API + This article shows how to import an OData-compliant service as an API in API Management. In this article, you learn how to:
api-management Import App Service As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-app-service-as-api.md
# Import an Azure Web App as an API + This article shows how to import an Azure Web App to Azure API Management and test the imported API, using the Azure portal. > [!NOTE]
api-management Import Container App With Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-container-app-with-oas.md
# Import an Azure Container App as an API + This article shows how to import an Azure Container App to Azure API Management and test the imported API using the Azure portal. In this article, you learn how to: > [!div class="checklist"]
api-management Import Function App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-function-app-as-api.md
# Import an Azure Function App as an API in Azure API Management + Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management. This article walks through importing and testing an Azure Function App as an API in Azure API Management.
api-management Import Logic App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-logic-app-as-api.md
# Import a Logic App as an API + This article shows how to import a Logic App as an API and test the imported API. In this article, you learn how to:
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
# Import SOAP API to API Management + This article shows how to import a WSDL specification, which is a standard XML representation of a SOAP API. The article also shows how to test the API in API Management. In this article, you learn how to:
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Include fragment + The `include-fragment` policy inserts the contents of a previously created [policy fragment](policy-fragments.md) in the policy definition. A policy fragment is a centrally managed, reusable XML policy snippet that can be included in policy definitions in your API Management instance. The policy inserts the policy fragment as-is at the location you select in the policy definition.
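For instance, a minimal sketch that inserts a fragment named *myFragment* into an inbound section (the fragment name is illustrative):

```xml
<inbound>
    <base />
    <!-- The fragment's XML is inserted as-is at this point in the definition -->
    <include-fragment fragment-id="myFragment" />
</inbound>
```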
The policy inserts the policy fragment as-is at the location you select in the p
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
In the following example, the policy fragment named *myFragment* is added in the
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Policy control and flow](api-management-policies.md#policy-control-and-flow)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Integrate Vnet Outbound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/integrate-vnet-outbound.md
Previously updated : 11/20/2023 Last updated : 03/13/2024
-# Integrate an Azure API Management instance with a private VNet for outbound connections (preview)
+# Integrate an Azure API Management instance with a private VNet for outbound connections
+ This article guides you through the process of configuring *VNet integration* for your Azure API Management instance so that your API Management instance can make outbound requests to API backends that are isolated in the network.
When an API Management instance is integrated with a virtual network for outboun
:::image type="content" source="./media/integrate-vnet-outbound/vnet-integration.svg" alt-text="Diagram of integrating API Management instance with a delegated subnet." ::: - ## Prerequisites - An Azure API Management instance in the [Standard v2](v2-service-tiers-overview.md) pricing tier
api-management Invoke Dapr Binding Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/invoke-dapr-binding-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Trigger output binding + The `invoke-dapr-binding` policy instructs the API Management gateway to trigger an outbound Dapr [binding](https://github.com/dapr/docs/blob/master/README.md). The policy accomplishes that by making an HTTP POST request to `http://localhost:3500/v1.0/bindings/{{bind-name}}`, replacing the template parameter and adding content specified in the policy statement. The policy assumes that the Dapr runtime is running in a sidecar container in the same pod as the gateway. The Dapr runtime is responsible for invoking the external resource represented by the binding. Learn more about [Dapr integration with API Management](self-hosted-gateway-enable-dapr.md).
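As a minimal sketch, assuming a Dapr binding named `orders-queue` has been configured, the policy could forward the request body to the binding like this:

```xml
<!-- Forward the original request body to the hypothetical 'orders-queue' binding -->
<invoke-dapr-binding name="orders-queue">
    <data>@(context.Request.Body.As<string>(preserveContent: true))</data>
</invoke-dapr-binding>
```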
The "backend" section is empty and the request is not forwarded to the backend.
## Related policies
-* [API Management Dapr integration policies](api-management-dapr-policies.md)
+* [Integration and external communication](api-management-policies.md#integration-and-external-communication)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Ip Filter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/ip-filter-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Restrict caller IPs + The `ip-filter` policy filters (allows/denies) calls from specific IP addresses and/or address ranges. [!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
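A minimal sketch, using illustrative addresses, that allows calls only from a specific address and an address range:

```xml
<ip-filter action="allow">
    <!-- Illustrative addresses; replace with your allowed callers -->
    <address>13.66.201.169</address>
    <address-range from="13.66.140.128" to="13.66.140.143" />
</ip-filter>
```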
The `ip-filter` policy filters (allows/denies) calls from specific IP addresses
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
In the following example, the policy only allows requests coming either from the
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Json To Xml Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Convert JSON to XML + The `json-to-xml` policy converts a request or response body from JSON to XML. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
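For example, a minimal sketch in the outbound section that converts the JSON response to XML when the client's `Accept` header requests XML:

```xml
<outbound>
    <base />
    <!-- Convert the JSON response body to XML when the Accept header asks for XML -->
    <json-to-xml apply="always" consider-accept-header="true" parse-date="false" />
</outbound>
```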
The `json-to-xml` policy converts a request or response body from JSON to XML.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
The XML response to the client will be:
## Related policies
-* [API Management transformation policies](api-management-transformation-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Jsonp Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/jsonp-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # JSONP + The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients. JSONP is a method used in JavaScript programs to request data from a server in a different domain. JSONP bypasses the limitation enforced by most web browsers where access to web pages must be in the same domain. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
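A minimal sketch in the outbound section; with `callback-parameter-name="cb"`, a request carrying `?cb=XXX` gets its JSON response wrapped in a call to the `XXX` function:

```xml
<outbound>
    <base />
    <!-- Wrap JSON responses in the JavaScript function named by the 'cb' query parameter -->
    <jsonp callback-parameter-name="cb" />
</outbound>
```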
The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
If you add the callback parameter `?cb=XXX`, it will return a JSONP result, wrap
## Related policies
-* [API Management cross-domain policies](api-management-cross-domain-policies.md)
+* [Cross-domain](api-management-policies.md#cross-domain)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Limit Concurrency Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/limit-concurrency-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Limit concurrency + The `limit-concurrency` policy prevents enclosed policies from being executed by more than the specified number of requests at any time. When that number is exceeded, new requests fail immediately with a `429 Too Many Requests` status code. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
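A minimal sketch, assuming a `connectionId` context variable was set earlier in the pipeline, that limits backend forwarding to three concurrent requests per key:

```xml
<limit-concurrency key="@((string)context.Variables["connectionId"])" max-count="3">
    <!-- At most three requests per key execute this forward at any one time -->
    <forward-request timeout="120" />
</limit-concurrency>
```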
The `limit-concurrency` policy prevents enclosed policies from executing by more
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
The following example demonstrates how to limit number of requests forwarded to
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Log To Eventhub Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/log-to-eventhub-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Log to event hub + The `log-to-eventhub` policy sends messages in the specified format to an event hub defined by a [Logger](/rest/api/apimanagement/current-ga/logger) entity. As its name implies, the policy is used for saving selected request or response context information for online or offline analysis. > [!NOTE]
The `log-to-eventhub` policy sends messages in the specified format to an event
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
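As a rough sketch (the logger ID is hypothetical and must reference an existing Logger entity), the policy body is an expression that returns the string to log:

```xml
<log-to-eventhub logger-id="contoso-logger">
    @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress) )
</log-to-eventhub>
```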
### Usage notes
Any string can be used as the value to be logged in Event Hubs. In this example
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Integration and external communication](api-management-policies.md#integration-and-external-communication)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Migrate Stv1 To Stv2 No Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-no-vnet.md
# Migrate a non-VNet-injected API Management instance to the stv2 compute platform + This article provides steps to migrate an API Management instance hosted on the `stv1` compute platform in-place to the `stv2` platform when the instance *is not* injected (deployed) in an external or internal VNet. For this scenario, migrate your instance using the Azure portal or the [Migrate to stv2](/rest/api/apimanagement/current-g) REST API. [Find out if you need to do this](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance). If you need to migrate a *VNet-injected* API Management instance hosted on the `stv1` platform, see [Migrate a VNet-injected API Management instance to the stv2 platform](migrate-stv1-to-stv2-vnet.md).
If you need to migrate a *VNet-injected* API Management hosted on the `stv1` pl
> * Depending on your migration process, you might have temporary downtime during migration, and you might need to update your network dependencies after migration to reach your API Management instance. Plan your migration accordingly. > * Migration to `stv2` is not reversible. - ## What happens during migration? API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/API configuration persisted in the storage layer. For an instance that's not deployed in a VNet:
api-management Migrate Stv1 To Stv2 Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-vnet.md
# Migrate a VNet-injected API Management instance hosted on the stv1 platform to stv2 + This article provides steps to migrate an API Management instance hosted on the `stv1` compute platform in-place to the `stv2` platform when the instance is injected (deployed) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet. For this scenario, migrate your instance by updating the VNet configuration settings. [Find out if you need to do this](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance). If you need to migrate a *non-VNet-injected* API Management instance hosted on the `stv1` platform, see [Migrate a non-VNet-injected API Management instance to the stv2 platform](migrate-stv1-to-stv2-no-vnet.md).
If you need to migrate a *non-VNet-injected* API Management hosted on the `stv1
> * The VIP address of your instance will change. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address. Plan your migration accordingly. > * Migration to `stv2` is not reversible. - ## What happens during migration? API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/API configuration persisted in the storage layer.
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
# Migrate an API Management instance hosted on the stv1 platform to stv2 + Here we help you find guidance to migrate your API Management instance hosted on the `stv1` compute platform to the newer `stv2` platform. [Find out if you need to do this](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance). There are two different migration scenarios, depending on whether or not your API Management instance is currently deployed (injected) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet. Choose the migration guide for your scenario. Both scenarios migrate an existing instance in-place to the `stv2` platform.
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
# Recommendations to mitigate OWASP API Security Top 10 threats using API Management + The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Foundation works to improve software security through its community-led open source software projects, hundreds of chapters worldwide, tens of thousands of members, and by hosting local and global conferences. The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP.
More information about this threat: [API2:2019 Broken User Authentication](https
Use API Management for user authentication and authorization:
-* **Authentication** - API Management supports the following [authentication methods](api-management-authentication-policies.md):
+* **Authentication** - API Management supports the following [authentication methods](api-management-policies.md#authentication-and-authorization):
* [Basic authentication](authentication-basic-policy.md) policy - Username and password credentials.
Use API Management for user authentication and authorization:
More recommendations:
-* Use [access restriction policies](api-management-access-restriction-policies.md) in API Management to increase security. For example, [call rate limiting](rate-limit-policy.md) slows down bad actors using brute force attacks to compromise credentials.
+* Use policies in API Management to increase security. For example, [call rate limiting](rate-limit-policy.md) slows down bad actors using brute force attacks to compromise credentials (see the sketch after this list).
* APIs should use TLS/SSL (transport security) to protect the credentials or tokens. Credentials and tokens should be sent in request headers and not as query parameters.
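For illustration, a minimal `rate-limit` sketch with illustrative numbers that throttles each subscription to 20 calls per 90 seconds, blunting brute-force credential attacks:

```xml
<inbound>
    <base />
    <!-- Illustrative limits: 20 calls per subscription per 90-second window -->
    <rate-limit calls="20" renewal-period="90" />
</inbound>
```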
More information about this threat: [API3:2019 Excessive Data Exposure](https://
* [Versions](api-management-versions.md) for breaking changes, for example, the removal of a field from an interface.
-* If it's not possible to alter the backend interface design and excessive data is a concern, use API Management [transformation policies](transform-api.md) to rewrite response payloads and mask or filter data. For example, [remove unneeded JSON properties](./policies/filter-response-content.md) from a response body.
+* If it's not possible to alter the backend interface design and excessive data is a concern, use API Management [transformation policies](api-management-policies.md#transformation) to rewrite response payloads and mask or filter data. For example, [remove unneeded JSON properties](./policies/filter-response-content.md) from a response body (see the sketch after this list).
* [Response content validation](validate-content-policy.md) in API Management can be used with an XML or JSON schema to block responses with undocumented properties or improper values. The policy also supports blocking responses exceeding a specified size.
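As a hedged sketch of such a transformation (the `ssn` property is hypothetical), a `set-body` policy in the outbound section can drop a sensitive field before the response leaves the gateway:

```xml
<outbound>
    <base />
    <set-body>@{
        var body = context.Response.Body.As<JObject>();
        // Hypothetical sensitive property; remove it from the response payload
        body.Property("ssn")?.Remove();
        return body.ToString();
    }</set-body>
</outbound>
```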
More information about this threat: [API6:2019 Mass assignment](https://github.c
* Precisely define XML and JSON contracts in the API schema and use [validate content](validate-content-policy.md) and [validate parameters](validate-parameters-policy.md) policies to block requests and responses with undocumented properties. Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors.
-* If the backend interface can't be changed, use [transformation policies](transform-api.md) to rewrite request and response payloads and decouple the API contracts from backend contracts. For example, mask or filter data or [remove unneeded JSON properties](./policies/filter-response-content.md).
+* If the backend interface can't be changed, use [transformation policies](api-management-policies.md#transformation) to rewrite request and response payloads and decouple the API contracts from backend contracts. For example, mask or filter data or [remove unneeded JSON properties](./policies/filter-response-content.md).
## Security misconfiguration
More information about this threat: [API7:2019 Security misconfiguration](https:
 * Configure the [CORS](cors-policy.md) policy and don't use wildcard `*` for any configuration option. Instead, explicitly list allowed values (see the sketch after this list).
- * Set [validation policies](validation-policies.md) to `prevent` in production environments to validate JSON and XML schemas, headers, query parameters, and status codes, and to enforce the maximum size for request or response.
+ * Set [validation policies](api-management-policies.md#content-validation) to `prevent` in production environments to validate JSON and XML schemas, headers, query parameters, and status codes, and to enforce the maximum size for request or response.
* If API Management is outside a network boundary, client IP validation is still possible using the [restrict caller IPs](ip-filter-policy.md) policy. Ensure that it uses an allowlist, not a blocklist.
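For illustration, a `cors` policy sketch that lists explicit values instead of wildcards (the origin is hypothetical):

```xml
<cors allow-credentials="true">
    <allowed-origins>
        <!-- Hypothetical SPA origin; never use the '*' wildcard here -->
        <origin>https://app.contoso.com</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>content-type</header>
        <header>authorization</header>
    </allowed-headers>
</cors>
```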
More information about this threat: [API8:2019 Injection](https://github.com/OWA
> [!IMPORTANT] > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](ip-filter-policy.md), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
-* Use schema and parameter [validation](validation-policies.md) policies, where applicable, to further constrain and validate the request before it reaches the backend API service.
+* Use schema and parameter [validation](api-management-policies.md#content-validation) policies, where applicable, to further constrain and validate the request before it reaches the backend API service.
The schema supplied with the API definition should have a regex pattern constraint applied to vulnerable fields. Each regex should be tested to ensure that it constrains the field sufficiently to mitigate common injection attempts.
api-management Mock Api Responses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-api-responses.md
# Tutorial: Mock API responses + Backend APIs are imported into an API Management (APIM) API or created and managed manually. The steps in this tutorial show you how to: + Use API Management to create a blank HTTP API
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Mock response + The `mock-response` policy, as the name implies, is used to mock APIs and operations. It cancels normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, when available. It generates sample responses from schemas, when schemas are provided and examples aren't. If neither examples nor schemas are found, responses with no content are returned. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
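A minimal sketch that short-circuits the pipeline and returns a 200 response generated from the operation's JSON example or schema:

```xml
<inbound>
    <base />
    <!-- Cancels normal execution and returns a mocked application/json response -->
    <mock-response status-code="200" content-type="application/json" />
</inbound>
```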
The `mock-response` policy, as the name implies, is used to mock APIs and operat
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The `mock-response` policy, as the name implies, is used to mock APIs and operat
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Monetization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/monetization-overview.md
# Monetization with Azure API Management + Modern web APIs underpin the digital economy. They provide a company's intellectual property (IP) to third parties and generate revenue by: - Packaging IP in the form of data, algorithms, or processes.
api-management Monetization Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/monetization-support.md
# How API Management supports monetization + With [Azure API Management](./api-management-key-concepts.md) service platform, you can: * Publish APIs, to which your consumers subscribe. * De-risk implementation.
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
# Observability in Azure API Management + Observability is the ability to understand the internal state of a system from the data it produces and the ability to explore that data to answer questions about what happened and why. Azure API Management helps organizations centralize the management of all APIs. Since it serves as a single point of entry of all API traffic, it is an ideal place to observe the APIs.
Azure API Management allows you to choose to use the managed gateway or [self-ho
The table below summarizes all the observability capabilities supported by API Management to operate APIs and what deployment models they support. These capabilities can be used by API publishers and others who have permissions to operate or manage the API Management instance. > [!NOTE]
-> For API consumers who use the developer portal, a built-in API report is available. It only provides information about their individual API usage during the preceding 90 days.
+> For API consumers who use the developer portal, a built-in API report is available. It only provides information about their individual API usage during the preceding 90 days. Currently, the built-in API report is not available in the developer portal for the v2 service tiers.
> | Tool | Useful for | Data lag | Retention | Sampling | Data kind | Supported Deployment Model(s) | |:- |:-|:- |:-|:- |: |:- |
api-management Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/plan-manage-costs.md
Last updated 06/11/2021
# Plan and manage costs for API Management + This article describes how you plan for and manage costs for Azure API Management. First, you use the Azure pricing calculator to estimate API Management costs before you add any resources for the service. After you've started using API Management resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to spot areas where you might want to act. Costs for API Management are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for API Management, you're billed for all Azure services and resources used in your Azure subscription, including third-party services.
When you create or use Azure resources with API Management, you'll get charged b
| Tiers | Description |
| -- | -- |
| Consumption | Incurs no fixed costs. You are billed based on the number of API calls to the service above a certain threshold. |
-| Developer, Basic, Standard, and Premium | Incur monthly costs, based on the number of [units](./api-management-capacity.md) and [self-hosted gateways](./self-hosted-gateway-overview.md). Self-hosted gateways are free for the Developer tier. [Upgrade](./upgrade-and-scale.md) to a different service tier at any time. |
+| Developer, Basic, Basic v2, Standard, Standard v2, and Premium | Incur monthly costs, based on the number of [units](./api-management-capacity.md) and [self-hosted gateways](./self-hosted-gateway-overview.md). Self-hosted gateways are free for the Developer tier. Different [upgrade](./upgrade-and-scale.md) options are available, depending on your service tier. |
You may also incur additional charges when you use other Azure resources with API Management, like virtual networks, availability zones, and multi-region writes. At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all API Management costs. There's a separate line item for each meter.
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
# Reuse policy configurations in your API Management policy definitions + This article shows you how to create and use *policy fragments* in your API Management policy definitions. Policy fragments are centrally managed, reusable XML snippets containing one or more API Management [policy](api-management-howto-policies.md) configurations. Policy fragments help you configure policies consistently and maintain policy definitions without needing to repeat or retype XML code.
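As a hedged illustration, a fragment might be referenced from a policy definition like this; the fragment ID `ForwardContext` is hypothetical:

```xml
<inbound>
    <base />
    <!-- Inserts the XML snippet stored in the policy fragment named "ForwardContext" -->
    <include-fragment fragment-id="ForwardContext" />
</inbound>
```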
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
# Azure Policy built-in policy definitions for Azure API Management + This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy definitions for Azure API Management. For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). If you're looking for policies you can use to modify API behavior in API Management, see [API Management policy reference](api-management-policies.md).
api-management Powershell Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/powershell-create-service-instance.md
# Quickstart: Create a new Azure API Management instance by using PowerShell + In this quickstart, you create a new API Management instance by using Azure PowerShell cmdlets. After creating an instance, you can use Azure PowerShell cmdlets for common management actions such as importing APIs in your API Management instance. [!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)]
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
Last updated 03/20/2023
# Connect privately to API Management using an inbound private endpoint + You can configure an inbound [private endpoint](../private-link/private-endpoint-overview.md) for your API Management instance to allow clients in your private network to securely access the instance over [Azure Private Link](../private-link/private-link-overview.md). * The private endpoint uses an IP address from an Azure VNet in which it's hosted.
You can configure an inbound [private endpoint](../private-link/private-endpoint
[!INCLUDE [api-management-private-endpoint](../../includes/api-management-private-endpoint.md)] -- ## Limitations * Only the API Management instance's Gateway endpoint supports inbound Private Link connections.
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
# Defend your Azure API Management instance against DDoS attacks + This article shows how to defend your Azure API Management instance against distributed denial of service (DDoS) attacks by enabling [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md). Azure DDoS Protection provides enhanced DDoS mitigation features to defend against volumetric and protocol DDoS attacks. [!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)] - ## Supported configurations Enabling Azure DDoS Protection for API Management is supported only for instances **deployed (injected) in a VNet** in [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md).
api-management Protect With Defender For Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-defender-for-apis.md
# Enable advanced API security features using Microsoft Defender for Cloud + [Defender for APIs](/azure/defender-for-cloud/defender-for-apis-introduction), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), offers full lifecycle protection, detection, and response coverage for APIs that are managed in Azure API Management. The service empowers security practitioners to gain visibility into their business-critical APIs, understand their security posture, prioritize vulnerability fixes, and detect active runtime threats within minutes. Capabilities of Defender for APIs include:
Capabilities of Defender for APIs include:
This article shows how to use the Azure portal to enable Defender for APIs from your API Management instance and view a summary of security recommendations and alerts for onboarded APIs. - ## Plan limitations * Currently, Defender for APIs discovers and analyzes REST APIs only.
api-management Proxy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/proxy-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Set HTTP proxy + The `proxy` policy allows you to route requests forwarded to backends via an HTTP proxy. Only HTTP (not HTTPS) is supported between the gateway and the proxy, and only Basic and NTLM authentication are supported. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `proxy` policy allows you to route requests forwarded to backends via an HTT
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
In this example, [named values](api-management-howto-properties.md) are used for
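A minimal sketch consistent with that example, assuming named values `proxy-url`, `username`, and `password` are defined in the instance:

```xml
<inbound>
    <!-- Route backend-bound requests through the HTTP proxy resolved from named values -->
    <proxy url="http://{{proxy-url}}" username="{{username}}" password="{{password}}" />
</inbound>
```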
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Routing](api-management-policies.md#routing)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Publish Event Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md
Previously updated : 05/24/2023 Last updated : 03/18/2024 # Publish event to GraphQL subscription + The `publish-event` policy publishes an event to one or more subscriptions specified in a GraphQL API schema. Configure the policy in a [GraphQL resolver](configure-graphql-resolver.md) for a related field in the schema for another operation type such as a mutation. At runtime, the event is published to connected GraphQL clients. Learn more about [GraphQL APIs in API Management](graphql-apis-overview.md). [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `publish-event` policy publishes an event to one or more subscriptions speci
- [**Policy sections:**](./api-management-howto-policies.md#sections) `http-response` element in `http-data-source` resolver - [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver only-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption
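As a rough sketch of how the policy sits inside an `http-data-source` resolver for a mutation (the backend URL and the subscription field name `onUserCreated` are illustrative):

```xml
<http-data-source>
    <http-request>
        <set-method>POST</set-method>
        <set-url>https://contoso.example/api/users</set-url>
    </http-request>
    <http-response>
        <!-- After the mutation resolves, push the event to clients subscribed
             to the onUserCreated subscription field -->
        <publish-event>
            <targets>
                <graphql-subscription id="onUserCreated" />
            </targets>
        </publish-event>
    </http-response>
</http-data-source>
```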
### Usage notes
type Subscription {
## Related policies
-* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+* [GraphQL resolvers](api-management-policies.md#graphql-resolvers)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Publish To Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-to-dapr-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Send message to Pub/Sub topic + The `publish-to-dapr` policy instructs the API Management gateway to send a message to a Dapr Publish/Subscribe topic. The policy accomplishes that by making an HTTP POST request to `http://localhost:3500/v1.0/publish/{{pubsub-name}}/{{topic}}`, replacing template parameters and adding content specified in the policy statement. The policy assumes that the Dapr runtime is running in a sidecar container in the same pod as the gateway. The Dapr runtime implements the Pub/Sub semantics. Learn more about [Dapr integration with API Management](self-hosted-gateway-enable-dapr.md).
The "backend" section is empty and the request is not forwarded to the backend.
## Related policies
-* [API Management Dapr integration policies](api-management-dapr-policies.md)
+* [Integration and external communication](api-management-policies.md#integration-and-external-communication)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-arm-template.md
Previously updated : 12/12/2023 Last updated : 03/25/2024 # Quickstart: Create a new Azure API Management service instance using an ARM template + This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure API Management instance. You can also use ARM templates for common management tasks such as importing APIs in your API Management instance. [!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)]
More Azure API Management template samples can be found in [Azure Quickstart Tem
- **Region**: select a location for the resource group. Example: **Central US**. - **Publisher Email**: enter an email address to receive notifications. - **Publisher Name**: enter a name you choose for the API publisher.
- - **Sku**: accept the default value of **Developer**.
+ - **Sku**: accept the default value of **Developer**. Alternatively, choose another value.
- **Sku Count**: accept the default value. - **Location**: accept the generated location for the API Management service.
More Azure API Management template samples can be found in [Azure Quickstart Tem
1. Select **Review + Create**, then review the terms and conditions. If you agree, select **Create**. > [!TIP]
- > It can take between 30 and 40 minutes to create and activate an API Management service in the Developer tier.
+ > It can take between 30 and 40 minutes to create and activate an API Management service in the Developer tier. Times vary by tier.
1. After the instance has been created successfully, you get a notification:
api-management Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-bicep.md
tags: azure-resource-manager, bicep
Previously updated : 12/12/2023 Last updated : 03/25/2024 # Quickstart: Create a new Azure API Management service instance using Bicep + This quickstart describes how to use a Bicep file to create an Azure API Management instance. You can also use Bicep for common management tasks such as importing APIs in your API Management instance. [!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)]
The following resource is defined in the Bicep file:
- **[Microsoft.ApiManagement/service](/azure/templates/microsoft.apimanagement/service)**
-In this example, the Bicep file configures the API Management instance in the Developer tier, an economical option to evaluate Azure API Management. This tier isn't for production use.
+In this example, the Bicep file by default configures the API Management instance in the Developer tier, an economical option to evaluate Azure API Management. This tier isn't for production use.
More Azure API Management Bicep samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Apimanagement&pageNumber=1&sort=Popular).
You can use Azure CLI or Azure PowerShell to deploy the Bicep file. For more in
When the deployment finishes, you should see a message indicating the deployment succeeded.
+ > [!TIP]
+ > It can take between 30 and 40 minutes to create and activate an API Management service in the Developer tier. Times vary by tier.
+ ## Review deployed resources Use the Azure portal, Azure CLI or Azure PowerShell to list the deployed App Configuration resource in the resource group.
api-management Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md
ai-usage: ai-assisted
# Quickstart: Create an Azure API Management instance using Terraform + This article shows how to use [Terraform](/azure/terraform) to create an API Management instance on Azure. You can also use Terraform for common management tasks such as importing APIs in your API Management instance. [!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)]
api-management Quota By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Set usage quota by key + The `quota-by-key` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. The key can have an arbitrary string value and is typically provided using a policy expression. An optional increment condition can be added to specify which requests should be counted towards the quota. If multiple policies would increment the same key value, it is incremented only once per request. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds. To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, self-hosted
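To make the key-based counting concrete, here's a sketch that meters quota per client IP address and counts only successful calls (all numbers illustrative):

```xml
<!-- Hourly quota tracked per caller IP; only 2xx/3xx responses increment the counter -->
<quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
    increment-condition="@(context.Response.StatusCode >= 200 && context.Response.StatusCode < 400)"
    counter-key="@(context.Request.IpAddress)" />
```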
### Usage notes
For more information and examples of this policy, see [Advanced request throttli
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Quota Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-policy.md
Previously updated : 09/27/2022 Last updated : 03/18/2024 # Set usage quota by subscription + The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. When the quota is exceeded, the caller receives a `403 Forbidden` response status code, and the response includes a `Retry-After` header whose value is the recommended retry interval in seconds. To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) product-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
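A sketch of the subscription-scoped form (values illustrative):

```xml
<!-- Renewable hourly call-volume and bandwidth quota, tracked per subscription -->
<quota calls="10000" bandwidth="40000" renewal-period="3600" />
```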
### Usage notes
To understand the difference between rate limits and quotas, [see Rate limits an
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 + # Limit call rate by key + The `rate-limit-by-key` policy prevents API usage spikes on a per key basis by limiting the call rate to a specified number of calls in a specified time period. The key can have an arbitrary string value and is typically provided using a policy expression. An optional increment condition can be added to specify which requests should be counted towards the limit. When this call rate is exceeded, the caller receives a `429 Too Many Requests` response status code. To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted
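For instance, a sketch that throttles by client IP address and counts only successful responses (numbers illustrative):

```xml
<!-- Each caller IP gets 10 successful calls per 60-second window -->
<rate-limit-by-key calls="10" renewal-period="60"
    increment-condition="@(context.Response.StatusCode == 200)"
    counter-key="@(context.Request.IpAddress)" />
```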
### Usage notes
For more information and examples of this policy, see [Advanced request throttli
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md
Previously updated : 01/11/2023 Last updated : 03/18/2024 # Limit call rate by subscription + The `rate-limit` policy prevents API usage spikes on a per subscription basis by limiting the call rate to a specified number per a specified time period. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code. To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas)
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
In the following example, the per subscription rate limit is 20 calls per 90 sec
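A sketch matching that description:

```xml
<!-- Allow each subscription 20 calls in any 90-second window -->
<rate-limit calls="20" renewal-period="90" />
```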
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Redirect Content Urls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/redirect-content-urls-policy.md
Previously updated : 12/02/2022 Last updated : 03/18/2024 # Mask URLs in content++ The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent link via the gateway. Use in the outbound section to rewrite response body links to the backend service to make them point to the gateway. Use in the inbound section for an opposite effect. > [!NOTE]
The `redirect-content-urls` policy rewrites (masks) links in the response body s
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The `redirect-content-urls` policy rewrites (masks) links in the response body s
## Related policies
-* [API Management transformation policies](api-management-transformation-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Restify Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/restify-soap-api.md
# Import SOAP API to API Management and convert to REST + This article shows how to import a SOAP API as a WSDL specification and then convert it to a REST API. The article also shows how to test the API in API Management. In this article, you learn how to:
api-management Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Retry + The `retry` policy executes its child policies once and then retries their execution until the retry `condition` becomes `false` or retry `count` is exhausted. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `retry` policy may contain any other policies as its child elements.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Examples
In the following example, sending a request to a URL other than the defined back
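As a shape-only sketch (the condition, count, and interval are illustrative), a retry wrapping request forwarding might look like:

```xml
<backend>
    <!-- Re-forward the request up to 3 times, 10 seconds apart, while the backend returns 500 -->
    <retry condition="@(context.Response.StatusCode == 500)" count="3" interval="10" first-fast-retry="false">
        <forward-request buffer-request-body="true" />
    </retry>
</backend>
```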
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Policy control and flow](api-management-policies.md#policy-control-and-flow)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Return Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/return-response-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Return response + The `return-response` policy cancels pipeline execution and returns either a default or custom response to the caller. Default response is `200 OK` with no body. Custom response can be specified via a context variable or policy statements. When both are provided, the response contained within the context variable is modified by the policy statements before being returned to the caller. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `return-response` policy cancels pipeline execution and returns either a def
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The `return-response` policy cancels pipeline execution and returns either a def
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Rewrite Uri Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rewrite-uri-policy.md
Previously updated : 03/28/2023 Last updated : 03/18/2024 # Rewrite URL + The `rewrite-uri` policy converts a request URL from its public form to the form expected by the web service, as shown in the following example. - Public URL - `http://api.example.com/storenumber/ordernumber`
This policy can be used when a human and/or browser-friendly URL should be trans
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
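A sketch consistent with the example URLs above; the backend path segments are illustrative:

```xml
<inbound>
    <base />
    <!-- Map the public /storenumber/ordernumber path onto the form the backend expects -->
    <rewrite-uri template="/v2/US/hardware/{storenumber}&{ordernumber}?City=Seattle" />
</inbound>
```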
### Usage notes
You can only add query string parameters using the policy. You can't add extra t
## Related policies -- [API Management transformation policies](api-management-transformation-policies.md)
+- [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Sap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sap-api.md
Last updated 07/21/2023
# Import SAP OData metadata as an API + This article shows how to import an OData service using its metadata description. In this article, [SAP Gateway Foundation](https://help.sap.com/viewer/product/SAP_GATEWAY) serves as an example. In this article, you'll:
Choose one of the following methods to import your API to API Management: import
:::image type="content" source="media/sap-api/get-root-operation.png" alt-text="Get operation for service root":::
-Also, configure authentication to your backend using an appropriate method for your environment. For examples, see [API Management authentication policies](api-management-authentication-policies.md).
+Also, configure authentication to your backend using an appropriate method for your environment. For examples, see [API Management authentication and authorization policies](api-management-policies.md#authentication-and-authorization).
## Test your API
api-management Secure Developer Portal Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/secure-developer-portal-access.md
# Secure access to the API Management developer portal + API Management has a fully customizable, standalone, managed [developer portal](api-management-howto-developer-portal.md), which can be used externally (or internally) to allow developer users to discover and interact with the APIs published through API Management. The developer portal has several options to facilitate secure user sign-up and sign-in.
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/06/2024 Last updated : 03/18/2024
# Azure Policy Regulatory Compliance controls for Azure API Management + [Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This
api-management Self Hosted Gateway Arc Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-arc-reference.md
# Reference: Self-hosted gateway Azure Arc configuration settings + This article provides a reference for required and optional settings that are used to configure the Azure Arc extension for API Management [self-hosted gateway container](self-hosted-gateway-overview.md). [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-azure-arc.md)]
api-management Self Hosted Gateway Enable Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-enable-azure-ad.md
# Use Microsoft Entra authentication for the self-hosted gateway + The Azure API Management [self-hosted gateway](self-hosted-gateway-overview.md) needs connectivity with its associated cloud-based API Management instance for reporting status, checking for and applying configuration updates, and sending metrics and events. In addition to using a gateway access token (authentication key) to connect with its cloud-based API Management instance, you can enable the self-hosted gateway to authenticate to its associated cloud instance by using an [Microsoft Entra app](../active-directory/develop/app-objects-and-service-principals.md). With Microsoft Entra authentication, you can configure longer expiry times for secrets and use standard steps to manage and rotate secrets in Active Directory.
api-management Self Hosted Gateway Enable Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-enable-dapr.md
# Enable Dapr support in the self-hosted gateway + Dapr integration in API Management enables operations teams to directly expose Dapr microservices deployed on Kubernetes clusters as APIs, and make those APIs discoverable and easily consumable by developers with proper controls across multiple Dapr deployments, whether in the cloud, on-premises, or on the edge. ## About Dapr
template:
## Dapr integration policies
-API Management provides specific [policies](api-management-policies.md#dapr-integration-policies) to interact with Dapr APIs exposed through the self-hosted gateway.
+API Management provides specific [policies](api-management-policies.md#integration-and-external-communication) to interact with Dapr APIs exposed through the self-hosted gateway.
## Next steps
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
# Self-hosted gateway migration guide + This article explains how to migrate existing self-hosted gateway deployments to self-hosted gateway v2. > [!IMPORTANT] > Support for Azure API Management self-hosted gateway version 0 and version 1 container images is ending on 1 October 2023, along with its corresponding Configuration API v1. [Learn more in our deprecation documentation](./breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md) - ## What's new? As we strive to make it easier for customers to deploy our self-hosted gateway, we've **introduced a new configuration API** that removes the dependency on Azure Storage, unless you're using [API inspector](api-management-howto-api-inspector.md) or quotas.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
# Self-hosted gateway overview + The self-hosted gateway is an optional, containerized version of the default managed gateway included in every API Management service. It's useful for scenarios such as placing gateways in the same environments where you host your APIs. Use the self-hosted gateway to improve API traffic flow and address API security and compliance requirements. This article explains how the self-hosted gateway feature of Azure API Management enables hybrid and multicloud API management, presents its high-level architecture, and highlights its capabilities. For an overview of the features across the various gateway offerings, see [API gateway in API Management](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways). - ## Hybrid and multicloud API management The self-hosted gateway feature expands API Management support for hybrid and multicloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
# Reference: Self-hosted gateway container configuration settings + This article provides a reference for required and optional settings that are used to configure the API Management [self-hosted gateway container](self-hosted-gateway-overview.md). To learn more about our (Kubernetes) production guidance, we recommend reading [this article](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
api-management Self Hosted Gateway Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-support-policies.md
Last updated 05/12/2023
# Support policies for self-hosted gateway + The Azure API Management service, in the Developer and Premium tiers, allows the deployment of the API Management gateway as a container running in on-premises infrastructure, other clouds, and Azure infrastructure options that support containers. This article provides details about technical support policies and limitations for the API Management [self-hosted gateway](self-hosted-gateway-overview.md). [!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] - ## Differences between managed gateway and self-hosted gateway When deploying an instance of the API Management service, you'll always get a managed API gateway as part of the service. This gateway runs in infrastructure managed by Azure, and the software is also managed and updated by Azure.
api-management Send One Way Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md
Previously updated : 08/02/2023 Last updated : 03/18/2024 # Send one way request + The `send-one-way-request` policy sends the provided request to the specified URL without waiting for a response. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `send-one-way-request` policy sends the provided request to the specified UR
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
This example uses the `send-one-way-request` policy to send a message to a Slack
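A rough sketch, assuming the Slack incoming-webhook URL is stored in a hypothetical named value `slack-webhook-url`:

```xml
<send-one-way-request mode="new">
    <set-url>{{slack-webhook-url}}</set-url>
    <set-method>POST</set-method>
    <set-body>@{
        // Build a simple Slack message payload; the request is fire-and-forget
        return new JObject(
            new JProperty("text", $"Status {context.Response.StatusCode} for {context.Request.Url.Path}")
        ).ToString();
    }</set-body>
</send-one-way-request>
```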
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Integration and external communication](api-management-policies.md#integration-and-external-communication)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
Previously updated : 08/02/2023 Last updated : 03/18/2024 # Send request + The `send-request` policy sends the provided request to the specified URL, waiting no longer than the set timeout value. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
This example shows one way to verify a reference token with an authorization ser
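A hedged sketch of that token-introspection pattern; the named value `introspection-endpoint` and the body format are assumptions for illustration:

```xml
<send-request mode="new" response-variable-name="tokenstate" timeout="20" ignore-error="true">
    <set-url>{{introspection-endpoint}}</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/x-www-form-urlencoded</value>
    </set-header>
    <!-- Send the bearer token to the authorization server for validation;
         the response lands in context.Variables["tokenstate"] -->
    <set-body>@($"token={context.Request.Headers.GetValueOrDefault("Authorization","").Replace("Bearer ","")}")</set-body>
</send-request>
```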
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Integration and external communication](api-management-policies.md#integration-and-external-communication)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Backend Service Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-dapr-policy.md
Previously updated : 12/07/2022 Last updated : 03/18/2024 # Send request to a service + The `set-backend-service` policy sets the target URL for the current request to `http://localhost:3500/v1.0/invoke/{app-id}[.{ns-name}]/method/{method-name}`, replacing template parameters with values specified in the policy statement. The policy assumes that Dapr runs in a sidecar container in the same pod as the gateway. Upon receiving the request, Dapr runtime performs service discovery and actual invocation, including possible protocol translation between HTTP and gRPC, retries, distributed tracing, and error handling. Learn more about [Dapr integration with API Management](self-hosted-gateway-enable-dapr.md).
The `forward-request` policy is shown here for clarity. The policy is typically
## Related policies
-* [API Management Dapr integration policies](api-management-dapr-policies.md)
+* [Integration and external communication](api-management-policies.md#integration-and-external-communication)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
Previously updated : 03/14/2024 Last updated : 03/18/2024 # Set backend service++ Use the `set-backend-service` policy to redirect an incoming request to a different backend than the one specified in the API settings for that operation. This policy changes the backend service base URL of the incoming request to a URL or [backend](backends.md) specified in the policy. Referencing a backend entity allows you to manage the backend service base URL and other settings in a single place and reuse them across multiple APIs and operations. Also implement [load balancing of traffic across a pool of backend services](backends.md#load-balanced-pool-preview) and [circuit breaker rules](backends.md#circuit-breaker-preview) to protect the backend from too many requests.
Referencing a backend entity allows you to manage the backend service base URL a
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
Initially the backend service base URL is derived from the API settings. So the
When the [<choose\>](choose-policy.md) policy statement is applied the backend service base URL may change again either to `http://contoso.com/api/8.2` or `http://contoso.com/api/9.1`, depending on the value of the version request query parameter. For example, if the value is `"2013-15"` the final request URL becomes `http://contoso.com/api/8.2/partners/15?version=2013-15&subscription-key=abcdef`.
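A sketch of that version-routing logic (the fallback branch is illustrative):

```xml
<inbound>
    <base />
    <choose>
        <!-- Route to the 8.2 backend when the caller asks for the 2013-15 version -->
        <when condition="@(context.Request.Url.Query.GetValueOrDefault("version") == "2013-15")">
            <set-backend-service base-url="http://contoso.com/api/8.2/" />
        </when>
        <otherwise>
            <set-backend-service base-url="http://contoso.com/api/9.1/" />
        </otherwise>
    </choose>
</inbound>
```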
-If further transformation of the request is desired, other [Transformation policies](api-management-transformation-policies.md) can be used. For example, to remove the version query parameter now that the request is being routed to a version specific backend, the [Set query string parameter](set-query-parameter-policy.md) policy can be used to remove the now redundant version attribute.
+If further transformation of the request is desired, other [Transformation policies](api-management-policies.md#transformation) can be used. For example, to remove the version query parameter now that the request is being routed to a version specific backend, the [Set query string parameter](set-query-parameter-policy.md) policy can be used to remove the now redundant version attribute.
### Route requests to a service fabric backend
In this example the policy routes the request to a service fabric backend, using
## Related policies
-* [API Management transformation policies](api-management-transformation-policies.md)
+* [Routing](api-management-policies.md#routing)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
Previously updated : 02/02/2024 Last updated : 03/18/2024 # Set body + Use the `set-body` policy to set the message body for a request or response. To access the message body, use the `context.Request.Body` or `context.Response.Body` property, depending on whether the policy is in the inbound or outbound section. > [!IMPORTANT]
OriginalUrl.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
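As a small sketch of working with a JSON body in a policy expression (the property name `requestId` is illustrative):

```xml
<set-body>@{
    // preserveContent keeps the original body readable by later policies
    var body = context.Request.Body.As<JObject>(preserveContent: true);
    body["requestId"] = context.RequestId;
    return body.ToString();
}</set-body>
```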
### Usage notes
The following example uses the `AsFormUrlEncodedContent()` expression to access
## Related policies
-* [API Management transformation policies](api-management-transformation-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
# How to set or edit Azure API Management policies + This article shows you how to configure policies in your API Management instance by editing policy definitions in the Azure portal. Each policy definition is an XML document that describes a sequence of inbound and outbound statements that run sequentially on an API request and response. The policy editor in the portal provides guided forms for API publishers to add and edit policies in policy definitions. You can also edit the XML directly in the policy code editor.
api-management Set Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-header-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Set header + The `set-header` policy assigns a value to an existing HTTP response and/or request header or adds a new response and/or request header. Use the policy to insert a list of HTTP headers into an HTTP message. When placed in an inbound pipeline, this policy sets the HTTP headers for the request being passed to the target service. When placed in an outbound pipeline, this policy sets the HTTP headers for the response being sent to the gateway's client.
The `set-header` policy assigns a value to an existing HTTP response and/or requ
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
This example shows how to apply policy at the API level to supply context inform
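A sketch along those lines, forwarding caller context to the backend (the header name is illustrative):

```xml
<set-header name="x-request-context-data" exists-action="override">
    <value>@(context.User.Id)</value>
    <value>@(context.Deployment.Region)</value>
</set-header>
```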
## Related policies -- [API Management transformation policies](api-management-transformation-policies.md)
+- [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Method Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-method-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Set request method + The `set-method` policy allows you to change the HTTP request method for a request. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The value of the element specifies the HTTP method, such as `POST`, `GET`, and s
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
This example uses the `set-method` policy to send a message to a Slack chat room
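The policy body is simply the method name; in a scenario like the Slack one, it sets the outgoing call to POST:

```xml
<!-- Rewrite the request's HTTP method to POST before it is sent on -->
<set-method>POST</set-method>
```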
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Query Parameter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-query-parameter-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Set query string parameter + The `set-query-parameter` policy adds, replaces the value of, or deletes a request query string parameter. It can be used to pass query parameters that the backend service expects but that are optional or never present in the request. [!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
The `set-query-parameter` policy adds, replaces value of, or deletes request que
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
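A minimal sketch that supplies a backend-expected parameter only when the caller didn't send one (the name and value are illustrative):

```xml
<!-- exists-action="skip" leaves any caller-supplied value untouched -->
<set-query-parameter name="api-key" exists-action="skip">
    <value>12345678901</value>
</set-query-parameter>
```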
## Examples
The `set-query-parameter` policy adds, replaces value of, or deletes request que
## Related policies -- [API Management transformation policies](api-management-transformation-policies.md)
+- [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Status Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-status-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Set status code ++ The `set-status` policy sets the HTTP status code to the specified value. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `set-status` policy sets the HTTP status code to the specified value.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
This example shows how to return a 401 response if the authorization token is in
``` - ## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Set Variable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-variable-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Set variable
-The `set-variable` policy declares a [context](api-management-policy-expressions.md#ContextVariables) variable and assigns it a value specified via an [expression](api-management-policy-expressions.md) or a string literal. if the expression contains a literal it will be converted to a string and the type of the value will be `System.String`.
+
+The `set-variable` policy declares a [context](api-management-policy-expressions.md#ContextVariables) variable and assigns it a value specified via an [expression](api-management-policy-expressions.md) or a string literal. If the expression contains a literal it will be converted to a string and the type of the value will be `System.String`.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `set-variable` policy declares a [context](api-management-policy-expressions
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Allowed types
The following example demonstrates a `set-variable` policy in the inbound sectio
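A sketch of the common pattern of capturing a request attribute for use by later policies (the variable name is illustrative):

```xml
<set-variable name="isMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPhone"))" />
```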
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md
Last updated 02/07/2022
# API Management soft-delete (preview) + With API Management soft-delete, you can recover and restore a recently deleted API Management instance. This feature protects against accidental deletion of your API Management instance. Currently, depending on how you delete an API Management instance, the instance is either soft-deleted and recoverable during a retention period, or it's permanently deleted:
api-management Sql Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sql-data-source-policy.md
Previously updated : 06/07/2023 Last updated : 03/18/2024 # Azure SQL data source for a resolver + The `sql-data-source` resolver policy configures a Transact-SQL (T-SQL) request to an [Azure SQL](/azure/azure-sql/azure-sql-iaas-vs-paas-what-is-overview) database and an optional response to resolve data for an object type and field in a GraphQL schema. The schema must be imported to API Management as a GraphQL API. > [!NOTE]
The `sql-data-source` resolver policy configures a Transact-SQL (T-SQL) request
## Usage - [**Policy scopes:**](./api-management-howto-policies.md#scopes) GraphQL resolver-- [**Gateways:**](api-management-gateways-overview.md) dedicated
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2
### Usage notes
The following example resolves a GraphQL mutation using a T-SQL INSERT statement
## Related policies
-* [GraphQL resolver policies](api-management-policies.md#graphql-resolver-policies)
+* [GraphQL resolvers](api-management-policies.md#graphql-resolvers)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Trace + The `trace` policy adds a custom trace into the request tracing output in the test console, Application Insights telemetries, and/or resource logs. - The policy adds a custom trace to the [request tracing](./api-management-howto-api-inspector.md) output in the test console when tracing is triggered, that is, when the `Ocp-Apim-Trace` request header is present and set to `true` and the `Ocp-Apim-Subscription-Key` request header is present and holds a valid key that allows tracing.
The `trace` policy adds a custom trace into the request tracing output in the te
[!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)] + [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] ## Policy statement
The `trace` policy adds a custom trace into the request tracing output in the te
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
The `trace` policy adds a custom trace into the request tracing output in the te
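A minimal sketch (the source name, message expression, and metadata are illustrative; the expression assumes a variable set earlier in the pipeline):

```xml
<trace source="PetStore API" severity="information">
    <!-- The message can be a literal or a policy expression -->
    <message>@((string)context.Variables["requestNote"])</message>
    <metadata name="operation" value="New-Order" />
</trace>
```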
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Logging](api-management-policies.md#logging)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Transform Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/transform-api.md
# Tutorial: Transform and protect your API + In this tutorial, you'll learn about configuring common [policies](api-management-howto-policies.md) to transform your API. You might want to transform your API so it doesn't reveal private backend info. Transforming an API can help you hide the technology stack info that's running in the backend, or hide the original URLs that appear in the body of the API's HTTP response. This tutorial also explains how to add protection to your backend API by configuring a rate limit policy, so that the API isn't overused by developers. For more policy options, see [API Management policies](api-management-policies.md).
api-management Troubleshoot Response Timeout And Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/troubleshoot-response-timeout-and-errors.md
# Troubleshooting client response timeouts and errors with API Management + This article helps you troubleshoot intermittent connection errors and related latency issues in [Azure API Management](./api-management-key-concepts.md). Specifically, this article provides information and troubleshooting guidance for the exhaustion of source network address translation (SNAT) ports. If you require more help, contact the Azure experts at [Azure Community Support](https://azure.microsoft.com/support/community/) or file a support request with [Azure Support](https://azure.microsoft.com/support/options/). ## Symptoms
For more, see [Add caching to improve performance in Azure API Management](api-m
If it makes sense for your business scenario, you can implement access restriction policies for your API Management product. For example, the `rate-limit-by-key` policy can be used to prevent API usage spikes on a per key basis by limiting the call rate per a specified time period.
-See [API Management access restriction policies](api-management-access-restriction-policies.md) for more info.
+See [Rate limiting and quota policies](api-management-policies.md#rate-limiting-and-quotas) for more info.
## See also
api-management Upgrade And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md
Previously updated : 03/30/2023 Last updated : 03/21/2024 # Upgrade and scale an Azure API Management instance
-Customers can scale an Azure API Management instance in a dedicated service tier by adding and removing units. A **unit** is composed of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per second. This number doesn't represent a call limit, but rather an estimated maximum throughput value to allow for rough capacity planning. Actual throughput and latency vary broadly depending on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes, and backend latency.
+Customers can scale an Azure API Management instance in a dedicated service tier by adding and removing units. A **unit** is composed of dedicated Azure resources and has a certain load-bearing capacity expressed as a number of API calls per second. This number doesn't represent a call limit, but rather an estimated maximum throughput value to allow for rough capacity planning. Actual throughput and latency vary broadly depending on factors such as number and rate of concurrent connections, the kind and number of configured policies, request and response sizes, and backend latency.
> [!NOTE]
-> * In the **Standard** and **Premium** tiers of the API Management service, you can configure an instance to [scale automatically](api-management-howto-autoscale.md) based on a set of rules.
+> * In the **Basic**, **Standard**, and **Premium** tiers of the API Management service, you can configure an instance to [scale automatically](api-management-howto-autoscale.md) based on a set of rules.
> * API Management instances in the **Consumption** tier scale automatically based on the traffic. Currently, you cannot upgrade from or downgrade to the Consumption tier. The throughput and price of each unit depend on the [service tier](api-management-features.md) in which the unit exists. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your API Management instance doesn't allow adding more units, you need to upgrade to a higher-level tier.
->[!NOTE]
->See [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) for features, scale limits, and estimated throughput in each tier. To get more accurate throughput numbers, you need to look at a realistic scenario for your APIs. See [Capacity of an Azure API Management instance](api-management-capacity.md).
+> [!NOTE]
+> See [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) for features, scale limits, and estimated throughput in each tier. To get more accurate throughput numbers, you need to look at a realistic scenario for your APIs. See [Capacity of an Azure API Management instance](api-management-capacity.md).
## Prerequisites
To follow the steps from this article, you must:
## Upgrade and scale
-You can choose between four dedicated tiers: **Developer**, **Basic**, **Standard**, and **Premium**.
+You can choose between the following dedicated tiers: **Developer**, **Basic**, **Basic v2**, **Standard**, **Standard v2**, and **Premium**.
* The **Developer** tier should be used to evaluate the service; it shouldn't be used for production. The **Developer** tier doesn't have an SLA and you can't scale this tier (add/remove units).
-* **Basic**, **Standard**, and **Premium** are production tiers that have SLA and can be scaled. For pricing details and scale limits, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/#pricing).
+* **Basic**, **Basic v2**, **Standard**, **Standard v2**, and **Premium** are production tiers that have an SLA and can be scaled. For pricing details and scale limits, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/#pricing).
* The **Premium** tier enables you to distribute a single Azure API Management instance across any number of desired Azure regions. When you initially create an Azure API Management service, the instance contains only one unit and resides in a single Azure region (the **primary** region). Additional regions can be easily added. When adding a region, you specify the number of units you want to allocate. For example, you can have one unit in the primary region and five units in some other region. You can tailor the number of units to the traffic you have in each region. For more information, see [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md).
-* You can upgrade and downgrade to and from any dedicated service tier. Downgrading can remove some features. For example, downgrading to Standard or Basic from the Premium tier can remove virtual networks or multi-region deployment.
+* You can upgrade and downgrade to and from certain dedicated service tiers:
+ * You can upgrade and downgrade to and from classic tiers (**Developer**, **Basic**, **Standard**, and **Premium**).
+
+ * You can upgrade and downgrade to and from v2 tiers (**Basic v2** and **Standard v2**).
+
+ Downgrading can remove some features. For example, downgrading to **Standard** or **Basic** from the **Premium** tier can remove virtual networks or multi-region deployment.
> [!NOTE]
-> The upgrade or scale process can take from 15 to 45 minutes to apply. You get notified when it is done.
+> The upgrade or scale process can take from 15 to 45 minutes to apply. You get notified when it's done.
## Scale your API Management instance
+You can use the portal to scale your API Management instance. How you scale depends on the service tier you are using.
+ ![Scale API Management service in Azure portal](./media/upgrade-and-scale/portal-scale.png)
+### Add or remove units - classic service tiers
+ 1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
-1. Select **Locations** from the menu.
+1. Select **Locations** from the left-hand menu.
1. Select the row with the location you want to scale. 1. Specify the new number of **Units** - use the slider if available, or select or type the number. 1. Select **Apply**. > [!NOTE]
-> In the Premium service tier, you can optionally configure availability zones and a virtual network in a selected location. For more information, see [Deploy API Management service to an additional location](api-management-howto-deploy-multi-region.md).
+> In the **Premium** service tier, you can optionally configure availability zones and a virtual network in a selected location. For more information, see [Deploy API Management service to an additional location](api-management-howto-deploy-multi-region.md).
+
+### Add or remove units - v2 service tiers
+
+1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
+1. Select **Scale** from the left-hand menu.
+1. Specify the new number of **Units** - use the slider, or select or type the number.
+1. Select **Save**.
## Change your API Management service tier
You can choose between four dedicated tiers: **Developer**, **Basic**, **Standa
1. Select **Save**. ## Downtime during scaling up and down
-If you're scaling from or to the Developer tier, there will be downtime. Otherwise, there is no downtime.
+If you're scaling from or to the **Developer** tier, there will be downtime. Otherwise, there is no downtime.
## Compute isolation If your security requirements include [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), you can use the **Isolated** pricing tier. This tier ensures the compute resources of an API Management service instance consume the entire physical host and provide the necessary level of isolation required to support, for example, US Department of Defense Impact Level 5 (IL5) workloads. To get access to the Isolated tier, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-## Next steps
+## Related content
- [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md) - [How to automatically scale an Azure API Management service instance](api-management-howto-autoscale.md)
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
Title: Azure API Management - v2 tiers (preview)
-description: Introduction to key scenarios, capabilities, and concepts of the v2 tiers (SKUs) of the Azure API Management service. The v2 tiers are in preview.
+ Title: Azure API Management - v2 tiers
+description: Introduction to key scenarios, capabilities, and concepts of the v2 tiers (SKUs) of the Azure API Management service.
Previously updated : 01/31/2024 Last updated : 03/21/2024
-# New Azure API Management tiers (preview)
+# Azure API Management v2 tiers
-We're introducing a new set of pricing tiers (SKUs) for Azure API Management: the *v2 tiers*. The new tiers are built on a new, more reliable and scalable platform and are designed to make API Management accessible to a broader set of customers and offer flexible options for a wider variety of scenarios.
-Currently in preview, the following v2 tiers are available:
+We're introducing a new set of pricing tiers (SKUs) for Azure API Management: the *v2 tiers*. The new tiers are built on a new, more reliable and scalable platform and are designed to make API Management accessible to a broader set of customers and offer flexible options for a wider variety of scenarios. The v2 tiers are in addition to the existing classic tiers (Developer, Basic, Standard, and Premium) and the Consumption tier. [Learn more](api-management-key-concepts.md#api-management-tiers).
-* **Basic v2** - The Basic v2 tier is designed for development and testing scenarios, and is supported with an SLA. In the Basic v2 tier, the developer portal is an optional add-on.
+The following v2 tiers are generally available:
-* **Standard v2** - Standard v2 is a production-ready tier with support planned for advanced API Management features previously available only in a Premium tier of API Management, including high availability and networking options.
+* **Basic v2** - The Basic v2 tier is designed for development and testing scenarios, and is supported with an SLA.
+
+* **Standard v2** - Standard v2 is a production-ready tier with support for network-isolated backends.
## Key capabilities
Currently in preview, the following v2 tiers are available:
* **More options for production workloads** - The v2 tiers are all supported with an SLA. Upgrade from Basic v2 to Standard v2 to add more production options.
-* **Developer portal options** - Enable the [developer portal](api-management-howto-developer-portal.md) when you're ready to let API consumers discover your APIs. The developer portal is included in the Standard v2 tier, and is an add-on in the Basic v2 tier.
+* **Developer portal options** - Enable the [developer portal](api-management-howto-developer-portal.md) when you're ready to let API consumers discover your APIs.
## Networking options
-In preview, the v2 tiers currently support the following options to limit network traffic from your API Management instance to protected API backends:
--
-* **Standard v2**
-
- **Outbound** - VNet integration to allow your API Management instance to reach API backends that are isolated in a VNet. The API Management gateway, management plane, and developer portal remain publicly accessible from the internet. The VNet must be in the same region as the API Management instance. [Learn more](integrate-vnet-outbound.md).
+The Standard v2 tier supports VNet integration to allow your API Management instance to reach API backends that are isolated in a single connected VNet. The API Management gateway, management plane, and developer portal remain publicly accessible from the internet. The VNet must be in the same region as the API Management instance. [Learn more](integrate-vnet-outbound.md).
-
-## Features and limitations
+## Features
### API version
-The v2 tiers are supported in API Management API version **2023-03-01-preview** or later.
+The v2 tiers are supported in API Management API version **2023-05-01-preview** or later.
### Supported regions-
-In preview, the v2 tiers are available in the following regions:
-
-* East US
+The v2 tiers are available in the following regions:
* South Central US * West US * France Central
+* Germany West Central
* North Europe * West Europe * UK South
+* UK West
* Brazil South
+* Australia Central
* Australia East * Australia Southeast * East Asia
+* Southeast Asia
+* Korea Central
### Feature availability
-Most capabilities of the existing (v1) tiers are planned for the v2 tiers. However, the following capabilities aren't supported in the v2 tiers:
+Most capabilities of the classic API Management tiers are supported in the v2 tiers. However, the following capabilities aren't supported in the v2 tiers:
* API Management service configuration using Git * Back up and restore of API Management instance * Enabling Azure DDoS Protection
+* Built-in analytics (replaced with Azure Monitor-based dashboard)
-### Preview limitations
-
-Currently, the following API Management capabilities are unavailable in the v2 tiers preview and are planned for later release. Where indicated, certain features are planned only for the Standard v2 tier. Features may be enabled during the preview period.
+### Limitations
+The following API Management capabilities are currently unavailable in the v2 tiers.
**Infrastructure and networking**
-* Zone redundancy (*Standard v2*)
-* Multi-region deployment (*Standard v2*)
-* Multiple custom domain names (*Standard v2*)
+* Zone redundancy
+* Multi-region deployment
+* Multiple custom domain names
* Capacity metric * Autoscaling
-* Built-in analytics
* Inbound connection using a private endpoint
+* Injection in a VNet in external mode or internal mode
* Upgrade to v2 tiers from v1 tiers
-* Workspaces (*Standard v2*)
+* Workspaces
**Developer portal** * Delegation of user registration and product subscription * Reports
+* Custom HTML code widget and custom widget
+* Self-hosted developer portal
**Gateway**
-* Self-hosted gateway (*Standard v2*)
-* Management of Websocket APIs
-* Rate limit by key and quota by key policies
+* Self-hosted gateway
+* Quota by key policy
* Cipher configuration * Client certificate renegotiation
+* Request tracing in the test console
* Requests to the gateway over localhost
- > [!NOTE]
- > Currently the policy document size limit in the v2 tiers is 16 KiB.
+## Resource limits
+
+The following resource limits apply to the v2 tiers.
++
+## Developer portal limits
+
+The following limits apply to the developer portal in the v2 tiers.
+ ## Deployment
Deploy an instance of the Basic v2 or Standard v2 tier using the Azure portal, A
### Q: Can I migrate from my existing API Management instance to a new v2 tier instance?
-A: No. Currently you can't migrate an existing API Management instance (in the Consumption, Developer, Basic, Standard, or Premium tier) to a new v2 tier instance. Currently the new tiers are available for newly created service instances only.
+A: No. Currently you can't migrate an existing API Management instance (in the Consumption, Developer, Basic, Standard, or Premium tier) to a new v2 tier instance. Currently the v2 tiers are available for newly created service instances only.
### Q: What's the relationship between the stv2 compute platform and the v2 tiers?
A: Yes, there are no changes to the Basic or Standard tiers.
### Q: What is the difference between VNet integration in Standard v2 tier and VNet support in the Premium tier?
-A: A Standard v2 service instance can be integrated with a VNet to provide secure access to the backends residing there. A Standard v2 service instance integrated with a VNet will have a public IP address that can be secured separately, via Private Link, if necessary. The Premium tier supports a [fully private integration](api-management-using-with-internal-vnet.md) with a VNet (often referred to as injection into VNet) without exposing a public IP address.
+A: A Standard v2 service instance can be integrated with a VNet to provide secure access to the backends residing there. A Standard v2 service instance integrated with a VNet will have a public IP address. The Premium tier supports a [fully private integration](api-management-using-with-internal-vnet.md) with a VNet (often referred to as injection into VNet) without exposing a public IP address.
### Q: Can I deploy an instance of the Basic v2 or Standard v2 tier entirely in my VNet?
A: Yes, a Premium v2 preview is planned and will be announced separately.
## Related content
-* Learn more about the API Management [tiers](api-management-features.md).
--
+* Compare the API Management [tiers](api-management-features.md).
+* Learn more about [API Management gateways](api-management-gateways-overview.md).
+* Learn about [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
Previously updated : 10/19/2023 Last updated : 03/18/2024 # Validate Microsoft Entra token
-The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Microsoft Entra service for a specified set of principals in the directory. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable.
+
+The `validate-azure-ad-token` policy enforces the existence and validity of a JSON web token (JWT) that was provided by the Microsoft Entra (formerly called Azure Active Directory) service for a specified set of principals in the directory. The JWT can be extracted from a specified HTTP header, query parameter, or value provided using a policy expression or context variable.
> [!NOTE] > To validate a JWT that was provided by another identity provider, API Management also provides the generic [`validate-jwt`](validate-jwt-policy.md) policy.
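A minimal sketch of the policy, assuming `{{tenant-id}}` and `{{client-app-id}}` are named values you define (placeholders, not from the article):

```xml
<!-- Illustrative only: {{tenant-id}} and {{client-app-id}} are assumed named values -->
<validate-azure-ad-token tenant-id="{{tenant-id}}" header-name="Authorization" failed-validation-httpcode="401">
    <client-application-ids>
        <application-id>{{client-app-id}}</application-id>
    </client-application-ids>
</validate-azure-ad-token>
```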
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
For more details on optional claims, read [Provide optional claims to your app](
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Validate Client Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-client-certificate-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Validate client certificate + Use the `validate-client-certificate` policy to enforce that a certificate presented by a client to an API Management instance matches specified validation rules and claims such as subject or issuer for one or more certificate identities. To be considered valid, a client certificate must match all the validation rules defined by the attributes at the top-level element and match all defined claims for at least one of the defined identities.
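To make the rule/identity relationship concrete, here's a minimal sketch with placeholder subject and issuer values (illustrative, not from the article):

```xml
<!-- Illustrative values: the certificate must satisfy all top-level rules
     and match at least one <identity> -->
<validate-client-certificate validate-revocation="true"
                             validate-trust="true"
                             validate-not-before="true"
                             validate-not-after="true"
                             ignore-error="false">
    <identities>
        <identity subject="CN=client.contoso.example" issuer-subject="CN=Contoso CA" />
    </identities>
</validate-client-certificate>
```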
For more information about custom CA certificates and certificate authorities, s
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
The following example validates a client certificate to match the policy's defau
## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Validate Content Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md
Previously updated : 12/05/2022 Last updated : 03/18/2024 # Validate content++ The `validate-content` policy validates the size or content of a request or response body against one or more [supported schemas](#schemas-for-content-validation). The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
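A minimal sketch, assuming a placeholder size limit and variable name (illustrative, not from the article):

```xml
<!-- Illustrative only: rejects bodies over ~100 KB or with unexpected content types,
     and validates JSON bodies as JSON -->
<validate-content unspecified-content-type-action="prevent"
                  max-size="102400"
                  size-exceeded-action="prevent"
                  errors-variable-name="bodyValidationErrors">
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```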
The policy validates the following content in the request or response against th
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
[!INCLUDE [api-management-validation-policy-common](../../includes/api-management-validation-policy-common.md)]
In the following example, API Management interprets any request as a request wit
## Related policies
-* [API Management validation policies](validation-policies.md)
+* [Content validation](api-management-policies.md#content-validation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Validate Graphql Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md
Previously updated : 12/02/2022 Last updated : 03/18/2024 # Validate GraphQL request + The `validate-graphql-request` policy validates the GraphQL request and authorizes access to specific query paths in a GraphQL API. An invalid query is a "request error". Authorization is only done for valid requests. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
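A hedged sketch, assuming a placeholder query path and limits (not from the article):

```xml
<!-- Illustrative only: limits request size and depth, and rejects callers of /Query/users -->
<validate-graphql-request error-variable-name="graphqlErrors" max-size="102400" max-depth="4">
    <authorize>
        <rule path="/Query/users" action="reject" />
    </authorize>
</validate-graphql-request>
```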
Available actions are described in the following table.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
This example applies the following validation and authorization rules to a Graph
## Related policies
-* [Validation policies](api-management-policies.md#validation-policies)
+* [Content validation](api-management-policies.md#content-validation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Validate Headers Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-headers-policy.md
Previously updated : 12/05/2022 Last updated : 03/18/2024 # Validate headers + The `validate-headers` policy validates the response headers against the API schema. > [!IMPORTANT]
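For illustration (the header name and variable name are placeholders, not from the article):

```xml
<!-- Illustrative only: blocks response headers not defined in the API schema,
     except an assumed legacy header that is explicitly ignored -->
<validate-headers specified-header-action="ignore"
                  unspecified-header-action="prevent"
                  errors-variable-name="headerValidationErrors">
    <header name="X-Legacy-Header" action="ignore" />
</validate-headers>
```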
The `validate-headers` policy validates the response headers against the API sch
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The `validate-headers` policy validates the response headers against the API sch
## Related policies
-* [API Management validation policies](validation-policies.md)
+* [Content validation](api-management-policies.md#content-validation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
Previously updated : 03/05/2024 Last updated : 03/18/2024 # Validate JWT + The `validate-jwt` policy enforces existence and validity of a supported JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or matching a specific value. > [!NOTE]
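A minimal sketch, assuming a placeholder tenant named value and audience (not from the article):

```xml
<!-- Illustrative only: validates a bearer token from the Authorization header
     against the tenant's OpenID configuration -->
<validate-jwt header-name="Authorization"
              failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized"
              require-scheme="Bearer">
    <openid-config url="https://login.microsoftonline.com/{{tenant-id}}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://my-backend-api</audience>
    </audiences>
</validate-jwt>
```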
The `validate-jwt` policy enforces existence and validity of a supported JSON we
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
This example shows how to use the `validate-jwt` policy to authorize access to o
``` ## Related policies
-* [API Management access restriction policies](api-management-access-restriction-policies.md)
+* [Authentication and authorization](api-management-policies.md#authentication-and-authorization)
api-management Validate Odata Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-odata-request-policy.md
Previously updated : 06/06/2023 Last updated : 03/18/2024 # Validate OData request + The `validate-odata-request` policy validates the request URL, headers, and parameters of a request to an OData API to ensure conformance with the [OData specification](https://www.odata.org/documentation). > [!NOTE]
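A hedged sketch, assuming placeholder version and size values (not from the article):

```xml
<!-- Illustrative only: validates inbound requests against OData 4.0 conventions
     and caps request size at ~100 KB -->
<validate-odata-request error-variable-name="odataValidationErrors"
                        default-odata-version="4.0"
                        max-size="102400" />
```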
The `validate-odata-request` policy validates the request URL, headers, and para
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The following example validates a request to an OData API and assumes a default
## Related policies
-* [Validation policies](api-management-policies.md#validation-policies)
+* [Content validation](api-management-policies.md#content-validation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Validate Parameters Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-parameters-policy.md
Previously updated : 12/05/2022 Last updated : 03/18/2024 # Validate parameters + The `validate-parameters` policy validates the header, query, or path parameters in requests against the API schema. > [!IMPORTANT]
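For illustration (the variable name is a placeholder), a sketch that validates query and path parameters in prevention mode while ignoring headers:

```xml
<!-- Illustrative only: query and path parameters inherit the top-level "prevent"
     actions; headers are explicitly ignored -->
<validate-parameters specified-parameter-action="prevent"
                     unspecified-parameter-action="prevent"
                     errors-variable-name="paramValidationErrors">
    <headers specified-parameter-action="ignore" unspecified-parameter-action="ignore" />
</validate-parameters>
```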
The `validate-parameters` policy validates the header, query, or path parameters
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
In this example, all query and path parameters are validated in the prevention m
## Related policies
-* [API Management validation policies](validation-policies.md)
+* [Content validation](api-management-policies.md#content-validation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Validate Service Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-service-updates.md
# Validate service updates to avoid disruption to your production API Management instances
-*"One of the value propositions of the cloud is that itΓÇÖs continually improving, delivering new capabilities and features, as well as security and reliability enhancements. But since the platform is continuously evolving, change is inevitable." - Mark Russinovich, CTO, Azure*
+
+*"One of the value propositions of the cloud is that itΓÇÖs continually improving, delivering new capabilities and features, as well as security and reliability enhancements. But since the platform is continuously evolving, change is inevitable."* - Mark Russinovich, CTO, Azure
Microsoft uses a safe deployment practices framework to thoroughly test, monitor, and validate service updates, and then deploy them to Azure regions using a phased approach. Even so, service updates that reach your API Management instances could introduce unanticipated risks to your production workloads and disrupt your API consumers. Learn how you can apply our safe deployment approach to reduce risks by validating the updates before they reach your production API Management environments.
Here are example strategies to use an API Management instance as a canary deploy
* **Deploy duplicate instances in a region** - If your production workload is a Premium tier instance in a specific region, consider deploying a similarly configured instance in a lower tier that receives updates earlier. For example, configure a pre-production instance in the Developer tier to validate updates.
-## Next steps
+## Related content
* Learn [how to monitor](api-management-howto-use-azure-monitor.md) your API Management instance. * Learn about other options to [observe](observability.md) your API Management instance.
api-management Validate Status Code Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-status-code-policy.md
Previously updated : 12/05/2022 Last updated : 03/18/2024 # Validate status code + The `validate-status-code` policy validates the HTTP status codes in responses against the API schema. This policy may be used to prevent leakage of backend errors, which can contain stack traces. [!INCLUDE [api-management-validation-policy-schema-size-note](../../includes/api-management-validation-policy-schema-size-note.md)]
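A minimal sketch with placeholder values (not from the article):

```xml
<!-- Illustrative only: blocks response status codes not declared in the API schema,
     but lets 404 through even if undeclared -->
<validate-status-code unspecified-status-code-action="prevent"
                      errors-variable-name="statusCodeErrors">
    <status-code code="404" action="ignore" />
</validate-status-code>
```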
The `validate-status-code` policy validates the HTTP status codes in responses a
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The `validate-status-code` policy validates the HTTP status codes in responses a
## Related policies
-* [API Management validation policies](validation-policies.md)
+* [Content validation](api-management-policies.md#content-validation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
Title: Azure API Management with an Azure virtual network
-description: Learn about scenarios and requirements to secure inbound and outbound traffic for your API Management instance using an Azure virtual network.
+description: Learn about scenarios and requirements to secure inbound or outbound traffic for your API Management instance using an Azure virtual network.
Previously updated : 09/14/2023 Last updated : 03/26/2024
-# Use a virtual network to secure inbound and outbound traffic for Azure API Management
+# Use a virtual network to secure inbound or outbound traffic for Azure API Management
-API Management provides several options to secure access to your API Management instance and APIs using an Azure virtual network. API Management supports the following options. Available options depend on the [service tier](api-management-features.md) of your API Management instance.
+By default, your API Management instance is accessible from the internet at a public endpoint and acts as a gateway to public backends. API Management provides several options to secure access to your API Management instance and to backend APIs using an Azure virtual network. Available options depend on the [service tier](api-management-features.md) of your API Management instance.
* **Injection** of the API Management instance into a subnet in the virtual network, enabling the gateway to access resources in the network. You can choose one of two injection modes: *external* or *internal*. They differ in whether inbound connectivity to the gateway and other API Management endpoints is allowed from the internet or only from within the virtual network.
+* **Integration** of your API Management instance with a subnet in a virtual network so that your API Management gateway can make outbound requests to API backends that are isolated in the network.
+ * **Enabling secure and private inbound connectivity** to the API Management gateway using a *private endpoint*. The following table compares virtual networking options. For more information, see later sections of this article and links to detailed guidance. |Networking model |Supported tiers |Supported components |Supported traffic |Usage scenario | |||||-|
-|**[Virtual network injection - external](#virtual-network-injection)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to internet, peered virtual networks, Express Route, and S2S VPN connections. | External access to private and on-premises backends
-|**[Virtual network injection - internal](#virtual-network-injection)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository. | Inbound and outbound traffic can be allowed to peered virtual networks, Express Route, and S2S VPN connections. | Internal access to private and on-premises backends
-|**[Inbound private endpoint](#inbound-private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported). | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway |
-
+|**[Virtual network injection - external](#virtual-network-injection)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to internet, peered virtual networks, Express Route, and S2S VPN connections. | External access to private and on-premises backends |
+|**[Virtual network injection - internal](#virtual-network-injection)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to peered virtual networks, Express Route, and S2S VPN connections. | Internal access to private and on-premises backends |
+|**[Outbound integration](#outbound-integration)** | Standard v2 | Gateway only | Outbound request traffic can reach APIs hosted in a delegated subnet of a virtual network. | External access to private and on-premises backends |
+|**[Inbound private endpoint](#inbound-private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported) | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway |
## Virtual network injection+ With VNet injection, deploy ("inject") your API Management instance in a subnet in a non-internet-routable network to which you control access. In the virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md). You can use the Azure portal, Azure CLI, Azure Resource Manager templates, or other tools for the configuration. You control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups](../virtual-network/network-security-groups-overview.md).
For detailed deployment steps and network configuration, see:
* [Deploy your API Management instance to a virtual network - external mode](./api-management-using-with-vnet.md). * [Deploy your API Management instance to a virtual network - internal mode](./api-management-using-with-internal-vnet.md).
+* [Network resource requirements for API Management injection into a virtual network](virtual-network-injection-resources.md).
### Access options Using a virtual network, you can configure the developer portal, API gateway, and other API Management endpoints to be accessible either from the internet (external mode) or only within the VNet (internal mode).
Using a virtual network, you can configure the developer portal, API gateway, an
* Enable hybrid cloud scenarios by exposing your cloud-based APIs and on-premises APIs through a common gateway. * Manage your APIs hosted in multiple geographic locations, using a single gateway endpoint.
+## Outbound integration
-### Network resource requirements for injection
-
-The following are virtual network resource requirements for API Management injection into a VNet. Some requirements differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
-
-#### [stv2](#tab/stv2)
-
-* An Azure Resource Manager virtual network is required.
-* You must provide a Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) in addition to specifying a virtual network and subnet.
-* The subnet used to connect to the API Management instance may contain other Azure resource types.
-* The subnet used to connect to the API Management instance should not have any delegations enabled. The "Delegate subnet to a service" setting for the subnet should be set to "None".
-* A [network security group](../virtual-network/network-security-groups-overview.md) attached to the subnet above. A network security group (NSG) is required to explicitly allow inbound connectivity, because the load balancer used internally by API Management is secure by default and rejects all inbound traffic.
-* The API Management service, virtual network and subnet, and public IP address resource must be in the same region and subscription.
-* For multi-region API Management deployments, configure virtual network resources separately for each location.
-
-#### [stv1](#tab/stv1)
-
-* An Azure Resource Manager virtual network is required.
-* The subnet used to connect to the API Management instance must be dedicated to API Management. It can't contain other Azure resource types.
-* The subnet used to connect to the API Management instance should not have any delegations enabled. The "Delegate subnet to a service" setting for the subnet should be set to "None".
-* The API Management service, virtual network, and subnet resources must be in the same region and subscription.
-* For multi-region API Management deployments, configure virtual network resources separately for each location.
--
+The Standard v2 tier supports VNet integration to allow your API Management instance to reach API backends that are isolated in a single connected VNet. The API Management gateway, management plane, and developer portal remain publicly accessible from the internet.
-### Subnet size
+Outbound integration enables the API Management instance to reach both public and network-isolated backend services.
-The minimum size of the subnet in which API Management can be deployed is /29, which provides three usable IP addresses. Each extra scale [unit](api-management-capacity.md) of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations:
-* Azure reserves five IP addresses within each subnet that can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance. Three more addresses are used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
-
-* In addition to the IP addresses used by the Azure VNet infrastructure, each API Management instance in the subnet uses:
- * Two IP addresses per unit of Basic, Standard, or Premium SKU, or
- * One IP address for the Developer SKU.
-
-* When deploying into an [internal VNet](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
-
-#### Examples
-
-* For Basic, Standard, or Premium SKUs:
-
- * **/29 subnet**: 8 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 0 remaining IP addresses left for scale-out units.
-
- * **/28 subnet**: 16 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 8 remaining IP addresses left for four scale-out units (2 IP addresses/scale-out unit) for a total of five units. **This subnet efficiently maximizes Basic and Standard SKU scale-out limits.**
-
- * **/27 subnet**: 32 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 24 remaining IP addresses left for twelve scale-out units (2 IP addresses/scale-out unit) for a total of thirteen units. **This subnet efficiently maximizes the soft-limit Premium SKU scale-out limit.**
-
- * **/26 subnet**: 64 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 56 remaining IP addresses left for twenty-eight scale-out units (2 IP addresses/scale-out unit) for a total of twenty-nine units. It is possible, with an Azure Support ticket, to scale the Premium SKU past twelve units. If you foresee such high demand, consider the /26 subnet.
-
- * **/25 subnet**: 128 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 120 remaining IP addresses left for sixty scale-out units (2 IP addresses/scale-out unit) for a total of sixty-one units. This is an extremely large, theoretical number of scale-out units.
-
-> [!IMPORTANT]
-> The private IP addresses of internal load balancer and API Management units are assigned dynamically. Therefore, it is impossible to anticipate the private IP of the API Management instance prior to its deployment. Additionally, changing to a different subnet and then returning may cause a change in the private IP address.
-
-### Routing
-
-See the Routing guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing).
-
-Learn more about the [IP addresses of API Management](api-management-howto-ip-addresses.md).
-
-### DNS
-
-* In external mode, the VNet enables [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) by default for your API Management endpoints and other Azure resources. It doesn't provide name resolution for on-premises resources. Optionally, configure your own DNS solution.
-
-* In internal mode, you must provide your own DNS solution to ensure name resolution for API Management endpoints and other required Azure resources. We recommend configuring an Azure [private DNS zone](../dns/private-dns-overview.md).
-
-For more information, see the DNS guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing).
-
-Related information:
-* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
-* [Create an Azure private DNS zone](../dns/private-dns-getstarted-portal.md)
-
-> [!IMPORTANT]
-> If you plan to use a custom DNS solution for the VNet, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates), or by selecting **Apply network configuration** in the service instance's network configuration window in the Azure portal.
-
-### Limitations
-
-Some virtual network limitations differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
-
-#### [stv2](#tab/stv2)
-
-* A subnet containing API Management instances can't be moved across subscriptions.
-* For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions.
-* To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
-
-#### [stv1](#tab/stv1)
-
-* A subnet containing API Management instances can't be moved across subscriptions.
-* For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions.
-* To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
-* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode doesn't work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
--
+For more information, see [Integrate an Azure API Management instance with a private VNet for outbound connections](integrate-vnet-outbound.md).
## Inbound private endpoint
Virtual network configuration with API Management:
* [Deploy your Azure API Management instance to a virtual network - external mode](./api-management-using-with-vnet.md). * [Deploy your Azure API Management instance to a virtual network - internal mode](./api-management-using-with-internal-vnet.md). * [Connect privately to API Management using a private endpoint](private-endpoint.md)
+* [Integrate an Azure API Management instance with a private VNet for outbound connections](integrate-vnet-outbound.md)
* [Defend your Azure API Management instance against DDoS attacks](protect-with-ddos-protection.md) Related articles:
api-management Virtual Network Injection Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-injection-resources.md
+
+ Title: Azure API Management virtual network integration - network resources
+description: Learn about requirements for network resources when you deploy (inject) your API Management instance in an Azure virtual network.
++++ Last updated : 03/26/2024+++
+# Network resource requirements for API Management injection into a virtual network
++
+The following are virtual network resource requirements for API Management injection into a virtual network. Some requirements differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
+
+#### [stv2](#tab/stv2)
+
+* An Azure Resource Manager virtual network is required.
+* You must provide a Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) in addition to specifying a virtual network and subnet.
+* The subnet used to connect to the API Management instance may contain other Azure resource types.
+* The subnet used to connect to the API Management instance should not have any delegations enabled. The "Delegate subnet to a service" setting for the subnet should be set to "None".
+* A [network security group](../virtual-network/network-security-groups-overview.md) attached to the subnet above. A network security group (NSG) is required to explicitly allow inbound connectivity, because the load balancer used internally by API Management is secure by default and rejects all inbound traffic.
+* The API Management service, virtual network and subnet, and public IP address resource must be in the same region and subscription.
+* For multi-region API Management deployments, configure virtual network resources separately for each location.
+
+#### [stv1](#tab/stv1)
+
+* An Azure Resource Manager virtual network is required.
+* The subnet used to connect to the API Management instance must be dedicated to API Management. It can't contain other Azure resource types.
+* The subnet used to connect to the API Management instance should not have any delegations enabled. The "Delegate subnet to a service" setting for the subnet should be set to "None".
+* The API Management service, virtual network, and subnet resources must be in the same region and subscription.
+* For multi-region API Management deployments, configure virtual network resources separately for each location.
++
+## Subnet size
+
+The minimum size of the subnet in which API Management can be deployed is /29, which provides three usable IP addresses. Each extra scale [unit](api-management-capacity.md) of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations:
+
+* Azure reserves five IP addresses within each subnet that can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance. Three more addresses are used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
+
+* In addition to the IP addresses used by the Azure virtual network infrastructure, each API Management instance in the subnet uses:
+ * Two IP addresses per unit of Basic, Standard, or Premium SKU, or
+ * One IP address for the Developer SKU.
+
+* When deploying into an [internal virtual network](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
+
+### Examples
+
+* **/29 subnet**: 8 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 0 remaining IP addresses left for scale-out units.
+
+* **/28 subnet**: 16 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 8 remaining IP addresses left for four scale-out units (2 IP addresses/scale-out unit) for a total of five units.
+
+* **/27 subnet**: 32 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 24 remaining IP addresses left for 12 scale-out units (2 IP addresses/scale-out unit) for a total of 13 units.
+
+* **/26 subnet**: 64 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 56 remaining IP addresses left for 28 scale-out units (2 IP addresses/scale-out unit) for a total of 29 units.
+
+* **/25 subnet**: 128 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 120 remaining IP addresses left for 60 scale-out units (2 IP addresses/scale-out unit) for a total of 61 units. This is a large, theoretical number of scale-out units.
+
+> [!NOTE]
+> It is currently possible to scale the Premium SKU to 31 units. If you foresee demand approaching this limit, consider the /26 or /25 subnet.
+
+> [!IMPORTANT]
+> The private IP addresses of internal load balancer and API Management units are assigned dynamically. Therefore, it is impossible to anticipate the private IP of the API Management instance prior to its deployment. Additionally, changing to a different subnet and then returning may cause a change in the private IP address.
+
+## Routing
+
+See the Routing guidance when deploying your API Management instance into an [external virtual network](./api-management-using-with-vnet.md#routing) or [internal virtual network](./api-management-using-with-internal-vnet.md#routing).
+
+Learn more about the [IP addresses of API Management](api-management-howto-ip-addresses.md).
+
+## DNS
+
+* In external mode, the virtual network enables [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) by default for your API Management endpoints and other Azure resources. It doesn't provide name resolution for on-premises resources. Optionally, configure your own DNS solution.
+
+* In internal mode, you must provide your own DNS solution to ensure name resolution for API Management endpoints and other required Azure resources. We recommend configuring an Azure [private DNS zone](../dns/private-dns-overview.md).
+
+For more information, see the DNS guidance when deploying your API Management instance into an [external virtual network](./api-management-using-with-vnet.md#routing) or [internal virtual network](./api-management-using-with-internal-vnet.md#routing).
+
+Related information:
+* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
+* [Create an Azure private DNS zone](../dns/private-dns-getstarted-portal.md)
+
+> [!IMPORTANT]
+> If you plan to use a custom DNS solution for the VNet, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates), or by selecting **Apply network configuration** in the service instance's network configuration window in the Azure portal.
+
+## Limitations
+
+Some virtual network limitations differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
+
+#### [stv2](#tab/stv2)
+
+* A subnet containing API Management instances can't be moved across subscriptions.
+* For multi-region API Management deployments configured in internal virtual network mode, users own the routing and are responsible for managing the load balancing across multiple regions.
+* To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
+
+#### [stv1](#tab/stv1)
+
+* A subnet containing API Management instances can't be moved across subscriptions.
+* For multi-region API Management deployments configured in internal virtual network mode, users own the routing and are responsible for managing the load balancing across multiple regions.
+* To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
+* Due to platform limitations, connectivity between a resource in a globally peered virtual network in another region and an API Management service in internal mode doesn't work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
++++
+## Related content
+
+* [Site-to-site VPN](../vpn-gateway/design.md#s2smulti)
+* [Connect virtual networks from different deployment models using PowerShell](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
+* [Azure Virtual Network frequently asked questions](../virtual-network/virtual-networks-faq.md)
++++
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
# Virtual network configuration reference: API Management
-This reference provides detailed network configuration settings for an API Management instance deployed in an Azure virtual network in the [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) mode.
+
+This reference provides detailed network configuration settings for an API Management instance deployed (injected) in an Azure virtual network in the [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) mode.
For VNet connectivity options, requirements, and considerations, see [Using a virtual network with Azure API Management](virtual-network-concepts.md).
api-management Visual Studio Code Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visual-studio-code-tutorial.md
# Tutorial: Use the Azure API Management extension for Visual Studio Code to import and manage APIs + In this tutorial, you learn how to use the API Management extension for Visual Studio Code for common operations in API Management. Use the familiar Visual Studio Code environment to import, update, test, and manage APIs. You learn how to:
api-management Visualize Using Managed Grafana Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visualize-using-managed-grafana-dashboard.md
# Visualize API Management monitoring data using a Managed Grafana dashboard + You can use [Azure Managed Grafana](../managed-grafana/index.yml) to visualize API Management monitoring data that is collected into a Log Analytics workspace. Use a prebuilt [API Management dashboard](https://grafana.com/grafana/dashboards/16604-azure-api-management) for real-time visualization of logs and metrics collected from your API Management instance. * [Learn more about Azure Managed Grafana](../managed-grafan)
api-management Vscode Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/vscode-create-service-instance.md
# Quickstart: Create a new Azure API Management instance using Visual Studio Code + This quickstart describes the steps to create a new API Management instance using the *Azure API Management Extension* for Visual Studio Code. After creating an instance, you can use the extension for common management tasks such as importing APIs in your API Management instance. [!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)]
api-management Wait Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/wait-policy.md
Previously updated : 12/08/2022 Last updated : 03/18/2024 # Wait + The `wait` policy executes its immediate child policies in parallel, and waits for either all or one of its immediate child policies to complete before it completes. The `wait` policy can have as its immediate child policies one or more of the following: [`send-request`](send-request-policy.md), [`cache-lookup-value`](cache-lookup-value-policy.md), and [`choose`](choose-policy.md) policies. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
May contain as child elements only `send-request`, `cache-lookup-value`, and `ch
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
In the following example, there are two `choose` policies as immediate child pol
## Related policies
-* [API Management advanced policies](api-management-advanced-policies.md)
+* [Policy control and flow](api-management-policies.md#policy-control-and-flow)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
# Import a WebSocket API + With API Management's WebSocket API solution, API publishers can quickly add a WebSocket API in API Management via the Azure portal, Azure CLI, Azure PowerShell, and other Azure tools. You can secure WebSocket APIs by applying existing access control policies, like [JWT validation](validate-jwt-policy.md). You can also test WebSocket APIs using the API test consoles in both Azure portal and developer portal. Building on existing observability capabilities, API Management provides metrics and logs for monitoring and troubleshooting WebSocket APIs.
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
# Workspaces in Azure API Management
-In API Management, *workspaces* allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. Each workspace contains APIs, products, subscriptions, and related entities that are accessible only to the workspace collaborators. Access is controlled through Azure role-based access control (RBAC).
- [!INCLUDE [api-management-availability-premium](../../includes/api-management-availability-premium.md)]
+In API Management, *workspaces* allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. Each workspace contains APIs, products, subscriptions, and related entities that are accessible only to the workspace collaborators. Access is controlled through Azure role-based access control (RBAC).
> [!NOTE] > * Workspaces are a preview feature of API Management and subject to certain [limitations](#preview-limitations).
api-management Xml To Json Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xml-to-json-policy.md
Previously updated : 12/02/2022 Last updated : 03/18/2024 # Convert XML to JSON++ The `xml-to-json` policy converts a request or response body from XML to JSON. This policy can be used to modernize APIs based on XML-only backend web services. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `xml-to-json` policy converts a request or response body from XML to JSON. T
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
## Example
The `xml-to-json` policy converts a request or response body from XML to JSON. T
## Related policies
-* [API Management transformation policies](api-management-transformation-policies.md)
+* [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Xsl Transform Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xsl-transform-policy.md
Previously updated : 01/02/2024 Last updated : 03/18/2024 # Transform XML using an XSLT + The `xsl-transform` policy applies an XSL transformation to XML in the request or response body. [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `xsl-transform` policy applies an XSL transformation to XML in the request o
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes
The `xsl-transform` policy applies an XSL transformation to XML in the request o
## Related policies -- [API Management transformation policies](api-management-transformation-policies.md)
+- [Transformation](api-management-policies.md#transformation)
[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
This example transform adds a new connector node to `server.xml`. Note the *Iden
<!-- This is the new connector --> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true"
- keystroreFile="${{user.home}}/.keystore" keystorePass="changeit"
+ keystoreFile="${{user.home}}/.keystore" keystorePass="changeit"
clientAuth="false" sslProtocol="TLS" /> </xsl:template>
An example xsl file is provided below. The example xsl file adds a new connector
<!-- This is the new connector --> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true"
- keystroreFile="${{user.home}}/.keystore" keystorePass="changeit"
+ keystoreFile="${{user.home}}/.keystore" keystorePass="changeit"
clientAuth="false" sslProtocol="TLS" /> </xsl:template>
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
To run the application locally:
pip install -r requirements.txt ```
-1. Integrate a database:
-
- ```Python
-
- from azure.cosmos.aio import CosmosClient
- from azure.cosmos import exceptions
- from azure.cosmos.partition_key import PartitionKey
-
- from configs.credential import HOST, MASTER_KEY, DATABASE_ID
--
- def get_database_client():
- # Initialize the Cosmos client
- client = CosmosClient(HOST, MASTER_KEY)
-
- # Create or get a reference to a database
- try:
- database = client.create_database_if_not_exists(id=DATABASE_ID)
- print(f'Database "{DATABASE_ID}" created or retrieved successfully.')
-
- except exceptions.CosmosResourceExistsError:
- database = client.get_database_client(DATABASE_ID)
- print('Database with id \'{0}\' was found'.format(DATABASE_ID))
-
- return database
--
- def get_container_client(container_id):
- database = get_database_client()
- # Create or get a reference to a container
- try:
- container = database.create_container(id=container_id, partition_key=PartitionKey(path='/partitionKey'))
- print('Container with id \'{0}\' created'.format(container_id))
-
- except exceptions.CosmosResourceExistsError:
- container = database.get_container_client(container_id)
- print('Container with id \'{0}\' was found'.format(container_id))
-
- return container
-
- async def create_item(container_id, item):
- async with CosmosClient(HOST, credential=MASTER_KEY) as client:
- database = client.get_database_client(DATABASE_ID)
- container = database.get_container_client(container_id)
- await container.upsert_item(body=item)
-
- async def get_items(container_id):
- items = []
- try:
- async with CosmosClient(HOST, credential=MASTER_KEY) as client:
- database = client.get_database_client(DATABASE_ID)
- container = database.get_container_client(container_id)
- async for item in container.read_all_items():
- items.append(item)
- except Exception as e:
- print(f"An error occurred: {e}")
-
- return items
- ```
- 1. Run the app: ```Console
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
description: Create your first WordPress site on Azure App Service in minutes.
keywords: app service, azure app service, wordpress, preview, app service on linux, plugins, mysql flexible server, wordpress on linux, php Previously updated : 05/15/2023 Last updated : 03/28/2024 # ms.devlang: wordpress
In this quickstart, you'll learn how to create and deploy your first [WordPress]
To complete this quickstart, you need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs).
-> [!IMPORTANT]
-> After November 28, 2022, [PHP will only be supported on App Service on Linux.](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74).
->
-> For migrating WordPress to App Service, visit [Migrating to App Service](migrate-wordpress.md). Additional documentation can be found at [WordPress - App Service on Linux](https://github.com/Azure/wordpress-linux-appservice).
->
-> To submit feedback on improving the WordPress experience on App Service, visit [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
->
- ## Create WordPress site using Azure portal 1. To start creating the WordPress site, browse to [https://portal.azure.com/#create/WordPress.WordPress](https://portal.azure.com/#create/WordPress.WordPress).
azure-app-configuration Quickstart Feature Flag Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-kubernetes-service.md
+
+ Title: Quickstart for using Azure App Configuration Feature Management in Azure Kubernetes Service
+description: In this quickstart, you create an ASP.NET Core web app running in AKS, control a feature flag in it, and use the Azure App Configuration Kubernetes Provider to load key-values and feature flags from an App Configuration store.
+++
+ms.devlang: csharp
++ Last updated : 02/23/2024+
+#Customer intent: As an Azure Kubernetes Service user, I want to manage all my app settings in one place using Azure App Configuration.
++
+# Quickstart: Add feature flags to workloads in Azure Kubernetes Service
+
+In this quickstart, you'll create a feature flag in Azure App Configuration and use it to dynamically control the visibility of a new web page in an ASP.NET Core app running in AKS without restarting or redeploying it.
+
+## Prerequisites
+
+Follow these documents to set up dynamic configuration in Azure Kubernetes Service:
+
+* [Quickstart: Use Azure App Configuration in Azure Kubernetes Service](./quickstart-azure-kubernetes-service.md)
+* [Tutorial: Use dynamic configuration in Azure Kubernetes Service](./enable-dynamic-configuration-azure-kubernetes-service.md)
+
+## Create a feature flag
+
+Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./quickstart-azure-app-configuration-create.md#create-a-feature-flag).
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing creating feature flag named Beta.](./media/add-beta-feature-flag.png)
+
+## Use a feature flag
+
+In this section, you'll use feature flags in a simple ASP.NET web application and run it in Azure Kubernetes Service (AKS).
+
+1. Navigate into the directory of the project you created in the [Quickstart](./quickstart-azure-kubernetes-service.md), and run the following command to add a reference to the [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) NuGet package, version 3.2.0 or later.
+
+ ```dotnetcli
+ dotnet add package Microsoft.FeatureManagement.AspNetCore
+ ```
+
+1. Open *program.cs*, and add feature management to the service collection of your app by calling `AddFeatureManagement`.
+
+ ```csharp
+ // Existing code in Program.cs
+ // ... ...
+
+ // Add a JSON configuration source
+ builder.Configuration.AddJsonFile("config/mysettings.json", reloadOnChange: true, optional: false);
+
+ // Add feature management to the container of services.
+ builder.Services.AddFeatureManagement();
+
+ var app = builder.Build();
+
+ // The rest of existing code in program.cs
+ // ... ...
+ ```
+
+ Add `using Microsoft.FeatureManagement;` at the top of the file if it's not present.
+
+1. Add a new empty Razor page named **Beta** under the *Pages* directory. It includes two files *Beta.cshtml* and *Beta.cshtml.cs*.
+
+ Open *Beta.cshtml*, and update it with the following markup:
+
+ ```cshtml
+ @page
+ @model MyWebApp.Pages.BetaModel
+ @{
+ ViewData["Title"] = "Beta Page";
+ }
+
+ <h1>This is the beta website.</h1>
+ ```
+
+ Open *Beta.cshtml.cs*, and add the `FeatureGate` attribute to the `BetaModel` class. The `FeatureGate` attribute ensures the *Beta* page is accessible only when the *Beta* feature flag is enabled. If the *Beta* feature flag isn't enabled, the page returns 404 Not Found.
+
+ ```csharp
+ using Microsoft.AspNetCore.Mvc.RazorPages;
+ using Microsoft.FeatureManagement.Mvc;
+
+ namespace MyWebApp.Pages
+ {
+ [FeatureGate("Beta")]
+ public class BetaModel : PageModel
+ {
+ public void OnGet()
+ {
+ }
+ }
+ }
+ ```
+
+1. Open *Pages/_ViewImports.cshtml*, and register the feature manager Tag Helper using an `@addTagHelper` directive:
+
+ ```cshtml
+ @addTagHelper *, Microsoft.FeatureManagement.AspNetCore
+ ```
+
+ The preceding code allows the `<feature>` Tag Helper to be used in the project's *.cshtml* files. A programmatic way to evaluate the same flag is sketched after this procedure.
+
+1. Open *_Layout.cshtml* in the *Pages*\\*Shared* directory. Insert a new `<feature>` tag in between the *Home* and *Privacy* navbar items, as shown in the highlighted lines below.
+
+ :::code language="html" source="../../includes/azure-app-configuration-navbar.md" range="22-36" highlight="6-10":::
+
+ The `<feature>` tag ensures the *Beta* menu item is shown only when the *Beta* feature flag is enabled.
+
+1. [Containerize the application](./quickstart-azure-kubernetes-service.md#containerize-the-application) and [Push the image to Azure Container Registry](./quickstart-azure-kubernetes-service.md#push-the-image-to-azure-container-registry).
+
+1. [Deploy the application](./quickstart-azure-kubernetes-service.md#deploy-the-application). Refresh the browser and the web page will look like this:
+
+ ![Screenshot showing Kubernetes Provider after using configMap without feature flag.](./media/quickstarts/kubernetes-provider-feature-flag-no-beta-home.png)
+
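+In addition to the `FeatureGate` attribute and the `<feature>` Tag Helper, you can evaluate a flag programmatically through `IFeatureManager`. The following is a minimal sketch; the page model is illustrative and not part of this quickstart.
+
+```csharp
+using Microsoft.AspNetCore.Mvc.RazorPages;
+using Microsoft.FeatureManagement;
+
+public class IndexModel : PageModel
+{
+    private readonly IFeatureManager _featureManager;
+
+    public IndexModel(IFeatureManager featureManager) => _featureManager = featureManager;
+
+    public bool IsBetaEnabled { get; private set; }
+
+    public async Task OnGetAsync()
+    {
+        // Evaluates the "Beta" flag loaded from the ConfigMap-backed configuration.
+        IsBetaEnabled = await _featureManager.IsEnabledAsync("Beta");
+    }
+}
+```
+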
+## Use Kubernetes Provider to load feature flags
+
+1. Update the *appConfigurationProvider.yaml* file located in the *Deployment* directory with the following content.
+
+ ```yaml
+ apiVersion: azconfig.io/v1
+ kind: AzureAppConfigurationProvider
+ metadata:
+ name: appconfigurationprovider-sample
+ spec:
+ endpoint: <your-app-configuration-store-endpoint>
+ target:
+ configMapName: configmap-created-by-appconfig-provider
+ configMapData:
+ type: json
+ key: mysettings.json
+ auth:
+ workloadIdentity:
+ managedIdentityClientId: <your-managed-identity-client-id>
+ featureFlag:
+ selectors:
+ - keyFilter: 'Beta'
+ refresh:
+ enabled: true
+ ```
+
+ > [!TIP]
+ > When no `selectors` are specified in the `featureFlag` section, the Kubernetes Provider doesn't load feature flags from your App Configuration store. The default refresh interval of feature flags is 30 seconds when `featureFlag.refresh` is enabled. You can customize this behavior via the `featureFlag.refresh.interval` parameter.
+
+1. Run the following command to apply the changes.
+
+ ```console
+ kubectl apply -f ./Deployment -n appconfig-demo
+ ```
+
+1. Update the **Beta** feature flag in your App Configuration store. Enable the flag by selecting the checkbox under **Enabled**.
+
+1. Refresh the browser a few times. Within 30 seconds, the ConfigMap is updated and the new content becomes visible.
+
+ ![Screenshot showing Kubernetes Provider after using configMap with feature flag enabled.](./media/quickstarts/kubernetes-provider-feature-flag-home.png)
+
+1. Select the **Beta** menu. It will bring you to the beta website that you enabled dynamically.
+
+ ![Screenshot showing beta page Kubernetes Provider after using configMap.](./media/quickstarts/kubernetes-provider-feature-flag-beta-page.png)
+
+## Clean up resources
+
+If you want to keep the AKS cluster, uninstall the App Configuration Kubernetes Provider from it.
+
+```console
+helm uninstall azureappconfiguration.kubernetesprovider --namespace azappconfig-system
+```
++
+## Next steps
+
+In this quickstart, you:
+
+* Added feature management capability to an ASP.NET Core app running in Azure Kubernetes Service (AKS).
+* Connected your AKS cluster to your App Configuration store using the App Configuration Kubernetes Provider.
+* Created a ConfigMap with key-values and feature flags from your App Configuration store.
+* Ran the application with dynamic configuration from your App Configuration store without changing your application code.
+
+To learn more about the Azure App Configuration Kubernetes Provider, see [Azure App Configuration Kubernetes Provider reference](./reference-kubernetes-provider.md).
+
+To learn more about feature management capability, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Enable features for targeted audiences](./howto-targetingfilter-aspnet-core.md)
+
+> [!div class="nextstepaction"]
+> [Use feature filters for conditional feature flags](./howto-feature-filters-aspnet-core.md)
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
See [Class based model](../azure-signalr/signalr-concept-serverless-development-
public class HubName1 : ServerlessHub { [FunctionName("SignalRTest")]
- public async Task SendMessage([SignalRTrigger]InvocationContext invocationContext, string message, ILogger logger)
+ public Task SendMessage([SignalRTrigger]InvocationContext invocationContext, string message, ILogger logger)
{ logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}."); }
Traditional model obeys the convention of Azure Function developed by C#. If you
```cs [FunctionName("SignalRTest")]
-public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMessage", parameterNames: new string[] {"message"})]InvocationContext invocationContext, string message, ILogger logger)
+public static Task Run([SignalRTrigger("SignalRTest", "messages", "SendMessage", parameterNames: new string[] {"message"})]InvocationContext invocationContext, string message, ILogger logger)
{ logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}."); }
Because it can be hard to use `ParameterNames` in the trigger, the following exa
```cs [FunctionName("SignalRTest")]
-public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMessage")]InvocationContext invocationContext, [SignalRParameter]string message, ILogger logger)
+public static Task Run([SignalRTrigger("SignalRTest", "messages", "SendMessage")]InvocationContext invocationContext, [SignalRParameter]string message, ILogger logger)
{ logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}."); }
app.generic("function1",
Here's the JavaScript code: ```javascript
-module.exports = async function (context, invocation) {
+module.exports = function (context, invocation) {
context.log(`Receive ${context.bindingData.message} from ${invocation.ConnectionId}.`) }; ```
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
When you deploy your project to a function app in Azure, the entire contents of
## Connect to a database
-[Azure Cosmos DB](../cosmos-db/introduction.md) is a fully managed NoSQL and relational database for modern app development including AI, digital commerce, Internet of Things, booking management, and other types of solutions. It offers single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. Its various APIs can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
+[Azure Cosmos DB](../cosmos-db/introduction.md) is a fully managed NoSQL, relational, and vector database for modern app development including AI, digital commerce, Internet of Things, booking management, and other types of solutions. It offers single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. Its various APIs can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
To connect to Cosmos DB, first [create an account, database, and container](../cosmos-db/nosql/quickstart-portal.md). Then you may connect Functions to Cosmos DB using [trigger and bindings](functions-bindings-cosmosdb-v2.md), like this [example](functions-add-output-binding-cosmos-db-vs-code.md). You may also use the Python library for Cosmos DB, like so:
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
Severity | The rule applies only to alerts with the selected severities. |
* If you define multiple filters in a rule, all the rules apply. There's a logical AND between all filters. For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule applies only for `Sev0` alerts on virtual machines in the scope. * Each filter can include up to five values. There's a logical OR between the values.
- For example, if you set `description contains "this, that" (in the field there is no need to write the apostrophes), then the rule applies only to alerts whose description contains either `this` or `that`.
+ For example, if you set description contains "this, that" (there's no need to include the quotation marks in the field), then the rule applies only to alerts whose description contains either "this" or "that".
 * Note that any spaces (before, after, or within the matched string) will affect the filter's matching. ### What should this rule do?
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md
Notice that if you delete an Application Insights resource, the associated Failu
## Triage and diagnose an alert
-An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there's some problem with your app or its environment.
An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there's some problem with your app or its environment. To investigate further, click on 'View full details in Application Insights.' The links in this page take you straight to a [search page](../app/diagnostic-search.md) filtered to the relevant requests, exception, dependency, or traces.
azure-monitor Sampling Classic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling-classic-api.md
Use the [examples in the earlier section of this page](#configuring-adaptive-sam
* Configuring too high a sampling percentage (not aggressive enough) results in an insufficient reduction in the volume of the collected telemetry. You can still experience telemetry data loss related to throttling, and the cost of using Application Insights might be higher than you planned due to overage charges.
+*What happens if I configure both IncludedTypes and ExcludedTypes settings?*
+
+* It's best not to set both `ExcludedTypes` and `IncludedTypes` in your configuration to prevent any conflicts and ensure clear telemetry collection settings.
+* Telemetry types that are listed in `ExcludedTypes` are excluded even if they're also set in the `IncludedTypes` setting; `ExcludedTypes` takes precedence over `IncludedTypes`, as the sketch below illustrates.
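+
+As a minimal sketch (assuming server-side adaptive sampling with the classic ASP.NET SDK; the values are illustrative), set only `excludedTypes` and leave `includedTypes` unset:
+
+```csharp
+using Microsoft.ApplicationInsights.Extensibility;
+using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
+
+var builder = TelemetryConfiguration.Active.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
+
+// Exclude exceptions from sampling; leaving includedTypes unset avoids conflicting settings.
+builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5, excludedTypes: "Exception");
+
+builder.Build();
+```
+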
+ *On what platforms can I use sampling?* * Ingestion sampling can occur automatically for any telemetry above a certain volume, if the SDK isn't performing sampling. This configuration would work, for example, if you're using an older version of the ASP.NET SDK or Java SDK.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | | Microsoft Graph | [MicrosoftGraphActivityLogs](/azure/azure-monitor/reference/tables/microsoftgraphactivitylogs) | | Monitor | [AzureMetricsV2](/azure/azure-monitor/reference/tables/AzureMetricsV2) |
-| Network Devices (Operator Nexus) | [MNFDeviceUpdates](/azure/azure-monitor/reference/tables/MNFDeviceUpdates)<br>[MNFSystemStateMessageUpdates](/azure/azure-monitor/reference/tables/MNFSystemStateMessageUpdates) |
+| Network Devices (Operator Nexus) | [MNFDeviceUpdates](/azure/azure-monitor/reference/tables/MNFDeviceUpdates)<br>[MNFSystemStateMessageUpdates](/azure/azure-monitor/reference/tables/MNFSystemStateMessageUpdates) <br>[MNFSystemSessionHistoryUpdates](/azure/azure-monitor/reference/tables/mnfsystemsessionhistoryupdates) |
| Network Managers | [AVNMConnectivityConfigurationChange](/azure/azure-monitor/reference/tables/AVNMConnectivityConfigurationChange)<br>[AVNMIPAMPoolAllocationChange](/azure/azure-monitor/reference/tables/AVNMIPAMPoolAllocationChange) | | Nexus Clusters | [NCCKubernetesLogs](/azure/azure-monitor/reference/tables/NCCKubernetesLogs)<br>[NCCVMOrchestrationLogs](/azure/azure-monitor/reference/tables/NCCVMOrchestrationLogs) | | Nexus Storage Appliances | [NCSStorageLogs](/azure/azure-monitor/reference/tables/NCSStorageLogs)<br>[NCSStorageAlerts](/azure/azure-monitor/reference/tables/NCSStorageAlerts) |
azure-netapp-files Faq Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-integration.md
Using Azure NetApp Files NFS or SMB volumes with AVS for *Guest OS mounts* is su
## Which Unicode Character Encoding does Azure NetApp Files support for the creation and display of file and directory names?
-Azure NetApp Files only supports file and directory names that are encoded with the [UTF-8 Unicode Character Encoding](https://en.wikipedia.org/wiki/UTF-8), *C locale* (or _C.UTF-8_) format for both NFS and SMB volumes. Only strict ASCII characters are valid.
-
-If you try to create files or directories using supplementary characters or surrogate pairs such as nonregular characters or emoji unsupported by C.UTF-8, the operation fails. A Windows client produces an error message similar to "The file name you specified is not valid or too long. Specify a different file name."
-
-For more information, see [Understand volume languages](understand-volume-languages.md).
+For information on Unicode character support, see [Understand volume languages](understand-volume-languages.md) and [Understand path lengths](understand-path-lengths.md).
## Does Azure Databricks support mounting Azure NetApp Files NFS volumes?
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## March 2024
+ * [Large volumes (Preview) improvement:](large-volumes-requirements-considerations.md) new minimum size of 50 TiB Large volumes support a minimum size of 50 TiB. Large volumes still support a maximum quota of 500 TiB.
-## March 2024
- * [Availability zone volume placement](manage-availability-zone-volume-placement.md) is now generally available (GA). You can deploy new volumes in the logical availability zone of your choice to create cross-zone volumes to improve resiliency in case of zonal failures. This feature is available in all availability zone-enabled regions with Azure NetApp Files presence.
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply when you use Azure Resource Manager and Azure resourc
## API Management limits
+This section provides information about limits that apply to Azure API Management instances in different [service tiers](../../api-management/api-management-features.md), including the following:
+
+* [API Management classic tiers](#limitsapi-management-classic-tiers)
+* [API Management v2 tiers](#limitsapi-management-v2-tiers)
+* [Developer portal in API Management v2 tiers](#limitsdeveloper-portal-in-api-management-v2-tiers)
+
+### Limits - API Management classic tiers
+ [!INCLUDE [api-management-service-limits](../../../includes/api-management-service-limits.md)]
+### Limits - API Management v2 tiers
++
+### Limits - Developer portal in API Management v2 tiers
+++ ## App Service limits [!INCLUDE [azure-websites-limits](../../../includes/azure-websites-limits.md)]
azure-signalr Signalr Concept Serverless Development Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md
The class-based model is dedicated for C#.
The class-based model provides better programming experience, which can replace SignalR input and output bindings, with the following features: - More flexible negotiation, sending messages and managing groups experience. - More managing functionalities are supported, including closing connections, checking whether a connection, user, or group exists.-- Strongly Typed hub-- Unified connection string setting in one place.
+- Strongly typed hub
+- Unified hub name and connection string setting in one place.
The following code demonstrates how to write SignalR bindings in class-based model:
-In the *Functions.cs* file, define your hub, which extends a base class `ServerlessHub`:
+First, define your hub, which derives from the `ServerlessHub` class:
```cs [SignalRConnection("AzureSignalRConnectionString")] public class Functions : ServerlessHub {
- private const string HubName = nameof(Functions);
+ private const string HubName = nameof(Functions); // Used by SignalR trigger only
public Functions(IServiceProvider serviceProvider) : base(serviceProvider) {
var host = new HostBuilder()
### Negotiation experience in class-based model
-Instead of using SignalR input binding `[SignalRConnectionInfoInput]`, negotiation in class-based model can be more flexible. Base class `ServerlessHub` has a method `NegotiateAsync`, which allows user to customize negotiation options such as `userId`, `claims`, etc.
+Instead of using SignalR input binding `[SignalRConnectionInfoInput]`, negotiation in class-based model can be more flexible. Base class `ServerlessHub` has a method `NegotiateAsync`, which allows users to customize negotiation options such as `userId`, `claims`, etc.
```cs Task<BinaryData> NegotiateAsync(NegotiationOptions? options = null)
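For example, a minimal sketch of an HTTP-triggered negotiate function on the hub (assuming `NegotiationOptions` exposes a `UserId` property; the `userId` header is illustrative):

```cs
[Function("negotiate")]
public async Task<HttpResponseData> Negotiate(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
{
    // Derive the user ID however your app authenticates; a header is used here for brevity.
    req.Headers.TryGetValues("userId", out var userIds);

    var negotiateResponse = await NegotiateAsync(new NegotiationOptions
    {
        UserId = userIds?.FirstOrDefault()
    });

    var response = req.CreateResponse();
    await response.Body.WriteAsync(negotiateResponse.ToArray());
    return response;
}
```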
You could send messages, manage groups, or manage clients by accessing the membe
- `ServerlessHub.UserGroups` for managing users with groups, such as adding users to groups, removing users from groups. - `ServerlessHub.ClientManager` for checking connections existence, closing connections, etc.
-### Strongly Typed Hub
+### Strongly typed Hub
[Strongly typed hub](/aspnet/core/signalr/hubs?#strongly-typed-hubs) allows you to use strongly typed methods when you send messages to clients. To use strongly typed hub in class based model, extract client methods into an interface `T`, and make your hub class derived from `ServerlessHub<T>`.
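For example, the client interface might look like the following sketch (the interface and method names are illustrative):

```cs
public interface IChatClient
{
    // Each method represents a strongly typed message that the hub can send to clients.
    Task ReceiveMessage(string senderName, string message);
}
```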
Then you can use the strongly typed methods as follows:
[SignalRConnection("AzureSignalRConnectionString")] public class Functions : ServerlessHub<IChatClient> {
- private const string HubName = nameof(Functions);
+ private const string HubName = nameof(Functions); // Used by SignalR trigger only
public Functions(IServiceProvider serviceProvider) : base(serviceProvider) {
public class Functions : ServerlessHub<IChatClient>
> [!NOTE] > You can get a complete project sample from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/DotnetIsolated-ClassBased/).
-### Unified connection string setting in one place
+### Unified hub name and connection string setting in one place
-You might have noticed the `SignalRConnection` attribute used on serverless hub classes. It looks like this:
-```cs
-[SignalRConnection("AzureSignalRConnectionString")]
-public class Functions : ServerlessHub<IChatClient>
-```
-
-It allows you to customize where the SignalR Service bindings look for connection string. If it's absent, the default value `AzureSignalRConnectionString` is used.
+* The class name of the serverless hub is automatically used as `HubName`.
+* You might have noticed the `SignalRConnection` attribute used on serverless hub classes as follows:
+ ```cs
+ [SignalRConnection("AzureSignalRConnectionString")]
+ public class Functions : ServerlessHub<IChatClient>
+ ```
+ It allows you to customize where the connection string for the serverless hub is configured. If the attribute is absent, the default setting name `AzureSignalRConnectionString` is used.
> [!IMPORTANT]
-> `SignalRConnection` attribute doesn't change the connection string setting of SignalR triggers, even though you use SignalR triggers inside the serverless hub. You should specify the connection string setting for each SignalR trigger if you want to customize it.
+> SignalR triggers and serverless hubs are independent. Therefore, the class name of the serverless hub and the `SignalRConnection` attribute don't change the settings of SignalR triggers, even when you use SignalR triggers inside the serverless hub.
# [In-process model](#tab/in-process)
public class HubName1 : ServerlessHub
} ```
-All functions that want to use the class-based model need to be a method of the class that inherits from **ServerlessHub**. The class name `SignalRTestHub` in the sample is the hub name.
+All functions that use the class-based model must be methods of a class that inherits from **ServerlessHub**. The class name `HubName1` in the sample is the hub name.
### Define hub method
In class based model, `[SignalRParameter]` is unnecessary because all the argume
### Negotiation experience in class-based model
-Instead of using SignalR input binding `[SignalR]`, negotiation in class-based model can be more flexible. Base class `ServerlessHub` has a method.
+Instead of using SignalR input binding `[SignalR]`, negotiation in class-based model can be more flexible. Base class `ServerlessHub` has a method:
```cs SignalRConnectionInfo Negotiate(string userId = null, IList<Claim> claims = null, TimeSpan? lifeTime = null) ```
-This features user customizes `userId` or `claims` during the function execution.
+This method allows users to customize `userId` or `claims` during the function execution.
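+
+For example, a minimal sketch (the header name is illustrative; derive the user ID however your app authenticates):
+
+```cs
+[FunctionName("negotiate")]
+public SignalRConnectionInfo Negotiate(
+    [HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req)
+{
+    // Pass a custom user ID (and optionally claims) to the base-class method.
+    return Negotiate(userId: req.Headers["x-ms-signalr-user-id"]);
+}
+```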
## Use `SignalRFilterAttribute`
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-security-integration.md
After connecting data sources to Microsoft Sentinel, you can create rules to gen
6. On the **Incident settings** tab, enable **Create incidents from alerts triggered by this analytics rule** and select **Next: Automated response**.
- :::image type="content" source="../sentinel/media/tutorial-detect-threats-custom/general-tab.png" alt-text="Screenshot showing the Analytic rule wizard for creating a new rule in Microsoft Sentinel.":::
+ :::image type="content" source="../sentinel/media/detect-threats-custom/general-tab.png" alt-text="Screenshot showing the Analytic rule wizard for creating a new rule in Microsoft Sentinel.":::
7. Select **Next: Review**.
azure-web-pubsub Concept Azure Ad Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-azure-ad-authorization.md
Microsoft Entra authorizes access rights to secured resources through [Azure rol
Before assigning an Azure RBAC role to a security principal, it's important to identify the appropriate level of access that the principal should have. It's recommended to grant the role with the narrowest possible scope. Resources located underneath inherit Azure RBAC roles with broader scopes.
-You can scope access to Azure SignalR resources at the following levels, beginning with the narrowest scope:
+You can scope access to Azure Web PubSub resources at the following levels, beginning with the narrowest scope:
- **An individual resource.**
backup Backup Instant Restore Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-instant-restore-capability.md
Title: Azure Instant Restore Capability description: Azure Instant Restore Capability and FAQs for VM backup stack, Resource Manager deployment model- Previously updated : 07/20/2023 Last updated : 04/03/2024 # Get improved backup and restore performance with Azure Backup Instant Restore capability
-> [!NOTE]
-> Based on feedback from users, we've renamed **VM backup stack V2** to **Instant Restore** to reduce confusion with Azure Stack functionality.
-> All Azure Backup users have now been upgraded to **Instant Restore**.
+This article describes the improved backup and restore performance that the Instant Restore capability provides in Azure Backup.
+
+## Key capabilities
-The new model for Instant Restore provides the following feature enhancements:
+The Instant Restore feature provides the following capabilities:
* Ability to use snapshots taken as part of a backup job that's available for recovery without waiting for data transfer to the vault to finish. It reduces the wait time for snapshots to copy to the vault before triggering restore.
-* Reduces backup and restore times by retaining snapshots locally, for two days by default. This default snapshot retention value is configurable to any value between 1 to 5 days.
+* Reduces backup and restore times by retaining snapshots locally: for *two days* by default with Standard policy and for *seven days* by default with Enhanced policy. The default snapshot retention value is configurable to any value between 1 and 5 days for Standard policy, and between 1 and 30 days for Enhanced policy.
* Supports disk sizes up to 32 TB. Resizing of disks isn't recommended by Azure Backup.
-* Supports Standard SSD disks along with Standard HDD disks and Premium SSD disks.
+* Standard policy supports Standard SSD disks along with Standard HDD disks and Premium SSD disks. Enhanced policy supports backup and instant restore of Premium SSD v2 and Ultra Disks, in addition to standard HDD, standard SSD, and Premium SSD v1 disks.
* Ability to use an unmanaged VM's original storage accounts (per disk) when restoring. This ability exists even when the VM has disks that are distributed across storage accounts. It speeds up restore operations for a wide variety of VM configurations. * For backup of VMs that use unmanaged premium disks in storage accounts, with Instant Restore, we recommend allocating *50%* free space of the total allocated storage space, which is required **only** for the first backup. The 50% free space isn't a requirement for backups after the first backup is complete.
-## What's new in this feature
+## How Instant Restore works?
-Currently, the backup job consists of two phases:
+A backup job consists of two phases:
1. Taking a VM snapshot. 2. Transferring a VM snapshot to the Azure Recovery Services vault.
-A recovery point is considered created only after phases 1 and 2 are completed. As a part of this upgrade, a recovery point is created as soon as the snapshot is finished and this recovery point of snapshot type can be used to perform a restore using the same restore flow. You can identify this recovery point in the Azure portal by using "snapshot" as the recovery point type, and after the snapshot is transferred to the vault, the recovery point type changes to "snapshot and vault".
-
-![Backup job in VM backup stack Resource Manager deployment model--storage and vault](./media/backup-azure-vms/instant-rp-flow.png)
+A recovery point is created as soon as the snapshot is finished, and this snapshot-type recovery point can be used to perform a restore using the same restore flow. You can identify this recovery point in the Azure portal by the recovery point type *snapshot*; after the snapshot is transferred to the vault, the recovery point type changes to *snapshot and vault*.
-By default, snapshots are retained for two days. This feature allows restore operation from these snapshots there by cutting down the restore times. It reduces the time required to transform and copy data back from the vault.
## Feature considerations
-* Snapshots are stored along with the disks to boost recovery point creation and to speed up restore operations. As a result, you'll see storage costs that correspond to snapshots taken during this period.
-* Incremental snapshots are stored as page blobs. All the users using unmanaged disks are charged for the snapshots stored in their local storage account. Since the restore point collections used by Managed VM backups use blob snapshots at the underlying storage level, for managed disks you'll see costs corresponding to blob snapshot pricing and they're incremental.
-* For premium storage accounts, the snapshots taken for instant recovery points count towards the 10-TB limit of allocated space.
-* You get an ability to configure the snapshot retention based on the restore needs. Depending on the requirement, you can set the snapshot retention to a minimum of one day in the backup policy pane as explained below. This will help you save cost for snapshot retention if you don't perform restores frequently.
-* It's a one directional upgrade. Once upgraded to Instant restore, you can't go back.
-* When you use an Instant Restore recovery point, you must restore the VM or disks to a subscription and resource group that don't require CMK-encrypted disks via Azure Policy.
-
->[!NOTE]
->With this instant restore upgrade, the snapshot retention duration of all the customers (**new and existing both included**) will be set to a default value of two days. However, you can set the duration according to your requirement to any value between 1 to 5 days.
+* The snapshots are stored along with the disks to boost recovery point creation and to speed up restore operations. As a result, you'll see storage costs that correspond to snapshots taken during this period.
+* For standard policy, all snapshots are incremental in nature and are stored as page blobs. All the users using unmanaged disks are charged for the snapshots stored in their local storage account. Since the restore point collections used by Managed VM backups use blob snapshots at the underlying storage level, for managed disks you'll see costs corresponding to blob snapshot pricing and they're incremental.
+* For premium storage accounts, the snapshots taken for instant recovery points count towards the 10-TB limit of allocated space. For Enhanced policy, only Managed VM backups are supported. The initial snapshot is a full copy of the disk(s). The subsequent snapshots are incremental in nature and occupy only delta changes to disks since the last snapshot.
+* When you use an Instant Restore recovery point, you must restore the VM or disks to a subscription and resource group that don't require CMK-encrypted disks via Azure Policy.
## Cost impact
-The incremental snapshots are stored in the VM's storage account, which is used for instant recovery. Incremental snapshot means the space occupied by a snapshot is equal to the space occupied by pages that are written after the snapshot was created. Billing is still for the per GB used space occupied by the snapshot, and the price per GB is same as mentioned on the [pricing page](https://azure.microsoft.com/pricing/details/managed-disks/). For VMs that use unmanaged disks, the snapshots can be seen in the menu for the VHD file of each disk. For managed disks, snapshots are stored in a restore point collection resource in a designated resource group, and the snapshots themselves aren't directly visible.
+The Instant Restore feature stores snapshots along with the disks to boost recovery point creation and speed up restore operations. This incurs additional storage costs for the corresponding snapshots taken during this period. The snapshot storage cost varies depending on the type of backup policy.
+
+### Cost impact of standard policy
+
+Standard policy uses blob snapshots for Instant Restore functionality. All snapshots are incremental in nature and stored in the VM's storage account, which is used for instant recovery. Incremental snapshot means the space occupied by a snapshot is equal to the space occupied by pages that are written after the snapshot was created. Billing is still for the per-GB used space occupied by the snapshot as explained in [this section](../storage/blobs/snapshots-overview.md#pricing-and-billing). As an illustration, consider a 100-GB VM with a change rate of 2% and an Instant Restore retention of 5 days. In this case, the snapshot storage billed is 10 GB (100 × 0.02 × 5).
+
+For VMs that use unmanaged disks, the snapshots can be seen in the menu for the VHD file of each disk. For managed disks, snapshots are stored in a restore point collection resource in a designated resource group, and the snapshots themselves aren't directly visible.
+
+### Cost impact of enhanced policy
+
+Enhanced policy uses Managed disk snapshots for Instant Restore functionality. The initial snapshot is a full copy of the disk(s). The subsequent snapshots are incremental in nature and occupy only delta changes to disks since the last snapshot. Pricing for managed disk snapshots is explained in [this pricing page](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+For example, a 100-GB VM with a change rate of 2% and an Instant Restore retention of 5 days is billed for 108 GB of snapshot storage (100 + 100 × 0.02 × 4).
>[!NOTE]
-> Snapshot retention is fixed to 5 days for weekly policies.
+> For weekly policies, snapshot retention is fixed to 5 days with Standard policy and can vary between 5 and 20 days with Enhanced policy.
## Configure snapshot retention
Yes, for premium storage accounts the snapshots taken for instant recovery point
### How does the snapshot retention work during the five-day period?
-Each day a new snapshot is taken, then there are five individual incremental snapshots. The size of the snapshot depends on the data churn, which are in most cases around 2%-7%.
+For Standard policy, each day a new snapshot is taken, so there are five individual incremental snapshots. The size of each snapshot depends on the data churn, which is in most cases around 2%-7%. For Enhanced policy, the initial snapshot is a full snapshot and subsequent snapshots are incremental in nature.
### Is an instant restore snapshot an incremental snapshot or full snapshot?
-Snapshots taken as a part of instant restore capability are incremental snapshots.
+For Standard policy, snapshots taken as a part of instant restore capability are incremental snapshots. For Enhanced policy, the initial snapshot is a full snapshot and subsequent snapshots are incremental in nature.
### How can I calculate the approximate cost increase due to instant restore feature?
-It depends on the churn of the VM. In a steady state, you can assume the increase in cost is = Snapshot retention period daily churn per VM storage cost per GB.
+It depends on the churn of the VM.
+
+- **Standard policy**: In a steady state, you can assume the increase in cost = snapshot retention period × daily churn per VM × snapshot storage cost per GB.
+- **Enhanced policy**: In a steady state, you can assume the increase in cost = ((size of VM) + (snapshot retention period - 1) × daily churn per VM) × snapshot storage cost per GB.
### If the recovery type for a restore point is "Snapshot and vault" and I perform a restore operation, which recovery type will be used? If the recovery type is "snapshot and vault", restore will be automatically done from the local snapshot, which will be much faster compared to the restore done from the vault.
-### What happens if I select retention period of restore point (Tier 2) less than the snapshot (Tier1) retention period?
+### What happens if I select retention period of restore point (Tier 2) less than the snapshot (Tier 1) retention period?
-The new model doesn't allow deleting the restore point (Tier2) unless the snapshot (Tier1) is deleted. We recommend scheduling restore point (Tier2) retention period greater than the snapshot retention period.
+The new model doesn't allow deleting the restore point (Tier 2) unless the snapshot (Tier 1) is deleted. We recommend scheduling restore point (Tier 2) retention period greater than the snapshot retention period.
### Why does my snapshot still exist, even after the set retention period in backup policy?
If the recovery point has a snapshot and it's the latest recovery point availabl
### Why do I see more snapshots than my retention policy?
-In a scenario where a retention policy is set as ΓÇ£1ΓÇ¥, you can find two snapshots. This mandates that at least one latest recovery point always be present, in case all subsequent backups fail due to an issue in the VM. This can cause the presence of two snapshots.<br></br>So, if the policy is for "n" snapshots, you can find ΓÇ£n+1ΓÇ¥ snapshots at times. Further, you can even find ΓÇ£n+1+2ΓÇ¥ snapshots if there is a delay in garbage collection. This can happen at rare times when:
+In a scenario where a retention policy is set as "1", you can find two snapshots. This mandates that at least one latest recovery point always be present, in case all subsequent backups fail due to an issue in the VM. This can cause the presence of two snapshots.<br></br>So, if the policy is for "n" snapshots, you can find "n+1" snapshots at times. Further, you can even find "n+1+2" snapshots if there's a delay in garbage collection. This can happen at rare times when:
- You clean up snapshots, which are past retention. - The garbage collector (GC) in the backend is under heavy load.
Instant restore feature is enabled for everyone and can't be disabled. You can r
### Is it safe to restart the VM during the transfer process (which can take many hours)? Will restarting the VM interrupt or slow down the transfer?
-Yes it's safe, and there is absolutely no impact in data transfer speed.
+Yes, it's safe, and there's no impact on data transfer speed.
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 02/29/2024 Last updated : 04/02/2024
You can use both resource and task metrics when you define a formula. You adjust
| Metric | Description | |-|--|
-| Resource | Resource metrics are based on the CPU, the bandwidth, the memory usage of compute nodes, and the number of nodes.<br><br>These service-defined variables are useful for making adjustments based on node count:<br>- $TargetDedicatedNodes <br>- $TargetLowPriorityNodes <br>- $CurrentDedicatedNodes <br>- $CurrentLowPriorityNodes <br>- $PreemptedNodeCount <br>- $SampleNodeCount <br><br>These service-defined variables are useful for making adjustments based on node resource usage: <br>- $CPUPercent <br>- $WallClockSeconds <br>- $MemoryBytes <br>- $DiskBytes <br>- $DiskReadBytes <br>- $DiskWriteBytes <br>- $DiskReadOps <br>- $DiskWriteOps <br>- $NetworkInBytes <br>- $NetworkOutBytes |
+| Resource | Resource metrics are based on the CPU, the bandwidth, the memory usage of compute nodes, and the number of nodes.<br><br>These service-defined variables are useful for making adjustments based on node count:<br>- $TargetDedicatedNodes <br>- $TargetLowPriorityNodes <br>- $CurrentDedicatedNodes <br>- $CurrentLowPriorityNodes <br>- $PreemptedNodeCount <br>- $UsableNodeCount <br><br>These service-defined variables are useful for making adjustments based on node resource usage: <br>- $CPUPercent <br>- $WallClockSeconds <br>- $MemoryBytes <br>- $DiskBytes <br>- $DiskReadBytes <br>- $DiskWriteBytes <br>- $DiskReadOps <br>- $DiskWriteOps <br>- $NetworkInBytes <br>- $NetworkOutBytes |
| Task | Task metrics are based on the status of tasks, such as Active, Pending, and Completed. The following service-defined variables are useful for making pool-size adjustments based on task metrics: <br>- $ActiveTasks <br>- $RunningTasks <br>- $PendingTasks <br>- $SucceededTasks <br>- $FailedTasks | ## Obtain sample data
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 02/29/2024 Last updated : 04/02/2024
A job doesn't automatically move to completed state unless explicitly terminated
There's a default [active job and job schedule quota](batch-quota-limit.md#resource-quotas). Jobs and job schedules in completed state don't count towards this quota.
+Delete jobs when they're no longer needed, even if they're in a completed state. Although completed jobs don't count
+towards the active job quota, it's beneficial to periodically clean them up. For example,
+[listing jobs](/rest/api/batchservice/job/list) is more efficient when the total number of jobs is smaller
+(even when proper filters are applied to the request).
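+
+As an illustration, a periodic cleanup might look like the following sketch. It assumes the track-1 [@azure/batch](https://www.npmjs.com/package/@azure/batch) JavaScript SDK; the account values are placeholders, and the method names should be verified against the SDK reference.
+
+```typescript
+import { BatchServiceClient, BatchSharedKeyCredentials } from "@azure/batch";
+
+// Placeholder credentials and endpoint.
+const credentials = new BatchSharedKeyCredentials("<account-name>", "<account-key>");
+const client = new BatchServiceClient(credentials, "https://<account>.<region>.batch.azure.com");
+
+// List completed jobs (a server-side filter keeps the response small), then delete them.
+async function cleanUpCompletedJobs(): Promise<void> {
+  const jobs = await client.job.list({
+    jobListOptions: { filter: "state eq 'completed'" },
+  });
+  for (const job of jobs) {
+    if (job.id) {
+      await client.job.deleteMethod(job.id); // 'delete' is exposed as deleteMethod
+    }
+  }
+}
+```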
+ ## Tasks [Tasks](jobs-and-tasks.md#tasks) are individual units of work that comprise a job. Tasks are submitted by the user and scheduled by Batch on to compute nodes. The following sections provide suggestions for designing your tasks to handle issues and perform efficiently.
Deleting tasks accomplishes two things:
> For tasks just submitted to Batch, the DeleteTask API call takes up to 10 minutes to take effect. Before it takes effect, > other tasks might be prevented from being scheduled. It's because Batch Scheduler still tries to schedule the tasks just > deleted. If you want to delete a task shortly after it's submitted, terminate the task instead (since the
-> terminate task will take effect immediately). And then delete the task 10 minutes later.
+> terminate task request will take effect immediately). And then delete the task 10 minutes later.
### Submit large numbers of tasks in collection
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md
The below table highlights comparison between these two options.
| Redeploy | In-place migration | |||
-| Customers can deploy a new cloud service directly in Azure Resource Manager and then delete the old cloud service in Azure Service Manager thorough validation. | The in-place migration tool enables a seamless, platform orchestrated migration of existing Cloud Services (classic) deployments to Cloud Services (extended support). |
+| Customers can deploy a new cloud service directly in Azure Resource Manager and then delete the old cloud service in Azure Service Manager after thorough validation. | The in-place migration tool enables a seamless, platform orchestrated migration of existing Cloud Services (classic) deployments to Cloud Services (extended support). |
| Redeploy allows customers to: <br><br> - Define resource names. <br><br> - Organize or reuse resources as preferred. <br><br> - Reuse service configuration and definition files with minimal changes. | For in-place migration, the platform: <br><br> - Defines resource names. <br><br> - Organizes each deployment and related resources in individual Resource Groups. <br><br> - Modifies existing configuration and definition file for Azure Resource Manager. | | Customers need to orchestrate traffic to the new deployment. | Migration retains IP address and data path remains the same. | | Customers need to delete the old cloud services in Azure Resource Manager. | Platform deletes the Cloud Services (classic) resources after migration. |
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
The Call Automation events are sent to the web hook callback URI specified when
To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples and sequence diagrams for various call control flows.
+When acknowledging callback events, it's a best practice to respond with a standard HTTP status code like 200 OK. Detailed response bodies are unnecessary; keep detailed event information for your own debugging processes.
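+
+For illustration, a minimal webhook handler might acknowledge events immediately and keep the details in its own logs. This sketch uses Express; the endpoint path and the logging are assumptions, not part of Call Automation itself.
+
+```typescript
+import express from "express";
+
+const app = express();
+app.use(express.json());
+
+// Hypothetical endpoint registered as the Call Automation callback URI.
+app.post("/api/callbacks", (req, res) => {
+  // Acknowledge immediately with a bare 200 OK; no response body is needed.
+  res.sendStatus(200);
+
+  // Keep detailed event information in your own logs for debugging.
+  const events = Array.isArray(req.body) ? req.body : [req.body];
+  for (const event of events) {
+    console.log(`Received event: ${event?.type}`);
+  }
+});
+
+app.listen(8080);
+```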
+ To learn how to secure the callback event delivery, refer to [this guide](../../how-tos/call-automation/secure-webhook-endpoint.md). ### Operation Callback Uri
communication-services Email Optout Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-optout-management.md
+
+ Title: Email opt-out management using suppression lists within Azure Communication Services Email
+
+description: Learn about managing opt-outs to enhance email delivery in your B2C communications.
++++ Last updated : 04/01/2024++++
+# Overview
++
+This article describes email delivery best practices and how to use the Azure Communication Services Email suppression list feature, which allows customers to manage opt-out capabilities for email communications. It also covers the features that are important for email opt-out management, helping you improve complaint handling, promote better email practices, and increase your email delivery success, boosting the likelihood of reaching recipients' inboxes.
+
+## Opt out or unsubscribe management: Ensuring transparent sender reputation
+It's important to know how interested your customers are in your email communication and to respect their opt-out or unsubscribe requests when they decide not to get emails from you. Honoring these requests helps you keep a good sender reputation. Whether you have a manual or automated process in place for handling unsubscribes, always provide an "unsubscribe" link in the email payload you send. When recipients decide not to receive further emails, they can select the "unsubscribe" link to remove their email address from your mailing list.
+
+The functionality of the links and instructions in the email is vital; they must be working correctly and promptly notify the application mailing list to remove the contact from the appropriate list or lists. A proper unsubscribe mechanism should be explicit and transparent from the subscriber's perspective, ensuring they know precisely which messages they're unsubscribing from. Ideally, they should be offered a preferences center that gives them the option to unsubscribe in cases where they're subscribed to multiple lists within your organization. This process prevents accidental unsubscribes and allows users to manage their opt-in and opt-out preferences effectively through the unsubscribe management process.
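+
+As a sketch, an email that carries both a visible unsubscribe link and a `List-Unsubscribe` header might be sent with the JavaScript Email SDK as follows; the connection string, addresses, and unsubscribe URL are placeholders.
+
+```typescript
+import { EmailClient } from "@azure/communication-email";
+
+const client = new EmailClient("<acs-connection-string>"); // placeholder
+
+// Send a message with an unsubscribe link in the body and a
+// List-Unsubscribe header to support one-click unsubscribe.
+async function sendWithUnsubscribe(): Promise<void> {
+  const poller = await client.beginSend({
+    senderAddress: "donotreply@contoso.com",
+    recipients: { to: [{ address: "customer@example.com" }] },
+    content: {
+      subject: "Monthly newsletter",
+      html: '<p>Hello!</p><p><a href="https://contoso.com/unsubscribe?id=123">Unsubscribe</a></p>',
+    },
+    headers: { "List-Unsubscribe": "<https://contoso.com/unsubscribe?id=123>" },
+  });
+  await poller.pollUntilDone();
+}
+```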
+
+## Managing emails opt out preferences with suppression list in Azure Communication Service Email
+Azure Communication Services Email offers a powerful platform with a centralized managed unsubscribe list, with opt-out preferences saved to our data store. This feature helps developers meet email provider guidelines that require a one-click list-unsubscribe implementation in the emails sent from our platform. To help you proactively identify and avoid significant delivery problems, the suppression list feature:
+
+* Offers domain-level, customer managed lists that provide opt-out capabilities.
+* Provides Azure resources that allow for Create, Read, Update, and Delete (CRUD) operations via the Azure portal, Management SDKs, or REST APIs (see the sketch after this list).
+* Applies filters in the sending pipeline: all recipients are filtered against the addresses in the domain suppression lists, and email delivery isn't attempted for those recipient addresses.
+* Gives the ability to manage a suppression list for each sender email address, which is used to filter/suppress email recipient addresses when sending emails.
+* Caches suppression list data to reduce expensive database lookups, and this caching is domain-specific based on the frequency of use.
+* Lets you add email addresses programmatically for an easy opt-out and unsubscribe process.
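+
+For example, adding a recipient address to a domain suppression list through the management REST API might look like the following sketch. The resource names are placeholders, and the route and API version reflect the preview suppression list API; verify both against the current REST reference.
+
+```typescript
+import { randomUUID } from "crypto";
+import { DefaultAzureCredential } from "@azure/identity";
+
+// Placeholder ARM coordinates for the suppression list resource.
+const base =
+  "https://management.azure.com/subscriptions/<subscription-id>" +
+  "/resourceGroups/<resource-group>/providers/Microsoft.Communication" +
+  "/emailServices/<email-service>/domains/<domain>/suppressionLists/<list-name>";
+const apiVersion = "2023-06-01-preview"; // assumed preview version; confirm in the docs
+
+// Add a single recipient address to the suppression list.
+async function suppressAddress(email: string): Promise<void> {
+  const credential = new DefaultAzureCredential();
+  const token = await credential.getToken("https://management.azure.com/.default");
+  const url = `${base}/suppressionListAddresses/${randomUUID()}?api-version=${apiVersion}`;
+  const res = await fetch(url, {
+    method: "PUT",
+    headers: {
+      Authorization: `Bearer ${token.token}`,
+      "Content-Type": "application/json",
+    },
+    body: JSON.stringify({ properties: { email } }),
+  });
+  if (!res.ok) {
+    throw new Error(`Suppression request failed: ${res.status}`);
+  }
+}
+```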
+
+### Benefits of opt out or unsubscribe management
+Using a suppression list in Azure Communication Services offers several benefits:
+* Compliance and Legal Considerations: This feature is crucial for adhering to legal responsibilities defined in local government legislation like the CAN-SPAM Act in the United States. It ensures that customers can easily manage opt-outs and maintain compliance with these regulations.
+* Better Sender Reputation: When emails aren't sent to users who have chosen to opt out, it helps protect the sender's reputation and lowers the chance of being blocked by email providers.
+* Improved User Experience: It respects the preferences of users who don't wish to receive communications, leading to a better user experience and potentially higher engagement rates with recipients who choose to receive emails.
+* Operational Efficiency: Suppression lists can be managed programmatically, allowing for efficient handling of large numbers of opt-out requests without manual intervention.
+* Cost-Effectiveness: By not sending emails to recipients who opted out, it reduces the volume of sent emails, which can lower operational costs associated with email delivery.
+* Data-Driven Decisions: The suppression list feature provides insights into the number of opt-outs, which can be valuable data for making informed decisions about email campaign strategies.
+
+These benefits contribute to a more efficient, compliant, and user-friendly email communication system when using Azure Communication Services. To enable email logs and monitor your email delivery, follow the steps outlined in [Azure Communication Services email logs](../../concepts/analytics/logs/email-logs.md).
+
+## Next steps
+
+You might also find the following articles helpful:
+
+- Familiarize yourself with the [Email client library](../email/sdk-features.md)
+- Learn how to send emails with custom verified domains: [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)
+- Learn how to send emails with Azure Managed Domains: [Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Email Smtp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-smtp-overview.md
# Azure Communication Services Email SMTP as Service+ Email is still a vital channel for global businesses to connect with customers, and it's an essential part of business communications. Many businesses made large investments in on-premises infrastructures to support the strong SMTP email needs of their line-of-business (LOB) applications. However, delivering and securing outgoing emails from these existing LOB applications poses varied challenges. As outgoing emails become more numerous and important, the difficulties of managing this critical aspect of communication become more obvious. Organizations often face problems such as email deliverability, security risks, and the need for centralized control over outgoing communications.
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md
The list of geographies you can choose from includes:
- United Kingdom - United States
+> [!Note]
+> Advanced Messaging for WhatsApp is only available in the following regions:
+
+- Asia Pacific
+- Australia
+- Europe
+- United Kingdom
+- United States
+ ## Data collection Azure Communication Services only collects diagnostic data required to deliver the service.
communication-services Known Limitations Acs Telephony https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/known-limitations-acs-telephony.md
Previously updated : 12/05/2023 Last updated : 04/03/2024
This article provides information about limitations and known issues related to
- Location-based routing isn't supported. - No quality dashboard is available for customers. - Enhanced 911 isn't supported.-- In-band DTMF is not supported, use RFC 2833 DTMF instead.-- Multiple IP addresses mapped with the same FQDN on the SBC side are not supported.
+- In-band Dual-tone multi-frequency (DTMF) isn't supported. Use RFC 2833 DTMF instead.
+- Multiple IP addresses mapped with the same FQDN on the SBC side aren't supported.
+- Maximum call duration is 30 hours.
## Next steps
communication-services Send Email Smtp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-smtp/send-email-smtp.md
zone_pivot_groups: acs-smtp-sending-method # Quickstart: Send email with SMTP In this quickstart, you learn how to send email using SMTP.
communication-services Smtp Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-smtp/smtp-authentication.md
# Quickstart: How to create authentication credentials for sending emails using SMTP - In this quickstart, you learn how to use a Microsoft Entra application to create the authentication credentials for using SMTP to send an email using Azure Communication Services. ## Prerequisites
communication-services Ask Device Permission Api Takes Too Long https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/device-issues/ask-device-permission-api-takes-too-long.md
+
+ Title: Device and permission issues - askDevicePermission API takes too long
+
+description: Learn how to troubleshoot when askDevicePermission API takes too long.
++++ Last updated : 03/29/2024+++++
+# The askDevicePermission API takes too long
+The [`askDevicePermission`](/javascript/api/%40azure/communication-react/calladapterdevicemanagement?view=azure-node-latest&preserve-view=true#@azure-communication-react-calladapterdevicemanagement-askdevicepermission) API prompts the end user via the browser asking if they allow permission to use camera or microphone.
+If the end user approves camera or microphone usage, those devices become available for use in a call. The devices' availability is reflected in the available device list.
+
+A user taking a long time to approve the permission can delay the API response.
+
+Occasionally, the device list update step can take a long time.
+A delay in the driver layer is usually the cause of the issue, and it can happen with some virtual audio devices in particular. For an example, see [Chromium Issue 1402866](https://bugs.chromium.org/p/chromium/issues/detail?id=1402866&no_tracker_redirect=1).
+
+## How to detect using the SDK
+To detect this issue, you can measure the time difference between when you call the [`askDevicePermission`](/javascript/api/%40azure/communication-react/calladapterdevicemanagement?view=azure-node-latest&preserve-view=true#@azure-communication-react-calladapterdevicemanagement-askdevicepermission) API and when the promise resolves or rejects.
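+
+For example, a timing check might look like the following sketch; the five-second threshold is an illustrative assumption.
+
+```typescript
+import type { DeviceManager } from "@azure/communication-calling";
+
+// Measure how long askDevicePermission takes to settle.
+async function timedAskDevicePermission(deviceManager: DeviceManager): Promise<void> {
+  const start = Date.now();
+  try {
+    await deviceManager.askDevicePermission({ audio: true, video: true });
+  } finally {
+    const elapsedMs = Date.now() - start;
+    if (elapsedMs > 5000) {
+      // Illustrative threshold; tune it for your application.
+      console.warn(`askDevicePermission took ${elapsedMs} ms`);
+    }
+  }
+}
+```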
+
+## How to mitigate or resolve
+If the [`askDevicePermission`](/javascript/api/%40azure/communication-react/calladapterdevicemanagement?view=azure-node-latest&preserve-view=true#@azure-communication-react-calladapterdevicemanagement-askdevicepermission) API fails because the user didn't respond to the permission prompt,
+the application can retry the API call, and the user should see the permission prompt again.
+
+For other causes, such as the device list update taking too long to complete, the user should check their devices to see whether any device could be causing the issue.
+They may need to update or remove the problematic device to resolve the issue.
communication-services No Enumerated Microphone List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/device-issues/no-enumerated-microphone-list.md
+
+ Title: Device and permission issues - getMicrophones API doesn't return detailed microphone list
+
+description: Learn how to troubleshoot when getMicrophones API doesn't return detailed microphone list.
++++ Last updated : 03/29/2024+++++
+# The getMicrophones API doesn't return detailed microphone list
+If a user reports they can't see the detailed microphone list,
+it's likely because the user didn't grant permission to access the microphone.
+When the permission state is `prompt` or `denied`, the browser doesn't provide detailed information about the microphone devices.
+In this scenario, the [`DeviceManager.getMicrophones`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getmicrophones) API returns an array with one object, where the `id` is set to `microphone:` and the name is set to an empty string.
+
+It's important to note that this scenario differs from the scenario where a user doesn't have any microphone on their device. If a device doesn't have any microphones, the [`DeviceManager.getMicrophones`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getmicrophones) API returns an empty array, indicating that there are no available microphone devices on the user's system.
+
+## How to detect using the SDK
+[`DeviceManager.getMicrophones`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getmicrophones) API returns an empty array or an array with an object, where the `id` is set to `microphone:` and the name is set to an empty string.
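+
+For example, a check that distinguishes the two cases might look like this sketch; the warning messages are placeholders for your own UI handling.
+
+```typescript
+import type { DeviceManager } from "@azure/communication-calling";
+
+// Distinguish "permission not granted" from "no microphones at all".
+async function checkMicrophoneList(deviceManager: DeviceManager): Promise<void> {
+  const mics = await deviceManager.getMicrophones();
+  if (mics.length === 0) {
+    console.warn("No microphone devices are available on this system.");
+  } else if (mics.length === 1 && mics[0].id === "microphone:" && mics[0].name === "") {
+    console.warn("Microphone permission isn't granted; device details are hidden.");
+  }
+}
+```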
+
+Additionally, to detect the scenario where the user removes the microphone during the call and there are no available microphones in the system,
+the application can listen to the [`noMicrophoneDevicesEnumerated`](/javascript/api/azure-communication-services/@azure/communication-calling/latestmediadiagnostics?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-latestmediadiagnostics-nomicrophonedevicesenumerated) event being raised to true in the [User Facing Diagnostics Feature](../../../../concepts/voice-video-calling/user-facing-diagnostics.md).
+This event can help the application understand the current situation, so it can show a warning message on its UI accordingly.
+
+## How to mitigate or resolve
+Your application should always call the [`DeviceManager.askDevicePermission`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-askdevicepermission) API to ensure that the required permissions are granted.
+If the user doesn't grant the microphone permission, your application should display a warning message on its user interface.
+
+Additionally, your application should listen to the [`noMicrophoneDevicesEnumerated`](/javascript/api/azure-communication-services/@azure/communication-calling/latestmediadiagnostics?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-latestmediadiagnostics-nomicrophonedevicesenumerated) event and show a message when there are no available microphone devices.
+If the application provides a device selection page before the call,
+it can also check whether the microphone list is empty and show a warning indicating that no microphone devices are available.
communication-services No Enumerated Speaker List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/device-issues/no-enumerated-speaker-list.md
+
+ Title: Device and permission issues - getSpeakers API doesn't return detailed speaker list
+
+description: Learn how to troubleshoot when getSpeakers API doesn't return detailed speaker list.
++++ Last updated : 03/28/2024+++++
+# The getSpeakers API doesn't return detailed speaker list
+If a user reports that they can't see the detailed speaker list, it could be because the application doesn't have permission to access the microphone.
+Alternatively, the platform may not support speaker enumeration.
+
+The way browsers currently work may seem counterintuitive, as the permission to access the microphone can interfere with the enumeration of speakers.
+The speaker and microphone enumeration shares the same permission information.
+
+When the microphone permission state is `prompt` or `denied`, the browser doesn't provide detailed information about the microphone devices and speaker devices.
+In this scenario, [`DeviceManager.getSpeakers`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getspeakers) API returns an array with one object, where the `id` is set to `speaker:` and the name is set to an empty string.
+
+Some platforms, such as iOS Safari, macOS Safari, or earlier versions of Firefox, don't support speaker enumeration.
+
+It's important to note that this scenario is different from the scenario where a user doesn't have any audio output device.
+In the latter case, the [`DeviceManager.getSpeakers`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getspeakers) API only returns an empty array, indicating that there's no available audio output device in the user's system.
+
+## How to detect using the SDK
+[`DeviceManager.getSpeakers`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getspeakers) API returns an empty array or an array with an object, where the `id` is set to `speaker:` and the name is set to an empty string.
+
+Additionally, to detect the scenario where the user removes the speaker during the call and there are no available audio output devices in the system, the application can listen to the `noSpeakerDevicesEnumerated` event being raised to true in the [User Facing Diagnostics Feature](../../../../concepts/voice-video-calling/user-facing-diagnostics.md). This event can help the application understand the current situation, and show the warning message on its UI accordingly.
+
+For the platform that doesn't support speaker enumeration, you get an error when calling [`DeviceManager.getSpeakers`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getspeakers) API.
+
+The error code/subcode is
+
+| error | Details |
+||-|
+| code | 405 (Method Not Allowed) |
+| subcode | 40606 |
+| message | This device doesn't support speaker enumeration. |
+| resultCategories | Expected |
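+
+Putting these cases together, a detection sketch might look like the following; the `subCode` property name on the thrown error is an assumption based on the table above.
+
+```typescript
+import type { DeviceManager } from "@azure/communication-calling";
+
+// Distinguish missing permission, missing devices, and unsupported enumeration.
+async function checkSpeakerList(deviceManager: DeviceManager): Promise<void> {
+  try {
+    const speakers = await deviceManager.getSpeakers();
+    if (speakers.length === 0) {
+      console.warn("No audio output devices are available on this system.");
+    } else if (speakers.length === 1 && speakers[0].id === "speaker:" && speakers[0].name === "") {
+      console.warn("Microphone permission isn't granted; speaker details are hidden.");
+    }
+  } catch (error: unknown) {
+    // Assumed error shape: subcode 40606 means speaker enumeration isn't supported.
+    if ((error as { subCode?: number }).subCode === 40606) {
+      console.warn("This platform doesn't support speaker enumeration.");
+    } else {
+      throw error;
+    }
+  }
+}
+```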
+
+## How to mitigate or resolve
+The application should always call the `DeviceManager.askDevicePermission` API to ensure that the required permissions are granted.
+If the user doesn't grant the microphone permission, the application should show a warning on its user interface so the user knows why they can't see the speaker device list.
+
+The application should also check whether the speaker list is empty or handle the error when calling [`DeviceManager.getSpeakers`](/javascript/api/azure-communication-services/@azure/communication-calling/devicemanager?view=azure-communication-services-js&preserve-view=true#@azure-communication-calling-devicemanager-getspeakers) API, and show a warning accordingly.
+Additionally, the application should listen to the `noSpeakerDevicesEnumerated` event and show a message when there are no available speaker devices.
communication-services No Permission Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/device-issues/no-permission-prompt.md
+
+ Title: Device and permission issues - no permission prompt after calling askDevicePermission
+
+description: Learn why there's no permission prompt after calling askDevicePermission.
++++ Last updated : 03/29/2024+++++
+# No permission prompt shows when calling askDevicePermission
+If a user reports that they don't see any permission prompts, it may be because they previously granted or denied permission and the browser caches the result.
+
+Not showing the permission prompt isn't a problem if the browser has the required permission.
+However, if the user can't see the device list, it could be because they denied permission before.
+
+Another possible reason for the lack of a permission prompt is that the user's system doesn't have any microphone or camera devices available,
+causing the browser to skip the prompt even if the permission state is set to `prompt`.
+
+## How to detect using the SDK
+You can't detect whether the permission prompt actually shows, because this browser behavior can't be observed at the JavaScript layer.
+
+## How to mitigate or resolve
+The application should check the result of the [`DeviceManager.askDevicePermission`](/javascript/api/%40azure/communication-react/calladapterdevicemanagement?view=azure-node-latest&preserve-view=true#@azure-communication-react-calladapterdevicemanagement-askdevicepermission) API.
+If the result is false, it may indicate that the user denied the permission, either now or previously.
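+
+For example, the check might look like this sketch; the warning text is a placeholder for your own UI handling.
+
+```typescript
+import type { DeviceManager } from "@azure/communication-calling";
+
+// Inspect the DeviceAccess result and warn when a permission is missing.
+async function ensurePermissions(deviceManager: DeviceManager): Promise<void> {
+  const access = await deviceManager.askDevicePermission({ audio: true, video: true });
+  if (!access.audio) {
+    console.warn("Microphone access was denied; check your browser settings.");
+  }
+  if (!access.video) {
+    console.warn("Camera access was denied; check your browser settings.");
+  }
+}
+```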
+
+The application should show a warning message and ask the user to check their browser settings to ensure that correct permissions were granted.
+They also need to verify that their system has the necessary devices installed and configured properly.
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/resources/troubleshooting/voice-video-calling/device-issues/overview.md
+
+ Title: Device and permission issues - Overview
+
+description: Overview of device and permission issues
++++ Last updated : 03/29/2024+++++
+# Overview of device and permission issues
+In the WebJS calling SDK, there are two types of permissions: browser permissions and system permissions.
+When an application needs to access a user's audio or video input device, it requires permissions granted at both the browser and system level.
+
+If an application doesn't have the required permission, it can't access the device,
+which means that other participants in the call are unable to see or hear the user.
+
+To avoid these issues, it's important for users to grant the necessary permissions when prompted by the browser.
+If a user accidentally denies permission or needs to change their permissions later, they can usually do so through the browser settings.
+
+The permission is also necessary for the application to retrieve detailed device list information.
+The application can call [`DeviceManager.askDevicePermission`](/javascript/api/%40azure/communication-react/calladapterdevicemanagement?view=azure-node-latest&preserve-view=true#@azure-communication-react-calladapterdevicemanagement-askdevicepermission) to trigger the permission prompt UI.
+However, the browser may cache the permission result and return it without showing the permission prompt UI.
+If the permission result is `denied`, the user needs to update the permission through the browser settings.
+
+## Common issues related to the device and permission
+Here are some common issues related to devices and permissions, along with their potential causes:
+
+### The getMicrophones API returns an empty array or doesn't return detailed microphone list
+* The microphone device isn't available in the system.
+* The microphone permission isn't granted.
+
+### The getSpeakers API returns an empty array or doesn't return detailed speaker list
+* The speaker device isn't available in the system.
+* The browser doesn't support speaker enumeration.
+* The microphone permission isn't granted.
+
+### No permission prompt shows when calling askDevicePermission
+* The browser caches the permission result granted or denied previously and returns it without prompting the user.
+* The microphone device isn't available when requesting microphone permission.
+* The camera device isn't available when requesting camera permission.
+
+### The askDevicePermission API takes too long
+* The user doesn't grant or deny the permission prompt.
+* The device driver layer responds slowly.
+
+## Next steps
+
+This overview article provides basic information on device and permission issues you may encounter when using the WebJS calling SDK.
+For more detailed guidance, follow the links to the pages listed within the `Device and permission issues` section of this troubleshooting guide.
container-apps Workload Profiles Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-overview.md
There are different types and sizes of workload profiles available by region. By
| Display name | Name | vCPU | Memory (GiB) | GPU | Category | Allocation | |||||||
-| Consumption | consumption |4 | 8 | - | Consumption | per replica |
+| Consumption | Consumption |4 | 8 | - | Consumption | per replica |
| Dedicated-D4 | D4 | 4 | 16 | - | General purpose | per node | | Dedicated-D8 | D8 | 8 | 32 | - | General purpose | per node | | Dedicated-D16 | D16 | 16 | 64 | - | General purpose | per node |
cosmos-db Ai Advantage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-advantage.md
There are many benefits when using Azure Cosmos DB and Azure AI together:
The Azure AI Advantage offer is for existing Azure AI and GitHub Copilot customers who want to use Azure Cosmos DB as part of their solution stack. With this offer, you get: -- Free 40,000 RU/s of Azure Cosmos DB throughput for 90 days.
+- Free 40,000 [RU/s](request-units.md) of Azure Cosmos DB throughput (equivalent of up to $6,000) for 90 days.
- Funding to implement a new AI application using Azure Cosmos DB and/or Azure Kubernetes Service. For more information, speak to your Microsoft representative.
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/free-tier.md
Last updated 07/08/2022
Azure Cosmos DB free tier makes it easy to get started, develop, test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account for free. The throughput and storage consumed beyond these limits are billed at regular price. Free tier is available for all API accounts with provisioned throughput, autoscale throughput, single, or multiple write regions.
-Free tier lasts indefinitely for the lifetime of the account and it comes with all the [benefits and features](introduction.md#key-benefits) of a regular Azure Cosmos DB account. These benefits include unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more.
+Free tier lasts indefinitely for the lifetime of the account and it comes with all the [benefits and features](introduction.md#an-ai-database-with-unmatched-reliability-and-flexibility) of a regular Azure Cosmos DB account. These benefits include unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more.
You can have up to one free tier Azure Cosmos DB account per Azure subscription and you must opt in when creating the account. If you don't see the option to apply the free tier discount, another account in the subscription has already been enabled with free tier. If you create an account with free tier and then delete it, you can apply free tier for a new account. When creating a new account, it's recommended to enable the free tier discount if it's available.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Azure Cosmos DB ΓÇô Unified AI Database
-description: Azure Cosmos DB is a global multi-model database and ideal database for AI applications requiring speed, elasticity and availability with native support for NoSQL, relational, and vector data.
+ Title: Unified AI Database
+
+description: Database for AI Era - Azure Cosmos DB is a NoSQL, relational, and vector database that provides unmatched reliability and flexibility for your operational data needs.
Previously updated : 11/02/2023 Last updated : 04/03/2024 adobe-target: true
-# Azure Cosmos DB ΓÇô Unified AI Database
+# Database for AI Era
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table, PostgreSQL](includes/appliesto-nosql-mongodb-cassandra-gremlin-table-postgresql.md)]
-> OpenAI relies on Cosmos DB to dynamically scale their ChatGPT service – one of the fastest-growing consumer apps ever – enabling high reliability and low maintenance." – Satya Nadella, Microsoft chairman and chief executive officer
+> "OpenAI relies on Cosmos DB to dynamically scale their ChatGPT service – one of the fastest-growing consumer apps ever – enabling high reliability and low maintenance." – Satya Nadella, Microsoft chairman and chief executive officer
Today's applications are required to be highly responsive and always online. They must respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users.
-Recently, the surge of AI-powered applications created another layer of complexity, because many of these applications currently integrate a multitude of data stores. For example, some teams built applications that simultaneously connect to MongoDB, Postgres, Redis, and Gremlin. These databases differ in implementation workflow and operational performances, posing extra complexity for scaling applications.
+The surge of AI-powered applications created another layer of complexity, because many of these applications integrate a multitude of data stores. For example, some organizations built applications that simultaneously connect to MongoDB, Postgres, Redis, and Gremlin. These databases differ in implementation workflow and operational performances, posing extra complexity for scaling applications.
-Azure Cosmos DB simplifies and expedites your application development by being the single AI database for your operational data needs, from caching to vector search. It accommodates all your operational data models, including relational, document, vector, key-value, graph, and table.
+Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from caching to backup to vector search. It provides the data infrastructure for modern applications like AI, digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
-Azure Cosmos DB is a fully managed NoSQL, relational, and vector database for AI, digital commerce, Internet of Things, booking management, and other types of modern applications. It offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
+## An AI database providing industry-leading capabilities... for free
+
+Azure Cosmos DB is a fully managed NoSQL, relational, and vector database. It offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
App development is faster and more productive thanks to: - Turnkey multi-region data distribution anywhere in the world - Open source APIs-- SDKs for popular languages.-- AI database functionalities like native vector search or seamless integration with Azure AI Services to support Retrieval Augmented Generation
+- SDKs for popular languages
+- AI database functionalities like integrated vector database or seamless integration with Azure AI Services to support Retrieval Augmented Generation
+- Query Copilot for generating NoSQL queries based on your natural language prompts [(preview)](nosql/query/how-to-enable-use-copilot.md)
-As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
+As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates, and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
-If you are an existing Azure AI or GitHub Copilot customer, you may try Azure Cosmos DB for free with 40,000 [RU/s](request-units.md) of throughput for 90 days under the Azure AI Advantage offer.
+If you're an existing Azure AI or GitHub Copilot customer, you may try Azure Cosmos DB for free with 40,000 [RU/s](request-units.md) of throughput for 90 days under the Azure AI Advantage offer.
> [!div class="nextstepaction"] > [90-day Free Trial with Azure AI Advantage](ai-advantage.md)
-If you are not an Azure customer, you may use the 30-day Free Trial without an Azure subscription. No commitment follows the end of your trial period.
-
-> [!div class="nextstepaction"]
-> [30-day Free Trial without an Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
-
-Alternatively, you may use the Azure Cosmos DB lifetime free tier with the first 1000 [RU/s](request-units.md) of throughput and 25 GB of storage free.
+If you aren't an Azure customer, you may use the [30-day Free Trial without an Azure subscription](https://azure.microsoft.com/try/cosmosdb/). No commitment follows the end of your trial period.
-> [!div class="nextstepaction"]
-> [Azure Cosmos DB lifetime free tier](free-tier.md)
+Alternatively, you may use the [Azure Cosmos DB lifetime free tier](free-tier.md) with the first 1000 [RU/s](request-units.md) of throughput and 25 GB of storage free.
> [!TIP] > To learn more about Azure Cosmos DB, join us every Thursday at 1PM Pacific on Azure Cosmos DB Live TV. See the [Upcoming session schedule and past episodes](https://gotcosmos.com/tv).
-## Azure Cosmos DB is more than an AI database
-
-Besides AI database, Azure Cosmos DB should also be your goto database for web, mobile, gaming, and IoT applications. Azure Cosmos DB is well positioned for solutions that handle massive amounts of data, reads, and writes at a global scale with near-real response times. Azure Cosmos DB's guaranteed high availability, high throughput, low latency, and tunable consistency are huge advantages when building these types of applications. Learn about how Azure Cosmos DB can be used to build IoT and telematics, retail and marketing, gaming and web and mobile applications.
+## An AI database for more than just AI apps
-## Key Benefits
+Besides AI, Azure Cosmos DB should also be your go-to database for web, mobile, gaming, and IoT applications. Azure Cosmos DB is well positioned for solutions that handle massive amounts of data, reads, and writes at a global scale with near-real-time response times. Azure Cosmos DB's guaranteed high availability, high throughput, low latency, and tunable consistency are huge advantages when building these types of applications. Learn about how Azure Cosmos DB can be used to build IoT and telematics, retail and marketing, gaming, and web and mobile applications.
-Here's some key benefits of using Azure Cosmos DB.
+## An AI database with unmatched reliability and flexibility
### Guaranteed speed at any scale
Gain unparalleled [SLA-backed](https://azure.microsoft.com/support/legal/sla/cos
### Simplified application development
-Build fast with open-source APIs, multiple SDKs, schemaless data and no-ETL analytics over operational data.
+Build fast with open-source APIs, multiple SDKs, schemaless data, and no-ETL analytics over operational data.
- Deeply integrated with key Azure services used in modern (cloud-native) app development including Azure Functions, IoT Hub, AKS (Azure Kubernetes Service), App Service, and more. - Choose from multiple database APIs including the native API for NoSQL, MongoDB, PostgreSQL, Apache Cassandra, Apache Gremlin, and Table. - Use Azure Cosmos DB as your unified AI database for data models like relational, document, vector, key-value, graph, and table.-- Build apps on API for NoSQL using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs.
+- Build apps on API for NoSQL using the languages of your choice with SDKs for .NET, Java, Node.js, and Python. Or your choice of drivers for any of the other database APIs.
- Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions. - Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
Guarantee business continuity, 99.999% availability, and enterprise-level securi
### Fully managed and cost-effective
-End-to-end database management, with serverless and automatic scaling matching your application and TCO needs
+End-to-end database management, with serverless and automatic scaling matching your application and total cost of ownership (TCO) needs.
- Fully managed database service. Automatic, no-touch maintenance, patching, and updates, saving developers time and money. - Cost-effective options for unpredictable or sporadic workloads of any size or scale, enabling developers to get started easily without having to plan or manage capacity.
cosmos-db Vector Search Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search-ai.md
Title: Build AI apps with vector search-
-description: Enhance AI-powered applications with Retrieval Augmented Generation (RAG) by using Azure Cosmos DB for MongoDB vCore vector search.
+ Title: Open-source vector databases
+
+description: Open-source vector databases
Previously updated : 08/28/2023 Last updated : 04/02/2024
-# Build AI apps with Azure Cosmos DB for MongoDB vCore vector search
+# Open-source vector databases
[!INCLUDE[MongoDB vCore](../../includes/appliesto-mongodb-vcore.md)]
-Language models available in Azure OpenAI Service can elevate the capabilities of your AI-driven applications. To fully unleash the potential of language models, you must give them access to timely and relevant data from your application's data store. You can accomplish this process, known as Retrieval Augmented Generation (RAG), by using Azure Cosmos DB.
+When developers select vector databases, the open-source options provide numerous benefits. "Open source" means that the software's source code is available freely, enabling users to customize the database according to their specific needs. This flexibility is beneficial for organizations that are subject to unique regulatory requirements on data, such as companies in the financial services industry.
-This article delves into the core concepts of RAG. It provides links to tutorials and sample code that exemplify RAG strategies by using vector search in Azure Cosmos DB for MongoDB vCore.
+Another advantage of open-source vector databases is the strong community support they enjoy. Active user communities often contribute to the development of these databases, provide support, and share best practices, promoting innovation.
-RAG elevates AI-powered applications by incorporating external knowledge and data into model inputs. With vector search in Azure Cosmos DB for MongoDB vCore, this process becomes seamless. You can use it to integrate the most pertinent information into your AI models with minimal effort.
+Some individuals opt for open-source vector databases because they are "free," meaning there's no cost to acquire or use the software. An alternative is using the free tiers offered by managed vector database services. These managed services not only provide cost-free access up to a certain usage limit but also simplify the operational burden by handling maintenance, updates, and scalability. Therefore, by using the free tier of managed vector database services, users can achieve cost savings while reducing management overhead. This approach allows users to focus more on their core activities rather than on database administration.
-By using [embeddings](../../../ai-services/openai/tutorials/embeddings.md) and vector search, you can provide your AI applications with the context that they need to excel. Through the provided tutorials and code samples, you can become proficient in using RAG to create smarter and more context-aware AI solutions.
+## Working mechanism of open-source vector databases
-## What is Retrieval Augmented Generation?
+Open-source vector databases are designed to store and manage vector embeddings, which are mathematical representations of data in a high-dimensional space. In this space, each dimension corresponds to a feature of the data, and tens of thousands of dimensions might be used to represent sophisticated data. A vector's position in this space represents its characteristics. Words, phrases, or entire documents, and images, audio, and other types of data can all be vectorized. These vector embeddings are used in similarity search, multi-modal search, recommendation engines, large language models (LLMs), etc.
-RAG uses external knowledge and models to efficiently manage custom data or domain-specific expertise. This process involves extracting information from an external data source and integrating it into the model's input through prompt engineering. A robust approach is essential to identify the most pertinent data from the external source within the [token limitations of a request](../../../ai-services/openai/quotas-limits.md).
+These databases' architecture typically includes a storage engine and an indexing mechanism. The storage engine optimizes the storage of vector data for efficient retrieval and manipulation, while the indexing mechanism organizes the data for fast searching and retrieval operations.
-RAG addresses these limitations by using embeddings, which convert data into vectors. Embeddings capture the semantic essence of the text and enable context comprehension beyond simple keywords.
+In a vector database, embeddings are indexed and queried through vector search algorithms based on their vector distance or similarity. A robust mechanism is necessary to identify the most relevant data. Some well-known vector search algorithms include Hierarchical Navigable Small World (HNSW), Inverted File (IVF), etc.
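+
+To make vector similarity concrete, the following sketch shows brute-force cosine similarity. It's for illustration only; production systems rely on indexed approximate search such as HNSW rather than exhaustive comparison.
+
+```typescript
+// Cosine similarity between two embedding vectors of equal length.
+function cosineSimilarity(a: number[], b: number[]): number {
+  let dot = 0;
+  let normA = 0;
+  let normB = 0;
+  for (let i = 0; i < a.length; i++) {
+    dot += a[i] * b[i];
+    normA += a[i] * a[i];
+    normB += b[i] * b[i];
+  }
+  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
+}
+
+// Brute-force nearest neighbor: return the index of the most similar vector.
+function mostSimilar(query: number[], corpus: number[][]): number {
+  let best = -1;
+  let bestScore = -Infinity;
+  corpus.forEach((vec, i) => {
+    const score = cosineSimilarity(query, vec);
+    if (score > bestScore) {
+      bestScore = score;
+      best = i;
+    }
+  });
+  return best;
+}
+```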
-## What is vector search?
+Vector databases are used in numerous domains and situations across analytical and generative AI, including natural language processing, video and image recognition, recommendation system, search, etc. For example, you can use a vector database to:
-[Vector search](./vector-search.md) is an approach that enables the discovery of analogous items based on shared data characteristics. It deviates from the necessity for precise matches within a property field.
+- Identify similar images, documents, and songs based on their contents, themes, sentiments, and styles
+- Identify similar products based on their characteristics, features, and user groups
+- Recommend content, products, or services based on individuals' preferences
+- Recommend content, products, or services based on user groups' similarities
+- Identify the best-fit potential options from a large pool of choices to meet complex requirements
+- Identify data anomalies or fraudulent activities that are dissimilar from predominant or normal patterns
+- Implement persistent memory for AI agents
+- Enable retrieval-augmented generation (RAG)
-This method is invaluable in applications like text similarity searches, image association, recommendation systems, and anomaly detection. Its functionality revolves around the use of vector representations (sequences of numerical values) that are generated from your data via machine learning models or embeddings APIs. Examples of such APIs encompass [Azure OpenAI embeddings](/azure/ai-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/).
+## Selecting the best open-source vector database
-The technique gauges the disparity between your query vector and the data vectors. The data vectors that show the closest proximity to your query vector are identified as semantically akin.
+Choosing the best open-source vector database requires considering several factors. Performance and scalability of the database are crucial, as they impact whether the database can handle your specific workload requirements. Databases with efficient indexing and querying capabilities usually offer optimal performance. Another factor is the community support and documentation available for the database. A robust community and ample documentation can provide valuable assistance. Here are some popular open-source vector databases:
-## How does vector search work in Azure Cosmos DB for MongoDB vCore?
+- Chroma
+- Milvus
+- Qdrant
+- Weaviate
-You can truly harness the power of RAG through the native vector search capability in Azure Cosmos DB for MongoDB vCore. This feature combines AI-focused applications with stored data in Azure Cosmos DB.
+>[!NOTE]
+>The most popular option may not be the best option for you. To find the best fit for your needs, compare the options based on features, supported data types, and compatibility with the existing tools and frameworks you use. Ease of installation, configuration, and maintenance should also be considered to ensure smooth integration into your workflow.
-Vector search optimally stores, indexes, and searches high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore, alongside other application data. This capability eliminates the need to migrate data to costlier alternatives for vector search functionality.
+## Challenges with open-source vector databases
-## Code samples and tutorials
+Open-source vector databases pose challenges that are typical of open-source software:
-- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore): Walk through creating a recipe chatbot by using .NET, to showcase the application of RAG in a culinary scenario.-- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore): Learn how to construct an Azure product chatbot that highlights the benefits of RAG.
+- Setup: Users need in-depth knowledge to install, configure, and operate, especially for complex deployments. Optimizing resources and configuration while scaling up operations requires close monitoring and adjustments.
+- Maintenance: Users must manage their own updates, patches, and maintenance. Thus, ML expertise wouldn't suffice; users must also have extensive experience in database administration.
+- Support: Official support can be limited compared to managed services, relying more on community assistance.
-## Next steps
+Therefore, while free initially, open-source vector databases incur significant costs when scaling up. Expanding operations necessitates more hardware, skilled IT staff, and advanced infrastructure management, driving up hardware, personnel, and operational expenses. Scaling open-source vector databases can be financially demanding despite the lack of licensing fees.
+
+## Addressing the challenges
-- Learn more about [Azure OpenAI embeddings](../../../ai-services/openai/concepts/understand-embeddings.md)-- Learn how to [generate embeddings using Azure OpenAI](../../../ai-services/openai/tutorials/embeddings.md)
+A fully managed database service helps developers avoid the hassles of setting up, maintaining, and relying on community assistance for an open-source vector database. The Integrated Vector Database in Azure Cosmos DB for MongoDB vCore offers a lifetime free tier. It allows developers to enjoy the same financial benefit associated with open-source vector databases, while the service provider handles maintenance, updates, and scalability. When it's time to scale up operations, upgrading is quick and easy while keeping a low [total cost of ownership (TCO)](introduction.md#low-total-cost-of-ownership-tco).
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Create a lifetime free-tier vCore cluster for Azure Cosmos DB for MongoDB](free-tier.md)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
This guide demonstrates how to create a vector index, add documents that have ve
## Related content -- [With Semantic Kernel, orchestrate your data retrieval with Azure Cosmos DB for MongoDB vCore](/semantic-kernel/memories/vector-db#available-connectors-to-vector-databases)
+- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)
+- [C# RAG pattern - Integrate Open AI Services with Cosmos](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)
+- [Python RAG pattern - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)
+- [Python notebook tutorial - Vector database integration through LangChain](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db)
+- [Python notebook tutorial - LLM Caching integration through LangChain](https://python.langchain.com/docs/integrations/llms/llm_caching#azure-cosmos-db-semantic-cache)
+- [Python - LlamaIndex integration](https://docs.llamaindex.ai/en/stable/examples/vector_stores/AzureCosmosDBMongoDBvCoreDemo.html)
+- [Python - Semantic Kernel memory integration](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/memory/azure_cosmosdb)
## Next step > [!div class="nextstepaction"]
-> [Build AI apps with Integrated Vector Database in Azure Cosmos DB for MongoDB vCore](vector-search-ai.md)
+> [Create a lifetime free-tier vCore cluster for Azure Cosmos DB for MongoDB](free-tier.md)
cosmos-db Computed Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md
During the preview, computed properties must be created using the .NET v3 or Jav
| | | | | **.NET SDK v3** | >= [3.34.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.34.0-preview) | Computed properties are currently available only in preview package versions. | | **Java SDK v4** | >= [4.46.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.46.0) | Computed properties are currently under preview version. |
+| **Python SDK** | >= [v4.5.2b5](https://pypi.org/project/azure-cosmos/4.5.2b5/) | Computed properties are currently under preview version. |
### Create computed properties by using the SDK
containerProperties.setComputedProperties(computedProperties);
client.getDatabase("myDatabase").createContainer(containerProperties); ```
+### [Python](#tab/python)
+
+You can define multiple computed properties in a list and then add them to the container properties. The Python SDK currently doesn't support computed properties on existing containers.
+
+```python
+from azure.cosmos import PartitionKey
+
+# Each computed property pairs a name with a query that projects a single value per item.
+computed_properties = [{'name': "cp_lower", 'query': "SELECT VALUE LOWER(c.db_group) FROM c"},
+                       {'name': "cp_power", 'query': "SELECT VALUE POWER(c.val, 2) FROM c"},
+                       {'name': "cp_str_len", 'query': "SELECT VALUE LENGTH(c.stringProperty) FROM c"}]
+
+# 'db' is an existing DatabaseProxy obtained from a CosmosClient.
+container_with_computed_props = db.create_container_if_not_exists(
+    "myContainer", PartitionKey(path="/pk"), computed_properties=computed_properties)
+```
+Computed properties can be used like any other property in queries. For example, you can use the computed property `cp_power` in a query like this:
+
+```python
+queried_items = list(
+    container_with_computed_props.query_items(query='SELECT * FROM c WHERE c.cp_power = 25', partition_key="test"))
+```
++ Here's an example of how to update computed properties on an existing container:
containerProperties.setComputedProperties(modifiedComputedProperites);
container.replace(containerProperties); ```
+### [Python](#tab/python)
+Updating computed properties on an existing container isn't supported in the Python SDK; you can only define computed properties when you create a new container. Support for updating them is currently in progress.
+ > [!TIP]
cosmos-db Optimize Dev Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-dev-test.md
This article describes the different options to use Azure Cosmos DB for developm
Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account free.
-Free tier lasts indefinitely for the lifetime of the account and comes with all the [benefits and features](introduction.md#key-benefits) of a regular Azure Cosmos DB account, including unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more. You can create a free tier account using Azure portal, CLI, PowerShell, and a Resource Manager template. To learn more, see how to [create a free tier account](free-tier.md) article and the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
+Free tier lasts indefinitely for the lifetime of the account and comes with all the [benefits and features](introduction.md#an-ai-database-with-unmatched-reliability-and-flexibility) of a regular Azure Cosmos DB account, including unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more. You can create a free tier account using Azure portal, CLI, PowerShell, and a Resource Manager template. To learn more, see how to [create a free tier account](free-tier.md) article and the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
## Azure free account
cosmos-db Priority Based Execution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/priority-based-execution.md
To get started using priority-based execution, navigate to the **Features** page
- Java v4: [v4.45.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.45.0) or later - Spark 3.2: [v4.19.0](https://central.sonatype.com/artifact/com.azure.cosmos.spark/azure-cosmos-spark_3-2_2-12/4.19.0) or later - JavaScript v4: [v4.0.0](https://www.npmjs.com/package/@azure/cosmos) or later-- Python 4.6.0: [v4.6.0](https://pypi.org/project/azure-cosmos/4.6.0/) or later
+- Python: [v4.5.2b2](https://pypi.org/project/azure-cosmos/4.5.2b2/) or later. Available only in preview versions.
## Code samples
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
Use the natively [integrated vector database in Azure Cosmos DB for MongoDB vCor
- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore) - [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)-- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)
+- [C# RAG pattern - Integrate Open AI Services with Cosmos](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)
+- [Python RAG pattern - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)
- [Python notebook tutorial - Vector database integration through LangChain](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db) - [Python notebook tutorial - LLM Caching integration through LangChain](https://python.langchain.com/docs/integrations/llms/llm_caching#azure-cosmos-db-semantic-cache) - [Python - LlamaIndex integration](https://docs.llamaindex.ai/en/stable/examples/vector_stores/AzureCosmosDBMongoDBvCoreDemo.html)
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Users with this role have the highest level of access to the Enrollment. They ca
- Manage other enterprise administrators. - Manage department administrators. - Manage notification contacts.-- Purchase Azure services, including reservations.
+- Purchase Azure services, including reservations/savings plans.
- View usage across all accounts. - View unbilled charges across all accounts. - Create new subscriptions under active enrollment accounts.-- View and manage all reservation orders and reservations that apply to the Enterprise Agreement.
- - Enterprise administrator (read-only) can view reservation orders and reservations. They can't manage them.
+- View and manage all reservation/savings plan orders and reservations/savings plans that apply to the Enterprise Agreement.
+ - Enterprise administrator (read-only) can view reservation/savings plan orders and reservations/savings plans. They can't manage them.
You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators.
The enterprise administrator role can be assigned to multiple accounts.
Users with this role have permissions to purchase Azure services, but aren't allowed to manage accounts. They can: -- Purchase Azure services, including reservations.
+- Purchase Azure services, including reservations/savings plans.
- View usage across all accounts. - View unbilled charges across all accounts.-- View and manage all reservation orders and reservations that apply to the Enterprise Agreement.
+- View and manage all reservation/savings plan orders and reservations/savings plans that apply to the Enterprise Agreement.
The EA purchaser role is currently enabled only for SPN-based access. To learn how to assign the role to a service principal name, see [Assign roles to Azure Enterprise Agreement service principal names](assign-roles-azure-service-principals.md).
The following sections describe the limitations and capabilities of each role.
|Add or remove Department Administrators|✔|✘|✘|✔|✘|✘|✘| |View Accounts in the enrollment |✔|✔|✔|✔⁵|✔⁵|✘|✔| |Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔⁵|✘|✘|✘|
-|Purchase reservations|✔|✘⁶|✔|✘|✘|✘|✘|
+|Purchase reservations/savings plans|✔|✘⁶|✔|✘|✘|✘|✘|
|Create and manage subscriptions and subscription permissions|✔|✘|✘|✘|✘|✔|✘| - ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement. - ⁵ Task is limited to accounts in your department.-- ⁶ A subscription owner or reservation purchaser can purchase and manage reservations and savings plans within the subscription, and only if permitted by the reservation purchase enabled flag. Enterprise administrators can purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) can view all purchased reservations and savings plans. The reservation purchase enabled flag doesn't affect the EA administrator roles. The Enterprise Admin (read-only) role holder isn't permitted to make purchases. However, if a user with that role also holds either a subscription owner or reservation purchaser permission, the user can purchase reservations and savings plans, regardless of the flag.
+- ⁶ A subscription owner, reservation purchaser, or savings plan purchaser can purchase and manage reservations and savings plans within the subscription, but only if permitted by the reservation/savings plan purchase-enabled flags. Enterprise administrators can purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) can view all purchased reservations and savings plans. The reservation/savings plan purchase-enabled flags don't affect the EA administrator roles. The Enterprise Admin (read-only) role holder isn't permitted to make purchases. However, if a user with that role also holds a subscription owner, reservation purchaser, or savings plan purchaser permission, the user can purchase reservations and/or savings plans, regardless of the flags.
## Add a new enterprise administrator
cost-management-billing Buy Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md
Savings plan discounts only apply to resources associated with subscriptions pur
> Azure savings plan isn't supported for the China legacy Online Service Premium Agreement (OSPA) platform. ### Enterprise Agreement customers
+Savings plan purchasing for Enterprise Agreement (EA) customers is limited to the following:
+- EA admins with write permissions can purchase savings plans from **Cost Management + Billing** > **Savings plan**. No subscription-specific permissions are needed.
+- Users with Subscription owner or Savings plan purchaser roles in at least one subscription in the enrollment account can purchase savings plans from **Home** > **Savings plan**.
-- EA admins with write permissions can directly purchase savings plans from **Cost Management + Billing** > **Savings plan**. No subscription-specific permissions are needed.-- Subscription owners for one of the subscriptions in the enrollment account can purchase savings plans from **Home** > **Savings plan**.-
-Enterprise Agreement (EA) customers can limit purchases to only EA admins by disabling the Add Savings Plan option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings.
+EA customers can limit savings plan purchases to only EA admins by disabling the Add Savings Plan option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings.
### Microsoft Customer Agreement (MCA) customers
+Savings plan purchasing for Microsoft Customer Agreement (MCA) customers is limited to the following:
+- Users with billing profile contributor permissions or higher can purchase savings plans from **Cost Management + Billing** > **Savings plan** experience. No subscription-specific permissions are needed.
+- Users with Subscription owner or Savings plan purchaser roles in at least one subscription in the billing profile can purchase savings plans from **Home** > **Savings plan**.
-- Customers with billing profile contributor permissions or higher can purchase savings plans from **Cost Management + Billing** > **Savings plan** experience. No subscription-specific permissions are needed.-- Subscription owners for one of the subscriptions in the billing profile can purchase savings plans from **Home** > **Savings plan**.-
-To disallow savings plan purchases on a billing profile, billing profile contributors can navigate to the **Policies** menu under the billing profile and adjust the Azure Savings Plan option.
+MCA customers can limit savings plan purchases to users with billing profile contributor permissions or higher by disabling the Add Savings Plan option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings.
### Microsoft Partner Agreement partners
Buy savings plans by using Azure RBAC permissions or with permissions on your bi
#### To purchase using Azure RBAC permissions -- You must be an Owner of the subscription that you plan to use, specified as `billingScopeId`.
+- You must have the Savings plan purchaser role within, or be an Owner of, the subscription that you plan to use, specified as `billingScopeId`.
- The `billingScopeId` property in the request body must use the `/subscriptions/10000000-0000-0000-0000-000000000000` format. #### To purchase using billing permissions
cost-management-billing Download Savings Plan Price Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/download-savings-plan-price-sheet.md
This article explains how you can download the price sheet for an Enterprise Agr
## Download EA price sheet
-To download your EA price sheet, do the following tasks.
+To download your EA price sheet via the Azure portal, follow these steps.
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Search for **Cost Management + Billing**.
To download your EA price sheet, do the following tasks.
## Download MCA price sheet
-To download your MCA price sheet, do the following tasks.
+To download your MCA price sheet via the Azure portal, follow these steps.
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Search for **Cost Management + Billing**.
To download your MCA price sheet, do the following tasks.
5. Select **Download Azure price sheet for** _current month and year_. File generation may take a few moments. 6. Open the file and filter on `priceType` to see `SavingsPlan` plan price records.
+## Download price sheet using APIs
+To learn more about downloading your price sheet using price sheet APIs, see the following articles:
+ - [Learn more about EA price sheet](/rest/api/cost-management/price-sheet).
+ - [Learn more about MCA price sheet](/rest/api/consumption/price-sheet).
+ - [Learn more about retail price sheet](/rest/api/cost-management/retail-prices/azure-retail-prices).
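+
+As a quick illustration, the sketch below calls the public Azure Retail Prices API, which requires no authentication. The `$filter` expression and the printed fields are examples only; EA and MCA price sheets require the authenticated APIs linked above.
+
+```python
+import requests
+
+# Query the public Azure Retail Prices API (no auth token needed).
+url = "https://prices.azure.com/api/retail/prices"
+params = {"$filter": "serviceName eq 'Virtual Machines' and armRegionName eq 'westus'"}
+
+response = requests.get(url, params=params)
+response.raise_for_status()
+
+# The response holds an "Items" array plus a "NextPageLink" for paging.
+for item in response.json()["Items"][:5]:
+    print(item["skuName"], item["retailPrice"], item["unitOfMeasure"], item["type"])
+```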
++ ## Need help? Contact us. If you have questions about Azure savings plan for compute, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides expert support for Azure savings plan for compute in English.
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
To generate the error report ID for Microsoft Support, follow these instructions
> [!NOTE] > The folder is not `C:\Program Files (x86)\Java\`
- - JRE 7 and JRE 8 are both compatible for this copy activity. JRE 6 and versions that are earlier than JRE 6 have not been validated for this use.
+ - Java Runtime (JRE) is version 11 or greater, from a JRE provider such as [Microsoft OpenJDK 11](https://aka.ms/download-jdk/microsoft-jdk-11.0.19-windows-x64.msi) or [Eclipse Temurin 11](https://adoptium.net/temurin/releases/?version=11). Ensure that the JAVA_HOME system environment variable is set to the JDK folder (not just the JRE folder). You may also need to add the bin folder to your system's PATH environment variable, as shown in the example below.
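+
+   A minimal example from an elevated Command Prompt; the JDK path here is illustrative, so substitute your own installation folder:
+
+   ```cmd
+   rem Set JAVA_HOME machine-wide (the path is an example; use your installed JDK folder).
+   setx JAVA_HOME "C:\Program Files\Microsoft\jdk-11.0.19+7" /M
+
+   rem Append the JDK bin folder to the machine PATH. Note that setx truncates values
+   rem longer than 1024 characters; if your PATH is long, edit it in System Properties instead.
+   setx PATH "%PATH%;C:\Program Files\Microsoft\jdk-11.0.19+7\bin" /M
+   ```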
2. Check the registry for the appropriate settings. To do this, follow these steps:
defender-for-cloud Agentless Vulnerability Assessment Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-aws.md
Container vulnerability assessment powered by Microsoft Defender Vulnerability M
- **Reporting** - Container Vulnerability Assessment for AWS powered by Microsoft Defender Vulnerability Management provides vulnerability reports using following recommendations:
- | Recommendation | Description | Assessment Key|
- |--|--|--|
- | [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce) | Scans your AWS registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce |
- | [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainersRuntimeRecommendationDetailsBlade/assessmentKey/682b2595-d045-4cff-b5aa-46624eb2dd8f) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Elastic Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 682b2595-d045-4cff-b5aa-46624eb2dd8f |
+The following new recommendations report on runtime container vulnerabilities and registry image vulnerabilities. They're currently in preview and are intended to replace the old recommendations. While in preview, the new recommendations don't count toward the secure score. The scan engine for both sets of recommendations is the same.
+
+| Recommendation | Description | Assessment Key|
+|--|--|--|
+| [[Preview] Container images in AWS registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a139383-ec7e-462a-90ac-b1b60e87d576) | Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards. | 2a139383-ec7e-462a-90ac-b1b60e87d576 |
+| [[Preview] Containers running in AWS should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d1e526-363a-4223-b860-f4b6e710859f) | Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images being used and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards. | d5d1e526-363a-4223-b860-f4b6e710859f |
+
+These are the older recommendations that are currently on a retirement path:
+
+| Recommendation | Description | Assessment Key|
+|--|--|--|
+| [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce) | Scans your AWS registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce |
+| [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainersRuntimeRecommendationDetailsBlade/assessmentKey/682b2595-d045-4cff-b5aa-46624eb2dd8f) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Elastic Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 682b2595-d045-4cff-b5aa-46624eb2dd8f |
- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](../governance/resource-graph/overview.md#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md).
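
  As an illustration, a minimal ARG query along the following lines returns the vulnerability findings behind the registry recommendation. The assessment key is the one from the table above; the projected fields are a small sample of what sub-assessment records carry:

  ```kusto
  securityresources
  | where type == "microsoft.security/assessments/subassessments"
  // Keep only sub-assessments produced by the AWS registry image recommendation.
  | extend assessmentKey = extract(".*assessments/(.+?)/.*", 1, id)
  | where assessmentKey == "c27441ae-775c-45be-8ffa-655de37362ce"
  | project id, displayName = tostring(properties.displayName), severity = tostring(properties.status.severity)
  ```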
A detailed description of the scan process is described as follows:
- All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities. - Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running. To determine if an image is currently running, Defender for Cloud uses both [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) and [inventory collected via the Defender sensor running on EKS nodes](defender-for-containers-enable.md#enablement-method-per-capability)
- - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce).
-- For customers using either [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) or [inventory collected via the Defender sensor running on EKS nodes](defender-for-containers-enable.md#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an EKS cluster. For customers using only [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender sensor](defender-for-containers-enable.md#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.
+ - Vulnerability reports for registry container images are provided as a [recommendation](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a139383-ec7e-462a-90ac-b1b60e87d576).
+- For customers using either [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) or [inventory collected via the Defender sensor running on EKS nodes](defender-for-containers-enable.md#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d1e526-363a-4223-b860-f4b6e710859f) for remediating vulnerabilities for vulnerable images running on an EKS cluster. For customers using only [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender sensor](defender-for-containers-enable.md#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.
> [!NOTE] > For [Defender for Container Registries (deprecated)](defender-for-container-registries-introduction.md), images are scanned once on push, on pull, and rescanned only once a week.
defender-for-cloud Agentless Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-azure.md
Container vulnerability assessment powered by Microsoft Defender Vulnerability M
- **Exploitability information** - Each vulnerability report is searched through exploitability databases to assist our customers with determining actual risk associated with each reported vulnerability. - **Reporting** - Container Vulnerability Assessment for Azure powered by Microsoft Defender Vulnerability Management provides vulnerability reports using following recommendations:
- | Recommendation | Description | Assessment Key |
- |--|--|--|
- | [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AzureContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
- | [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
+The following new recommendations report on runtime container vulnerabilities and registry image vulnerabilities. They're currently in preview and are intended to replace the old recommendations. While in preview, the new recommendations don't count toward the secure score. The scan engine for both sets of recommendations is the same.
+
+| Recommendation | Description | Assessment Key |
+|--|--|--|
+| [[Preview] Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9) | Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards. | 33422d8f-ab1e-42be-bc9a-38685bb567b9 |
+| [[Preview] Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0) | Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images being used and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards. | e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0 |
+
+These are the older recommendations that are currently on a retirement path:
+
+| Recommendation | Description | Assessment Key|
+|--|--|--|
+| [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AzureContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
+| [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](../governance/resource-graph/overview.md#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md). - **Query scan results via REST API** - Learn how to query scan results via [REST API](subassessment-rest-api.md).
A detailed description of the scan process is described as follows:
- All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities. - Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running. To determine if an image is currently running, Defender for Cloud uses both [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) and [inventory collected via the Defender sensor running on AKS nodes](defender-for-containers-enable.md#enablement-method-per-capability)
- - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AzureContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
-- For customers using either [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) or [inventory collected via the Defender sensor running on AKS nodes](defender-for-containers-enable.md#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an AKS cluster. For customers using only [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender sensor](defender-for-containers-enable.md#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.
+ - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AzureContainerRegistryRecommendationDetailsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9).
+- For customers using either [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) or [inventory collected via the Defender sensor running on AKS nodes](defender-for-containers-enable.md#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0) for remediating vulnerabilities for vulnerable images running on an AKS cluster. For customers using only [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender sensor](defender-for-containers-enable.md#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.
> [!NOTE] > For [Defender for Container Registries (deprecated)](defender-for-container-registries-introduction.md), images are scanned once on push, on pull, and rescanned only once a week.
defender-for-cloud Agentless Vulnerability Assessment Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-gcp.md
Container vulnerability assessment powered by Microsoft Defender Vulnerability M
- **Reporting** - Container Vulnerability Assessment for GCP powered by Microsoft Defender Vulnerability Management provides vulnerability reports using following recommendations:
- | Recommendation | Description | Assessment Key|
- |--|--|--|
- | [GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainerRegistryRecommendationDetailsBlade/assessmentKey/5cc3a2c1-8397-456f-8792-fe9d0d4c9145) | Scans your GCP registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce |
- | [GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainersRuntimeRecommendationDetailsBlade/assessmentKey/e538731a-80c8-4317-a119-13075e002516) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Google Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 5cc3a2c1-8397-456f-8792-fe9d0d4c9145 |
+The following new recommendations report on runtime container vulnerabilities and registry image vulnerabilities. They're currently in preview and are intended to replace the old recommendations. While in preview, the new recommendations don't count toward the secure score. The scan engine for both sets of recommendations is the same.
+
+| Recommendation | Description | Assessment Key|
+|--|--|--|
+| [[Preview] Container images in GCP registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04) | Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards. | 24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04 |
+| [[Preview] Containers running in GCP should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c7c1d31d-a604-4b86-96df-63448618e165) | Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images being used and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards. | c7c1d31d-a604-4b86-96df-63448618e165 |
+
+These are the older recommendations that are currently on a retirement path:
+
+| Recommendation | Description | Assessment Key|
+|--|--|--|
+| [GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainerRegistryRecommendationDetailsBlade/assessmentKey/5cc3a2c1-8397-456f-8792-fe9d0d4c9145) | Scans your GCP registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | 5cc3a2c1-8397-456f-8792-fe9d0d4c9145 |
+| [GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainersRuntimeRecommendationDetailsBlade/assessmentKey/e538731a-80c8-4317-a119-13075e002516) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Google Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | e538731a-80c8-4317-a119-13075e002516 |
- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](../governance/resource-graph/overview.md#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md).
A detailed description of the scan process is described as follows:
- All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities. - Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running. To determine if an image is currently running, Defender for Cloud uses both [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) and [inventory collected via the Defender sensor running on GKE nodes](defender-for-containers-enable.md#enablement-method-per-capability)
- - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainerRegistryRecommendationDetailsBlade/assessmentKey/5cc3a2c1-8397-456f-8792-fe9d0d4c9145).
-- For customers using either [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) or [inventory collected via the Defender sensor running on GKE nodes](defender-for-containers-enable.md#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainersRuntimeRecommendationDetailsBlade/assessmentKey/e538731a-80c8-4317-a119-13075e002516) for remediating vulnerabilities for vulnerable images running on a GKE cluster. For customers using only [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender sensor](defender-for-containers-enable.md#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.
+ - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainerRegistryRecommendationDetailsBlade/assessmentKey/24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04).
+- For customers using either [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability) or [inventory collected via the Defender sensor running on GKE nodes](defender-for-containers-enable.md#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainersRuntimeRecommendationDetailsBlade/assessmentKey/c7c1d31d-a604-4b86-96df-63448618e165) for remediating vulnerabilities for vulnerable images running on a GKE cluster. For customers using only [Agentless discovery for Kubernetes](defender-for-containers-enable.md#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender sensor](defender-for-containers-enable.md#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.
> [!NOTE] > For [Defender for Container Registries (deprecated)](defender-for-container-registries-introduction.md), images are scanned once on push, on pull, and rescanned only once a week.
defender-for-cloud Attack Path Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-api.md
+
+ Title: Retrieve attack path data with API
+description: Learn how to retrieve attack path data with APIs in Microsoft Defender for Cloud and enhance the security of your environment.
+++ Last updated : 03/03/2024
+#customer intent: As a developer, I want to learn how to retrieve attack path data with APIs in Microsoft Defender for Cloud so that I can enhance the security of my environment.
++
+# Retrieve attack path data with API
+
+You can consume attack path data programmatically by querying the Azure Resource Graph (ARG) API.
+Learn [how to query ARG API](/rest/api/azureresourcegraph/resourcegraph(2020-04-01-preview)/resources/resources?source=recommendations&tabs=HTTP).
+
+## Consume attack path data programmatically using API
+
+The following examples show sample ARG queries that you can run:
+
+**Get all attack paths in subscription 'X'**:
+
+```kusto
+securityresources
+| where type == "microsoft.security/attackpaths"
+| where subscriptionId == "<SUBSCRIPTION_ID>"
+```
+
+**Get all instances for a specific attack path**:
+For example, `Internet exposed VM with high severity vulnerabilities and read permission to a Key Vault`.
+
+```kusto
+securityresources
+| where type == "microsoft.security/attackpaths"
+| where subscriptionId == "<SUBSCRIPTION_ID>"
+| extend AttackPathDisplayName = tostring(properties["displayName"])
+| where AttackPathDisplayName == "<DISPLAY_NAME>"
+```
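+
+As a rough sketch, here's one way to run the query above programmatically through the ARG REST endpoint (assuming the `azure-identity` and `requests` packages, and an identity with read access to the subscription):
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Acquire a token for Azure Resource Manager, then call the Resource Graph REST API.
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+query = 'securityresources | where type == "microsoft.security/attackpaths"'
+
+resp = requests.post(
+    "https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01",
+    headers={"Authorization": f"Bearer {token}"},
+    json={"subscriptions": ["<SUBSCRIPTION_ID>"], "query": query},
+)
+resp.raise_for_status()
+
+# Each row carries the fields described in the schema table below.
+for row in resp.json()["data"]:
+    print(row["name"], row["properties"]["displayName"])
+```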
+
+### API response schema
+
+The following table lists the data fields returned from the API response:
+
+| Field | Description |
+|--|--|
+| ID | The Azure resource ID of the attack path instance|
+| Name | The unique identifier of the attack path instance|
+| Type | The Azure resource type, always equals `microsoft.security/attackpaths`|
+| Tenant ID | The tenant ID of the attack path instance |
+| Location | The location of the attack path |
+| Subscription ID | The subscription of the attack path |
+| Properties.description | The description of the attack path |
+| Properties.displayName | The display name of the attack path |
+| Properties.attackPathType | The type of the attack path|
+| Properties.manualRemediationSteps | Manual remediation steps of the attack path |
+| Properties.refreshInterval | The refresh interval of the attack path |
+| Properties.potentialImpact | The potential impact of the attack path being breached |
+| Properties.riskCategories | The categories of risk of the attack path |
+| Properties.entryPointEntityInternalID | The internal ID of the entry point entity of the attack path |
+| Properties.targetEntityInternalID | The internal ID of the target entity of the attack path |
+| Properties.assessments | Mapping of entity internal ID to the security assessments on that entity |
+| Properties.graphComponent | List of graph components representing the attack path |
+| Properties.graphComponent.insights | List of insights graph components related to the attack path |
+| Properties.graphComponent.entities | List of entities graph components related to the attack path |
+| Properties.graphComponent.connections | List of connections graph components related to the attack path |
+| Properties.AttackPathID | The unique identifier of the attack path instance |
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Build queries with cloud security explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud Concept Regulatory Compliance Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance-standards.md
Title: Regulatory compliance standards in Microsoft Defender for Cloud
-description: Learn about regulatory compliance standards in Microsoft Defender for Cloud
- Previously updated : 11/27/2023
+ Title: Regulatory compliance in Defender for Cloud
+description: Learn about regulatory compliance standards and certification in Microsoft Defender for Cloud, and how it helps ensure compliance with industry regulations.
+++ Last updated : 03/31/2024
+#customer intent: As a cloud security professional, I want to understand how Defender for Cloud helps me meet regulatory compliance standards, so that I can ensure my organization is compliant with industry standards and regulations.
-# Regulatory compliance standards
+# Regulatory compliance standards in Microsoft Defender for Cloud
Microsoft Defender for Cloud streamlines the regulatory compliance process by helping you to identify issues that are preventing you from meeting a particular compliance standard, or achieving compliance certification.
By default, when you enable Defender for Cloud, the following standards are enab
- For **AWS**: [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) and [AWS Foundational Security Best Practices standard](https://docs.aws.amazon.com/securityhub/latest/userguide/fsbp-standard.html). - For **GCP**: [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) and **GCP Default**.
-## Next steps
+## Available regulatory standards
+
+The following regulatory standards are available in Defender for Cloud:
+
+| Standards for Azure subscriptions | Standards for AWS accounts | Standards for GCP projects |
+|--|--|--|
+| Australian Government ISM Protected | AWS Foundational Security Best Practices | Brazilian General Personal Data Protection Law (LGPD)|
+| Canada Federal PBMM | AWS Well-Architected Framework | California Consumer Privacy Act (CCPA)|
+| CIS Azure Foundations | Brazilian General Personal Data Protection Law (LGPD) | CIS Controls|
+| CMMC | California Consumer Privacy Act (CCPA) | CIS GCP Foundations|
+| FedRAMP 'H' & 'M' | CIS AWS Foundations | CIS Google Cloud Platform Foundation Benchmark|
+| HIPAA/HITRUST | CRI Profile | CIS Google Kubernetes Engine (GKE) Benchmark|
+| ISO/IEC 27001 | CSA Cloud Controls Matrix (CCM) | CRI Profile|
+| New Zealand ISM Restricted | GDPR | CSA Cloud Controls Matrix (CCM)|
+| NIST SP 800-171 | ISO/IEC 27001 | Cybersecurity Maturity Model Certification (CMMC)|
+| NIST SP 800-53 | ISO/IEC 27002 | FFIEC Cybersecurity Assessment Tool (CAT)|
+| PCI DSS | NIST Cybersecurity Framework (CSF) | GDPR|
+| RMIT Malaysia | NIST SP 800-172 | ISO/IEC 27001|
+| SOC 2 | PCI DSS | ISO/IEC 27002|
+| SWIFT CSP CSCF | | ISO/IEC 27017|
+| UK OFFICIAL and UK NHS | | NIST Cybersecurity Framework (CSF)|
+| | | NIST SP 800-53 |
+| | | NIST SP 800-171|
+| | | NIST SP 800-172|
+| | | PCI DSS|
+| | | Sarbanes Oxley Act (SOX)|
+| | | SOC 2|
+
+## Related content
- [Assign regulatory compliance standards](update-regulatory-compliance-packages.md)-- [Improve regulatory compliance](regulatory-compliance-dashboard.md)
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md
Title: Microsoft Defender for open-source relational databases
+ Title: What is Defender for open-source databases
description: Learn about the benefits and features of Microsoft Defender for open-source relational databases such as PostgreSQL, MySQL, and MariaDB Previously updated : 06/19/2022 Last updated : 04/02/2024
+#customer intent: As a reader, I want to understand the purpose and features of Microsoft Defender for open-source relational databases so that I can make informed decisions about its usage.
-# Overview of Microsoft Defender for open-source relational databases
+# What is Microsoft Defender for open-source relational databases
This plan brings threat protections for the following open-source relational databases:
Defender for Cloud detects anomalous activities indicating unusual and potential
## Availability
-| Aspect | Details |
-|--|:-|
-| Release state: | General availability (GA) |
-| Pricing: | **Microsoft Defender for open-source relational databases** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Supported environments:|:::image type="icon" source="./media/icons/yes-icon.png"::: PaaS<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Arc-enabled machines |
-| Protected versions of PostgreSQL: | Single Server - General Purpose and Memory Optimized. Learn more in [PostgreSQL Single Server pricing tiers](../postgresql/concepts-pricing-tiers.md). Flexible Server - all pricing tiers (enablement is currently only supported at resource level).|
-| Protected versions of MySQL: | Single Server - General Purpose and Memory Optimized. Learn more in [MySQL pricing tiers](../mysql/concepts-pricing-tiers.md). |
-| Protected versions of MariaDB: | General Purpose and Memory Optimized. Learn more in [MariaDB pricing tiers](../mariadb/concepts-pricing-tiers.md). |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br> :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet |
+Check out the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) for pricing information for Microsoft Defender for open-source relational databases.
+
+Defender for open-source relational databases is supported in PaaS environments, not on Azure Arc-enabled machines.
+
+**Protected versions of PostgreSQL include**:
+- Single Server - General Purpose and Memory Optimized. Learn more in [PostgreSQL Single Server pricing tiers](../postgresql/concepts-pricing-tiers.md).
+- Flexible Server - all pricing tiers.
+
+**Protected versions of MySQL include**:
+- Single Server - General Purpose and Memory Optimized. Learn more in [MySQL pricing tiers](../mysql/concepts-pricing-tiers.md).
+- Flexible Server - all pricing tiers.
+
+**Protected versions of MariaDB include**:
+- General Purpose and Memory Optimized. Learn more in [MariaDB pricing tiers](../mariadb/concepts-pricing-tiers.md).
+
+View [cloud availability](support-matrix-cloud-environment.md#cloud-support) for Defender for open-source relational databases.
## What are the benefits of Microsoft Defender for open-source relational databases?
These alerts appear in Defender for Cloud's security alerts page and include:
Threat intelligence enriched security alerts are triggered when there are: -- **Anomalous database access and query patterns** - For example, an abnormally high number of failed sign-in attempts with different credentials (a brute force attempt)-- **Suspicious database activities** - For example, a legitimate user accessing an SQL Server from a breached computer which communicated with a crypto-mining C&C server-- **Brute-force attacks** ΓÇô With the ability to separate simple brute force from brute force on a valid user or a successful brute force
+- **Anomalous database access and query patterns** - For example, an abnormally high number of failed sign-in attempts with different credentials (a brute force attempt).
+- **Suspicious database activities** - For example, a legitimate user accessing an SQL Server from a breached computer which communicated with a crypto-mining C&C server.
+- **Brute-force attacks** - With the ability to distinguish a simple brute-force attempt from a successful one.
> [!TIP] > View the full list of security alerts for database servers [in the alerts reference page](alerts-reference.md#alerts-for-open-source-relational-databases).
-## Next steps
-
-In this article, you learned about Microsoft Defender for open-source relational databases.
+## Related articles
-> [!div class="nextstepaction"]
-> [Enable enhanced protections](enable-enhanced-security.md)
+- [Enable Microsoft Defender for open-source relational databases and respond to alerts](defender-for-databases-usage.md)
+- [Common questions about Defender for Databases](faq-defender-for-databases.yml)
defender-for-cloud Defender For Databases Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-usage.md
Title: Setting up and responding to alerts from Microsoft Defender for open-source relational databases
-description: Learn how to configure Microsoft Defender for open-source relational databases to detect anomalous database activities indicating potential security threats to the database.
Previously updated : 11/09/2021
+ Title: Microsoft Defender for open-source relational databases
+description: Configure Microsoft Defender for open-source relational databases to detect potential security threats.
Last updated : 04/02/2024
+#customer intent: As a reader, I want to learn how to configure Microsoft Defender for open-source relational databases to enhance the security of my databases.
+ # Enable Microsoft Defender for open-source relational databases and respond to alerts Microsoft Defender for Cloud detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases for the following
Defender for Cloud sends email notifications when it detects anomalous database
1. For additional details and recommended actions for investigating the current threat and remediating future threats, select a specific alert.
- :::image type="content" source="media/defender-for-databases-usage/specific-alert-details.png" alt-text="Details of a specific alert." lightbox="media/defender-for-databases-usage/specific-alert-details.png":::
+ :::image type="content" source="media/defender-for-databases-usage/specific-alert-details.png" alt-text="Screenshot that shows the details of a specific alert." lightbox="media/defender-for-databases-usage/specific-alert-details.png":::
> [!TIP] > For a detailed tutorial on how to handle your alerts, see [Manage and respond to alerts](tutorial-security-incident.md).
-## Next steps
+## Next step
-- [Automate responses to Defender for Cloud triggers](workflow-automation.md)
-- [Stream alerts to a SIEM, SOAR, or ITSM solution](export-to-siem.md)
-- [Suppress alerts from Defender for Cloud](alerts-suppression-rules.md)
+> [!div class="nextstepaction"]
+> [Automate responses to Defender for Cloud triggers](workflow-automation.md)
defender-for-cloud Disable Vulnerability Findings Containers Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/disable-vulnerability-findings-containers-secure-score.md
+
+ Title: Creating exemptions and disabling vulnerabilities (Secure score)
+description: Learn how to create exemptions and disable vulnerabilities (Secure score)
+ Last updated : 07/09/2023++
+# Create exemptions and disable vulnerability assessment findings on Container registry images and running images (Secure score)
+
+>[!NOTE]
+>You can customize your vulnerability assessment experience by exempting management groups, subscriptions, or specific resources from your secure score. Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
+
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+
+When a finding matches the criteria you defined in your disable rules, it doesn't appear in the list of findings. Typical scenario examples include:
+
+- Disable findings with severity below medium
+- Disable findings for images that the vendor won't fix
+
+> [!IMPORTANT]
+> To create a rule, you need permissions to edit a policy in Azure Policy.
+> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
+You can use a combination of any of the following criteria:
+
+- **CVE** - Enter the CVEs of the findings you want to exclude. Ensure the CVEs are valid. Separate multiple CVEs with a semicolon. For example, CVE-2020-1347; CVE-2020-1346.
+- **Image digest** - Specify images for which vulnerabilities should be excluded based on the image digest. Separate multiple digests with a semicolon, for example: `sha256:9b920e938111710c2768b31699aac9d1ae80ab6284454e8a9ff42e887fa1db31;sha256:ab0ab32f75988da9b146de7a3589c47e919393ae51bbf2d8a0d55dd92542451c`
+- **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: `ubuntu_linux_20.04;alpine_3.17`
+- **Minimum Severity** - Select low, medium, high, or critical to exclude vulnerabilities less than the specified severity level.
+- **Fix status** - Select the option to exclude vulnerabilities based on their fix status.
+
+Disable rules apply per recommendation. For example, to disable [CVE-2017-17512](https://github.com/advisories/GHSA-fc69-2v7r-7r95) on both registry images and runtime images, the disable rule has to be configured in both places.
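Before creating a disable rule, it can help to confirm exactly which findings the rule would suppress. The portal is the supported way to manage disable rules; as a read-only complement, the sketch below lists the sub-assessments behind the registry-image recommendation through Azure Resource Graph. The GUID is the assessment key from the recommendation link in this article, and the projected property paths follow the published `securityresources` schema, so verify them against your tenant:

```kusto
// List the individual vulnerability findings (sub-assessments) behind the
// registry-image recommendation, to scope a disable rule before creating it.
securityresources
| where type == "microsoft.security/assessments/subassessments"
| where id contains "c0b7cfc6-3172-465a-b378-53c7ff2cc0d5"
| project finding = tostring(properties.displayName),
          status = tostring(properties.status.code)
```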
+
+> [!NOTE]
+> The [Azure Preview Supplemental Terms](//azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+ To create a rule:
+
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) or [Running container images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management
+](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5), select **Disable rule**.
+
+1. Select the relevant scope.
+
+1. Define your criteria. You can use any of the following criteria:
+
+ - **CVE** - Enter the CVEs of the findings you want to exclude. Ensure the CVEs are valid. Separate multiple CVEs with a semicolon. For example, CVE-2020-1347; CVE-2020-1346.
+ - **Image digest** - Specify images for which vulnerabilities should be excluded based on the image digest. Separate multiple digests with a semicolon, for example: `sha256:9b920e938111710c2768b31699aac9d1ae80ab6284454e8a9ff42e887fa1db31;sha256:ab0ab32f75988da9b146de7a3589c47e919393ae51bbf2d8a0d55dd92542451c`
+ - **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: `ubuntu_linux_20.04;alpine_3.17`
+ - **Minimum Severity** - Select low, medium, high, or critical to exclude vulnerabilities less than the specified severity level.
+ - **Fix status** - Select the option to exclude vulnerabilities based on their fix status.
+
+1. In the justification text box, add your justification for why a specific vulnerability was disabled. This provides clarity and understanding for anyone reviewing the rule.
+
+1. Select **Apply rule**.
+
+ :::image type="content" source="./media/disable-vulnerability-findings-containers/disable-rules-secure-score.png" alt-text="Screenshot showing where to create a disable rule for vulnerability findings on registry images." lightbox="media/disable-vulnerability-findings-containers/disable-rules-secure-score.png":::
+
+ > [!IMPORTANT]
+ > Changes might take up to 24 hours to take effect.
+
+**To view, override, or delete a rule:**
+
+1. From the recommendations detail page, select **Disable rule**.
+1. From the scope list, subscriptions with active rules show as **Rule applied**.
+1. To view or delete the rule, select the ellipsis menu ("...").
+1. Do one of the following:
+ - To view or override a disable rule - select **View rule**, make any changes you want, and select **Override rule**.
+ - To delete a disable rule - select **Delete rule**.
+
+ :::image type="content" source="./media/disable-vulnerability-findings-containers/override-rules.png" alt-text="Screenshot showing where to view, delete or override a rule for vulnerability findings on registry images." lightbox="media/disable-vulnerability-findings-containers/override-rules.png":::
+
+## Next steps
+
+- Learn how to [view and remediate vulnerability assessment findings for registry images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn about [agentless container posture](concept-agentless-containers.md).
defender-for-cloud Disable Vulnerability Findings Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/disable-vulnerability-findings-containers.md
Disable rules apply per recommendation, for example, to disable [CVE-2017-17512]
> [!NOTE] > The [Azure Preview Supplemental Terms](//azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- To create a rule:
+## To create a rule
-1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) or [Running container images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management
-](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5), select **Disable rule**.
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9) or [Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0), select **Disable rule**.
1. Select the relevant scope.
Disable rules apply per recommendation, for example, to disable [CVE-2017-17512]
> [!IMPORTANT] > Changes might take up to 24 hours to take effect.
-**To view, override, or delete a rule:**
+## To view, override, or delete a rule
1. From the recommendations detail page, select **Disable rule**. 1. From the scope list, subscriptions with active rules show as **Rule applied**.
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
Title: Identify and remediate attack paths in Microsoft Defender for Cloud
-description: Learn how to identify and remediate attack paths in Microsoft Defender for Cloud
+ Title: Identify and remediate attack paths
++
+description: Learn how to identify and remediate attack paths in Microsoft Defender for Cloud and enhance the security of your environment.
- Previously updated : 12/06/2023 Last updated : 03/05/2024
+#customer intent: As a security analyst, I want to learn how to identify and remediate attack paths in Microsoft Defender for Cloud so that I can enhance the security of my environment.
# Identify and remediate attack paths
Defender for Cloud's contextual security capabilities assist security teams in
Attack path analysis helps you address the security issues that pose immediate threats with the greatest potential of being exploited in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate the risk.
-## Availability
+By default, attack paths are organized by their risk level. The risk level is determined by a context-aware risk-prioritization engine that considers the risk factors of each resource. Learn more about how Defender for Cloud [prioritizes security recommendations](risk-prioritization.md).
-| Aspect | Details |
-|--|--|
-| Release state | GA (General Availability) |
-| Prerequisites | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md), or [Enable Defender for Server P1 (which includes MDVM)](defender-for-servers-introduction.md) or [Defender for Server P2 (which includes MDVM and Qualys)](defender-for-servers-introduction.md). <br> - [Enable Defender CSPM](enable-enhanced-security.md) <br> - Enable agentless container posture extension in Defender CSPM, or [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. |
-| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled |
-| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS, GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
+## Prerequisites
-## Features of the attack path overview page
+You must [enable Defender Cloud Security Posture Management (CSPM)](enable-enhanced-security.md) and have [agentless scanning](enable-vulnerability-assessment-agentless.md) enabled.
-The attack path page shows you an overview of all of your attack paths. You can also see your affected resources and a list of active attack paths.
+- You must enable [Defender for Server P1 (which includes MDVM)](defender-for-servers-introduction.md) or [Defender for Server P2 (which includes MDVM and Qualys)](defender-for-servers-introduction.md).
+**To view attack paths that are related to containers**:
-On this page you can organize your attack paths based on risk level, name, environment, paths count, risk factors, entry point, target, the number of affected resources, or the number of active recommendations.
+- You must [enable agentless container posture extension](tutorial-enable-cspm-plan.md) in Defender CSPM
+ or
+- You can [enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer.
-For each attack path, you can see all of risk factors and any affected resources.
+- **Required roles and permissions**: Security Reader, Security Admin, Reader, Contributor, or Owner.
-The potential risk factors include credentials exposure, compute abuse, data exposure, subscription and account takeover.
+## Identify attack paths
-Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer?](concept-attack-path.md).
+The attack path page shows you an overview of all of your attack paths. You can also see your affected resources and a list of active attack paths.
-## Investigate and remediate attack paths
You can use Attack path analysis to locate the biggest risks to your environment and to remediate them.
-**To investigate and remediate an attack path**:
+**To identify attack paths**:
1. Sign in to the [Azure portal](https://portal.azure.com).
You can use Attack path analysis to locate the biggest risks to your environmen
1. Select a node.
- :::image type="content" source="media/how-to-manage-cloud-map/node-select.png" alt-text="Screenshot of the attack path screen that shows you where the nodes are located for selection." lightbox="media/how-to-manage-cloud-map/node-select.png":::
+ :::image type="content" source="media/how-to-manage-attack-path/node-select.png" alt-text="Screenshot of the attack path screen that shows you where the nodes are located for selection." lightbox="media/how-to-manage-attack-path/node-select.png":::
1. Select **Insight** to view the associated insights for that node.
- :::image type="content" source="media/how-to-manage-cloud-map/insights.png" alt-text="Screenshot of the insights tab for a specific node." lightbox="media/how-to-manage-cloud-map/insights.png":::
+ :::image type="content" source="media/how-to-manage-attack-path/insights.png" alt-text="Screenshot of the insights tab for a specific node." lightbox="media/how-to-manage-attack-path/insights.png":::
1. Select **Recommendations**.
- :::image type="content" source="media/how-to-manage-cloud-map/attack-path-recommendations.png" alt-text="Screenshot that shows you where to select recommendations on the screen." lightbox="media/how-to-manage-cloud-map/attack-path-recommendations.png":::
+ :::image type="content" source="media/how-to-manage-attack-path/attack-path-recommendations.png" alt-text="Screenshot that shows you where to select recommendations on the screen." lightbox="media/how-to-manage-attack-path/attack-path-recommendations.png":::
1. Select a recommendation.
-1. Follow the remediation steps to remediate the recommendation.
+1. [Remediate the recommendation](implement-security-recommendations.md).
+
+## Remediate attack paths
+
+Once you have investigated an attack path and reviewed all of the associated findings and recommendations, you can start to remediate the attack path.
+
+**To remediate an attack path**:
+
+1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**.
+
+1. Select an attack path.
-1. Select other nodes as necessary and view their insights and recommendations as necessary.
+1. Select **Remediation**.
+
+ :::image type="content" source="media/how-to-manage-attack-path/recommendations-tab.png" alt-text="Screenshot of the attack path that shows you where to select remediation." lightbox="media/how-to-manage-attack-path/recommendations-tab.png":::
+
+1. Select a recommendation.
+
+1. [Remediate the recommendation](implement-security-recommendations.md).
Once an attack path is resolved, it can take up to 24 hours for an attack path to be removed from the list.
-## View all recommendations with attack path
+## Remediate all recommendations within an attack path
-Attack path analysis also gives you the ability to see all recommendations by attack path without having to check each node individually. You can resolve all recommendations without having to view each node individually.
+Attack path analysis lets you see all recommendations for an attack path without having to check each node individually, so you can resolve them from one place.
The remediation path contains two types of recommendation:
The remediation path contains two types of recommendation:
1. Select **Remediation**.
- :::image type="content" source="media/how-to-manage-cloud-map/bulk-recommendations.png" alt-text="Screenshot that shows where to select on the screen to see the attack paths full list of recommendations." lightbox="media/how-to-manage-cloud-map/bulk-recommendations.png":::
+ :::image type="content" source="media/how-to-manage-attack-path/bulk-recommendations.png" alt-text="Screenshot that shows where to select on the screen to see the attack paths full list of recommendations." lightbox="media/how-to-manage-attack-path/bulk-recommendations.png":::
+
+1. Expand **Additional recommendations**.
1. Select a recommendation.
-1. Follow the remediation steps to remediate the recommendation.
+1. [Remediate the recommendation](implement-security-recommendations.md).
Once an attack path is resolved, it can take up to 24 hours for an attack path to be removed from the list.
-## Consume attack path data programmatically using API
-
-You can consume attack path data programmatically by querying Azure Resource Graph (ARG) API.
-Learn [how to query ARG API](/rest/api/azureresourcegraph/resourcegraph(2020-04-01-preview)/resources/resources?source=recommendations&tabs=HTTP).
-
-The following examples show sample ARG queries that you can run:
-
-**Get all attack paths in subscription 'X'**:
-
-```kusto
-securityresources
-| where type == "microsoft.security/attackpaths"
-| where subscriptionId == <SUBSCRIPTION_ID>
-```
-
-**Get all instances for a specific attack path**:
-For example, `Internet exposed VM with high severity vulnerabilities and read permission to a Key Vault`.
-
-```kusto
-securityresources
-| where type == "microsoft.security/attackpaths"
-| where subscriptionId == "212f9889-769e-45ae-ab43-6da33674bd26"
-| extend AttackPathDisplayName = tostring(properties["displayName"])
-| where AttackPathDisplayName == "<DISPLAY_NAME>"
-```
-
-### API response schema
-
-The following table lists the data fields returned from the API response:
-
-| Field | Description |
-|--|--|
-| ID | The Azure resource ID of the attack path instance|
-| Name | The Unique identifier of the attack path instance|
-| Type | The Azure resource type, always equals `microsoft.security/attackpaths`|
-| Tenant ID | The tenant ID of the attack path instance |
-| Location | The location of the attack path |
-| Subscription ID | The subscription of the attack path |
-| Properties.description | The description of the attack path |
-| Properties.displayName | The display name of the attack path |
-| Properties.attackPathType | The type of the attack path|
-| Properties.manualRemediationSteps | Manual remediation steps of the attack path |
-| Properties.refreshInterval | The refresh interval of the attack path |
-| Properties.potentialImpact | The potential impact of the attack path being breached |
-| Properties.riskCategories | The categories of risk of the attack path |
-| Properties.entryPointEntityInternalID | The internal ID of the entry point entity of the attack path |
-| Properties.targetEntityInternalID | The internal ID of the target entity of the attack path |
-| Properties.assessments | Mapping of entity internal ID to the security assessments on that entity |
-| Properties.graphComponent | List of graph components representing the attack path |
-| Properties.graphComponent.insights | List of insights graph components related to the attack path |
-| Properties.graphComponent.entities | List of entities graph components related to the attack path |
-| Properties.graphComponent.connections | List of connections graph components related to the attack path |
-| Properties.AttackPathID | The unique identifier of the attack path instance |
-
-## Next Steps
-
-Learn how to [build queries with cloud security explorer](how-to-manage-cloud-security-explorer.md).
+## Next step
+
+> [!div class="nextstepaction"]
+> [Build queries with cloud security explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Title: Build queries with cloud security explorer in Microsoft Defender for Cloud
-description: Learn how to build queries with cloud security explorer in Microsoft Defender for Cloud
+ Title: Build queries with cloud security explorer
+description: Learn how to build queries with cloud security explorer in Microsoft Defender for Cloud to proactively identify security risks in your cloud environment.
Previously updated : 11/01/2023 Last updated : 02/29/2024++
+ai-usage: ai-assisted
+# Customer Intent: As a security professional, I want to learn how to build queries with cloud security explorer in Microsoft Defender for Cloud so that I can proactively identify security risks in my cloud environment and improve my security posture.
# Build queries with cloud security explorer Defender for Cloud's contextual security capabilities assist security teams in reducing the risk of impactful breaches. Defender for Cloud uses environmental context to perform a risk assessment of your security issues, identifies the biggest security risks, and distinguishes them from less risky issues.
-Use the cloud security explorer, to proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
+Use the cloud security explorer to proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns while taking your organization's specific context and conventions into account.
With the cloud security explorer, you can query all of your security issues and environment context such as asset inventory, exposure to the internet, permissions, and lateral movement between resources and across multiple clouds (Azure, AWS, and GCP).
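The explorer itself runs in the portal, but the security issues it reasons over are also exposed through Azure Resource Graph. As a rough, non-graph starting point, you could list open issues across connected clouds with a query like the following sketch; the field paths follow the public `securityresources` assessments schema, so verify them before relying on the results:

```kusto
// List unhealthy security assessments and the cloud each affected
// resource lives in (Azure, Aws, or Gcp for connector resources).
securityresources
| where type == "microsoft.security/assessments"
| where properties.status.code == "Unhealthy"
| project issue = tostring(properties.displayName),
          cloud = tostring(properties.resourceDetails.Source),
          resourceId = tostring(properties.resourceDetails.Id)
```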
-Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
-
-## Availability
-
-| Aspect | Details |
-|--|--|
-| Release state | GA (General Availability) |
-| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled<br>- Defender for Servers P2 customers can use the explorer UI to query for keys and secrets, but must have Defender CSPM enabled to get the full value of the Explorer. |
-| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS, GCP) <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
- ## Prerequisites -- You must [enable Defender CSPM](enable-enhanced-security.md).
- - For agentless container posture, you must enable the following extensions:
- - Agentless discovery for Kubernetes (preview)
- - Container registries vulnerability assessments (preview)
+- You must [enable Defender CSPM](enable-enhanced-security.md)
+ - You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md).
+
+ For agentless container posture, you must enable the following extensions:
+ - [Agentless discovery for Kubernetes](tutorial-enable-cspm-plan.md#enable-the-components-of-the-defender-cspm-plan)
+ - [Agentless container vulnerability assessment](tutorial-enable-cspm-plan.md#enable-the-components-of-the-defender-cspm-plan)
-- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md).
+ > [!NOTE]
+ > If you only have the [Defender for Servers P2](tutorial-enable-servers-plan.md) plan enabled, you can use the cloud security explorer to query for keys and secrets, but you must have Defender CSPM enabled to get the full value of the explorer.
- Required roles and permissions: - Security Reader
Use the query link to share a query with other people. After creating a query, s
:::image type="content" source="media/how-to-manage-cloud-security/cloud-security-explorer-share-query.png" alt-text="Screenshot showing the Share Query Link icon." lightbox="media/how-to-manage-cloud-security/cloud-security-explorer-share-query.png":::
-## Next steps
-
-View the [reference list of attack paths and cloud security graph components](attack-path-reference.md).
+## Next step
-Learn about the [Defender CSPM plan options](concept-cloud-security-posture-management.md).
+> [!div class="nextstepaction"]
+> [Learn about the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md)
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
Title: Remediate security recommendations in Microsoft Defender for Cloud
-description: Learn how to remediate security recommendations in Microsoft Defender for Cloud.
+ Title: Remediate recommendations
+description: Remediate security recommendations in Microsoft Defender for Cloud to improve the security posture of your environments.
-- Previously updated : 03/05/2024++ Last updated : 03/07/2024
+ai-usage: ai-assisted
+#customer intent: As a security professional, I want to understand how to remediate security recommendations in Microsoft Defender for Cloud so that I can improve my security posture.
-# Remediate security recommendations
+# Remediate recommendations
Resources and workloads protected by Microsoft Defender for Cloud are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues, and improve security posture.
-This article describes how to remediate security recommendations in your Defender for Cloud deployment using the latest version of the portal experience.
-
-## Before you start
+This article describes how to remediate security recommendations in your Defender for Cloud deployment.
Before you attempt to remediate a recommendation, you should review it in detail. Learn how to [review security recommendations](review-security-recommendations.md).
-> [!IMPORTANT]
-> This page discusses how to use the new recommendations experience where you have the ability to prioritize your recommendations by their effective risk level. To view this experience, you must select **Try it now**.
->
-> :::image type="content" source="media/review-security-recommendations/try-it-now.png" alt-text="Screenshot that shows where the try it now button is located on the recommendations page." lightbox="media/review-security-recommendations/try-it-now.png":::
-
-## Group recommendations by risk level
-
-Before you start remediating, we recommend grouping your recommendations by risk level in order to remediate the most critical recommendations first.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
-
-1. Select **Group by** > **Primary grouping** > **Risk level** > **Apply**.
-
- :::image type="content" source="media/implement-security-recommendations/group-by-risk-level.png" alt-text="Screenshot of the recommendations page that shows how to group your recommendations." lightbox="media/implement-security-recommendations/group-by-risk-level.png":::
-
- Recommendations are displayed in groups of risk levels.
-
-You can now review critical and other recommendations to understand the recommendation and remediation steps. Use the graph to understand the risk to your business, including which resources are exploitable, and the effect that the recommendation has on your business.
+## Remediate a recommendation
-## Remediate recommendations
-
-After reviewing recommendations by risk, decide which one to remediate first.
+By default, recommendations are prioritized based on the risk level of the security issue.
In addition to risk level, we recommend that you prioritize the security controls in the default [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) standard in Defender for Cloud, since these controls affect your [secure score](secure-score-security-controls.md).
In addition to risk level, we recommend that you prioritize the security control
1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
-1. Select a recommendation to remediate.
+ :::image type="content" source="media/implement-security-recommendations/recommendations-page.png" alt-text="Screenshot of the recommendations page that shows all of the affected resources by their risk level." lightbox="media/implement-security-recommendations/recommendations-page.png":::
+
+1. Select a recommendation.
1. Select **Take action**. 1. Locate the Remediate section and follow the remediation instructions.
- :::image type="content" source="./media/implement-security-recommendations/security-center-remediate-recommendation.png" alt-text="This screenshot shows manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/security-center-remediate-recommendation.png":::
+ :::image type="content" source="./media/implement-security-recommendations/remediate-recommendation.png" alt-text="This screenshot shows manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/remediate-recommendation.png":::
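If you want to track remediation progress outside the portal, one option is to count open recommendations per subscription with Azure Resource Graph. A minimal sketch, assuming the `properties.metadata.severity` path exposed by the assessments schema:

```kusto
// Count unhealthy recommendations per subscription, grouped by severity,
// to watch the backlog shrink as remediations land.
securityresources
| where type == "microsoft.security/assessments"
| where properties.status.code == "Unhealthy"
| summarize openRecommendations = count()
    by subscriptionId, severity = tostring(properties.metadata.severity)
```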
## Use the Fix option
-To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option to help you quickly remediate a recommendation on multiple resources. If the Fix button isn't present in the recommendation, then there's no option to apply a quick fix.
+To simplify the remediation process, a **Fix** button might appear in a recommendation. The **Fix** button helps you quickly remediate a recommendation on multiple resources. If the **Fix** button isn't present in the recommendation, there's no option to apply a quick fix, and you must follow the presented remediation steps to address the recommendation.
**To remediate a recommendation with the Fix button**:
Security admins can fix issues at scale with automatic script generation in AWS
Copy and run the script to remediate the recommendation.
-## Next steps
+## Next step
-Learn about [using governance rules in your remediation processes](governance-rules.md).
+> [!div class="nextstepaction"]
+> [Governance rules in your remediation processes](governance-rules.md)
defender-for-cloud Recommendations Reference Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-aws.md
To learn more about the supported runtimes that this control checks for the supp
## AWS Container recommendations
+### [[Preview] Container images in AWS registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a139383-ec7e-462a-90ac-b1b60e87d576)
+
+**Description**: Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards.
+
+**Severity**: High
+
+**Type**: Vulnerability Assessment
+
+### [[Preview] Containers running in AWS should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d1e526-363a-4223-b860-f4b6e710859f)
+
+**Description**: Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images being used and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards.
+
+**Severity**: High
+
+**Type**: Vulnerability Assessment
+ ### [EKS clusters should grant the required AWS permissions to Microsoft Defender for Cloud](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7d3a977e-46f1-419a-9046-4bd44db80aac) **Description**: Microsoft Defender for Containers provides protections for your EKS clusters.
Enabling managed platform updates ensures that the latest available platform fix
### [Elastic Load Balancer shouldn't have ACM certificate expired or expiring in 90 days.](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a5e0d700-3de1-469a-96d2-6536d9a92604)
-**Description**: This check identifies Elastic Load Balancers (ELB) which are using ACM certificates expired or expiring in 90 days. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. With ACM. you can request a certificate or deploy an existing ACM or external certificate to AWS resources. As a best practice, it's recommended to reimport expiring/expired certificates while preserving the ELB associations of the original certificate.
+**Description**: This check identifies Elastic Load Balancers (ELB) which are using ACM certificates expired or expiring in 90 days. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. With ACM, you can request a certificate or deploy an existing ACM or external certificate to AWS resources. As a best practice, it's recommended to reimport expiring/expired certificates while preserving the ELB associations of the original certificate.
**Severity**: High
IAM database authentication allows authentication to database instances with an
### [IAM customer managed policies should not allow decryption actions on all KMS keys](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d088fb9f-11dc-451e-8f79-393916e42bb2)
-**Description**: Checks whether the default version of IAM customer managed policies allow principals to use the AWS KMS decryption actions on all resources. This control uses [Zelkova](https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova), an automated reasoning engine, to validate and warn you about policies that might grant broad access to your secrets across AWS accounts.This control fails if the "kms:Decrypt" or "kms:ReEncryptFrom" actions are allowed on all KMS keys. The control evaluates both attached and unattached customer managed policies. It doesn't check inline policies or AWS managed policies.
+**Description**: Checks whether the default version of IAM customer managed policies allow principals to use the AWS KMS decryption actions on all resources. This control uses [Zelkova](https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova), an automated reasoning engine, to validate and warn you about policies that might grant broad access to your secrets across AWS accounts. This control fails if the "kms:Decrypt" or "kms:ReEncryptFrom" actions are allowed on all KMS keys. The control evaluates both attached and unattached customer managed policies. It doesn't check inline policies or AWS managed policies.
With AWS KMS, you control who can use your KMS keys and gain access to your encrypted data. IAM policies define which actions an identity (user, group, or role) can perform on which resources. Following security best practices, AWS recommends that you allow least privilege. In other words, you should grant to identities only the "kms:Decrypt" or "kms:ReEncryptFrom" permissions and only for the keys that are required to perform a task. Otherwise, the user might use keys that aren't appropriate for your data. Instead of granting permissions for all keys, determine the minimum set of keys that users need to access encrypted data. Then design policies that allow users to use only those keys. For example, don't allow "kms:Decrypt" permission on all KMS keys. Instead, allow "kms:Decrypt" only on keys in a particular Region for your account. By adopting the principle of least privilege, you can reduce the risk of unintended disclosure of your data.
defender-for-cloud Recommendations Reference Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-gcp.md
At least business critical VMs should have VM disks encrypted with CSEK.
## GCP Container recommendations
+### [[Preview] Container images in GCP registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04)
+
+**Description**: Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards.
+
+**Severity**: High
+
+**Type**: Vulnerability Assessment
+
+### [[Preview] Containers running in GCP should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c7c1d31d-a604-4b86-96df-63448618e165)
+
+**Description**: Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images being used and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards.
+
+**Severity**: High
+
+**Type**: Vulnerability Assessment
+ ### [Advanced configuration of Defender for Containers should be enabled on GCP connectors](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b7683ca3-3a11-49b6-b9d4-a112713edfa3) **Description**: Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. To ensure you the solution is provisioned properly, and the full set of capabilities are available, enable all advanced configuration settings.
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
When you restore from a recovery point, you can restore the whole VM or specific
**Severity**: Low
-### [EDR solution should be installed on Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/06e3a6db-6c0c-4ad9-943f-31d9d73ecf6c)
+### [EDR solution should be installed on Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/06e3a6db-6c0c-4ad9-943f-31d9d73ecf6c)
**Description**: Installing an Endpoint Detection and Response (EDR) solution on virtual machines is important for protection against advanced threats. EDRs aid in preventing, detecting, investigating, and responding to these threats. Microsoft Defender for Servers can be used to deploy Microsoft Defender for Endpoint. If a resource is classified as "Unhealthy", it indicates the absence of a supported EDR solution. If an EDR solution is installed but not discoverable by this recommendation, it can be exempted. Without an EDR solution, the virtual machines are at risk of advanced threats.
Learn more about [Trusted launch for Azure virtual machines](../virtual-machines
## Container recommendations
+### [[Preview] Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9)
+
+**Description**: Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards.
+
+**Severity**: High
+
+**Type**: Vulnerability Assessment
+
+### [[Preview] Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0)
+
+**Description**: Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images being used and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards.
+
+**Severity**: High
+
+**Type**: Vulnerability Assessment
+ ### [(Enable if required) Container registries should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/af560c4d-9c05-e073-b9f1-f7a94958ff25) **Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements.
Privileged containers have all of the root capabilities of a host machine. They
### [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5)
+> [!IMPORTANT]
+> This recommendation is on a retirement path. It is being replaced by the recommendation [[Preview] Container images in Azure registry should have vulnerability findings resolved](#preview-container-images-in-azure-registry-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkey33422d8f-ab1e-42be-bc9a-38685bb567b9).
+ **Description**: Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. (Related policy: [Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0f936f-2f01-4bf5-b6be-d423792fa562)).
Privileged containers have all of the root capabilities of a host machine. They
**Type**: Vulnerability Assessment
-### [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c)
-
-**Description**: Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks.
-(No related policy)
-
-**Severity**: High
-
-**Type**: Vulnerability Assessment
- ### [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)
+> [!IMPORTANT]
+> This recommendation is on a retirement path. It is being replaced by the recommendation [[Preview] Containers running in Azure should have vulnerability findings resolved](#preview-containers-running-in-azure-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeye9acaf48-d2cf-45a3-a6e7-3caa2ef769e0).
+ **Description**: Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. **Severity**: High
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
## April 2024
-| Date | Update |
-| - | - |
+|Date | Update |
+|--|--|
+| April 3 | [Risk prioritization is now the default experience in Defender for Cloud](#risk-prioritization-is-now-the-default-experience-in-defender-for-cloud) |
+| April 3 | [New container vulnerability assessment recommendations](#new-container-vulnerability-assessment-recommendations) |
+| April 3 | [Defender for open-source relational databases updates](#defender-for-open-source-relational-databases-updates) |
| April 2 | [Update to recommendations to align with Azure AI Services resources](#update-to-recommendations-to-align-with-azure-ai-services-resources) |
| April 2 | [Deprecation of Cognitive Services recommendation](#deprecation-of-cognitive-services-recommendation) |
| April 2 | [Containers multicloud recommendations (GA)](#containers-multicloud-recommendations-ga) |
+### Risk prioritization is now the default experience in Defender for Cloud
+
+April 3, 2024
+
+Risk prioritization is now the default experience in Defender for Cloud. This feature helps you focus on the most critical security issues in your environment by prioritizing recommendations based on the risk factors of each resource. The risk factors include the potential impact of the security issue being breached, the categories of risk, and the attack path that the security issue is part of.
+
+Learn more about [risk prioritization](risk-prioritization.md).
+
+### New container vulnerability assessment recommendations
+
+April 3, 2024
+
+To support the new [risk-based prioritization](risk-prioritization.md) experience for recommendations, we've created new recommendations for container vulnerability assessments in Azure, AWS, and GCP. They report on container images in registries and on container workloads at runtime:
+
+- [Container images in Azure registry should have vulnerability findings resolved](recommendations-reference.md#container-images-in-azure-registry-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkey33422d8f-ab1e-42be-bc9a-38685bb567b9)
+- [Containers running in Azure should have vulnerability findings resolved](recommendations-reference.md#containers-running-in-azure-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeye9acaf48-d2cf-45a3-a6e7-3caa2ef769e0)
+- [Container images in AWS registry should have vulnerability findings resolved](recommendations-reference-aws.md#container-images-in-aws-registry-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkey2a139383-ec7e-462a-90ac-b1b60e87d576)
+- [Containers running in AWS should have vulnerability findings resolved](recommendations-reference-aws.md#containers-running-in-aws-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyd5d1e526-363a-4223-b860-f4b6e710859f)
+- [Container images in GCP registry should have vulnerability findings resolved](recommendations-reference-gcp.md#container-images-in-gcp-registry-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkey24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04)
+- [Containers running in GCP should have vulnerability findings resolved](recommendations-reference-gcp.md#containers-running-in-gcp-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyc7c1d31d-a604-4b86-96df-63448618e165)
+
+The previous container vulnerability assessment recommendations are on a retirement path and will be removed when the new recommendations are generally available.
+
+- [[Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5)](recommendations-reference.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-managementhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyc0b7cfc6-3172-465a-b378-53c7ff2cc0d5)
+- [[Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)](recommendations-reference.md#azure-running-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-managementhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyc609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)
+- [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce)
+- [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainersRuntimeRecommendationDetailsBlade/assessmentKey/682b2595-d045-4cff-b5aa-46624eb2dd8f)
+- [GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainerRegistryRecommendationDetailsBlade/assessmentKey/5cc3a2c1-8397-456f-8792-fe9d0d4c9145)
+- [GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainersRuntimeRecommendationDetailsBlade/assessmentKey/e538731a-80c8-4317-a119-13075e002516)
+
+> [!NOTE]
+> The new recommendations are currently in public preview and will not be used for secure score calculation.
+
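To check whether the new preview recommendations are already reporting in your environment, you can look up their assessment keys (the GUIDs in the links above) in Azure Resource Graph. A sketch, assuming the assessment resource `name` carries the assessment key, as it does for other recommendations:

```kusto
// Find resources flagged by the new preview container vulnerability
// assessment recommendations across Azure, AWS, and GCP.
securityresources
| where type == "microsoft.security/assessments"
| where name in (
    "33422d8f-ab1e-42be-bc9a-38685bb567b9", // Azure registry images
    "e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0", // Azure running containers
    "2a139383-ec7e-462a-90ac-b1b60e87d576", // AWS registry images
    "d5d1e526-363a-4223-b860-f4b6e710859f", // AWS running containers
    "24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04", // GCP registry images
    "c7c1d31d-a604-4b86-96df-63448618e165") // GCP running containers
| where properties.status.code == "Unhealthy"
| project resourceId = tostring(properties.resourceDetails.Id),
          recommendation = tostring(properties.displayName)
```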
+### Defender for open-source relational databases updates
+
+April 3, 2024
+
+**Defender for PostgreSQL Flexible Servers post-GA updates** - The update enables customers to enforce protection for existing PostgreSQL flexible servers at the subscription level, with the flexibility to enable protection per resource or to automatically protect all resources in the subscription.
+
+**Defender for MySQL Flexible Servers Availability and GA** - Defender for Cloud expanded its support for Azure open-source relational databases by incorporating MySQL Flexible Servers.
+
+This release includes:
+
+- Alert compatibility with existing alerts for Defender for MySQL Single Servers.
+- Enablement of individual resources.
+- Enablement at the subscription level.
+
+If you're already protecting your subscription with Defender for open-source relational databases, your flexible server resources are automatically enabled, protected, and billed.
+
+Specific billing notifications have been sent via email for affected subscriptions.
+
+Learn more about [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md).
+
+> [!NOTE]
+> Updates for Azure Database for MySQL flexible servers are rolling out over the next few weeks. If you see the error message `The server <servername> is not compatible with Advanced Threat Protection`, you can either wait for the update to roll out, or open a support ticket to update the server sooner to a supported version.
+### Update to recommendations to align with Azure AI Services resources

April 2, 2024
The following recommendations have been updated to align with the Azure AI Servi
| Cognitive Services accounts should restrict network access | [Azure AI Services resources should restrict network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243) | | Cognitive Services accounts should have local authentication methods disabled | [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6) | | Diagnostic logs in Search services should be enabled | [Diagnostic logs in Azure AI services resources should be enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e) |
-
+ See the [list of security recommendations](recommendations-reference.md).

### Deprecation of Cognitive Services recommendation
Learn more about [automated remediation scripts](implement-security-recommendati
March 6, 2024
-Based on customer feedback, we've added the following compliance standards in preview to our compliance dashboard. As shown, these are for reviewing the compliance status of AWS and GCP resources protected by Defender for Cloud.
-
-| Compliance standard | Version | AWS | GCP |
-| -- | - | - | - |
-| AWS Well-Architected Framework | N/A | :white_check_mark: | :x: |
-| Brazilian General Personal Data Protection Law (LGPD) | 53/2018 | :white_check_mark: | :white_check_mark: |
-| California Consumer Privacy Act (CCPA) | 2018 | :white_check_mark: | :white_check_mark: |
-| CIS Controls | v8 | :x: | :white_check_mark: |
-| CIS Google Cloud Platform Foundation Benchmark | v2.0.0 | :x: | :white_check_mark: |
-| CIS Google Kubernetes Engine (GKE) Benchmark | v1.5.0 | :x: | :white_check_mark: |
-| CPS 234 (APRA) | 2019 | :x: | :white_check_mark: |
-| CRI Profile | v1.2.1 | :white_check_mark: | :white_check_mark: |
-| CSA Cloud Controls Matrix (CCM) | v4.0.10 | :white_check_mark: | :white_check_mark: |
-| Cybersecurity Maturity Model Certification (CMMC) | v2.0 | :x: | :white_check_mark: |
-| FFIEC Cybersecurity Assessment Tool (CAT) | 2017 | :x: | :white_check_mark: |
-| GDPR | 2016/679 | :white_check_mark: | :white_check_mark: |
-| ISO/IEC 27001 | 27001:2022 | :white_check_mark: | :white_check_mark: **(Update)** |
-| ISO/IEC 27002 | 27002:2022 | :white_check_mark: | :white_check_mark: |
-| ISO/IEC 27017 | 27017:2015 | :x: | :white_check_mark: |
-| NIST Cybersecurity Framework (CSF) | v1.1 | :white_check_mark: | :white_check_mark: |
-| NIST SP 800-171 | Revision 2 | :x: | :white_check_mark: |
-| NIST SP 800-172 | 2021 | :white_check_mark: | :white_check_mark: |
-| PCI-DSS | v4.0.0 | :white_check_mark: **(Update)** | :white_check_mark: **(Update)** |
-| Sarbanes Oxley Act (SOX) | 2002 | :x: | :white_check_mark: |
-| SOC 2 | 2017 | :x: | :white_check_mark: |
+Based on customer feedback, we've added compliance standards in preview to Defender for Cloud.
+
+Check out the [full list of supported compliance standards](concept-regulatory-compliance-standards.md#available-regulatory-standards).
We are continuously working on adding and updating new standards for Azure, AWS, and GCP environments.
You can now prioritize your security recommendations according to the risk level
By organizing your recommendations based on their risk level (Critical, High, Medium, Low), you can address the most critical risks within your environment and efficiently prioritize the remediation of security issues based on actual risk factors such as internet exposure, data sensitivity, lateral movement possibilities, and potential attack paths that could be mitigated by resolving the recommendations.
-Learn more about [risk prioritization](implement-security-recommendations.md#group-recommendations-by-risk-level).
+Learn more about [risk prioritization](implement-security-recommendations.md).
### Attack path analysis new engine and extensive enhancements
defender-for-cloud Review Exemptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-exemptions.md
Title: Exempt a recommendation in Microsoft Defender for Cloud
+ Title: Review resources exempted from recommendations
description: Learn how to exempt recommendations so they're not taken into account in Microsoft Defender for Cloud. Previously updated : 11/22/2023 Last updated : 02/29/2024
+#customer intent: As a user, I want to learn how to exempt recommendations in Microsoft Defender for Cloud so that I can customize the security recommendations for my environment.
# Review resources exempted from recommendations

In Microsoft Defender for Cloud, you can [exempt protected resources from Defender for Cloud security recommendations](exempt-resource.md). This article describes how to review and work with exempted resources.
-> [!IMPORTANT]
-> This page discusses how to use the new recommendations experience where you have the ability to prioritize your recommendations by their effective risk level. To view this experience, you must select **Try it now**.
->
-> :::image type="content" source="media/review-security-recommendations/try-it-now.png" alt-text="Screenshot that shows where the try it now button is located on the recommendation page." lightbox="media/review-security-recommendations/try-it-now.png":::
- ## Review exempted resources in the portal
+Once a resource is exempted, it's no longer taken into account for security recommendations. You can review the exempted resources and manage each one in the Defender for Cloud portal.
+
+### Review exempted resources on the recommendations page
+
+**To review exempted resources**:
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to **Defender for Cloud** > **Recommendations**.
-1. Select **Add filter** > **Is exempt**.
+1. Select **Recommendation status**.
-1. Select **All**, **Yes** or **No**.
+1. Select **Exempted**.
1. Select **Apply**.
- :::image type="content" source="media/review-exemptions/filter-exemptions.png" alt-text="Steps to create an exemption rule to exempt a recommendation from your subscription or management group." lightbox="media/review-exemptions/filter-exemptions.png":::
+ :::image type="content" source="media/review-exemptions/exempted-resources.png" alt-text="Screenshot of the recommendations page that shows where the recommendation status, exempted and apply button are located." lightbox="media/review-exemptions/exempted-resources.png":::
-1. In the details page for the relevant recommendation, review the exemption rules.
+1. Select a resource to review it.
-1. For each resource, the **Reason** column shows why the resource is exempted. To modify the exemption settings for a resource, select the ellipsis in the resource > **Manage exemption**.
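You can also review exemptions at scale with an Azure Resource Graph query instead of the portal filter. The following is a minimal sketch, assuming exempted resources surface as assessments whose status cause is `Exempt`; verify the cause value against your own results before relying on it.

```kusto
// Minimal sketch: list resources exempted from recommendations.
// Assumption: exemptions surface with a status cause of "Exempt".
securityresources
| where type =~ "microsoft.security/assessments"
| where tostring(properties.status.cause) == "Exempt"
| project recommendation = tostring(properties.displayName),
    resourceId = tostring(properties.resourceDetails.Id),
    reason = tostring(properties.status.description)
```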
+### Review exempted resources on the inventory page
You can also find all resources that are exempted from one or more recommendations on the Inventory page.
You can also find all resources that are exempted from one or more recommendatio
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to **Defender for Cloud** > **Recommendations**.
+1. Navigate to **Defender for Cloud** > **Inventory**.
1. Select **Add filter**
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Review security recommendations in Microsoft Defender for Cloud
-description: Learn how to review security recommendations in Microsoft Defender for Cloud
+ Title: Review security recommendations
+description: Learn how to review security recommendations in Microsoft Defender for Cloud and improve the security posture of your environments.
Previously updated : 11/21/2023++ Last updated : 03/07/2024
+#customer intent: As a security analyst, I want to learn how to review security recommendations in Microsoft Defender for Cloud so that I can improve the security posture of my environments.
# Review security recommendations

In Microsoft Defender for Cloud, resources and workloads are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues and improve security posture.
-This article describes how to review security recommendations in your Defender for Cloud deployment using the latest version of the portal experience.
+Defender for Cloud proactively uses a dynamic engine that assesses the risks in your environment, taking into account the potential for exploitation and the potential business impact to your organization. The engine [prioritizes security recommendations based on the risk factors](risk-prioritization.md) of each resource, which are determined by the context of the environment, including the resource's configuration, network connections, and security posture.
-## Get an overview
+## Prerequisites
-In Defender for Cloud, navigate to the **Overview** dashboard to get a holistic look at your environments, including:
+- You must [enable Defender CSPM](enable-enhanced-security.md) on your environment.
-- **Active recommendations**: Recommendations that are active in your environment.-- **Unassigned recommendations**: See which recommendations don't have owners assigned to them.-- **Overdue recommendations**: Recommendations that have an expired due date.-- **Attack paths**: See the number of attack paths.-
-## Review recommendations
-
-> [!IMPORTANT]
-> This page discusses how to use the new recommendations experience where you have the ability to prioritize your recommendations by their effective risk level. To view this experience, you must select **Try it now**.
->
-> :::image type="content" source="media/review-security-recommendations/try-it-now.png" alt-text="Screenshot that shows where the try it now button is located on the recommendation page." lightbox="media/review-security-recommendations/try-it-now.png":::
-
-**To review recommendations**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Navigate to **Defender for Cloud** > **Recommendations**.
-
-1. For each recommendation, review:
-
- - **Risk level** - Specifies whether the recommendation risk is Critical, High, Medium or Low.
- - **Affected resource** - Indicated affected resources.
- - **Risk factors** - Environmental factors of the resource affected by the recommendation, which influences the exploitability and the business effect of the underlying security issue. For example, Internet exposure, sensitive data, lateral movement potential and more.
- - **Attack Paths** - The number of attack paths.
- - **Owner** - The person assigned to this recommendation.
- - **Due date** - Indicates the due date for fixing the recommendation.
- - **Recommendation status** indicates whether the recommendation is assigned, and the status of the due date for fixing the recommendation.
+> [!NOTE]
+> Recommendations are included by default with Defender for Cloud, but you will not be able to see risk prioritization without Defender CSPM enabled on your environment.
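Before drilling into the portal, you can get a quick, scriptable approximation of your active recommendations with Azure Resource Graph. This sketch uses only the standard assessment fields (display name, status code, metadata severity); it doesn't reproduce the risk-level columns that the Defender CSPM engine adds.

```kusto
// List active (unhealthy) recommendations, ordered High > Medium > Low.
securityresources
| where type =~ "microsoft.security/assessments"
| where properties.status.code == "Unhealthy"
| extend severity = tostring(properties.metadata.severity)
| extend severityRank = case(severity == "High", 0, severity == "Medium", 1, 2)
| order by severityRank asc
| project recommendation = tostring(properties.displayName), severity,
    resourceId = tostring(properties.resourceDetails.Id)
```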
## Review recommendation details
It's important to review all of the details related to a recommendation before t
1. Select a recommendation.
1. In the recommendation page, review the details:
+ - **Risk level** - The exploitability and the business impact of the underlying security issue, taking into account environmental resource context such as: Internet exposure, sensitive data, lateral movement, and more.
+ - **Risk factors** - Environmental factors of the resource affected by the recommendation, which influence the exploitability and the business impact of the underlying security issue. Examples of risk factors include internet exposure, sensitive data, and lateral movement potential.
+ - **Resource** - The name of the affected resource.
+ - **Status** - The status of the recommendation. For example, unassigned, on time, overdue.
 - **Description** - A short description of the security issue.
 - **Attack Paths** - The number of attack paths.
 - **Scope** - The affected subscription or resource.
It's important to review all of the details related to a recommendation before t
 - **Last change date** - The date the recommendation was last changed.
 - **Owner** - The person assigned to this recommendation.
 - **Due date** - The assigned date the recommendation must be resolved by.
- - **Severity** - The severity of the recommendation (High, Medium, or Low). More details below.
- **Tactics & techniques** - The tactics and techniques mapped to MITRE ATT&CK.
- :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Screenshot of the recommendation details page with labels for each element." lightbox="./media/security-policy-concept/recommendation-details-page.png":::
## Explore a recommendation

You can perform many actions to interact with recommendations. If an option isn't available, it isn't relevant for the recommendation.
You can perform many actions to interact with recommendations. If an option isn'
- Select **View policy definition** to view the Azure Policy entry for the underlying recommendation (if relevant).
-1. In **Findings**, you can review affiliated findings by severity.
-
- :::image type="content" source="media/review-security-recommendations/recommendation-findings.png" alt-text="Screenshot of the findings tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-findings.png":::
1. In **Take action**:

   - **Remediate**: A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option, you can select **View remediation logic** before applying the suggested fix to your resources.
You can perform many actions to interact with recommendations. If an option isn'
:::image type="content" source="media/review-security-recommendations/recommendation-take-action.png" alt-text="Screenshot that shows what you can see in the recommendation when you select the take action tab." lightbox="media/review-security-recommendations/recommendation-take-action.png":::
+1. In **Findings**, you can review affiliated findings by severity.
+
+ :::image type="content" source="media/review-security-recommendations/recommendation-findings.png" alt-text="Screenshot of the findings tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-findings.png":::
1. In **Graph**, you can view and investigate all context that is used for risk prioritization, including [attack paths](how-to-manage-attack-path.md). You can select a node in an attack path to view the details of the selected node.

    :::image type="content" source="media/review-security-recommendations/recommendation-graph.png" alt-text="Screenshot of the graph tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-graph.png":::
-## How are recommendations classified?
+1. Select a node to view additional details.
+
+ :::image type="content" source="media/review-security-recommendations/select-node.png" alt-text="Screenshot of a node located in the graph tab that is selected and showing the additional details." lightbox="media/review-security-recommendations/select-node.png":::
+
+1. Select **Insights**.
+
+1. In the vulnerability dropdown menu, select a vulnerability to view the details.
-Every security recommendation from Defender for Cloud is assigned one of three severity ratings:
+ :::image type="content" source="media/review-security-recommendations/insights.png" alt-text="Screenshot of the insights tab for a specific node." lightbox="media/review-security-recommendations/insights.png":::
-- **High severity**: These recommendations should be addressed immediately, as they indicate a critical security vulnerability that could be exploited by an attacker to gain unauthorized access to your systems or data. Examples of high severity recommendations are when weΓÇÖve discovered unprotected secrets on a machine, overly-permissive inbound NSG rules, clusters allowing images to be deployed from untrusted registries, and unrestricted public access to storage accounts or databases.
+1. (Optional) Select **Open the vulnerability page** to view the associated recommendation page.
-- **Medium severity**: These recommendations indicate a potential security risk that should be addressed in a timely manner, but may not require immediate attention. Examples of medium severity recommendations might include containers sharing sensitive host namespaces, web apps not using managed identities, Linux machines not requiring SSH keys during authentication, and unused credentials being left in the system after 90 days of inactivity.
+1. [Remediate the recommendation](implement-security-recommendations.md).
+
+## Group recommendations by title
+
+Defender for Cloud's recommendation page allows you to group recommendations by title. This feature is useful when you want to remediate a specific security issue that affects multiple resources. A query-based alternative is sketched after the steps below.
+
+**To group recommendations by title**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Defender for Cloud** > **Recommendations**.
-- **Low severity**: These recommendations indicate a relatively minor security issue that can be addressed at your convenience. Examples of low severity recommendations might include the need to disable local authentication in favor of Microsoft Entra ID, health issues with your endpoint protection solution, best practices not being followed with network security groups, or misconfigured logging settings that could make it harder to detect and respond to security incidents.
+1. Select **Group by title**.
-Of course, the internal views of an organization might differ with MicrosoftΓÇÖs classification of a specific recommendation. So, it's always a good idea to review each recommendation carefully and consider its potential impact on your security posture before deciding how to address it.
+ :::image type="content" source="media/review-security-recommendations/group-by-title.png" alt-text="Screenshot of the recommendations page that shows where the group by title toggle is located on the screen." lightbox="media/review-security-recommendations/group-by-title.png":::
## Manage recommendations assigned to you
In this example, this recommendation details page shows 15 affected resources:
When you open the underlying query and run it, Azure Resource Graph Explorer returns the same affected resources for this recommendation.
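The exact query text varies per recommendation, but its rough shape is sketched below; the GUID is a hypothetical placeholder for the recommendation's assessment key, which you copy from the query that the portal opens.

```kusto
// Rough shape of a recommendation's underlying query (sketch).
// The GUID below is a placeholder; use the assessment key from the portal.
securityresources
| where type =~ "microsoft.security/assessments"
| where name == "00000000-0000-0000-0000-000000000000"
| where properties.status.code == "Unhealthy"
| project resourceId = tostring(properties.resourceDetails.Id),
    statusDescription = tostring(properties.status.description)
```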
-## Next steps
+## Next step
-[Remediate security recommendations](implement-security-recommendations.md)
+> [!div class="nextstepaction"]
+> [Remediate security recommendations](implement-security-recommendations.md)
defender-for-cloud Risk Prioritization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/risk-prioritization.md
Defender for Cloud then analyzes which security issues are part of potential att
Microsoft Defender for Cloud's resources and workloads are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues, and improve security posture.
+> [!NOTE]
+> Recommendations are included with the [Foundational CSPM plan](concept-cloud-security-posture-management.md#plan-availability) which is included with Defender for Cloud. However, risk prioritization and governance is supported only with the [Defender CSPM plan](concept-cloud-security-posture-management.md#plan-availability).
+>
+> If your environment isn't protected by the Defender CSPM plan, the columns with the risk prioritization features appear blurred out.
+ Different resources can have the same recommendation with different risk levels. For example, a recommendation to enable MFA on a user account can have a different risk level for different users. The risk level is determined by the risk factors of each resource, such as its configuration, network connections, and security posture. The risk level is calculated based on the potential impact of the security issue being breached, the categories of risk, and the attack path that the security issue is part of. In Defender for Cloud, navigate to the **Recommendations** dashboard to view an overview of the recommendations that exist for your environments, prioritized by risk.
On this page you can review the:
- **Risk factors** - Environmental factors of the resource affected by the recommendation, which influence the exploitability and the business impact of the underlying security issue. Examples for risk factors include internet exposure, sensitive data, lateral movement potential. -- **Attack paths** - The number of attack paths that the recommendation is part of based on the security engine's search for all potential attack paths based on the resources that exist in the environment and relationship that exists between them. Each environment will present it's own unique attack paths.
+- **Attack paths** - The number of attack paths that the recommendation is part of, based on the security engine's search for all potential attack paths across the resources that exist in the environment and the relationships between them. Each environment presents its own unique attack paths.
- **Owner** - The person the recommendation is assigned to.
On this page you can review the:
 - **Insights** - Information related to the recommendation, such as whether it's in preview, whether it can be denied, whether a fix option is available, and more.
- :::image type="content" source="media/risk-prioritization/recommendations-dashboard.png" alt-text="Screenshot of teh recommendations dashboard which shows recommendations prioritized by their risk." lightbox="media/risk-prioritization/recommendations-dashboard.png":::
+ :::image type="content" source="media/risk-prioritization/recommendations-dashboard.png" alt-text="Screenshot of the recommendations dashboard which shows recommendations prioritized by their risk." lightbox="media/risk-prioritization/recommendations-dashboard.png":::
When you select a recommendation, you can view the details of the recommendation, including the description, attack paths, scope, freshness, last change date, owner, due date, severity, tactics & techniques, and more.
The risk level is determined by a context-aware risk-prioritization engine that
- [Review security recommendations](review-security-recommendations.md) - [Remediate security recommendations](implement-security-recommendations.md) - [Drive remediation with governance rules](governance-rules.md)-- [Automate remediation responses](workflow-automation.md)
+- [Automate remediation responses](workflow-automation.md)
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
The secure score in Microsoft Defender for Cloud can help you to improve your cl
When you turn on Defender for Cloud in a subscription, the [Microsoft cloud security benchmark (MCSB)](/security/benchmark/azure/introduction) standard is applied by default in the subscription. Assessment of resources in scope against the MCSB standard begins.
-The MCSB issues recommendations based on assessment findings. Only built-in recommendations from the MCSB affect the secure score. Currently, [risk prioritization](how-to-manage-attack-path.md#features-of-the-attack-path-overview-page) doesn't affect the secure score.
+The MCSB issues recommendations based on assessment findings. Only built-in recommendations from the MCSB affect the secure score. Currently, [risk prioritization](risk-prioritization.md) doesn't affect the secure score.
> [!NOTE] > Recommendations flagged as **Preview** aren't included in secure score calculations. You should still remediate these recommendations wherever possible, so that when the preview period ends, they'll contribute toward your score. Preview recommendations are marked with an icon: :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false":::.
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
Each recommendation provides the following information:
Every recommendation in Defender for Cloud has an associated risk level that represents how exploitable and impactful the security issue is in your environment. The risk assessment engine takes into account factors such as internet exposure, sensitivity of data, lateral movement possibilities, and attack path remediation. You can prioritize recommendations based on their risk levels.
-> [!NOTE]
-> Currently, [risk prioritization](how-to-manage-attack-path.md#features-of-the-attack-path-overview-page) is in public preview and doesn't affect the secure score.
+> [!IMPORTANT]
+> [Risk prioritization](risk-prioritization.md) doesn't affect the secure score.
### Example
defender-for-cloud Subassessment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/subassessment-rest-api.md
- Title: Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments description: Learn about container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
defender-for-cloud Support Matrix Cloud Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md
Title: Support across Azure clouds
-description: Review Defender for Cloud features and plans supported across different clouds.
++
+description: This article provides an overview of the supported features and plans for Defender for Cloud in Azure commercial cloud and government clouds.
Last updated 03/10/2024
This article indicates which Defender for Cloud features are supported in Azure
In the support table, **NA** indicates that the feature isn't available.

|**Feature/Plan** | **Azure** | **Azure Government** | **Microsoft Azure operated by 21Vianet**|
|--|--|--|--|
|**GENERAL FEATURES** | | ||
In the support table, **NA** indicates that the feature isn't available.
|[DevOps security posture](concept-devops-environment-posture-management-overview.md) | Preview | NA | NA|
| **DEFENDER CSPM FEATURES** | | | |
| [Data security dashboard](data-aware-security-dashboard-overview.md) | GA | NA | NA |
+| [Attack path](concept-attack-path.md) | GA | NA | NA |
|**DEFENDER FOR CLOUD PLANS** | | ||
|[Defender CSPM](concept-cloud-security-posture-management.md)| GA | NA | NA|
|[Defender for APIs](defender-for-apis-introduction.md) | GA | NA | NA|
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
Following are the features for each of the domains in Defender for Containers:
| Aspect | Details |
|--|--|
| Registries and images | **Supported**<br> • ACR registries <br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Container images in Docker V2 format <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> **Unsupported**<br> • Super-minimalist images, such as [Docker scratch](https://hub.docker.com/_/scratch/) images, are currently unsupported <br> |
-| Operating systems | **Supported** <br> ΓÇó Alpine Linux 3.12-3.16 <br> ΓÇó Red Hat Enterprise Linux 6-9 <br> ΓÇó CentOS 6-9<br> ΓÇó Oracle Linux 6-9 <br> ΓÇó Amazon Linux 1, 2 <br> ΓÇó openSUSE Leap, openSUSE Tumbleweed <br> ΓÇó SUSE Enterprise Linux 11-15 <br> ΓÇó Debian GNU/Linux 7-12 <br> ΓÇó Google Distroless (based on Debian GNU/Linux 7-12) <br> ΓÇó Ubuntu 12.04-22.04 <br> ΓÇó Fedora 31-37<br> ΓÇó Mariner 1-2<br> ΓÇó Windows Server 2016, 2019, 2022|
+| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.19 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12) <br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows Server 2016, 2019, 2022|
| Language-specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
defender-for-cloud Transition To Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/transition-to-defender-vulnerability-management.md
securityresources
| where type =~ "microsoft.security/assessments/subassessments"
| extend assessmentKey = extract(@"(?i)providers/Microsoft.Security/assessments/([^/]*)", 1, id)
-| where assessmentKey == "c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5"
+| where assessmentKey == "c0b7cfc6-3172-465a-b378-53c7ff2cc0d5"
| extend azureClusterId = tostring(properties.additionalData.clusterDetails.clusterResourceId)
| extend cve = tostring(properties.id)
| extend status = properties.status.code
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan. Previously updated : 04/01/2024 Last updated : 04/03/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change |
|--|--|--|
+| [Deprecation of encryption recommendation](#deprecation-of-encryption-recommendation) | April 3, 2024 | May 2024 |
| [Deprecation of virtual machine recommendation](#deprecation-of-virtual-machine-recommendation) | April 2, 2024 | April 30, 2024 |
| [General Availability of Unified Disk Encryption recommendations](#general-availability-of-unified-disk-encryption-recommendations) | March 28, 2024 | April 30, 2024 |
-| [Defender for open-source relational databases updates](#defender-for-open-source-relational-databases-updates) | March 6, 2024 | April, 2024 |
| [Changes in where you access Compliance offerings and Microsoft Actions](#changes-in-where-you-access-compliance-offerings-and-microsoft-actions) | March 3, 2024 | September 30, 2025 |
| [Microsoft Security Code Analysis (MSCA) is no longer operational](#microsoft-security-code-analysis-msca-is-no-longer-operational) | February 26, 2024 | February 26, 2024 |
| [Decommissioning of Microsoft.SecurityDevOps resource provider](#decommissioning-of-microsoftsecuritydevops-resource-provider) | February 5, 2024 | March 6, 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 |
| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Deprecation of encryption recommendation
+
+**Announcement date: April 3, 2024**
+
+**Estimated date for change: May 2024**
+
+The recommendation [Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d57a4221-a804-52ca-3dea-768284f06bb7) is set to be deprecated.
## Deprecation of virtual machine recommendation

**Announcement date: April 2, 2024**
The recommendations depend on [Guest Configuration](/azure/governance/machine-co
These recommendations will replace the recommendation "Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources."
-## Defender for open-source relational databases updates
-
-**Announcement date: March 6, 2024**
-
-**Estimated date for change: April, 2024**
-
-**Defender for PostgreSQL Flexible Servers post-GA updates** - The update enables customers to enforce protection for existing PostgreSQL flexible servers at the subscription level, allowing complete flexibility to enable protection on a per-resource basis or for automatic protection of all resources at the subscription level.
-
-**Defender for MySQL Flexible Servers Availability and GA** - Defender for Cloud is set to expand its support for Azure open-source relational databases by incorporating MySQL Flexible Servers.
-This release will include:
--- Alert compatibility with existing alerts for Defender for MySQL Single Servers.-- Enablement of individual resources.-- Enablement at the subscription level.-
-If you're already protecting your subscription with Defender for open-source relational databases, your flexible server resources are automatically enabled, protected, and billed.
-Specific billing notifications have been sent via email for affected subscriptions.
-
-Learn more about [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md).
## Changes in where you access Compliance offerings and Microsoft Actions

**Announcement date: March 3, 2024**
defender-for-cloud View And Remediate Vulnerabilities Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-containers.md
+
+ Title: Assess vulnerabilities for containers running on your Kubernetes clusters
+description: Learn how to view and remediate runtime threat findings for containers running on your Kubernetes clusters.
+++ Last updated : 09/06/2023++
+# View and remediate vulnerabilities for containers running on your Kubernetes clusters (Risk based)
+
+> [!NOTE]
+> This page describes the new risk-based approach to vulnerability management in Defender for Cloud. Defender CSPM customers should use this method. To use the classic secure score approach, see [View and remediate vulnerabilities for images running on your Kubernetes clusters (Secure Score)](view-and-remediate-vulnerabilities-for-images-secure-score.md).
+
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in containers running on your Kubernetes clusters based on contextual risk analysis of the vulnerabilities in your cloud environment. In this article, we review the [Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0) recommendation. For the other clouds, see the parallel recommendations in [Vulnerability assessments for AWS with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md) and [Vulnerability assessments for GCP with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-gcp.md).
+
+To provide findings for the recommendation, Defender for Cloud uses [agentless discovery for Kubernetes](defender-for-containers-introduction.md) or the [Defender sensor](tutorial-enable-containers-azure.md#deploy-the-defender-sensor-in-azure) to create a full inventory of your Kubernetes clusters and their workloads, and correlates that inventory with the vulnerability reports created for your registry images. The recommendation shows your running containers with the vulnerabilities associated with the images that each container uses, along with remediation steps.
+
+Defender for Cloud presents the findings and related information as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
+
+## View vulnerabilities for a container
+
+**To view vulnerabilities for a container, do the following:**
+
+1. In Defender for Cloud, open the **Recommendations** page. If you're not on the new risk-based page, select **Recommendations by risk** on the top menu. If issues were found, you'll see the recommendation [Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png":::
+
+1. The recommendation details page opens with additional information. This information includes details about your vulnerable container and the remediation steps.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-cluster.png" alt-text="Screenshot showing the affected clusters for the recommendation." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-cluster.png":::
+
+1. Select the **Findings** tab to see the list of vulnerabilities impacting the container.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-container.png" alt-text="Screenshot showing the findings tab containing the vulnerabilities." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-container.png":::
+
+1. Select each vulnerability for a detailed description of the vulnerability, additional containers affected by that vulnerability, information on the software version that contributes to resolving the vulnerability, and links to external resources to help with patching the vulnerability.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-list-vulnerabilities.png" alt-text="Screenshot showing the container vulnerabilities." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-list-vulnerabilities.png":::
+
+To find all containers impacted by a specific vulnerability, group recommendations by title. For more information, see [Group recommendations by title](review-security-recommendations.md#group-recommendations-by-title).
+
+For information on how to remediate the vulnerabilities, see [Remediate recommendations](implement-security-recommendations.md).
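If you prefer to pull the same findings programmatically, the following Azure Resource Graph sketch mirrors the subassessment query pattern used in the Defender Vulnerability Management transition guidance. It assumes findings for this recommendation are exposed as subassessments under the assessment key shown in the recommendation URL above; the `status.severity` field is also an assumption to verify in your results.

```kusto
// Sketch: unhealthy container vulnerability findings per cluster.
securityresources
| where type =~ "microsoft.security/assessments/subassessments"
| extend assessmentKey = extract(@"(?i)providers/Microsoft.Security/assessments/([^/]*)", 1, id)
| where assessmentKey == "e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0"
| where properties.status.code == "Unhealthy"
| project cve = tostring(properties.id),
    severity = tostring(properties.status.severity),
    cluster = tostring(properties.additionalData.clusterDetails.clusterResourceId)
```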
+
+## Next step
+
+- Learn how to [view and remediate vulnerabilities for registry images](view-and-remediate-vulnerability-assessment-findings.md).
defender-for-cloud View And Remediate Vulnerabilities For Images Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-for-images-secure-score.md
+
+ Title: Assess vulnerabilities for images running on your Kubernetes clusters (Secure Score)
+description: Learn how to view and remediate vulnerabilities for images running on your Kubernetes clusters (Secure Score).
+++ Last updated : 09/06/2023++
+# View and remediate vulnerabilities for images running on your Kubernetes clusters (Secure Score)
+
+> [!NOTE]
+> This page describes the classic secure score approach to vulnerability management in Defender for Cloud. Customers using Defender CSPM should use the new risk-based approach: [View and remediate vulnerabilities for images running on your Kubernetes clusters (Risk based)](view-and-remediate-vulnerabilities-for-images.md).
+
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c) recommendation.
+
+To provide findings for the recommendation, Defender for Cloud uses [agentless discovery for Kubernetes](defender-for-containers-introduction.md) or the [Defender sensor](tutorial-enable-containers-azure.md#deploy-the-defender-sensor-in-azure) to create a full inventory of your Kubernetes clusters and their workloads, and correlates that inventory with the vulnerability reports created for your registry images. The recommendation shows your running containers with the vulnerabilities associated with the images that each container uses, along with remediation steps.
+
+Defender for Cloud presents the findings and related information as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
+
+Within each recommendation, resources are grouped into tabs:
+
+- **Healthy resources** – relevant resources, which either aren't impacted or on which you've already remediated the issue.
+- **Unhealthy resources** – resources that are still impacted by the identified issue.
+- **Not applicable resources** – resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
+
+## View vulnerabilities on a specific cluster
+
+**To view vulnerabilities for a specific cluster, do the following:**
+
+1. Open the **Recommendations** page. If you're on the new risk-based page, select **Switch to classic view** from the menu at the top of the page. Use the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-image-recommendation-line.png":::
+
+1. The recommendation details page opens, showing the list of Kubernetes clusters ("affected resources") and categorizing them as healthy, unhealthy, and not applicable, based on the images used by your workloads. Select the relevant cluster for which you want to remediate vulnerabilities.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-select-cluster.png" alt-text="Screenshot showing the affected clusters for the recommendation." lightbox="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-select-cluster.png":::
+
+1. The cluster details page opens. It lists all currently running containers categorized into three tabs based on the vulnerability assessments of the images used by those containers. Select the specific container you want to explore.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-select-container.png" alt-text="Screenshot showing where to select a specific container." lightbox="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-select-container.png":::
+
+1. This pane includes a list of the container vulnerabilities. Select each vulnerability to [resolve the vulnerability](#remediate-vulnerabilities).
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-list-vulnerabilities.png" alt-text="Screenshot showing the list of container vulnerabilities." lightbox="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-list-vulnerabilities.png":::
+
+## View container images affected by a specific vulnerability
+
+**To view findings for a specific vulnerability, do the following:**
+
+1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-image-recommendation-line.png":::
+
+1. The recommendation details page opens with additional information. This information includes the list of vulnerabilities impacting the clusters. Select the specific vulnerability.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-select-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the container clusters." lightbox="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-select-vulnerability.png":::
+
+1. The vulnerability details pane opens. This pane includes a detailed description of the vulnerability, images affected by that vulnerability, and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to [resolving the vulnerability](#remediate-vulnerabilities).
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-containers-affected.png" alt-text="Screenshot showing the list of container images impacted by the vulnerability." lightbox="media/view-and-remediate-vulnerabilities-for-images-secure-score/running-containers-affected.png":::
+
+## Remediate vulnerabilities
+
+Use these steps to remediate each of the affected images found either in a specific cluster or for a specific vulnerability:
+
+1. Follow the steps in the remediation section of the recommendation pane.
+1. When you've completed the steps required to remediate the security issue, replace each affected image in your cluster, or replace each affected image for a specific vulnerability:
+ 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
+ 1. Push the updated image and delete the old image. It might take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
+ 1. Use the new image across all vulnerable workloads.
+1. Check the recommendations page for the recommendation [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c).
+1. If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
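Rather than rechecking the portal image by image, you can spot-check remaining findings with Azure Resource Graph. This sketch reuses the documented subassessment pattern for this recommendation's assessment key; treat it as an approximation, not the portal's exact query.

```kusto
// Sketch: remaining unhealthy findings per CVE for running images.
securityresources
| where type =~ "microsoft.security/assessments/subassessments"
| extend assessmentKey = extract(@"(?i)providers/Microsoft.Security/assessments/([^/]*)", 1, id)
| where assessmentKey == "c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5"
| where properties.status.code == "Unhealthy"
| summarize findings = count() by cve = tostring(properties.id)
| order by findings desc
```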
+
+## Next steps
+
+- Learn how to [view and remediate vulnerabilities for registry images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud View And Remediate Vulnerability Assessment Findings Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerability-assessment-findings-secure-score.md
+
+ Title: How-to view and remediate vulnerability assessment findings for registry images (Secure Score).
+description: Learn how to view and remediate vulnerability assessment findings for registry images (Secure Score).
++ Last updated : 07/11/2023++
+# View and remediate vulnerabilities for registry images (Secure Score)
+
+> [!NOTE]
+> This page describes the classic secure score approach to vulnerability management in Defender for Cloud. Customers using Defender CSPM should use the new risk-based approach: [View and remediate vulnerabilities for images running on your Kubernetes clusters (Risk based)](view-and-remediate-vulnerabilities-for-images.md).
++
+Defender for Cloud gives its customers the ability to remediate vulnerabilities in container images while still stored in the registry by using the [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) recommendation.
+
+Within the recommendation, resources are grouped into tabs:
+
+- **Healthy resources** – relevant resources, which either aren't impacted or on which you've already remediated the issue.
+- **Unhealthy resources** – resources that are still impacted by the identified issue.
+- **Not applicable resources** – resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
+
+## View vulnerabilities on a specific container registry
+
+1. Open the **Recommendations** page. If you're on the new risk-based page, select **Switch to classic view** from the menu at the top of the page. Use the **>** arrow to open the sublevels. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/open-recommendations-page.png":::
+
+1. The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("affected resources") and the remediation steps. Select the affected registry.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-registry.png" alt-text="Screenshot showing the recommendation details and affected registries." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-registry.png":::
+
+1. This opens the registry details with a list of repositories in it that have vulnerable images. Select the affected repository to see the images in it that are vulnerable.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-repo.png" alt-text="Screenshot showing where to select the specific repository." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-repo.png":::
+
+1. The repository details page opens. It lists all vulnerable images in that repository, with the distribution of vulnerability severities per image. Select the unhealthy image to see the vulnerabilities.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-unhealthy-image.png" alt-text="Screenshot showing where to select the unhealthy image." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-unhealthy-image.png":::
+
+1. The list of vulnerabilities for the selected image opens. To learn more about a finding, select the finding.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-image-finding.png" alt-text="Screenshot showing the list of findings on the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-image-finding.png":::
+
+1. The vulnerabilities details pane opens. This pane includes a detailed description of the issue and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to resolving the vulnerability.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/image-details.png" alt-text="Screenshot showing the details of the finding on the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/image-details.png":::
+
+## View images affected by a specific vulnerability
+
+1. Open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/open-recommendations-page.png":::
+
+1. The recommendation details page opens with additional information. This information includes the list of vulnerabilities impacting the images. Select the specific vulnerability.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-specific-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the images." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/select-specific-vulnerability.png":::
+
+1. The vulnerability finding details pane opens. This pane includes a detailed description of the vulnerability, images affected by that vulnerability, and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to [resolving the vulnerability](#remediate-vulnerabilities).
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings-secure-score/specific-vulnerability-details.png" alt-text="Screenshot showing the list of images impacted by the vulnerability." lightbox="media/view-and-remediate-vulnerability-assessment-findings-secure-score/specific-vulnerability-details.png":::
+
+## Remediate vulnerabilities
+
+Use these steps to remediate each of the affected images found either in a specific cluster or for a specific vulnerability:
+
+1. Follow the steps in the remediation section of the recommendation pane.
+1. When you've completed the steps required to remediate the security issue, replace each affected image in your registry or replace each affected image for a specific vulnerability:
+ 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
+ 1. Push the updated image to trigger a scan and delete the old image. It might take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
+
+1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
+If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+
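As a concrete illustration of the rebuild-and-push steps above, a minimal sketch; the registry name `myregistry`, image name `myapp`, and tags are placeholders, not values from this article:

```bash
# Rebuild the image with the patched packages, per the remediation details.
docker build -t myregistry.azurecr.io/myapp:patched .

# Sign in to the registry and push; the push triggers a new scan.
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:patched

# Delete the old, vulnerable tag from the registry.
az acr repository delete --name myregistry --image myapp:vulnerable --yes
```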
+## Next steps
+
+- Learn how to [view and remediate vulnerabilities for images running on Kubernetes clusters](view-and-remediate-vulnerabilities-for-images.md).
+- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud View And Remediate Vulnerability Registry Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerability-registry-images.md
+
+ Title: How-to view and remediate vulnerability assessment findings for registry images
+description: Learn how to view and remediate vulnerability assessment findings for registry images.
++ Last updated : 07/11/2023++
+# View and remediate vulnerabilities for registry images (Risk based)
+
+> [!NOTE]
+> This page describes the new risk-based approach to vulnerability management in Defender for Cloud. Defender CSPM customers should use this method. To use the classic secure score approach, see [View and remediate vulnerabilities for registry images (Secure Score)](view-and-remediate-vulnerability-assessment-findings-secure-score.md).
+
+Defender for Cloud offers customers the capability to remediate vulnerabilities in container images while they're still stored in the registry. Additionally, it conducts contextual analysis of the vulnerabilities in your environment, aiding in prioritizing remediation efforts based on the risk level associated with each vulnerability.
+
+In this article, we review the [Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9) recommendation. For the other clouds, see the parallel recommendations in [Vulnerability assessments for AWS with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md) and [Vulnerability assessments for GCP with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-gcp.md).
+
+## View vulnerabilities on a specific container image
+
+1. In Defender for Cloud, open the **Recommendations** page. If you're not on the new risk-based page, select **Recommendations by risk** on the top menu. If issues were found, you'll see the recommendation [Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png":::
+
+1. The recommendation details page opens with additional information. This information includes details about your registry image and the remediation steps.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-registry.png" alt-text="Screenshot showing the recommendation details and affected registries." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-registry.png":::
+
+1. Select the **Findings** tab to see the list of vulnerabilities impacting the registry image.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png" alt-text="Screenshot showing the list of vulnerabilities impacting the registry image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png":::
+
+1. Select each vulnerability for a detailed description of the vulnerability, additional images affected by that vulnerability, information on the software version that contributes to resolving the vulnerability, and links to external resources to help with patching the vulnerability.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-image-finding.png" alt-text="Screenshot showing the list of findings on the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-image-finding.png":::
+
+To find all images impacted by a specific vulnerability, group recommendations by title. For more information, see [Group recommendations by title](review-security-recommendations.md#group-recommendations-by-title).
+
+For information on how to remediate the vulnerabilities, see [Remediate recommendations](implement-security-recommendations.md).
+
+## Next step
+
+- Learn how to [view and remediate vulnerabilities for images running on Kubernetes clusters](view-and-remediate-vulnerabilities-for-images.md).
education-hub Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/faq.md
+
+ Title: Frequently Asked Questions
+description: List of Frequently Asked Questions for Azure for Education
++++ Last updated : 4/2/2024+++
+# Azure for Education Frequently Asked Questions
+
+Discover all you need to know about Azure for Education in our FAQ section. Find answers to common queries on eligibility, benefits, and usage to optimize your experience.
+
+## Azure for Students
+
+### What happens after I use my $100 credit or I'm at the end of 12 months?
+If you exhaust your available credit before 12 months and you want to continue to use Azure, you can upgrade to a [pay-as-you-go subscription](../cost-management-billing/manage/upgrade-azure-subscription.md) in the Azure portal. If you don't upgrade, your Azure subscription becomes disabled. If you're at the end of 12 months, you can renew your subscription by signing up again for the offer. For detailed terms of use for Azure for Students, visit the [offer terms](https://azure.microsoft.com/offers/ms-azr-0170p/).
+
+### Who is eligible for Azure for Students?
+Azure for Students is available only to students who meet the following requirements:
+
+- You must affirm that you attend an accredited, degree-granting, two-year or four-year educational institution where you're a full-time student.
+- You must verify your academic status through your organization's email address.
+- See the [Azure for Students Offer](https://azure.microsoft.com/offers/ms-azr-0170p/) for detailed terms of use.
+
+This offer isn't available for use in a massive open online course (MOOC) or in other professional trainings from for-profit organizations.
+
+This offer is limited to one Azure for Student subscription per eligible customer. It's nontransferable and can't be combined with any other offer, unless otherwise permitted by Microsoft.
+
+### What products are included in Azure for Students?
+- Agents for Visual Studio
+- Azure DevOps Server
+- Datazen Enterprise Server
+- Remote Tools for Visual Studio
+- Machine Learning Server
+- Microsoft R Client
+- Microsoft R Server
+- Microsoft Hyper-V
+- Skype for Business Server
+- SQL Server Developer
+- SQL Server Standard
+- System Center
+- Visio Professional
+- Visual Studio Code
+- Visual Studio Community
+- Visual Studio for Mac
+- Windows Server
+
+### Can I deploy Windows 10 and Windows 11 VMs with my Azure for Students subscription?
+Yes, as a benefit of your Azure for Students subscription, you may use Windows 10 and 11 virtual machines without the need for a Windows 11 Enterprise license.
+
+### Can I get Azure for Students again next year?
+Yes! You can renew your Azure for Students subscription after one year. We send you emails reminding you to renew just before your anniversary. To renew, sign up again for the offer on the Azure for Students [website](https://aka.ms/azure4students).
+
+### Why did I receive an invoice from Microsoft?
+You may receive an invoice from Microsoft detailing the usage you incurred in the previous month while on Azure for Students. Don't worry, you don't have to pay for that usage; it's all covered by the credit provided by Azure for Students. To learn more about invoices and how they work, check out the [article](../cost-management-billing/understand/mca-overview.md) on Microsoft technical documentation.
+
+### What are subscriptions, and how do they relate to Azure for Students?
+Subscriptions provide access to Azure services. Azure for Students gives you $100 credit for 12 months and includes access to more than 25 free services, including compute, network, storage, and databases. Any charges incurred during this period are deducted from the credit. To continue using Azure services after you've exhausted your $100 credit, you must either renew (if you're 12 months in) or upgrade to a pay-as-you-go subscription.
+
+### What happens with my Azure services if I don't upgrade?
+If you decide not to upgrade at the end of 12 months or after you've exhausted your $100 credit, whichever occurs first, any products you've deployed are decommissioned and you won't be able to access them. You have 90 days from the end of your free subscription to upgrade to a pay-as-you-go subscription.
+
+### How do I know how much of the $100 credit I have left?
+You can see your remaining credit on the [Azure Sponsorships portal](https://www.microsoftazuresponsorships.com/).
+
+### How do I download the software developer tools?
+Your Azure for Students subscription provides you with access to certain software developer tools. You must have a current, active Azure for Students subscription to access and download the software developer tools. Go to the [Education Hub](https://portal.azure.com/#blade/Microsoft_Azure_Education/EducationMenuBlade/software) to download the software developer tools by using your Azure for Students subscription.
+
+### What is Microsoft Learn training?
+[Microsoft Learn training](/training/) is a free online learning platform that allows you to learn Azure technologies at your own pace. Learning paths combine modules that allow you to start with the basics, then move to advanced methods that address real-world challenges.
+
+### Can Azure for Students be used for production or only for development?
+Azure for Students provides access to all Azure products that are expressly intended to support education or teaching, non-commercial research, or efforts to design, develop, test, and demonstrate software applications for these purposes.
+
+### Can I apply any of my $100 credit toward Azure Marketplace offers?
+No. You can't apply your credit to Azure Marketplace offers. However, many Azure Marketplace partners offer free trials or free tier plans for their solutions.
+
+## Azure for Students Starter
+
+### What is Azure for Students Starter?
+Azure for Students Starter gets you started with the Azure products you need to develop in the cloud. There's no cost to you. This benefit provides you access to a free tier of the following services:
+
+- Azure App Service
+- Azure Functions
+- Azure Notification Hubs
+- Azure Database for MySQL
+- Application Insights
+- Azure DevOps Services (formerly Visual Studio Team Services)
+
+Azure for Students Starter is available to verified students at no cost and without commitment or time limit. See the [Azure for Students Starter Offer](https://azure.microsoft.com/offers/ms-azr-0144p/) for detailed terms of use.
+
+### Who is eligible for Azure for Students Starter?
+Azure for Students Starter is available only to students who meet the following requirements:
+
+- You must affirm that you're age 13 or older if you reside in the United States.
+- You must affirm that you're age 16 or older if you reside in a country/region other than the United States.
+- You must verify your academic status through your organization's email address. You can also use Shibboleth if it's supported by your organization.
+
+This offer isn't available for use in a massive open online course (MOOC) or in other professional trainings from for-profit organizations.
+
+This offer is limited to one Azure for Students Starter subscription per eligible customer. It's non-transferable and can't be combined with any other offer, unless otherwise permitted by Microsoft.
+
+### Will I have to pay something at any point in time?
+A credit card isn't required for the Azure for Students Starter offer. The offer gives you access to a limited set of Azure services. But, at any point in time, you may upgrade to a pay-as-you-go subscription to get access to all Azure services.
+
+### How do I download the software developer tools?
+Your Azure for Students subscription provides you with access to certain software developer tools that are available to download for free.
+
+You must have a current, active Azure for Students subscription in order to access the software developer tools.
+
+You can download this software in the [Education Hub](https://portal.azure.com/#blade/Microsoft_Azure_Education/EducationMenuBlade/software).
+
+### What is Microsoft Learn training?
+[Microsoft Learn training](/training/) is a free online learning platform that helps you learn Azure technologies at your own pace. Learning paths combine modules to allow you to start with the basics, and then move to advanced methods that address real-world challenges.
+
+## Azure Academic Grant
+
+### How do I start using my Azure course credits?
+You can access your Azure course credits by creating a new Microsoft Azure Academic Grant subscription. Select the **Activate** button in the sponsorship approval email.
+
+You can also convert an existing subscription to the Microsoft Azure Sponsorship Offer to access your credits. Details on how to convert your subscription are in the next question.
+
+### Can I associate my course credits with an existing subscription?
+You can associate your course credits with an existing subscription on the account that is entitled to the offer. Contact [Azure Support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) to associate your course credits.
+
+### Why do I see a $0 balance in the Azure portal?
+When you go to your subscription details in the Azure portal, you see $0.00 because the offer applies a 100% discount to all services. The portal shows what you would be charged during your monthly usage period, which should be $0.00.
+
+To view your balance and sponsorship information, go to [Azure Sponsorships](https://www.microsoftazuresponsorships.com/balance) and sign in to your account.
+
+### Can I apply my course credits to an existing Enterprise Agreement (EA)?
+You can't associate your course credits offer to any subscription that's on an account under an Enterprise Agreement (EA).
+
+To apply course credits, you must create a new account that's outside of the EA, which we can then entitle.
+
+Once the Sponsorship period ends, you can associate that subscription back into the EA.
+
+> [!WARNING]
+> If you associate the account to your EA prior to the end of the sponsorship, all sponsorship funds will be terminated. Refer to the terms and conditions of the [Azure Sponsorship Offer](https://azure.microsoft.com/offers/ms-azr-0143p/) for more information.
+
+### Can I apply my course credits to my CSP subscription?
+No, you cannot associate your course credits offer with any Cloud Solution Provider (CSP) subscription.
+
+### Can I move my course credits to another account?
+Yes, you can move a course credits entitlement to another account. To do so, contact [Azure Support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+
+### Are Azure Marketplace applications covered by course credits?
+No, third party applications aren't covered by course credits. They'll be charged to the credit card on the account. For more information:
+- View marketplace services at the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/).
+- Refer to the [Azure Sponsorship offer](https://azure.microsoft.com/offers/ms-azr-0143p/) terms and conditions.
+
+### I have a previous balance due, can I pay it off with my course credits?
+Azure course credits only cover usage from the time you activate the Azure Sponsorship offer. You're responsible for all charges prior to your offer start date.
+
+### How do I know if my subscription is on the Azure course credit offer?
+If you look at a specific subscription on the Subscriptions blade in the Azure portal, you see **Offer Name** as one of the properties. The offer name is **Azure Sponsorship** if the subscription is connected to your course credits. If **Azure Sponsorship** isn't displayed, contact support to get the subscription converted.
+
+## Azure Dev Tools for Teaching
+
+### Who is eligible to purchase Microsoft Azure Dev Tools for Teaching?
+Only academic institutions that have purchased a Volume Licensing (VL) agreement with Microsoft can enroll in Azure Dev Tools for Teaching. If you're currently a Dev Tools for Teaching customer without a VL agreement, you can continue to renew your subscription. For more information on VL agreements for academic institutions, visit https://aka.ms/ees
+
+### What products are included in the Azure Dev Tools for Teaching subscription?
+ :::column span="":::
+ Access<br>
+ Agents for Visual Studio<br>
+ Azure DevOps Server<br>
+ Datazen Enterprise Server<br>
+ Machine Learning Server <br>
+ Microsoft R Client<br>
+ Microsoft R Server<br>
+ Microsoft Hyper-V<br>
+ Project Professional<br>
+ Remote Tools for Visual Studio<br>
+ SharePoint Server<br>
+ Skype for Business Server<br>
+ :::column-end:::
+ :::column span="":::
+ SQL Server Developer<br>
+ SQL Server Standard<br>
+ System Center<br>
+ Visio Professional<br>
+ Visual Studio Code<br>
+ Visual Studio Community<br>
+ Visual Studio Enterprise<br>
+ Visual Studio for Mac<br>
+ Windows 10<br>
+ Windows 11 Education<br>
+ Windows Server<br>
+ :::column-end:::
+
+### How do I download software?
+Your Microsoft Azure Dev Tools for Teaching subscription provides you with access to certain software developer tools. These tools are available to download for free.
+
+You can download this software in the [Education Hub Software section](https://portal.azure.com/#blade/Microsoft_Azure_Education/EducationMenuBlade/software).
+
+### How do we distribute software to our students?
+As an Azure Dev Tools for Teaching subscriber, your school or institution gets access to our Education Hub Store. Your students access their cloud services in the [Education Hub Store](https://azureforeducation.microsoft.com/devtools), which is in the [Azure portal](https://portal.azure.com/).
+
+Students sign in to the Azure portal with their school (or Azure Dev Tools for Teaching) credentials. Then, students open the Education Hub Store and access the available software downloads.
+
+### Is Azure Dev Tools for Teaching available internationally?
+Yes, it is available in the more than 140 countries/regions where Azure is commercially available.
+
+### Which languages are available in the software to end-users?
+The Education Hub Store is available in the following languages: Arabic, Chinese Simplified, Chinese Traditional, Danish, Dutch, English, French, German, Hebrew, Italian, Japanese, Korean, Portuguese, Russian, Spanish, Swedish, and Turkish.
+
+### If our students download software through the Azure Dev Tools for Teaching program, do they get unlimited use of the software?
+Yes. Students receive unlimited software usage to further their learning and research efforts.
+
+### If our students create viable apps and products using Azure Dev Tools for Teaching software, can they sell them commercially?
+In general, no. Students cannot sell apps and products made by using Azure Dev Tools for Teaching. However, thanks to a partnership between Azure Dev Tools for Teaching and Microsoft App Store teams, students can create games and applications to sell in the Windows Store.
+
+### Do I have unlimited use of the software through the Azure Dev Tools for Teaching program?
+Yes. If a faculty member is enrolled in an approved course, they are eligible to install Azure Dev Tools for Teaching software onto their personal computer for non-commercial use.
+
+### How do I access my Visual Studio Enterprise benefit?
+As an administrator of the Azure Dev Tools for Teaching subscription, you can access your Visual Studio Enterprise subscription by requesting access through the [Azure Dev Tools for Teaching Management portal](https://azureforeducation.microsoft.com/account/Subscriptions). Once approved you'll be able to sign in to the [Visual Studio portal](https://my.visualstudio.com/) and redeem more benefits.
+
+### Does Microsoft Azure Dev Tools for Teaching include Microsoft Office?
+No. The focus of Microsoft Azure Dev Tools for Teaching is to provide departments, faculty and students with the tools necessary to specifically expand their study of software development and testing. Therefore, we provide technologies such as Windows Server, Visual Studio .NET, SQL Server and Platform SDK.
+
+### Does Azure Dev Tools for Teaching include Azure Credit?
+No, your Microsoft Azure Dev Tools for Teaching subscription doesn't include Azure credit. But you can sign up for Azure for Students, which gives you $100 USD worth of Azure credit to use to pay for Azure services. Go to [Start building the future with Azure for Students!](https://aka.ms/student) for more information.
+
+### Do students need an Office 365 or Active Directory account to access Azure Dev Tools for Teaching?
+No. Students don't need an Office 365 account. If students have access to your organization's Active Directory, they use the same credentials to sign in to the software. If students don't use Active Directory, they must create a [Microsoft account](https://account.microsoft.com/account) (if they don't already have one) using the same email address that you provide them.
+
+### Why aren't my sign in credentials recognized when I sign in to Azure Dev Tools for Teaching?
+Make sure that you're trying to sign in to Azure Dev Tools for Teaching with your school credentials. It might help to open a private browsing window session. If you're still unable to sign in, contact your subscription admin. To find your subscription admin, [contact us](https://aka.ms/adt4tsupport).
+
+### How do I find my Subscriber ID?
+- **When you first enroll in the program**: Your Subscriber ID number is in the subscription welcome email that you receive.
+- **If you renewed your subscription**: Your Subscriber ID is in the renewal email that the subscription administrator received.
+Your Subscriber ID is also in the Visual Studio Subscription portal. After you sign in, look under **My Subscription** on the **My Account** page.
+If you need help locating your Subscriber ID, [contact us](https://azureforeducation.microsoft.com/institutions/Contact).
+
+### Are we automatically enrolled in Azure Dev Tools for Teaching if we receive it as part of our academic volume licensing agreement?
+No, Microsoft doesn't automatically enroll you if you have an academic volume licensing agreement. The academic volume license agreement includes:
+- Enrollment for Education Solutions (EES)
+- Open Value Subscription Agreement for Education Solutions (OVS-ES)
+- Campus Agreement
+- School Agreement
+You must enroll in Azure Dev Tools for Teaching by using the appropriate promotional codes from the subscription welcome email that you receive for your academic volume license.
+You must also renew your subscription when it expires. It doesn't renew automatically.
+If you're unable to locate your promotional code, [contact us](https://azureforeducation.microsoft.com/institutions/Contact).
+
+### How and when do we renew our Azure Dev Tools for Teaching subscription?
+Sixty days before your membership expires, you'll start receiving email reminders to renew your subscription.
+If you don't receive these reminder emails and are concerned that your subscription is about to expire, [contact us](https://aka.ms/adt4tsupport).
+To check the expiration date of your subscription, go to the [Azure Dev Tools for Teaching Management portal](https://azureforeducation.microsoft.com/account/Subscriptions), and look under **Subscriptions**.
+
+### What if I need more help?
+[Contact us](https://azureforeducation.microsoft.com/institutions/Contact) by going to our Subscriptions Support page and locating your region.
+
+### Where is the Azure Dev Tools for Teaching privacy and cookies policy?
+The [Microsoft Privacy Statement](https://privacy.microsoft.com/PrivacyStatement) describes the personal data that Microsoft collects, how it processes that data, and why it shares that data.
+This privacy statement covers a range of Microsoft products including its apps, devices, servers, services, software, and websites. It also provides product-specific information and details its policy for using cookies.
expressroute Metro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/metro.md
The following diagram allows for a comparison between the standard ExpressRoute
| Metro location | Peering locations | Location address | Zone | Local Azure Region | ER Direct | Service Provider |
|--|--|--|--|--|--|--|
-| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Digital Reality AMS8 | 1 | West Europe | &check; | Megaport<br>Equinix<sup>1</sup><br>Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Reality<sup>1</sup> |
+| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Digital Realty AMS8 | 1 | West Europe | &check; | Megaport<br>Equinix<sup>1</sup><br>Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Realty<sup>1</sup> |
| Singapore Metro | Singapore<br>Singapore2 | Equinix SG1<br>Global Switch Tai Seng | 2 | Southeast Asia | &check; | Megaport<sup>1</sup><br>Equinix<sup>1</sup><br>Console Connect<sup>1</sup> |
-| Zurich Metro | Zurich<br>Zurich2 | Interxion ZUR2<br>Equinix ZH5 | 1 | Switzerland North | &check; | Colt<sup>1</sup><br>Digital Reality<sup>1</sup> |
+| Zurich Metro | Zurich<br>Zurich2 | Digital Realty ZUR2<br>Equinix ZH5 | 1 | Switzerland North | &check; | Colt<sup>1</sup><br>Digital Realty<sup>1</sup> |
<sup>1</sup> These service providers will be available in the future.
firewall Choose Firewall Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/choose-firewall-sku.md
Title: Choose the right Azure Firewall SKU to meet your needs
-description: Learn about the different Azure Firewall SKUs and how to choose the right one for your needs.
+ Title: Choose the right Azure Firewall version to meet your needs
+description: Learn about the different Azure Firewall versions and how to choose the right one for your needs.
Previously updated : 03/15/2023 Last updated : 04/03/2024
-# Choose the right Azure Firewall SKU to meet your needs
+# Choose the right Azure Firewall version to meet your needs
-Azure Firewall now supports three different SKUs to cater to a wide range of customer use cases and preferences.
+Azure Firewall now supports three different versions to cater to a wide range of customer use cases and preferences.
- Azure Firewall Premium is recommended to secure highly sensitive applications (such as payment processing). It supports advanced threat protection capabilities like malware and TLS inspection.
- Azure Firewall Standard is recommended for customers who need a Layer 3 to Layer 7 firewall with autoscaling to handle peak traffic periods of up to 30 Gbps. It supports enterprise features like threat intelligence, DNS proxy, custom DNS, and web categories.
Azure Firewall now supports three different SKUs to cater to a wide range of cus
## Feature comparison
-Take a closer look at the features across the three Azure Firewall SKUs:
+Take a closer look at the features across the three Azure Firewall versions:
## Next steps
governance Recommended Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/recommended-policies.md
Title: Recommended policies for Azure services description: Describes how to find and apply recommended policies for Azure services such as Azure Virtual Machines. Previously updated : 08/17/2021 Last updated : 04/03/2024 + # Recommended policies for Azure services
-Customers who are new to Azure Policy often look to find common policy definitions to manage and
-govern their resources. Azure Policy's **Recommended policies** provides a focused list of common
-policy definitions to start with. The **Recommended policies** experience for supported resources is
-embedded within the portal experience for that resource.
+Customers who are new to Azure Policy often look to find common policy definitions to manage and govern their resources. Azure Policy's **Recommended policies** provides a focused list of common policy definitions to start with. The **Recommended policies** experience for supported resources is embedded within the portal experience for that resource.
-For more Azure Policy built-ins, see
-[Azure Policy built-in definitions](../samples/built-in-policies.md).
+For more Azure Policy built-ins, go to [Azure Policy built-in definitions](../samples/built-in-policies.md).
## Azure Virtual Machines
-The **Recommended policies** for [Azure Virtual Machines](../../../virtual-machines/index.yml) are
-on the **Overview** page for virtual machines and under the **Capabilities** tab. In the _Azure
-Policy_ card, select the "Not configured" or "# assigned" text to open a side pane with the
-recommended policies. Any policy definition already assigned to a scope the virtual machine is a
-member of is grayed-out. Select the recommended policies to apply to this virtual machine and select
-**Assign policies** to create an assignment for each.
+The **Recommended policies** for [Azure Virtual Machines](../../../virtual-machines/index.yml) are on the **Overview** page for virtual machines and under the **Capabilities** tab. Select the **Azure Policy** card to open a side pane with the recommended policies. Select the recommended policies to apply to this virtual machine and select **Assign policies** to create an assignment for each policy. **Assign policies** is unavailable, or greyed out, for any policy already assigned to a scope where the virtual machine is a member.
-As an organization reaches maturity with
-[organizing their resources and resource hierarchy](/azure/cloud-adoption-framework/ready/azure-best-practices/organize-subscriptions),
-it's recommended to transition these policy assignments from one per resource to the subscription or
-[management group](../../management-groups/index.yml) level.
+As an organization reaches maturity with [organizing their resources and resource hierarchy](/azure/cloud-adoption-framework/ready/azure-best-practices/organize-subscriptions), the recommendation is to transition these policy assignments from one per resource to the subscription or [management group](../../management-groups/index.yml) level.
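As one illustration, such an assignment can be created at subscription scope with the Azure CLI. This is a sketch, not a command from the article itself; it reuses the disaster-recovery audit definition ID from the table below, and the assignment name and subscription ID are placeholders:

```bash
az policy assignment create \
  --name 'audit-vm-disaster-recovery' \
  --policy '0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56' \
  --scope '/subscriptions/<subscription-id>'
```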
### Azure Virtual Machines recommended policies
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](../../../site-recovery/index.yml). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) |
-|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
## Next steps
hdinsight-aks Azure Service Bus Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-service-bus-demo.md
Title: Use Apache Flink on HDInsight on AKS with Azure Service Bus
-description: Use Apache Flink DataStream API on HDInsight on AKS with Azure Service Bus
+description: Use Apache Flink DataStream API on HDInsight on AKS with Azure Service Bus.
Previously updated : 11/27/2023 Last updated : 04/02/2024 # Use Apache Flink on HDInsight on AKS with Azure Service Bus
This article provides an overview and demonstration of Apache Flink DataStream A
## Prerequisites -- [Flink Cluster 1.16.0 on HDInsight on AKS](./flink-create-cluster-portal.md)
+- [Flink Cluster 1.17.0 on HDInsight on AKS](./flink-create-cluster-portal.md)
- For this demonstration, we use a Windows VM as the Maven project development environment in the same virtual network as HDInsight on AKS.
- During the [creation](./flink-create-cluster-portal.md) of the Flink cluster, ensure that SSH access is selected. This enables you to access the cluster using Secure Shell (SSH).
- Set up an [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) instance.
In the POM.xml file, we define the project's dependencies using Maven, ensuring
<properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target>
- <flink.version>1.16.0</flink.version>
+ <flink.version>1.17.0</flink.version>
<java.version>1.8</java.version> </properties> <dependencies>
hdinsight-aks Change Data Capture Connectors For Apache Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md
Title: How to perform Change Data Capture of SQL Server with Apache Flink® Data
description: Learn how to perform Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source. Previously updated : 03/22/2024 Last updated : 04/02/2024 # Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source on HDInsight on AKS
GO
``` ##### Maven source code on IntelliJ IDEA
-In the below snippet, we use HDInsight Kafka 2.4.1. Based on your usage, update the version of Kafka on `<kafka.version>`.
+In the following snippet, we use Kafka 2.4.1. Based on your usage, update the version of Kafka in `<kafka.version>`.
**maven pom.xml**
In the below snippet, we use HDInsight Kafka 2.4.1. Based on your usage, update
<flink.version>1.17.0</flink.version> <java.version>1.8</java.version> <scala.binary.version>2.12</scala.binary.version>
- <kafka.version>3.2.0</kafka.version> // Replace with 3.2 if you're using HDInsight Kafka 3.2
+ <kafka.version>3.2.0</kafka.version>
</properties> <dependencies> <dependency>
hdinsight-aks Flink How To Setup Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-how-to-setup-event-hub.md
Title: How to connect Apache Flink® on HDInsight on AKS with Azure Event Hubs f
description: Learn how to connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka® Previously updated : 08/29/2023 Last updated : 04/02/2024 # Connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka®
In this article, we explore how to connect [Azure Event Hubs](/azure/event-hubs/
## Packaging the JAR for Flink

1. Create the producer class in the package `contoso.example`:
- ```
+ ```
+ package contoso.example;
+
import org.apache.flink.api.common.functions.MapFunction; import org.apache.flink.api.common.serialization.SimpleStringSchema;
+
+ import org.apache.flink.api.java.utils.ParameterTool;
+ import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
+ import org.apache.flink.connector.kafka.sink.KafkaSink;
+
import org.apache.flink.streaming.api.datastream.DataStream; import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
- import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer; //v0.11.0.0
- import java.io.FileNotFoundException;
+
import java.io.FileReader; import java.util.Properties;
- public class FlinkTestProducer {
-
- private static final String TOPIC = "test";
- private static final String FILE_PATH = "src/main/resources/producer.config";
-
- public static void main(String... args) {
- try {
- Properties properties = new Properties();
- properties.load(new FileReader(FILE_PATH));
-
- final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
- DataStream stream = createStream(env);
- FlinkKafkaProducer<String> myProducer = new FlinkKafkaProducer<>(
- TOPIC,
- new SimpleStringSchema(), // serialization schema
- properties);
-
- stream.addSink(myProducer);
- env.execute("Testing flink print");
-
- } catch(FileNotFoundException e){
- System.out.println("FileNotFoundException: " + e);
- } catch (Exception e) {
- System.out.println("Failed with exception:: " + e);
- }
- }
-
- public static DataStream createStream(StreamExecutionEnvironment env){
- return env.generateSequence(0, 200)
- .map(new MapFunction<Long, String>() {
- @Override
- public String map(Long in) {
- return "FLINK PRODUCE " + in;
- }
- });
- }
- }
- ```
-
+
+ public class AzureEventHubDemo {
+
+ public static void main(String[] args) throws Exception {
+ // 1. get stream execution environment
+ StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment().setParallelism(1);
+ ParameterTool parameters = ParameterTool.fromArgs(args);
+ String input = parameters.get("input");
+ Properties properties = new Properties();
+ properties.load(new FileReader(input));
+
+ // 2. generate stream input
+ DataStream<String> stream = createStream(env);
+
+ // 3. sink to eventhub
+ KafkaSink<String> sink = KafkaSink.<String>builder().setKafkaProducerConfig(properties)
+ .setRecordSerializer(KafkaRecordSerializationSchema.builder()
+ .setTopic("topic1")
+ .setValueSerializationSchema(new SimpleStringSchema())
+ .build())
+ .build();
+
+ stream.sinkTo(sink);
+
+ // 4. execute the stream
+ env.execute("Produce message to Azure event hub");
+ }
+
+ public static DataStream<String> createStream(StreamExecutionEnvironment env){
+ return env.generateSequence(0, 200)
+ .map(new MapFunction<Long, String>() {
+ @Override
+ public String map(Long in) {
+ return "FLINK PRODUCE " + in;
+ }
+ });
+ }
+ }
+ ```
+
1. Add the snippet to run the Flink Producer. :::image type="content" source="./media/flink-eventhub/testing-flink.png" alt-text="Screenshot showing how to test Flink in Event Hubs." border="true" lightbox="./media/flink-eventhub/testing-flink.png":::
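    The producer loads its Kafka settings from the file passed through `--input`. A minimal sketch of that properties file for the Event Hubs Kafka endpoint follows; `mynamespace`, the shared access key name, and the key are placeholders for your own values:

    ```
    bootstrap.servers=mynamespace.servicebus.windows.net:9093
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>";
    ```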
-1. Once the code is executed, the events are stored in the topic **ΓÇ£TESTΓÇ¥**
+1. Once the code is executed, the events are stored in the topic **"topic1"**
:::image type="content" source="./media/flink-eventhub/events-stored-in-topic.png" alt-text="Screenshot showing Event Hubs stored in topic." border="true" lightbox="./media/flink-eventhub/events-stored-in-topic.png":::
hdinsight-aks Process And Consume Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/process-and-consume-data.md
Title: Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
description: Learn how to use Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS Previously updated : 03/28/2024 Last updated : 04/03/2024 # Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
public class Event {
``` ## Package the jar and submit the job to Flink
+On webssh, upload the JAR and submit the job (see the sketch below).
++
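A minimal submission sketch from the cluster's SSH session; the JAR name and main class here are hypothetical placeholders, not values from this article:

```bash
bin/flink run \
  -c com.example.KafkaFlinkDemo \
  FlinkKafkaDemo-1.0.jar
```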
+On the Flink Dashboard UI, verify that the job is running.
+ ## Produce the topic - clicks on Kafka
hdinsight-aks Use Flink To Sink Kafka Message Into Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-to-sink-kafka-message-into-hbase.md
Title: Write messages to Apache HBase® with Apache Flink® DataStream API
description: Learn how to write messages to Apache HBase with Apache Flink DataStream API. Previously updated : 03/25/2024 Last updated : 04/02/2024 # Write messages to Apache HBase® with Apache Flink® DataStream API
hbase:002:0>
<dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-connector-hbase-base</artifactId>
- <version>1.16.0</version>
+ <version>${flink.version}</version>
</dependency> <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client --> <dependency>
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
The systems listed in the following table are considered compatible with Azure I
| [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/support/green-check.png) | ![RHEL 7 + ARM32v7](./media/support/green-check.png) | ![RHEL 7 + ARM64](./media/support/green-check.png) | [June 2024](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) | | [Ubuntu 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | | [April 2025](https://wiki.ubuntu.com/Releases) | | [Ubuntu 22.04 <sup>2</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | | ![Ubuntu 22.04 + ARM32v7](./media/support/green-check.png) | | [June 2027](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Core <sup>3</sup>](https://snapcraft.io/azure-iot-edge) | | ![Ubuntu Core + AMD64](./media/support/green-check.png) | ![Ubuntu Core + ARM64](./media/support/green-check.png) | [April 2027](https://ubuntu.com/about/release-cycle) |
| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | | | | [Yocto](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2024](https://wiki.yoctoproject.org/wiki/Releases) | | Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/support/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/support/green-check.png) | |
The systems listed in the following table are considered compatible with Azure I
<sup>2</sup> Installation packages are made available on the [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). See the installation steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
+<sup>3</sup> Ubuntu Core is fully supported, but the automated testing of snaps currently happens on Ubuntu 22.04 Server LTS.
+ > [!NOTE] > When a *Tier 2* operating system reaches its end of support date, it's removed from the supported platform list. If you take no action, IoT Edge devices running on the unsupported operating system continue to work but ongoing security patches and bug fixes in the host packages for the operating system won't be available after the end of support date. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform.
iot-operations Howto Configure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-data-lake.md
Configure a data lake connector to connect to an Azure Data Lake Storage Gen2 (A
```bash kubectl create secret generic my-sas \
- --from-literal=accessToken='sv=2022-11-02&ss=b&srt=c&sp=rwdlax&se=2023-07-22T05:47:40Z&st=2023-07-21T21:47:40Z&spr=https&sig=xDkwJUO....'
+ --from-literal=accessToken='sv=2022-11-02&ss=b&srt=c&sp=rwdlax&se=2023-07-22T05:47:40Z&st=2023-07-21T21:47:40Z&spr=https&sig=xDkwJUO....' \
+ -n azure-iot-operations
``` 1. Create a [DataLakeConnector](#datalakeconnector) resource that defines the configuration and endpoint settings for the connector. You can use the YAML provided as an example, but make sure to change the following fields:
kubernetes-fleet L4 Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/l4-load-balancing.md
You can follow this document to set up layer 4 load balancing for such multi-clu
If successful, the output looks similar to the following example: ```console
- clusterresourceplacement.fleet.azure.com/kuard-demo created
+ clusterresourceplacement.placement.kubernetes-fleet.io/kuard-demo created
``` 1. Check the status of the `ClusterResourcePlacement`:
machine-learning Concept Secure Code Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-code-best-practice.md
Title: Secure code best practices
-description: Learn about potential security threats that may exist when developing for Azure Machine Learning, mitigations, and best practices.
+description: Learn about potential security threats that exist when developing for Azure Machine Learning, mitigations, and best practices.
Previously updated : 03/11/2024 Last updated : 04/02/2024
-# Secure code best practices with Azure Machine Learning
+# Best practices for secure code
-In Azure Machine Learning, you can upload files and content from any source into Azure. Content within Jupyter notebooks or scripts that you load can potentially read data from your sessions, access data within your organization in Azure, or run malicious processes on your behalf.
+In Azure Machine Learning, you can upload files and content from any source into Azure. Content within Jupyter notebooks or scripts that you load can potentially read data from your sessions, access sensitive data within your organization in Azure, or run malicious processes on your behalf.
> [!IMPORTANT] > Only run notebooks or scripts from trusted sources. For example, where you or your security team have reviewed the notebook or script. ## Potential threats
-Development with Azure Machine Learning often involves web-based development environments (Notebooks & Azure Machine Learning studio). When you use web-based development environments, the potential threats are:
+Development with Azure Machine Learning often involves web-based development environments, such as notebooks or the Azure Machine Learning studio. When you use web-based development environments, the potential threats are:
-* [Cross site scripting (XSS)](https://owasp.org/www-community/attacks/xss/)
+* [Cross-site scripting (XSS)](https://owasp.org/www-community/attacks/xss/)
* __DOM injection__: This type of attack can modify the UI displayed in the browser. For example, by changing how the run button behaves in a Jupyter Notebook.
- * __Access token/cookies__: XSS attacks can also access local storage and browser cookies. Your Microsoft Entra authentication token is stored in local storage. An XSS attack could use this token to make API calls on your behalf, and then send the data to an external system or API.
+ * __Access token or cookies__: XSS attacks can also access local storage and browser cookies. Your Microsoft Entra authentication token is stored in local storage. An XSS attack could use this token to make API calls on your behalf, and then send the data to an external system or API.
-* [Cross site request forgery (CSRF)](https://owasp.org/www-community/attacks/csrf): This attack may replace the URL of an image or link with the URL of a malicious script or API. When the image is loaded, or link clicked, a call is made to the URL.
+* [Cross-site request forgery (CSRF)](https://owasp.org/www-community/attacks/csrf): This attack could replace the URL of an image or link with the URL of a malicious script or API. When the image is loaded, or link clicked, a call is made to the URL.
## Azure Machine Learning studio notebooks
-Azure Machine Learning studio provides a hosted notebook experience in your browser. Cells in a notebook can output HTML documents or fragments that contain malicious code. When the output is rendered, the code can be executed.
+Azure Machine Learning studio provides a hosted notebook experience in your browser. Cells in a notebook can output HTML documents or fragments that contain malicious code. When the output is rendered, the code can be executed.
__Possible threats__:
-* Cross site scripting (XSS)
-* Cross site request forgery (CSRF)
+* Cross-site scripting (XSS)
+* Cross-site request forgery (CSRF)
__Mitigations provided by Azure Machine Learning__:

* __Code cell output__ is sandboxed in an iframe. The iframe prevents the script from accessing the parent DOM, cookies, or session storage.
* __Markdown cell__ contents are cleaned using the dompurify library. This blocks malicious scripts from executing when markdown cells are rendered.
-* __Image URL__ and __Markdown links__ are sent to a Microsoft owned endpoint, which checks for malicious values. If a malicious value is detected, the endpoint rejects the request.
+* __Image URL__ and __markdown links__ are sent to a Microsoft-owned endpoint, which checks for malicious values. If a malicious value is detected, the endpoint rejects the request.
__Recommended actions__:
-* Verify that you trust the contents of files before uploading to studio. When uploading, you must acknowledge that you're uploading trusted files.
-* When selecting a link to open an external application, you'll be prompted to trust the application.
+* Verify that you trust the contents of files before uploading to the studio. You must acknowledge that you're uploading trusted files.
+* When selecting a link to open an external application, you're prompted to trust the application.
## Azure Machine Learning compute instance
-Azure Machine Learning compute instance hosts __Jupyter__ and __Jupyter Lab__. When you use either, cells in a notebook or code in can output HTML documents or fragments that contain malicious code. When the output is rendered, the code can be executed. The same threats also apply when you use __RStudio__ and __Posit Workbench (formerly RStudio Workbench)__ hosted on a compute instance.
+Azure Machine Learning compute instance hosts Jupyter and JupyterLab. When you use either, code inside notebook cells can output HTML documents or fragments that contain malicious code. When the output is rendered, the code can be executed. The same threats apply when you use RStudio or Posit Workbench (formerly RStudio Workbench) hosted on a compute instance.
__Possible threats__:
-* Cross site scripting (XSS)
-* Cross site request forgery (CSRF)
+* Cross-site scripting (XSS)
+* Cross-site request forgery (CSRF)
__Mitigations provided by Azure Machine Learning__:
-* None. Jupyter and Jupyter Lab are open-source applications hosted on the Azure Machine Learning compute instance.
+* None. Jupyter and JupyterLab are open-source applications hosted on the Azure Machine Learning compute instance.
__Recommended actions__:
-* Verify that you trust the contents of files before uploading to studio. When uploading, you must acknowledge that you're uploading trusted files.
+* Verify that you trust the contents of files before uploading. You must acknowledge that you're uploading trusted files.
-## Report security issues or concerns
+## Report security issues or concerns
Azure Machine Learning is eligible under the Microsoft Azure Bounty Program. For more information, visitΓÇ»[https://www.microsoft.com/msrc/bounty-microsoft-azure](https://www.microsoft.com/msrc/bounty-microsoft-azure).
-## Next steps
+## Related content
-* [Enterprise security for Azure Machine Learning](concept-enterprise-security.md)
+* [Enterprise security and governance for Azure Machine Learning](concept-enterprise-security.md)
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
Previously updated : 10/03/2022 Last updated : 04/03/2024 #Customer intent: As a TensorFlow developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
Whether you're developing a TensorFlow model from the ground-up or you're bringi
## Prerequisites
-To benefit from this article, you'll need to:
+To benefit from this article, you need to:
- Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/).
- Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook.
  - Azure Machine Learning compute instance: no downloads or installation necessary
- - Complete the [Create resources to get started](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
+ - Complete the [Create resources to get started](quickstart-create-resources.md) tutorial to create a dedicated notebook server preloaded with the SDK and the sample repository.
  - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > python > jobs > single-step > tensorflow > train-hyperparameter-tune-deploy-with-tensorflow**.
- Your Jupyter notebook server
  - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
This section sets up the job for training by loading the required Python package
### Connect to the workspace
-First, you'll need to connect to your Azure Machine Learning workspace. The [Azure Machine Learning workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
+First, you need to connect to your Azure Machine Learning workspace. The [Azure Machine Learning workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
We're using `DefaultAzureCredential` to get access to the workspace. This credential should be capable of handling most Azure SDK authentication scenarios.
Next, get a handle to the workspace by providing your Subscription ID, Resource
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=ml_client)]
-The result of running this script is a workspace handle that you'll use to manage other resources and jobs.
+The result of running this script is a workspace handle that you use to manage other resources and jobs.
> [!NOTE] > - Creating `MLClient` will not connect the client to the workspace. The client initialization is lazy and will wait for the first time it needs to make a call. In this article, this will happen during compute creation.
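For reference, the connection step amounts to something like the following sketch; the subscription, resource group, and workspace values are placeholders:

```python
# Sketch: connect to an Azure Machine Learning workspace with the SDK v2.
# Replace the placeholder values with your own identifiers.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)
```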
-### Create a compute resource to run the job
+### Create a compute resource
Azure Machine Learning needs a compute resource to run a job. This resource can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
In the following example script, we provision a Linux [`compute cluster`](./how-
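As a rough sketch, such a provisioning call looks like the following; the cluster name, VM size, and scale settings are illustrative choices, not requirements:

```python
# Sketch: provision (or reuse) a GPU compute cluster with the SDK v2.
# The VM size and scale limits below are example values only.
from azure.ai.ml.entities import AmlCompute

gpu_compute = AmlCompute(
    name="gpu-cluster",
    size="Standard_NC6s_v3",  # choose a GPU SKU available in your region
    min_instances=0,          # scale to zero when idle to control cost
    max_instances=4,
    idle_time_before_scale_down=180,
)
ml_client.begin_create_or_update(gpu_compute).result()
```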
### Create a job environment
-To run an Azure Machine Learning job, you'll need an environment. An Azure Machine Learning [environment](concept-environments.md) encapsulates the dependencies (such as software runtime and libraries) needed to run your machine learning training script on your compute resource. This environment is similar to a Python environment on your local machine.
+To run an Azure Machine Learning job, you need an environment. An Azure Machine Learning [environment](concept-environments.md) encapsulates the dependencies (such as software runtime and libraries) needed to run your machine learning training script on your compute resource. This environment is similar to a Python environment on your local machine.
Azure Machine Learning allows you to either use a curated (or ready-made) environment, which is useful for common training and inference scenarios, or create a custom environment using a Docker image or a Conda configuration.
-In this article, you'll reuse the curated Azure Machine Learning environment `AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu`. You'll use the latest version of this environment using the `@latest` directive.
+In this article, you reuse the curated Azure Machine Learning environment `AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu`. You use the latest version of this environment with the `@latest` directive.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=curated_env_name)] ## Configure and submit your training job
-In this section, we'll begin by introducing the data for training. We'll then cover how to run a training job, using a training script that we've provided. You'll learn to build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in Azure Machine Learning.
+In this section, we begin by introducing the data for training. We then cover how to run a training job, using a training script that we've provided. You learn to build the training job by configuring the command for running the training script. Then, you submit the training job to run in Azure Machine Learning.
### Obtain the training data You use data from the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits. This data is sourced from Yann LeCun's website and stored in an Azure storage account.
The provided training script does the following:
- trains a model, using the data; and - returns the output model.
-During the pipeline run, you'll use MLFlow to log the parameters and metrics. To learn how to enable MLFlow tracking, see [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
+During the pipeline run, you use MLflow to log the parameters and metrics. To learn how to enable MLflow tracking, see [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
In the training script `tf_mnist.py`, we create a simple deep neural network (DNN). This DNN has:
In the training script `tf_mnist.py`, we create a simple deep neural network (DN
### Build the training job
-Now that you have all the assets required to run your job, it's time to build it using the Azure Machine Learning Python SDK v2. For this example, we'll be creating a `command`.
+Now that you have all the assets required to run your job, it's time to build it using the Azure Machine Learning Python SDK v2. For this example, we create a `command`.
An Azure Machine Learning `command` is a resource that specifies all the details needed to execute your training code in the cloud. These details include the inputs and outputs, type of hardware to use, software to install, and how to run your code. The `command` contains information to execute a single command. #### Configure the command
-You'll use the general purpose `command` to run the training script and perform your desired tasks. Create a `Command` object to specify the configuration details of your training job.
+You use the general purpose `command` to run the training script and perform your desired tasks. Create a `Command` object to specify the configuration details of your training job.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=job)]
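In outline, that configuration has roughly the following shape; this is a sketch, and the input values and data path are illustrative placeholders, not the notebook's exact values:

```python
# Sketch: configure a command job that runs the TensorFlow training script.
from azure.ai.ml import Input, command

job = command(
    code="./src",  # folder containing tf_mnist.py (illustrative layout)
    command=(
        "python tf_mnist.py --data-folder ${{inputs.data_folder}} "
        "--batch-size ${{inputs.batch_size}} --learning-rate ${{inputs.learning_rate}}"
    ),
    inputs=dict(
        data_folder=Input(type="uri_folder", path="<MNIST_DATA_URI>"),
        batch_size=64,
        learning_rate=0.01,
    ),
    environment="AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest",
    compute="gpu-cluster",
    display_name="tensorflow-mnist-example",
)
ml_client.create_or_update(job)
```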
To tune the model's hyperparameters, define the parameter space in which to sear
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=job_for_sweep)]
-Then, you'll configure sweep on the command job, using some sweep-specific parameters, such as the primary metric to watch and the sampling algorithm to use.
+Then, you configure sweep on the command job, using some sweep-specific parameters, such as the primary metric to watch and the sampling algorithm to use.
In the following code, we use random sampling to try different configuration sets of hyperparameters in an attempt to maximize our primary metric, `validation_acc`.
-We also define an early termination policyΓÇöthe `BanditPolicy`. This policy operates by checking the job every two iterations. If the primary metric, `validation_acc`, falls outside the top ten percent range, Azure Machine Learning will terminate the job. This saves the model from continuing to explore hyperparameters that show no promise of helping to reach the target metric.
+We also define an early termination policy, the `BanditPolicy`. This policy checks the job every two iterations. If the primary metric, `validation_acc`, falls outside the top 10 percent range, Azure Machine Learning terminates the job. This termination saves the model from continuing to explore hyperparameters that show no promise of helping to reach the target metric.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=sweep_job)]
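A condensed sketch of that sweep configuration follows; the search-space bounds and trial limits are illustrative:

```python
# Sketch: bind a search space to the command job, then sweep it.
from azure.ai.ml.sweep import BanditPolicy, Choice, LogUniform

job_for_sweep = job(
    batch_size=Choice(values=[32, 64, 128]),
    learning_rate=LogUniform(min_value=-6, max_value=-1),
)

sweep_job = job_for_sweep.sweep(
    compute="gpu-cluster",
    sampling_algorithm="random",
    primary_metric="validation_acc",
    goal="Maximize",
    max_total_trials=8,
    max_concurrent_trials=4,
    # Stop trials whose metric falls outside the top 10% every 2 evaluations.
    early_termination_policy=BanditPolicy(slack_factor=0.1, evaluation_interval=2),
)
returned_sweep_job = ml_client.create_or_update(sweep_job)
```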
You can then register this model.
## Deploy the model as an online endpoint
-After you've registered your model, you can deploy it as an [online endpoint](concept-endpoints.md)ΓÇöthat is, as a web service in the Azure cloud.
+After you register your model, you can deploy it as an [online endpoint](concept-endpoints.md), that is, as a web service in the Azure cloud.
-To deploy a machine learning service, you'll typically need:
+To deploy a machine learning service, you typically need:
- The model assets that you want to deploy. These assets include the model's file and metadata that you already registered in your training job. - Some code to run as a service. The code executes the model on a given input request (an entry script). This entry script receives data submitted to a deployed web service and passes it to the model. After the model processes the data, the script returns the model's response to the client. The script is specific to your model and must understand the data that the model expects and returns. When you use an MLflow model, Azure Machine Learning automatically creates this script for you.
For more information about deployment, see [Deploy and score a machine learning
### Create a new online endpoint
-As a first step to deploying your model, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you'll create a unique name using a universally unique identifier (UUID).
+As a first step to deploying your model, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you create a unique name using a universally unique identifier (UUID).
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=online_endpoint_name)] [!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=endpoint)]
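As a sketch, the endpoint creation amounts to the following; the name prefix and description are illustrative:

```python
# Sketch: create a managed online endpoint with a UUID-based unique name.
import uuid
from azure.ai.ml.entities import ManagedOnlineEndpoint

online_endpoint_name = "tff-endpoint-" + str(uuid.uuid4())[:8]

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Classify handwritten digits (illustrative description)",
    auth_mode="key",
)
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```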
-Once you've created the endpoint, you can retrieve it as follows:
+Once you create the endpoint, you can retrieve it as follows:
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=get_endpoint)]
Once you've created the endpoint, you can retrieve it as follows:
After you've created the endpoint, you can deploy the model with the entry script. An endpoint can have multiple deployments. Using rules, the endpoint can then direct traffic to these deployments.
-In the following code, you'll create a single deployment that handles 100% of the incoming traffic. We've specified an arbitrary color name (*tff-blue*) for the deployment. You could also use any other name such as *tff-green* or *tff-red* for the deployment.
+In the following code, you create a single deployment that handles 100% of the incoming traffic. We use an arbitrary color name (*tff-blue*) for the deployment. You could also use any other name such as *tff-green* or *tff-red* for the deployment.
The code to deploy the model to the endpoint does the following: - deploys the best version of the model that you registered earlier;
The code to deploy the model to the endpoint does the following:
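A condensed sketch of that deployment step follows; the instance type is an illustrative choice, and `model` stands in for the handle to the best model registered earlier:

```python
# Sketch: deploy the registered model and route 100% of traffic to it.
from azure.ai.ml.entities import ManagedOnlineDeployment

blue_deployment = ManagedOnlineDeployment(
    name="tff-blue",
    endpoint_name=online_endpoint_name,
    model=model,  # handle to the registered model (assumed from earlier steps)
    instance_type="Standard_DS3_v2",  # illustrative SKU
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(blue_deployment).result()

# Send all incoming traffic to the new deployment.
endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
endpoint.traffic = {"tff-blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```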
### Test the deployment with a sample query
-Now that you've deployed the model to the endpoint, you can predict the output of the deployed model, using the `invoke` method on the endpoint. To run the inference, use the sample request file `sample-request.json` from the *request* folder.
+After you deploy the model to the endpoint, you can predict the output of the deployed model, using the `invoke` method on the endpoint. To run the inference, use the sample request file `sample-request.json` from the *request* folder.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=invoke)]
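In outline, the call looks like this sketch; the request file path follows the folder layout described above:

```python
# Sketch: score the deployed model with the sample request file.
result = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    request_file="./request/sample-request.json",
    deployment_name="tff-blue",
)
print(result)
```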
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2. In the foll
|`adla_step`|None|None| |`automl_step`|`automl` job|`automl` component| |`azurebatch_step`| None| None|
-|`command_step`| `command` job|`command` component|
-|`data_transfer_step`| coming soon | coming soon|
-|`databricks_step`| coming soon|coming soon|
-|`estimator_step`| command job|`command` component|
-|`hyper_drive_step`|`sweep` job| `sweep` component|
+|`command_step`| [`command` job](reference-yaml-job-command.md) | [`command` component](reference-yaml-component-command.md)|
+|`data_transfer_step`| None | None |
+|`databricks_step`| None | None |
+|`estimator_step`| [`command` job](reference-yaml-job-command.md) | [`command` component](reference-yaml-component-command.md)|
+|`hyper_drive_step`|[`sweep` job](reference-yaml-job-sweep.md)| None |
|`kusto_step`| None|None|
-|`module_step`|None|command component|
-|`mpi_step`| command job|command component|
+|`module_step`|None| [`command` component](reference-yaml-component-command.md)|
+|`mpi_step`| [`command` job](reference-yaml-job-command.md) | [`command` component](reference-yaml-component-command.md)|
|`parallel_run_step`|`Parallel` job| `Parallel` component|
-|`python_script_step`| `command` job|command component|
-|`r_script_step`| `command` job|`command` component|
-|`synapse_spark_step`| coming soon|coming soon|
+|`python_script_step`| [`command` job](reference-yaml-job-command.md) | [`command` component](reference-yaml-component-command.md)|
+|`r_script_step`| [`command` job](reference-yaml-job-command.md) | [`command` component](reference-yaml-component-command.md)|
+|`synapse_spark_step`| [`spark` job](reference-yaml-job-spark.md) | [`spark` component](reference-yaml-component-spark.md) |
## Published pipelines
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md
Previously updated : 10/13/2021 Last updated : 04/01/2024 # Prepare data for computer vision tasks with automated machine learning v1
Last updated 10/13/2021
In this article, you learn how to prepare image data for training computer vision models with [automated machine learning in Azure Machine Learning](../concept-automated-ml.md).
-To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset).
+To generate models for computer vision tasks with AutoML, you need to bring labeled image data as input for model training in the form of an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset).
To ensure your TabularDataset contains the accepted schema for consumption in automated ML, you can use the Azure Machine Learning data labeling tool or use a conversion script.
If you already have a data labeling project and you want to use that data, you c
If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml).
-If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files based on schemas defined in [Schema for JSONL files for AutoML image experiments](../reference-automl-images-schema.md).
+If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files. Base the files on the schemas defined in [Schema for JSONL files for AutoML image experiments](../reference-automl-images-schema.md).
-After your data file(s) are converted to the accepted JSONL format, you can upload them to your storage account on Azure.
+After your data files are converted to the accepted JSONL format, you can upload them to your storage account on Azure.
## Upload the JSONL file and images to storage To use the data for automated ML training, upload the data to your [Azure Machine Learning workspace](../concept-workspace.md) via a [datastore](../how-to-access-data.md). The datastore provides a mechanism for you to upload/download data to storage on Azure, and interact with it from your remote compute targets.
-Upload the entire parent directory consisting of images and JSONL files to the default datastore that is automatically created upon workspace creation. This datastore connects to the default Azure blob storage container that was created as part of workspace creation.
+Upload the entire parent directory consisting of images and JSONL files to the default datastore that is automatically created upon workspace creation. This datastore connects to the default Azure blob storage container that was created as part of workspace creation.
```python # Retrieve default datastore that's automatically created when we setup a workspace ds = ws.get_default_datastore() ds.upload(src_dir='./fridgeObjects', target_path='fridgeObjects') ```
-Once the data upload is done, you can create an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) and register it to your workspace for future use as input to your automated ML experiments for computer vision models.
+Once the data upload is done, you can create an [Azure Machine Learning TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset). Then, register the dataset to your workspace for future use as input to your automated ML experiments for computer vision models.
```python from azureml.core import Dataset
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-train-deploy-notebook.md
Previously updated : 09/14/2022 Last updated : 04/02/2024 #Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning in a Python Jupyter Notebook. You can then use the notebook as a template to train your own machine learning model with your own data.
+In this tutorial, you train a machine learning model on remote compute resources. You use the training and deployment workflow for Azure Machine Learning in a Python Jupyter Notebook. You can then use the notebook as a template to train your own machine learning model with your own data.
This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multi-class classifier to identify the digit a given image represents.
Learn how to take the following actions:
## Run a notebook from your workspace
-Azure Machine Learning includes a cloud notebook server in your workspace for an install-free and pre-configured experience. Use [your own environment](how-to-configure-environment.md) if you prefer to have control over your environment, packages, and dependencies.
+Azure Machine Learning includes a cloud notebook server in your workspace for an install-free and preconfigured experience. Use [your own environment](how-to-configure-environment.md) if you prefer to have control over your environment, packages, and dependencies.
## Clone a notebook folder
You complete the following experiment setup and run steps in Azure Machine Learn
## Install packages
-Once the compute instance is running and the kernel appears, add a new code cell to install packages needed for this tutorial.
+Once the compute instance is running and the kernel appears, add a new code cell to install packages needed for this tutorial.
1. At the top of the notebook, add a code cell. :::image type="content" source="media/tutorial-train-deploy-notebook/add-code-cell.png" alt-text="Screenshot of add code cell for notebook.":::
Once the compute instance is running and the kernel appears, add a new code cell
%pip install scipy==1.5.2 ```
-You may see a few install warnings. These can safely be ignored.
+You may see a few install warnings. These can safely be ignored.
## Run the notebook This tutorial and the accompanying **utils.py** file are also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to use them in your own [local environment](how-to-configure-environment.md). If you aren't using the compute instance, add `%pip install azureml-sdk[notebooks] azureml-opendatasets matplotlib` to the install above. > [!Important]
-> The rest of this article contains the same content as you see in the notebook.
+> The rest of this article contains the same content as you see in the notebook.
> > Switch to the Jupyter Notebook now if you want to run the code while you read along. > To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
+<!-- nbstart https://raw.githubusercontent.com/Azure/MachineLearningNotebooks/master/tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins/quickstart-azureml-in-10mins.ipynb -->
+ ## Import data Before you train a model, you need to understand the data you're using to train it. In this section, learn how to:
Before you train a model, you need to understand the data you're using to train
* Download the MNIST dataset * Display some sample images
-You'll use Azure Open Datasets to get the raw MNIST data files. Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to machine learning solutions for better models. Each dataset has a corresponding class, `MNIST` in this case, to retrieve the data in different ways.
+You use Azure Open Datasets to get the raw MNIST data files. Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to machine learning solutions for better models. Each dataset has a corresponding class, `MNIST` in this case, to retrieve the data in different ways.
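As a sketch, retrieving the dataset with the v1 SDK looks roughly like this (assumes the `azureml-opendatasets` package is installed):

```python
# Sketch (SDK v1): get the MNIST file dataset from Azure Open Datasets.
import os
from azureml.opendatasets import MNIST

data_folder = os.path.join(os.getcwd(), "data")
os.makedirs(data_folder, exist_ok=True)

mnist_file_dataset = MNIST.get_file_dataset()
```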
```python
mnist_file_dataset.download(data_folder, overwrite=True)
Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them.
-Note this step requires a `load_data` function that's included in an `utils.py` file. This file is placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays.
+Note that this step requires a `load_data` function, included in a `utils.py` file. This file is placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays.
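A sketch of how that helper is typically called follows; the exact signature lives in `utils.py`, and the glob patterns are illustrative:

```python
# Sketch: parse the downloaded MNIST archives into numpy arrays via utils.py.
import glob
import os

from utils import load_data  # helper shipped alongside this notebook

# Scale pixel values to [0, 1] and flatten the label vector (assumed signature).
X_train = load_data(
    glob.glob(os.path.join(data_folder, "**/train-images-idx3-ubyte.gz"), recursive=True)[0], False
) / 255.0
y_train = load_data(
    glob.glob(os.path.join(data_folder, "**/train-labels-idx1-ubyte.gz"), recursive=True)[0], True
).reshape(-1)
```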
```python
for i in np.random.permutation(X_train.shape[0])[:sample_size]:
plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys) plt.show() ```
-The code above displays a random set of images with their labels, similar to this:
+The code displays a random set of images with their labels, similar to this:
:::image type="content" source="media/tutorial-train-deploy-notebook/image-data-with-labels.png" alt-text="Sample images with their labels."::: ## Train model and log metrics with MLflow
-You'll train the model using the code below. Note that you are using MLflow autologging to track metrics and log model artifacts.
+Train the model using the following code. This code uses MLflow autologging to track metrics and log model artifacts.
You use the [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) classifier from the [scikit-learn framework](https://scikit-learn.org/) to classify the data. > [!NOTE]
-> The model training takes approximately 2 minutes to complete.**
+> The model training takes approximately 2 minutes to complete.
```python
clf = LogisticRegression(
with mlflow.start_run() as run: clf.fit(X_train, y_train) ```- ## View experiment
-In the left-hand menu in Azure Machine Learning studio, select __Jobs__ and then select your job (__azure-ml-in10-mins-tutorial__). A job is a grouping of many runs from a specified script or piece of code. Multiple jobs can be grouped together as an experiment.
+In the left-hand menu in Azure Machine Learning studio, select __Jobs__ and then select your job (__azure-ml-in10-mins-tutorial__). A job is a grouping of many runs from a specified script or piece of code. Multiple jobs can be grouped together as an experiment.
-Information for the run is stored under that job. If the name doesn't exist when you submit a job, if you select your run you will see various tabs containing metrics, logs, explanations, etc.
+Information for the run is stored under that job. If the name doesn't exist when you submit a job, a new job is created. When you select your run, you'll see various tabs containing metrics, logs, explanations, and more.
## Version control your models with the model registry
-You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. The code below registers and versions the model you trained above. Once you have executed the code cell below you will be able to see the model in the registry by selecting __Models__ in the left-hand menu in Azure Machine Learning studio.
+You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. The code below registers and versions the model you trained above. Once you execute the following code cell, you'll see the model in the registry by selecting __Models__ in the left-hand menu in Azure Machine Learning studio.
```python # register the model
model = mlflow.register_model(model_uri, "sklearn_mnist_model")
## Deploy the model for real-time inference
-In this section you learn how to deploy a model so that an application can consume (inference) the model over REST.
+In this section, learn how to deploy a model so that an application can consume (inference) the model over REST.
### Create deployment configuration
-The code cell gets a _curated environment_, which specifies all the dependencies required to host the model (for example, the packages like scikit-learn). Also, you create a _deployment configuration_, which specifies the amount of compute required to host the model. In this case, the compute will have 1CPU and 1GB memory.
+The code cell gets a _curated environment_, which specifies all the dependencies required to host the model (for example, the packages like scikit-learn). Also, you create a _deployment configuration_, which specifies the amount of compute required to host the model. In this case, the compute has 1 CPU and 1 GB of memory.
```python
from azureml.core.webservice import AciWebservice
# get a curated environment env = Environment.get( workspace=ws,
- name="AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference",
- version=1
+ name="AzureML-sklearn-1.0"
) env.inferencing_stack_version='latest'
aciconfig = AciWebservice.deploy_configuration(
This next code cell deploys the model to Azure Container Instance. > [!NOTE]
-> The deployment takes approximately 3 minutes to complete.**
+> The deployment takes approximately 3 minutes to complete, but it might take longer, perhaps as long as 15 minutes, before the endpoint is available for use.
```python
service = Model.deploy(
service.wait_for_deployment(show_output=True) ```
-The scoring script file referenced in the code above can be found in the same folder as this notebook, and has two functions:
+The scoring script file referenced in the preceding code can be found in the same folder as this notebook, and has two functions:
1. An `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables 1. A `run(data)` function that executes each time a call is made to the service. In this function, you normally format the input data, run a prediction, and output the predicted result. ### View endpoint
-Once the model has been successfully deployed, you can view the endpoint by navigating to __Endpoints__ in the left-hand menu in Azure Machine Learning studio. You will be able to see the state of the endpoint (healthy/unhealthy), logs, and consume (how applications can consume the model).
+Once the model is successfully deployed, you can view the endpoint by navigating to __Endpoints__ in the left-hand menu in Azure Machine Learning studio. You'll see the state of the endpoint (healthy/unhealthy), its logs, and the consume tab, which shows how applications can consume the model.
## Test the model service
If you're not going to continue to use this model, delete the Model service usin
service.delete() ```
-If you want to control cost further, stop the compute instance by selecting the "Stop compute" button next to the **Compute** dropdown. Then start the compute instance again the next time you need it.
+If you want to control cost further, stop the compute instance by selecting the "Stop compute" button next to the **Compute** dropdown. Then start the compute instance again the next time you need it.
+
+<!-- nbend -->
### Delete everything
Use these steps to delete your Azure Machine Learning workspace and all compute
[!INCLUDE [aml-delete-resource-group](../includes/aml-delete-resource-group.md)]
-## Next steps
+## Related resources
+ Learn about all of the [deployment options for Azure Machine Learning](../how-to-deploy-online-endpoints.md). + Learn how to [authenticate to the deployed model](../how-to-authenticate-online-endpoint.md).
network-watcher Network Watcher Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-overview.md
Previously updated : 09/15/2023 Last updated : 04/03/2024 #CustomerIntent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
Network Watcher offers two traffic tools that help you log and visualize network
### Flow logs **Flow logs** allows you to log information about your Azure IP traffic and stores the data in Azure storage. You can log IP traffic flowing through a network security group or Azure virtual network. For more information, see:-- [NSG flow logs](nsg-flow-logs-overview.md) and [Log network traffic to and from a virtual machine](nsg-flow-logs-portal.md).-- [VNet flow logs (preview)](vnet-flow-logs-overview.md) and [Manage VNet flow logs](vnet-flow-logs-powershell.md).
+- [NSG flow logs](nsg-flow-logs-overview.md) and [Manage NSG flow logs](nsg-flow-logs-portal.md).
+- [VNet flow logs (preview)](vnet-flow-logs-overview.md) and [Manage VNet flow logs](vnet-flow-logs-portal.md).
### Traffic analytics
network-watcher Vnet Flow Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-cli.md
# Create, change, enable, disable, or delete VNet flow logs using the Azure CLI
-> [!IMPORTANT]
-> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md). In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure CLI. You can learn how to manage a VNet flow log using [PowerShell](vnet-flow-logs-powershell.md).
+> [!IMPORTANT]
+> The VNet flow logs feature is currently in preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
network-watcher Vnet Flow Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-portal.md
+
+ Title: Manage VNet flow logs - Azure portal
+
+description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using the Azure portal.
++++ Last updated : 04/03/2024+
+#CustomerIntent: As an Azure administrator, I want to log my virtual network IP traffic using Network Watcher VNet flow logs so that I can analyze it later.
++
+# Create, change, enable, disable, or delete VNet flow logs using the Azure portal
+
+Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure portal. You can also learn how to manage a VNet flow log using [Azure PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md).
+
+> [!IMPORTANT]
+> The VNet flow logs feature is currently in preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Insights provider. For more information, see [Register Insights provider](#register-insights-provider).
+
+- A virtual network. If you need to create a virtual network, see [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md?toc=/azure/network-watcher/toc.json).
+
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using the Azure portal](../storage/common/storage-account-create.md?tabs=azure-portal&toc=/azure/network-watcher/toc.json).
+
+## Register Insights provider
+
+The *Microsoft.Insights* provider must be registered to successfully log traffic flowing through a virtual network. If you aren't sure whether the *Microsoft.Insights* provider is registered, check its status in the Azure portal by following these steps:
+
+1. In the search box at the top of the portal, enter *subscriptions*. Select **Subscriptions** from the search results.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/subscriptions.png" alt-text="Screenshot that shows how to search for Subscriptions in the Azure portal." lightbox="./media/vnet-flow-logs-portal/subscriptions.png":::
+
+1. Select the Azure subscription that you want to enable the provider for in **Subscriptions**.
+
+1. Under **Settings**, select **Resource providers**.
+
+1. Enter *insight* in the filter box.
+
+1. Confirm the status of the provider displayed is **Registered**. If the status is **NotRegistered**, select the **Microsoft.Insights** provider then select **Register**.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/register-microsoft-insights.png" alt-text="Screenshot that shows how to register Microsoft Insights provider in the Azure portal.":::
+
+## Create a flow log
+
+Create a flow log for your virtual network, subnet, or network interface. This flow log is saved in an Azure storage account.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In **Network Watcher | Flow logs**, select **+ Create** or the blue **Create flow log** button.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/flow-logs.png" alt-text="Screenshot of Network Watcher flow logs in the Azure portal." lightbox="./media/vnet-flow-logs-portal/flow-logs.png":::
+
+1. On the **Basics** tab of **Create a flow log**, enter or select the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select the Azure subscription of your virtual network that you want to log. |
+ | Flow log type | Select **Virtual Network** then select **+ Select target resource** (available options are: **Virtual network**, **Subnet**, and **Network interface**). <br> Select the resources that you want to log flows for, then select **Confirm selection**. |
+ | Flow Log Name | Enter a name for the flow log or leave the default name. Azure portal uses ***{ResourceName}-{ResourceGroupName}-flowlog*** as a default name for the flow log. |
+ | **Instance details** | |
+ | Subscription | Select the Azure subscription of the storage account. |
+ | Storage Accounts | Select the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
+ | Retention (days) | Enter a retention time for the logs (this option is only available with [Standard general-purpose v2](../storage/common/storage-account-overview.md?toc=/azure/network-watcher/toc.json#types-of-storage-accounts) storage accounts). Enter *0* if you want to retain the flow logs data in the storage account forever (until you manually delete it from the storage account). For information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). |
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/create-vnet-flow-log-basics.png" alt-text="Screenshot that shows the Basics tab of creating a VNet flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/create-vnet-flow-log-basics.png":::
+
+ > [!NOTE]
+ > If the storage account is in a different subscription, the resource that you're logging (virtual network, subnet, or network interface) and the storage account must be associated with the same Microsoft Entra tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md).
+
+1. To enable traffic analytics, select the **Next: Analytics** button, or select the **Analytics** tab. Enter or select the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | Enable traffic analytics | Select the checkbox to enable traffic analytics for your flow log. |
+ | Traffic analytics processing interval | Select the processing interval that you prefer. Available options are **Every 1 hour** and **Every 10 mins**. The default processing interval is every hour. For more information, see [Traffic analytics](traffic-analytics.md). |
+ | Subscription | Select the Azure subscription of your Log Analytics workspace. |
+ | Log Analytics Workspace | Select your Log Analytics workspace. By default, Azure portal creates ***DefaultWorkspace-{SubscriptionID}-{Region}*** Log Analytics workspace in ***defaultresourcegroup-{Region}*** resource group. |
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/create-vnet-flow-log-analytics.png" alt-text="Screenshot that shows how to enable traffic analytics for a new flow log in the Azure portal.":::
+
+ > [!NOTE]
+ > To create and select a Log Analytics workspace other than the default one, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?toc=/azure/network-watcher/toc.json).
+
+1. Select **Review + create**.
+
+1. Review the settings, and then select **Create**.
+
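+If you prefer scripting to portal steps, the following is a rough Python sketch using the `azure-mgmt-network` package. All names and IDs are placeholders, and support for virtual network targets may depend on your installed package and API version:
+
+```python
+# Hedged sketch: create a VNet flow log with the Azure SDK for Python.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+from azure.mgmt.network.models import (
+    FlowLog,
+    FlowLogFormatParameters,
+    RetentionPolicyParameters,
+)
+
+network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+flow_log = FlowLog(
+    location="eastus",
+    target_resource_id="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>",
+    storage_id="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
+    enabled=True,
+    retention_policy=RetentionPolicyParameters(days=0, enabled=False),
+    format=FlowLogFormatParameters(type="JSON", version=2),
+)
+
+poller = network_client.flow_logs.begin_create_or_update(
+    "NetworkWatcherRG", "NetworkWatcher_eastus", "MyVNetFlowLog", flow_log
+)
+print(poller.result().provisioning_state)
+```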
+## Enable or disable traffic analytics
+
+Enable traffic analytics for a flow log to analyze the flow log data. Traffic analytics provides insights into the traffic patterns of your virtual network. You can enable or disable traffic analytics for a flow log at any time.
+
+To enable traffic analytics for a flow log, follow these steps:
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In **Network Watcher | Flow logs**, select the flow log that you want to enable traffic analytics for.
+
+1. In **Flow logs settings**, check the **Enable traffic analytics** checkbox.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/enable-traffic-analytics.png" alt-text="Screenshot that shows how to enable traffic analytics for an existing flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/enable-traffic-analytics.png":::
+
+1. Enter or select the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | Traffic analytics processing interval | Select the processing interval that you prefer. Available options are **Every 1 hour** and **Every 10 mins**. The default processing interval is every hour. For more information, see [Traffic analytics](traffic-analytics.md). |
+ | Subscription | Select the Azure subscription of your Log Analytics workspace. |
+ | Log Analytics Workspace | Select your Log Analytics workspace. By default, Azure portal creates ***DefaultWorkspace-{SubscriptionID}-{Region}*** Log Analytics workspace in ***defaultresourcegroup-{Region}*** resource group. |
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/enable-traffic-analytics-settings.png" alt-text="Screenshot that shows configurations of traffic analytics for an existing flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/enable-traffic-analytics-settings.png":::
+
+1. Select **Save** to apply the changes.
+
+To disable traffic analytics for a flow log, follow the previous steps 1-3, then clear the **Enable traffic analytics** checkbox and select **Save**.
++
+## Change a flow log
+
+You can configure and change a flow log after you create it. For example, you can change the storage account or Log Analytics workspace.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In **Network Watcher | Flow logs**, select the flow log that you want to change.
+
+1. In **Flow logs settings**, you can change any of the following settings:
+
+ - **Storage Account**: Change the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. You can also choose a storage account from a different subscription. If the storage account is in a different subscription, the resource that you're logging (virtual network, subnet, or network interface) and the storage account must be associated with the same Microsoft Entra tenant.
+ - **Retention (days)**: Change the retention time in the storage account (this option is only available with [Standard general-purpose v2](../storage/common/storage-account-overview.md#types-of-storage-accounts) storage accounts). Enter *0* if you want to retain the flow logs data in the storage account forever (until you manually delete the data from the storage account).
+ - **Traffic analytics**: Enable or disable traffic analytics for your flow log. For more information, see [Traffic analytics](traffic-analytics.md).
+ - **Traffic analytics processing interval**: Change the processing interval of traffic analytics (if traffic analytics is enabled). Available options are: one hour and 10 minutes. The default processing interval is every one hour. For more information, see [Traffic analytics](traffic-analytics.md).
+ - **Log Analytics Workspace**: Change the Log Analytics workspace that you want to save the flow logs to (if traffic analytics is enabled). For more information, see [Log Analytics workspace overview](../azure-monitor/logs/log-analytics-workspace-overview.md). You can also choose a Log Analytics Workspace from a different subscription.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/change-flow-log.png" alt-text="Screenshot that shows how to edit flow log's settings in the Azure portal where you can change some VNet flow log settings." lightbox="./media/vnet-flow-logs-portal/change-flow-log.png":::
+
+1. Select **Save** to apply the changes.
+
+## List all flow logs
+
+You can list all flow logs in a subscription or a group of subscriptions. You can also list all flow logs in a region.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. Select **Subscription equals** filter to choose one or more of your subscriptions. You can apply other filters like **Location equals** to list all the flow logs in a region.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/list-flow-logs.png" alt-text="Screenshot that shows how to list existing flow logs in the Azure portal." lightbox="./media/vnet-flow-logs-portal/list-flow-logs.png":::
+
+## View details of a flow log resource
+
+You can view the details and settings of any of your flow log resources.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In **Network Watcher | Flow logs**, select the flow log that you want to see.
+
+1. In **Flow logs settings**, you can view the settings of the flow log resource.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/flow-log-settings.png" alt-text="Screenshot of Flow logs settings page in the Azure portal." lightbox="./media/vnet-flow-logs-portal/flow-log-settings.png":::
+
+1. Select **Discard** to close the settings page without making changes.
+
+## Download a flow log
+
+You can download the flow logs data from the storage account that you saved the flow log to.
+
+1. In the search box at the top of the portal, enter *storage accounts*. Select **Storage accounts** in the search results.
+
+1. Select the storage account you used to store the logs.
+
+1. Under **Data storage**, select **Containers**.
+
+1. Select the **insights-logs-flowlogflowevent** container.
+
+1. In **insights-logs-flowlogflowevent**, navigate the folder hierarchy until you get to the `PT1H.json` file that you want to download. VNet flow log files are stored using the following path:
+
+ ```
+ https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/{subscriptionID}_NETWORKWATCHERRG/NETWORKWATCHER_{Region}_{ResourceName}-{ResourceGroupName}-FLOWLOGS/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+ ```
+
+1. Select the ellipsis **...** to the right of the `PT1H.json` file, then select **Download**.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/flow-log-file-download.png" alt-text="Screenshot shows how to download a VNet flow log data file from the storage account container in the Azure portal." lightbox="./media/vnet-flow-logs-portal/flow-log-file-download.png":::
+
+> [!NOTE]
+> As an alternative way to access and download flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+For information about the structure of a flow log, see [Log format of VNet flow logs](vnet-flow-logs-overview.md#log-format).
+
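+If you want to script the download instead, a rough sketch with the `azure-storage-blob` Python package follows (the connection string and blob prefix are placeholders):
+
+```python
+# Hedged sketch: download a VNet flow log PT1H.json blob from storage.
+from azure.storage.blob import BlobServiceClient
+
+service = BlobServiceClient.from_connection_string("<storage-connection-string>")
+container = service.get_container_client("insights-logs-flowlogflowevent")
+
+# Find the first PT1H.json blob under the flow log prefix and download it.
+for blob in container.list_blobs(name_starts_with="flowLogResourceID="):
+    if blob.name.endswith("PT1H.json"):
+        with open("PT1H.json", "wb") as f:
+            f.write(container.download_blob(blob.name).readall())
+        break
+```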
+## Disable a flow log
+
+You can temporarily disable a VNet flow log without deleting it. Disabling a flow log stops flow logging for the associated virtual network. However, the flow log resource remains with all its settings and associations. You can re-enable it at any time to resume flow logging for the configured virtual network.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In **Network Watcher | Flow logs**, select the checkbox of the flow log that you want to disable.
+
+1. Select **Disable**.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/disable-flow-log.png" alt-text="Screenshot shows how to disable a flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/disable-flow-log.png":::
+
+> [!NOTE]
+> If traffic analytics is enabled for a flow log, you must disable it before you can disable the flow log. To disable traffic analytics, see [Enable or disable traffic analytics](#enable-or-disable-traffic-analytics).
+
+## Enable a flow log
+
+You can enable a VNet flow log that you previously disabled to resume flow logging with the same settings you previously selected.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In **Network Watcher | Flow logs**, select the checkbox of the flow log that you want to enable.
+
+1. Select **Enable**.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/enable-flow-log.png" alt-text="Screenshot shows how to enable a flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/enable-flow-log.png":::
+
+## Delete a flow log
+
+You can permanently delete a VNet flow log. Deleting a flow log deletes all its settings and associations. To begin flow logging again for the same virtual network, you must create a new flow log for it.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Logs**, select **Flow logs**.
+
+1. In **Network Watcher | Flow logs**, select the checkbox of the flow log that you want to delete.
+
+1. Select **Delete**.
+
+ :::image type="content" source="./media/vnet-flow-logs-portal/delete-flow-log.png" alt-text="Screenshot shows how to delete a flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/delete-flow-log.png":::
+
+> [!NOTE]
+> Deleting a flow log doesn't delete the flow log data from the storage account. Flow log data stored in the storage account follows the configured retention policy or stays in the storage account until you manually delete it.
+
+## Related content
+
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
network-watcher Vnet Flow Logs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md
# Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell
-> [!IMPORTANT]
-> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md). In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using Azure PowerShell. You can learn how to manage a VNet flow log using the [Azure CLI](vnet-flow-logs-cli.md).
+> [!IMPORTANT]
+> The VNet flow logs feature is currently in preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
operator-insights Consumption Plane Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/consumption-plane-configure-permissions.md
Title: Manage permissions for Azure Operator Insights consumption plane
-description: This article helps you configure consumption URI permissions for Azure Operator Insights.
+ Title: Manage permissions to the consumption URL for Azure Operator Insights
+description: Learn how to add and remove user permissions to the consumption URL for Azure Operator Insights.
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
Title: Major version upgrade description: Learn about the concepts of in-place major version upgrade with Azure Database for PostgreSQL - Flexible Server.--++ Previously updated : 03/18/2024 Last updated : 4/2/2024
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-Azure Database for PostgreSQL flexible server supports PostgreSQL versions 11, 12, 13, 14, 15, and 16. Postgres community releases a new major version containing new features about once a year. Additionally, major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL flexible server periodically updates the minor versions during customerΓÇÖs maintenance window. Major version upgrades are more complicated than minor version upgrades as they can include internal changes and new features that may not be backward-compatible with existing applications.
+Azure Database for PostgreSQL flexible server supports PostgreSQL versions 16, 15, 14, 13, 12, and 11. The Postgres community releases a new major version containing new features about once a year. Additionally, each major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL flexible server periodically updates the minor versions during a customer's maintenance window. Major version upgrades are more complicated than minor version upgrades because they can include internal changes and new features that might not be backward-compatible with existing applications.
## Overview
If in-place major version upgrade pre-check operations fail, then the upgrade ab
- When upgrading servers with PostGIS extension installed, set the `search_path` server parameter to explicitly include the schemas of the PostGIS extension, extensions that depend on PostGIS, and extensions that serve as dependencies for the below extensions. **e.g postgis,postgis_raster,postgis_sfcgal,postgis_tiger_geocoder,postgis_topology,address_standardizer,address_standardizer_data_us,fuzzystrmatch (required for postgis_tiger_geocoder).** -- Servers configured with logical replication slots aren't supported. --- In-place major version upgrade doesn't yet support upgrading to version 16, our team is actively working on this feature.
+- Servers configured with logical replication slots aren't supported.
## Next steps
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Previously updated : 1/17/2024 Last updated : 4/3/2024 # Monitor metrics on Azure Database for PostgreSQL - Flexible Server
The following metrics are available for an Azure Database for PostgreSQL flexibl
|**CPU Credits Consumed** |`cpu_credits_consumed` |Count |Number of credits used by the flexible server. Applies to the Burstable tier. |Yes | |**CPU Credits Remaining** |`cpu_credits_remaining` |Count |Number of credits available to burst. Applies to the Burstable tier. |Yes | |**CPU percent** |`cpu_percent` |Percent |Percentage of CPU in use. |Yes |
+|**Database Size (preview)** |`database_size_bytes` |Bytes |Database size in bytes. |Yes |
|**Disk Queue Depth** |`disk_queue_depth` |Count |Number of outstanding I/O operations to the data disk. |Yes | |**IOPS** |`iops` |Count |Number of I/O operations to disk per second. |Yes | |**Maximum Used Transaction IDs**|`maximum_used_transactionIDs`|Count |Maximum number of transaction IDs in use. |Yes |
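Any metric in this table can be retrieved programmatically as well. As an illustration, the following sketch queries the database size metric with the Azure CLI; the resource ID segments are placeholders:

```azurecli
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server-name>" \
  --metric database_size_bytes \
  --interval PT1H
```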
postgresql Concepts Networking Ssl Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-ssl-tls.md
There are many connection parameters for configuring the client for SSL. Few imp
For more on SSL\TLS configuration on the client, see [PostgreSQL documentation](https://www.postgresql.org/docs/current/ssl-tcp.html#SSL-CLIENT-CERTIFICATES). > [!NOTE]
-> For clients that use **verify-ca** and **verify-full** sslmode configuration settings, i.e. certificate pinning, they have to accept **both** [DigiCert Global Root G2](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm) root CA certificates, as services are migrating from Digicert to Microsoft CA.
+> For clients that use **verify-ca** and **verify-full** sslmode configuration settings, i.e. certificate pinning, they have to accept **both** root CA certificates:
+> * For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona): [DigiCert Global Root G2](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm) root CA certificates, as services are migrating from DigiCert to Microsoft CA.
+> * For connectivity to servers deployed to Azure public cloud regions worldwide: [DigiCert Global Root CA](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm), as services are migrating from DigiCert to Microsoft CA.
-### Importing Root Certificates in Java Key Store on the client for certificate pinning scenarios
+### Importing Root CA Certificates in Java Key Store on the client for certificate pinning scenarios
Custom-written Java applications use a default keystore, called *cacerts*, which contains trusted certificate authority (CA) certificates. It's also often known as the Java trust store. A certificates file named *cacerts* resides in the security properties directory, java.home\lib\security, where java.home is the runtime environment directory (the jre directory in the SDK or the top-level directory of the Java™ 2 Runtime Environment). You can use the following directions to update client root CA certificates for client certificate pinning scenarios with PostgreSQL Flexible Server: 1. Make a backup copy of your custom keystore.
-2. Download Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root G2 certificates from following URIs:
-For Microsoft RSA Root Certificate Authority 2017 https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt.
-For DigiCert Global Root G2 https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem.
+2. Download the following certificates:
+* For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona), download the Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root G2 certificates from the following URIs:
+  Microsoft RSA Root Certificate Authority 2017: https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt
+  DigiCert Global Root G2: https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
+* For connectivity to servers deployed in Azure public regions worldwide, download the Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root CA certificates from the following URIs:
+  Microsoft RSA Root Certificate Authority 2017: https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt
+  DigiCert Global Root CA: https://cacerts.digicert.com/DigiCertGlobalRootCA.crt
3. Optionally, to prevent future disruption, it's also recommended to add the following roots to the trusted store: Microsoft ECC Root Certificate Authority 2017 - https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt
-4. Generate a combined CA certificate store with both Microsoft RSA Root Certificate Authority 2017 and DigiCertGlobalRootG2 certificates are included. Example below shows using DefaultJavaSSLFactory for PostgreSQL JDBC users
+4. Generate a combined CA certificate store that includes both root CA certificates. The example below shows using DefaultJavaSSLFactory for PostgreSQL JDBC users.
+ * For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona)
```powershell
For DigiCert Global Root G2 https://cacerts.digicert.com/DigiCertGlobalRootG2.c
keytool -importcert -alias PostgreSQLServerCACert2 -file "D:\Microsoft ECC Root Certificate Authority 2017.crt.pem" -keystore truststore -storepass password -noprompt ```
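For reference, a complete pair of import commands for the two Government cloud certificates downloaded in step 2 might look like the following sketch. The drive letter, keystore name, and password are placeholders, and the alias names are arbitrary but must be unique within the keystore:

```powershell
keytool -importcert -alias DigiCertGlobalRootG2 -file "D:\DigiCertGlobalRootG2.crt.pem" -keystore truststore -storepass password -noprompt

keytool -importcert -alias MicrosoftRSARoot2017 -file "D:\Microsoft RSA Root Certificate Authority 2017.crt" -keystore truststore -storepass password -noprompt
```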
+ * For connectivity to servers deployed in Azure public regions worldwide
+```powershell
+
+ keytool -importcert -alias PostgreSQLServerCACert -file "D:\DigiCertGlobalRootCA.crt.pem" -keystore truststore -storepass password -noprompt
+
+keytool -importcert -alias PostgreSQLServerCACert2 -file "D:\Microsoft ECC Root Certificate Authority 2017.crt.pem" -keystore truststore -storepass password -noprompt
+```
+ 5. Replace the original keystore file with the newly generated one: ```java
System.setProperty("javax.net.ssl.trustStorePassword","password");
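// As a sketch, also point the JVM at the combined trust store generated in step 4.
// The path is a placeholder; replace it with the location of your keystore file.
System.setProperty("javax.net.ssl.trustStore","<path-to-truststore>");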
``` 6. Replace the original root CA pem file with the combined root CA file and restart your application/client.
-For more information on configuring client certificates with PostgreSQL JDBC driver see this [documentation](https://jdbc.postgresql.org/documentation/ssl/)
+For more information on configuring client certificates with the PostgreSQL JDBC driver, see the [PostgreSQL JDBC documentation](https://jdbc.postgresql.org/documentation/ssl/)
> [!NOTE] > Azure Database for PostgreSQL - Flexible server doesn't support [certificate based authentication](https://www.postgresql.org/docs/current/auth-cert.html) at this time.
public void whenLoadingCacertsKeyStore_thenCertificatesArePresent() {
assertFalse(certificates.isEmpty()); } ```
-### Updating Root certificates when using clients in Azure App Services with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+### Updating Root CA certificates when using clients in Azure App Services with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
For Azure App Service apps connecting to Azure Database for PostgreSQL, there are two possible scenarios for updating client certificates, depending on how you're using SSL with your application deployed to Azure App Service. * Usually, new certificates are added to App Service at the platform level before changes are made in Azure Database for PostgreSQL - Flexible Server. If you're using the SSL certificates included on the App Service platform in your application, then no action is needed. Consult the following [Azure App Service documentation](../../app-service/configure-ssl-certificate.md) for more information. * If you're explicitly including the path to an SSL certificate file in your code, you need to download the new certificate and update your code to use it. A good example of this scenario is when you use custom containers in App Service, as described in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress)
- ### Updating Root certificates when using clients in Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+ ### Updating Root CA certificates when using clients in Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
If you're trying to connect to Azure Database for PostgreSQL using applications hosted in Azure Kubernetes Service (AKS) and pinning certificates, it's similar to access from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-tls.md).
+### Updating Root CA certificates for .NET (Npgsql) users on Windows with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+
+For .NET (Npgsql) users on Windows connecting to Azure Database for PostgreSQL - Flexible Server instances deployed in Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona), make sure that **both** Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root G2 exist in the Windows Certificate Store, under Trusted Root Certification Authorities. If either certificate doesn't exist, import the missing certificate.
+
+For .NET (Npgsql) users on Windows connecting to Azure Database for PostgreSQL - Flexible Server instances deployed in Azure public regions worldwide, make sure that **both** Microsoft RSA Root Certificate Authority 2017 and DigiCert Global Root CA exist in the Windows Certificate Store, under Trusted Root Certification Authorities. If either certificate doesn't exist, import the missing certificate.
+++
+### Updating Root CA certificates for other clients for certificate pinning scenarios
+
+For other PostgreSQL client users, you can merge the two root CA certificate files into a single file in the following format:
++
+-----BEGIN CERTIFICATE-----
+(Root CA1: DigiCertGlobalRootCA.crt.pem)
+-----END CERTIFICATE-----
+-----BEGIN CERTIFICATE-----
+(Root CA2: Microsoft ECC Root Certificate Authority 2017.crt.pem)
+-----END CERTIFICATE-----
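For example, assuming both certificate files are already in PEM format, you could concatenate them with PowerShell as in this sketch; the input file names match the examples above, and the output file name is a placeholder:

```powershell
Get-Content "DigiCertGlobalRootCA.crt.pem", "Microsoft ECC Root Certificate Authority 2017.crt.pem" | Set-Content "combined-root-ca.pem"
```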
+
+### Read Replicas with certificate pinning scenarios
+
+With the root CA migration to [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm), it's possible for newly created replicas to be on a newer root CA certificate than a primary server created earlier.
+Therefore, for clients that use **verify-ca** and **verify-full** sslmode configuration settings, i.e. certificate pinning, it's imperative for uninterrupted connectivity to accept **both** root CA certificates:
+ * For connectivity to servers deployed to Azure Government cloud regions (US Gov Virginia, US Gov Texas, US Gov Arizona): [DigiCert Global Root G2](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm) root CA certificates, as services are migrating from DigiCert to Microsoft CA.
+ * For connectivity to servers deployed to Azure public cloud regions worldwide: [DigiCert Global Root CA](https://www.digicert.com/kb/digicert-root-certificates.htm) and [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/docs/repository.htm), as services are migrating from DigiCert to Microsoft CA.
++ ## Testing SSL\TLS Connectivity
-Before trying to access your SSL enabled server from client application, make sure you can get to it via psql. You should see output like the following if you have established a SSL connection.
+Before trying to access your SSL-enabled server from a client application, make sure you can get to it via psql. You should see output similar to the following if you established an SSL connection.
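For example, a connection that pins the combined root CA file might look like the following sketch, where the server name, user name, and certificate path are placeholders:

```powershell
psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=postgres user=<admin-user> sslmode=verify-full sslrootcert=<path-to-combined-root-ca>.pem"
```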
*psql (14.5)*
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
description: Learn how to manage read replicas for Azure Database for PostgreSQL
Previously updated : 01/17/2024 Last updated : 04/02/2024
az postgres flexible-server replica create \
Replace `<replica-name>`, `<resource-group>`, `<source-server-name>` and `<location>` with your specific values.
+After the read replica is created, the properties of all servers that are replicas of a primary server can be obtained by using the [`az postgres flexible-server replica list`](/cli/azure/postgres/flexible-server/replica#az-postgres-flexible-server-replica-list) command.
+
+```azurecli-interactive
+az postgres flexible-server replica list \
+ --name <source-server-name> \
+ --resource-group <resource-group>
+```
+
+Replace `<source-server-name>`, and `<resource-group>` with your specific values.
++ #### [REST API](#tab/restapi)
-Initiate an `HTTP PUT` request by using the [create API](/rest/api/postgresql/flexibleserver/servers/create):
+Initiate an `HTTP PUT` request by using the [servers create API](/rest/api/postgresql/flexibleserver/servers/create):
```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-12-01
Here, you need to replace `{subscriptionId}`, `{resourceGroupName}`, and `{repli
} ```
+After the read replica is created, the properties of all servers that are replicas of a primary server can be obtained by initiating an `HTTP GET` request by using the [replicas list by server API](/rest/api/postgresql/flexibleserver/replicas/list-by-server):
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}/replicas?api-version=2022-12-01
+```
+
+Here, you need to replace `{subscriptionId}`, `{resourceGroupName}`, and `{sourceserverName}` with your specific Azure subscription ID, the name of your resource group, and the name of your primary server, respectively.
+
+```json
+[
+ {
+ "administratorLogin": null,
+ "administratorLoginPassword": null,
+ "authConfig": null,
+ "availabilityZone": null,
+ "backup": {
+ "backupRetentionDays": null,
+ "earliestRestoreDate": "2023-11-23T12:55:33.3443218+00:00",
+ "geoRedundantBackup": "Disabled"
+ },
+ "createMode": null,
+ "dataEncryption": {
+ "geoBackupEncryptionKeyStatus": null,
+ "geoBackupKeyUri": null,
+ "geoBackupUserAssignedIdentityId": null,
+ "primaryEncryptionKeyStatus": null,
+ "primaryKeyUri": null,
+ "primaryUserAssignedIdentityId": null,
+ "type": "SystemManaged"
+ },
+ "fullyQualifiedDomainName": null,
+ "highAvailability": null,
+ "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{replicaserverName}",
+ "identity": null,
+ "location": "eastus",
+ "maintenanceWindow": {
+ "customWindow": "Disabled",
+ "dayOfWeek": 0,
+ "startHour": 0,
+ "startMinute": 0
+ },
+ "minorVersion": null,
+ "name": "{replicaserverName}",
+ "network": {
+ "delegatedSubnetResourceId": null,
+ "privateDnsZoneArmResourceId": null,
+ "publicNetworkAccess": "Disabled"
+ },
+ "pointInTimeUtc": null,
+ "privateEndpointConnections": null,
+ "replica": {
+ "capacity": null,
+ "promoteMode": null,
+ "promoteOption": null,
+ "replicationState": "Active",
+ "role": "AsyncReplica"
+ },
+ "replicaCapacity": null,
+ "replicationRole": "AsyncReplica",
+ "resourceGroup": "{resourceGroupName}",
+ "sku": {
+ "name": "",
+ "tier": null
+ },
+ "sourceServerResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{serverName}",
+ "state": "Ready",
+ "storage": {
+ "autoGrow": "Disabled",
+ "iops": null,
+ "storageSizeGb": 0,
+ "throughput": null,
+ "tier": null,
+ "type": null
+ },
+ "systemData": {
+ "createdAt": "2023-11-22T17:11:42.2461489Z",
+ "createdBy": null,
+ "createdByType": null,
+ "lastModifiedAt": null,
+ "lastModifiedBy": null,
+ "lastModifiedByType": null
+ },
+ "tags": null,
+ "type": "Microsoft.DBforPostgreSQL/flexibleServers",
+ "version": null
+ }
+]
+```
- Set the replica server name.
Replace `<resource-group>`, `<source-server-name>` and `<location>` with your sp
#### [REST API](#tab/restapi)
-You can create a secondary read replica by using the [create API](/rest/api/postgresql/flexibleserver/servers/create):
+You can create a secondary read replica by using the [servers create API](/rest/api/postgresql/flexibleserver/servers/create):
```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-12-01
az postgres flexible-server delete \
Replace `<resource-group>` and `<server-name>` with your resource group name and the replica server name you wish to delete. #### [REST API](#tab/restapi)
-To delete a primary or replica server, use the [delete API](/rest/api/postgresql/flexibleserver/servers/delete). If server has read replicas then read replicas should be deleted first before deleting the primary server.
+To delete a primary or replica server, use the [servers delete API](/rest/api/postgresql/flexibleserver/servers/delete). If a server has read replicas, delete the read replicas before deleting the primary server.
```http DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{replicaserverName}?api-version=2022-12-01
az postgres flexible-server delete \
Replace `<resource-group>` and `<server-name>` with your resource group name and the primary server name you wish to delete. #### [REST API](#tab/restapi)
-To delete a primary or replica server, use the [delete API](/rest/api/postgresql/flexibleserver/servers/delete). If server has read replicas then read replicas should be deleted first before deleting the primary server.
+To delete a primary or replica server, use the [servers delete API](/rest/api/postgresql/flexibleserver/servers/delete). If a server has read replicas, delete the read replicas before deleting the primary server.
```http DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{sourceserverName}?api-version=2022-12-01
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: March 2024 * Public preview of [Major Version Upgrade Support for PostgreSQL 16](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server.
+* Public preview of [real-time language translations](generative-ai-azure-cognitive.md#language-translation) with azure_ai extension on Azure Database for PostgreSQL flexible server.
+* Public preview of [real-time machine learning predictions](generative-ai-azure-machine-learning.md) with azure_ai extension on Azure Database for PostgreSQL flexible server.
+* General availability of version 0.6.0 of [vector](how-to-use-pgvector.md) extension on Azure Database for PostgreSQL flexible server.
## Release: February 2024 * Support for [minor versions](./concepts-supported-versions.md) 16.1, 15.5, 14.10, 13.13, 12.17, 11.22 <sup>$</sup>
postgresql Concepts Known Issues Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-known-issues-migration-service.md
Here are common limitations that apply to migration scenarios:
- The migration service doesn't support superuser privileges and objects.
+- Azure Database for PostgreSQL - Flexible Server does not support the creation of custom tablespaces due to superuser privilege restrictions. During migration, data from custom tablespaces in the source PostgreSQL instance is migrated into the default tablespaces of the target Azure Database for PostgreSQL - Flexible Server.
+ - The following PostgreSQL objects can't be migrated into the PostgreSQL flexible server target: - Create casts - Creation of FTS parsers and FTS templates
postgresql Concepts User Roles Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-user-roles-migration-service.md
> [!IMPORTANT] > The migration of user roles, ownerships, and privileges feature is available only for the Azure Database for PostgreSQL Single server as the source. This feature is currently disabled for PostgreSQL version 16 servers.
-The service automatically provides the following built-in capabilities for the Azure Database for PostgreSQL single server as the source and data migration.
+The migration service automatically provides the following built-in capabilities for data migration with Azure Database for PostgreSQL single server as the source.
- Migration of user roles on your source server to the target server. - Migration of ownership of all the database objects on your source server to the target server.
private-5g-core Azure Stack Edge Virtual Machine Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-virtual-machine-sizing.md
Title: Azure Stack Edge virtual machine sizing
-description: Learn about the VMs that Azure Private 5G Core uses when running on an Azure Stack Edge device.
+ Title: Service limits and resource usage
+description: Learn about the limits and resource usage of your Azure Private 5G Core deployment when running on an Azure Stack Edge device.
Previously updated : 09/29/2023 Last updated : 02/27/2024
-# Azure Stack Edge virtual machine sizing
+# Service limits and resource usage
+
+This article describes the maximum supported limits of the Azure Private 5G Core solution and the hardware resources required. You should use this information to help choose the appropriate AP5GC service package and Azure Stack Edge hardware for your needs. Refer to [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/) and [Azure Stack Edge pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/) for the package options and overage rates.
+
+## Service limits
+
+The following table lists the maximum supported limits for a range of parameters in an Azure Private 5G Core deployment. These limits have been confirmed through testing, but other factors may affect what is achievable in a given scenario. For example, usage patterns, UE types and third-party network elements may impact one or more of these parameters. It is important to test the limits of your deployment before launching a live service.
+
+| Element | Maximum supported |
+||-|
+| PDU sessions | Enterprise radios typically support up to 1000 simultaneous PDU sessions per radio |
+| Bandwidth | Over 25 Gbps per ASE |
+| RAN nodes (eNB/gNB) | 200 per packet core |
+| UEs | 10,000 per deployment (all sites) |
+| SIMs | 1000 per ASE |
+| SIM provisioning | 1000 per API call |
+
+Your chosen service package may define lower limits, with overage charges for exceeding them - see [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/) for details. If you require higher throughput for your use case, please contact us to discuss your needs.
+
+## Azure Stack Edge virtual machine sizing
The following table lists the hardware resources that Azure Private 5G Core (AP5GC) uses when running on supported Azure Stack Edge (ASE) devices.
private-5g-core Data Plane Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/data-plane-packet-capture.md
# Perform packet capture on a packet core instance
-Packet capture for control or data plane packets is performed using the **UPF Trace** tool. UPF Trace is similar to **tcpdump**, a data-network packet analyzer computer program that runs on a command line interface (CLI). You can use UPF Trace to monitor and record packets on any user plane interface on the access network (N3 interface) or data network (N6 interface) on your device, as well as the control plane (N2 interface). You can access UPF Trace using the Azure portal or the Azure CLI.
+Packet capture for control or data plane packets is performed using the **MEC-Dataplane Trace** tool. MEC-Dataplane (MEC-DP) Trace is similar to **tcpdump**, a data-network packet analyzer computer program that runs on a command line interface (CLI). You can use MEC-DP Trace to monitor and record packets on any user plane interface on the access network (N3 interface) or data network (N6 interface) on your device, as well as the control plane (N2 interface). You can access MEC-DP Trace using the Azure portal or the Azure CLI.
Packet capture works by mirroring packets to a Linux kernel interface, which can then be monitored using tcpdump. In this how-to guide, you'll learn how to perform packet capture on a packet core instance.
To perform packet capture using the command line, you must:
## Performing packet capture using the Azure CLI
-1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, enter the UPF-PP troubleshooter pod:
+1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, enter the MEC-DP troubleshooter pod:
```azurecli
- kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- bash
+ kubectl exec -it -n core core-mec-dp-0 -c troubleshooter -- bash
``` 1. View the list of configured user plane interfaces: ```azurecli
- upft list
+ mect list
``` This should report a single interface on the control plane network (N2), a single interface on the access network (N3) and an interface for each attached data network (N6). For example:
To perform packet capture using the command line, you must:
n6trace2 (Data Network: test) ```
-1. Run `upftdump` with any parameters that you would usually pass to tcpdump. In particular, `-i` to specify the interface, and `-w` to specify where to write to. Close the UPFT tool when done by pressing <kbd>Ctrl + C</kbd>. The following examples are common use cases:
- - To run capture packets on all interfaces run `upftdump -i any -w any.pcap`
- - To run capture packets for the N3 interface and the N6 interface for a single data network, enter the UPF-PP troubleshooter pod in two separate windows. In one window run `upftdump -i n3trace -w n3.pcap` and in the other window run `upftdump -i <N6 interface> -w n6.pcap` (use the N6 interface for the data network as identified in step 2).
+1. Run `mectdump` with any parameters that you would usually pass to tcpdump. In particular, use `-i` to specify the interface and `-w` to specify the output file. Close the tool when finished by pressing <kbd>Ctrl + C</kbd>. The following examples are common use cases:
+ - To capture packets on all interfaces, run `mectdump -i any -w any.pcap`
+ - To capture packets on the N3 interface and the N6 interface for a single data network, enter the MEC-DP troubleshooter pod in two separate windows. In one window run `mectdump -i n3trace -w n3.pcap`, and in the other window run `mectdump -i <N6 interface> -w n6.pcap` (use the N6 interface for the data network as identified in step 2).
> [!IMPORTANT] > Packet capture files might be large, particularly when running packet capture on all interfaces. Specify filters when running packet capture to reduce the file size - see the tcpdump documentation for the available filters.
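Because `mectdump` accepts standard tcpdump parameters, you can apply a capture filter directly. As a sketch, the following limits an N3 capture to GTP-U traffic, assuming your deployment uses the standard GTP-U UDP port 2152:

```azurecli
mectdump -i n3trace -w n3-gtpu.pcap port 2152
```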
To perform packet capture using the command line, you must:
1. Copy the output files: ```azurecli
- kubectl cp -n core core-upf-pp-0:<path to output file> <location to copy to> -c troubleshooter
+ kubectl cp -n core core-mec-dp-0:<path to output file> <location to copy to> -c troubleshooter
The `tcpdump` might have been stopped in the middle of writing a packet, which can cause this step to produce an error stating `unexpected EOF`. However, your file should still have copied successfully; check your target output file to confirm.
To perform packet capture using the command line, you must:
1. Remove the output files: ```azurecli
- kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- rm <path to output file>
+ kubectl exec -it -n core core-mec-dp-0 -c troubleshooter -- rm <path to output file>
``` ## Next steps
private-5g-core Ping Traceroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ping-traceroute.md
To access the local UI, see [Tutorial: Connect to Azure Stack Edge Pro with GPU]
## Run the ping and traceroute tools
-1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, enter the UPF-PP troubleshooter pod:
+1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, enter the MEC-DP troubleshooter pod:
```azurecli
- kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- bash
+ kubectl exec -it -n core core-mec-dp-0 -c troubleshooter -- bash
``` 1. View the list of configured user plane interfaces: ```azurecli
- upft list
+ mect list
``` This should report a single interface on the control plane network (N2), a single interface on the access network (N3) and an interface for each attached data network (N6). For example:
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-create-indexers.md
- ignite-2023 Previously updated : 10/05/2023 Last updated : 03/28/2024 # Create an indexer in Azure AI Search
There are several ways to run an indexer:
Scheduled execution is usually implemented when you have a need for incremental indexing so that you can pick up the latest changes. As such, scheduling has a dependency on change detection.
+Indexers are one of the few subsystems that make overt outbound calls to other Azure resources. In terms of Azure roles, indexers don't have separate identities: a connection from the search engine to another Azure resource is made using the [system or user-assigned managed identity](search-howto-managed-identities-data-sources.md) of a search service. If the indexer connects to an Azure resource on a virtual network, you should create a [shared private link](search-indexer-howto-access-private.md) for that connection, as sketched below. For more information about secure connections, see [Security in Azure AI Search](search-security-overview.md).
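For example, a shared private link to a storage account's blob endpoint can be created with the Azure CLI. This is a sketch only; the link name is hypothetical and the other values are placeholders:

```azurecli
az search shared-private-link-resource create \
  --service-name <search-service-name> \
  --resource-group <resource-group> \
  --name blob-shared-private-link \
  --group-id blob \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```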
+ ## Check results [Monitor indexer status](search-howto-monitor-indexers.md) to check for status. Successful execution can still include warning and notifications. Be sure to check both successful and failed status notifications for details about the job.
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
- ignite-2023 Previously updated : 12/18/2023 Last updated : 04/02/2024 # Connect a search service to other Azure resources using a managed identity
A search service uses Azure Storage as an indexer data source and as a data sink
| [Enrichment cache (hosted in Azure Storage)](search-howto-incremental-index.md) <sup>1,</sup> <sup>2</sup> | Yes | Yes | | [Knowledge Store (hosted in Azure Storage)](knowledge-store-create-rest.md) <sup>1</sup>| Yes | Yes | | [Custom skills (hosted in Azure Functions or equivalent)](cognitive-search-custom-skill-interface.md) | Yes | Yes |
+| [Azure OpenAI embedding skill](cognitive-search-skill-azure-openai-embedding.md) | Yes | Yes |
+| [Azure OpenAI vectorizer](vector-search-how-to-configure-vectorizer.md) | Yes | Yes |
<sup>1</sup> For connectivity between search and storage, your network security configuration imposes constraints on which type of managed identity you can use. Only a system managed identity can be used for a same-region connection to storage via the trusted service exception or resource instance rule. See [Access to a network-protected storage account](search-indexer-securing-resources.md#access-to-a-network-protected-storage-account) for details.
The following steps are for Azure Storage. If your resource is Azure Cosmos DB o
| Write to a knowledge store | Add **Storage Blob Data Contributor** for object and file projections, and **Reader and Data Access** for table projections. | | Write to an enrichment cache | Add **Storage Blob Data Contributor** | | Save debug session state | Add **Storage Blob Data Contributor** |
+ | Embedding data (vectorizing) using Azure OpenAI embedding models | Add **Cognitive Services OpenAI User** |
1. On the **Members** page, select **Managed Identity**.
A custom skill targets the endpoint of an Azure function or app hosting custom c
"outputs": [ ...] } ```
+[**Azure OpenAI embedding skill**](cognitive-search-skill-azure-openai-embedding.md) and [**Azure OpenAI vectorizer**](vector-search-how-to-configure-vectorizer.md):
+
+An Azure OpenAI embedding skill and vectorizer in AI Search target the endpoint of an Azure OpenAI service that hosts an embedding model. The endpoint is specified in the [Azure OpenAI embedding skill definition](cognitive-search-skill-azure-openai-embedding.md) or in the [Azure OpenAI vectorizer definition](vector-search-how-to-configure-vectorizer.md). The system-assigned managed identity is used if it's configured and if the "apikey" and "authIdentity" properties are empty. The "authIdentity" property is used for a user-assigned managed identity only.
+
+
+```json
+{
+ "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
+ "description": "Connects a deployed embedding model.",
+ "resourceUri": "https://url.openai.azure.com/",
+ "deploymentId": "text-embedding-ada-002",
+ "inputs": [
+ {
+ "name": "text",
+ "source": "/document/content"
+ }
+ ],
+ "outputs": [
+ {
+ "name": "embedding"
+ }
+ ]
+}
+```
+
+```json
+ "vectorizers": [
+ {
+ "name": "my_azure_open_ai_vectorizer",
+ "kind": "azureOpenAI",
+ "azureOpenAIParameters": {
+ "resourceUri": "https://url.openai.azure.com",
+ "deploymentId": "text-embedding-ada-002"
+ }
+ }
+ ]
+```
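If you use a user-assigned managed identity instead, the skill or vectorizer definition carries an `authIdentity` object. The following is a sketch only; the identity resource ID is a placeholder, and you should confirm the exact property shape against the skill and vectorizer reference pages:

```json
"authIdentity": {
    "@odata.type": "#Microsoft.Azure.Search.DataUserAssignedIdentity",
    "userAssignedIdentity": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
}
```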
+ ## See also
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-run-reset-indexers.md
- ignite-2023 Previously updated : 02/26/2024 Last updated : 03/28/2024 # Run or reset indexers, skills, or documents
In Azure AI Search, there are several ways to run an indexer:
This article explains how to run indexers on demand, with and without a reset. It also describes indexer execution, duration, and concurrency.
+## How indexers connect to Azure resources
+
+Indexers are one of the few subsystems that make overt outbound calls to other Azure resources. In terms of Azure roles, indexers don't have separate identities: a connection from the search engine to another Azure resource is made using the [system or user-assigned managed identity](search-howto-managed-identities-data-sources.md) of a search service. If the indexer connects to an Azure resource on a virtual network, you should create a [shared private link](search-indexer-howto-access-private.md) for that connection. For more information about secure connections, see [Security in Azure AI Search](search-security-overview.md).
+ ## Indexer execution A search service runs one indexer job per [search unit](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards). Every search service starts with one search unit, but each new partition or replica increases the search units of your service. You can check the search unit count in the portal's Essential section of the **Overview** page. If you need concurrent processing, make sure you have sufficient replicas. Indexers don't run in the background, so you might detect more query throttling than usual if the service is under pressure.
search Search Security Manage Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-manage-encryption-keys.md
Previously updated : 01/20/2024 Last updated : 04/03/2024 - references_regions - ignite-2023
This article walks you through the steps of setting up customer-managed key (CMK
+ CMK encryption is enacted on individual objects. If you require CMK across your search service, [set an enforcement policy](#encryption-enforcement-policy).
-+ CMK encryption depends on [Azure Key Vault](../key-vault/general/overview.md). You can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault APIs to generate encryption keys.
++ CMK encryption depends on [Azure Key Vault](../key-vault/general/overview.md). You can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault APIs to generate encryption keys (a CLI sketch follows this list). Azure Key Vault must be in the same subscription and tenant as Azure AI Search. Azure AI Search retrieves your managed key by connecting through a system-assigned or user-assigned managed identity. This behavior requires both services to share the same tenant. + CMK encryption becomes operational when an object is created. You can't encrypt objects that already exist. CMK encryption occurs whenever an object is saved to disk, either data at rest for long-term storage or temporary data for short-term storage. With CMK, the disk never sees unencrypted data.
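For example, an RSA key suitable for CMK can be generated with the Azure CLI, as in this sketch; the vault and key names are placeholders:

```azurecli
az keyvault key create --vault-name <vault-name> --name <key-name> --kty RSA --size 2048
```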
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
- ignite-2023 Previously updated : 03/25/2024 Last updated : 04/03/2024 # Security overview for Azure AI Search
At a minimum, all inbound requests must be authenticated using either of these o
Additionally, you can add [network security features](#service-access-and-authentication) to further restrict access to the endpoint. You can create either inbound rules in an IP firewall, or create private endpoints that fully shield your search service from the public internet.
+### Internal traffic
+
+Internal requests are secured and managed by Microsoft. You can't configure or control these connections. If you're locking down network access, no action on your part is required because internal traffic isn't customer-configurable.
+
+Internal traffic consists of:
+++ Service-to-service calls for tasks like authentication and authorization through Microsoft Entra ID, resource logging sent to Azure Monitor, and [private endpoint connections](service-create-private-endpoint.md) that utilize Azure Private Link.++ Requests made to Azure AI services APIs for [built-in skills](cognitive-search-predefined-skills.md).++ Requests made to the machine learning models that support [semantic ranking](semantic-search-overview.md#availability-and-pricing).+ ### Outbound traffic
-Outbound requests from a search service to other applications are typically made by indexers for text-based indexing, skills-based AI enrichment, and vectorization. Outbound requests include both read and write operations.
+Outbound requests can be secured and managed by you. Outbound requests originate from a search service to other applications. These requests are typically made by indexers for text-based indexing, skills-based AI enrichment, and vectorizations at query time. Outbound requests include both read and write operations.
-The following list is a full enumeration of the outbound requests that can be made by a search service. A search service makes requests on its own behalf, and on the behalf of an indexer or custom skill.
+The following list is a full enumeration of the outbound requests for which you can configure secure connections. A search service makes requests on its own behalf, and on behalf of an indexer or custom skill.
| Operation | Scenario | | -| -- | | Indexers | Connect to external data sources to retrieve data. For more information, see [Indexer access to content protected by Azure network security](search-indexer-securing-resources.md). | | Indexers | Connect to Azure Storage to persist [knowledge stores](knowledge-store-concept-intro.md), [cached enrichments](cognitive-search-incremental-indexing-conceptual.md), [debug sessions](cognitive-search-debug-session.md). | | Custom skills | Connect to Azure functions, Azure web apps, or other apps running external code that's hosted off-service. The request for external processing is sent during skillset execution. |
-| Indexers and [integrated vectorization](vector-search-integrated-vectorization.md) | Connect to Azure OpenAI and a deployed embedding model, or it goes through a custom skill to connect to an embedding model that you provide. The search service sends text to embedding models for vectorization during indexing or query execution. |
-| Search service | Connect to Azure Key Vault for customer-managed keys, used to encrypt and decrypt sensitive data. |
+| Indexers and [integrated vectorization](vector-search-integrated-vectorization.md) | Connect to Azure OpenAI and a deployed embedding model, or it goes through a custom skill to connect to an embedding model that you provide. The search service sends text to embedding models for vectorization during indexing. |
+| Vectorizers | Connect to Azure OpenAI or other embedding models at query time to [convert user text strings to vectors](vector-search-how-to-configure-vectorizer.md) for vector search. |
+| Search service | Connect to Azure Key Vault for [customer-managed encryption keys](search-security-manage-encryption-keys.md), used to encrypt and decrypt sensitive data. |
Outbound connections can be made using a resource's full access connection string that includes a key or a database login, or [a managed identity](search-howto-managed-identities-data-sources.md) if you're using Microsoft Entra ID and role-based access.
-To reach Azure resources behind a firewall, [create inbound rules that admit search service requests](search-indexer-howto-access-ip-restricted.md).
+To reach Azure resources behind a firewall, [create inbound rules on other Azure resources that admit search service requests](search-indexer-howto-access-ip-restricted.md).
To reach Azure resources protected by Azure Private Link, [create a shared private link](search-indexer-howto-access-private.md) that an indexer uses to make its connection.
Configure same-region connections using either of the following approaches:
+ [Trusted service exception](search-indexer-howto-access-trusted-service-exception.md) + [Resource instance rules](/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-azure-resource-instances)
-### Internal traffic
-
-Internal requests are secured and managed by Microsoft. You can't configure or control these connections. If you're locking down network access, no action on your part is required because internal traffic isn't customer-configurable.
-
-Internal traffic consists of:
-
-+ Service-to-service calls for tasks like authentication and authorization through Microsoft Entra ID, resource logging sent to Azure Monitor, and private endpoint connections that utilize Azure Private Link.
-+ Requests made to Azure AI services APIs for [built-in skills](cognitive-search-predefined-skills.md).
-+ Requests made to the machine learning models that support [semantic ranking](semantic-search-overview.md#availability-and-pricing).
- <a name="service-access-and-authentication"></a> ## Network security
sentinel Add Advanced Conditions To Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-advanced-conditions-to-automation-rules.md
Title: Add advanced conditions to Microsoft Sentinel automation rules description: This article explains how to add complex, advanced "Or" conditions to automation rules in Microsoft Sentinel, for more effective triage of incidents.-- Previously updated : 05/09/2023++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++ # Add advanced conditions to Microsoft Sentinel automation rules
Condition groups can contain two levels of conditions:
You can see that this capability affords you great power and flexibility in determining when rules will run. It can also greatly increase your efficiency by enabling you to combine many old automation rules into one new rule. + ## Add a condition group Since condition groups offer a lot more power and flexibility in creating automation rules, the best way to explain how to do this is by presenting some examples. Let's create a rule that will change the severity of an incoming incident from whatever it is to High, assuming it meets the conditions we'll set.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
+ 1. From the **Automation** page, select **Create > Automation rule** from the button bar at the top. See the [general instructions for creating an automation rule](create-manage-use-automation-rules.md) for details.
Let's create a rule that will change the severity of an incoming incident from w
1. Select the trigger **When incident is created**.
-1. Under **Conditions**, leave the **Incident provider** and **Analytics rule name** conditions as they are. We'll add more conditions below.
+1. Under **Conditions**, if you see the **Incident provider** and **Analytics rule name** conditions, leave them as they are. These conditions aren't available if your workspace is onboarded to the unified security operations platform. In either case, we'll add more conditions later in this process.
1. Under **Actions**, select **Change severity** from the drop-down list. 1. Select **High** from the drop-down list that appears below **Change severity**.
+For example, the following tabs show samples from a workspace that's onboarded to the unified security operations platform, in either the Azure or Defender portals, and a workspace that isn't:
+
+### [Onboarded workspaces](#tab/after-onboarding)
++
+### [Workspaces that aren't onboarded](#tab/before-onboarding)
+ :::image type="content" source="media/add-advanced-conditions-to-automation-rules/create-automation-rule-no-conditions.png" alt-text="Screenshot of creating new automation rule without adding conditions."::: + ## Example 1: simple conditions In this first example, we'll create a simple condition group: If either condition A **or** condition B is true, the rule will run and the incident's severity will be set to *High*.
sentinel Add Entity To Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-entity-to-threat-intelligence.md
Title: Add entities to threat intelligence in Microsoft Sentinel
-description: This article shows you, if you discover a malicious entity in an incident investigation, how to add the entity to your threat intelligence indicator lists in Microsoft Sentinel.
+ Title: Add entities to threat intelligence
+
+description: Learn how to add a malicious entity discovered in an incident investigation to your threat intelligence in Microsoft Sentinel.
Previously updated : 01/17/2023 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+
+#Customer intent: As a security analyst, I want to quickly add relevant threat intelligence from my investigation for myself and others so I don't lose important information.
# Add entities to threat intelligence in Microsoft Sentinel
-When investigating an incident, you examine entities and their context as an important part of understanding the scope and nature of the incident. In the course of the investigation, you may discover a domain name, URL, file, or IP address in the incident that should be labeled and tracked as an indicator of compromise (IOC), a threat indicator.
+During an investigation, you examine entities and their context as an important part of understanding the scope and nature of an incident. When you discover a malicious entity in the incident, such as a domain name, URL, file, or IP address, it should be labeled and tracked as an indicator of compromise (IOC) in your threat intelligence.
-For example, you may discover an IP address performing port scans across your network, or functioning as a command and control node, sending and/or receiving transmissions from large numbers of nodes in your network.
+For example, you discover an IP address performing port scans across your network, or functioning as a command and control node, sending and/or receiving transmissions from large numbers of nodes in your network.
-Microsoft Sentinel allows you to flag these types of entities as malicious, right from within your incident investigation, and add it to your threat indicator lists. You'll then be able to view the added indicators both in Logs and in the Threat Intelligence blade, and use them across your Microsoft Sentinel workspace.
+Microsoft Sentinel allows you to flag these types of entities right from within your incident investigation, and add them to your threat intelligence. You can view the added indicators both in **Logs** and **Threat Intelligence**, and use them across your Microsoft Sentinel workspace.
-## Add an entity to your indicators list
+## Add an entity to your threat intelligence
The new [incident details page](investigate-incidents.md) gives you another way to add entities to threat intelligence, in addition to the investigation graph. Both ways are shown below.
Whichever of the two interfaces you choose, you will end up here:
1. The entity will be added as a threat indicator in your workspace. You can find it [in the list of indicators in the **Threat intelligence** page](work-with-threat-indicators.md#find-and-view-your-indicators-in-the-threat-intelligence-page), and also [in the *ThreatIntelligenceIndicators* table in **Logs**](work-with-threat-indicators.md#find-and-view-your-indicators-in-logs).
-## Next steps
+## Related content
In this article, you learned how to add entities to your threat indicator lists. For more information, see:
sentinel Anomalies Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/anomalies-reference.md
Title: Anomalies detected by the Microsoft Sentinel machine learning engine
description: Learn about the anomalies detected by Microsoft Sentinel's machine learning engines. Previously updated : 06/13/2022 Last updated : 03/17/2024
Microsoft Sentinel uses two different models to create baselines and detect anom
- [UEBA anomalies](#ueba-anomalies) - [Machine learning-based anomalies](#machine-learning-based-anomalies) + ## UEBA anomalies Sentinel UEBA detects anomalies based on dynamic baselines created for each entity across various data inputs. Each entity's baseline behavior is set according to its own historical activities, those of its peers, and those of the organization as a whole. Anomalies can be triggered by the correlation of different attributes such as action type, geo-location, device, resource, ISP, and more.
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK techniques:** | T1531 - Account Access Removal | | **Activity:** | Microsoft.Authorization/roleAssignments/delete<br>Log Out |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Account Creation
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK sub-techniques:** | Cloud Account | | **Activity:** | Core Directory/UserManagement/Add user |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Account Deletion
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK techniques:** | T1531 - Account Access Removal | | **Activity:** | Core Directory/UserManagement/Delete user<br>Core Directory/Device/Delete user<br>Core Directory/UserManagement/Delete user |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Account Manipulation
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK techniques:** | T1098 - Account Manipulation | | **Activity:** | Core Directory/UserManagement/Update user |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Code Execution (UEBA)
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK sub-techniques:** | PowerShell | | **Activity:** | Microsoft.Compute/virtualMachines/runCommand/action |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Data Destruction
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK techniques:** | T1485 - Data Destruction | | **Activity:** | Microsoft.Compute/disks/delete<br>Microsoft.Compute/galleries/images/delete<br>Microsoft.Compute/hostGroups/delete<br>Microsoft.Compute/hostGroups/hosts/delete<br>Microsoft.Compute/images/delete<br>Microsoft.Compute/virtualMachines/delete<br>Microsoft.Compute/virtualMachineScaleSets/delete<br>Microsoft.Compute/virtualMachineScaleSets/virtualMachines/delete<br>Microsoft.Devices/digitalTwins/Delete<br>Microsoft.Devices/iotHubs/Delete<br>Microsoft.KeyVault/vaults/delete<br>Microsoft.Logic/integrationAccounts/delete  <br>Microsoft.Logic/integrationAccounts/maps/delete <br>Microsoft.Logic/integrationAccounts/schemas/delete <br>Microsoft.Logic/integrationAccounts/partners/delete <br>Microsoft.Logic/integrationServiceEnvironments/delete<br>Microsoft.Logic/workflows/delete<br>Microsoft.Resources/subscriptions/resourceGroups/delete<br>Microsoft.Sql/instancePools/delete<br>Microsoft.Sql/managedInstances/delete<br>Microsoft.Sql/managedInstances/administrators/delete<br>Microsoft.Sql/managedInstances/databases/delete<br>Microsoft.Storage/storageAccounts/delete<br>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete<br>Microsoft.Storage/storageAccounts/fileServices/fileshares/files/delete<br>Microsoft.Storage/storageAccounts/blobServices/containers/delete<br>Microsoft.AAD/domainServices/delete |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Defensive Mechanism Modification
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK sub-techniques:** | Disable or Modify Tools<br>Disable or Modify Cloud Firewall | | **Activity:** | Microsoft.Sql/managedInstances/databases/vulnerabilityAssessments/rules/baselines/delete<br>Microsoft.Sql/managedInstances/databases/vulnerabilityAssessments/delete<br>Microsoft.Network/networkSecurityGroups/securityRules/delete<br>Microsoft.Network/networkSecurityGroups/delete<br>Microsoft.Network/ddosProtectionPlans/delete<br>Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/delete<br>Microsoft.Network/applicationSecurityGroups/delete<br>Microsoft.Authorization/policyAssignments/delete<br>Microsoft.Sql/servers/firewallRules/delete<br>Microsoft.Network/firewallPolicies/delete<br>Microsoft.Network/azurefirewalls/delete |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Failed Sign-in
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK techniques:** | T1110 - Brute Force | | **Activity:** | **Microsoft Entra ID:** Sign-in activity<br>**Windows Security:** Failed login (Event ID 4625) |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Password Reset
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK techniques:** | T1531 - Account Access Removal | | **Activity:** | Core Directory/UserManagement/User password reset |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Privilege Granted
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK sub-techniques:** | Additional Azure Service Principal Credentials | | **Activity:** | Account provisioning/Application Management/Add app role assignment to service principal |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Sign-in
You must [enable the UEBA feature](enable-entity-behavior-analytics.md) for UEBA
| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts | | **Activity:** | **Microsoft Entra ID:** Sign-in activity<br>**Windows Security:** Successful login (Event ID 4624) |
-[Back to UEBA anomalies list](#ueba-anomalies)
+[Back to UEBA anomalies list](#ueba-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
## Machine learning-based anomalies
Microsoft Sentinel's customizable, machine learning-based anomalies can identify
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts<br>T1566 - Phishing<br>T1133 - External Remote Services |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Azure operations
Microsoft Sentinel's customizable, machine learning-based anomalies can identify
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1190 - Exploit Public-Facing Application |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous Code Execution
Microsoft Sentinel's customizable, machine learning-based anomalies can identify
| **MITRE ATT&CK tactics:** | Execution | | **MITRE ATT&CK techniques:** | T1059 - Command and Scripting Interpreter |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous local account creation
Microsoft Sentinel's customizable, machine learning-based anomalies can identify
| **MITRE ATT&CK tactics:** | Persistence | | **MITRE ATT&CK techniques:** | T1136 - Create Account |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous scanning activity
Configuration details:
| **MITRE ATT&CK tactics:** | Discovery | | **MITRE ATT&CK techniques:** | T1046 - Network Service Scanning |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous user activities in Office Exchange
Configuration details:
| **MITRE ATT&CK tactics:** | Persistence<br>Collection | | **MITRE ATT&CK techniques:** | **Collection:**<br>T1114 - Email Collection<br>T1213 - Data from Information Repositories<br><br>**Persistence:**<br>T1098 - Account Manipulation<br>T1136 - Create Account<br>T1137 - Office Application Startup<br>T1505 - Server Software Component |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous user/app activities in Azure audit logs
Configuration details:
| **MITRE ATT&CK tactics:** | Collection<br>Discovery<br>Initial Access<br>Persistence<br>Privilege Escalation | | **MITRE ATT&CK techniques:** | **Collection:**<br>T1530 - Data from Cloud Storage Object<br><br>**Discovery:**<br>T1087 - Account Discovery<br>T1538 - Cloud Service Dashboard<br>T1526 - Cloud Service Discovery<br>T1069 - Permission Groups Discovery<br>T1518 - Software Discovery<br><br>**Initial Access:**<br>T1190 - Exploit Public-Facing Application<br>T1078 - Valid Accounts<br><br>**Persistence:**<br>T1098 - Account Manipulation<br>T1136 - Create Account<br>T1078 - Valid Accounts<br><br>**Privilege Escalation:**<br>T1484 - Domain Policy Modification<br>T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous W3CIIS logs activity
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access<br>Persistence | | **MITRE ATT&CK techniques:** | **Initial Access:**<br>T1190 - Exploit Public-Facing Application<br><br>**Persistence:**<br>T1505 - Server Software Component |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Anomalous web request activity
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access<br>Persistence | | **MITRE ATT&CK techniques:** | **Initial Access:**<br>T1190 - Exploit Public-Facing Application<br><br>**Persistence:**<br>T1505 - Server Software Component |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Attempted computer brute force
Configuration details:
| **MITRE ATT&CK tactics:** | Credential Access | | **MITRE ATT&CK techniques:** | T1110 - Brute Force |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Attempted user account brute force
Configuration details:
| **MITRE ATT&CK tactics:** | Credential Access | | **MITRE ATT&CK techniques:** | T1110 - Brute Force |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Attempted user account brute force per login type
Configuration details:
| **MITRE ATT&CK tactics:** | Credential Access | | **MITRE ATT&CK techniques:** | T1110 - Brute Force |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Attempted user account brute force per failure reason
Configuration details:
| **MITRE ATT&CK tactics:** | Credential Access | | **MITRE ATT&CK techniques:** | T1110 - Brute Force |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Detect machine generated network beaconing behavior
Configuration details:
| **MITRE ATT&CK tactics:** | Command and Control | | **MITRE ATT&CK techniques:** | T1071 - Application Layer Protocol<br>T1132 - Data Encoding<br>T1001 - Data Obfuscation<br>T1568 - Dynamic Resolution<br>T1573 - Encrypted Channel<br>T1008 - Fallback Channels<br>T1104 - Multi-Stage Channels<br>T1095 - Non-Application Layer Protocol<br>T1571 - Non-Standard Port<br>T1572 - Protocol Tunneling<br>T1090 - Proxy<br>T1205 - Traffic Signaling<br>T1102 - Web Service |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Domain generation algorithm (DGA) on DNS domains
Configuration details:
| **MITRE ATT&CK tactics:** | Command and Control | | **MITRE ATT&CK techniques:** | T1568 - Dynamic Resolution |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Domain Reputation Palo Alto anomaly
Configuration details:
| **MITRE ATT&CK tactics:** | Command and Control | | **MITRE ATT&CK techniques:** | T1568 - Dynamic Resolution |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Excessive data transfer anomaly
Configuration details:
| **MITRE ATT&CK tactics:** | Exfiltration | | **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits<br>T1041 - Exfiltration Over C2 Channel<br>T1011 - Exfiltration Over Other Network Medium<br>T1567 - Exfiltration Over Web Service<br>T1029 - Scheduled Transfer<br>T1537 - Transfer Data to Cloud Account |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Excessive Downloads via Palo Alto GlobalProtect
Configuration details:
| **MITRE ATT&CK tactics:** | Exfiltration | | **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits<br>T1041 - Exfiltration Over C2 Channel<br>T1011 - Exfiltration Over Other Network Medium<br>T1567 - Exfiltration Over Web Service<br>T1029 - Scheduled Transfer<br>T1537 - Transfer Data to Cloud Account |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Excessive uploads via Palo Alto GlobalProtect
Configuration details:
| **MITRE ATT&CK tactics:** | Exfiltration | | **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits<br>T1041 - Exfiltration Over C2 Channel<br>T1011 - Exfiltration Over Other Network Medium<br>T1567 - Exfiltration Over Web Service<br>T1029 - Scheduled Transfer<br>T1537 - Transfer Data to Cloud Account |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Login from an unusual region via Palo Alto GlobalProtect account logins
Configuration details:
| **MITRE ATT&CK tactics:** | Credential Access<br>Initial Access<br>Lateral Movement | | **MITRE ATT&CK techniques:** | T1133 - External Remote Services |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Multi-region logins in a single day via Palo Alto GlobalProtect
Configuration details:
| **MITRE ATT&CK tactics:** | Defense Evasion<br>Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Potential data staging
Configuration details:
| **MITRE ATT&CK tactics:** | Collection | | **MITRE ATT&CK techniques:** | T1074 - Data Staged |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Potential domain generation algorithm (DGA) on next-level DNS Domains
Configuration details:
| **MITRE ATT&CK tactics:** | Command and Control | | **MITRE ATT&CK techniques:** | T1568 - Dynamic Resolution |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious geography change in Palo Alto GlobalProtect account logins
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access<br>Credential Access | | **MITRE ATT&CK techniques:** | T1133 - External Remote Services<br>T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious number of protected documents accessed
Configuration details:
| **MITRE ATT&CK tactics:** | Collection | | **MITRE ATT&CK techniques:** | T1530 - Data from Cloud Storage Object<br>T1213 - Data from Information Repositories<br>T1005 - Data from Local System<br>T1039 - Data from Network Shared Drive<br>T1114 - Email Collection |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of AWS API calls from Non-AWS source IP address
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of AWS CloudTrail log events of group user account by EventTypeName
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of AWS write API calls from a user account
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of failed login attempts to AWS Console by each group user account
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of failed login attempts to AWS Console by each source IP address
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of logins to computer
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of logins to computer with elevated token
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of logins to user account
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of logins to user account by logon types
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Suspicious volume of logins to user account with elevated token
Configuration details:
| **MITRE ATT&CK tactics:** | Initial Access | | **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Unusual external firewall alarm detected
Configuration details:
| **MITRE ATT&CK tactics:** | Discovery<br>Command and Control | | **MITRE ATT&CK techniques:** | **Discovery:**<br>T1046 - Network Service Scanning<br>T1135 - Network Share Discovery<br><br>**Command and Control:**<br>T1071 - Application Layer Protocol<br>T1095 - Non-Application Layer Protocol<br>T1571 - Non-Standard Port |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Unusual mass downgrade AIP label
Configuration details:
| **MITRE ATT&CK tactics:** | Collection | | **MITRE ATT&CK techniques:** | T1530 - Data from Cloud Storage Object<br>T1213 - Data from Information Repositories<br>T1005 - Data from Local System<br>T1039 - Data from Network Shared Drive<br>T1114 - Email Collection |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Unusual network communication on commonly used ports
Configuration details:
| **MITRE ATT&CK tactics:** | Command and Control<br>Exfiltration | | **MITRE ATT&CK techniques:** | **Command and Control:**<br>T1071 - Application Layer Protocol<br><br>**Exfiltration:**<br>T1030 - Data Transfer Size Limits |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Unusual network volume anomaly
Configuration details:
| **MITRE ATT&CK tactics:** | Exfiltration | | **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
### Unusual web traffic detected with IP in URL path
Configuration details:
| **MITRE ATT&CK tactics:** | Command and Control<br>Initial Access | | **MITRE ATT&CK techniques:** | **Command and Control:**<br>T1071 - Application Layer Protocol<br><br>**Initial Access:**<br>T1189 - Drive-by Compromise |
-[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies) | [Back to top](#anomalies-detected-by-the-microsoft-sentinel-machine-learning-engine)
## Next steps
sentinel Authenticate Playbooks To Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/authenticate-playbooks-to-sentinel.md
Title: Authenticate playbooks to Microsoft Sentinel | Microsoft Docs description: Learn how to give your playbooks access to Microsoft Sentinel and authorization to take remedial actions.- Previously updated : 11/09/2021-++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++ # Authenticate playbooks to Microsoft Sentinel
To authenticate with managed identity:
| Role | Situation |
| -- | -- |
- | [**Microsoft Sentinel Responder**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) | Playbook has steps which update incidents or watchlists |
+ | [**Microsoft Sentinel Responder**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) | Playbook has steps that update incidents or watchlists |
| [**Microsoft Sentinel Reader**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-reader) | Playbook only receives incidents |
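The steps in this article use the portal. If you script your deployments, the same role assignment can be made with the Azure SDK for Python. The following is a rough sketch only, assuming the `azure-identity` and `azure-mgmt-authorization` packages; the subscription ID, resource group, and the identity's object ID are placeholders, and exact model shapes vary between SDK versions.

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Placeholders -- substitute your own values.
subscription_id = "<subscription-id>"
scope = f"/subscriptions/{subscription_id}/resourceGroups/<sentinel-resource-group>"
principal_id = "<object-id-of-the-playbook-managed-identity>"

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Look up the built-in role definition by name rather than hard-coding its GUID.
role = next(client.role_definitions.list(
    scope, filter="roleName eq 'Microsoft Sentinel Responder'"))

# Create the role assignment on the resource group containing the workspace.
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    {
        "role_definition_id": role.id,
        "principal_id": principal_id,
        "principal_type": "ServicePrincipal",  # managed identities are service principals
    },
)
```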
To authenticate with managed identity:
1. Enable the managed identity authentication method in the Microsoft Sentinel Logic Apps connector:
- 1. In the Logic Apps designer, add a Microsoft Sentinel Logic Apps connector step. If the connector is already enabled for an existing connection, click the **Change connection** link.
+ 1. In the Logic Apps designer, add a Microsoft Sentinel Logic Apps connector step. If the connector is already enabled for an existing connection, select the **Change connection** link.
![Change connection](media/authenticate-playbooks-to-sentinel/change-connection.png)
To use your own application with the Microsoft Sentinel connector, perform the f
1. Get credentials (for future authentication).
- In the registered application blade, get the application credentials for signing in:
+ In the registered application page, get the application credentials for signing in:
- **Client ID**: under **Overview**
- **Client secret**: under **Certificates & secrets**.
To use your own application with the Microsoft Sentinel connector, perform the f
1. Select **Add role assignment**.
- 1. Select the role you wish to assign to the application. For example, to allow the application to perform actions that will make changes in the Sentinel workspace, like updating an incident, select the **Microsoft Sentinel Contributor** role. For actions which only read data, the **Microsoft Sentinel Reader** role is sufficient. [Learn more about the available roles in Microsoft Sentinel](./roles.md).
+ 1. Select the role you wish to assign to the application. For example, to allow the application to perform actions that will make changes in the Sentinel workspace, like updating an incident, select the **Microsoft Sentinel Contributor** role. For actions that only read data, the **Microsoft Sentinel Reader** role is sufficient. [Learn more about the available roles in Microsoft Sentinel](./roles.md).
1. Find the required application and save. By default, Microsoft Entra applications aren't displayed in the available options. To find your application, search for the name and select it.
To use your own application with the Microsoft Sentinel connector, perform the f
![Service principal option](media/authenticate-playbooks-to-sentinel/auth-methods-spn-choice.png)
- - Fill in the required parameters (can be found in the registered application blade)
+ - Fill in the required parameters (can be found in the registered application page)
- **Tenant**: under **Overview**
- **Client ID**: under **Overview**
- **Client Secret**: under **Certificates & secrets**
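As an illustration of where these three values end up, here's a minimal Python sketch that authenticates as the registered application using the `azure-identity` package. The placeholder values are assumptions you would replace with your own.

```python
from azure.identity import ClientSecretCredential

# Values from the registered application's page (placeholders here).
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",          # Overview
    client_id="<client-id>",          # Overview
    client_secret="<client-secret>",  # Certificates & secrets
)

# Request a token for Azure Resource Manager, which Microsoft Sentinel sits behind.
token = credential.get_token("https://management.azure.com/.default")
print(token.expires_on)
```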
To use your own application with the Microsoft Sentinel connector, perform the f
When an authentication is created for the first time, a new Azure resource of type API Connection is created. The same API connection can be used in all the Microsoft Sentinel actions and triggers in the same resource group.
-All the API connections can be found in the **API connections** blade (search for *API connections* in the Azure portal).
+All the API connections can be found in the **API connections** page (search for *API connections* in the Azure portal).
-You can also find them by going to the **Resources** blade and filtering the display by type *API Connection*. This way allows you to select multiple connections for bulk operations.
+You can also find them by going to the **Resources** page and filtering the display by type *API Connection*. This way allows you to select multiple connections for bulk operations.
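If you'd rather enumerate these connections programmatically than filter the portal view, a small Python sketch (assuming the `azure-mgmt-resource` package and a placeholder subscription ID) could look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# API connections are ARM resources of type Microsoft.Web/connections.
for conn in client.resources.list(filter="resourceType eq 'Microsoft.Web/connections'"):
    print(conn.name, "-", conn.id)
```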
In order to change the authorization of an existing connection, enter the connection resource, and select **Edit API connection**.
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
Title: Automate threat response in Microsoft Sentinel with automation rules | Microsoft Docs description: This article explains what Microsoft Sentinel automation rules are, and how to use them to implement your Security Orchestration, Automation and Response (SOAR) operations, increasing your SOC's effectiveness and saving you time and resources.-++ Previously updated : 06/27/2022- Last updated : 03/27/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++ # Automate threat response in Microsoft Sentinel with automation rules

This article explains what Microsoft Sentinel automation rules are, and how to use them to implement your Security Orchestration, Automation and Response (SOAR) operations, increasing your SOC's effectiveness and saving you time and resources.

+

## What are automation rules?

Automation rules are a way to centrally manage automation in Microsoft Sentinel, by allowing you to define and coordinate a small set of rules that can apply across different scenarios.
Automation rules are made up of several components:
### Triggers
-Automation rules are triggered **when an incident is created or updated** or **when an alert is created**. Recall that incidents include alerts, and that both alerts and incidents are created by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
+Automation rules are triggered **when an incident is created or updated** or **when an alert is created**. Recall that incidents include alerts, and that both alerts and incidents can be created by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
The following table shows the different possible scenarios that will cause an automation rule to run.

| Trigger type | Events that cause the rule to run |
| -- | -- |
-| **When incident is created** | - A new incident is created by an analytics rule.<br>- An incident is ingested from Microsoft Defender XDR.<br>- A new incident is created manually. |
-| **When incident is updated**<br> | - An incident's status is changed (closed/reopened/triaged).<br>- An incident's owner is assigned or changed.<br>- An incident's severity is raised or lowered.<br>- Alerts are added to an incident.<br>- Comments, tags, or tactics are added to an incident. |
-| **When alert is created**<br> | - An alert is created by a scheduled analytics rule.
+| **When incident is created** | <li>A new incident is created by an analytics rule.<li>An incident is ingested from Microsoft Defender XDR.<li>A new incident is created manually. |
+| **When incident is updated** | <li>An incident's status is changed (closed/reopened/triaged).<li>An incident's owner is assigned or changed.<li>An incident's severity is raised or lowered.<li>Alerts are added to an incident.<li>Comments, tags, or tactics are added to an incident. |
+| **When alert is created** | <li>An alert is created by an analytics rule. |
#### Incident-based or alert-based automation?

Now that both incident automation and alert automation are handled centrally by automation rules as well as playbooks, how should you choose when to use which?
-For most use cases, **incident-triggered automation** is the preferable approach. In Microsoft Sentinel, an **incident** is a "case file" – an aggregation of all the relevant evidence for a specific investigation. It's a container for alerts, entities, comments, collaboration, and other artifacts. Unlike **alerts** which are single pieces of evidence, incidents are modifiable, have the most updated status, and can be enriched with comments, tags, and bookmarks. The incident allows you to track the attack story which keeps evolving with the addition of new alerts.
+For most use cases, **incident-triggered automation** is the preferable approach. In Microsoft Sentinel, an **incident** is a "case file" – an aggregation of all the relevant evidence for a specific investigation. It's a container for alerts, entities, comments, collaboration, and other artifacts. Unlike **alerts** which are single pieces of evidence, incidents are modifiable, have the most updated status, and can be enriched with comments, tags, and bookmarks. The incident allows you to track the attack story that keeps evolving with the addition of new alerts.
For these reasons, it makes more sense to build your automation around incidents. So the most appropriate way to create playbooks is to base them on the Microsoft Sentinel incident trigger in Azure Logic Apps.
-The main reason to use **alert-triggered automation** is for responding to alerts generated by analytics rules which *do not create incidents* (that is, where incident creation has been *disabled* in the **Incident settings** tab of the [analytics rule wizard](detect-threats-custom.md#configure-the-incident-creation-settings)). A SOC might decide to do this if it wants to use its own logic to determine if and how incidents are created from alerts, as well as if and how alerts are grouped into incidents. For example:
+The main reason to use **alert-triggered automation** is for responding to alerts generated by analytics rules that *do not create incidents* (that is, where incident creation has been *disabled* in the **Incident settings** tab of the [analytics rule wizard](detect-threats-custom.md#configure-the-incident-creation-settings)). A SOC might decide to do this if it wants to use its own logic to determine if and how incidents are created from alerts, as well as if and how alerts are grouped into incidents. For example:
- A playbook can be triggered by an alert that doesn't have an associated incident, enrich the alert with information from other sources, and based on some external logic decide whether to create an incident or not.
The main reason to use **alert-triggered automation** is for responding to alert
- A playbook can be triggered by an alert and send the alert to an external ticketing system for incident creation and management, creating a new ticket for each alert.

> [!NOTE]
-> Alert-triggered automation is available only for [alerts](detect-threats-built-in.md) created by **Scheduled** analytics rules. Alerts created by **Microsoft Security** analytics rules are not supported.
+> - Alert-triggered automation is available only for [alerts](detect-threats-built-in.md) created by **Scheduled** analytics rules. Alerts created by **Microsoft Security** analytics rules are not supported.
+>
+> - Alert-triggered automation is not currently available in the unified security operations platform in the Microsoft Defender portal.
### Conditions
When an automation rule is triggered, it checks the triggering incident or alert
For rules defined using the trigger **When an incident is created**, you can define conditions that check the **current state** of the values of a given list of incident properties, using one or more of the following operators:
-An incident property's value
+An incident property's value
- **equals** or **does not equal** the value defined in the condition.
- **contains** or **does not contain** the value defined in the condition.
- **starts with** or **does not start with** the value defined in the condition.
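To make the semantics of these operators concrete, here's an illustrative Python sketch — not Sentinel's actual evaluation engine — showing how each operator compares an incident property's current value against the value defined in the condition:

```python
# Illustrative only; Microsoft Sentinel evaluates these conditions server-side.
CONDITION_OPERATORS = {
    "equals":              lambda value, target: value == target,
    "does not equal":      lambda value, target: value != target,
    "contains":            lambda value, target: target in value,
    "does not contain":    lambda value, target: target not in value,
    "starts with":         lambda value, target: value.startswith(target),
    "does not start with": lambda value, target: not value.startswith(target),
}

title = "Suspicious sign-in from unfamiliar location"
print(CONDITION_OPERATORS["contains"](title, "sign-in"))     # True
print(CONDITION_OPERATORS["starts with"](title, "Malware"))  # False
```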
The **current state** in this context refers to the moment the condition is eval
The conditions evaluated in rules defined using the trigger **When an incident is updated** include all of those listed for the incident creation trigger. But the update trigger includes more properties that can be evaluated.
-One of these properties is **Updated by**. This property lets you track the type of source that made the change in the incident. You can create a condition evaluating whether the incident was updated by one of the following:
+One of these properties is **Updated by**. This property lets you track the type of source that made the change in the incident. You can create a condition evaluating whether the incident was updated by one of the following values, depending on whether you've onboarded your workspace to the unified security operations platform:
+
+##### [Onboarded workspaces](#tab/onboarded)
+
+- An application, including applications in both the Azure and Defender portals.
+- A user, including changes made by users in both the Azure and Defender portals.
+- **AIR**, for updates by [automated investigation and response in Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/air-about)
+- An alert grouping (that added alerts to the incident), including alert groupings that were done both by analytics rules and built-in Microsoft Defender XDR correlation logic
+- A playbook
+- An automation rule
+- Other, if none of the above values apply
-- an application
-- a user
-- an alert grouping (that added alerts to the incident)
-- a playbook
-- an automation rule
+##### [Workspaces not onboarded](#tab/not-onboarded)
+
+- An application
+- A Microsoft Sentinel user
+- An alert grouping done by analytics rules (that added alerts to the incident).
+- A playbook
+- An automation rule
- Microsoft Defender XDR

+

Using this condition, for example, you can instruct this automation rule to run on any change made to an incident, except if it was made by another automation rule. More to the point, the update trigger also uses other operators that check **state changes** in the values of incident properties as well as their current state. A **state change** condition would be satisfied if:
An incident property's value was
- **changed to** the value defined in the condition.
- **added** to (this applies to properties with a list of values).
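Here's a hypothetical sketch of the difference between current-state and state-change evaluation; the function names are illustrative, not Sentinel identifiers:

```python
# Illustrative only: a state-change condition compares the property's value
# before and after the update, not just the value it holds now.
def changed_to(old, new, target):
    return old != new and new == target

def added(old_values, new_values):
    # For list-valued properties such as tags or tactics.
    return bool(set(new_values) - set(old_values))

print(changed_to(old="Medium", new="High", target="High"))  # True
print(added(["vip"], ["vip", "pentest-2024"]))              # True
```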
-> [!NOTE]
-> - An automation rule, based on the update trigger, can run on an incident that was updated by another automation rule, based on the incident creation trigger, that ran on the incident.
->
-> - Also, if an incident is updated by an automation rule that ran on the incident's creation, the incident can be evaluated by *both* a subsequent *incident-creation* automation rule *and* an *incident-update* automation rule, both of which will run if the incident satisfies the rules' conditions.
->
-> - If an incident triggers both create-trigger and update-trigger automation rules, the create-trigger rules will run first, according to their **[Order](#order)** numbers, and then the update-trigger rules will run, according to *their* **Order** numbers.
+#### *Tag* property: individual vs. collection
+
+The incident property **Tag** is a collection of individual items&mdash;a single incident can have multiple tags applied to it. You can define conditions that check **each tag in the collection individually**, and conditions that check **the collection of tags as a unit**.
+
+- **Any individual tag** operators check the condition against every tag in the collection. The evaluation is *true* when *at least one tag* satisfies the condition.
+- **Collection of all tags** operators check the condition against the collection of tags as a single unit. The evaluation is *true* only if *the collection as a whole* satisfies the condition.
+
+This distinction matters when your condition is a negative (does not contain), and some tags in the collection satisfy the condition and others don't.
+
+Let's look at an example where your condition is, **Tag does not contain "2024"**, and you have two incidents, each with two tags:
+
+| \ Incidents &#9654;<br>Condition &#9660; \ | Incident 1<br>Tag 1: 2024<br>Tag 2: 2023 | Incident 2<br>Tag 1: 2023<br>Tag 2: 2022 |
+| -- | :-: | :-: |
+| **Any individual tag<br>does not contain "2024"** | ***TRUE*** | TRUE |
+| **Collection of all tags<br>does not contain "2024"** | ***FALSE*** | TRUE |
+
+In this example, in *Incident 1*:
+- If the condition checks each tag individually, then since there's at least one tag that *satisfies the condition* (that *doesn't* contain "2024"), the overall condition is **true**.
+- If the condition checks all the tags in the incident as a single unit, then since there's at least one tag that *doesn't satisfy the condition* (that *does* contain "2024"), the overall condition is **false**.
+
+In *Incident 2*, the outcome will be the same, regardless of which type of condition is defined.
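The table above can be reproduced with a short Python sketch (illustrative only; Sentinel evaluates these conditions internally):

```python
# "Any individual tag" checks every tag on its own: true if at least one
# tag satisfies the condition. "Collection of all tags" treats the tag set
# as a single unit: true only if no tag violates the condition.
def any_tag_not_contains(tags, term):
    return any(term not in tag for tag in tags)

def collection_not_contains(tags, term):
    return all(term not in tag for tag in tags)

incident_1 = ["2024", "2023"]
incident_2 = ["2023", "2022"]

print(any_tag_not_contains(incident_1, "2024"))     # True  -- tag "2023" qualifies
print(collection_not_contains(incident_1, "2024"))  # False -- tag "2024" violates
print(any_tag_not_contains(incident_2, "2024"))     # True
print(collection_not_contains(incident_2, "2024"))  # True
```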
#### Alert create trigger
Rules based on the update trigger have their own separate order queue. If such r
- For rules of different *incident trigger* types, all applicable rules with the *incident creation* trigger type will run first (according to their order numbers), and only then the rules with the *incident update* trigger type (according to *their* order numbers).
- Rules always run sequentially, never in parallel.
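As a rough mental model of this queue behavior — an assumption-laden sketch, not the actual scheduler — creation-trigger rules drain first by their Order number, then update-trigger rules by theirs, strictly one at a time:

```python
rules = [
    {"name": "Tag VIP incidents", "trigger": "incident updated", "order": 1},
    {"name": "Assign to tier 1",  "trigger": "incident created", "order": 2},
    {"name": "Close test noise",  "trigger": "incident created", "order": 1},
]

# Creation-trigger rules run first (by Order), then update-trigger rules (by Order);
# execution is always sequential, never parallel.
for rule in sorted(rules, key=lambda r: (r["trigger"] != "incident created", r["order"])):
    print(rule["name"])
# Close test noise -> Assign to tier 1 -> Tag VIP incidents
```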
+> [!NOTE]
+> After onboarding to the unified security operations platform, if multiple changes are made to the same incident in a five to ten minute period, a single update is sent to Microsoft Sentinel, with only the most recent change.
+

## Common use cases and scenarios

### Incident tasks
You can assign incidents to the right owner automatically. If your SOC has an an
### Incident suppression
-You can use rules to automatically resolve incidents that are known false/benign positives without the use of playbooks. For example, when running penetration tests, doing scheduled maintenance or upgrades, or testing automation procedures, many false-positive incidents may be created that the SOC wants to ignore. A time-limited automation rule can automatically close these incidents as they are created, while tagging them with a descriptor of the cause of their generation.
+You can use rules to automatically resolve incidents that are known false/benign positives without the use of playbooks. For example, when running penetration tests, doing scheduled maintenance or upgrades, or testing automation procedures, many false-positive incidents might be created that the SOC wants to ignore. A time-limited automation rule can automatically close these incidents as they are created, while tagging them with a descriptor of the cause of their generation.
### Time-limited automation
-You can add expiration dates for your automation rules. There may be cases other than incident suppression that warrant time-limited automation. You may want to assign a particular type of incident to a particular user (say, an intern or a consultant) for a specific time frame. If the time frame is known in advance, you can effectively cause the rule to be disabled at the end of its relevancy, without having to remember to do so.
+You can add expiration dates for your automation rules. There might be cases other than incident suppression that warrant time-limited automation. You might want to assign a particular type of incident to a particular user (say, an intern or a consultant) for a specific time frame. If the time frame is known in advance, you can effectively cause the rule to be disabled at the end of its relevancy, without having to remember to do so.
### Automatically tag incidents
If you've used playbooks to create tickets in external systems when incidents ar
Automation rules are run sequentially, according to the [order](#order) you [determine](create-manage-use-automation-rules.md#finish-creating-your-rule). Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
-Playbook actions within an automation rule may be treated differently under some circumstances, according to the following criteria:
+Playbook actions within an automation rule might be treated differently under some circumstances, according to the following criteria:
| Playbook run time | Automation rule advances to the next action... |
| -- | -- |
In order for an automation rule to run a playbook, this account must be granted
When you're configuring an automation rule and adding a **run playbook** action, a drop-down list of playbooks will appear. Playbooks to which Microsoft Sentinel does not have permissions will show as unavailable ("grayed out"). You can grant Microsoft Sentinel permission to the playbooks' resource groups on the spot by selecting the **Manage playbook permissions** link. To grant those permissions, you'll need **Owner** permissions on those resource groups. [See the full permissions requirements](tutorial-respond-threats-playbook.md#respond-to-incidents).
-#### Permissions in a multi-tenant architecture
+#### Permissions in a multitenant architecture
Automation rules fully support cross-workspace and [multitenant deployments](extend-sentinel-across-workspaces-tenants.md#manage-workspaces-across-tenants-using-azure-lighthouse) (in the case of multitenant, using [Azure Lighthouse](../lighthouse/index.yml)).
In the specific case of a Managed Security Service Provider (MSSP), where a serv
## Creating and managing automation rules
-You can [create and manage automation rules](create-manage-use-automation-rules.md) from different points in the Microsoft Sentinel experience, depending on your particular need and use case.
+You can [create and manage automation rules](create-manage-use-automation-rules.md) from different areas in Microsoft Sentinel or the unified security operations platform, depending on your particular need and use case.
-- **Automation blade**
+- **Automation page**
- Automation rules can be centrally managed in the **Automation** blade, under the **Automation rules** tab. From there, you can create new automation rules and edit the existing ones. You can also drag automation rules to change the order of execution, and enable or disable them.
+ Automation rules can be centrally managed in the **Automation** page, under the **Automation rules** tab. From there, you can create new automation rules and edit the existing ones. You can also drag automation rules to change the order of execution, and enable or disable them.
- In the **Automation** blade, you see all the rules that are defined on the workspace, along with their status (Enabled/Disabled) and which analytics rules they are applied to.
+ In the **Automation** page, you see all the rules that are defined on the workspace, along with their status (Enabled/Disabled) and which analytics rules they are applied to.
- When you need an automation rule that will apply to many analytics rules, create it directly in the **Automation** blade.
+ When you need an automation rule that will apply to many analytics rules, create it directly in the **Automation** page.
- **Analytics rule wizard**
You can [create and manage automation rules](create-manage-use-automation-rules.
You'll notice that when you create an automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you.

-- **Incidents blade**
+- **Incidents page**
- You can also create an automation rule from the **Incidents** blade, in order to respond to a single, recurring incident. This is useful when creating a [suppression rule](#incident-suppression) for [automatically closing "noisy" incidents](false-positives.md).
+ You can also create an automation rule from the **Incidents** page, in order to respond to a single, recurring incident. This is useful when creating a [suppression rule](#incident-suppression) for [automatically closing "noisy" incidents](false-positives.md).
You'll notice that when you create an automation rule from here, the **Create new automation rule** panel has populated all the fields with values from the incident. It names the rule the same name as the incident, applies it to the analytics rule that generated the incident, and uses all the available entities in the incident as conditions of the rule. It also suggests a suppression (closing) action by default, and suggests an expiration date for the rule. You can add or remove conditions and actions, and change the expiration date, as you wish.
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
Title: Automate threat response with playbooks in Microsoft Sentinel | Microsoft Docs description: This article explains automation in Microsoft Sentinel, and shows how to use playbooks to automate threat prevention and response.- Previously updated : 06/21/2023-++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++ # Automate threat response with playbooks in Microsoft Sentinel

This article explains what Microsoft Sentinel playbooks are, and how to use them to implement your Security Orchestration, Automation and Response (SOAR) operations, achieving better results while saving time and resources.

+

## What is a playbook?

SOC analysts are typically inundated with security alerts and incidents on a regular basis, at volumes so large that available personnel are overwhelmed. This results all too often in situations where many alerts are ignored and many incidents aren't investigated, leaving the organization vulnerable to attacks that go unnoticed.
For example, if an account and machine are compromised, a playbook can isolate t
While the **Active playbooks** tab on the **Automation** page displays all the active playbooks available across any selected subscriptions, by default a playbook can be used only within the subscription to which it belongs, unless you specifically grant Microsoft Sentinel permissions to the playbook's resource group.
+After onboarding to the unified security operations platform, the **Active playbooks** tab shows a pre-defined filter with the onboarded workspace's subscription. In the Azure portal, add data for other subscriptions using the Azure subscription filter.
+

### Playbook templates

A playbook template is a prebuilt, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios.
Azure Logic Apps communicates with other systems and services using connectors.
Microsoft Sentinel now supports the following logic app resource types:

-- **Consumption**, which runs in multi-tenant Azure Logic Apps and uses the classic, original Azure Logic Apps engine.
+- **Consumption**, which runs in multitenant Azure Logic Apps and uses the classic, original Azure Logic Apps engine.
- **Standard**, which runs in single-tenant Azure Logic Apps and uses a redesigned Azure Logic Apps engine. The **Standard** logic app type offers higher performance, fixed pricing, multiple workflow capability, easier API connections management, native network capabilities such as support for virtual networks and private endpoints (see note below), built-in CI/CD features, better Visual Studio Code integration, an updated workflow designer, and more.
There are many differences between these two resource types, some of which affec
- **Microsoft Sentinel Contributor** role lets you attach a playbook to an analytics or automation rule.
- **Microsoft Sentinel Responder** role lets you access an incident in order to run a playbook manually. But to actually run the playbook, you also need...
-- **Microsoft Sentinel Playbook Operator** role lets you run a playbook manually.
-- **Microsoft Sentinel Automation Contributor** allows automation rules to run playbooks. It is not used for any other purpose.
+
+ - **Microsoft Sentinel Playbook Operator** role lets you run a playbook manually.
+ - **Microsoft Sentinel Automation Contributor** allows automation rules to run playbooks. It is not used for any other purpose.
#### Learn more
They are designed to be run automatically, and ideally that is how they should b
There are circumstances, though, that call for running playbooks manually. For example:

- When creating a new playbook, you'll want to test it before putting it in production.
-- There may be situations where you'll want to have more control and human input into when and whether a certain playbook runs.
+- There might be situations where you'll want to have more control and human input into when and whether a certain playbook runs.
You [run a playbook manually](tutorial-respond-threats-playbook.md#run-a-playbook-on-demand) by opening an incident, alert, or entity and selecting and running the associated playbook displayed there. Currently this feature is generally available for alerts, and in preview for incidents and entities.
Security operations teams can significantly reduce their workload by fully autom
Setting automated response means that every time an analytics rule is triggered, in addition to creating an alert, the rule will run a playbook, which will receive as an input the alert created by the rule.
-If the alert creates an incident, the incident will trigger an automation rule which may in turn run a playbook, which will receive as an input the incident created by the alert.
+If the alert creates an incident, the incident will trigger an automation rule which might in turn run a playbook, which will receive as an input the incident created by the alert.
#### Alert creation automated response
For playbooks that are triggered by incident creation and receive incidents as t
- Edit the analytics rule that generates the incident you want to define an automated response for. Under **Incident automation** in the **Automated response** tab, create an automation rule. This will create an automated response only for this analytics rule.

-- From the **Automation rules** tab in the **Automation** blade, create a new automation rule and specify the appropriate conditions and desired actions. This automation rule will be applied to any analytics rule that fulfills the specified conditions.
+- From the **Automation rules** tab in the **Automation** page, create a new automation rule and specify the appropriate conditions and desired actions. This automation rule will be applied to any analytics rule that fulfills the specified conditions.
> [!NOTE]
> **Microsoft Sentinel requires permissions to run incident-trigger playbooks.**
See the [complete instructions for creating automation rules](tutorial-respond-t
Full automation is the best solution for as many incident-handling, investigation, and mitigation tasks as you're comfortable automating. Having said that, there can be good reasons for a sort of hybrid automation: using playbooks to consolidate a string of activities against a range of systems into a single command, but running the playbooks only when and where you decide. For example: -- You may prefer your SOC analysts have more human input and control over some situations.
+- You might prefer your SOC analysts have more human input and control over some situations.
-- You may also want them to be able to take action against specific threat actors (entities) on-demand, in the course of an investigation or a threat hunt, in context without having to pivot to another screen. (This ability is now in Preview.)
+- You might also want them to be able to take action against specific threat actors (entities) on-demand, in the course of an investigation or a threat hunt, in context without having to pivot to another screen. (This ability is now in Preview.)
-- You may want your SOC engineers to write playbooks that act on specific entities (now in Preview) and that can only be run manually.
+- You might want your SOC engineers to write playbooks that act on specific entities (now in Preview) and that can only be run manually.
- You would probably like your engineers to be able to test the playbooks they write before fully deploying them in automation rules.

For these and other reasons, Microsoft Sentinel allows you to **run playbooks manually** on-demand for entities and incidents (both now in Preview), as well as for alerts.

-- **To run a playbook on a specific incident,** select the incident from the grid in the **Incidents** blade. Select **Actions** from the incident details pane, and choose **Run playbook (Preview)** from the context menu.
+- **To run a playbook on a specific incident,** select the incident from the grid in the **Incidents** page. In the [Azure portal](https://portal.azure.com), select **Actions** from the incident details pane, and choose **Run playbook (Preview)** from the context menu. In the [Defender portal](https://security.microsoft.com/), select **Run playbook (Preview)** directly from the incident details page.
This opens the **Run playbook on incident** panel.
To see all the API connections, enter *API connections* in the header search box
- Status - indicates the connection status: error, connected.
- Resource group - API connections are created in the resource group of the playbook (Azure Logic Apps) resource.
-Another way to view API connections would be to go to the **All Resources** blade and filter it by type *API connection*. This way allows the selection, tagging, and deletion of multiple connections at once.
+Another way to view API connections would be to go to the **All Resources** page and filter it by type *API connection*. This way allows the selection, tagging, and deletion of multiple connections at once.
In order to change the authorization of an existing connection, enter the connection resource, and select **Edit API connection**.
sentinel Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation.md
Title: Introduction to automation in Microsoft Sentinel | Microsoft Docs description: This article introduces the Security Orchestration, Automation, and Response (SOAR) capabilities of Microsoft Sentinel and describes its SOAR components - automation rules and playbooks.- Previously updated : 11/09/2021-++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++ # Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel This article describes the Security Orchestration, Automation, and Response (SOAR) capabilities of Microsoft Sentinel, and shows how the use of automation rules and playbooks in response to security threats increases your SOC's effectiveness and saves you time and resources. + ## Microsoft Sentinel as a SOAR solution ### The problem
Playbooks in Microsoft Sentinel are based on workflows built in [Azure Logic App
Learn more with this [complete explanation of playbooks](automate-responses-with-playbooks.md).
+## Automation with the unified security operations platform
+
+After onboarding your Microsoft Sentinel workspace to the unified security operations platform, note the following differences in the way automation functions in your workspace:
+
+|Functionality |Description |
+|||
+|**Automation rules with alert triggers** | In the unified security operations platform, automation rules with alert triggers act only on Microsoft Sentinel alerts. <br><br>For more information, see [Alert create trigger](automate-incident-handling-with-automation-rules.md#alert-create-trigger). |
+|**Automation rules with incident triggers** | In both the Azure portal and the unified security operations platform, the **Incident provider** condition property is removed, as all incidents have *Microsoft Defender XDR* as the incident provider. <br><br>After onboarding, any existing automation rules run on both Microsoft Sentinel and Microsoft Defender XDR incidents, including those where the **Incident provider** condition is set to only *Microsoft Sentinel* or *Microsoft 365 Defender*. <br><br>However, automation rules that name a specific analytics rule run only on incidents created by that rule. This means you can set the **Analytic rule name** condition to an analytics rule that exists only in Microsoft Sentinel, limiting the rule to Microsoft Sentinel incidents. To check which providers your existing incidents report, see the sample query after this table. <br><br>For more information, see [Incident trigger conditions](automate-incident-handling-with-automation-rules.md#conditions). |
+|***Updated by* field** | - After onboarding your workspace, the **Updated by** field has a [new set of supported values](automate-incident-handling-with-automation-rules.md#incident-update-trigger), which no longer include *Microsoft 365 Defender*. In existing automation rules, *Microsoft 365 Defender* is replaced by a value of *Other* after onboarding your workspace. <br><br>- If multiple changes are made to the same incident in a 5-10 minute period, a single update is sent to Microsoft Sentinel, with only the most recent change. <br><br>For more information, see [Incident update trigger](automate-incident-handling-with-automation-rules.md#incident-update-trigger). |
+|**Automation rules that add incident tasks** | If an automation rule adds an incident task, the task is shown only in the Azure portal. |
+|**Microsoft incident creation rules** | Microsoft incident creation rules aren't supported in the unified security operations platform. <br><br>For more information, see [Microsoft Defender XDR incidents and Microsoft incident creation rules](microsoft-365-defender-sentinel-integration.md#microsoft-defender-xdr-incidents-and-microsoft-incident-creation-rules). |
+|**Active playbooks tab** | After onboarding to the unified security operations platform, by default the **Active playbooks** tab shows a predefined filter with the onboarded workspace's subscription. Add data for other subscriptions using the subscription filter. <br><br>For more information, see [Create and customize Microsoft Sentinel playbooks from content templates](use-playbook-templates.md). |
+|**Running playbooks manually on demand** |The following procedures are not supported in the unified security operations platform: <br><br>- [Run a playbook manually on an alert](tutorial-respond-threats-playbook.md?tabs=LAC%2Cincidents#run-a-playbook-manually-on-an-alert) <br>- [Run a playbook manually on an entity](tutorial-respond-threats-playbook.md?tabs=LAC%2Cincidents#run-a-playbook-manually-on-an-entity-preview) |
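To verify how incident providers look in your own workspace, you can query the `SecurityIncident` table. This is a sketch rather than a definitive procedure; it keeps only the latest record per incident before counting:

```kusto
// Count incidents by provider, using only each incident's most recent record.
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| summarize Incidents = count() by ProviderName, Status
| order by Incidents desc
```

After onboarding, you'd expect new incidents to report *Microsoft Defender XDR* as the provider.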
++ ## Next steps In this document, you learned how Microsoft Sentinel uses automation to help your SOC operate more effectively and efficiently.
sentinel Billing Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-monitor-costs.md
description: Learn how to manage and monitor costs and billing for Microsoft Sen
- Previously updated : 07/05/2023+ Last updated : 03/07/2024+
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Manage and monitor costs for Microsoft Sentinel
After you've started using Microsoft Sentinel resources, use Cost Management fea
Costs for Microsoft Sentinel are only a portion of the monthly costs in your Azure bill. Although this article explains how to manage and monitor costs for Microsoft Sentinel, you're billed for all Azure services and resources your Azure subscription uses, including Partner services. + ## Prerequisites To view cost data and perform cost analysis in Cost Management, you must have a supported Azure account type, with at least read access. While cost analysis in Cost Management supports most Azure account types, not all are supported. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+For information about assigning access to Microsoft Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
## View costs by using cost analysis As you use Azure resources with Microsoft Sentinel, you incur costs. Azure resource usage unit costs vary by time intervals such as seconds, minutes, hours, and days, or by unit usage, like bytes and megabytes. As soon as Microsoft Sentinel starts to analyze billable data, it incurs costs. View these costs by using cost analysis in the Azure portal. For more information, see [Start using cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-When you use cost analysis, you view Microsoft Sentinel costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+When you use cost analysis, you view Microsoft Sentinel costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you created budgets, you can also easily see where they're exceeded.
-The [Azure Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md) hub provides useful functionality. After you open **Cost Management + Billing** in the Azure portal, select **Cost Management** in the left navigation and then select the [scope](..//cost-management-billing/costs/understand-work-scopes.md) or set of resources to investigate, such as an Azure subscription or resource group.
+The [Microsoft Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md) hub provides useful functionality. After you open **Cost Management + Billing** in the Azure portal, select **Cost Management** in the left navigation and then select the [scope](..//cost-management-billing/costs/understand-work-scopes.md) or set of resources to investigate, such as an Azure subscription or resource group.
The **Cost Analysis** screen shows detailed views of your Azure usage and costs, with the option to apply various controls and filters.
You could also apply further controls. For example, to view only the costs assoc
Microsoft Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
-The Microsoft Sentinel classic pricing tiers don't include Log Analytics charges, so you may see those charges billed separately. Microsoft Sentinel simplified pricing combines the two costs into one set of tiers. To learn more about Microsoft Sentinel's simplified pricing tiers, see [Simplified pricing tiers](billing.md#simplified-pricing-tiers).
+The Microsoft Sentinel classic pricing tiers don't include Log Analytics charges, so you might see those charges billed separately. Microsoft Sentinel simplified pricing combines the two costs into one set of tiers. To learn more about Microsoft Sentinel's simplified pricing tiers, see [Simplified pricing tiers](billing.md#simplified-pricing-tiers).
For more information on reducing costs, see [Create budgets](#create-budgets) and [Reduce costs in Microsoft Sentinel](billing-reduce-costs.md). ## Using Azure Prepayment with Microsoft Sentinel
-You can pay for Microsoft Sentinel charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay bills to third-party organizations for their products and services, or for products from the Azure Marketplace.
+You can pay for Microsoft Sentinel charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay bills to non-Microsoft organizations for their products and services, or for products from the Azure Marketplace.
## Run queries to understand your data ingestion
The Microsoft Sentinel GitHub community provides the [`Send-IngestionCostAlert`]
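As a starting point for understanding where your ingestion costs come from, a query along the following lines breaks down billable volume by table. It's a minimal sketch against the standard `Usage` table; adjust the time range to match your billing period:

```kusto
// Billable ingestion per table over the last 30 days, largest first.
// Quantity is reported in MB, so divide by 1,000 for GB.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by DataType
| order by IngestedGB desc
```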
## Define a data volume cap in Log Analytics
-> [!IMPORTANT]
-> Starting September 18, 2023, the Log Analytics Daily Cap will no longer exclude the below set of data types from the daily cap, and all billable data types will be capped if the daily cap is met. This change improves your ability to fully contain costs from higher-than-expected data ingestion.
-> If you have a Daily Cap set on your workspace which has [Microsoft Defender for Servers](/azure/defender-for-cloud/plan-defender-for-servers-select-plan) or Microsoft Sentinel, be sure that the cap is high enough to accommodate this change. Also, be sure to set an alert so that you are notified as soon as your Daily Cap is met, see [Set daily cap on Log Analytics workspace](../azure-monitor/logs/daily-cap.md).
- In Log Analytics, you can enable a daily volume cap that limits the daily ingestion for your workspace. The daily cap can help you manage unexpected increases in data volume, stay within your limit, and limit unplanned charges. To define a daily volume cap, select **Usage and estimated costs** in the left navigation of your Log Analytics workspace, and then select **Daily cap**. Select **On**, enter a daily volume cap amount, and then select **OK**.
To define a daily volume cap, select **Usage and estimated costs** in the left n
The **Usage and estimated costs** screen also shows your ingested data volume trend in the past 31 days, and the total retained data volume.
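To confirm whether and when your workspace actually reached its daily cap, you can query the workspace operation log. This sketch assumes the `_LogOperation` system function and its `Category`, `Operation`, and `Detail` columns, which Azure Monitor uses to surface ingestion events:

```kusto
// Surface ingestion events, including data collection stopping at the daily cap.
_LogOperation
| where TimeGenerated > ago(31d)
| where Category =~ "Ingestion"
| where Operation has "Data collection"  // tolerant match; exact values vary
| project TimeGenerated, Operation, Detail
| order by TimeGenerated desc
```

Pair this with an alert rule so you're notified as soon as the cap is met.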
-Until September 18, 2023, the following is true. If a workspace enabled the [Microsoft Defender for Servers](/azure/defender-for-cloud/plan-defender-for-servers-select-plan) solution after June 19, 2017, some security-related data types are collected for Microsoft Defender for Cloud or Microsoft Sentinel despite any daily cap configured. The following data types will be subject to this special exception from the daily cap:
-
-- WindowsEvent
-- SecurityAlert
-- SecurityBaseline
-- SecurityBaselineSummary
-- SecurityDetection
-- SecurityEvent
-- WindowsFirewall
-- MaliciousIPCommunication
-- LinuxAuditLog
-- SysmonEvent
-- ProtectionStatus
-- Update
-- UpdateSummary
-- CommonSecurityLog
-- Syslog
-
-For more information about managing the daily cap in Log Analytics, see [Set daily cap on Log Analytics workspace](../azure-monitor/logs/daily-cap.md).
+For more information, see [Set daily cap on Log Analytics workspace](../azure-monitor/logs/daily-cap.md).
## Next steps - [Reduce costs for Microsoft Sentinel](billing-reduce-costs.md)-- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn [how to optimize your cloud investment with Microsoft Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
sentinel Billing Reduce Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-reduce-costs.md
description: Learn how to reduce costs for Microsoft Sentinel by using different
- Previously updated : 02/22/2022+ Last updated : 03/07/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Reduce costs for Microsoft Sentinel Costs for Microsoft Sentinel are only a portion of the monthly costs in your Azure bill. Although this article explains how to reduce costs for Microsoft Sentinel, you're billed for all Azure services and resources your Azure subscription uses, including Partner services. + ## Set or change pricing tier To optimize for highest savings, monitor your ingestion volume to ensure you have the Commitment Tier that aligns most closely with your ingestion volume patterns. Consider increasing or decreasing your Commitment Tier to align with changing data volumes.
To change your pricing tier commitment, select one of the other tiers on the pri
:::image type="content" source="media/billing-reduce-costs/simplified-pricing-tier.png" alt-text="Screenshot of pricing page in Microsoft Sentinel settings, with Pay-As-You-Go selected as current pricing tier." lightbox="media/billing-reduce-costs/simplified-pricing-tier.png":::
-To learn more about how to monitor your costs, see [Manage and monitor costs for Microsoft Sentinel](billing-monitor-costs.md)
+To learn more about how to monitor your costs, see [Manage and monitor costs for Microsoft Sentinel](billing-monitor-costs.md).
For workspaces still using classic pricing tiers, the Microsoft Sentinel pricing tiers don't include Log Analytics charges. For more information, see [Simplified pricing tiers](billing.md#simplified-pricing-tiers).
When hunting or investigating threats in Microsoft Sentinel, you might need to a
## Turn on basic logs data ingestion for data that's high-volume low security value (preview)
-Unlike analytics logs, [basic logs](../azure-monitor/logs/basic-logs-configure.md) are typically verbose. They contain a mix of high volume and low security value data, that isn't frequently used or accessed on demand for ad-hoc querying, investigations and search. Enable basic log data ingestion at a significantly reduced cost for eligible data tables. For more information, see [Microsoft Sentinel Pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
+Unlike analytics logs, [basic logs](../azure-monitor/logs/basic-logs-configure.md) are typically verbose. They contain a mix of high volume and low security value data that isn't frequently used or accessed on demand for ad-hoc querying, investigations, and search. Enable basic log data ingestion at a significantly reduced cost for eligible data tables. For more information, see [Microsoft Sentinel Pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
## Optimize Log Analytics costs with dedicated clusters
-If you ingest at least 500 GB into your Microsoft Sentinel workspace or workspaces in the same region, consider moving to a Log Analytics dedicated cluster to decrease costs. A Log Analytics dedicated cluster Commitment Tier aggregates data volume across workspaces that collectively ingest a total of 500 GB or more.
-
-For more information on how this affects pricing, see [Simplified pricing tier for dedicated cluster](enroll-simplified-pricing-tier.md#simplified-pricing-tiers-for-dedicated-clusters).
+If you ingest at least 500 GB into your Microsoft Sentinel workspace or workspaces in the same region, consider moving to a Log Analytics dedicated cluster to decrease costs. A Log Analytics dedicated cluster Commitment Tier aggregates data volume across workspaces that collectively ingest a total of 500 GB or more. For more information, see [Simplified pricing tier for dedicated cluster](enroll-simplified-pricing-tier.md#simplified-pricing-tiers-for-dedicated-clusters).
You can add multiple Microsoft Sentinel workspaces to a Log Analytics dedicated cluster. There are a couple of advantages to using a Log Analytics dedicated cluster for Microsoft Sentinel:
The [Windows Security Events connector](connect-windows-security-events.md?tabs=
Data collection rules enable you to manage collection settings at scale, while still allowing unique, scoped configurations for subsets of machines. For more information, see [Configure data collection for the Azure Monitor agent](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md).
-Besides for the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor Agent uses these rules to filter the data at the source, and then ingest only the events you've selected, while leaving everything else behind. Selecting specific events to ingest can help you optimize your costs and save more.
+Besides the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor Agent uses these rules to filter the data at the source, and then ingests only the events you selected, while leaving everything else behind. Selecting specific events to ingest can help you optimize your costs and save more.
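To decide which events are worth filtering out, it helps to measure what each event ID currently contributes. The following is a sketch, assuming the standard `SecurityEvent` table and the per-row `_BilledSize` column (in bytes) that Log Analytics records:

```kusto
// Estimate billable volume per Windows event ID over the last 7 days.
SecurityEvent
| where TimeGenerated > ago(7d)
| summarize Events = count(), BilledMB = sum(_BilledSize) / 1024 / 1024 by EventID
| order by BilledMB desc
```

Event IDs that dominate the volume but carry little security value are the first candidates for exclusion in your data collection rules.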
## Next steps -- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn [how to optimize your cloud investment with Microsoft Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
description: Learn how to plan your Microsoft Sentinel costs, and understand pri
- Previously updated : 07/05/2023+ Last updated : 03/07/2024+
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
#Customer intent: As a SOC manager, plan Microsoft Sentinel costs so I can understand and optimize the costs of my SIEM.
Costs for Microsoft Sentinel are only a portion of the monthly costs in your Azu
This article is part of the [Deployment guide for Microsoft Sentinel](deploy-overview.md). + ## Free trial Enable Microsoft Sentinel on an Azure Monitor Log Analytics workspace and the first 10 GB/day is free for 31 days. The cost for both Log Analytics data ingestion and Microsoft Sentinel analysis charges up to the 10 GB/day limit are waived during the 31-day trial period. This free trial is subject to a 20 workspace limit per Azure tenant.
-Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to extra capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
+Usage beyond these limits is charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to extra capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
-During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
+During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days are left until the trial expires.
## Identify data sources and plan costs accordingly
There are two ways to pay for the analytics logs: **Pay-As-You-Go** and **Commit
- **Pay-As-You-Go** is the default model, based on the actual data volume stored and optionally for data retention beyond 90 days. Data volume is measured in GB (10<sup>9</sup> bytes). -- Log Analytics and Microsoft Sentinel have **Commitment Tier** pricing, formerly called Capacity Reservations. These pricing tiers are combined into simplified pricing tiers which are more predictable and offer substantial savings compared to **Pay-As-You-Go** pricing.
+- Log Analytics and Microsoft Sentinel have **Commitment Tier** pricing, formerly called Capacity Reservations. These pricing tiers are combined into simplified pricing tiers that are more predictable and offer substantial savings compared to **Pay-As-You-Go** pricing.
**Commitment Tier** pricing starts at 100 GB/day. Any usage above the commitment level is billed at the Commitment Tier rate you selected. For example, a Commitment Tier of 100-GB bills you for the committed 100-GB data volume, plus any extra GB/day at the discounted rate for that tier. Increase your commitment tier anytime to optimize costs as your data volume increases. Lowering the commitment tier is only allowed every 31 days. To see your current Microsoft Sentinel pricing tier, select **Settings** in Microsoft Sentinel, and then select the **Pricing** tab. Your current pricing tier is marked as **Current tier**.
- To set and change your Commitment Tier, see [Set or change pricing tier](billing-reduce-costs.md#set-or-change-pricing-tier). Workspaces older than July 2023 will have the option to switch to the simplified pricing tiers experience to unify billing meters, or continue to use the classic pricing tiers which separate out the Log Analytics pricing from the classic Microsoft Sentinel classic pricing. For more information, see [simplified pricing tiers](#simplified-pricing-tiers).
+ To set and change your Commitment Tier, see [Set or change pricing tier](billing-reduce-costs.md#set-or-change-pricing-tier). Switch any workspaces older than July 2023 to the simplified pricing tiers experience to unify billing meters. Or, continue to use the classic pricing tiers that separate out the Log Analytics pricing from the classic Microsoft Sentinel pricing. For more information, see [simplified pricing tiers](#simplified-pricing-tiers).
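To judge which Commitment Tier fits, compare your average and peak daily billable ingestion over the last month. Here's a minimal sketch, again using the standard `Usage` table:

```kusto
// Average and peak daily billable ingestion, as a guide for Commitment Tier selection.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1d)
| summarize AvgDailyGB = round(avg(DailyGB), 1), PeakDailyGB = round(max(DailyGB), 1)
```

If the average consistently meets or exceeds a tier's threshold, that tier typically costs less than Pay-As-You-Go for the same volume.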
#### Basic logs
Basic logs are best suited for use in playbook automation, ad-hoc querying, inve
### Simplified pricing tiers
-Simplified pricing tiers combine the data analysis costs for Microsoft Sentinel and ingestion storage costs of Log Analytics into a single pricing tier. Here's a screenshot showing the simplified pricing tier that all new workspaces will use.
+Simplified pricing tiers combine the data analysis costs for Microsoft Sentinel and ingestion storage costs of Log Analytics into a single pricing tier. The following screenshot shows the simplified pricing tier that all new workspaces use.
:::image type="content" source="media/billing/simplified-pricing-tier.png" alt-text="Screenshot shows simplified pricing tier." lightbox="media/billing/simplified-pricing-tier.png":::
-Workspaces configured with classic pricing tiers have the option to switch to the simplified pricing tiers. For more information on how to **Switch to new pricing**, see [Enroll in a simplified pricing tier](enroll-simplified-pricing-tier.md).
+Switch any workspace configured with classic pricing tiers to the simplified pricing tiers. For more information on how to **Switch to new pricing**, see [Enroll in a simplified pricing tier](enroll-simplified-pricing-tier.md).
-Combining the pricing tiers offers a simplification to the overall billing and cost management experience, including visualization in the pricing page, and fewer steps estimating costs in the Azure calculator. To add further value to the new simplified tiers, the current [Microsoft Defender for Servers P2 benefit granting 500 MB/VM/day](../defender-for-cloud/faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) security data ingestion into Log Analytics has been extended to the simplified pricing tiers. This greatly increases the financial benefit of bringing eligible data ingested into Microsoft Sentinel for each VM protected in this manner.
+Combining the pricing tiers simplifies the overall billing and cost management experience, including the visualization on the pricing page, and reduces the steps needed to estimate costs in the Azure calculator. To add further value to the new simplified tiers, the current Microsoft Defender for Servers P2 benefit granting 500 MB of security data ingestion into Log Analytics is extended to the simplified pricing tiers. This change greatly increases the financial benefit of bringing eligible data into Microsoft Sentinel for each virtual machine (VM) protected in this manner. For more information, see [FAQ - Microsoft Defender for Servers P2 benefit granting 500 MB](../defender-for-cloud/faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-).
### Understand your Microsoft Sentinel bill
The costs shown in the following image are for example purposes only. They're no
:::image type="content" source="media/billing/sample-bill-classic.png" alt-text="Screenshot showing the Microsoft Sentinel section of a sample Azure bill, to help you estimate costs." lightbox="media/billing/sample-bill-classic.png":::
-Microsoft Sentinel and Log Analytics charges might appear on your Azure bill as separate line items based on your selected pricing plan. Simplified pricing tiers are represented as a single `sentinel` line item for the pricing tier. Since ingestion and analysis are billed on a daily basis, if your workspace exceeds its Commitment Tier usage allocation in any given day, the Azure bill shows one line item for the Commitment Tier with its associated fixed cost, and a separate line item for the cost beyond the Commitment Tier, billed at the same effective Commitment Tier rate.
+Microsoft Sentinel and Log Analytics charges might appear on your Azure bill as separate line items based on your selected pricing plan. Simplified pricing tiers are represented as a single `sentinel` line item for the pricing tier. Ingestion and analysis are billed on a daily basis. If your workspace exceeds its Commitment Tier usage allocation in any given day, the Azure bill shows one line item for the Commitment Tier with its associated fixed cost, and a separate line item for the cost beyond the Commitment Tier, billed at the same effective Commitment Tier rate.
# [Simplified](#tab/simplified) The following tabs show how Microsoft Sentinel costs appear in the **Service name** and **Meter** columns of your Azure bill depending on your simplified pricing tier.
Learn about pricing for these
- [BYOML pricing](https://azure.microsoft.com/pricing/details/machine-learning-studio/) - [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/)
-Any other services you use could have associated costs.
+Any other services you use might have associated costs.
## Data retention and archived logs costs
-After you enable Microsoft Sentinel on a Log Analytics workspace consider these configuration options:
+After you enable Microsoft Sentinel on a Log Analytics workspace, consider these configuration options:
- Retain all data ingested into the workspace at no charge for the first 90 days. Retention beyond 90 days is charged per the standard [Log Analytics retention prices](https://azure.microsoft.com/pricing/details/monitor/). - Specify different retention settings for individual data types. Learn about [retention by data type](../azure-monitor/logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level).
Removing Microsoft Sentinel doesn't remove the Log Analytics workspace Microsoft
The following data sources are free with Microsoft Sentinel:
-
-- Azure Activity Logs.
-- Office 365 Audit Logs, including all SharePoint activity, Exchange admin activity, and Teams.
-- Security alerts, including alerts from Microsoft Defender XDR, Microsoft Defender for Cloud, Microsoft Defender for Office 365, Microsoft Defender for Identity, Microsoft Defender for Cloud Apps, and Microsoft Defender for Endpoint.
-- Microsoft Defender for Cloud and Microsoft Defender for Cloud Apps alerts.
+- Azure Activity Logs
+- Office 365 Audit Logs, including all SharePoint activity, Exchange admin activity, and Teams
+- Security alerts, including alerts from the following sources:
+ - Microsoft Defender XDR
+ - Microsoft Defender for Cloud
+ - Microsoft Defender for Office 365
+ - Microsoft Defender for Identity
+ - Microsoft Defender for Cloud Apps
+ - Microsoft Defender for Endpoint
+- Alerts from the following sources:
+ - Microsoft Defender for Cloud
+ - Microsoft Defender for Cloud Apps
Although alerts are free, the raw logs for some Microsoft Defender XDR, Defender for Cloud Apps, Microsoft Entra ID, and Azure Information Protection (AIP) data types are paid.
-The following table lists the data sources in Microsoft Sentinel that aren't charged. This is the same list as Log Analytics. For more information, see [excluded tables](../azure-monitor/logs/cost-logs.md#excluded-tables).
+The following table lists the data sources in Microsoft Sentinel and Log Analytics that aren't charged. For more information, see [excluded tables](../azure-monitor/logs/cost-logs.md#excluded-tables).
| Microsoft Sentinel data connector | Free data type | |-|--|
sentinel Bookmarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/bookmarks.md
Title: Use hunting bookmarks for data investigations in Microsoft Sentinel description: This article describes how to use the Microsoft Sentinel hunting bookmarks to keep track of data.--++ - Previously updated : 11/09/2021 Last updated : 03/12/2024+
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Keep track of data during hunting with Microsoft Sentinel Threat hunting typically requires reviewing mountains of log data looking for evidence of malicious behavior. During this process, investigators find events that they want to remember, revisit, and analyze as part of validating potential hypotheses and understanding the full story of a compromise.
-Hunting bookmarks in Microsoft Sentinel help you do this, by preserving the queries you ran in **Microsoft Sentinel - Logs**, along with the query results that you deem relevant. You can also record your contextual observations and reference your findings by adding notes and tags. Bookmarked data is visible to you and your teammates for easy collaboration.
+Hunting bookmarks in Microsoft Sentinel help you by preserving the queries you ran in **Microsoft Sentinel - Logs**, along with the query results that you deem relevant. You can also record your contextual observations and reference your findings by adding notes and tags. Bookmarked data is visible to you and your teammates for easy collaboration.
Now you can identify and address gaps in MITRE ATT&CK technique coverage, across all hunting queries, by mapping your custom hunting queries to MITRE ATT&CK techniques.
-You can also investigate more types of entities while hunting with bookmarks, by mapping the full set of entity types and identifiers supported by Microsoft Sentinel Analytics in your custom queries. This enables you to use bookmarks to explore the entities returned in hunting query results using [entity pages](entities.md#entity-pages), [incidents](investigate-cases.md) and the [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive). If a bookmark captures results from a hunting query, it automatically inherits the query's MITRE ATT&CK technique and entity mappings.
+Investigate more types of entities while hunting with bookmarks, by mapping the full set of entity types and identifiers supported by Microsoft Sentinel Analytics in your custom queries. Use bookmarks to explore the entities returned in hunting query results using [entity pages](entities.md#entity-pages), [incidents](investigate-cases.md) and the [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive). If a bookmark captures results from a hunting query, it automatically inherits the query's MITRE ATT&CK technique and entity mappings.
If you find something that urgently needs to be addressed while hunting in your logs, you can easily create a bookmark and either promote it to an incident or add it to an existing incident. For more information about incidents, see [Investigate incidents with Microsoft Sentinel](investigate-cases.md).
Alternatively, you can view your bookmarked data directly in the **HuntingBookma
Viewing bookmarks from the table enables you to filter, summarize, and join bookmarked data with other data sources, making it easy to look for corroborating evidence. + ## Add a bookmark
-1. In the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Hunting** to run queries for suspicious and anomalous behavior.
+Create a bookmark to preserve the queries, results, your observations, and findings.
+
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Hunting**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Hunting**.
-1. Select one of the hunting queries and on the right, in the hunting query details, select **Run Query**.
+1. Select one of the hunting queries.
+1. In the hunting query details, select **Run Query**.
1. Select **View query results**. For example:
Viewing bookmarks from the table enables you to filter, summarize, and join book
1. On the right, in the **Add bookmark** pane, optionally, update the bookmark name, add tags, and notes to help you identify what was interesting about the item.
-1. Bookmarks can be optionally mapped to MITRE ATT&CK techniques or sub-techniques. MITRE ATT&CK mappings are inherited from mapped values in hunting queries, but you can also create them manually. Select the MITRE ATT&CK tactic associated with the desired technique from the drop-down menu in the **Tactics & Techniques** section of the **Add bookmark** pane. The menu will expand to show all the MITRE ATT&CK techniques, and you can select multiple techniques and sub-techniques in this menu.
+1. Bookmarks can be optionally mapped to MITRE ATT&CK techniques or sub-techniques. MITRE ATT&CK mappings are inherited from mapped values in hunting queries, but you can also create them manually. Select the MITRE ATT&CK tactic associated with the desired technique from the drop-down menu in the **Tactics & Techniques** section of the **Add bookmark** pane. The menu expands to show all the MITRE ATT&CK techniques, and you can select multiple techniques and sub-techniques in this menu.
:::image type="content" source="media/bookmarks/mitre-attack-mapping.png" alt-text="Screenshot of how to map Mitre Attack tactics and techniques to bookmarks.":::
Viewing bookmarks from the table enables you to filter, summarize, and join book
:::image type="content" source="media/bookmarks/map-entity-types-bookmark.png" alt-text="Screenshot to map entity types for hunting bookmarks.":::
- To view the bookmark in the investigation graph, you must map at least one entity. Entity mappings to account, host, IP, and URL entity types you've previously created are supported, preserving backwards compatibility.
+ To view the bookmark in the investigation graph, you must map at least one entity. Entity mappings to account, host, IP, and URL entity types you created are supported, preserving backwards compatibility.
-1. Click **Save** to commit your changes and add the bookmark. All bookmarked data is shared with other analysts, and is a first step toward a collaborative investigation experience.
+1. Select **Save** to commit your changes and add the bookmark. All bookmarked data is shared with other analysts, and is a first step toward a collaborative investigation experience.
-> [!NOTE]
-> The log query results support bookmarks whenever this pane is opened from Microsoft Sentinel. For example, you select **General** > **Logs** from the navigation bar, select event links in the investigations graph, or select an alert ID from the full details of an incident. You can't create bookmarks when the **Logs** pane is opened from other locations, such as directly from Azure Monitor.
+The log query results support bookmarks whenever this pane is opened from Microsoft Sentinel. For example, you select **General** > **Logs** from the navigation bar, select event links in the investigations graph, or select an alert ID from the full details of an incident. You can't create bookmarks when the **Logs** pane is opened from other locations, such as directly from Azure Monitor.
## View and update bookmarks
-1. In the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Hunting**.
+Find and update a bookmark from the bookmark tab.
+
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Hunting**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Hunting**.
2. Select the **Bookmarks** tab to view the list of bookmarks.
-3. To help you find a specific bookmark, use the search box or filter options.
+3. Search or filter to find a specific bookmark or bookmarks.
-4. Select individual bookmarks and view the bookmark details in the right-hand details pane.
+4. Select individual bookmarks to view the bookmark details in the right-hand pane.
-5. Make your changes as needed, which are automatically saved.
+5. Make your changes as needed. Your changes are automatically saved.
## Exploring bookmarks in the investigation graph
-1. In the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Hunting** > **Bookmarks** tab, and select the bookmark or bookmarks you want to investigate.
+Visualize your bookmarked data by launching the investigation experience, where you can view, investigate, and visually communicate your findings through an interactive entity-graph diagram and timeline.
+
+1. From the **Bookmarks** tab, select the bookmark or bookmarks you want to investigate.
2. In the bookmark details, ensure that at least one entity is mapped.
For instructions to use the investigation graph, see [Use the investigation grap
## Add bookmarks to a new or existing incident
-1. In the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Hunting** > **Bookmarks** tab, and select the bookmark or bookmarks you want to add to an incident.
+Add bookmarks to an incident from the bookmarks tab on the **Hunting** page.
+
+1. From the **Bookmarks** tab, select the bookmark or bookmarks you want to add to an incident.
2. Select **Incident actions** from the command bar:
For instructions to use the investigation graph, see [Use the investigation grap
- For a new incident: Optionally update the details for the incident, and then select **Create**. - For adding a bookmark to an existing incident: Select one incident, and then select **Add**.
-To view the bookmark within the incident: Navigate to **Microsoft Sentinel** > **Threat management** > **Incidents** and select the incident with your bookmark. Select **View full details**, and then select the **Bookmarks** tab.
+As an alternative to the **Incident actions** option on the command bar, you can use the context menu (**...**) for one or more bookmarks to select options to **Create new incident**, **Add to existing incident**, and **Remove from incident**.
-> [!TIP]
-> As an alternative to the **Incident actions** option on the command bar, you can use the context menu (**...**) for one or more bookmarks to select options to **Create new incident**, **Add to existing incident**, and **Remove from incident**.
+To view the bookmark within the incident: Navigate to **Microsoft Sentinel** > **Threat management** > **Incidents** and select the incident with your bookmark. Select **View full details**, and then select the **Bookmarks** tab.
## View bookmarked data in logs
-To view bookmarked queries, results, or their history, select the bookmark from the **Hunting** > **Bookmarks** tab, and use the links provided in the details pane:
+View bookmarked queries, results, or their history.
-- **View source query** to view the source query in the **Logs** pane.
+1. Select the bookmark from the **Hunting** > **Bookmarks** tab.
+1. Select the links provided in the details pane:
-- **View bookmark logs** to see all bookmark metadata, which includes who made the update, the updated values, and the time the update occurred.
+ - **View source query** to view the source query in the **Logs** pane.
-You can also view the raw bookmark data for all bookmarks by selecting **Bookmark Logs** from the command bar on the **Hunting** > **Bookmarks** tab:
+ - **View bookmark logs** to see all bookmark metadata, which includes who made the update, the updated values, and the time the update occurred.
+1. View the raw bookmark data for all bookmarks by selecting **Bookmark Logs** from the command bar on the **Hunting** > **Bookmarks** tab:
-This view shows all your bookmarks with associated metadata. You can use [Kusto Query Language](/azure/data-explorer/kql-quick-reference) (KQL) queries to filter down to the latest version of the specific bookmark you are looking for.
+ :::image type="content" source="media/bookmarks/bookmark-logs.png" alt-text="Screenshot of bookmark logs command.":::
-> [!NOTE]
-> There can be a significant delay (measured in minutes) between the time you create a bookmark and when it is displayed in the **Bookmarks** tab.
+This view shows all your bookmarks with associated metadata. You can use [Kusto Query Language](/azure/data-explorer/kql-quick-reference) (KQL) queries to filter down to the latest version of the specific bookmark you're looking for.
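For example, the following sketch returns only the most recent record for each bookmark and drops soft-deleted ones. The `BookmarkId`, `BookmarkName`, `Tags`, and `UpdatedBy` column names are assumptions based on the table's typical schema; **SoftDelete** is described in the deletion section that follows:

```kusto
// Latest version of each bookmark, excluding soft-deleted entries.
HuntingBookmark
| summarize arg_max(TimeGenerated, *) by BookmarkId  // BookmarkId is an assumed column name
| where SoftDelete == false
| project TimeGenerated, BookmarkName, Tags, UpdatedBy
```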
+
+There can be a significant delay (measured in minutes) between the time you create a bookmark and when it's displayed in the **Bookmarks** tab.
## Delete a bookmark
-1. In the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Hunting** > **Bookmarks** tab, and select the bookmark or bookmarks you want to delete.
+Deleting a bookmark removes it from the list on the **Bookmarks** tab. The **HuntingBookmark** table for your Log Analytics workspace continues to contain previous bookmark entries, but the latest entry changes the **SoftDelete** value to true, making it easy to filter out old bookmarks. Deleting a bookmark doesn't remove any entities from the investigation experience that are associated with other bookmarks or alerts.
+
+To delete a bookmark, complete the following steps.
-2. Right-click your selections, and select the option to delete the number of bookmarks you have selected.
+1. From the **Hunting** > **Bookmarks** tab, select the bookmark or bookmarks you want to delete.
-Deleting the bookmark removes the bookmark from the list in the **Bookmark** tab. The **HuntingBookmark** table for your Log Analytics workspace will continue to contain previous bookmark entries, but the latest entry will change the **SoftDelete** value to true, making it easy to filter out old bookmarks. Deleting a bookmark does not remove any entities from the investigation experience that are associated with other bookmarks or alerts.
+2. Right-click, and select the option to delete the selected bookmarks.
-## Next steps
+## Related content
In this article, you learned how to run a hunting investigation using bookmarks in Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles: -- [Proactively hunt for threats](hunting.md)
+- [Threat hunting with Microsoft Sentinel](hunting.md)
- [Use notebooks to run automated hunting campaigns](notebooks.md)-- [Threat hunting with Microsoft Sentinel (Learn module)](/training/modules/hunt-threats-sentinel/)
+- [Threat hunting with Microsoft Sentinel (Training module)](/training/modules/hunt-threats-sentinel/)
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
description: This article describes how to create connections with a GitHub or Azure DevOps repository where you can manage your custom content and deploy it to Microsoft Sentinel. Previously updated : 8/25/2022 Last updated : 03/07/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ #Customer intent: As a SOC collaborator or MSSP analyst, I want to know how to connect my source control repositories for continuous integration and continuous delivery (CI/CD). Specifically as an MSSP content manager, I want to know how to deploy one solution to many customer workspaces and still be able to tailor custom content for their environments.
When creating custom content, you can manage it from your own Microsoft Sentinel workspaces, or an external source control repository. This article describes how to create and manage connections between Microsoft Sentinel and GitHub or Azure DevOps repositories. Managing your content in an external repository allows you to make updates to that content outside of Microsoft Sentinel, and have it automatically deployed to your workspaces. For more information, see [Update custom content with repository connections](ci-cd-custom-content.md). > [!IMPORTANT]
->
-> The Microsoft Sentinel **Repositories** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - The Microsoft Sentinel **Repositories** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - [!INCLUDE [unified-soc-preview](includes/unified-soc-preview-without-alert.md)]
## Prerequisites and scope
Microsoft Sentinel currently supports connections to GitHub and Azure DevOps rep
- Third-party application access via OAuth enabled for Azure DevOps [application connection policies](/azure/devops/organizations/accounts/change-application-access-policies#manage-a-policy). - Ensure custom content files you want to deploy to your workspaces are in relevant [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/index.yml).
-For more information, see [Validate your content](ci-cd-custom-content.md#validate-your-content)
+For more information, see [Validate your content](ci-cd-custom-content.md#validate-your-content).
## Connect a repository
-This procedure describes how to connect a GitHub or Azure DevOps repository to your Microsoft Sentinel workspace, where you can save and manage your custom content, instead of in Microsoft Sentinel.
+This procedure describes how to connect a GitHub or Azure DevOps repository to your Microsoft Sentinel workspace.
Each connection can support multiple types of custom content, including analytics rules, automation rules, hunting queries, parsers, playbooks, and workbooks. For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md).
+You can't create duplicate connections with the same repository and branch in a single Microsoft Sentinel workspace.
+ **Create your connection**:
-1. Make sure that you're signed into your source control app with the credentials you want to use for your connection. If you're currently signed in using different credentials, sign out first.
+1. Make sure that you're signed into your source control app with the credentials you want to use for your connection. If you're currently signed in using different credentials, sign out first.
-1. In Microsoft Sentinel, on the left under **Content management**, select **Repositories**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Repositories**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Repositories**.
-1. Select **Add new**, and then, on the **Create a new connection** page, enter a meaningful name and description for your connection.
+1. Select **Add new**, and then, on the **Create new deployment connection** page, enter a meaningful name and description for your connection.
1. From the **Source Control** dropdown, select the type of repository you want to connect to, and then select **Authorize**.
Each connection can support multiple types of custom content, including analytic
1. Enter your GitHub credentials when prompted.
- The first time you add a connection, you'll see a new browser window or tab, prompting you to authorize the connection to Microsoft Sentinel. If you're already logged into your GitHub account on the same browser, your GitHub credentials will be auto-populated.
+ The first time you add a connection, you're prompted to authorize the connection to Microsoft Sentinel. If you're already logged into your GitHub account on the same browser, your GitHub credentials are autopopulated.
- 1. A **Repository** area now shows on the **Create a new connection** page, where you can select an existing repository to connect to. Select your repository from the list, and then select **Add repository**.
+ 1. A **Repository** area now shows on the **Create new deployment connection** page, where you can select an existing repository to connect to. Select your repository from the list, and then select **Add repository**.
The first time you connect to a specific repository, you'll see a new browser window or tab, prompting you to install the **Azure-Sentinel** app on your repository. If you have multiple repositories, select the ones where you want to install the **Azure-Sentinel** app, and install it.
- You'll be directed to GitHub to continue the app installation.
+ You're directed to GitHub to continue the app installation.
- 1. After the **Azure-Sentinel** app is installed in your repository, the **Branch** dropdown in the **Create a new connection** page is populated with your branches. Select the branch you want to connect to your Microsoft Sentinel workspace.
+ 1. After the **Azure-Sentinel** app is installed in your repository, the **Branch** dropdown in the **Create new deployment connection** page is populated with your branches. Select the branch you want to connect to your Microsoft Sentinel workspace.
- 1. From the **Content Types** dropdown, select the type of content you'll be deploying.
+ 1. From the **Content Types** dropdown, select the type of content you're deploying.
- Both parsers and hunting queries use the **Saved Searches** API to deploy content to Microsoft Sentinel. If you select one of these content types, and also have content of the other type in your branch, both content types are deployed.
- - For all other content types, selecting a content type in the **Create a new connection** pane deploys only that content to Microsoft Sentinel. Content of other types isn't deployed.
+ - For all other content types, selecting a content type in the **Create new deployment connection** pane deploys only that content to Microsoft Sentinel. Content of other types isn't deployed.
1. Select **Create** to create your connection. For example:
Each connection can support multiple types of custom content, including analytic
# [Azure DevOps](#tab/azure-devops)
- > [!NOTE]
- > Due to cross-tenant limitations, if you are creating a connection as a [guest user](../active-directory/external-identities/what-is-b2b.md) on the workspace, your Azure DevOps URL won't appear in the dropdown. Enter it manually instead.
- >
+ You're automatically authorized to Azure DevOps using your current Azure credentials. [Verify that you're authorized to the same Azure DevOps organization](https://aex.dev.azure.com/) that you're connecting to from Microsoft Sentinel, or use an InPrivate browser window to create your connection.
- You're automatically authorized to Azure DevOps using your current Azure credentials. To ensure valid connectivity, [verify that you've authorized to the same Azure DevOps organization](https://aex.dev.azure.com/) that you're connecting to from Microsoft Sentinel or use an InPrivate browser window to create your connection.
+ Due to cross-tenant limitations, if you're creating a connection as a [guest user](../active-directory/external-identities/what-is-b2b.md) on the workspace, your Azure DevOps URL doesn't appear in the dropdown. Enter it manually instead.
1. In Microsoft Sentinel, from the dropdown lists that appear, select your **Organization**, **Project**, **Repository**, **Branch**, and **Content Types**. - Both parsers and hunting queries use the **Saved Searches** API to deploy content to Microsoft Sentinel. If you select one of these content types, and also have content of the other type in your branch, both content types are deployed.
- - For all other content types, selecting a content type in the **Create a new connection** pane deploys only that content to Microsoft Sentinel. Content of other types isn't deployed.
+ - For all other content types, selecting a content type in the **Create new deployment connection** pane deploys only that content to Microsoft Sentinel. Content of other types isn't deployed.
1. Select **Create** to create your connection. For example:
Each connection can support multiple types of custom content, including analytic
- > [!NOTE]
- > You cannot create duplicate connections, with the same repository and branch, in a single Microsoft Sentinel workspace.
- >
-
-After the connection is created, a new workflow or pipeline is generated in your repository, and the content stored in your repository is deployed to your Microsoft Sentinel workspace.
+After you create the connection, a new workflow or pipeline is generated in your repository. The content stored in your repository is deployed to your Microsoft Sentinel workspace.
-The deployment time may vary depending on the volume of content that you're deploying.
+The deployment time might vary depending on the volume of content that you're deploying.
### View the deployment status

-- **In GitHub**: On the repository's **Actions** tab. Select the workflow **.yaml** file shown there to access detailed deployment logs and any specific error messages, if relevant.
-- **In Azure DevOps**: On the repository's **Pipelines** tab.
+**In GitHub**: On the repository's **Actions** tab, select the workflow **.yaml** file to access detailed deployment logs and any specific error messages.
+
+**In Azure DevOps**: View the deployment status from the repository's **Pipelines** tab.
After the deployment is complete:
After the deployment is complete:
:::image type="content" source="media/ci-cd/deployment-logs-status.png" alt-text="Screenshot of a GitHub repository connection's deployment logs.":::
-The default workflow only deploys content that has been modified since the last deployment based on commits to the repository. But you may want to turn off smart deployments or perform other customizations. For example, you can configure different deployment triggers, or deploy content exclusively from a specific root folder. To learn more about how this is done visit [customize repository deployments](ci-cd-custom-deploy.md).
+The default workflow only deploys content that is modified since the last deployment based on commits to the repository. But you might want to turn off smart deployments or perform other customizations. For example, you can configure different deployment triggers, or deploy content exclusively from a specific root folder. To learn more, see [customize repository deployments](ci-cd-custom-deploy.md).
## Edit content
If you edit the content in Microsoft Sentinel instead, make sure to export it to
## Delete content
-Deleting content from your repository doesn't delete it from your Microsoft Sentinel workspace. If you want to remove content that was deployed through repositories, make sure to delete it from both your repository and Sentinel. For example, set a filter for the content based on source name to make is easier to identify content from repositories.
+Deleting content from your repository doesn't delete it from your Microsoft Sentinel workspace. If you want to remove content that was deployed through repositories, delete it from both your repository and Microsoft Sentinel. For example, set a filter for the content based on source name to make it easier to identify content from repositories.
:::image type="content" source="media/ci-cd/delete-repo-content.png" alt-text="Screenshot of analytics rules filtered by source name of repositories.":::
This procedure describes how to remove the connection to a source control reposi
**To remove your connection**:
-1. In Microsoft Sentinel, on the left under **Content management**, select **Repositories**.
+1. In Microsoft Sentinel, under **Content management**, select **Repositories**.
1. In the grid, select the connection you want to remove, and then select **Delete**.
1. Select **Yes** to confirm the deletion.
-After you've removed your connection, content that was previously deployed via the connection remains in your Microsoft Sentinel workspace. Content added to the repository after removing the connection isn't deployed.
+After you remove your connection, content that was previously deployed via the connection remains in your Microsoft Sentinel workspace. Content added to the repository after removing the connection isn't deployed.
-> [!TIP]
-> If you encounter issues or an error message when deleting your connection, we recommend that you check your source control to confirm that the GitHub workflow or Azure DevOps pipeline associated with the connection was deleted.
->
+If you encounter issues or an error message when you delete your connection, we recommend that you check your source control. Confirm that the GitHub workflow or Azure DevOps pipeline associated with the connection is deleted.
-### Removing the Microsoft Sentinel app from your GitHub repository
+### Remove the Microsoft Sentinel app from your GitHub repository
If you intend to delete the Microsoft Sentinel app from a GitHub repository, we recommend that you *first* remove all associated connections from the Microsoft Sentinel **Repositories** page.
-Each Microsoft Sentinel App installation has a unique ID that's used when both adding and removing the connection. If the ID is missing or has been changed, you'll need to both remove the connection from the Microsoft Sentinel **Repositories** page and manually remove the workflow from your GitHub repository to prevent any future content deployments.
+Each Microsoft Sentinel App installation has a unique ID that's used when both adding and removing the connection. If the ID is missing or changed, remove the connection from the Microsoft Sentinel **Repositories** page and manually remove the workflow from your GitHub repository to prevent any future content deployments.
## Next steps
sentinel Configure Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-content.md
In the previous deployment step, you enabled Microsoft Sentinel, health monitori
|**Set up analytics rules** |After you've set up Microsoft Sentinel to collect data from all over your organization, you can begin using threat detection rules or [analytics rules](detect-threats-built-in.md). Select the steps you need to set up and configure your analytics rules:<br><br>- [Create a scheduled query rule](detect-threats-custom.md): Create custom analytics rules to help discover threats and anomalous behaviors in your environment.<br>- [Map data fields to entities](map-data-fields-to-entities.md): Add or change entity mappings in an existing analytics rule.<br>- [Surface custom details in alerts](surface-custom-details-in-alerts.md): Add or change custom details in an existing analytics rule.<br>- [Customize alert details](customize-alert-details.md): Override the default properties of alerts with content from the underlying query results.<br>- [Export and import analytics rules](import-export-analytics-rules.md): Export your analytics rules to Azure Resource Manager (ARM) template files, and import rules from these files. The export action creates a JSON file in your browser's downloads location that you can then rename, move, and otherwise handle like any other file.<br>- [Create near-real-time (NRT) detection analytics rules](create-nrt-rules.md): Create near-real-time analytics rules for up-to-the-minute threat detection out-of-the-box. This type of rule was designed to be highly responsive by running its query at intervals just one minute apart.<br>- [Work with anomaly detection analytics rules](work-with-anomaly-rules.md): Work with built-in anomaly templates that use thousands of data sources and millions of events, or change thresholds and parameters for the anomalies within the user interface.<br>- [Manage template versions for your scheduled analytics rules](manage-analytics-rule-templates.md): Track the versions of your analytics rule templates, and either revert active rules to existing template versions, or update them to new ones.<br>- [Handle ingestion delay in scheduled analytics rules](ingestion-delay.md): Learn how ingestion delay might impact your scheduled analytics rules and how you can fix them to cover these gaps. |
|**Set up automation rules** |[Create automation rules](create-manage-use-automation-rules.md). Define the triggers and conditions that determine when your [automation rule](automate-incident-handling-with-automation-rules.md) runs, the various actions that you can have the rule perform, and the remaining features and functionalities. |
|**Set up playbooks** |A [playbook](automate-responses-with-playbooks.md) is a collection of remediation actions that you run from Microsoft Sentinel as a routine, to help automate and orchestrate your threat response. To set up playbooks:<br><br>- Review these [steps for creating a playbook](automate-responses-with-playbooks.md#steps-for-creating-a-playbook)<br>- [Create playbooks from templates](use-playbook-templates.md): A playbook template is a prebuilt, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios. |
-|**Set up workbooks** |[Workbooks](monitor-your-data.md) provide a flexible canvas for data analysis and the creation of rich visual reports within Microsoft Sentinel. Workbook templates allow you to quickly gain insights across your data as soon as you connect a data source. To set up workbooks:<br><br>- [Create custom workbooks across your data](monitor-your-data.md#create-new-workbook)<br>- [Use existing workbook templates available with packaged solutions](monitor-your-data.md#use-a-workbook-template) |
+|**Set up workbooks** |[Workbooks](monitor-your-data.md) provide a flexible canvas for data analysis and the creation of rich visual reports within Microsoft Sentinel. Workbook templates allow you to quickly gain insights across your data as soon as you connect a data source. To set up workbooks:<br><br>- [Create custom workbooks across your data](monitor-your-data.md#create-new-workbook)<br>- [Use existing workbook templates available with packaged solutions](monitor-your-data.md) |
|**Set up watchlists** |[Watchlists](watchlists.md) allow you to correlate data from a data source you provide with the events in your Microsoft Sentinel environment. To set up watchlists:<br><br>- [Create watchlists](watchlists-create.md)<br>- [Build queries or detection rules with watchlists](watchlists-queries.md): Query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. When you create a watchlist, you define the SearchKey. The search key is the name of a column in your watchlist that you expect to use as a join with other data or as a frequent object of searches. |

## Next steps
sentinel Configure Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-connector.md
+
+ Title: Connect your data sources to Microsoft Sentinel by using data connectors
+description: Learn how to install and configure a data connector in Microsoft Sentinel.
++ Last updated : 03/28/2024+
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#customer intent: As a security architect or SOC analyst, I want to connect my data source so that I can ingest data into Microsoft Sentinel for security monitoring and threat protection.
++
+# Connect your data sources to Microsoft Sentinel by using data connectors
+
+Install and configure data connectors to ingest your data into Microsoft Sentinel. Data connectors are available as part of solutions from the content hub in Microsoft Sentinel. After you install a solution from the content hub, the related data connectors are available to enable and configure. To find and install solutions that include data connectors, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+
+This article provides general information about how to enable a data connector and how to find more detailed installation instructions for other data connectors. For more information about data connectors in Microsoft Sentinel, see the following articles:
+
+- [Microsoft Sentinel data connectors](connect-data-sources.md)
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+++
+## Prerequisites
+
+Before you begin, make sure that you have the appropriate access and that you or someone in your organization has installed the related solution.
+- You must have read and write permissions on the Microsoft Sentinel workspace.
+- Install the solution that includes the data connector from the **Content Hub** in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
++
+## Enable a data connector
+
+After you or someone in your organization installs the solution that includes the data connector you need, configure the data connector to start ingesting data.
+
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Data connectors**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configurations** > **Data connectors**.
+1. Search for and select the connector. If you don't see the data connector you want, install the solution associated with it from the **Content Hub**.
+1. Select **Open connector page**.
+
+ #### [Azure portal](#tab/azure-portal)
+
+ :::image type="content" source="media/configure-data-connector/open-connector-page-option.png" alt-text="Screenshot of data connector details page with open connector page button.":::
+
+ #### [Defender portal](#tab/defender-portal)
+
+ :::image type="content" source="media/configure-data-connector/open-connector-page-option-defender-portal.png" alt-text="Screenshot of data connector details page in the Defender portal.":::
+
+1. Review the **Prerequisites**. To configure the data connector, fulfill all the prerequisites.
+1. Follow the steps outlined in the **Configurations** section.
+
+ For some connectors, find more specific configuration information in the **Collect data** section in the Microsoft Sentinel documentation. For example, see the following articles:
+ - [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
+ - [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+
+ After you configure the data connector, it might take some time for the data to be ingested into Microsoft Sentinel. When the data connector is connected, you see a summary of the data in the **Data received** graph, and the connectivity status of the data types.
+
+ :::image type="content" source="media/configure-data-connector/connected-data-connector.png" alt-text="Screenshot of a data connector page with status connected and graph that shows the data received.":::
+
+## Find support for a data connector
+
+Both Microsoft and other organizations author Microsoft Sentinel data connectors. Find the support contact on the data connector page in Microsoft Sentinel.
+
+1. In the Microsoft Sentinel **Data connectors** page, select the relevant connector.
+1. To access support and maintenance for the connector, use the support contact link in the **Supported by** field on the side panel for the connector.
+
+ :::image type="content" source="media/configure-data-connector/support.png" alt-text="Screenshot showing the Supported by field for a data connector in Microsoft Sentinel." lightbox="media/configure-data-connector/support.png":::
+
+For more information, see [Data connector support](connect-data-sources.md#data-connector-support).
+
+## Related content
+
+For more information about solutions and data connectors in Microsoft Sentinel, see the following articles.
+
+- [Microsoft Sentinel data connectors](connect-data-sources.md)
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+- [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
+- [About Microsoft Sentinel content and solutions](sentinel-solutions.md)
+- [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md)
sentinel Configure Fusion Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-fusion-rules.md
This detection is enabled by default in Microsoft Sentinel. To check or change i
- [Suspicious Resource deployment](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Detections/AzureActivity/NewResourceGroupsDeployedTo.yaml) - [Palo Alto Threat signatures from Unusual IP addresses](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/PaloAlto-PAN-OS/Analytic%20Rules/PaloAlto-UnusualThreatSignatures.yaml)
- To add queries that are not currently available as a rule template, see [create a custom analytics rule with a scheduled query](detect-threats-custom.md#create-a-custom-analytics-rule-with-a-scheduled-query).
+ To add queries that are not currently available as a rule template, see [Create a custom analytics rule from scratch](detect-threats-custom.md).
- [New Admin account activity seen which was not seen historically](https://github.com/Azure/Azure-Sentinel/blob/83c6d8c7f65a5f209f39f3e06eb2f7374fd8439c/Hunting%20Queries/OfficeActivity/new_adminaccountactivity.yaml)
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-data-sources.md
Title: Microsoft Sentinel data connectors
description: Learn about supported data connectors, like Microsoft Defender XDR (formerly Microsoft 365 Defender), Microsoft 365 and Office 365, Microsoft Entra ID, ATP, and Defender for Cloud Apps to Microsoft Sentinel. Previously updated : 05/16/2023 Last updated : 03/02/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#customer intent: As a security architect or SOC analyst, I want to understand what data connectors are in Microsoft Sentinel.
# Microsoft Sentinel data connectors
-After you onboard Microsoft Sentinel into your workspace, use data connectors to start ingesting your data into Microsoft Sentinel. Microsoft Sentinel comes with many out of the box connectors for Microsoft services, which integrate in real time. For example, the Microsoft Defender XDR connector is a [service-to-service connector](#service-to-service-integration-for-data-connectors) that integrates data from Office 365, Microsoft Entra ID, Microsoft Defender for Identity, and Microsoft Defender for Cloud Apps.
+After you onboard Microsoft Sentinel into your workspace, use data connectors to start ingesting your data into Microsoft Sentinel. Microsoft Sentinel comes with many out of the box connectors for Microsoft services, which integrate in real time. For example, the Microsoft Defender XDR connector is a service-to-service connector that integrates data from Office 365, Microsoft Entra ID, Microsoft Defender for Identity, and Microsoft Defender for Cloud Apps.
-Built-in connectors enable connection to the broader security ecosystem for non-Microsoft products. For example, use [Syslog](#syslog), [Common Event Format (CEF)](#common-event-format-cef), or [REST APIs](#rest-api-integration-for-data-connectors) to connect your data sources with Microsoft Sentinel.
+Built-in connectors enable connection to the broader security ecosystem for non-Microsoft products. For example, use Syslog, Common Event Format (CEF), or REST APIs to connect your data sources with Microsoft Sentinel.
-Learn about [types of Microsoft Sentinel data connectors](data-connectors-reference.md) or learn about the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
-
-The Microsoft Sentinel **Data connectors** page shows the full list of connectors and their status for your workspace. Soon this page will only show the list of in-use data connectors. For more information on this upcoming change, see [Out-of-the-box content centralization changes](sentinel-content-centralize.md)
-
-To add more data connectors, install the solution associated with the data connector from the **Content Hub**. For more information, see the following articles:
-- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
-- [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md)
-- [Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md)

<a name="agent-options"></a> <a name="data-connection-methods"></a> <a name="map-data-types-with-microsoft-sentinel-connection-options"></a>

+## Data connectors provided with solutions
+## Data connectors provided with solutions
-## Enable a data connector
+Microsoft Sentinel solutions provide packaged security content, including data connectors, workbooks, analytics rules, playbooks, and more. When you deploy a solution with a data connector, you get the data connector together with related content in the same deployment.
-From the **Data connectors** page, select the active or custom connector you want to connect, and then select **Open connector page**. If you don't see the data connector you want, install the solution associated with it from the **Content Hub**.
+The Microsoft Sentinel **Data connectors** page lists the installed or in-use data connectors.
-- Once you fulfill all the prerequisites listed in the **Instructions** tab, the connector page describes how to ingest the data to Microsoft Sentinel. It may take some time for data to start arriving.
-- After you connect, you see a summary of the data in the **Data received** graph, and the connectivity status of the data types.
-
- :::image type="content" source="media/connect-data-sources/azure-ad-opened-connector-page.png" alt-text="Screenshot showing how to configure data connectors." border="false":::
+#### [Azure portal](#tab/azure-portal)
++
+#### [Defender portal](#tab/defender-portal)
++++
+To add more data connectors, install the solution associated with the data connector from the **Content Hub**. For more information, see the following articles:
-Learn about your specific data connector in the [data connectors reference](data-connectors-reference.md).
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+- [About Microsoft Sentinel content and solutions](sentinel-solutions.md)
+- [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md)
+- [Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md)
+- [Advanced Security Information Model (ASIM) based domain solutions for Microsoft Sentinel](domain-based-essential-solutions.md)
## REST API integration for data connectors
-Many security technologies provide a set of APIs for retrieving log files, and some data sources can use those APIs to connect to Microsoft Sentinel.
+Many security technologies provide a set of APIs for retrieving log files. Some data sources can use those APIs to connect to Microsoft Sentinel.
Data connectors that use APIs either integrate from the provider side or integrate using Azure Functions, as described in the following sections.
-Learn more about data connectors in the [data connectors reference](data-connectors-reference.md).
-
-### REST API integration on the provider side
+### Integration on the provider side
-An API integration built by the provider connects with the provider data sources and pushes data into Microsoft Sentinel custom log tables using the [Azure Monitor Data Collector API](../azure-monitor/logs/data-collector-api.md).
+An API integration built by the provider connects with the provider data sources and pushes data into Microsoft Sentinel custom log tables by using the Azure Monitor Data Collector API. For more information, see [Send log data to Azure Monitor by using the HTTP Data Collector API](/azure/azure-monitor/logs/data-collector-api?branch=main&tabs=powershell).
To learn about REST API integration, read your provider documentation and [Connect your data source to Microsoft Sentinel's REST-API to ingest data](connect-rest-api-template.md).
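The push model is easiest to picture in code. The following is a minimal sketch, not any particular provider's actual integration, of signing and sending a JSON payload to the Data Collector API; the workspace ID, shared key, and table name are placeholders you'd replace with your own values.

```python
import base64, hashlib, hmac, json
from datetime import datetime, timezone

import requests  # third-party HTTP client

WORKSPACE_ID = "<workspace-id>"            # placeholder: Log Analytics workspace ID
SHARED_KEY = "<primary-or-secondary-key>"  # placeholder: workspace shared key
LOG_TYPE = "MyProviderEvents"              # data lands in the MyProviderEvents_CL custom table

def build_signature(date: str, content_length: int) -> str:
    # String-to-sign format required by the Data Collector API.
    string_to_sign = f"POST\n{content_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
    decoded_key = base64.b64decode(SHARED_KEY)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"

def post_events(events: list[dict]) -> None:
    body = json.dumps(events)  # ASCII JSON, so len() is also the byte length
    rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    headers = {
        "Content-Type": "application/json",
        "Log-Type": LOG_TYPE,
        "x-ms-date": rfc1123_date,
        "Authorization": build_signature(rfc1123_date, len(body)),
    }
    url = f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
    requests.post(url, data=body, headers=headers).raise_for_status()

post_events([{"SourceIp": "10.0.0.1", "Action": "blocked"}])
```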
-### REST API integration using Azure Functions
+### Integration using Azure Functions
-Integrations that use [Azure Functions](../azure-functions/index.yml) to connect with a provider API first format the data, and then send it to Microsoft Sentinel custom log tables using the [Azure Monitor Data Collector API](../azure-monitor/logs/data-collector-api.md). Learn how to [use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md).
+Integrations that use Azure Functions to connect with a provider API first format the data, and then send it to Microsoft Sentinel custom log tables using the Azure Monitor Data Collector API.
-> [!IMPORTANT]
-> Integrations that use Azure Functions may have extra data ingestion costs, because you host Azure Functions on your Azure tenant. Learn more about [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
+For more information, see:
+- [Send log data to Azure Monitor by using the HTTP Data Collector API](/azure/azure-monitor/logs/data-collector-api?branch=main&tabs=powershell)
+- [Use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md)
+- [Azure Functions documentation](../azure-functions/index.yml)
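As a rough shape of such an integration, the sketch below uses the Azure Functions Python v1 programming model with a timer trigger; `fetch_events_from_provider` is a hypothetical placeholder for the provider's log-retrieval API, and `post_events` refers to the Data Collector sketch earlier in this section, so this is an illustration of the pattern rather than a documented template.

```python
import logging

import azure.functions as func  # Azure Functions Python library

def main(mytimer: func.TimerRequest) -> None:
    # On each timer run, pull recent events from the provider's API and
    # forward them to the workspace via the Data Collector API.
    if mytimer.past_due:
        logging.warning("Timer is running late")
    events = fetch_events_from_provider()  # placeholder for the provider API call
    post_events(events)                    # reuse post_events() from the earlier sketch

def fetch_events_from_provider() -> list[dict]:
    # Placeholder: call the provider's log-retrieval API here.
    return [{"SourceIp": "10.0.0.1", "Action": "blocked"}]
```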
-## Agent-based integration for data connectors
+Integrations that use Azure Functions might have extra data ingestion costs, because you host Azure Functions in your Azure organization. Learn more about [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
-Microsoft Sentinel can use the Syslog protocol to connect an agent to any data source that can perform real-time log streaming. For example, most on-premises data sources connect using agent-based integration.
+## Agent-based integration for data connectors
-The following sections describe the different types of Microsoft Sentinel agent-based data connectors. Follow the steps in each Microsoft Sentinel data connector page to configure connections using agent-based mechanisms.
+Microsoft Sentinel can use the Syslog protocol to connect an agent to any data source that can perform real-time log streaming. For example, most on-premises data sources connect by using agent-based integration.
-Learn which firewalls, proxies, and endpoints connect to Microsoft Sentinel through CEF or Syslog in the [data connectors reference](data-connectors-reference.md).
+The following sections describe the different types of Microsoft Sentinel agent-based data connectors. To configure connections using agent-based mechanisms, follow the steps in each Microsoft Sentinel data connector page.
### Syslog
-You can stream events from Linux-based, Syslog-supporting devices into Microsoft Sentinel using the [Azure Monitor Agent (AMA)](forward-syslog-monitor-agent.md). Depending on the device type, the agent is installed either directly on the device, or on a dedicated Linux-based log forwarder. The AMA receives events from the Syslog daemon over UDP. The Syslog daemon forwards events to the agent internally, communicating over UDS (Unix Domain Sockets). The AMA then transmits these events to the Microsoft Sentinel workspace.
+You can stream events from Linux-based, Syslog-supporting devices into Microsoft Sentinel by using the Azure Monitor Agent (AMA). Depending on the device type, the agent is installed either directly on the device, or on a dedicated Linux-based log forwarder. The AMA receives events from the Syslog daemon over UDP. The Syslog daemon forwards events to the agent internally, communicating over UDS (Unix Domain Sockets). The AMA then transmits these events to the Microsoft Sentinel workspace.
-Here is a simple flow that shows how Microsoft Sentinel streams Syslog data.
+Here's a simple flow that shows how Microsoft Sentinel streams Syslog data.
1. The device's built-in Syslog daemon collects local events of the specified types, and forwards the events locally to the agent.
1. The agent streams the events to your Log Analytics workspace.
1. After successful configuration, the data appears in the Log Analytics Syslog table.
+For more information, see [Tutorial: Forward Syslog data to a Log Analytics workspace with Microsoft Sentinel by using Azure Monitor Agent](forward-syslog-monitor-agent.md).
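To see the first hop of this flow in isolation, the following sketch (assuming a Linux host whose Syslog daemon listens on `/dev/log`) emits a single test event to the local daemon; once the AMA and its data collection rules are in place, the event should surface in the Syslog table.

```python
import logging
import logging.handlers

# Send a test event to the local Syslog daemon (assumes Linux with /dev/log).
# The daemon forwards it to the Azure Monitor Agent, which ships it to the
# workspace, where it lands in the Syslog table.
handler = logging.handlers.SysLogHandler(
    address="/dev/log",
    facility=logging.handlers.SysLogHandler.LOG_LOCAL4,
)
logger = logging.getLogger("sentinel-syslog-test")
logger.addHandler(handler)
logger.warning("Test event for Microsoft Sentinel Syslog ingestion")
```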
### Common Event Format (CEF)

Log formats vary, but many sources support CEF-based formatting. The Microsoft Sentinel agent, which is actually the Log Analytics agent, converts CEF-formatted logs into a format that Log Analytics can ingest. For data sources that emit data in CEF, set up the Syslog agent and then configure the CEF data flow. After successful configuration, the data appears in the **CommonSecurityLog** table.
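To make the format concrete, here's an illustrative CEF message with made-up field values; it can be sent through the local daemon exactly like the Syslog sketch above.

```python
# Illustrative CEF message (made-up values). The header is pipe-delimited:
# CEF:Version|DeviceVendor|DeviceProduct|DeviceVersion|SignatureID|Name|Severity|Extensions
cef_message = (
    "CEF:0|Contoso|DemoFirewall|1.0|100|Connection blocked|5|"
    "src=10.0.0.1 dst=203.0.113.7 dpt=443 act=blocked"
)
# Sent through the local Syslog daemon exactly like the Syslog sketch above;
# after the CEF data flow is configured, it surfaces in CommonSecurityLog.
```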
-Learn how to [connect CEF-based appliances to Microsoft Sentinel](connect-common-event-format.md).
+For more information, see [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](connect-common-event-format.md).
### Custom logs

For some data sources, you can collect logs as files on Windows or Linux computers using the Log Analytics custom log collection agent.
-Follow the steps in each Microsoft Sentinel data connector page to connect using the Log Analytics custom log collection agent. After successful configuration, the data appears in custom tables.
+To connect using the Log Analytics custom log collection agent, follow the steps in each Microsoft Sentinel data connector page. After successful configuration, the data appears in custom tables.
-Learn how to [collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md).
+For more information, see [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md).
## Service-to-service integration for data connectors
-Microsoft Sentinel uses the Azure foundation to provide out-of-the-box, service-to-service support for Microsoft services and Amazon Web Services.
+Microsoft Sentinel uses the Azure foundation to provide out-of-the-box service-to-service support for Microsoft services and Amazon Web Services.
-Learn how to [connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md) or learn about data connector types in the [data connectors reference](data-connectors-reference.md).
-
-## Deploy data connectors as part of a solution
-
-[Microsoft Sentinel solutions](sentinel-solutions.md) provide packages of security content, including data connectors, workbooks, analytics rules, playbooks, and more. When you deploy a solution with a data connector, you get the data connector together with related content in the same deployment.
-
-Learn how to [centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md) or learn about the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
+For more information, see the following articles:
+- [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
## Data connector support
-Both Microsoft and other organizations author Microsoft Sentinel data connectors. Each data connector has one of these support types:
+Both Microsoft and other organizations author Microsoft Sentinel data connectors. Each data connector has one of the following support types listed on the data connector page in Microsoft Sentinel.
| Support type| Description|
|-|-|
-|**Microsoft-supported**|Applies to:<ul><li>Data connectors for data sources where Microsoft is the data provider and author.</li><li>Some Microsoft-authored data connectors for non-Microsoft data sources.</li></ul>Microsoft supports and maintains data connectors in this category according to the [Microsoft Azure Support Plans](https://azure.microsoft.com/support/options/#overview).<br><br>Partners or the Community support data connectors that are authored by any party other than Microsoft.|
+|**Microsoft-supported**|Applies to:<ul><li>Data connectors for data sources where Microsoft is the data provider and author.</li><li>Some Microsoft-authored data connectors for non-Microsoft data sources.</li></ul>Microsoft supports and maintains data connectors in this category according to the [Microsoft Azure Support Plans](https://azure.microsoft.com/support/options/#overview).<br><br>Partners or the Community support data connectors authored by any party other than Microsoft.|
|**Partner-supported**|Applies to data connectors authored by parties other than Microsoft.<br><br>The partner company provides support or maintenance for these data connectors. The partner company can be an Independent Software Vendor, a Managed Service Provider (MSP/MSSP), a Systems Integrator (SI), or any organization whose contact information is provided on the Microsoft Sentinel page for that data connector.<br><br>For any issues with a partner-supported data connector, contact the specified data connector support contact.|
-|**Community-supported**|Applies to data connectors authored by Microsoft or partner developers that don't have listed contacts for data connector support and maintenance on the specified data connector page in Microsoft Sentinel.<br><br>For questions or issues with these data connectors, you can [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters).|
+|**Community-supported**|Applies to data connectors authored by Microsoft or partner developers that don't have listed contacts for data connector support and maintenance on the data connector page in Microsoft Sentinel.<br><br>For questions or issues with these data connectors, you can [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters).|
-### Find the support contact for a data connector
+For more information, see [Find support for a data connector](configure-data-connector.md#find-support-for-a-data-connector).
-1. In the Microsoft Sentinel **Data connectors** page, select the relevant connector.
-1. To access support and maintenance for the connector, use the support contact link in the **Supported by** field on the side panel for the connecter.
+## Next steps
- :::image type="content" source="media/connect-data-sources/support.png" alt-text="Screenshot showing the Supported by field for a data connector in Microsoft Sentinel." lightbox="media/connect-data-sources/support.png":::
+For more information about data connectors, see the following articles.
-## Next steps
+- [Connect your data sources to Microsoft Sentinel by using data connectors](configure-data-connector.md)
+- [Find your Microsoft Sentinel data connector](data-connectors-reference.md)
+- [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md)
-- To get started with Microsoft Sentinel, you need a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).-- Learn how to [onboard your data to Microsoft Sentinel](quickstart-onboard.md) and [get visibility into your data and potential threats](get-visibility.md).-- To learn about custom data connectors, see [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md).-- For a basic Infrastructure as Code (IaC) reference of Bicep, ARM and Terraform to deploy data connectors in Microsoft Sentinel, see [Microsoft Sentinel data connector IaC reference](/azure/templates/microsoft.securityinsights/dataconnectors).
+For a basic Infrastructure as Code (IaC) reference of Bicep, Azure Resource Manager, and Terraform to deploy data connectors in Microsoft Sentinel, see [Microsoft Sentinel data connector IaC reference](/azure/templates/microsoft.securityinsights/dataconnectors).
sentinel Connect Mdti Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-mdti-data-connector.md
Title: Enable data connector for Microsoft's threat intelligence
-description: Learn how to ingest Microsoft's threat intelligence into your Sentinel workspace.
+description: Learn how to ingest Microsoft's threat intelligence into your Sentinel workspace to generate high fidelity alerts and incidents.
Previously updated : 03/27/2023 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#customer intent: As a SOC admin, I want to utilize the best threat intelligence from Microsoft, so I can generate high fidelity alerts and incidents.
# Enable data connector for Microsoft Defender Threat Intelligence
Bring high fidelity indicators of compromise (IOC) generated by Microsoft Defend
> [!IMPORTANT] > The Microsoft Defender Threat Intelligence data connector is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
+> [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
## Prerequisites

- In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
Bring high fidelity indicators of compromise (IOC) generated by Microsoft Defend
To import threat indicators into Microsoft Sentinel from MDTI, follow these steps:
-1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
-
-1. Choose the **workspace** to which you want to import the MDTI indicators from.
-
-1. Select **Content hub** from the menu.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
1. Find and select the **Threat Intelligence** solution.
For more information about how to manage the solution components, see [Discover
## Enable the Microsoft Defender Threat Intelligence data connector
-1. To configure the MDTI data connector, select the **Data connectors** menu.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Data connectors**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Data connectors**.
1. Find and select the Microsoft Defender Threat Intelligence data connector > **Open connector page** button.
At this point, the ingested indicators are now available for use in the *TI map.
You can find the new indicators in the **Threat intelligence** blade or directly in **Logs** by querying the **ThreatIntelligenceIndicator** table. For more information, see [Work with threat indicators](work-with-threat-indicators.md).
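As an example of checking the result programmatically, here's a sketch using the `azure-monitor-query` library to run that query against the workspace; the workspace ID is a placeholder.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential  # pip install azure-identity
from azure.monitor.query import LogsQueryClient    # pip install azure-monitor-query

client = LogsQueryClient(DefaultAzureCredential())

# Count indicators ingested in the last day, grouped by source.
response = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder Log Analytics workspace ID
    query="ThreatIntelligenceIndicator | summarize count() by SourceSystem",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(list(row))
```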
-## Next steps
+## Related content
In this document, you learned how to connect Microsoft Sentinel to Microsoft's threat intelligence feed with the MDTI data connector. To learn more about Microsoft Defender Threat Intelligence, see the following articles.
sentinel Connect Threat Intelligence Taxii https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-taxii.md
Title: Connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds | Microsoft Docs
+ Title: Connect to STIX/TAXII threat intelligence feeds
+ description: Learn about how to connect Microsoft Sentinel to industry-standard threat intelligence feeds to import threat indicators. Previously updated : 03/27/2023 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#customer intent: As a SOC admin, I want to connect Microsoft Sentinel to a STIX/TAXII feed to ingest threat intelligence, so I can generate alerts and incidents.
# Connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds -
-**See also**: [Connect your threat intelligence platform (TIP) to Microsoft Sentinel](connect-threat-intelligence-tip.md)
- The most widely adopted industry standard for the transmission of threat intelligence is a [combination of the STIX data format and the TAXII protocol](https://oasis-open.github.io/cti-documentation/). If your organization receives threat indicators from solutions that support the current STIX/TAXII version (2.0 or 2.1), you can use the **Threat Intelligence - TAXII data connector** to bring your threat indicators into Microsoft Sentinel. This connector enables a built-in TAXII client in Microsoft Sentinel to import threat intelligence from TAXII 2.x servers. :::image type="content" source="media/connect-threat-intelligence-taxii/threat-intel-taxii-import-path.png" alt-text="TAXII import path":::
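For a feel of what the built-in client does on each polling interval, here's a rough sketch of a TAXII 2.1 poll using the open-source `taxii2-client` package; the collection URL and credentials are placeholders for a hypothetical server, not a real feed.

```python
from taxii2client.v21 import Collection  # pip install taxii2-client

# Placeholder collection URL and credentials for a hypothetical TAXII 2.1 server.
collection = Collection(
    "https://taxii.example.com/api1/collections/11111111-2222-3333-4444-555555555555/",
    user="taxii-user",
    password="taxii-password",
)

# Fetch STIX 2.1 objects added since a given timestamp, much as the built-in
# client does on each polling interval.
envelope = collection.get_objects(added_after="2024-01-01T00:00:00Z")
for obj in envelope.get("objects", []):
    if obj.get("type") == "indicator":
        print(obj["pattern"])
```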
To import STIX formatted threat indicators to Microsoft Sentinel from a TAXII se
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Microsoft Sentinel, and specifically about the [TAXII threat intelligence feeds](threat-intelligence-integration.md#taxii-threat-intelligence-feeds) that can be integrated with Microsoft Sentinel. ++
+**See also**: [Connect your threat intelligence platform (TIP) to Microsoft Sentinel](connect-threat-intelligence-tip.md)
+
## Prerequisites

- In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
TAXII 2.x servers advertise API Roots, which are URLs that host Collections of t
To import threat indicators into Microsoft Sentinel from a TAXII server, follow these steps:
-1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
-
-1. Choose the **workspace** to which you want to import threat indicators from the TAXII server.
-
-1. Select **Content hub** from the menu.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
1. Find and select the **Threat Intelligence** solution.
-1. Select the :::image type="icon" source="media/connect-threat-intelligence-taxii/install-update-button.png"::: **Install/Update** button.
+1. Select the :::image type="icon" source="mediti-data-connector/install-update-button.png"::: **Install/Update** button.
- For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md).
+For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md).
## Enable the Threat intelligence - TAXII data connector
When relevant, the following IP addresses are those to include in your allowlist
:::row-end:::
-## Next steps
+## Related content
In this document, you learned how to connect Microsoft Sentinel to threat intelligence feeds using the TAXII protocol. To learn more about Microsoft Sentinel, see the following articles.
sentinel Connect Threat Intelligence Tip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-tip.md
Title: Connect your threat intelligence platform to Microsoft Sentinel | Microsoft Docs
+ Title: Connect your threat intelligence platform
+ description: Learn how to connect your threat intelligence platform (TIP) or custom feed to Microsoft Sentinel and send threat indicators.-+ Previously updated : 11/09/2021- Last updated : 3/14/2024+
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#customer intent: As a SOC admin, I want to use a Threat Intelligence Platform solution to ingest threat intelligence, so I can generate alerts and incidents.
# Connect your threat intelligence platform to Microsoft Sentinel

>[!NOTE]
> This data connector is on a path for deprecation. More details will be published on the precise timeline. Use the new threat intelligence upload indicators API data connector for new solutions going forward.
->
-
-For more information, see [Connect your threat intelligence platform to Microsoft Sentinel with the upload indicators API](connect-threat-intelligence-upload-api.md).
+> For more information, see [Connect your threat intelligence platform to Microsoft Sentinel with the upload indicators API](connect-threat-intelligence-upload-api.md).
Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or SIEMs such as Microsoft Sentinel. The **Threat Intelligence Platforms data connector** allows you to use these solutions to import threat indicators into Microsoft Sentinel.
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Mic
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+
## Prerequisites

- In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
Follow these steps to import threat indicators to Microsoft Sentinel from your i
<a name='sign-up-for-an-application-id-and-client-secret-from-your-azure-active-directory'></a>
-### Sign up for an Application ID and Client secret from your Microsoft Entra ID
+## Sign up for an Application ID and Client secret from your Microsoft Entra ID
Whether you are working with a TIP or with a custom solution, the tiIndicators API requires some basic information to allow you to connect your feed to it and send it threat indicators. The three pieces of information you need are:
Now that your app has been registered and permissions have been granted, you can
> [!IMPORTANT] > You must copy the **client secret** before leaving this screen. You cannot retrieve this secret again if you navigate away from this page. You will need this value when you configure your TIP or custom solution.
-### Input this information into your TIP solution or custom application
+## Input this information into your TIP solution or custom application
You now have all three pieces of information you need to configure your TIP or custom solution to send threat indicators to Microsoft Sentinel.
You now have all three pieces of information you need to configure your TIP or c
Once this configuration is complete, threat indicators will be sent from your TIP or custom solution, through the **Microsoft Graph tiIndicators API**, targeted at Microsoft Sentinel.
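To sketch the sending side of that hand-off: the TIP or custom solution acquires a token with those three values through the client-credentials flow and posts indicators to the Graph beta tiIndicators endpoint that this (deprecated) connector relies on. A minimal illustration with the `msal` library follows; all IDs are placeholders, and the indicator body shows only a plausible minimal shape rather than the full schema.

```python
import msal      # pip install msal
import requests

TENANT_ID = "<tenant-id>"          # placeholder
CLIENT_ID = "<application-id>"     # placeholder
CLIENT_SECRET = "<client-secret>"  # placeholder

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

indicator = {
    "action": "alert",
    "description": "Sample indicator from a custom TIP integration",
    "expirationDateTime": "2024-12-31T00:00:00Z",
    "targetProduct": "Azure Sentinel",
    "threatType": "WatchList",
    "tlpLevel": "green",
    "networkDestinationIPv4": "203.0.113.7",  # made-up example observable
}
resp = requests.post(
    "https://graph.microsoft.com/beta/security/tiIndicators",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=indicator,
)
resp.raise_for_status()
```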
-### Enable the Threat Intelligence Platforms data connector in Microsoft Sentinel
+## Enable the Threat Intelligence Platforms data connector in Microsoft Sentinel
The last step in the integration process is to enable the **Threat Intelligence Platforms data connector** in Microsoft Sentinel. Enabling the connector is what allows Microsoft Sentinel to receive the threat indicators sent from your TIP or custom solution. These indicators will be available to all Microsoft Sentinel workspaces for your organization. Follow these steps to enable the Threat Intelligence Platforms data connector for each workspace:
-1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
-
-1. Choose the **workspace** to which you want to import the threat indicators sent from your TIP or custom solution.
-
-1. Select **Content hub** from the menu.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
-1. Find and select the **Threat Intelligence** solution using the list view.
+1. Find and select the **Threat Intelligence** solution.
-1. Select the :::image type="icon" source="media/connect-threat-intelligence-tip/install-update-button.png"::: **Install/Update** button.
+1. Select the :::image type="icon" source="mediti-data-connector/install-update-button.png"::: **Install/Update** button.
- For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md).
+For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md).
-1. To configure the TIP data connector, select the **Data connectors** menu.
+1. To configure the TIP data connector, select **Configuration** > **Data connectors**.
1. Find and select the **Threat Intelligence Platforms** data connector > **Open connector page** button. :::image type="content" source="media/connect-threat-intelligence-tip/tip-data-connector-config.png" alt-text="Screenshot displaying the data connectors page with the TIP data connector listed." lightbox="media/connect-threat-intelligence-tip/tip-data-connector-config.png":::
-1. As youΓÇÖve already completed the app registration and configured your TIP or custom solution to send threat indicators, the only step left is to select the **Connect** button.
+1. As you've already completed the app registration and configured your TIP or custom solution to send threat indicators, the only step left is to select the **Connect** button.
Within a few minutes, threat indicators should begin flowing into this Microsoft Sentinel workspace. You can find the new indicators in the **Threat intelligence** blade, accessible from the Microsoft Sentinel navigation menu.
-## Next steps
+## Related content
In this document, you learned how to connect your threat intelligence platform to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles.
sentinel Connect Threat Intelligence Upload Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-upload-api.md
Title: Connect your threat intelligence platform with upload indicators API
+ Title: Connect your TIP with upload indicators API
description: Learn how to connect your threat intelligence platform (TIP) or custom feed using the upload indicators API to Microsoft Sentinel. Previously updated : 07/10/2023 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#customer intent: As a SOC admin, I want to connect my Threat Intelligence Platform with the upload indicators API to ingest threat intelligence, so I can utilize the benefits of this updated API.
# Connect your threat intelligence platform to Microsoft Sentinel with the upload indicators API
-Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or SIEMs such as Microsoft Sentinel. The **Threat Intelligence Upload Indicators API** data connector allows you to use these solutions to import threat indicators into Microsoft Sentinel.
-
-This data connector uses the Sentinel upload indicators API to ingest threat intelligence indicators into Microsoft Sentinel.
+Many organizations use threat intelligence platform (TIP) solutions to aggregate threat indicator feeds from various sources. From the aggregated feed, the data is curated to apply to security solutions such as network devices, EDR/XDR solutions, or SIEMs such as Microsoft Sentinel. The **Threat Intelligence Upload Indicators API** data connector allows you to use these solutions to import threat indicators into Microsoft Sentinel. This data connector uses the Sentinel upload indicators API to ingest threat intelligence indicators into Microsoft Sentinel. For more information, see [Threat Intelligence](understand-threat-intelligence.md).
:::image type="content" source="media/connect-threat-intelligence-upload-api/threat-intel-upload-api.png" alt-text="Threat intelligence import path":::
-Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Microsoft Sentinel.
- > [!IMPORTANT] > The Microsoft Sentinel upload indicators API and **Threat Intelligence Upload Indicators API** data connector are in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+> [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Mic
- The Microsoft Entra application must be granted the Microsoft Sentinel contributor role at the workspace level. ## Instructions+ Follow these steps to import threat indicators to Microsoft Sentinel from your integrated TIP or custom threat intelligence solution:+ 1. Register a Microsoft Entra application and record its application ID. 1. Generate and record a client secret for your Microsoft Entra application. 1. Assign your Microsoft Entra application the Microsoft Sentinel contributor role or equivalent.
Follow these steps to import threat indicators to Microsoft Sentinel from your i
<a name='register-an-azure-ad-application'></a>
-### Register a Microsoft Entra application
+## Register a Microsoft Entra application
The [default user role permissions](../active-directory/fundamentals/users-default-permissions.md#restrict-member-users-default-permissions) allow users to create application registrations. If this setting has been switched to **No**, you'll need permission to manage applications in Microsoft Entra ID. Any of the following Microsoft Entra roles include the required permissions:

- Application administrator
For more information on registering your Microsoft Entra application, see [Regis
Once you've registered your application, record its Application (client) ID from the application's **Overview** tab.
-### Generate and record client secret
+## Generate and record client secret
Now that your application has been registered, generate and record a client secret.
Now that your application has been registered, generate and record a client secr
For more information on generating a client secret, see [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret).
-### Assign a role to the application
+## Assign a role to the application
+
The upload indicators API ingests threat indicators at the workspace level and allows a least privilege role of Microsoft Sentinel contributor.

1. From the Azure portal, go to Log Analytics workspaces.
The upload indicators API ingests threat indicators at the workspace level and a
For more information on assigning roles to applications, see [Assign a role to the application](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application).
-### Enable the Threat Intelligence upload indicators API data connector in Microsoft Sentinel
+## Enable the Threat Intelligence upload indicators API data connector in Microsoft Sentinel
Enable the **Threat Intelligence Upload Indicators API** data connector to allow Microsoft Sentinel to receive threat indicators sent from your TIP or custom solution. These indicators are available to the Microsoft Sentinel workspace you configure.
-1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
-1. Choose the **workspace** where you want to import the threat indicators.
-1. Select **Content hub** from the menu.
-1. Find and select the **Threat Intelligence** solution using the list view.
-1. Select the :::image type="icon" source="media/connect-threat-intelligence-tip/install-update-button.png"::: **Install/Update** button.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
+
+1. Find and select the **Threat Intelligence** solution.
+
+1. Select the :::image type="icon" source="mediti-data-connector/install-update-button.png"::: **Install/Update** button.
- For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md).
+For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md).
-1. The data connector is now visible in **Data Connectors** page. Open the data connector page to find more information on configuring your application to this API.
+1. The data connector is now visible in **Configuration** > **Data Connectors**. Open the data connector page to find more information on configuring your application with this API.
:::image type="content" source="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png" alt-text="Screenshot displaying the data connectors page with the upload API data connector listed." lightbox="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png":::
-### Configure your TIP solution or custom application
+## Configure your TIP solution or custom application
The following configuration information is required by the upload indicators API:

- Application (client) ID
Enter these values in the configuration of your integrated TIP or custom solutio
:::image type="content" source="media/connect-threat-intelligence-upload-api/upload-api-data-connector-connected.png" alt-text="Screenshot showing upload indicators API data connector in the connected state." lightbox="media/connect-threat-intelligence-upload-api/upload-api-data-connector-connected.png":::
-## Next steps
+## Related content
In this document, you learned how to connect your threat intelligence platform to Microsoft Sentinel. To learn more about using threat indicators in Microsoft Sentinel, see the following articles.
sentinel Create Manage Use Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md
Title: Create and use Microsoft Sentinel automation rules to manage response description: This article explains how to create and use automation rules in Microsoft Sentinel to manage and handle incidents, in order to maximize your SOC's efficiency and effectiveness in response to security threats. Previously updated : 05/09/2023 Last updated : 04/03/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Create and use Microsoft Sentinel automation rules to manage response
-> [!IMPORTANT]
->
-> Some features of automation rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> Features in preview will be so indicated when they are mentioned throughout this article.
- This article explains how to create and use automation rules in Microsoft Sentinel to manage and orchestrate threat response, in order to maximize your SOC's efficiency and effectiveness. In this article you'll learn how to define the triggers and conditions that will determine when your automation rule will run, the various actions that you can have the rule perform, and the remaining features and functionalities.
+> [!IMPORTANT]
+>
+> Noted features of automation rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
+ ## Design your automation rule
+Before you create your automation rule, we recommend that you determine its scope and design, including the trigger, conditions, and actions that will make up your rule.
+ ### Determine the scope

The first step in designing and defining your automation rule is figuring out which incidents or alerts you want it to apply to. This determination will directly impact how you create the rule.
The first step in designing and defining your automation rule is figuring out wh
You also want to determine your use case. What are you trying to accomplish with this automation? Consider the following options:

- Create tasks for your analysts to follow in triaging, investigating, and remediating incidents.
-- Suppress noisy incidents (see [this article on handling false positives](false-positives.md#add-exceptions-by-using-automation-rules) instead)
+- Suppress noisy incidents. (Alternatively, use other methods to [handle false positives in Microsoft Sentinel](false-positives.md).)
- Triage new incidents by changing their status from New to Active and assigning an owner.
- Tag incidents to classify them.
- Escalate an incident by assigning a new owner.
You also want to determine your use case. What are you trying to accomplish with
Do you want this automation to be activated when new incidents or alerts are created? Or anytime an incident gets updated?
-Automation rules are triggered **when an incident is created or updated** or **when an alert is created**. Recall that incidents include alerts, and that both alerts and incidents are created by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
+Automation rules are triggered **when an incident is created or updated** or **when an alert is created**. Recall that incidents include alerts, and that both alerts and incidents can be created by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
The following table shows the different possible scenarios that will cause an automation rule to run.

| Trigger type | Events that cause the rule to run |
| -- | -- |
-| **When incident is created** | - A new incident is created by an analytics rule.<br>- An incident is ingested from Microsoft Defender XDR.<br>- A new incident is created manually. |
-| **When incident is updated**<br> | - An incident's status is changed (closed/reopened/triaged).<br>- An incident's owner is assigned or changed.<br>- An incident's severity is raised or lowered.<br>- Alerts are added to an incident.<br>- Comments, tags, or tactics are added to an incident. |
-| **When alert is created**<br> | - An alert is created by a scheduled analytics rule.
+| **When incident is created** | <li>A new incident is created by an analytics rule.<li>An incident is ingested from Microsoft Defender XDR.<li>A new incident is created manually. |
+| **When incident is updated**<br> | <li>An incident's status is changed (closed/reopened/triaged).<li>An incident's owner is assigned or changed.<li>An incident's severity is raised or lowered.<li>Alerts are added to an incident.<li>Comments, tags, or tactics are added to an incident. |
+| **When alert is created**<br> | <li>An alert is created by an analytics rule. |
## Create your automation rule

Most of the following instructions apply to any and all use cases for which you'll create automation rules.

-- For the use case of suppressing noisy incidents, see [this article on handling false positives](false-positives.md#add-exceptions-by-using-automation-rules).
-- For creating an automation rule that will apply to a single specific analytics rule, see [this article on configuring automated response in analytics rules](detect-threats-custom.md#set-automated-responses-and-create-the-rule).
+If you're looking to suppress noisy incidents, try [handling false positives](false-positives.md#add-exceptions-by-using-automation-rules).
+
+If you want to create an automation rule to apply to a specific analytics rule, see [Set automated responses and create the rule](detect-threats-custom.md#set-automated-responses-and-create-the-rule).
+
+**To create your automation rule**:
+
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
-1. From the **Automation** blade in the Microsoft Sentinel navigation menu, select **Create** from the top menu and choose **Automation rule**.
+1. From the **Automation** page in the Microsoft Sentinel navigation menu, select **Create** from the top menu and choose **Automation rule**.
- :::image type="content" source="./media/create-manage-use-automation-rules/add-rule-automation.png" alt-text="Screenshot of creating a new automation rule in the Automation blade." lightbox="./media/create-manage-use-automation-rules/add-rule-automation.png":::
+ #### [Azure portal](#tab/azure-portal)
+ :::image type="content" source="./media/create-manage-use-automation-rules/add-rule-automation.png" alt-text="Screenshot of creating a new automation rule in the Automation page." lightbox="./media/create-manage-use-automation-rules/add-rule-automation.png":::
-1. The **Create new automation rule** panel opens. Enter a name for your rule.
+ #### [Defender portal](#tab/defender-portal)
+ :::image type="content" source="./media/create-manage-use-automation-rules/add-rule-automation-defender.png" alt-text="Screenshot of creating a new automation rule in the Automation page." lightbox="./media/create-manage-use-automation-rules/add-rule-automation-defender.png":::
- :::image type="content" source="media/create-manage-use-automation-rules/create-automation-rule.png" alt-text="Screenshot of Create new automation rule wizard.":::
+
+
+1. The **Create new automation rule** panel opens. In the **Automation rule name** field, enter a name for your rule.
### Choose your trigger
From the **Trigger** drop-down, select the appropriate trigger according to the
### Define conditions
-#### Base conditions
+Use the options in the **Conditions** area to define conditions for your automation rule.
+
+- Rules you create for when an alert is created support only the **If Analytic rule name** property in your condition. Select whether you want the rule to be inclusive (*Contains*) or exclusive (*Does not contain*), and then select the analytic rule name from the drop-down list.
+
+- Rules you create for when an incident is created or updated support a large variety of conditions, depending on your environment. These options start with whether your workspace is onboarded to the unified security operations platform:
+
+ #### [Onboarded workspaces](#tab/onboarded)
+
+ If your workspace is onboarded to the unified security operations platform, start by selecting one of the following operators, in either the Azure or the Defender portal:
+
+ - **AND**: individual conditions that are evaluated as a group. The rule executes if *all* the conditions of this type are met.
+
+ To work with the **AND** operator, select the **+ Add** expander and choose **Condition (And)** from the drop-down list. The list of conditions is populated by incident property and [entity property](entities-reference.md) fields.
+
+ - **OR** (also known as *condition groups*): groups of conditions, each of which are evaluated independently. The rule executes if one or more groups of conditions are true. To learn how to work with these complex types of conditions, see [Add advanced conditions to automation rules](add-advanced-conditions-to-automation-rules.md).
+
+ For example:
+
+ :::image type="content" source="media/create-manage-use-automation-rules/conditions-onboarded.png" alt-text="Screenshot of automation rule conditions when your workspace is onboarded to the unified security operations platform.":::
+
+ #### [Workspaces not onboarded](#tab/not-onboarded)
-1. **Incident provider**: Incidents can have two possible sources: they can be created inside Microsoft Sentinel, and they can also be [imported from&mdash;and synchronized with&mdash;Microsoft Defender XDR](microsoft-365-defender-sentinel-integration.md).
+ If your workspace isn't onboarded to the unified security operations platform, start by defining the following condition properties:
+
+ - **Incident provider**: Incidents can have two possible sources: they can be created inside Microsoft Sentinel, and they can also be [imported from&mdash;and synchronized with&mdash;Microsoft Defender XDR](microsoft-365-defender-sentinel-integration.md).
- If you selected one of the incident triggers and you want the automation rule to take effect only on incidents created in Microsoft Sentinel, or alternatively, only on those imported from Microsoft Defender XDR, specify the source in the **If Incident provider equals** condition. (This condition will be displayed only if an incident trigger is selected.)
+ If you selected one of the incident triggers and you want the automation rule to take effect only on incidents created in Microsoft Sentinel, or alternatively, only on those imported from Microsoft Defender XDR, specify the source in the **If Incident provider equals** condition. (This condition will be displayed only if an incident trigger is selected.)
-1. **Analytics rule name**: For all trigger types, if you want the automation rule to take effect only on certain analytics rules, specify which ones by modifying the **If Analytics rule name contains** condition. (This condition will *not* be displayed if Microsoft Defender XDR is selected as the incident provider.)
+ - **Analytic rule name**: For all trigger types, if you want the automation rule to take effect only on certain analytics rules, specify which ones by modifying the **If Analytics rule name contains** condition. (This condition will *not* be displayed if Microsoft Defender XDR is selected as the incident provider.)
-#### Other conditions (incidents only)
+ Then, continue by selecting one of the following operators:
-Add any other conditions you want this automation rule's activation to depend on. You now have two ways to add conditions:
+ - **AND**: individual conditions that are evaluated as a group. The rule executes if *all* the conditions of this type are met.
-- **AND conditions**: individual conditions that will be evaluated as a group. The rule will execute if *all* the conditions of this type are met. This type of condition will be explained below.
+ To work with the **AND** operator, select the **+ Add** expander and choose **Condition (And)** from the drop-down list. The list of conditions is populated by incident property and [entity property](entities-reference.md) fields.
-- **OR conditions** (also known as *condition groups*): groups of conditions, each of which will be evaluated independently. The rule will execute if one or more groups of conditions are true. To learn how to work with these complex types of conditions, see [Add advanced conditions to automation rules](add-advanced-conditions-to-automation-rules.md).
+ - **OR** (also known as *condition groups*): groups of conditions, each of which are evaluated independently. The rule executes if one or more groups of conditions are true. To learn how to work with these complex types of conditions, see [Add advanced conditions to automation rules](add-advanced-conditions-to-automation-rules.md).
-Select the **+ Add** expander and choose **Condition (And)** from the drop-down list. The list of conditions is populated by incident property and [entity property](entities-reference.md) fields.
+ For example:
+ :::image type="content" source="media/create-manage-use-automation-rules/conditions-not-onboarded.png" alt-text="Screenshot of automation rule conditions when the workspace isn't onboarded to the unified security operations platform.":::
+
+
+
+ If you selected **When an incident is updated** as the trigger, start by defining your conditions, and then adding extra operators and values as needed.
+
+**To define your conditions**:
1. Select a property from the first drop-down box on the left. You can begin typing any part of a property name in the search box to dynamically filter the list, so you can find what you're looking for quickly.

    :::image type="content" source="media/create-manage-use-automation-rules/filter-list.png" alt-text="Screenshot of typing in a search box to filter the list of choices.":::

1. Select an operator from the next drop-down box to the right.

    :::image type="content" source="media/create-manage-use-automation-rules/select-operator.png" alt-text="Screenshot of selecting a condition operator for automation rules.":::
- The list of operators you can choose from varies according to the selected trigger and property. Here's a summary of what's available:
+ The list of operators you can choose from varies according to the selected trigger and property.
- ##### Conditions available with the create trigger
+ #### Conditions available with the create trigger
| Property | Operator set |
| -- | -- |
- | - Title<br>- Description<br>- Tag<br>- All listed entity properties | - Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with |
- | - Severity<br>- Status<br>- Incident provider<br>- Custom details key (Preview) | - Equals/Does not equal |
- | - Tactics<br>- Alert product names<br>- Custom details value (Preview) | - Contains/Does not contain |
+ | - **Title**<br>- **Description**<br>- All listed **entity properties** | - Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with |
+ | - **Tag** (See [individual vs. collection](automate-incident-handling-with-automation-rules.md#tag-property-individual-vs-collection)) | **Any individual tag:**<br>- Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with<br><br>**Collection of all tags:**<br>- Contains/Does not contain |
+ | - **Severity**<br>- **Status**<br>- **Custom details key** | - Equals/Does not equal |
+ | - **Tactics**<br>- **Alert product names**<br>- **Custom details value**<br>- **Analytic rule name** | - Contains/Does not contain |
- ##### Conditions available with the update trigger
+ #### Conditions available with the update trigger
| Property | Operator set |
| -- | -- |
- | - Title<br>- Description<br>- Tag<br>- All listed entity properties | - Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with |
- | - Tag (in addition to above)<br>- Alerts<br>- Comments | - Added |
- | - Severity<br>- Status | - Equals/Does not equal<br>- Changed<br>- Changed from<br>- Changed to |
- | - Owner | - Changed |
- | - Incident provider<br>- Updated by<br>- Custom details key (Preview) | - Equals/Does not equal |
- | - Tactics | - Contains/Does not contain<br>- Added |
- | - Alert product names<br>- Custom details value (Preview) | - Contains/Does not contain |
+ | - **Title**<br>- **Description**<br>- All listed **entity properties** | - Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with |
+ | - **Tag** (See [individual vs. collection](automate-incident-handling-with-automation-rules.md#tag-property-individual-vs-collection)) | **Any individual tag:**<br>- Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with<br><br>**Collection of all tags:**<br>- Contains/Does not contain |
+ | - **Tag** (in addition to above)<br>- **Alerts**<br>- **Comments** | - Added |
+ | - **Severity**<br>- **Status** | - Equals/Does not equal<br>- Changed<br>- Changed from<br>- Changed to |
+ | - **Owner** | - Changed |
+ | - **Updated by**<br>- **Custom details key** | - Equals/Does not equal |
+ | - **Tactics** | - Contains/Does not contain<br>- Added |
+ | - **Alert product names**<br>- **Custom details value**<br>- **Analytic rule name** | - Contains/Does not contain |
-1. Enter a value in the text box on the right. Depending on the property you chose, this might be a drop-down list from which you would select the values you choose. You might also be able to add several values by selecting the icon to the right of the text box (highlighted by the red arrow below).
+1. Enter a value in the field on the right. Depending on the property you chose, this might be either a text box or a drop-down in which you select from a closed list of values. You might also be able to add several values by selecting the dice icon to the right of the text box.
:::image type="content" source="media/create-manage-use-automation-rules/add-values-to-condition.png" alt-text="Screenshot of adding values to your condition in automation rules."::: Again, for setting complex **Or** conditions with different fields, see [Add advanced conditions to automation rules](add-advanced-conditions-to-automation-rules.md).
-#### Conditions based on custom details
+#### Conditions based on tags
+
+You can create two kinds of conditions based on tags:
-You can set the value of a [custom detail surfaced in an incident](surface-custom-details-in-alerts.md) as a condition of an automation rule. Recall that custom details are data points in raw event log records that can be surfaced and displayed in alerts and the incidents generated from them. Through custom details you can get to the actual relevant content in your alerts without having to dig through query results.
+- Conditions with **Any individual tag** operators evaluate the specified value against every tag in the collection. The evaluation is *true* when *at least one tag* satisfies the condition.
+- Conditions with **Collection of all tags** operators evaluate the specified value against the collection of tags as a single unit. The evaluation is *true* only if *the collection as a whole* satisfies the condition.
-To add a condition based on a custom detail, take the following steps:
+To add one of these conditions based on an incident's tags, take the following steps:
1. Create a new automation rule as described above.

1. Add a condition or a condition group.
+1. Select **Tag** from the properties drop-down list.
+
+1. Select the operators drop-down list to reveal the available operators to choose from.
+
+ ##### [Onboarded workspaces](#tab/onboarded)
+
+ :::image type="content" source="media/create-manage-use-automation-rules/tag-create-condition-defender.png" alt-text="Screenshot of list of operators for tag condition in create trigger rule--for onboarded workspaces." lightbox="media/create-manage-use-automation-rules/tag-create-condition-defender.png":::
+
+ ##### [Workspaces not onboarded](#tab/not-onboarded)
+
+ :::image type="content" source="media/create-manage-use-automation-rules/tag-create-condition-azure.png" alt-text="Screenshot of list of operators for tag condition in create trigger rule--for non-onboarded workspaces." lightbox="media/create-manage-use-automation-rules/tag-create-condition-azure.png":::
+
+
+
+ Notice how the operators are divided into two categories, as described earlier. Choose your operator carefully based on how you want the tags to be evaluated.
+
+ For more information, see [*Tag* property: individual vs. collection](automate-incident-handling-with-automation-rules.md#tag-property-individual-vs-collection).
+
+#### Conditions based on custom details
+
+You can set the value of a [custom detail surfaced in an incident](surface-custom-details-in-alerts.md) as a condition of an automation rule. Recall that custom details are data points in raw event log records that can be surfaced and displayed in alerts and the incidents generated from them. Use custom details to get to the actual relevant content in your alerts without having to dig through query results.
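As a hedged illustration of how a custom detail gets into an incident in the first place, consider a sketch like the following; the table, the threshold, and the **SuspiciousPort** detail name are assumptions for the example, not values the feature requires.

```kusto
// Hypothetical example: a scheduled analytics rule query whose
// DestinationPort output column could be surfaced as a custom detail
// (for example, named "SuspiciousPort") and then matched by an
// automation rule condition such as "Custom details key Equals SuspiciousPort".
CommonSecurityLog
| where DeviceAction == "deny"
| summarize DenyCount = count() by DestinationPort, SourceIP
| where DenyCount > 100
```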
+
+**To add a condition based on a custom detail**:
+
+1. Create a new automation rule as described [earlier](#create-your-automation-rule).
+
+1. Add a condition or a condition group.
1. Select **Custom details key** from the properties drop-down list. Select **Equals** or **Does not equal** from the operators drop-down list. For the custom details condition, the values in the last drop-down list come from the custom details that were surfaced in all the analytics rules listed in the first condition. Select the custom detail you want to use as a condition.

    :::image type="content" source="media/create-manage-use-automation-rules/custom-detail-key-condition.png" alt-text="Screenshot of adding a custom detail key as a condition.":::
-1. You've now chosen the field you want to evaluate for this condition. Now you have to specify the value appearing in that field that will make this condition evaluate to *true*.
+1. You chose the field you want to evaluate for this condition. Now specify the value appearing in that field that makes this condition evaluate to *true*.
    Select **+ Add item condition**.

    :::image type="content" source="media/create-manage-use-automation-rules/add-item-condition.png" alt-text="Screenshot of selecting add item condition for automation rules.":::
If you add a **Run playbook** action, you will be prompted to choose from the dr
- Only playbooks that start with the **incident trigger** can be run from automation rules using one of the incident triggers, so only they will appear in the list. Likewise, only playbooks that start with the **alert trigger** are available in automation rules using the alert trigger.

-- <a name="explicit-permissions"></a>Microsoft Sentinel must be granted explicit permissions in order to run playbooks. If a playbook appears "grayed out" in the drop-down list, it means Sentinel does not have permission to that playbook's resource group. Click the **Manage playbook permissions** link to assign permissions.
+- <a name="explicit-permissions"></a>Microsoft Sentinel must be granted explicit permissions in order to run playbooks. If a playbook appears "grayed out" in the drop-down list, it means Sentinel does not have permission to that playbook's resource group. Select the **Manage playbook permissions** link to assign permissions.
- In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and click **Apply**.
+ In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and select **Apply**.
:::image type="content" source="./media/tutorial-respond-threats-playbook/manage-permissions.png" alt-text="Manage permissions":::
- You yourself must have **owner** permissions on any resource group to which you want to grant Microsoft Sentinel permissions, and you must have the **Logic App Contributor** role on any resource group containing playbooks you want to run.
+ You yourself must have **owner** permissions on any resource group to which you want to grant Microsoft Sentinel permissions, and you must have the **Microsoft Sentinel Automation Contributor** role on any resource group containing playbooks you want to run.
- If you don't yet have a playbook that will take the action you have in mind, [create a new playbook](tutorial-respond-threats-playbook.md). You will have to exit the automation rule creation process and restart it after you have created your playbook.
You can change the order of actions in your rule even after you've added them. S
1. Under **Rule expiration**, if you want your automation rule to expire, set an expiration date (and optionally, a time). Otherwise, leave it as *Indefinite*.
-1. The **Order** field is pre-populated with the next available number for your rule's trigger type. This number determines where in the sequence of automation rules (of the same trigger type) this rule will run. You can change the number if you want this rule to run before an existing rule.
+1. The **Order** field is prepopulated with the next available number for your rule's trigger type. This number determines where in the sequence of automation rules (of the same trigger type) this rule will run. You can change the number if you want this rule to run before an existing rule.
See [Notes on execution order and priority](automate-incident-handling-with-automation-rules.md#notes-on-execution-order-and-priority) for more information.
-1. Click **Apply**. You're done!
+1. Select **Apply**. You're done!
:::image type="content" source="media/create-manage-use-automation-rules/finish-creating-rule.png" alt-text="Screenshot of final steps of creating automation rule."::: ## Audit automation rule activity
-Find out what automation rules may have done to a given incident. You have a full record of incident chronicles available to you in the *SecurityIncident* table in the **Logs** blade. Use the following query to see all your automation rule activity:
+Find out what automation rules might have done to a given incident. You have a full record of incident chronicles available to you in the *SecurityIncident* table in the **Logs** page in the Azure portal, or the **Advanced hunting** page in the Defender portal. Use the following query to see all your automation rule activity:
```kusto
SecurityIncident
SecurityIncident
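To focus the audit trail on a single incident, you can extend that query along the following lines. This is a sketch based on common *SecurityIncident* columns; the incident number and the "Automation" filter value are illustrative assumptions to adapt to your environment.

```kusto
// Sketch: audit what automation rules did to one incident.
// The incident number and ModifiedBy filter are illustrative assumptions.
SecurityIncident
| where IncidentNumber == 1234
| where ModifiedBy has "Automation"
| project TimeGenerated, Title, Status, Owner, ModifiedBy
| order by TimeGenerated desc
```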
Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined. See [Notes on execution order and priority](automate-incident-handling-with-automation-rules.md#notes-on-execution-order-and-priority) for more information.
-Playbook actions within an automation rule may be treated differently under some circumstances, according to the following criteria:
+Playbook actions within an automation rule might be treated differently under some circumstances, according to the following criteria:
| Playbook run time | Automation rule advances to the next action... |
| -- | -- |
sentinel Create Nrt Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md
Title: Work with near-real-time (NRT) detection analytics rules in Microsoft Sen
description: This article explains how to view and create near-real-time (NRT) detection analytics rules in Microsoft Sentinel. Previously updated : 11/02/2022 Last updated : 03/28/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Work with near-real-time (NRT) detection analytics rules in Microsoft Sentinel
Microsoft Sentinel's [near-real-time analytics rules](near-real-time-rules.md)
For the time being, these templates have limited application as outlined below, but the technology is rapidly evolving and growing.

## View near-real-time (NRT) rules
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
+# [Azure portal](#tab/azure-portal)
+
+1. From the **Configuration** section of the Microsoft Sentinel navigation menu, select **Analytics**.
+
+1. On the **Analytics** screen, with the **Active rules** tab selected, filter the list for **NRT** templates:
+
+ 1. Select **Add filter** and choose **Rule type** from the list of filters.
-1. In the **Active rules** tab of the **Analytics** blade, filter the list for **NRT** templates:
+ 1. From the resulting list, select **NRT**. Then select **Apply**.
- 1. Click the **Rule type** filter, then the drop-down list that appears below.
+# [Defender portal](#tab/defender-portal)
- 1. Unmark **Select all**, then mark **NRT**.
+1. From the Microsoft Defender navigation menu, expand **Microsoft Sentinel**, then **Configuration**. Select **Analytics**.
- 1. If necessary, click the top of the drop-down list to retract it, then click **OK**.
+1. On the **Analytics** screen, with the **Active rules** tab selected, filter the list for **NRT** templates:
+
+ 1. Select **Add filter** and choose **Rule type** from the list of filters.
+
+ 1. From the resulting list, select **NRT**. Then select **Apply**.
++ ## Create NRT rules You create NRT rules the same way you create regular [scheduled-query analytics rules](detect-threats-custom.md):
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
+# [Azure portal](#tab/azure-portal)
-1. Select **Create** from the button bar, then **NRT query rule** from the drop-down list.
+1. From the **Configuration** section of the Microsoft Sentinel navigation menu, select **Analytics**.
+
+1. In the action bar at the top, select **+Create** and select **NRT query rule**. This opens the **Analytics rule wizard**.
:::image type="content" source="media/create-nrt-rules/create-nrt-rule.png" alt-text="Screenshot shows how to create a new NRT rule." lightbox="media/create-nrt-rules/create-nrt-rule.png":::
-1. Follow the instructions of the [**analytics rule wizard**](detect-threats-custom.md).
+# [Defender portal](#tab/defender-portal)
+
+1. From the Microsoft Defender navigation menu, expand **Microsoft Sentinel**, then **Configuration**. Select **Analytics**.
+
+1. In the action bar at the top of the grid, select **+Create** and select **NRT query rule**. This opens the **Analytics rule wizard**.
+
+ :::image type="content" source="media/create-nrt-rules/defender-create-nrt-rule.png" alt-text="Screenshot shows how to create a new NRT rule." lightbox="media/create-nrt-rules/create-nrt-rule.png":::
+++
+3. Follow the instructions of the [**analytics rule wizard**](detect-threats-custom.md).
The configuration of NRT rules is in most ways the same as that of scheduled analytics rules.
sentinel Create Tasks Automation Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-tasks-automation-rule.md
Title: Create incident tasks in Microsoft Sentinel using automation rules description: This article explains how to use automation rules to create lists of incident tasks, in order to standardize analyst workflow processes in Microsoft Sentinel. Previously updated : 11/24/2022 Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++
# Create incident tasks in Microsoft Sentinel using automation rules
Another article, at the following links, addresses scenarios that apply more to
- [View and follow incident tasks](work-with-tasks.md#view-and-follow-incident-tasks)
- [Manually add an ad-hoc task to an incident](work-with-tasks.md#manually-add-an-ad-hoc-task-to-an-incident)

## Prerequisites

The **Microsoft Sentinel Responder** role is required to create automation rules and to view and edit incidents, both of which are necessary to add, view, and edit tasks.
Give your automation rule a name that describes what it does.
For example, filter by **Analytics rule name**:
- - You may want to add tasks to incidents based on the types of threats detected by an analytics rule or a group of analytics rules, that need to be handled according to a certain workflow. Search for and select the relevant analytics rules from the drop-down list.
+ - You might want to add tasks to incidents based on the types of threats detected by an analytics rule or a group of analytics rules that need to be handled according to a certain workflow. Search for and select the relevant analytics rules from the drop-down list.
- - Or, you may want to add tasks that are relevant for incidents across all types of threats (in this case, leave the default selection of **All** as is).
+ - Or, you might want to add tasks that are relevant for incidents across all types of threats (in this case, leave the default selection of **All** as is).
In either case, you can add more conditions to narrow the scope of incidents to which your automation rule will apply. Learn more about [adding advanced conditions to automation rules](add-advanced-conditions-to-automation-rules.md).
sentinel Create Tasks Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-tasks-playbook.md
Title: Create and perform incident tasks in Microsoft Sentinel using playbooks description: This article explains how to use playbooks to create (and optionally perform) incident tasks, in order to manage complex analyst workflow processes in Microsoft Sentinel. Previously updated : 11/24/2022 Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Create and perform incident tasks in Microsoft Sentinel using playbooks
In this example we're going to add a playbook action that researches an IP addre
1. Inside the **For each** loop, select **Add an action**. Add a **Condition** from the **Control** actions library.
- Add the **Last analysis statistics Malicious** dynamic content item from the **Get an IP report** output (you may have to select "See more" to find it), select the **is greater than** operator, and enter `0` as the value. This condition asks the question "Did the Virus Total IP report have any results?"
+ Add the **Last analysis statistics Malicious** dynamic content item from the **Get an IP report** output (you might have to select "See more" to find it), select the **is greater than** operator, and enter `0` as the value. This condition asks the question "Did the Virus Total IP report have any results?"
:::image type="content" source="media/create-tasks-playbook/set-condition.png" alt-text="Screenshot shows how to set a true-false condition in a playbook.":::
sentinel Customize Alert Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customize-alert-details.md
Last updated 03/05/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
# Customize alert details in Microsoft Sentinel

This article explains how to override the default properties of alerts with content from the underlying query results.
-In the process of creating a scheduled analytics rule, as the first step you define a name and description for the rule, and you assign it a severity and MITRE ATT&CK tactics. All alerts generated by a given rule - and all incidents created as a result - will inherit the name, description, severity, and tactics defined in the rule, without regard to the particular content of a specific instance of the alert.
+In the process of creating a [scheduled analytics rule](detect-threats-custom.md), as the first step you define a name and description for the rule, and you assign it a severity and MITRE ATT&CK tactics. All alerts generated by a given rule - and all incidents created as a result - will inherit the name, description, severity, and tactics defined in the rule, without regard to the particular content of a specific instance of the alert.
With the **alert details** feature, you can override these and other default properties of alerts in two ways:
With the **alert details** feature, you can override these and other default pro
- Customize the severity, tactics, and other properties of a given instance of an alert (see the full list of properties below) with the values of any relevant fields from the query output. If the selected fields are empty or have values that don't match the field data type, the respective alert properties will revert to their defaults (for tactics and severity, those specified in the first page of the wizard).

> [!IMPORTANT]
-> Some alert details' customizability (see those so indicated below) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
+> - Some alert details' customizability (see those so indicated below) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
Follow the procedure detailed below to use the alert details feature. These steps are part of the [analytics rule creation wizard](detect-threats-custom.md), but they're presented here independently to cover the scenario of adding or changing alert details in an existing analytics rule.

## How to customize alert details
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
+1. Enter the **Analytics** page in the portal through which you access Microsoft Sentinel:
+
+ # [Azure portal](#tab/azure)
+
+ From the **Configuration** section of the Microsoft Sentinel navigation menu, select **Analytics**.
+
+ # [Defender portal](#tab/defender)
+
+ From the Microsoft Defender navigation menu, expand **Microsoft Sentinel**, then **Configuration**. Select **Analytics**.
+
+
1. Select a scheduled query rule and select **Edit**. Or create a new rule by selecting **Create > Scheduled query rule** at the top of the screen.
Follow the procedure detailed below to use the alert details feature. These step
1. When you have finished customizing your alert details, if you're now creating the rule, continue to the next tab in the wizard. If you're editing an existing rule, select the **Review and create** tab. Once the rule validation is successful, select **Save**.
- > [!NOTE]
- >
- > **Service limits**
- > - The combined size limit for all alert details and [custom details](surface-custom-details-in-alerts.md), collectively, is **64 KB**.
+ > [!NOTE]
+ >
+ > **Service limits**
+ > - The combined size limit for all alert details and [custom details](surface-custom-details-in-alerts.md), collectively, is **64 KB**.
## Next steps
sentinel Customize Entity Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customize-entity-activities.md
Title: Customize activities on Microsoft Sentinel entity timelines | Microsoft D
description: Add customized activities to those Microsoft Sentinel tracks and displays on the timeline of entity pages Previously updated : 11/09/2021 Last updated : 03/16/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Customize activities on entity page timelines
> [!IMPORTANT]
>
> - Activity customization is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
## Introduction
In addition to the activities tracked and presented in the timeline by Microsoft
- Add new activities to the entity timeline by modifying existing out-of-the-box activity templates.

-- Add new activities from custom logs - for example, from a physical access-control log, you can add a user's entry and exit activities for a particular building to the user's timeline.
+- Add new activities from custom logs. For example, from a physical access-control log, you can add a user's entry and exit activities for a particular restricted area&mdash;say, a server room&mdash;to the user's timeline.
## Getting started
+- Users of Microsoft Sentinel in the Azure portal, select the **Azure portal** tab below.
+- Users of the unified security operations platform in the Microsoft Defender portal, select the **Defender portal** tab.
+
+# [Azure portal](#tab/azure)
+ 1. From the Microsoft Sentinel navigation menu, select **Entity behavior**.
-1. In the **Entity behavior** blade, select **Customize entity page** at the top of the screen.
+1. On the **Entity behavior** page, select **Customize entity page (Preview)** at the top of the screen.
:::image type="content" source="./media/customize-entity-activities/entity-behavior-blade.png" alt-text="Entity behavior page":::
-1. You'll see a page with a list of any activities you've created in the **My activities** tab. In the **Activity templates** tab, you'll see the collection of activities offered out-of-the-box by Microsoft security researchers. These are the activities that are already being tracked and displayed on the timelines in your entity pages.
+# [Defender portal](#tab/defender)
+
+1. In the Microsoft Defender portal, find any entity page.
+ 1. Select **Assets > Devices** or **Identities**.
+ 1. Select a device or a user from the list. If you selected a user, then select **View user page** on the following popup.
+
+1. On the entity page, select the **Sentinel events** tab.
+
+1. On the **Sentinel events** tab, select **Customize Sentinel activities**.
+ :::image type="content" source="media/customize-entity-activities/identity-entity-page-defender.png" alt-text="Screenshot of Defender entity page menu.":::
+
++
+On the **Customize Sentinel activities** page, you'll see a list of any activities you've created in the **My activities** tab. In the **Activity templates** tab, you'll see the collection of activities offered out-of-the-box by Microsoft security researchers. These are the activities that are already being tracked and displayed on the timelines in your entity pages.
- > [!NOTE]
- > - As long as you have not created any user-defined activities, your entity pages will display all the activities listed under the **Activity templates** tab.
- >
- > - Once you define a single custom activity, your entity pages will display **only** those activities that appear in the **My activities** tab.
- >
- > - If you want to continue seeing the out-of-the-box activities in your entity pages, you must create an activity for each template you want to be tracked and displayed. Follow the instructions under "Create an activity from a template" below.
+- As long as you have not created any user-defined activities, your entity pages will display *all* the activities listed under the **Activity templates** tab.
+
+- Once you create or customize an activity, your entity pages will display *only* the activities that appear in the **My activities** tab.
+
+- If you want to continue seeing the out-of-the-box activities in your entity pages, you must create an activity for each template you want to be tracked and displayed. Follow the instructions under "Create an activity from a template" below.
## Create an activity from a template
-1. Click on the **Activity templates** tab to see the various activities available by default. You can filter the list by entity type as well as by data source. Selecting an activity from the list will display the following details in the preview pane:
+1. Select the **Activity templates** tab to see the various activities available by default. You can filter the list by entity type as well as by data source. Selecting an activity from the list will display the following information in the details pane:
- - A description of the activity
+ - A description of the activity
- The data source that provides the events that make up the activity
In addition to the activities tracked and presented in the timeline by Microsoft
- The query that results in the detection of this activity
-1. Click the **Create activity** button at the bottom of the preview pane to start the activity creation wizard.
+1. Select **Create activity** at the bottom of the details pane to start the activity creation wizard.
+
+ # [Azure portal](#tab/azure)
+
+ :::image type="content" source="./media/customize-entity-activities/activity-details.png" alt-text="Screenshot of activity template list in Azure portal.":::
+
+ # [Defender portal](#tab/defender)
+
+ :::image type="content" source="./media/customize-entity-activities/activity-details-defender.png" alt-text="Screenshot of activity template list in Defender portal.":::
+
+ When you select **Create activity** in the Defender portal, you are redirected to the Microsoft Sentinel activity wizard in the Azure portal in a new tab.
- :::image type="content" source="./media/customize-entity-activities/activity-details.png" alt-text="View activity details":::
+
1. The **Activity wizard - Create new activity from template** will open, with its fields already populated from the template. You can make changes as you like in the **General** and **Activity configuration** tabs, or leave everything as is to continue viewing the out-of-the-box activity.
At least one identifier is required in a query.
| | Host_NetBiosName + Host_NTDomain | similar to fully qualified domain name (FQDN) |
| | Host_NetBiosName + Host_DnsDomain | similar to fully qualified domain name (FQDN) |
| | Host_AzureID | the Microsoft Entra object ID of the host in Microsoft Entra ID (if Microsoft Entra domain joined) |
-| | Host_OMSAgentID | the OMS Agent ID of the agent installed on a specific host (unique per host) |
-|
+| | Host_OMSAgentID | the OMS Agent ID of the agent installed on a specific host (unique per host)
Based on the entity you selected, you'll see the available identifiers. Selecting a relevant identifier pastes it into the query at the location of the cursor.
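For example, a host-entity activity query might look like the following sketch; the table, the event ID, and the `{{Host_NetBiosName}}` placeholder (representing the identifier pasted in by the wizard) are assumptions to adapt to your own data.

```kusto
// Illustrative sketch of an activity query for a host entity.
// {{Host_NetBiosName}} stands in for the identifier pasted by the wizard;
// the table and event ID are assumptions for this example.
SecurityEvent
| where EventID == 4624   // successful sign-in events
| where Computer has '{{Host_NetBiosName}}'
| project TimeGenerated, Account, Computer, Activity
```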
You can also use the **Activities** filter to present or hide specific activitie
## Next steps In this document, you learned how to create custom activities for your entity page timelines. To learn more about Microsoft Sentinel, see the following articles:-- Get the complete picture on [entity pages](identify-threats-with-entity-behavior-analytics.md).
+- Get the complete picture on [entity pages](entity-pages.md).
+- Learn about [User and Entity Behavior Analytics (UEBA)](identify-threats-with-entity-behavior-analytics.md).
- See the full list of [entities and identifiers](entities-reference.md).
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 07/26/2023 Last updated : 03/02/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Find your Microsoft Sentinel data connector
This article lists all supported, out-of-the-box data connectors and links to ea
> [!IMPORTANT]
> - Noted Microsoft Sentinel data connectors are currently in **Preview**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> - For connectors that use the Log Analytics agent, the agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
+> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
Data connectors are available as part of the following offerings:

-- Solutions: Many data connectors are deployed as part of [Microsoft Sentinel solution](sentinel-solutions.md) together with related content like analytics rules, workbooks and playbooks. For more information, see the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
+- Solutions: Many data connectors are deployed as part of [Microsoft Sentinel solution](sentinel-solutions.md) together with related content like analytics rules, workbooks, and playbooks. For more information, see the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
- Community connectors: More data connectors are provided by the Microsoft Sentinel community and can be found in the [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?filters=solution-templates&page=1&search=sentinel). Documentation for community data connectors is the responsibility of the organization that created the connector.
Data connectors are available as part of the following offerings:
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-### Data connector prerequisites
+## Data connector prerequisites
[!INCLUDE [data-connector-prereq](includes/data-connector-prereq.md)]
sentinel Define Playbook Access Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/define-playbook-access-restrictions.md
Title: Configure advanced security for Microsoft Sentinel playbooks description: This article shows how to define an access restriction policy for Microsoft Sentinel Standard-plan playbooks, so that they can support private endpoints. Previously updated : 12/27/2022 Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Configure advanced security for Microsoft Sentinel playbooks
-> [!IMPORTANT]
->
-> The new version of access restriction policies is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- This article shows how to define an [access restriction policy](../app-service/overview-access-restrictions.md) for Microsoft Sentinel Standard-plan playbooks, so that they can support private endpoints. Defining this policy will ensure that **only Microsoft Sentinel will have access** to the Standard logic app containing your playbook workflows. Learn more about [using private endpoints to secure traffic between Standard logic apps and Azure virtual networks](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md).
+> [!IMPORTANT]
+>
+> The new version of access restriction policies is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
+ ## Define an access restriction policy
-1. From the Microsoft Sentinel navigation menu, select **Automation**. Select the **Active playbooks** tab.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
+
+1. On the **Automation** page, select the **Active playbooks** tab.
1. Filter the list for Standard-plan apps.

1. Select the **Plan** filter.
sentinel Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md
When you create an analytics rule, an access permissions token is applied to the
There is one exception to this, however: when a rule is created to access workspaces in other subscriptions or tenants, as happens in the case of an MSSP, Microsoft Sentinel takes extra security measures to prevent unauthorized access to customer data. For these kinds of rules, the credentials of the user that created the rule are applied to the rule instead of an independent access token, so that when the user no longer has access to the other subscription or tenant, the rule stops working.
-If you operate Microsoft Sentinel in a cross-subscription or cross-tenant scenario, when one of your analysts or engineers loses access to a particular workspace, any rules created by that user stops working. You will get a health monitoring message regarding "insufficient access to resource", and the rule will be [auto-disabled](detect-threats-custom.md#issue-a-scheduled-rule-failed-to-execute-or-appears-with-auto-disabled-added-to-the-name) after having failed a certain number of times.
+If you operate Microsoft Sentinel in a cross-subscription or cross-tenant scenario, when one of your analysts or engineers loses access to a particular workspace, any rules created by that user stop working. You will get a health monitoring message regarding "insufficient access to resource", and the rule will be [auto-disabled](troubleshoot-analytics-rules.md#issue-a-scheduled-rule-failed-to-execute-or-appears-with-auto-disabled-added-to-the-name) after having failed a certain number of times.
## Export rules to an ARM template
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
Title: Create custom analytics rules to detect threats with Microsoft Sentinel | Microsoft Docs
+ Title: Create a custom analytics rule from scratch in Microsoft Sentinel
description: Learn how to create custom analytics rules to detect security threats with Microsoft Sentinel. Take advantage of event grouping, alert grouping, and alert enrichment, and understand AUTO DISABLED. Previously updated : 05/28/2023 Last updated : 03/26/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
-# Create custom analytics rules to detect threats
+# Create a custom analytics rule from scratch
-After [connecting your data sources](quickstart-onboard.md) to Microsoft Sentinel, create custom analytics rules to help discover threats and anomalous behaviors in your environment.
+You've set up [connectors and other means of collecting activity data](connect-data-sources.md) across your digital estate. Now you need to dig through all that data to detect patterns of activity and discover activities that don't fit those patterns and that could represent a security threat.
-Analytics rules search for specific events or sets of events across your environment, alert you when certain event thresholds or conditions are reached, generate incidents for your SOC to triage and investigate, and respond to threats with automated tracking and remediation processes.
+Microsoft Sentinel and its many [solutions provided in the Content hub](sentinel-solutions.md) offer templates for the most commonly used types of analytics rules, and you're strongly encouraged to make use of those templates, customizing them to fit your specific scenarios. But you might need something completely different; in that case, you can create a rule from scratch, using the analytics rule wizard.
-> [!TIP]
-> When creating custom rules, use existing rules as templates or references. Using existing rules as a baseline helps by building out most of the logic before you make any needed changes.
+This article walks you through the **Analytics rule wizard** and explains all the available options. It's accompanied by screenshots and directions to access the wizard in both the Azure portal, for Microsoft Sentinel users who aren't also Microsoft Defender subscribers, and the Defender portal, for users of the Microsoft Defender unified security operations platform.
-> [!div class="checklist"]
-> - Create analytics rules
-> - Define how events and alerts are processed
-> - Define how alerts and incidents are generated
-> - Choose automated threat responses for your rules
-## Create a custom analytics rule with a scheduled query
+## Prerequisites
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
+- You must have the Microsoft Sentinel Contributor role, or any other role or set of permissions that includes write permissions on your Log Analytics workspace and its resource group.
+
+## Design and build your query
+
+Before you do anything else, you should design and build a query in Kusto Query Language (KQL) that your rule will use to query one or more tables in your Log Analytics workspace.
+
+1. Determine a data source that you want to search to detect unusual or suspicious activity. Find the name of the Log Analytics table into which data from that source is ingested. You can find the table name on the page of the data connector for that source. Use this table name (or a function based on it) as the basis for your query.
+
+1. Decide what kind of analysis you want this query to perform on the table. This decision will determine which commands and functions you should use in the query.
+
+1. Decide which data elements (fields, columns) you want from the query results. This decision will determine how you structure the output of the query.
+
+### Best practices for analytics rule queries
+
+- It's recommended to use an [Advanced Security Information Model (ASIM) parser](normalization-about-parsers.md) as your query source, instead of using a native table. This will ensure that the query supports any current or future relevant data source or family of data sources, rather than relying on a single data source.
+
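+   For example, a rule query built on the unifying network-session parser might look like this minimal sketch (it assumes the ASIM parsers are available in your workspace; the port and time window are illustrative):
+
+   ```kusto
+   // Normalized query: works across any connected source that populates
+   // the ASIM NetworkSession schema, not just one vendor's table.
+   _Im_NetworkSession(starttime=ago(1h))
+   | where DstPortNumber == 3389
+   | summarize SessionCount = count() by SrcIpAddr
+   ```
+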
+- The query length should be between 1 and 10,000 characters and cannot contain "`search *`" or "`union *`". You can use [user-defined functions](/azure/data-explorer/kusto/query/functions/user-defined-functions) to overcome the query length limitation.
+
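+   For instance, if you save your long filtering logic as a Log Analytics function, the rule query itself stays short. A minimal sketch, where `FailedLogons` is a hypothetical saved function:
+
+   ```kusto
+   // 'FailedLogons' is a saved function encapsulating the long logic;
+   // the rule query only has to aggregate and apply a threshold.
+   FailedLogons
+   | summarize FailedCount = count() by TargetAccount
+   | where FailedCount > 10
+   ```
+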
+- Using ADX functions to create Azure Data Explorer queries inside the Log Analytics query window **is not supported**.
+
+- When using the **`bag_unpack`** function in a query, if you [project the columns](/azure/data-explorer/kusto/query/projectoperator) as fields using "`project field1`" and the column doesn't exist, the query will fail. To guard against this happening, you must [project the column](/azure/data-explorer/kusto/query/projectoperator) as follows:
+
+ `project field1 = column_ifexists("field1","")`
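+
+   Here's a minimal sketch of the guarded pattern (the table and field names are hypothetical):
+
+   ```kusto
+   // Unpack a dynamic property bag, then project fields defensively so
+   // the query doesn't fail when a field is absent after unpacking.
+   MyCustomLog_CL
+   | evaluate bag_unpack(Properties)
+   | project
+       AccountName = column_ifexists("AccountName", ""),
+       SourceIp = column_ifexists("SourceIp", "")
+   ```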
+
+For more help building Kusto queries, see [Kusto Query Language in Microsoft Sentinel](kusto-overview.md) and [Best practices for Kusto Query Language queries](/azure/data-explorer/kusto/query/best-practices?toc=%2Fazure%2Fsentinel%2FTOC.json&bc=%2Fazure%2Fsentinel%2Fbreadcrumb%2Ftoc.json).
+
+Build and test your queries in the **Logs** screen. When you're satisfied, save the query for use in your rule.
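+
+For illustration, here's a minimal sketch of a complete rule query that flags bursts of failed Windows sign-ins (event ID 4625). It assumes the *SecurityEvent* table is populated by one of your data connectors, and the threshold of 10 is arbitrary:
+
+```kusto
+// Flag accounts with a burst of failed Windows sign-ins (event ID 4625),
+// keeping only the fields the rule needs downstream.
+SecurityEvent
+| where EventID == 4625
+| summarize FailedCount = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
+    by TargetAccount, Computer, IpAddress
+| where FailedCount > 10
+```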
+
+## Create your analytics rule
+
+This section describes how to create a rule using the Azure or Defender portals.
+
+### Start the Analytics rule wizard
+
+# [Azure portal](#tab/azure-portal)
+
+1. From the **Configuration** section of the Microsoft Sentinel navigation menu, select **Analytics**.
1. In the action bar at the top, select **+Create** and select **Scheduled query rule**. This opens the **Analytics rule wizard**.
- :::image type="content" source="media/tutorial-detect-threats-custom/create-scheduled-query-small.png" alt-text="Create scheduled query" lightbox="media/tutorial-detect-threats-custom/create-scheduled-query-full.png":::
+ :::image type="content" source="media/detect-threats-custom/create-scheduled-query.png" alt-text="Screenshot of Analytics screen in Azure portal." lightbox="media/detect-threats-custom/create-scheduled-query.png":::
-### Analytics rule wizard&mdash;General tab
+# [Defender portal](#tab/defender-portal)
-- Provide a unique **Name** and a **Description**.
+1. From the Microsoft Defender navigation menu, expand **Microsoft Sentinel**, then **Configuration**. Select **Analytics**.
-- In the **Tactics and techniques** field, you can choose from among categories of attacks by which to classify the rule. These are based on the tactics and techniques of the [MITRE ATT&CK](https://attack.mitre.org/) framework.
+1. In the action bar at the top of the grid, select **+Create** and select **Scheduled query rule**. This opens the **Analytics rule wizard**.
- [Incidents](investigate-cases.md) created from alerts that are detected by rules mapped to MITRE ATT&CK tactics and techniques automatically inherit the rule's mapping.
+ :::image type="content" source="media/detect-threats-custom/defender-create-scheduled-query.png" alt-text="Screenshot of Analytics screen in Defender portal." lightbox="media/detect-threats-custom/defender-create-scheduled-query.png":::
- Set the alert **Severity** as appropriate, matching the impact the activity triggering the rule might have on the target environment, should the rule be a true positive.
+
- - **Informational**. No impact on your system, but the information might be indicative of future steps planned by a threat actor.
- - **Low**. The immediate impact would be minimal. A threat actor would likely need to conduct multiple steps before achieving an impact on an environment.
- - **Medium**. The threat actor could have some impact on the environment with this activity, but it would be limited in scope or require additional activity.
- - **High**. The activity identified provides the threat actor with wide ranging access to conduct actions on the environment or is triggered by impact on the environment.
+### Name the rule and define general information
- Severity level defaults are not a guarantee of current or environmental impact level. [Customize alert details](customize-alert-details.md) to customize the severity, tactics, and other properties of a given instance of an alert with the values of any relevant fields from a query output.
-
- Severity definitions for Microsoft Sentinel analytics rule templates are relevant only for alerts created by analytics rules. For alerts ingested from from other services, the severity is defined by the source security service.
-
-- When you create the rule, its **Status** is **Enabled** by default, which means it will run immediately after you finish creating it. If you donΓÇÖt want it to run immediately, select **Disabled**, and the rule will be added to your **Active rules** tab and you can enable it from there when you need it.
+In the Azure portal, stages are represented visually as tabs. In the Defender portal, they're represented visually as milestones on a timeline. See the screenshots below for examples.
- :::image type="content" source="media/tutorial-detect-threats-custom/general-tab.png" alt-text="Start creating a custom analytics rule":::
+1. Provide a unique **Name** and a **Description**.
-## Define the rule query logic and configure settings
+1. Set the alert **Severity** as appropriate, matching the impact the activity triggering the rule might have on the target environment, should the rule be a true positive.
-In the **Set rule logic** tab, you can either write a query directly in the **Rule query** field, or create the query in Log Analytics and then copy and paste it here.
+ | Severity | Description |
+ | | |
+ | **Informational** | No impact on your system, but the information might be indicative of future steps planned by a threat actor. |
+ | **Low** | The immediate impact would be minimal. A threat actor would likely need to conduct multiple steps before achieving an impact on an environment. |
+ | **Medium** | The threat actor could have some impact on the environment with this activity, but it would be limited in scope or require additional activity. |
+ | **High** | The activity identified provides the threat actor with wide ranging access to conduct actions on the environment or is triggered by impact on the environment. |
-- Queries are written in Kusto Query Language (KQL). Learn more about KQL [concepts](/azure/data-explorer/kusto/concepts/) and [queries](/azure/data-explorer/kusto/query/), and see this handy [quick reference guide](/azure/data-explorer/kql-quick-reference).
+ Severity level defaults are not a guarantee of current or environmental impact level. [Customize alert details](customize-alert-details.md) to customize the severity, tactics, and other properties of a given instance of an alert with the values of any relevant fields from a query output.
+
+ Severity definitions for Microsoft Sentinel analytics rule templates are relevant only for alerts created by analytics rules. For alerts ingested from other services, the severity is defined by the source security service.
-- The example shown in this screenshot queries the *SecurityEvent* table to display a type of [failed Windows logon events](/windows/security/threat-protection/auditing/event-4625).
+1. In the **Tactics and techniques** field, you can choose from among categories of threat activities by which to classify the rule. These are based on the tactics and techniques of the [MITRE ATT&CK](https://attack.mitre.org/) framework.
- :::image type="content" source="media/tutorial-detect-threats-custom/set-rule-logic-tab-1-new.png" alt-text="Configure query rule logic and settings" lightbox="media/tutorial-detect-threats-custom/set-rule-logic-tab-all-1-new.png":::
+ [Incidents](investigate-cases.md) created from alerts that are detected by rules mapped to MITRE ATT&CK tactics and techniques automatically inherit the rule's mapping.
-- Here's another sample query, one that would alert you when an anomalous number of resources is created in [Azure Activity](../azure-monitor/essentials/activity-log.md).
+ For more information on maximizing your coverage of the MITRE ATT&CK threat landscape, see [Understand security coverage by the MITRE ATT&CK® framework](mitre-coverage.md).
- ```kusto
- AzureActivity
- | where OperationNameValue == "MICROSOFT.COMPUTE/VIRTUALMACHINES/WRITE" or OperationNameValue == "MICROSOFT.RESOURCES/DEPLOYMENTS/WRITE"
- | where ActivityStatusValue == "Succeeded"
- | make-series dcount(ResourceId)  default=0 on EventSubmissionTimestamp in range(ago(7d), now(), 1d) by Caller
- ```
+1. When you create the rule, its **Status** is **Enabled** by default, which means it will run immediately after you finish creating it. If you don't want it to run immediately, select **Disabled**, and the rule will be added to your **Active rules** tab and you can enable it from there when you need it.
- > [!IMPORTANT]
- >
- > We recommend that your query uses an [Advanced Security Information Model (ASIM) parser](normalization-about-parsers.md) and not a native table. This will ensure that the query supports any current or future relevant data source rather than a single data source.
- >
+ > [!NOTE]
+ > There's another way, currently in preview, to create a rule without it running immediately. You can schedule the rule to first run at a specific date and time. See [Schedule and scope the query](#schedule-and-scope-the-query) below.
+1. Select **Next: Set rule logic**.
- > [!NOTE]
- > **Rule query best practices**:
- >
- > - The query length should be between 1 and 10,000 characters and cannot contain "`search *`" or "`union *`". You can use [user-defined functions](/azure/data-explorer/kusto/query/functions/user-defined-functions) to overcome the query length limitation.
- >
- > - Using ADX functions to create Azure Data Explorer queries inside the Log Analytics query window **is not supported**.
- >
- > - When using the **`bag_unpack`** function in a query, if you [project the columns](/azure/data-explorer/kusto/query/projectoperator) as fields using "`project field1`" and the column doesn't exist, the query will fail. To guard against this happening, you must [project the column](/azure/data-explorer/kusto/query/projectoperator) as follows:
- > - `project field1 = column_ifexists("field1","")`
+ # [Azure portal](#tab/azure-portal)
-### Alert enrichment
+ :::image type="content" source="media/detect-threats-custom/general-tab.png" alt-text="Screenshot of opening screen of analytics rule wizard in the Azure portal.":::
-- Use the **Entity mapping** configuration section to map parameters from your query results to Microsoft Sentinel-recognized entities. Entities enrich the rules' output (alerts and incidents) with essential information that serves as the building blocks of any investigative processes and remedial actions that follow. They are also the criteria by which you can group alerts together into incidents in the **Incident settings** tab.
+ # [Defender portal](#tab/defender-portal)
- Learn more about [entities in Microsoft Sentinel](entities.md).
+ :::image type="content" source="media/detect-threats-custom/defender-wizard-general.png" alt-text="Screenshot of opening screen of analytics rule wizard in the Defender portal.":::
- See [Map data fields to entities in Microsoft Sentinel](map-data-fields-to-entities.md) for complete entity mapping instructions, along with important information about limitations and [backward compatibility](map-data-fields-to-entities.md#notes-on-the-new-version).
+
-- Use the **Custom details** configuration section to extract event data items from your query and surface them in the alerts produced by this rule, giving you immediate event content visibility in your alerts and incidents.
+### Define the rule logic
- Learn more about surfacing custom details in alerts, and see the [complete instructions](surface-custom-details-in-alerts.md).
+1. **Enter a query for your rule.**
-- Use the **Alert details** configuration section to override default values of the alert's properties with details from the underlying query results. Alert details allow you to display, for example, an attacker's IP address or account name in the title of the alert itself, so it will appear in your incidents queue, giving you a much richer and clearer picture of your threat landscape.
+ Paste the query you designed, built, and tested into the **Rule query** window. Every change you make in this window is instantly validated, so if there are any mistakes, you'll see an indication right below the window.
- See complete instructions on [customizing your alert details](customize-alert-details.md).
+1. **Map entities.**
-> [!NOTE]
-> **The size limit for an entire alert is *64 KB***.
-> - Alerts that grow larger than 64 KB will be truncated. As entities are identified, they are added to the alert one by one until the alert size reaches 64 KB, and any remaining entities are dropped from the alert.
->
-> - The other alert enrichments also contribute to the size of the alert.
->
-> - To reduce the size of your alert, use the `project-away` operator in your query to [remove any unnecessary fields](/azure/data-explorer/kusto/query/projectawayoperator). (Consider also the `project` operator if there are only [a few fields you need to keep](/azure/data-explorer/kusto/query/projectoperator).)
+ Entities are essential for detecting and investigating threats. Map the entity types recognized by Microsoft Sentinel onto fields in your query results. This mapping integrates the discovered entities into the [*Entities* field in your alert schema](security-alert-schema.md).
-### Query scheduling and alert threshold
+ For complete instructions on mapping entities, see [Map data fields to entities in Microsoft Sentinel](map-data-fields-to-entities.md).
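+
+   Entity mapping itself is configured in the wizard, but your query must surface the columns you plan to map. Continuing the failed sign-in sketch from earlier (the column-to-entity pairings shown are illustrative):
+
+   ```kusto
+   // Each projected column is a candidate identifier for an entity mapping:
+   // TargetAccount -> Account, Computer -> Host, IpAddress -> IP.
+   SecurityEvent
+   | where EventID == 4625
+   | project TimeGenerated, TargetAccount, Computer, IpAddress
+   ```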
-- In the **Query scheduling** section, set the following parameters:
+1. **Surface custom details in your alerts.**
- :::image type="content" source="media/tutorial-detect-threats-custom/set-rule-logic-tab-2.png" alt-text="Set query schedule and event grouping" lightbox="media/tutorial-detect-threats-custom/set-rule-logic-tab-all-2-new.png":::
+ By default, only the alert entities and metadata are visible in incidents without drilling down into the raw events in the query results. This step takes other fields in your query results and integrates them into the [*ExtendedProperties* field in your alerts](security-alert-schema.md), causing them to be displayed up front in your alerts, and in any incidents created from those alerts.
- - Set **Run query every** to control how often the query is run&mdash;as frequently as every 5 minutes or as infrequently as once every 14 days.
+ For complete instructions on surfacing custom details, see [Surface custom event details in alerts in Microsoft Sentinel](surface-custom-details-in-alerts.md).
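+
+   For example (continuing the earlier sketch), a computed column such as `FailedCount` is a natural custom detail: selecting it in the **Custom details** configuration lets analysts see the count directly in the alert, without opening the raw query results.
+
+   ```kusto
+   // 'FailedCount' is a computed column you could surface as a custom detail.
+   SecurityEvent
+   | where EventID == 4625
+   | summarize FailedCount = count() by TargetAccount, Computer
+   ```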
- - Set **Lookup data from the last** to determine the time period of the data covered by the query&mdash;for example, it can query the past 10 minutes of data, or the past 6 hours of data. The maximum is 14 days.
-
- - For the new **Start running** setting (in Preview):
+1. **Customize alert details.**
- - Leave it set to **Automatically** to continue the original behavior: the rule will run for the first time immediately upon being created, and after that at the interval set in the **Run query every** setting.
+ This setting allows you to customize otherwise-standard alert properties according to the content of various fields in each individual alert. These customizations are integrated into the [*ExtendedProperties* field in your alerts](security-alert-schema.md). For example, you can customize the alert name or description to include a username or IP address featured in the alert.
- - Toggle the switch to **At specific time** if you want to determine when the rule first runs, instead of having it run immediately. Then choose the date using the calendar picker and enter the time in the format of the example shown.
+ For complete instructions on customizing alert details, see [Customize alert details in Microsoft Sentinel](customize-alert-details.md).
- :::image type="content" source="media/tutorial-detect-threats-custom/advanced-scheduling.png" alt-text="Screenshot of advanced scheduling toggle and settings.":::
-
- Future runnings of the rule will occur at the specified interval after the first running.
- The line of text under the **Start running** setting (with the information icon at its left) summarizes the current query scheduling and lookback settings.
+1. <a name="schedule-and-scope-the-query"></a>**Schedule and scope the query.**
+ 1. Set the following parameters in the **Query scheduling** section:
- > [!NOTE]
- >
- > **Query intervals and lookback period**
- >
- > These two settings are independent of each other, up to a point. You can run a query at a short interval covering a time period longer than the interval (in effect having overlapping queries), but you cannot run a query at an interval that exceeds the coverage period, otherwise you will have gaps in the overall query coverage.
- >
- > **Ingestion delay**
- >
- > To account for **latency** that may occur between an event's generation at the source and its ingestion into Microsoft Sentinel, and to ensure complete coverage without data duplication, Microsoft Sentinel runs scheduled analytics rules on a **five-minute delay** from their scheduled time.
- >
- > For more information, see [Handle ingestion delay in scheduled analytics rules](ingestion-delay.md).
+ | Setting | Behavior |
+ | | |
+ | **Run query every** | Controls the **query interval**: how often the query is run. |
+ | **Lookup data from the last** | Determines the **lookback period**: the time period covered by the query. |
-- Use the **Alert threshold** section to define the sensitivity level of the rule. For example, set **Generate alert when number of query results** to **Is greater than** and enter the number 1000 if you want the rule to generate an alert only if the query returns more than 1000 results each time it runs. This is a required field, so if you donΓÇÖt want to set a threshold ΓÇô that is, if you want your alert to register every event ΓÇô enter 0 in the number field.
+ - The allowed range for both of these parameters is from **5 minutes** to **14 days**.
-### Results simulation
+ - The query interval must be shorter than or equal to the lookback period. If it's shorter, the query periods will overlap and this may cause some duplication of results. The rule validation will not allow you to set an interval longer than the lookback period, though, as that would result in gaps in your coverage.
-In the **Results simulation** area, in the right side of the wizard, select **Test with current data** and Microsoft Sentinel will show you a graph of the results (log events) the query would have generated over the last 50 times it would have run, according to the currently defined schedule. If you modify the query, select **Test with current data** again to update the graph. The graph shows the number of results over the defined time period, which is determined by the settings in the **Query scheduling** section.
+ 1. Set **Start running**:
-Here's what the results simulation might look like for the query in the screenshot above. The left side is the default view, and the right side is what you see when you hover over a point in time on the graph.
+ | Setting | Behavior |
+ | | |
+ | **Automatically** | The rule will run for the first time immediately upon being created, and after that at the interval set in the **Run query every** setting. |
+ | **At specific time** (Preview) | Set a date and time for the rule to first run, after which it will run at the interval set in the **Run query every** setting. |
+ - The **Start running** time must be between 10 minutes and 30 days after the rule creation (or enablement) time.
-If you see that your query would trigger too many or too frequent alerts, you can experiment with the settings in the **Query scheduling** and **Alert threshold** [sections](#query-scheduling-and-alert-threshold) and select **Test with current data** again.
+ - The line of text under the **Start running** setting (with the information icon at its left) summarizes the current query scheduling and lookback settings.
-### Event grouping and rule suppression
+ :::image type="content" source="media/detect-threats-custom/advanced-scheduling.png" alt-text="Screenshot of advanced scheduling toggle and settings.":::
+
+ # [Azure portal](#tab/azure-portal)
-- Under **Event grouping**, choose one of two ways to handle the grouping of **events** into **alerts**:
+ :::image type="content" source="media/detect-threats-custom/set-rule-logic-contd.png" alt-text="Screenshot of continuation of rule logic screen of analytics rule wizard in the Azure portal.":::
- - **Group all events into a single alert** (the default setting). The rule generates a single alert every time it runs, as long as the query returns more results than the specified **alert threshold** above. The alert includes a summary of all the events returned in the results.
+ # [Defender portal](#tab/defender-portal)
- - **Trigger an alert for each event**. The rule generates a unique alert for each event returned by the query. This is useful if you want events to be displayed individually, or if you want to group them by certain parameters&mdash;by user, hostname, or something else. You can define these parameters in the query.
+ :::image type="content" source="media/detect-threats-custom/defender-set-rule-logic-contd.png" alt-text="Screenshot of continuation of rule logic screen of analytics rule wizard in the Defender portal.":::
- Currently the number of alerts a rule can generate is capped at 150. If in a particular rule, **Event grouping** is set to **Trigger an alert for each event**, and the rule's query returns more than 150 events, each of the first 149 events will generate a unique alert, and the 150th alert will summarize the entire set of returned events. In other words, the 150th alert is what would have been generated under the **Group all events into a single alert** option.
+
- If you choose this option, Microsoft Sentinel will add a new field, **OriginalQuery**, to the results of the query. Here is a comparison of the existing **Query** field and the new field:
+ > [!NOTE]
+ >
+ > **Ingestion delay**
+ >
+ > To account for **latency** that may occur between an event's generation at the source and its ingestion into Microsoft Sentinel, and to ensure complete coverage without data duplication, Microsoft Sentinel runs scheduled analytics rules on a **five-minute delay** from their scheduled time.
+ >
+ > For more information, see [Handle ingestion delay in scheduled analytics rules](ingestion-delay.md).
- | Field name | Contains | Running the query in this field<br>results in... |
- | - | :-: | :-: |
- | **Query** | The compressed record of the event that generated this instance of the alert | The event that generated this instance of the alert;<br>limited to 10240 bytes |
- | **OriginalQuery** | The original query as written in the analytics&nbsp;rule | The most recent event in the timeframe in which the query runs, that fits the parameters defined by the query |
+1. <a name="alert-threshold"></a>**Set the threshold for creating alerts.**
- In other words, the **OriginalQuery** field behaves like the **Query** field usually behaves. The result of this extra field is that the problem described by the first item in the [Troubleshooting](#troubleshooting) section below has been solved.
+ Use the **Alert threshold** section to define the sensitivity level of the rule.
+ - Set **Generate alert when number of query results** to **Is greater than**, and enter the minimum number of events that need to be found over the time period of the query for the rule to generate an alert.
+ - This is a required field, so if you don't want to set a threshold&mdash;that is, if you want to trigger the alert for even a single event in a given time period&mdash;enter `0` in the number field.
- > [!NOTE]
- > What's the difference between **events** and **alerts**?
- >
- > - An **event** is a description of a single occurrence of an action. For example, a single entry in a log file could count as an event. In this context an event refers to a single result returned by a query in an analytics rule.
- >
- > - An **alert** is a collection of events that, taken together, are significant from a security standpoint. An alert could contain a single event if the event had significant security implications - an administrative login from a foreign country/region outside of office hours, for example.
- >
- > - By the way, what are **incidents**? Microsoft Sentinel's internal logic creates **incidents** from **alerts** or groups of alerts. The incidents queue is the focal point of SOC analysts' work - triage, investigation and remediation.
- >
- > Microsoft Sentinel ingests raw events from some data sources, and already-processed alerts from others. It is important to note which one you're dealing with at any time.
+1. **Set event grouping settings.**
-- In the **Suppression** section, you can turn the **Stop running query after alert is generated** setting **On** if, once you get an alert, you want to suspend the operation of this rule for a period of time exceeding the query interval. If you turn this on, you must set **Stop running query for** to the amount of time the query should stop running, up to 24 hours.
+ Under **Event grouping**, choose one of two ways to handle the grouping of **events** into **alerts**:
-## Configure the incident creation settings
+ | Setting | Behavior |
+ | | |
+ | **Group&nbsp;all&nbsp;events into a single alert**<br>(default) | The rule generates a single alert every time it runs, as long as the query returns more results than the specified **alert threshold** above. This single alert summarizes all the events returned in the query results. |
+ | **Trigger an alert for each event** | The rule generates a unique alert for each event returned by the query. This is useful if you want events to be displayed individually, or if you want to group them by certain parameters&mdash;by user, hostname, or something else. You can define these parameters in the query. |
-In the **Incident Settings** tab, you can choose whether and how Microsoft Sentinel turns alerts into actionable incidents. If this tab is left alone, Microsoft Sentinel will create a single, separate incident from each and every alert. You can choose to have no incidents created, or to group several alerts into a single incident, by changing the settings in this tab.
+ Analytics rules can generate up to 150 alerts. If **Event grouping** is set to **Trigger an alert for each event**, and the rule's query returns *more than 150 events*, the first 149 events will each generate a unique alert (for 149 alerts), and the 150th alert will summarize the entire set of returned events. In other words, the 150th alert is what would have been generated if **Event grouping** had been set to **Group all events into a single alert**.
-For example:
+1. **Temporarily suppress rule after an alert is generated.**
+ In the **Suppression** section, you can turn the **Stop running query after alert is generated** setting **On** if, once you get an alert, you want to suspend the operation of this rule for a period of time exceeding the query interval. If you turn this on, you must set **Stop running query for** to the amount of time the query should stop running, up to 24 hours.
-### Incident settings
+1. **Simulate the results of the query and logic settings.**
-In the **Incident settings** section, **Create incidents from alerts triggered by this analytics rule** is set by default to **Enabled**, meaning that Microsoft Sentinel will create a single, separate incident from each and every alert triggered by the rule.
+ In the **Results simulation** area, select **Test with current data** and Microsoft Sentinel will show you a graph of the results (log events) the query would have generated over the last 50 times it would have run, according to the currently defined schedule. If you modify the query, select **Test with current data** again to update the graph. The graph shows the number of results over the defined time period, which is determined by the settings in the **Query scheduling** section.
-- If you donΓÇÖt want this rule to result in the creation of any incidents (for example, if this rule is just to collect information for subsequent analysis), set this to **Disabled**.
+ Here's what the results simulation might look like for the query in the screenshot above. The left side is the default view, and the right side is what you see when you hover over a point in time on the graph.
-- If you want a single incident to be created from a group of alerts, instead of one for every single alert, see the next section.
+ :::image type="content" source="media/detect-threats-custom/results-simulation.png" alt-text="Results simulation screenshots":::
-### Alert grouping
+ If you see that your query would trigger too many or too frequent alerts, you can experiment with the settings in the [**Query scheduling**](#schedule-and-scope-the-query) and [**Alert threshold**](#alert-threshold) sections and select **Test with current data** again.
-In the **Alert grouping** section, if you want a single incident to be generated from a group of up to 150 similar or recurring alerts (see note), set **Group related alerts, triggered by this analytics rule, into incidents** to **Enabled**, and set the following parameters.
+1. Select **Next: Incident settings**.
-- **Limit the group to alerts created within the selected time frame**: Determine the time frame within which the similar or recurring alerts will be grouped together. All of the corresponding alerts within this time frame will collectively generate an incident or a set of incidents (depending on the grouping settings below). Alerts outside this time frame will generate a separate incident or set of incidents.
+### Configure the incident creation settings
-- **Group alerts triggered by this analytics rule into a single incident by**: Choose the basis on which alerts will be grouped together:
+In the **Incident settings** tab, choose whether Microsoft Sentinel turns alerts into actionable incidents, and whether and how alerts are grouped together in incidents.
- | Option | Description |
- | - | - |
- | **Group alerts into a single incident if all the entities match** | Alerts are grouped together if they share identical values for each of the mapped entities (defined in the [Set rule logic](#define-the-rule-query-logic-and-configure-settings) tab above). This is the recommended setting. |
- | **Group all alerts triggered by this rule into a single incident** | All the alerts generated by this rule are grouped together even if they share no identical values. |
- | **Group alerts into a single incident if the selected entities and details match** | Alerts are grouped together if they share identical values for all of the mapped entities, alert details, and custom details selected from the respective drop-down lists.<br><br>You might want to use this setting if, for example, you want to create separate incidents based on the source or target IP addresses, or if you want to group alerts that match a specific entity and severity.<br><br>**Note**: When you select this option, you must have at least one entity type or field selected for the rule. Otherwise, the rule validation will fail and the rule won't be created. |
+1. **Enable incident creation.**
-- **Re-open closed matching incidents**: If an incident has been resolved and closed, and later on another alert is generated that should belong to that incident, set this setting to **Enabled** if you want the closed incident re-opened, and leave as **Disabled** if you want the alert to create a new incident.
+ In the **Incident settings** section, **Create incidents from alerts triggered by this analytics rule** is set by default to **Enabled**, meaning that Microsoft Sentinel will create a single, separate incident from each and every alert triggered by the rule.
- > [!NOTE]
- >
- > **Up to 150 alerts** can be grouped into a single incident.
- > - The incident will only be created after all the alerts have been generated. All of the alerts will be added to the incident immediately upon its creation.
- >
- > - If more than 150 alerts are generated by a rule that groups them into a single incident, a new incident will be generated with the same incident details as the original, and the excess alerts will be grouped into the new incident.
+ - If you don't want this rule to result in the creation of any incidents (for example, if this rule is just to collect information for subsequent analysis), set this to **Disabled**.
-## Set automated responses and create the rule
+ > [!IMPORTANT]
+ > If you onboarded Microsoft Sentinel to the unified security operations platform in the Microsoft Defender portal, and this rule is querying and creating alerts from Microsoft 365 or Microsoft Defender sources, you must set this setting to **Disabled**.
-In the **Automated responses** tab, you can use [automation rules](automate-incident-handling-with-automation-rules.md) to set automated responses to occur at any of three types of occasions:
-- When an alert is generated by this analytics rule.
-- When an incident is created with alerts generated by this analytics rule.
-- When an incident is updated with alerts generated by this analytics rule.
-
-The grid displayed under **Automation rules** shows the automation rules that already apply to this analytics rule (by virtue of it meeting the conditions defined in those rules). You can edit any of these by selecting the ellipsis at the end of each row. Or, you can [create a new automation rule](create-manage-use-automation-rules.md).
+ - If you want a single incident to be created from a group of alerts, instead of one for every single alert, see the next section.
-Use automation rules to perform [basic triage](investigate-incidents.md#navigate-and-triage-incidents), assignment, [workflow](incident-tasks.md), and closing of incidents.
+1. <a name="alert-grouping"></a>**Set alert grouping settings.**
-Automate more complex tasks and invoke responses from remote systems to remediate threats by calling playbooks from these automation rules. You can do this for incidents as well as for individual alerts.
+ In the **Alert grouping** section, if you want a single incident to be generated from a group of up to 150 similar or recurring alerts (see note), set **Group related alerts, triggered by this analytics rule, into incidents** to **Enabled**, and set the following parameters.
-- For more information and instructions on creating playbooks and automation rules, see [Automate threat responses](tutorial-respond-threats-playbook.md#automate-threat-responses).
+ 1. **Limit the group to alerts created within the selected time frame**: Determine the time frame within which the similar or recurring alerts will be grouped together. All of the corresponding alerts within this time frame will collectively generate an incident or a set of incidents (depending on the grouping settings below). Alerts outside this time frame will generate a separate incident or set of incidents.
-- For more information about when to use the **incident created trigger**, the **incident updated trigger**, or the **alert created trigger**, see [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary).
+ 1. **Group alerts triggered by this analytics rule into a single incident by**: Choose the basis on which alerts will be grouped together:
- :::image type="content" source="media/tutorial-detect-threats-custom/automated-response-tab.png" alt-text="Define the automated response settings":::
+ | Option | Description |
+ | - | - |
+ | **Group alerts into a single incident if all the entities match** | Alerts are grouped together if they share identical values for each of the mapped entities (defined in the [Set rule logic](#define-the-rule-logic) tab above). This is the recommended setting. |
+ | **Group all alerts triggered by this rule into a single incident** | All the alerts generated by this rule are grouped together even if they share no identical values. |
+ | **Group alerts into a single incident if the selected entities and details match** | Alerts are grouped together if they share identical values for all of the mapped entities, alert details, and custom details selected from the respective drop-down lists.<br><br>You might want to use this setting if, for example, you want to create separate incidents based on the source or target IP addresses, or if you want to group alerts that match a specific entity and severity.<br><br>**Note**: When you select this option, you must have at least one entity type or field selected for the rule. Otherwise, the rule validation will fail and the rule won't be created. |
-- Under **Alert automation (classic)** at the bottom of the screen, you'll see any playbooks you've configured to run automatically when an alert is generated using the old method.
- - **As of June 2023**, you can no longer add playbooks to this list. Playbooks already listed here will continue to run until this method is **deprecated, effective March 2026**.
+ 1. **Re-open closed matching incidents**: If an incident has been resolved and closed, and later on another alert is generated that should belong to that incident, set this setting to **Enabled** if you want the closed incident re-opened, and leave as **Disabled** if you want the alert to create a new incident.
- - If you still have any playbooks listed here, you should instead create an automation rule based on the **alert created trigger** and invoke the playbook from there. After you've done that, select the ellipsis at the end of the line of the playbook listed here, and select **Remove**. See [Migrate your Microsoft Sentinel alert-trigger playbooks to automation rules](migrate-playbooks-to-automation-rules.md) for full instructions.
+ > [!NOTE]
+ >
+ > **Up to 150 alerts** can be grouped into a single incident.
+ > - The incident will only be created after all the alerts have been generated. All of the alerts will be added to the incident immediately upon its creation.
+ >
+ > - If more than 150 alerts are generated by a rule that groups them into a single incident, a new incident will be generated with the same incident details as the original, and the excess alerts will be grouped into the new incident.
-Select **Review and create** to review all the settings for your new analytics rule. When the "Validation passed" message appears, select **Create**.
+1. Select **Next: Automated response**.
+ # [Azure portal](#tab/azure-portal)
-## View the rule and its output
-
-- You can find your newly created custom rule (of type "Scheduled") in the table under the **Active rules** tab on the main **Analytics** screen. From this list you can enable, disable, or delete each rule.
+ :::image type="content" source="media/detect-threats-custom/incident-settings-tab.png" alt-text="Screenshot of incident settings screen of analytics rule wizard in the Azure portal.":::
-- To view the results of the analytics rules you create, go to the **Incidents** page, where you can triage incidents, [investigate them](investigate-cases.md), and [remediate the threats](respond-threats-during-investigation.md).
+ # [Defender portal](#tab/defender-portal)
-- You can update the rule query to exclude false positives. For more information, see [Handle false positives in Microsoft Sentinel](false-positives.md).
+ :::image type="content" source="media/detect-threats-custom/defender-incident-settings.png" alt-text="Screenshot of incident settings screen of analytics rule wizard in the Defender portal.":::
-> [!NOTE]
-> Alerts generated in Microsoft Sentinel are available through [Microsoft Graph Security](/graph/security-concept-overview). For more information, see the [Microsoft Graph Security alerts documentation](/graph/api/resources/security-api-overview).
+
-## Export the rule to an ARM template
+### Set automated responses and create the rule
-If you want to package your rule to be managed and deployed as code, you can easily [export the rule to an Azure Resource Manager (ARM) template](import-export-analytics-rules.md). You can also import rules from template files in order to view and edit them in the user interface.
+In the **Automated responses** tab, you can use [automation rules](automate-incident-handling-with-automation-rules.md) to set automated responses to occur at any of three types of occasions:
+- When an alert is generated by this analytics rule.
+- When an incident is created from alerts generated by this analytics rule.
+- When an incident is updated with alerts generated by this analytics rule.
+
+The grid displayed under **Automation rules** shows the automation rules that already apply to this analytics rule (by virtue of it meeting the conditions defined in those rules). You can edit any of these by selecting the name of the rule or the ellipsis at the end of each row. Or, you can select **Add new** to [create a new automation rule](create-manage-use-automation-rules.md).
-## Troubleshooting
+Use automation rules to perform [basic triage](investigate-incidents.md#navigate-and-triage-incidents), assignment, [workflow](incident-tasks.md), and closing of incidents.
-### Issue: No events appear in query results
+Automate more complex tasks and invoke responses from remote systems to remediate threats by calling playbooks from these automation rules. You can invoke playbooks for incidents as well as for individual alerts.
-When **event grouping** is set to **trigger an alert for each event**, query results viewed at a later time may appear to be missing, or different than expected. For example, you might view a query's results at a later time when you've pivoted back to the results from a related incident.
+- For more information and instructions on creating playbooks and automation rules, see [Automate threat responses](tutorial-respond-threats-playbook.md#automate-threat-responses).
-- Results are automatically saved with the alerts. However, if the results are too large, no results are saved, and then no data will appear when viewing the query results again.
-- In cases where there is [ingestion delay](ingestion-delay.md), or the query is not deterministic due to aggregation, the alert's result might be different than the result shown by running the query manually.
+- For more information about when to use the **incident created trigger**, the **incident updated trigger**, or the **alert created trigger**, see [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary).
-> [!NOTE]
-> This issue has been solved by the addition of a new field, **OriginalQuery**, to the results when this event grouping option is selected. See the [description](#event-grouping-and-rule-suppression) above.
+# [Azure portal](#tab/azure-portal)
-### Issue: A scheduled rule failed to execute, or appears with AUTO DISABLED added to the name
-It's a rare occurrence that a scheduled query rule fails to run, but it can happen. Microsoft Sentinel classifies failures up front as either transient or permanent, based on the specific type of the failure and the circumstances that led to it.
+# [Defender portal](#tab/defender-portal)
-#### Transient failure
-A transient failure occurs due to a circumstance which is temporary and will soon return to normal, at which point the rule execution will succeed. Some examples of failures that Microsoft Sentinel classifies as transient are:
-- A rule query takes too long to run and times out.
-- Connectivity issues between data sources and Log Analytics, or between Log Analytics and Microsoft Sentinel.
-- Any other new and unknown failure is considered transient.
+- Under **Alert automation (classic)** at the bottom of the screen, you'll see any playbooks you've configured to run automatically when an alert is generated using the old method.
+ - **As of June 2023**, you can no longer add playbooks to this list. Playbooks already listed here will continue to run until this method is **deprecated, effective March 2026**.
-In the event of a transient failure, Microsoft Sentinel continues trying to execute the rule again after predetermined and ever-increasing intervals, up to a point. After that, the rule will run again only at its next scheduled time. A rule will never be auto-disabled due to a transient failure.
+ - If you still have any playbooks listed here, you should instead create an automation rule based on the **alert created trigger** and invoke the playbook from the automation rule. After you've done that, select the ellipsis at the end of the line of the playbook listed here, and select **Remove**. See [Migrate your Microsoft Sentinel alert-trigger playbooks to automation rules](migrate-playbooks-to-automation-rules.md) for full instructions.
-#### Permanent failure - rule auto-disabled
+Select **Next: Review and create** to review all the settings for your new analytics rule. When the "Validation passed" message appears, select **Create**.
-A permanent failure occurs due to a change in the conditions that allow the rule to run, which without human intervention will not return to their former status. The following are some examples of failures that are classified as permanent:
+# [Azure portal](#tab/azure-portal)
-- The target workspace (on which the rule query operated) has been deleted.
-- The target table (on which the rule query operated) has been deleted.
-- Microsoft Sentinel had been removed from the target workspace.
-- A function used by the rule query is no longer valid; it has been either modified or removed.
-- Permissions to one of the data sources of the rule query were changed ([see example below](#permanent-failure-due-to-lost-access-across-subscriptionstenants)).
-- One of the data sources of the rule query was deleted.
-**In the event of a predetermined number of consecutive permanent failures, of the same type and on the same rule,** Microsoft Sentinel stops trying to execute the rule, and also takes the following steps:
+# [Defender portal](#tab/defender-portal)
-- Disables the rule.
-- Adds the words **"AUTO DISABLED"** to the beginning of the rule's name.
-- Adds the reason for the failure (and the disabling) to the rule's description.
-You can easily determine the presence of any auto-disabled rules, by sorting the rule list by name. The auto-disabled rules will be at or near the top of the list.
+
-SOC managers should be sure to check the rule list regularly for the presence of auto-disabled rules.
+## View the rule and its output
+
+**View the rule definition:**
-#### Permanent failure due to resource drain
+- You can find your newly created custom rule (of type "Scheduled") in the table under the **Active rules** tab on the main **Analytics** screen. From this list you can enable, disable, or delete each rule.
-Another kind of permanent failure occurs due to an **improperly built query** that causes the rule to consume **excessive computing resources** and risks being a performance drain on your systems. When Microsoft Sentinel identifies such a rule, it takes the same three steps mentioned above for the other permanent failures&mdash;disables the rule, prepends **"AUTO DISABLED"** to the rule name, and adds the reason for the failure to the description.
+**View the results of the rule:**
-To re-enable the rule, you must address the issues in the query that cause it to use too many resources. See the following articles for best practices to optimize your Kusto queries:
+# [Azure portal](#tab/azure-portal)
-- [Query best practices - Azure Data Explorer](/azure/data-explorer/kusto/query/best-practices)
-- [Optimize log queries in Azure Monitor](../azure-monitor/logs/query-optimization.md)
+- To view the results of the analytics rules you create in the Azure portal, go to the **Incidents** page, where you can triage incidents, [investigate them](investigate-cases.md), and [remediate the threats](respond-threats-during-investigation.md).
-Also see [Useful resources for working with Kusto Query Language in Microsoft Sentinel](kusto-resources.md) for further assistance.
+# [Defender portal](#tab/defender-portal)
-#### Permanent failure due to lost access across subscriptions/tenants
+- To view the results of the analytics rules you create in the Defender portal, expand **Investigation & response** in the navigation menu, then **Incidents & alerts**. View incidents on the **Incidents** page, where you can triage incidents, [investigate them](investigate-cases.md), and [remediate the threats](respond-threats-during-investigation.md). View individual alerts on the **Alerts** page.
-One particular example of when a permanent failure could occur due to a permissions change on a data source ([see list above](#permanent-failurerule-auto-disabled)) concerns the case of an MSSP&mdash;or any other scenario where analytics rules query across subscriptions or tenants.
++
+**Tune the rule:**
+
+- You can update the rule query to exclude false positives. For more information, see [Handle false positives in Microsoft Sentinel](false-positives.md).
-When you create an analytics rule, an access permissions token is applied to the rule and saved along with it. This token ensures that the rule can access the workspace that contains the data queried by the rule, and that this access will be maintained even if the rule's creator loses access to that workspace.
+> [!NOTE]
+> Alerts generated in Microsoft Sentinel are available through [Microsoft Graph Security](/graph/security-concept-overview). For more information, see the [Microsoft Graph Security alerts documentation](/graph/api/resources/security-api-overview).
-There is one exception to this, however: when a rule is created to access workspaces in other subscriptions or tenants, such as what happens in the case of an MSSP, Microsoft Sentinel takes extra security measures to prevent unauthorized access to customer data. For these kinds of rules, the credentials of the user that created the rule are applied to the rule instead of an independent access token, so that when the user no longer has access to the other tenant, the rule will stop working.
+## Export the rule to an ARM template
-If you operate Microsoft Sentinel in a cross-subscription or cross-tenant scenario, be aware that if one of your analysts or engineers loses access to a particular workspace, any rules created by that user will stop working. You will get a health monitoring message regarding "insufficient access to resource", and the rule will be [auto-disabled according to the procedure described above](#permanent-failurerule-auto-disabled).
+If you want to package your rule to be managed and deployed as code, you can easily [export the rule to an Azure Resource Manager (ARM) template](import-export-analytics-rules.md). You can also import rules from template files in order to view and edit them in the user interface.
## Next steps
To automate rule enablement, push rules to Microsoft Sentinel via [API](/rest/ap
For more information, see:
-- [Tutorial: Investigate incidents with Microsoft Sentinel](investigate-cases.md)
-- [Navigate and investigate incidents in Microsoft Sentinel - Preview](investigate-incidents.md)
-- [Classify and analyze data using entities in Microsoft Sentinel](entities.md)
+- [Troubleshooting analytics rules in Microsoft Sentinel](troubleshoot-analytics-rules.md)
+- [Navigate and investigate incidents in Microsoft Sentinel](investigate-incidents.md)
+- [Entities in Microsoft Sentinel](entities.md)
- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)

Also, learn from an example of using custom analytics rules when [monitoring Zoom](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) with a [custom connector](create-custom-connector.md).
sentinel Domain Based Essential Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/domain-based-essential-solutions.md
Title: ASIM-based domain solutions - Essentials for Microsoft Sentinel
description: Learn about the Microsoft essential solutions for Microsoft Sentinel that span across different ASIM schemas like networks, DNS, and web sessions. Previously updated : 03/08/2023 Last updated : 03/01/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ #Customer intent: As a security engineer, I want to learn how I can minimize the amount of solution content I have to deploy and manage by using Microsoft essential solutions for Microsoft Sentinel. # Advanced Security Information Model (ASIM) based domain solutions for Microsoft Sentinel (preview)
-Microsoft essential solutions are domain solutions published by Microsoft for Microsoft Sentinel. These solutions have out-of-the-box content which can operate across multiple products for specific categories like networking. Some of these essential solutions use the normalization technique Advanced Security Information Model (ASIM) to normalize the data at query time or ingestion time.
+Microsoft essential solutions are domain solutions published by Microsoft for Microsoft Sentinel. These solutions have out-of-the-box content that can operate across multiple products for specific categories like networking. Some of these essential solutions use the normalization technique Advanced Security Information Model (ASIM) to normalize the data at query time or ingestion time.
> [!IMPORTANT] > Microsoft essential solutions and the Network Session Essentials solution are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Microsoft essential solutions are domain solutions published by Microsoft for Mi
## Why use ASIM-based Microsoft essential solutions?
-When multiple solutions in a domain category share similar detection patterns, it makes sense to have the data captured under a normalized schema like ASIM. Essential solutions makes use of this ASIM schema to detect threats at scale.
+When multiple solutions in a domain category share similar detection patterns, it makes sense to have the data captured under a normalized schema like ASIM. Essential solutions make use of this ASIM schema to detect threats at scale.
In the content hub, there are multiple product solutions for different domain categories like "Security - Network". For example, Azure Firewall, Palo Alto Firewall, and Corelight have product solutions for the "Security - Network" domain category.
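To make the contrast concrete, here's a minimal sketch of a single normalized query, assuming the built-in ASIM unifying parser `_Im_NetworkSession` is available in your workspace; the port and threshold values are arbitrary illustrations:

```kusto
// One query over the normalized ASIM Network Session schema covers
// Azure Firewall, Palo Alto, Corelight, and any other connected source,
// with no vendor-specific field names.
_Im_NetworkSession(starttime=ago(1d), endtime=now())
| where DstPortNumber == 3389                    // inbound RDP attempts
| summarize Attempts = count() by SrcIpAddr, DstIpAddr
| where Attempts > 100                           // arbitrary example threshold
| order by Attempts desc
```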
In the content hub, there are multiple product solutions for different domain ca
You might consider Microsoft essential solutions for the following reasons: - A normalized schema makes it easier for you to query incident details. You don't have to remember different vendor syntax for similar log attributes.-- If you don't have to manage content for multiple solutions, use case deployment and incident handling is much easier.
+- If you don't have to manage content for multiple solutions, use case deployment and incident handling are easier.
- A consolidated workbook view gives you better environment visibility and the option of query-time parsing with high-performing ASIM parsers. ## ASIM schemas supported
The following table describes the type of content available with each essential
||| |Analytical Rule | The analytical rules available in the ASIM-based essential solutions are generic and a good fit for any of the dependent Microsoft Sentinel product solutions for that domain. The Microsoft Sentinel product solution might have a source specific use case covered as part of the analytical rule. Enable Microsoft Sentinel product solution rules as needed for your environment. | |Hunting query | The hunting queries available in the ASIM-based essential solutions are generic and a good fit to hunt for threats from any of the dependent Microsoft Sentinel product solutions for that domain. The Microsoft Sentinel product solution might have a source specific hunting query available out-of-the-box. Use the hunting queries from the Microsoft Sentinel product solution as needed for your environment. |
-|Playbook | The ASIM-based essential solutions are expected to handle data with very high events per seconds. When you have content that's using that volume of data, you might experience some performance impact that can cause slow loading of workbooks or query results. To solve this problem, the summarization playbook summarizes the source logs and stores the information into a predefined table. Enable the summarization playbook to allow the essential solutions to query this table.<br><br> Because playbooks in Microsoft Sentinel are based on workflows built in Azure Logic Apps which create separate resources, additional charges might apply. For more information, see the [Azure Logic Apps pricing page](https://azure.microsoft.com/pricing/details/logic-apps/). Additional charges might also apply for storage of the summarized data. |
-|Watchlist | The ASIM-based essential solutions use a watchlist that includes multiple sets of conditions for analytic rule detection and hunting queries. The watchlist allows you to do the following tasks:<br><br>- Do focused monitoring with data filtration. <br>- Switch between hunting and detection for each list item. <br>- Keep **Threshold type** set to **Static** to leverage threshold-based alerting while anomaly-based alerts would learn from the last few days of data (maximum 14 days). <br>- Modify **Alert Name**, **Description**, **Tactic** and **Severity** by using this watchlist for individual list items.<br>- Disable detection by setting **Severity** as **Disabled**. |
+|Playbook | The ASIM-based essential solutions are expected to handle data at a high rate of events per second. When you have content that's using that volume of data, you might experience some performance impact that can cause slow loading of workbooks or query results. To solve this problem, the summarization playbook summarizes the source logs and stores the information in a predefined table. Enable the summarization playbook to allow the essential solutions to query this table.<br><br> Because playbooks in Microsoft Sentinel are based on workflows built in Azure Logic Apps that create separate resources, other charges might apply. For more information, see the [Azure Logic Apps pricing page](https://azure.microsoft.com/pricing/details/logic-apps/). Other charges might also apply for storage of the summarized data. |
+|Watchlist | The ASIM-based essential solutions use a watchlist that includes multiple sets of conditions for analytic rule detection and hunting queries. The watchlist allows you to do the following tasks:<br><br>- Do focused monitoring with data filtration. <br>- Switch between hunting and detection for each list item. <br>- Keep **Threshold type** set to **Static** to leverage threshold-based alerting while anomaly-based alerts would learn from the last few days of data (maximum 14 days). <br>- Modify **Alert Name**, **Description**, **Tactic**, and **Severity** by using this watchlist for individual list items.<br>- Disable detection by setting **Severity** as **Disabled**.<br><br>For an illustrative query against such a watchlist, see the sketch following this table. |
|Workbook | The workbook available with the ASIM-based essential solutions gives a consolidated view of different events and activity happening in the dependent domain. Because this workbook fetches results from a very high volume of data, there might be some performance lag. If you experience performance issues, use the summarization playbook.| These essential solutions, like other Microsoft Sentinel domain solutions, don't have a connector of their own. They depend on the source-specific connectors in Microsoft Sentinel product solutions to pull in the logs. To understand which products a domain solution supports, refer to the prerequisite list of product solutions in each ASIM domain essential solution. Install one or more of the product solutions, and configure the data connectors to meet the underlying product dependencies and to enable better usage of the domain solution content.
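As referenced in the watchlist row above, the following is a minimal sketch of how solution content can read such a watchlist with the built-in `_GetWatchlist()` function. The watchlist alias and the `Severity` and `Type` columns are hypothetical stand-ins for whatever the installed solution actually ships.

```kusto
// Hypothetical sketch: pull monitoring conditions from the solution's
// watchlist, skipping list items whose Severity is set to Disabled.
// 'NetworkSessionMonitoring', 'Severity', and 'Type' are placeholder names.
_GetWatchlist('NetworkSessionMonitoring')
| where Severity != 'Disabled'
| where Type == 'Detection'        // items switched to detection mode
| project SearchKey, Severity, Type
```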
-## Next steps
+## Related articles
- [Find ASIM-based domain essential solutions](sentinel-solutions-catalog.md) like the Network Session Essentials and DNS Essentials Solution for Microsoft Sentinel - [Using the Advanced Security Information Model (ASIM)](/azure/sentinel/normalization-about-parsers)
sentinel Enable Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-entity-behavior-analytics.md
description: Enable User and Entity Behavior Analytics in Microsoft Sentinel, an
Previously updated : 07/05/2023 Last updated : 03/18/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
-# Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel
+# Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel
In the previous deployment step, you enabled the Microsoft Sentinel security content you need to protect your systems. In this article, you learn how to enable and use the UEBA feature to streamline the analysis process. This article is part of the [Deployment guide for Microsoft Sentinel](deploy-overview.md). As Microsoft Sentinel collects logs and alerts from all of its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as users, hosts, IP addresses, and applications) across time and peer group horizon. Using a variety of techniques and machine learning capabilities, Microsoft Sentinel can then identify anomalous activity and help you determine if an asset has been compromised. Learn more about [UEBA](identify-threats-with-entity-behavior-analytics.md). [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] ## Prerequisites To enable or disable this feature (these prerequisites are not required to use the feature): -- Your user must be assigned the **Global Administrator** or **Security Administrator** roles in Microsoft Entra ID.
+- Your user must be assigned the Microsoft Entra ID **Global Administrator** or **Security Administrator** roles in your tenant.
- Your user must be assigned at least one of the following **Azure roles** ([Learn more about Azure RBAC](roles.md)): - **Microsoft Sentinel Contributor** at the workspace or resource group levels.
To enable or disable this feature (these prerequisites are not required to use t
## How to enable User and Entity Behavior Analytics
-1. Go to the **Entity behavior configuration** page. There are three ways to get to this page:
+- If you use Microsoft Sentinel in the Azure portal, follow the instructions in the **Azure portal** tab.
+- If you use Microsoft Sentinel as part of the unified security operations platform in the Microsoft Defender portal, follow the instructions in the **Defender portal** tab.
+
+1. Go to the **Entity behavior configuration** page.
++
+ # [Azure portal](#tab/azure)
+
+ Use any one of these three ways to get to the **Entity behavior configuration** page:
- Select **Entity behavior** from the Microsoft Sentinel navigation menu, then select **Entity behavior settings** from the top menu bar.
To enable or disable this feature (these prerequisites are not required to use t
- From the Microsoft Defender XDR data connector page, select the **Go the UEBA configuration page** link.
+ # [Defender portal](#tab/defender)
+
+ To get to the **Entity behavior configuration** page:
+
+ 1. From the Microsoft Defender portal navigation menu, select **Settings**.
+ 1. In the **Settings** page, select **Microsoft Sentinel**.
+ 1. From the next menu, select **Entity behavior analytics**.
+ 1. Then, select **Set UEBA**, which opens a new browser tab with the **Entity behavior configuration** page in the **Azure portal**.
+
+
+ 1. On the **Entity behavior configuration** page, switch the toggle to **On**. :::image type="content" source="media/enable-entity-behavior-analytics/ueba-configuration.png" alt-text="Screenshot of UEBA configuration settings.":::
sentinel Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entities.md
Title: Use entities to classify and analyze data in Microsoft Sentinel
-description: Assign entity classifications (users, hostnames, IP addresses) to data items in Microsoft Sentinel, and use them to compare, analyze, and correlate data from multiple sources.
+ Title: Entities in Microsoft Sentinel
+description: Entities are classifications or labels for data elements in your Microsoft Sentinel alerts. Microsoft Sentinel uses entities to recognize data elements as a particular entity type, correlate data across alerts, analyze it to glean particular insights, and provide a rich foundation and context for investigating and remediating security threats.
Previously updated : 07/26/2022 Last updated : 03/11/2024
-# Classify and analyze data using entities in Microsoft Sentinel
+# Entities in Microsoft Sentinel
-When alerts are sent to or generated by Microsoft Sentinel, they contain data items that Sentinel can recognize and classify into categories as **entities**. When Microsoft Sentinel understands what kind of entity a particular data item represents, it knows the right questions to ask about it, and it can then compare insights about that item across the full range of data sources, and easily track it and refer to it throughout the entire Sentinel experience - analytics, investigation, remediation, hunting, and so on. Some common examples of entities are user accounts, hosts, files, processes, IP addresses, and URLs.
+When alerts are sent to or generated by Microsoft Sentinel, they contain data elements that Sentinel can recognize and classify into categories as **entities**. When Microsoft Sentinel understands what kind of entity a particular data element represents, it knows the right questions to ask about it, and it can then compare insights about that item across the full range of data sources, and easily track it and refer to it throughout the entire Sentinel experience - analytics, investigation, remediation, hunting, and so on. Some common examples of entities are user accounts, hosts, mailboxes, IP addresses, files, cloud applications, processes, and URLs.
++
+In the unified security operations platform in the Microsoft Defender portal, entities generally fall into two main categories:
+
+| Entity category | Characterization | Main examples |
+| | - | - |
+| Assets | <li>Internal objects<li>Protected objects<li>Inventoried objects | <li>Accounts (Users)<li>Hosts (Devices)<li>Mailboxes<li>Azure resources |
+| Other entities<br>*(evidence)* | <li>External items<li>Not in your control<li>Indicators of compromise | <li>IP addresses<li>Files<li>Processes<li>URLs |
## Entity identifiers
-Microsoft Sentinel supports a wide variety of entity types. Each type has its own unique attributes, including some that can be used to identify a particular entity. These attributes are represented as fields in the entity, and are called **identifiers**. See the full list of supported entities and their identifiers below.
+Microsoft Sentinel supports a wide variety of entity types. Each type has its own unique attributes, which are represented as fields in the entity schema, and are called **identifiers**. See the full list of supported entities [below](#supported-entities), and the complete set of entity schemas and identifiers in [Microsoft Sentinel entity types reference](entities-reference.md).
### Strong and weak identifiers
-As noted just above, for each type of entity there are fields, or sets of fields, that can identify it. These fields or sets of fields can be referred to as **strong identifiers** if they can uniquely identify an entity without any ambiguity, or as **weak identifiers** if they can identify an entity under some circumstances, but are not guaranteed to uniquely identify an entity in all cases. In many cases, though, a selection of weak identifiers can be combined to produce a strong identifier.
+For each type of entity there are fields, or sets of fields, that can identify particular instances of that entity. These fields or sets of fields can be referred to as **strong identifiers** if they can uniquely identify an entity without any ambiguity, or as **weak identifiers** if they can identify an entity under some circumstances, but are not guaranteed to uniquely identify an entity in all cases. In many cases, though, a selection of weak identifiers can be combined to produce a strong identifier.
For example, user accounts can be identified as **account** entities in more than one way: using a single **strong identifer** like a Microsoft Entra account's numeric identifier (the **GUID** field), or its **User Principal Name (UPN)** value, or alternatively, using a combination of **weak identifiers** like its **Name** and **NTDomain** fields. Different data sources can identify the same user in different ways. Whenever Microsoft Sentinel encounters two entities that it can recognize as the same entity based on their identifiers, it merges the two entities into a single entity, so that it can be handled properly and consistently.
-If, however, one of your resource providers creates an alert in which an entity is not sufficiently identified - for example, using only a single **weak identifier** like a user name without the domain name context - then the user entity cannot be merged with other instances of the same user account. Those other instances would be identified as a separate entity, and those two entities would remain separate instead of unified.
+If, however, one of your resource providers creates an alert in which an entity is not sufficiently identified&mdash;for example, using only a single **weak identifier** like a user name without the domain name context&mdash;then the user entity cannot be merged with other instances of the same user account. Those other instances would be identified as a separate entity, and those two entities would remain separate instead of unified.
In order to minimize the risk of this happening, you should verify that all of your alert providers properly identify the entities in the alerts they produce. Additionally, synchronizing user account entities with Microsoft Entra ID may create a unifying directory, which will be able to merge user account entities.
In order to minimize the risk of this happening, you should verify that all of y
The following types of entities are currently identified in Microsoft Sentinel: -- Account-- Host-- IP address-- URL-- Azure resource-- Cloud application-- DNS resolution-- File-- File hash-- Malware-- Process-- Registry key-- Registry value-- Security group-- Mailbox-- Mail cluster-- Mail message-- Submission mail
+- [Account](entities-reference.md#account)
+- [Host](entities-reference.md#host)
+- [IP address](entities-reference.md#ip)
+- [URL](entities-reference.md#url)
+- [Azure resource](entities-reference.md#azure-resource)
+- [Cloud application](entities-reference.md#cloud-application)
+- [DNS resolution](entities-reference.md#dns-resolution)
+- [File](entities-reference.md#file)
+- [File hash](entities-reference.md#file-hash)
+- [Malware](entities-reference.md#malware)
+- [Process](entities-reference.md#process)
+- [Registry key](entities-reference.md#registry-key)
+- [Registry value](entities-reference.md#registry-value)
+- [Security group](entities-reference.md#security-group)
+- [Mailbox](entities-reference.md#mailbox)
+- [Mail cluster](entities-reference.md#mail-cluster)
+- [Mail message](entities-reference.md#mail-message)
+- [Submission mail](entities-reference.md#submission-mail)
You can view these entities' identifiers and other relevant information in the [entities reference](entities-reference.md).
You can view these entities' identifiers and other relevant information in the [
How does Microsoft Sentinel recognize a piece of data in an alert as identifying an entity?
-Let's look at how data processing is done in Microsoft Sentinel. Data is ingested from various sources through [connectors](connect-data-sources.md), whether service-to-service, agent-based, or using a syslog service and a log forwarder. The data is stored in tables in your Log Analytics workspace. These tables are then queried at regularly scheduled intervals by the analytics rules you have defined and enabled. One of the many actions taken by these analytics rules is the mapping of data fields in the tables to Microsoft Sentinel-recognized entities. According to mappings you define in your analytics rules, Microsoft Sentinel will take fields from the results returned by your query, recognize them by the identifiers you specified for each entity type, and apply to them the entity type identified by those identifiers.
+Let's look at how data processing is done in Microsoft Sentinel. Data is ingested from various sources through [connectors](connect-data-sources.md), whether service-to-service, agent-based, or API-based. The data is stored in tables in your Log Analytics workspace. These tables are queried at regular intervals by the scheduled or near-real-time analytics rules you've defined and enabled, or on-demand as part of hunting queries when you hunt for threats. Part of the definition of these analytics rules and hunting queries is the mapping of data fields in the tables to entity types recognized by Microsoft Sentinel. According to the mappings you define, Microsoft Sentinel will take fields from the results returned by your query, recognize them by the identifiers you specified for each entity type, and apply to them the entity type identified by those identifiers.
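As a hedged illustration of that mapping flow, the query below uses the standard `SigninLogs` table to project both a strong identifier (the Microsoft Entra GUID) and weaker fields for the **Account** entity, so the rule's entity mapping can bind them; the threshold and column aliases are illustrative only.

```kusto
// Shape the rule query's results so the entity mapping can bind the
// Account entity: AadUserId (a GUID) and Upn are strong identifiers;
// Name alone would be a weak identifier.
SigninLogs
| where ResultType != "0"                        // failed sign-ins
| summarize FailedCount = count()
    by UserId, UserPrincipalName, UserDisplayName
| where FailedCount > 20                         // arbitrary example threshold
| project AadUserId = UserId,
          Upn = UserPrincipalName,
          Name = UserDisplayName,
          FailedCount
```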
What's the point of all this?
-When Microsoft Sentinel is able to identify entities in alerts from different types of data sources, and especially if it can do so using strong identifiers common to each data source or to a third schema, it can then easily correlate between all of these alerts and data sources. These correlations help build a rich store of information and insights on the entities, giving you a solid foundation for your security operations.
+When Microsoft Sentinel is able to identify entities in alerts from different types of data sources, and especially if it can do so using strong identifiers common to each data source or to another schema, it can then easily correlate between all of these alerts and data sources. These correlations help build a rich store of information and insights on the entities, giving you a solid foundation and context for investigating and responding to security threats.
Learn how to [map data fields to entities](map-data-fields-to-entities.md).
Learn [which identifiers strongly identify an entity](entities-reference.md).
## Entity pages
-Information about entity pages can now be found at [Investigate entities with entity pages in Microsoft Sentinel](entity-pages.md).
+Information about entity pages can now be found at [Entity pages in Microsoft Sentinel](entity-pages.md).
## Next steps
sentinel Entity Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entity-pages.md
Title: Investigate entities with entity pages in Microsoft Sentinel
-description: Use entity pages to get information about entities that you come across in your incident investigations. Gain insights into entity activities and assess risk.
+ Title: Entity pages in Microsoft Sentinel
+description: Entity pages display information about entities surfaced in your alerts, or that you otherwise come across in your incident investigations. Among this information is the timeline of alerts involving the entity, and curated insights into entity activities. Entity pages provide a rich foundation and context for your investigations, helping you detect, analyze, mitigate, and respond to security threats.
Previously updated : 01/17/2023 Last updated : 03/16/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
-# Investigate entities with entity pages in Microsoft Sentinel
+# Entity pages in Microsoft Sentinel
-When you come across a user account, a hostname / IP address, or an Azure resource in an incident investigation, you may decide you want to know more about it. For example, you might want to know its activity history, whether it's appeared in other alerts or incidents, whether it's done anything unexpected or out of character, and so on. In short, you want information that can help you determine what sort of threat these entities represent and guide your investigation accordingly.
+When you come across a user account, a hostname, an IP address, or an Azure resource in an incident investigation, you may decide you want to know more about it. For example, you might want to know its activity history, whether it's appeared in other alerts or incidents, whether it's done anything unexpected or out of character, and so on. In short, you want information that can help you determine what sort of threat these entities represent and guide your investigation accordingly.
+ ## Entity pages
More specifically, entity pages consist of three parts:
- The right-side panel presents [behavioral insights](#entity-insights) on the entity. These insights are continuously developed by Microsoft security research teams. They are based on various data sources and provide context for the entity and its observed activities, helping you to quickly identify [anomalous behavior](soc-ml-anomalies.md) and security threats.
- As of November 2023, the next generation of insights is starting to be made available in **PREVIEW**, in the form of [enrichment widgets](whats-new.md#visualize-data-with-enrichment-widgets-preview). These new insights can integrate data from external sources and get updates in real time, and they can be seen alongside the existing insights. To take advantage of these new widgets, you must [enable the widget experience](enable-enrichment-widgets.md).
-
- - [See the instructions for enabling the widget experience](enable-enrichment-widgets.md).
- - [Learn more about enrichment widgets](whats-new.md#visualize-data-with-enrichment-widgets-preview).
+ As of November 2023, the next generation of insights is starting to be made available in **PREVIEW**, in the form of enrichment widgets. These new insights can integrate data from external sources and get updates in real time, and they can be seen alongside the existing insights. To take advantage of these new widgets, you must [enable the widget experience](enable-enrichment-widgets.md).
If you're investigating an incident using the **[new investigation experience](investigate-incidents.md)**, you'll be able to see a panelized version of the entity page right inside the incident details page. You have a [list of all the entities in a given incident](investigate-incidents.md#explore-the-incidents-entities), and selecting an entity opens a side panel with three "cards"&mdash;**Info**, **Timeline**, and **Insights**&mdash; showing all the same information described above, within the specific time frame corresponding with that of the alerts in the incident.
+If you're using the **[unified security operations platform](https://go.microsoft.com/fwlink/p/?linkid=2263690)** in the Microsoft Defender portal, the **timeline** and **insights** panels appear in the **Sentinel events** tab of the Defender entity page.
+
+# [Azure portal](#tab/azure-portal)
++
+# [Defender portal](#tab/defender-portal)
++++ ## The timeline
+# [Azure portal](#tab/azure-portal)
The timeline is a major part of the entity page's contribution to behavior analytics in Microsoft Sentinel. It presents a story about entity-related events, helping you understand the entity's activity within a specific time frame. You can choose the **time range** from among several preset options (such as *last 24 hours*), or set it to any custom-defined time frame. Additionally, you can set filters that limit the information in the timeline to specific types of events or alerts.
-The following types of items are included in the timeline:
+The following types of items are included in the timeline.
+
+- **Alerts**: any alerts in which the entity is defined as a **mapped entity**. Note that if your organization has created [custom alerts using analytics rules](./detect-threats-custom.md), you should make sure that the rules' entity mapping is done properly.
+
+- **Bookmarks**: any bookmarks that include the specific entity shown on the page.
+
+- **Anomalies**: UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.
+
+- **Activities**: aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.
-- Alerts - any alerts in which the entity is defined as a **mapped entity**. Note that if your organization has created [custom alerts using analytics rules](./detect-threats-custom.md), you should make sure that the rules' entity mapping is done properly. -- Bookmarks - any bookmarks that include the specific entity shown on the page.
+# [Defender portal](#tab/defender-portal)
-- Anomalies - UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.
+The timeline on the **Sentinel events** tab is a major part of the entity page's contribution to behavior analytics in the unified security operations platform in Microsoft Defender. It presents a story about entity-related events, helping you understand the entity's activity within a specific time frame.
-- Activities - aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.
+In particular, the **Sentinel events** timeline shows alerts and events from third-party sources collected only by Microsoft Sentinel, such as syslog/CEF logs and custom logs ingested through the Azure Monitor Agent or custom connectors.
+
+The following types of items are included in the timeline.
+
+- **Alerts**: any alerts in which the entity is defined as a **mapped entity**. Note that if your organization has created [custom alerts using analytics rules](./detect-threats-custom.md), you should make sure that the rules' entity mapping is done properly.
+
+- **Bookmarks**: any bookmarks that include the specific entity shown on the page.
+
+- **Anomalies**: UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.
+
+- **Activities**: aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.
++
+This timeline displays information from the past 24 hours. This period is not currently adjustable.
++ ## Entity insights
The insights are based on the following data sources:
- Heartbeat (Azure Monitor Agent) - CommonSecurityLog (Microsoft Sentinel)
+Generally speaking, each entity insight displayed on the entity page is accompanied by a link that takes you to a page where the underlying query is displayed along with its results, so you can examine them in greater depth.
+
+- In Microsoft Sentinel in the Azure portal, the link takes you to the **Logs** page.
+- In the unified security operations platform in the Microsoft Defender portal, the link takes you to the **Advanced hunting** page.
+ ## How to use entity pages Entity pages are designed to be part of multiple usage scenarios, and can be accessed from incident management, the investigation graph, bookmarks, or directly from the entity search page under **Entity behavior** in the Microsoft Sentinel main menu.
Microsoft Sentinel currently offers the following entity pages:
> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md). - Azure resource (**Preview**)-- IoT device (**Preview**)-
+- IoT device (**Preview**)&mdash;only in Microsoft Sentinel in the Azure portal for now.
## Next steps In this document, you learned about getting information about entities in Microsoft Sentinel using entity pages. For more information about entities and how you can use them, see the following articles: -- [Classify and analyze data using entities in Microsoft Sentinel](entities.md).
+- [Learn about entities in Microsoft Sentinel](entities.md).
- [Customize activities on entity page timelines](customize-entity-activities.md). - [Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel](identify-threats-with-entity-behavior-analytics.md) - [Enable entity behavior analytics](./enable-entity-behavior-analytics.md) in Microsoft Sentinel.
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
This article describes the features available in Microsoft Sentinel across diffe
|[Microsoft 365 Defender incident integration](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-defender-xdr-incidents-in-microsoft-sentinel-and-bi-directional-sync) |GA |&#x2705; |&#x2705;| &#10060; | |[Microsoft Teams integrations](collaborate-in-microsoft-teams.md) |Public preview |&#x2705; |&#x2705;| &#10060; | |[Playbook template gallery](use-playbook-templates.md) |Public preview |&#x2705; |&#x2705;| &#10060; |
-|[Run playbooks on entities](respond-threats-during-investigation.md) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Run playbooks on entities](respond-threats-during-investigation.md) |Public preview |&#x2705; |&#x2705; |&#x2705; |
|[Run playbooks on incidents](automate-responses-with-playbooks.md) |Public preview |&#x2705; |&#x2705;| &#x2705; | |[SOC incident audit metrics](manage-soc-with-incident-metrics.md) |GA |&#x2705; |&#x2705;| &#x2705; |
This article describes the features available in Microsoft Sentinel across diffe
<sup><a name="partialga"></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
+## Managing Microsoft Sentinel
+
+|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
+||||||
+|[Workspace manager](workspace-manager.md) |Public preview | &#x2705; |&#x2705; |&#10060; |
+|[SIEM migration experience](siem-migration.md) | GA | &#x2705; |&#10060; |&#10060; |
+ ## Normalization |Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
sentinel Geographical Availability Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/geographical-availability-data-residency.md
Microsoft Sentinel can run on workspaces in the following regions:
|North America |South America |Asia |Europe |Australia |Africa | |||||||
-|**US**<br><br>• Central US<br>• Central US EUAP<br>• East US<br>• East US 2<br>• East US 2 EUAP<br>• North Central US<br>• South Central US<br>• West US<br>• West US 2<br>• West US 3<br>• West Central US<br>• USNat East<br>• USNat West<br>• USSec East<br>• USSec West<br><br>**Azure government**<br><br>• USGov Non-Regional<br>• USGov Arizona<br>• USGov Virginia<br><br>**Canada**<br><br>• Canada Central<br>• Canada East |• Brazil South<br>• Brazil Southeast |• East Asia<br>• Southeast Asia<br>• Qatar Central<br><br>**Japan**<br><br>• Japan East<br>• Japan West<br><br>**China 21Vianet**<br><br>• China East 2<br><br>**India**<br><br>• Central India<br>• South India<br>• West India<br>• Jio India West<br>• Jio India Central<br><br>**Korea**<br><br>• Korea Central<br>• Korea South<br><br>**UAE**<br><br>• UAE Central<br>• UAE North |• North Europe<br>• West Europe<br><br>**France**<br><br>• France Central<br>• France South<br><br>**Germany**<br><br>• Germany West Central<br>• Germany North<br><br>**Norway**<br><br>• Norway East<br>• Norway West<br><br>**Sweden**<br><br>• Sweden Central<br><br>**Switzerland**<br><br>• Switzerland North<br>• Switzerland West<br><br>**UK**<br><br>• UK South<br>• UK West |• Australia Central<br>Australia Central 2<br>• Australia East<br>• Australia Southeast |• South Africa North<br>• South Africa West |
+|**US**<br><br>• Central US<br>• East US<br>• East US 2<br>• East US 2 EUAP<br>• North Central US<br>• South Central US<br>• West US<br>• West US 2<br>• West US 3<br>• West Central US<br>• USNat East<br>• USNat West<br>• USSec East<br>• USSec West<br><br>**Azure government**<br><br>• USGov Arizona<br>• USGov Virginia<br><br>**Canada**<br><br>• Canada Central<br>• Canada East |• Brazil South<br>• Brazil Southeast |• East Asia<br>• Southeast Asia<br>• Qatar Central<br><br>**Japan**<br><br>• Japan East<br>• Japan West<br><br>**China 21Vianet**<br><br>• China East 2<br><br>**India**<br><br>• Central India<br>• Jio India West<br>• Jio India Central<br><br>**Korea**<br><br>• Korea Central<br>• Korea South<br><br>**UAE**<br><br>• UAE Central<br>• UAE North |• North Europe<br>• West Europe<br><br>**France**<br><br>• France Central<br>• France South<br><br>**Germany**<br><br>• Germany West Central<br><br>**Norway**<br><br>• Norway East<br>• Norway West<br><br>**Sweden**<br><br>• Sweden Central<br><br>**Switzerland**<br><br>• Switzerland North<br>• Switzerland West<br><br>**UK**<br><br>• UK South<br>• UK West |• Australia Central<br>• Australia Central 2<br>• Australia East<br>• Australia Southeast |• South Africa North |
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/hunting.md
Title: Hunting capabilities in Microsoft Sentinel| Microsoft Docs description: Use Microsoft Sentinel's built-in hunting queries to guide you into asking the right questions to find issues in your data.- - Previously updated : 09/28/2022- Last updated : 03/13/2024++
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
-# Hunt for threats with Microsoft Sentinel
+# Threat hunting in Microsoft Sentinel
-As security analysts and investigators, you want to be proactive about looking for security threats, but your various systems and security appliances generate mountains of data that can be difficult to parse and filter into meaningful events. Microsoft Sentinel has powerful hunting search and query tools to hunt for security threats across your organization's data sources. To help security analysts look proactively for new anomalies that weren't detected by your security apps or even by your scheduled analytics rules, Microsoft Sentinel's built-in hunting queries guide you into asking the right questions to find issues in the data you already have on your network.
+As security analysts and investigators, you want to be proactive about looking for security threats, but your various systems and security appliances generate mountains of data that can be difficult to parse and filter into meaningful events. Microsoft Sentinel has powerful hunting search and query tools to hunt for security threats across your organization's data sources. To help security analysts look proactively for new anomalies that aren't detected by your security apps or even by your scheduled analytics rules, Microsoft Sentinel's built-in hunting queries guide you into asking the right questions to find issues in the data you already have on your network.
-For example, one built-in query provides data about the most uncommon processes running on your infrastructure. You wouldn't want an alert about each time they are run - they could be entirely innocent - but you might want to take a look at the query on occasion to see if there's anything unusual.
+For example, one built-in query provides data about the most uncommon processes running on your infrastructure. You wouldn't want an alert each time they run, since they could be entirely innocent, but you might want to take a look at the query on occasion to see if there's anything unusual.
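A minimal sketch of that idea, assuming Windows process-creation events (event ID 4688) are being ingested into the `SecurityEvent` table; the rarity cutoff is arbitrary:

```kusto
// Surface the rarest processes seen in the last 7 days for occasional
// review, rather than alerting on every execution.
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4688                          // process creation
| summarize Executions = count(), Hosts = dcount(Computer) by NewProcessName
| where Executions <= 5                          // arbitrary "uncommon" cutoff
| order by Executions asc
```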
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] ## Use built-in queries
Built-in hunting queries are developed by Microsoft security researchers on a co
Use queries before, during, and after a compromise to take the following actions: -- **Before an incident occurs**: Waiting on detections is not enough. Take proactive action by running any threat-hunting queries related to the data you're ingesting into your workspace at least once a week.
+- **Before an incident occurs**: Waiting on detections isn't enough. Take proactive action by running any threat-hunting queries related to the data you're ingesting into your workspace at least once a week.
- Results from your proactive hunting provide early insight into events that may confirm that a compromise is in process, or will at least show weaker areas in your environment that are at risk and need attention.
+ Results from your proactive hunting provide early insight into events that might confirm that a compromise is in process, or at least show weaker areas in your environment that are at risk and need attention.
- **During a compromise**: Use [livestream](livestream.md) to run a specific query constantly, presenting results as they come in. Use livestream when you need to actively monitor user events, such as if you need to verify whether a specific compromise is still taking place, to help determine a threat actor's next action, and towards the end of an investigation to confirm that the compromise is indeed over. -- **After a compromise**: After a compromise or an incident has occurred, make sure to improve your coverage and insight to prevent similar incidents in the future.
+- **After a compromise**: After a compromise or an incident has occurred, make sure to improve your coverage and insight to prevent similar incidents in the future.
- - Modify your existing queries or create new ones to assist with early detection, based on insights you've gained from your compromise or incident.
+ - Modify your existing queries or create new ones to assist with early detection, based on insights gained from your compromise or incident.
- - If you've discovered or created a hunting query that provides high value insights into possible attacks, create custom detection rules based on that query and surface those insights as alerts to your security incident responders.
+ - If you discovered or created a hunting query that provides high value insights into possible attacks, create custom detection rules based on that query and surface those insights as alerts to your security incident responders.
View the query's results, and select **New alert rule** > **Create Microsoft Sentinel alert**. Use the **Analytics rule wizard** to create a new rule based on your query. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md). You can also create hunting and livestream queries over data stored in Azure Data Explorer. For more information, see details of [constructing cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md) in the Azure Monitor documentation.
-Use community resources, such as the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries) to find additional queries and data sources.
+Use community resources, such as the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries) to find more queries and data sources.
## Use the hunting dashboard
The following table describes detailed actions available from the hunting dashbo
| Action | Description | | | | | **See how queries apply to your environment** | Select the **Run all queries** button, or select a subset of queries using the check boxes to the left of each row and select the **Run selected queries** button. <br><br>Running your queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried. |
-| **View the queries that returned results** | After your queries are done running, view the queries that returned results using the **Results** filter: <br>- Sort to see which queries had the most or fewest results. <br>- View the queries that are not at all active in your environment by selecting *N/A* in the **Results** filter. <br>- Hover over the info icon (**i**) next to the *N/A* to see which data sources are required to make this query active. |
-| **Identify spikes in your data** | Identify spikes in the data by sorting or filtering on **Results delta** or **Results delta percentage**. <br><br>This compares the results of the last 24 hours against the results of the previous 24-48 hours, highlighting any large differences or relative difference in volume. |
-| **View queries mapped to the MITRE ATT&CK tactic** | The **MITRE ATT&CK tactic bar**, at the top of the table, lists how many queries are mapped to each MITRE ATT&CK tactic. The tactic bar gets dynamically updated based on the current set of filters applied. <br><br>This enables you to see which MITRE ATT&CK tactics show up when you filter by a given result count, a high result delta, *N/A* results, or any other set of filters. |
-| **View queries mapped to MITRE ATT&CK techniques** | Queries can also be mapped to MITRE ATT&CK techniques. You can filter or sort by MITRE ATT&CK techniques using the **Technique** filter. By opening a query, you will be able to select the technique to see the MITRE ATT&CK description of the technique. |
+| **View the queries that returned results** | After your queries are done running, view the queries that returned results using the **Results** filter: <br>- Sort to see which queries had the most or fewest results. <br>- View the queries that aren't at all active in your environment by selecting *N/A* in the **Results** filter. <br>- Hover over the info icon (**i**) next to the *N/A* to see which data sources are required to make this query active. |
+| **Identify spikes in your data** | Identify spikes in the data by sorting or filtering on **Results delta** or **Results delta percentage**. <br><br>Compares the results of the last 24 hours against the results of the previous 24-48 hours, highlighting any large differences or relative difference in volume. |
+| **View queries mapped to the MITRE ATT&CK tactic** | The **MITRE ATT&CK tactic bar**, at the top of the table, lists how many queries are mapped to each MITRE ATT&CK tactic. The tactic bar gets dynamically updated based on the current set of filters applied. <br><br>Enables you to see which MITRE ATT&CK tactics show up when you filter by a given result count, a high result delta, *N/A* results, or any other set of filters. |
+| **View queries mapped to MITRE ATT&CK techniques** | Queries can also be mapped to MITRE ATT&CK techniques. You can filter or sort by MITRE ATT&CK techniques using the **Technique** filter. By opening a query, you're able to select the technique to see the MITRE ATT&CK description of the technique. |
| **Save a query to your favorites** | Queries saved to your favorites automatically run each time the **Hunting** page is accessed. You can create your own hunting query or clone and customize an existing hunting query template. | | **Run queries** | Select **Run Query** in the hunting query details page to run the query directly from the hunting page. The number of matches is displayed within the table, in the **Results** column. Review the list of hunting queries and their matches. |
-| **Review an underlying query** | Perform a quick review of the underlying query in the query details pane. You can see the results by clicking the **View query results** link (below the query window) or the **View Results** button (at the bottom of the pane). The query will open in the **Logs** (Log Analytics) blade, and below the query, you can review the matches for the query. |
+| **Review an underlying query** | Perform a quick review of the underlying query in the query details pane. You can see the results by clicking the **View query results** link (below the query window) or the **View Results** button (at the bottom of the pane). The query opens in the **Logs** (Log Analytics) page, and below the query, you can review the matches for the query. |
## Create a custom hunting query
Create or modify a query and save it as your own query or share it with users wh
1. Fill in all the blank fields and select **Create**.
- 1. Create entity mappings by selecting entity types, identifiers and columns.
+ 1. Create entity mappings by selecting entity types, identifiers, and columns.
:::image type="content" source="media/hunting/map-entity-types-hunting.png" alt-text="Screenshot for mapping entity types in hunting queries.":::
- 1. Map MITRE ATT&CK techniques to your hunting queries by selecting the tactic, technique and sub-technique (if applicable).
+ 1. Map MITRE ATT&CK techniques to your hunting queries by selecting the tactic, technique, and sub-technique (if applicable).
:::image type="content" source="./media/hunting/mitre-attack-mapping-hunting.png" alt-text="New query" lightbox="./media/hunting/new-query.png":::
Create or modify a query and save it as your own query or share it with users wh
**To modify an existing custom query**:
-1. From the table, select the hunting query that you wish to modify. Note that only queries that from a custom content source can be edited. Other content sources have to be edited at that source.
+1. From the table, select the hunting query that you wish to modify. Only queries from a custom content source can be edited. Content from other sources must be edited at its source.
1. Select the ellipsis (...) in the line of the query you want to modify, and select **Edit query**.
We recommend that your query uses an [Advanced Security Information Model (ASIM)
## Create bookmarks
-During the hunting and investigation process, you may come across query results that may look unusual or suspicious. Bookmark these items to refer back to them in the future, such as when creating or enriching an incident for investigation. Events such as potential root causes, indicators of compromise, or other notable events should be raised as a bookmark. If a key event you've bookmarked is severe enough to warrant an investigation, escalate it to an incident.
+During the hunting and investigation process, you might come across query results that look unusual or suspicious. Bookmark these items to refer back to them in the future, such as when creating or enriching an incident for investigation. Events such as potential root causes, indicators of compromise, or other notable events should be raised as a bookmark. If a key event you bookmarked is severe enough to warrant an investigation, escalate it to an incident.
-- In your results, mark the checkboxes for any rows you want to preserve, and select **Add bookmark**. This creates for a record for each marked row - a bookmark - that contains the row results as well as the query that created the results. You can add your own tags and notes to each bookmark.
+- In your results, mark the checkboxes for any rows you want to preserve, and select **Add bookmark**. This creates a bookmark for each marked row: a record that contains the row results and the query that created them. You can add your own tags and notes to each bookmark.
- As with scheduled analytics rules, you can enrich your bookmarks with entity mappings to extract multiple entity types and identifiers, and MITRE ATT&CK mappings to associate particular tactics and techniques.
- - Bookmarks will default to use the same entity and MITRE ATT&CK technique mappings as the hunting query that produced the bookmarked results.
+ - By default, bookmarks use the same entity and MITRE ATT&CK technique mappings as the hunting query that produced the bookmarked results.
- View all the bookmarked findings by clicking on the **Bookmarks** tab in the main **Hunting** page. Add tags to bookmarks to classify them for filtering. For example, if you're investigating an attack campaign, you can create a tag for the campaign, apply the tag to any relevant bookmarks, and then filter all the bookmarks based on the campaign.
When your hunting and investigations become more complex, use Microsoft Sentinel
Notebooks provide a kind of virtual sandbox, complete with its own kernel, where you can carry out a complete investigation. Your notebook can include the raw data, the code you run on that data, the results, and their visualizations. Save your notebooks so that you can share it with others to reuse in your organization.
-Notebooks may be helpful when your hunting or investigation becomes too large to remember easily, view details, or when you need to save queries and results. To help you create and share notebooks, Microsoft Sentinel provides [Jupyter Notebooks](https://jupyter.org), an open-source, interactive development and data manipulation environment, integrated directly in the Microsoft Sentinel **Notebooks** page.
+Notebooks might be helpful when your hunting or investigation grows too large to remember or view the details of easily, or when you need to save queries and results. To help you create and share notebooks, Microsoft Sentinel provides [Jupyter Notebooks](https://jupyter.org), an open-source, interactive development and data manipulation environment, integrated directly in the Microsoft Sentinel **Notebooks** page.
For more information, see:
The following table describes some methods of using Jupyter notebooks to help yo
|Method |Description | |||
-|**Data persistence, repeatability, and backtracking** | If you're working with many queries and results sets, you're likely to have some dead ends. You'll need to decide which queries and results to keep, and how to accumulate the useful results in a single report. <br><br> Use Jupyter Notebooks to save queries and data as you go, use variables to rerun queries with different values or dates, or save your queries to rerun on future investigations. |
-|**Scripting and programming** | Use Jupyter Notebooks to add programming to your queries, including: <br><br>- *Declarative* languages like [Kusto Query Language (KQL)](/azure/kusto/query/) or SQL, to encode your logic in a single, possibly complex, statement.<br>- *Procedural* programming languages, to run logic in a series of steps. <br><br>Splitting your logic into steps can help you see and debug intermediate results, add functionality that might not be available in the query language, and reuse partial results in later processing steps. |
+|**Data persistence, repeatability, and backtracking** | If you're working with many queries and results sets, you're likely to have some dead ends. You need to decide which queries and results to keep, and how to accumulate the useful results in a single report. <br><br> Use Jupyter Notebooks to save queries and data as you go, use variables to rerun queries with different values or dates, or save your queries to rerun on future investigations. |
+|**Scripting and programming** | Use Jupyter Notebooks to add programming to your queries, including: <br><br>- *Declarative* languages like [Kusto Query Language (KQL)](/azure/kusto/query/) or SQL, to encode your logic in a single, possibly complex, statement.<br>- *Procedural* programming languages, to run logic in a series of steps. <br><br>Split your logic into steps to help you see and debug intermediate results, add functionality that might not be available in the query language, and reuse partial results in later processing steps. |
|**Links to external data** | While Microsoft Sentinel tables have most telemetry and event data, Jupyter Notebooks can link to any data that's accessible over your network or from a file. Using Jupyter Notebooks allows you to include data such as: <br><br>- Data in external services that you don't own, such as geolocation data or threat intelligence sources<br>- Sensitive data that's stored only within your organization, such as human resource databases or lists of high-value assets<br>- Data that you haven't yet migrated to the cloud. |
-|**Specialized data processing, machine learning, and visualization tools** | Jupyter Notebooks provides additional visualizations, machine learning libraries, and data processing and transformation features. <br><br>For example, use Jupyter Notebooks with the following [Python](https://python.org) capabilities:<br>- [pandas](https://pandas.pydata.org/) for data processing, cleanup, and engineering<br>- [Matplotlib](https://matplotlib.org), [HoloViews](https://holoviews.org), and [Plotly](https://plot.ly) for visualization<br>- [NumPy](https://www.numpy.org) and [SciPy](https://www.scipy.org) for advanced numerical and scientific processing<br>- [scikit-learn](https://scikit-learn.org/stable/index.html) for machine learning<br>- [TensorFlow](https://www.tensorflow.org/), [PyTorch](https://pytorch.org), and [Keras](https://keras.io/) for deep learning<br><br>**Tip**: Jupyter Notebooks supports multiple language kernels. Use *magics* to mix languages within the same notebook, by allowing the execution of individual cells using another language. For example, you can retrieve data using a PowerShell script cell, process the data in Python, and use JavaScript to render a visualization. |
+|**Specialized data processing, machine learning, and visualization tools** | Jupyter Notebooks provides more visualizations, machine learning libraries, and data processing and transformation features. <br><br>For example, use Jupyter Notebooks with the following [Python](https://python.org) capabilities:<br>- [pandas](https://pandas.pydata.org/) for data processing, cleanup, and engineering<br>- [Matplotlib](https://matplotlib.org), [HoloViews](https://holoviews.org), and [Plotly](https://plot.ly) for visualization<br>- [NumPy](https://www.numpy.org) and [SciPy](https://www.scipy.org) for advanced numerical and scientific processing<br>- [scikit-learn](https://scikit-learn.org/stable/index.html) for machine learning<br>- [TensorFlow](https://www.tensorflow.org/), [PyTorch](https://pytorch.org), and [Keras](https://keras.io/) for deep learning<br><br>**Tip**: Jupyter Notebooks supports multiple language kernels. Use *magics* to mix languages within the same notebook, by allowing the execution of individual cells using another language. For example, you can retrieve data using a PowerShell script cell, process the data in Python, and use JavaScript to render a visualization. |
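For example, the *declarative* approach described in the table might be a single KQL statement that one notebook cell runs and that later cells refine procedurally. The following is a minimal sketch, assuming the `SigninLogs` table is ingested; the three-country threshold is purely illustrative.

```kusto
// Sketch: one declarative KQL statement that encodes the whole hunt logic.
// A notebook could run this in a single cell, then post-process the results
// procedurally (in Python, for instance) in later cells.
SigninLogs
| where TimeGenerated > ago(14d)
| summarize SigninCount = count(), DistinctCountries = dcount(Location) by UserPrincipalName
| where DistinctCountries > 3   // illustrative threshold
| order by DistinctCountries desc
```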
### MSTIC, Jupyter, and Python security tools
The following operators are especially helpful in Microsoft Sentinel hunting que
- **summarize** - Produce a table that aggregates the content of the input table. -- **join** - Merge the rows of two tables to form a new table by matching values of the specified column(s) from each table.
+- **join** - Merge the rows of two tables to form a new table by matching values of the specified columns from each table.
- **count** - Return the number of records in the input record set.
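As a rough sketch of how these operators combine in a hunt (assuming the `SecurityEvent` table is ingested and using illustrative thresholds):

```kusto
// summarize aggregates each event set, join correlates the two result sets
// on Account, and count returns the size of the final record set.
SecurityEvent
| where EventID == 4625   // failed logons
| summarize FailedLogons = count() by Account
| join kind=inner (
    SecurityEvent
    | where EventID == 4624   // successful logons
    | summarize SuccessfulLogons = count() by Account
) on Account
| where FailedLogons > 20   // illustrative threshold
| count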
sentinel Hunts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/hunts.md
description: Learn how to use hunts for conducting end-to-end proactive threat h
Previously updated : 04/24/2023 Last updated : 03/12/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
-# Use Hunts to conduct end-to-end proactive threat hunting in Microsoft Sentinel
+# Conduct end-to-end proactive threat hunting in Microsoft Sentinel
Proactive threat hunting is a process where security analysts seek out undetected threats and malicious behaviors. By creating a hypothesis, searching through data, and validating that hypothesis, they determine what to act on. Actions can include creating new detections, generating new threat intelligence, or spinning up a new incident.
-Learn how to use the **Hunts** feature, which provides an end to end hunting experience within Microsoft Sentinel.
+Use the end-to-end hunting experience within Microsoft Sentinel to:
-Common use cases:
- Proactively hunt based on specific MITRE techniques, potentially malicious activity, recent threats, or your own custom hypothesis. - Use security-researcher-generated hunting queries or custom hunting queries to investigate malicious behavior. - Conduct your hunts using multiple persisted-query tabs that enable you to keep context over time.
Common use cases:
- Keep track of your new, active, and closed hunts in one place. - View metrics based on validated hypotheses and tangible results. + ## Prerequisites In order to use the hunts feature, you need to be assigned either a built-in Microsoft Sentinel role or a custom Azure RBAC role. Here are your options:
In order to use the hunts feature, you either need to be assigned a built-in Mic
Defining a hypothesis is an open ended, flexible process and can include any idea you want to validate. Common hypotheses include: - Suspicious behavior - Investigate potentially malicious activity that's visible in your environment to determine if an attack is occurring.-- New threat campaign - Look for types of malicious activity based on newly discovered threat actors, techniques, or vulnerabilities. This might be something you've heard about in a security news article.
+- New threat campaign - Look for types of malicious activity based on newly discovered threat actors, techniques, or vulnerabilities. This might be something you heard about in a security news article.
- Detection gaps - Increase your detection coverage using the MITRE ATT&CK map to identify gaps. Microsoft Sentinel gives you flexibility as you zero in on the right set of hunting queries to investigate your hypothesis. When you create a hunt, initiate it with preselected hunting queries or add queries as you progress. Here are recommendations for preselected queries based on the most common hypotheses. ### Hypothesis - Suspicious behavior
-1. Navigate to the Hunting page **Queries** tab. With a well-established base of queries installed, running all your queries is the recommended method for identifying potentially malicious behaviors.
-1. Select **Run All queries** > wait for the queries to execute. This process may take a while.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Hunting**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Hunting**.
+
+1. Select the **Queries** tab. To identify potentially malicious behaviors, run all the queries.
+1. Select **Run All queries** > wait for the queries to execute. This process might take a while.
1. Select **Add filter** > **Results** > clear the checkboxes for the "!", "N/A", "-", and "0" values > **Apply** :::image type="content" source="media/hunts/all-queries-with-results.png" alt-text="Screenshot shows the filter described in step 3.":::
-1. Sort these results by the **Results Delta** column to see what has changed most recently. These results provide initial guidance on the hunt.
+1. Sort these results by the **Results Delta** column to see what changed most recently. These results provide initial guidance on the hunt.
### Hypothesis - New threat campaign
-Content hub offers threat campaign and domain-based solutions to hunt for specific attacks.
-
-1. For example, install the "Log4J Vulnerability Detection" or the "Apache Tomcat" solutions from Microsoft.
-
- :::image type="content" source="media/hunts/content-hub-solutions.png" alt-text="Screenshot shows the content hub in grid view with the Log4J and Apache solutions selected." lightbox="media/hunts/content-hub-solutions.png":::
-1. Once installed, create a hunt directly from the solution by selecting the package > **Actions** > **Create hunt (Preview)**.
+The content hub offers threat campaign and domain-based solutions to hunt for specific attacks. In the following steps, you install one of these types of solutions.
- :::image type="content" source="media/hunts/add-content-queries-to-hunt.png" alt-text="Screenshot shows action menu options from content hub solutions page.":::
+1. Go to the **Content Hub**.
+1. Install a threat campaign or domain-based solution like the **Log4J Vulnerability Detection** or **Apache Tomcat**.
-1. If you already have a hunt started, select **Add to existing hunt (Preview)** to add the queries from the solution to an existing hunt.
-1. Alternatively, search for queries from these solutions in the Hunting **Queries** tab. Search by solution name, or filtering by **Source Name** of the solution.
+ :::image type="content" source="media/hunts/content-hub-solutions.png" alt-text="Screenshot shows the content hub in grid view with the Log4J and Apache solutions selected." lightbox="media/hunts/content-hub-solutions.png":::
+1. After the solution is installed, in Microsoft Sentinel, go to **Hunting**.
+1. Select the **Queries** tab.
+1. Search by solution name, or filter by the solution's **Source Name**.
+1. Select a query, and then select **Run query**.
### Hypothesis - Detection gaps The MITRE ATT&CK map helps you identify specific gaps in your detection coverage. Use predefined hunting queries for specific MITRE ATT&CK techniques as a starting point to develop new detection logic.
The MITRE ATT&CK map helps you identify specific gaps in your detection coverage
## Create a Hunt There are two primary ways to create a hunt.
-1. If you've started with a hypothesis where you've selected queries, select the **Hunt actions** drop down menu > **Create new hunt**. All the queries you selected are cloned for this new hunt.
+1. If you started with a hypothesis where you selected queries, select the **Hunt actions** drop down menu > **Create new hunt**. All the queries you selected are cloned for this new hunt.
:::image type="content" source="media/hunts/create-new-hunt.png" alt-text="Screenshot shows queries selected and the create new hunt menu option selected.":::
There are two primary ways to create a hunt.
1. Select **Create** to get started.
- :::image type="content" source="media/hunts/create-hunt-description.png" alt-text="Screenshot shows the hunt creation page with Hunt name, description, owner, status and hypothesis state.":::
+ :::image type="content" source="media/hunts/create-hunt-description.png" alt-text="Screenshot shows the hunt creation page with Hunt name, description, owner, status, and hypothesis state.":::
## View hunt details
There are two primary ways to create a hunt.
:::image type="content" source="media/hunts/view-hunt-details.png" alt-text="Screenshot showing the hunt details." lightbox="media/hunts/view-hunt-details.png"::: ### Queries tab
-The **Queries** tab contains hunting queries specific to this hunt. These queries are clones of the originals, independent from all others in the workspace and can be updated or deleted without impacting your overall set of hunting queries or queries in other hunts.
+The **Queries** tab contains hunting queries specific to this hunt. These queries are clones of the originals, independent from all others in the workspace. Update or delete them without impacting your overall set of hunting queries or queries in other hunts.
#### Add a query to the hunt 1. Select **Query Actions** > **add queries to hunt**
This feature allows you to see hunting query results in the Log Analytics search
1. These LA query tabs are lost if you close the browser tab. If you want to persist the queries long term, you need to save the query, create a new hunting query, or [copy it into a comment](#add-comments) for later use within the hunt. ## Add a bookmark+ When you find interesting results or important rows of data, add those results to the hunt by creating a bookmark. For more information, see [Use hunting bookmarks for data investigations](bookmarks.md).
-1. Select the desired row or rows. Select the Add bookmark action, right above the Results table.
+1. Select the desired row or rows.
+1. Above the results table, select **Add bookmark**.
:::image type="content" source="media/hunts/add-bookmark.png" alt-text="Screenshot showing add bookmark pane with optional fields filled in." lightbox="media/hunts/add-bookmark.png":::
- Optional steps:
-1. Name the bookmark(s),
-1. Set the event time column
-1. Map entity identifiers
-1. Set MITRE tactics and techniques
-1. Add tags, and add notes.
+1. Name the bookmark.
+1. Set the event time column.
+1. Map entity identifiers.
+1. Set MITRE tactics and techniques.
+1. Add tags, and add notes.
The bookmarks preserve the specific row results, KQL query, and time range that generated the result.
When you find interesting results or important rows of data, add those results t
## View bookmarks
-1. Navigate to your hunt's bookmark tab to view your bookmarks with previously created details.
+
+1. Navigate to the hunt's bookmark tab to view your bookmarks.
:::image type="content" source="media/hunts/view-bookmark.png" alt-text="Screenshot showing a bookmark with all its details and the hunts action menu open." lightbox="media/hunts/view-bookmark.png":::
When you find interesting results or important rows of data, add those results t
- Select the **Edit** button to update the tags, MITRE tactics and techniques, and notes. ## Interact with entities+ 1. Navigate to your hunt's **Entities** tab to view, search, and filter the entities contained in your hunt. This list is generated from the list of entities in the bookmarks. The Entities tab automatically resolves duplicated entries. 1. Select entity names to visit the corresponding UEBA entity page. 1. Right-click on the entity to take actions appropriate to the entity types, such as adding an IP address to TI or running an entity type specific playbook.
Comments are an excellent place to collaborate with colleagues, preserve notes,
## Create incidents+ There are two choices for incident creation while hunting. Option 1: Use bookmarks.
Option 2: Use the hunts **Actions**.
:::image type="content" source="media/hunts/create-incident-actions-menu.png" alt-text="Screenshot showing hunts actions menu from the bookmarks window.":::
-1. During the **Add bookmarks** step, use the **Add bookmark** action to choose bookmarks from the hunt to add to the incident. You're limited to bookmarks that haven't already been assigned to an incident.
+1. During the **Add bookmarks** step, use the **Add bookmark** action to choose bookmarks from the hunt to add to the incident. You're limited to bookmarks that aren't assigned to an incident.
1. After the incident is created, it will be linked under the **Related incidents** list for that hunt. ## Update status
-1. When you have captured enough evidence to validate or invalidate your hypothesis, update your hypothesis state.
+
+1. After you capture enough evidence to validate or invalidate your hypothesis, update the hypothesis state.
:::image type="content" source="media/hunts/set-hypothesis.png" alt-text="Screenshot shows hypothesis state menu selection.":::
Option 2: Use the hunts **Actions**.
These status updates are visible on the main Hunting page and are used to [track metrics](#track-metrics). ## Track metrics+ Track tangible results from hunting activity using the metrics bar in the **Hunts** tab. Metrics show the number of validated hypotheses, new incidents created, and new analytic rules created. Use these results to set goals or celebrate milestones of your hunting program.
-
## Next steps
-In this article you learned how to run a hunting investigation with the hunts feature in Microsoft Sentinel.
+
+In this article, you learned how to run a hunting investigation with the hunts feature in Microsoft Sentinel.
For more information, see: - [Hunt for threats with Microsoft Sentinel](hunting.md)
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Title: Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel | Microsoft Docs
+ Title: Advanced threat detection with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel | Microsoft Docs
description: Create behavioral baselines for entities (users, hostnames, IP addresses) and use them to detect anomalous behavior and identify zero-day advanced persistent threats (APT). Previously updated : 08/08/2022 Last updated : 03/19/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
-# Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel
+# Advanced threat detection with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] -
-Identifying threats inside your organization and their potential impact - whether a compromised entity or a malicious insider - has always been a time-consuming and labor-intensive process. Sifting through alerts, connecting the dots, and active hunting all add up to massive amounts of time and effort expended with minimal returns, and the possibility of sophisticated threats simply evading discovery. Particularly elusive threats like zero-day, targeted, and advanced persistent threats can be the most dangerous to your organization, making their detection all the more critical.
+Identifying threats inside your organization and their potential impact&mdash;whether a compromised entity or a malicious insider&mdash;has always been a time-consuming and labor-intensive process. Sifting through alerts, connecting the dots, and active hunting all add up to massive amounts of time and effort expended with minimal returns, and the possibility of sophisticated threats simply evading discovery. Particularly elusive threats like zero-day, targeted, and advanced persistent threats can be the most dangerous to your organization, making their detection all the more critical.
The UEBA capability in Microsoft Sentinel eliminates the drudgery from your analysts' workloads and the uncertainty from their efforts, and delivers high-fidelity, actionable intelligence, so they can focus on investigation and remediation. +
+All the benefits of UEBA are available in the unified security operations platform in the Microsoft Defender portal.
+ ## What is User and Entity Behavior Analytics (UEBA)? As Microsoft Sentinel collects logs and alerts from all of its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as users, hosts, IP addresses, and applications) across time and peer group horizon. Using a variety of techniques and machine learning capabilities, Microsoft Sentinel can then identify anomalous activity and help you determine if an asset has been compromised. Not only that, but it can also figure out the relative sensitivity of particular assets, identify peer groups of assets, and evaluate the potential impact of any given compromised asset (its "blast radius"). Armed with this information, you can effectively prioritize your investigation and incident handling.
Microsoft Sentinel presents artifacts that help your security analysts get a cle
- as compared to organization's behavior. :::image type="content" source="media/identify-threats-with-entity-behavior-analytics/context.png" alt-text="Entity context":::
-The user entity information that Microsoft Sentinel uses to build its user profiles comes from your Microsoft Entra ID (and/or your on-premises Active Directory, now in Preview). When you enable UEBA, it synchronizes your Microsoft Entra ID with Microsoft Sentinel, storing the information in an internal database visible through the *IdentityInfo* table in Log Analytics.
+The user entity information that Microsoft Sentinel uses to build its user profiles comes from your Microsoft Entra ID (and/or your on-premises Active Directory, now in Preview). When you enable UEBA, it synchronizes your Microsoft Entra ID with Microsoft Sentinel, storing the information in an internal database visible through the *IdentityInfo* table.
+
+- In Microsoft Sentinel in the Azure portal, you query the *IdentityInfo* table in Log Analytics on the **Logs** page.
+- In the unified security operations platform in Microsoft Defender, you query this table in **Advanced hunting**.
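However you reach it, the table is queried with standard KQL. Here's a minimal sketch, assuming UEBA is enabled so that the *IdentityInfo* table is populated; the column selection is illustrative:

```kusto
// Inspect the synchronized user profile data, keeping only the latest
// record per user.
IdentityInfo
| summarize arg_max(TimeGenerated, *) by AccountUPN
| project AccountUPN, AccountDisplayName, Department, IsAccountEnabled
```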
Now in preview, you can also sync your on-premises Active Directory user entity information, using Microsoft Defender for Identity.
Learn more about [entities in Microsoft Sentinel](entities.md) and see the full
### Entity pages
-Information about **entity pages** can now be found at [Investigate entities with entity pages in Microsoft Sentinel](entity-pages.md).
+Information about **entity pages** can now be found at [Entity pages in Microsoft Sentinel](entity-pages.md).
## Querying behavior analytics data
BehaviorAnalytics
| where ActivityInsights.CountryUncommonlyConnectedFromAmongPeers == True ```
+- In Microsoft Sentinel in the Azure portal, you query the *BehaviorAnalytics* table in Log Analytics on the **Logs** page.
+- In the unified security operations platform in Microsoft Defender, you query this table in **Advanced hunting**.
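Building on the query above, the following sketch enriches the anomalous activity with user details from the *IdentityInfo* table. It assumes UEBA is enabled so that both tables are populated; the projected columns are illustrative.

```kusto
// Correlate peer-anomalous sign-in activity with user profile details.
BehaviorAnalytics
| where ActivityInsights.CountryUncommonlyConnectedFromAmongPeers == true
| join kind=inner (
    IdentityInfo
    | summarize arg_max(TimeGenerated, *) by AccountUPN
) on $left.UserPrincipalName == $right.AccountUPN
| project TimeGenerated, UserPrincipalName, ActivityType, InvestigationPriority, Department
```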
+ ### User peers metadata - table and notebook User peers' metadata provides important context in threat detections, in investigating an incident, and in hunting for a potential threat. Security analysts can observe the normal activities of a user's peers to determine if the user's activities are unusual as compared to those of his or her peers.
sentinel Incident Investigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/incident-investigation.md
The **Entities tab** contains a list of all the entities in the incident. When a
- **Timeline** contains a list of the alerts that feature this entity and activities the entity has done, as collected from logs in which the entity appears. - **Insights** contains answers to questions about the entity relating to its behavior in comparison to its peers and its own history, its presence on watchlists or in threat intelligence, or any other sort of unusual occurrence relating to it. These answers are the results of queries defined by Microsoft security researchers that provide valuable and contextual security information on entities, based on data from a collection of sources.
- As of November 2023, the **Insights** panel includes the next generation of insights, available in **PREVIEW**, in the form of [enrichment widgets](whats-new.md#visualize-data-with-enrichment-widgets-preview), alongside the existing insights. To take advantage of these new widgets, you must [enable the widget experience](enable-enrichment-widgets.md).
+ As of November 2023, the **Insights** panel includes the next generation of insights, available in **PREVIEW**, in the form of enrichment widgets, alongside the existing insights. To take advantage of these new widgets, you must [enable the widget experience](enable-enrichment-widgets.md).
Depending on the entity type, you can take a number of further actions from this side panel: - Pivot to the entity's full [entity page](entity-pages.md) to get even more details over a longer timespan or launch the graphical investigation tool centered on that entity.
sentinel Indicators Bulk File Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/indicators-bulk-file-import.md
Title: Add indicators in bulk to threat intelligence by file
-description: Learn how to bulk add indicators to threat intelligence from flat files in Microsoft Sentinel.
+description: Learn how to bulk add indicators to threat intelligence from flat files like .csv or .json in Microsoft Sentinel.
- Previously updated : 07/26/2022- Last updated : 3/14/2024+
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ #Customer intent: As a security analyst, I want to bulk import indicators from common file types to my threat intelligence (TI), so I can more effectively share TI during an investigation.
In this how-to guide, you'll add indicators from a CSV or JSON file into Microso
> [!IMPORTANT] > This feature is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
## Prerequisites
-You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
+- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
+ ## Select an import template for your indicators Add multiple indicators to your threat intelligence with a specially crafted CSV or JSON file. Download the file templates to get familiar with the fields and how they map to the data you have. Review the required fields for each template type to validate your data before importing.
-1. From the [Azure portal](https://portal.azure.com), go to **Microsoft Sentinel**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
-1. Select the workspace you want to import threat indicators into.
+1. Select **Import** > **Import using a file**.
-1. Go to **Threat Intelligence** under the **Threat Management** heading.
+ #### [Azure portal](#tab/azure-portal)
:::image type="content" source="media/indicators-bulk-file-import/import-using-file-menu-fixed.png" alt-text="Screenshot of the menu options to import indicators using a file menu." lightbox="media/indicators-bulk-file-import/import-using-file-menu-fixed.png":::-
-1. Select **Import** > **Import using a file**.
+
+ #### [Defender portal](#tab/defender-portal)
+ :::image type="content" source="media/indicators-bulk-file-import/import-using-file-menu-defender-portal.png" alt-text="Screenshot of the menu options to import indicators using a file menu from the Defender portal." lightbox="media/indicators-bulk-file-import/import-using-file-menu-defender-portal.png":::
+
1. Choose CSV or JSON from the **File Format** drop down menu.
- :::image type="content" source="media/indicators-bulk-file-import/format-select-and-download.png" alt-text="Screenshot of the menu flyout to upload a CSV or JSON file, choose a template to download, and specify a source highlighting the file format selection.":::
+ :::image type="content" source="media/indicators-bulk-file-import/format-select-and-download.png" alt-text="Screenshot of the menu flyout to upload a CSV or JSON file, choose a template to download, and specify a source.":::
1. Select the **Download template** link once you've chosen a bulk upload template.
The templates provide all the fields you need to create a single valid indicator
## Upload the indicator file
-1. Change the file name from the template default, but keep the file extension as .csv or .json. When you create a unique file name, it will be easier to monitor your imports from the **Manage file imports** pane.
+1. Change the file name from the template default, but keep the file extension as .csv or .json. When you create a unique file name, it's easier to monitor your imports from the **Manage file imports** pane.
1. Drag your indicators file to the **Upload a file** section or browse for the file using the link.
-1. Enter a source for the indicators in the **Source** text box. This value will be stamped on all the indicators included in that file. You can view this property as the **SourceSystem** field. The source will also be displayed in the **Manage file imports** pane. Learn more about how to view indicator properties here: [Work with threat indicators](work-with-threat-indicators.md#find-and-view-your-indicators-in-logs).
+1. Enter a source for the indicators in the **Source** text box. This value is stamped on all the indicators included in that file. You can view this property as the **SourceSystem** field. The source is also displayed in the **Manage file imports** pane. Learn more about how to view indicator properties here: [Work with threat indicators](work-with-threat-indicators.md#find-and-view-your-indicators-in-logs).
1. Choose how you want Microsoft Sentinel to handle invalid indicator entries by selecting one of the radio buttons at the bottom of the **Import using a file** pane. - Import only the valid indicators and leave aside any invalid indicators from the file.
Monitor your imports and view error reports for partially imported or failed imp
:::image type="content" source="media/indicators-bulk-file-import/manage-file-imports.png" alt-text="Screenshot of the menu option to manage file imports.":::
-1. Review the status of imported files and the number of invalid indicator entries.The valid/invalid indicator count is updated once the file is processed. Please wait for the import to complete to get the updated count of valid/invalid indicators.
+1. Review the status of imported files and the number of invalid indicator entries. The valid indicator count is updated once the file is processed. Wait for the import to complete to get the updated count of valid indicators.
:::image type="content" source="media/indicators-bulk-file-import/manage-file-imports-pane.png" alt-text="Screenshot of the manage file imports pane with example ingestion data. The columns show sorted by imported number with various sources.":::
Monitor your imports and view error reports for partially imported or failed imp
1. Select the preview of the error file or download the error file containing the errors about invalid indicators.
-Microsoft Sentinel maintains the status of the file import for 30 days. The actual file and the associated error file are maintained in the system for 24 hours. After 24 hours the file and the error file are deleted, and the ingested indicators will continue to show in the Threat Intelligence menu.
+Microsoft Sentinel maintains the status of the file import for 30 days. The actual file and the associated error file are maintained in the system for 24 hours. After 24 hours the file and the error file are deleted, but any ingested indicators continue to show in Threat Intelligence.
## Understand the import templates
-Review each template to ensure your indicators are imported successfully. If this is your first import, be sure to reference the instructions in the template file and follow the supplemental guidance below.
+Review each template to ensure your indicators are imported successfully. Be sure to reference the instructions in the template file and the following supplemental guidance.
### CSV template structure
Review each template to ensure your indicators are imported successfully. If thi
The CSV template needs multiple columns to accommodate the file indicator type because file indicators can have multiple hash types like MD5, SHA256, and more. All other indicator types like IP addresses only require the observable type and the observable value.
-1. The column headings for the CSV **All other indicator types** template include fields such as `threatTypes`, single or multiple `tags`, `confidence`, and `tlpLevel`. TLP or Traffic Light Protocol is a sensitivity designation to help make decisions on threat intelligence sharing.
+1. The column headings for the CSV **All other indicator types** template include fields such as `threatTypes`, single or multiple `tags`, `confidence`, and `tlpLevel`. Traffic Light Protocol (TLP) is a sensitivity designation to help make decisions on threat intelligence sharing.
1. Only the `validFrom`, `observableType`, and `observableValue` fields are required.
Phishing,"demo, csv",MDTI article - Franken-Phish domainname,Entity appears in M
1. Remove the template comments before upload.
-1. Close the last indicator in the array using the "}" without a comma.
+1. Close the last indicator in the array using the `}` without a comma.
1. Keep in mind that the maximum file size for a JSON file import is 250 MB.
Here's an example ipv4-addr indicator using the JSON template.
] ```
-## Next steps
+## Related content
This article has shown you how to manually bolster your threat intelligence by importing indicators gathered in flat files. Check out these links to learn how indicators power other analytics in Microsoft Sentinel. - [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
sentinel Investigate Large Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-large-datasets.md
Title: Start an investigation by searching large datasets - Microsoft Sentinel
description: Learn about search jobs and restoring archived data in Microsoft Sentinel. Previously updated : 01/21/2022 Last updated : 03/03/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Start an investigation by searching for events in large datasets
One of the primary activities of a security team is to search logs for specific
In Microsoft Sentinel, you can search across long time periods in extremely large datasets by using a search job. While you can run a search job on any type of log, search jobs are ideally suited to search archived logs. If you need to do a full investigation on archived data, you can restore that data into the hot cache to run high performing queries and deeper analysis. ## Search large datasets
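Search jobs typically scan a single high-volume table with simple filters, since they support a limited subset of KQL operators. As a minimal sketch, assuming the `CommonSecurityLog` table and a hypothetical indicator (the time range is set when you configure the job):

```kusto
// Scan a long, possibly archived, time range for one indicator.
// The IP address here is illustrative only.
CommonSecurityLog
| where SourceIP == "203.0.113.4"
```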
sentinel Livestream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/livestream.md
Title: Use hunting Livestream in Microsoft Sentinel to detect threats | Microsoft Docs
-description: This article describes how to use hunting Livestream in Microsoft Sentinel to keep track of data.
-
+ Title: Detect threats by using hunting livestream in Microsoft Sentinel
+description: Learn how to use hunting livestream in Microsoft Sentinel to actively monitor a compromise event.
- Previously updated : 09/29/2022- Last updated : 03/12/2024+++
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
-# Use hunting livestream in Microsoft Sentinel to detect threats
+# Detect threats by using hunting livestream in Microsoft Sentinel
Use hunting livestream to create interactive sessions that let you test newly created queries as events occur, get notifications from the sessions when a match is found, and launch investigations if necessary. You can quickly create a livestream session using any Log Analytics query.
Use hunting livestream to create interactive sessions that let you test newly cr
- **Get notified when threats occur**
- You can compare threat data feeds to aggregated log data and be notified when a match occurs. Threat data feeds are ongoing streams of data that are related to potential or current threats, so the notification might indicate a potential threat to your organization. Create a livestream session instead of a custom alert rule when you want to be notified of a potential issue without the overheads of maintaining a custom alert rule.
+ You can compare threat data feeds to aggregated log data and be notified when a match occurs. Threat data feeds are ongoing streams of data that are related to potential or current threats, so the notification might indicate a potential threat to your organization. Create a livestream session instead of a custom alert rule to be notified of a potential issue without the overheads of maintaining a custom alert rule.
- **Launch investigations**
- If there is an active investigation that involves an asset such as a host or user, you can view specific (or any) activity in the log data as it occurs on that asset. You can be notified when that activity occurs.
+ If there's an active investigation that involves an asset such as a host or user, view specific (or any) activity in the log data as it occurs on that asset. Be notified when that activity occurs.
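For instance, a livestream session for that last scenario might watch a single asset with a query like the following sketch. The host name is hypothetical, and it assumes the `SecurityEvent` table is ingested.

```kusto
// Watch process creation on one asset as events arrive.
SecurityEvent
| where Computer == "contoso-web-01"   // hypothetical asset under investigation
| where EventID == 4688                // process creation
| project TimeGenerated, Computer, Account, NewProcessName, CommandLine
```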
## Create a livestream session You can create a livestream session from an existing hunting query, or create your session from scratch.
-1. In the Azure portal, navigate to **Sentinel** > **Threat management** > **Hunting**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Hunting**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Hunting**.
1. To create a livestream session from a hunting query:
You can create a livestream session from an existing hunting query, or create yo
1. To create a livestream session from scratch:
- 1. Select the **Livestream** tab
- 1. Click **+ New livestream**.
+ 1. Select the **Livestream** tab.
+ 1. Select **+ New livestream**.
1. On the **Livestream** pane: - If you started livestream from a query, review the query and make any changes you want to make. - If you started livestream from scratch, create your query.
- > [!NOTE]
- > Livestream supports **cross-resource queries** of data in Azure Data Explorer. [**Learn more about cross-resource queries**](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).
+ Livestream supports **cross-resource queries** of data in Azure Data Explorer. [**Learn more about cross-resource queries**](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).
1. Select **Play** from the command bar.
You can create a livestream session from an existing hunting query, or create yo
1. Select **Save** from the command bar.
- Unless you select **Pause**, the session continues to run until you are signed out from the Azure portal.
+ Unless you select **Pause**, the session continues to run until you're signed out from the Azure portal.
## View your livestream sessions
-1. In the Azure portal, navigate to **Sentinel** > **Threat management** > **Hunting** > **Livestream** tab.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Hunting**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Hunting**.
+
+1. Select the **Livestream** tab.
1. Select the livestream session you want to view or edit. For example:
Select the notification to open the **Livestream** pane.
## Elevate a livestream session to an alert
-You can promote a livestream session to a new alert by selecting **Elevate to alert** from the command bar on the relevant livestream session:
+Promote a livestream session to a new alert by selecting **Elevate to alert** from the command bar on the relevant livestream session:
> [!div class="mx-imgBorder"] > ![Elevate livestream session to an alert](./media/livestream/elevate-to-alert.png)
sentinel Map Data Fields To Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/map-data-fields-to-entities.md
Last updated 04/26/2022 +
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Map data fields to entities in Microsoft Sentinel
+Entity mapping is an integral part of the configuration of [scheduled query analytics rules](detect-threats-custom.md). It enriches the rules' output (alerts and incidents) with essential information that serves as the building blocks of any investigative processes and remedial actions that follow.
+
+The procedure detailed below is part of the analytics rule creation wizard. It's treated here independently to address the scenario of adding or changing entity mappings in an existing analytics rule.
+ > [!IMPORTANT] > > - See "[Notes on the new version](#notes-on-the-new-version)" at the end of this document for important information about backward compatibility and differences between the new and old versions of entity mapping.
+> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
+
+## How to map entities
-## Introduction
+1. Enter the **Analytics** page in the portal through which you access Microsoft Sentinel:
-Entity mapping is an integral part of the configuration of [scheduled query analytics rules](detect-threats-custom.md). It enriches the rules' output (alerts and incidents) with essential information that serves as the building blocks of any investigative processes and remedial actions that follow.
+ # [Azure portal](#tab/azure)
-The procedure detailed below is part of the analytics rule creation wizard. It's treated here independently to address the scenario of adding or changing entity mappings in an existing analytics rule.
+ From the **Configuration** section of the Microsoft Sentinel navigation menu, select **Analytics**.
-## How to map entities
+ # [Defender portal](#tab/defender)
+
+ From the Microsoft Defender navigation menu, expand **Microsoft Sentinel**, then **Configuration**. Select **Analytics**.
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
+
1. Select a scheduled query rule and select **Edit** from the details pane. Or create a new rule by selecting **Create > Scheduled query rule** at the top of the screen.
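The entities you map are drawn from the columns your rule query returns. As a minimal sketch (assuming the `SigninLogs` table and an illustrative threshold), a query like the following surfaces `UserPrincipalName` and `IPAddress` columns that you might then map to the Account and IP entity types in the wizard:

```kusto
// A scheduled-rule query whose result columns are candidates for entity mapping.
SigninLogs
| where ResultType != "0"   // failed sign-ins
| summarize FailedLogons = count() by UserPrincipalName, IPAddress
| where FailedLogons > 10   // illustrative threshold
```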
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
Other services whose alerts are collected by Microsoft Defender XDR include:
In addition to collecting alerts from these components and other services, Microsoft Defender XDR generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel.
-> [!IMPORTANT]
-> The Microsoft Defender XDR connector is now generally available!
- ## Common use cases and scenarios - One-click connect of Microsoft Defender XDR incidents, including all alerts and entities from Microsoft Defender XDR components, into Microsoft Sentinel.
Once the Microsoft Defender XDR integration is connected, the connectors for all
- To avoid creating duplicate incidents for the same alerts, we recommend that customers turn off all **Microsoft incident creation rules** for Microsoft Defender XDR-integrated products (Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud Apps, and Microsoft Entra ID Protection) when connecting Microsoft Defender XDR. This can be done by disabling incident creation in the connector page. Keep in mind that if you do this, any filters that were applied by the incident creation rules will not be applied to Microsoft Defender XDR incident integration.
+- If your workspace is onboarded to the [unified security operations platform](microsoft-sentinel-defender-portal.md), you *must* turn off all Microsoft incident creation rules, as they aren't supported. For more information, see [Automation with the unified security operations platform](automation.md#automation-with-the-unified-security-operations-platform).
+ ## Working with Microsoft Defender XDR incidents in Microsoft Sentinel and bi-directional sync Microsoft Defender XDR incidents will appear in the Microsoft Sentinel incidents queue with the product name **Microsoft Defender XDR**, and with similar details and functionality to any other Sentinel incidents. Each incident contains a link back to the parallel incident in the Microsoft Defender Portal.
sentinel Microsoft Sentinel Defender Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-sentinel-defender-portal.md
+
+ Title: Microsoft Sentinel in the Microsoft Defender portal
+description: Learn about changes in the Microsoft Defender portal with the integration of Microsoft Sentinel.
+++ Last updated : 04/03/2024
+appliesto:
+ - Microsoft Sentinel in the Microsoft Defender portal
+++
+# Microsoft Sentinel in the Microsoft Defender portal (preview)
+
+Microsoft Sentinel is available as part of the public preview for the unified security operations platform in the Microsoft Defender portal. For more information, see:
+
+- [Unified security operations platform with Microsoft Sentinel and Defender XDR](https://aka.ms/unified-soc-announcement)
+- [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard)
+
+ This article describes the Microsoft Sentinel experience in the Microsoft Defender portal.
+> [!IMPORTANT]
+> Information in this article relates to a prerelease product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+## New and improved capabilities
+
+The following table describes the new or improved capabilities available in the Defender portal with the integration of Microsoft Sentinel and Defender XDR.
+
+|Capabilities |Description |
+|||
+|Advanced hunting | Query from a single portal across different data sets to make hunting more efficient and remove the need for context-switching. View and query all data including data from Microsoft security services and Microsoft Sentinel. Use all your existing Microsoft Sentinel workspace content, including queries and functions.<br><br> For more information, see [Advanced hunting in the Microsoft Defender portal](https://go.microsoft.com/fwlink/p/?linkid=2264410).|
+|Attack disruption | Deploy automatic attack disruption for SAP with both the unified security operations platform and the Microsoft Sentinel solution for SAP applications. For example, contain compromised assets by locking suspicious SAP users in case of a financial process manipulation attack. <br><br>Attack disruption capabilities for SAP are available in the Defender portal only. To use attack disruption for SAP, update your data connector agent version and ensure that the relevant Azure role is assigned to your agent's identity. <br><br> For more information, see [Automatic attack disruption for SAP (Preview)](sap/deployment-attack-disrupt.md). |
+|Unified entities| Entity pages for devices, users, IP addresses, and Azure resources in the Defender portal display information from Microsoft Sentinel and Defender data sources. These entity pages give you an expanded context for your investigations of incidents and alerts in the Defender portal.<br><br>For more information, see [Investigate entities with entity pages in Microsoft Sentinel](/azure/sentinel/entity-pages).|
+|Unified incidents| Manage and investigate security incidents in a single location and from a single queue in the Defender portal. Incidents include:<br>- Data from the breadth of sources<br>- AI analytics tools of security information and event management (SIEM)<br>- Context and mitigation tools offered by extended detection and response (XDR) <br><br> For more information, see [Incident response in the Microsoft Defender portal](/microsoft-365/security/defender/incidents-overview).|
++
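For example, in advanced hunting you can query a Microsoft Sentinel workspace table with the same KQL you use against Defender XDR tables. Here's a minimal sketch, assuming the `SigninLogs` table is ingested into the connected workspace; the threshold is illustrative:

```kusto
// Hunt across Microsoft Sentinel data from the unified advanced hunting experience.
SigninLogs
| where TimeGenerated > ago(1d) and ResultType != "0"
| summarize FailedCount = count() by UserPrincipalName
| top 10 by FailedCount
```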
+## Capability differences between portals
+
+Most Microsoft Sentinel capabilities are available in both the Azure and Defender portals. In the Defender portal, some Microsoft Sentinel experiences open out to the Azure portal for you to complete a task.
+
+This section covers the Microsoft Sentinel capabilities or integrations in the unified security operations platform that are only available in either the Azure portal or Defender portal. It excludes the Microsoft Sentinel experiences that open the Azure portal from the Defender portal.
+
+### Defender portal only
+
+The following capabilities are only available in the Defender portal.
+
+|Capability |Learn more |
+|||
+|Attack disruption for SAP | [Automatic attack disruption in the Microsoft Defender portal](/microsoft-365/security/defender/automatic-attack-disruption) |
+
+### Azure portal only
+
+The following capabilities are only available in the Azure portal.
+
+|Capability |Learn more |
+|||
+|Tasks | [Use tasks to manage incidents in Microsoft Sentinel](incident-tasks.md) |
+|Add entities to threat intelligence from incidents | [Add entity to threat indicators](add-entity-to-threat-intelligence.md) |
+| Automation | Some automation procedures are available only in the Azure portal. <br><br>Other automation procedures are the same in the Defender and Azure portals, but differ in the Azure portal between workspaces that are onboarded to the unified security operations platform and workspaces that aren't. <br><br>For more information, see [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](https://aka.ms/unified-soc-automation-lims). |
+
+## Quick reference
+
+Some Microsoft Sentinel capabilities, like the unified incident queue, are integrated with Microsoft Defender XDR in the unified security operations platform. Many other Microsoft Sentinel capabilities are available in the **Microsoft Sentinel** section of the Defender portal.
+
+The following image shows the **Microsoft Sentinel** menu in the Defender portal:
++
+The following sections describe where to find Microsoft Sentinel features in the Defender portal. The sections are organized to match the Microsoft Sentinel navigation in the Azure portal.
+
+### General
+
+The following table lists the changes in navigation between the Azure and Defender portals for the **General** section in the Azure portal.
+
+|Azure portal |Defender portal |
+|||
+|Overview | Overview |
+|Logs | Investigation & response > Hunting > Advanced hunting |
+|News & guides | Not available |
+|Search | Microsoft Sentinel > Search |
++
+### Threat management
+
+The following table lists the changes in navigation between the Azure and Defender portals for the **Threat management** section in the Azure portal.
+
+|Azure portal |Defender portal |
+|||
+|Incidents | Investigation & response > Incidents & alerts > Incidents |
|Workbooks | Microsoft Sentinel > Threat management > Workbooks |
+|Hunting | Microsoft Sentinel > Threat management > Hunting |
+|Notebooks | Microsoft Sentinel > Threat management > Notebooks |
+|Entity behavior | *User entity page:* Assets > Identities > *{user}* > Sentinel events<br>*Device entity page:* Assets > Devices > *{device}* > Sentinel events<br><br>Also, find the entity pages for the user, device, IP, and Azure resource entity types from incidents and alerts as they appear. |
+|Threat intelligence | Microsoft Sentinel > Threat management > Threat intelligence |
+|MITRE ATT&CK|Microsoft Sentinel > Threat management > MITRE ATT&CK |
++
+### Content management
+
+The following table lists the changes in navigation between the Azure and Defender portals for the **Content management** section in the Azure portal.
+
+|Azure portal |Defender portal |
+|||
+|Content hub | Microsoft Sentinel > Content management > Content hub |
+|Repositories | Microsoft Sentinel > Content management > Repositories |
+|Community | Not available |
+
+### Configuration
+
+The following table lists the changes in navigation between the Azure and Defender portals for the **Configuration** section in the Azure portal.
+
+|Azure portal |Defender portal |
+|||
+|Workspace manager | Not available |
+|Data connectors | Microsoft Sentinel > Configuration > Data connectors |
+|Analytics | Microsoft Sentinel > Configuration > Analytics |
+|Watchlists | Microsoft Sentinel > Configuration > Watchlists |
+|Automation | Microsoft Sentinel > Configuration > Automation |
+|Settings | System > Settings > Microsoft Sentinel |
+
+## Related content
+
+- [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard)
+- [Microsoft Defender XDR documentation](/microsoft-365/security/defender)
sentinel Migrate Playbooks To Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migrate-playbooks-to-automation-rules.md
Title: Migrate your Microsoft Sentinel alert-trigger playbooks to automation rules | Microsoft Docs description: This article explains how (and why) to take your existing playbooks built on the alert trigger and migrate them from being invoked by analytics rules to being invoked by automation rules.-- Previously updated : 05/09/2023++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++ # Migrate your Microsoft Sentinel alert-trigger playbooks to automation rules This article explains how (and why) to take your existing playbooks built on the alert trigger and migrate them from being invoked by **analytics rules** to being invoked by **automation rules**. + ## Why migrate If you have already created and built playbooks to respond to alerts (rather than incidents), and attached them to analytics rules, we strongly encourage you to move these playbooks to automation rules. Doing so will give you the following advantages:
Finally, the ability to invoke playbooks from analytics rules will be **deprecat
### Create an automation rule from an analytics rule
-1. From the main navigation menu, select **Analytics**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Analytics** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Analytics**.
1. Under **Active rules**, find an analytics rule already configured to run a playbook.
Finally, the ability to invoke playbooks from analytics rules will be **deprecat
:::image type="content" source="media/migrate-playbooks-to-automation-rules/select-playbook.png" alt-text="Screenshot of selecting playbook as action in automation rule wizard.":::
-1. Click **Apply**. You will now see the new rule in the automation rules grid.
+1. Select **Apply**. The new rule now appears in the automation rules grid.
1. Remove the playbook from the **Alert automation (classic)** section.
Finally, the ability to invoke playbooks from analytics rules will be **deprecat
### Create a new automation rule from the Automation portal
-1. From the main navigation menu, select **Automation**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
1. From the top menu bar, select **Create -> Automation rule**.
sentinel Monitor Analytics Rule Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-analytics-rule-integrity.md
Here are some sample queries to help you get started:
| where Status != "Success" ``` -- Find rules that have been "[auto-disabled](detect-threats-custom.md#issue-a-scheduled-rule-failed-to-execute-or-appears-with-auto-disabled-added-to-the-name)":
+- Find rules that have been "[auto-disabled](troubleshoot-analytics-rules.md#issue-a-scheduled-rule-failed-to-execute-or-appears-with-auto-disabled-added-to-the-name)":
```kusto _SentinelHealth()
sentinel Monitor Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-your-data.md
Title: Visualize your data using workbooks in Microsoft Sentinel | Microsoft Doc
description: Learn how to visualize your data using workbooks in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 03/07/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Visualize and monitor your data by using workbooks in Microsoft Sentinel
-After you have connected your data sources to Microsoft Sentinel, visualize and monitor the data using workbooks in Microsoft Sentinel. Microsoft Sentinel allows you to create custom workbooks across your data or, use existing workbook templates available with packaged solutions or as standalone content from the content hub. These templates allow you to quickly gain insights across your data as soon as you connect a data source.
+After you connect your data sources to Microsoft Sentinel, visualize and monitor the data using workbooks in Microsoft Sentinel. Microsoft Sentinel allows you to create custom workbooks across your data, or use existing workbook templates available with packaged solutions or as standalone content from the content hub. These templates allow you to quickly gain insights across your data as soon as you connect a data source.
-This article describes how to visualize your data in Microsoft Sentinel.
+This article describes how to visualize your data in Microsoft Sentinel by using workbooks.
-> [!div class="checklist"]
-> * Use workbook templates
-> * Create new workbooks
## Prerequisites
This article describes how to visualize your data in Microsoft Sentinel.
The workbooks that you see in Microsoft Sentinel are saved within the Microsoft Sentinel workspace's resource group and are tagged by the workspace in which they were created. - To use a workbook template, install the solution that contains the workbook or install the workbook as a standalone item from the **Content Hub**. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
-## Use a workbook template
+## Create a workbook from a template
+
+Use a template installed from the content hub to create a workbook.
+
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Workbooks**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Workbooks**.
1. Go to **Workbooks** and then select **Templates** to see the list of workbook templates installed.
- To see which are relevant to the data types you have connected, the **Required data types** field in each workbook lists the data type next to a green check mark if you already stream relevant data to Microsoft Sentinel.
+ To see which templates are relevant to the data types you connected, review the **Required data types** field in each workbook template, where available.
- [ ![Go to workbooks.](media/tutorial-monitor-data/access-workbooks.png) ](media/tutorial-monitor-data/access-workbooks.png#lightbox)
+ #### [Azure portal](#tab/azure-portal)
+ :::image type="content" source="media/monitor-your-data/workbook-template-azure-portal.png" alt-text="Screenshot of a workbook template with required data types shown in the details pane." lightbox="media/monitor-your-data/workbook-template-azure-portal.png":::
-1. Select **View template** to see the template populated with your data.
+ #### [Defender portal](#tab/defender-portal)
+ :::image type="content" source="media/monitor-your-data/workbook-template-defender-portal.png" alt-text="Screenshot of a workbook template in the Defender portal that shows the required data types." lightbox="media/monitor-your-data/workbook-template-defender-portal.png":::
-1. To edit the workbook, select **Save**, and then select the location where you want to save the JSON file for the template.
+1. Select **Save** from the template details pane, and then select the location where you want to save the JSON file for the template. This action creates an Azure resource based on the template and saves the workbook's JSON file, not the data.
- > [!NOTE]
- > This creates an Azure resource based on the relevant template and saves the JSON file of the workbook and not the data.
+1. Select **View saved workbook** from the template details pane.
+1. Select the **Edit** button in the workbook toolbar to customize the workbook according to your needs.
-1. Select **View saved workbook**.
+ [ ![Screenshot that shows the saved workbook.](media/monitor-your-data/workbook-graph.png) ](media/monitor-your-data/workbook-graph.png#lightbox)
- [ ![View workbooks.](media/tutorial-monitor-data/workbook-graph.png) ](media/tutorial-monitor-data/workbook-graph.png#lightbox)
+ To clone your workbook, select **Edit** and then **Save as**. Save the clone with another name, under the same subscription and resource group. Cloned workbooks are displayed under the **My workbooks** tab.
- Select the **Edit** button in the workbook toolbar to customize the workbook according to your needs. When you're done, select **Save** to save your changes.
+1. When you're done, select **Save** to save your changes.
- For more information, see how to [Create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md).
+For more information, see how to [Create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md).
-> [!TIP]
-> To clone your workbook, select **Edit** and then **Save as**, making sure to save it with another name, under the same subscription and resource group.
-> Cloned workbooks are displayed under the **My workbooks** tab.
->
## Create new workbook
-1. Go to **Workbooks** and then select **Add workbook** to create a new workbook from scratch.
-
- [ ![New workbook.](media/tutorial-monitor-data/create-workbook.png) ](media/tutorial-monitor-data/create-workbook.png#lightbox)
+Create a workbook from scratch in Microsoft Sentinel.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Workbooks**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Workbooks**.
+1. Select **Add workbook**.
1. To edit the workbook, select **Edit**, and then add text, queries, and parameters as necessary. For more information on how to customize the workbook, see how to [Create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md).
-1. When building a query, make sure the **Data source** is set to **Logs** and **Resource type** is set to **Log Analytics**, and then choose the relevant workspace(s).
+ [ ![Screenshot that shows a new workbook.](media/monitor-your-data/create-workbook.png) ](media/monitor-your-data/create-workbook.png#lightbox)
- > [!IMPORTANT]
- >
- > We recommend that your query uses an [Advanced Security Information Model (ASIM) parser](normalization-about-parsers.md) and not a built-in table. This ensures that the query will support any current or future relevant data source rather than a single data source.
- >
-
-1. After you create your workbook, save the workbook, making sure you save it under the subscription and resource group of your Microsoft Sentinel workspace.
+1. When building a query, set the **Data source** to **Logs** and **Resource type** to **Log Analytics**, and then choose one or more workspaces.
+
+ We recommend that your query uses an [Advanced Security Information Model (ASIM) parser](normalization-about-parsers.md) and not a built-in table. The query will then support any current or future relevant data source rather than a single data source.
+
+1. After you create your workbook, save the workbook under the subscription and resource group of your Microsoft Sentinel workspace.
1. If you want to let others in your organization use the workbook, under **Save to** select **Shared reports**. If you want this workbook to be available only to you, select **My reports**.
-1. To switch between workbooks in your workspace, select **Open** ![Icon for opening a workbook.](./media/tutorial-monitor-data/switch.png) in the toolbar of any workbook. The screen switches to a list of other workbooks you can switch to.
+1. To switch between workbooks in your workspace, select **Open** ![Icon for opening a workbook.](./media/monitor-your-data/switch.png) in the toolbar of any workbook. The screen switches to a list of other workbooks you can switch to.
Select the workbook you want to open:
- [ ![Switch workbooks.](media/tutorial-monitor-data/switch-workbooks.png) ](media/tutorial-monitor-data/switch-workbooks.png#lightbox)
+ [ ![Switch workbooks.](media/monitor-your-data/switch-workbooks.png) ](media/monitor-your-data/switch-workbooks.png#lightbox)
## Refresh your workbook data
Refresh your workbook to display updated data. In the toolbar, select one of the
- Auto refresh intervals are also restarted if you manually refresh your data.
- > [!TIP]
- > By default, auto refresh is turned off. To optimize performance, auto refresh is also turned off each time you close a workbook, and does not run in the background. Turn auto refresh back on as needed the next time you open the workbook.
- >
+ By default, auto refresh is turned off. To optimize performance, auto refresh is turned off each time you close a workbook. It doesn't run in the background. Turn auto refresh back on as needed the next time you open the workbook.
## Print a workbook or save as PDF
To print a workbook, or save it as a PDF, use the options menu to the right of t
1. Select options > :::image type="icon" source="media/monitor-your-data/print-icon.png" border="false"::: **Print content**. 2. In the print screen, adjust your print settings as needed or select **Save as PDF** to save it locally.
-For example:
-
-[ ![Print your workbook or save as PDF.](media/monitor-your-data/print-workbook.png) ](media/monitor-your-data/print-workbook.png#lightbox)
+ For example:
+ :::image type="content" source="media/monitor-your-data/print-workbook.png" alt-text="Screenshot that shows how to print your workbook or save as PDF." :::
## How to delete workbooks
-To delete a saved workbook (either a saved template or a customized workbook), in the Workbooks page, select the saved workbook that you want to delete and select **Delete**. This action removes the saved workbook.
-
-> [!NOTE]
-> This removes the workbook resource as well as any changes you made to the template. The original template will remain available.
-
-## Next steps
-
-In this article, you learned how to visualize your data by using workbooks in Microsoft Sentinel.
+To delete a saved workbook, either a saved template or a customized workbook, select the saved workbook that you want to delete and select **Delete**. This action removes the workbook resource, together with any changes you made to the template. The original template remains available.
-To learn how to automate your responses to threats, see [Set up automated threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
+## Related articles
To learn about popular built-in workbooks, see [Commonly used Microsoft Sentinel workbooks](top-workbooks.md).
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md
Microsoft Sentinel's [near-real-time analytics rules](detect-threats-built-in.
NRT rules are hard-coded to run once every minute and capture events ingested in the preceding minute, so that they can supply you with information that's as up to the minute as possible.
-Unlike regular scheduled rules that run on a built-in five-minute delay to account for ingestion time lag, NRT rules run on just a two-minute delay, solving the ingestion delay problem by querying on events' ingestion time instead of their generation time at the source (the TimeGenerated field). This results in improvements of both frequency and accuracy in your detections. (To understand this issue more completely, see [Query scheduling and alert threshold](detect-threats-custom.md#query-scheduling-and-alert-threshold) and [Handle ingestion delay in scheduled analytics rules](ingestion-delay.md).)
+Unlike regular scheduled rules that run on a built-in five-minute delay to account for ingestion time lag, NRT rules run on just a two-minute delay, solving the ingestion delay problem by querying on events' ingestion time instead of their generation time at the source (the TimeGenerated field). This results in improvements of both frequency and accuracy in your detections. (To understand this issue more completely, see [Query scheduling and alert threshold](detect-threats-custom.md#schedule-and-scope-the-query) and [Handle ingestion delay in scheduled analytics rules](ingestion-delay.md).)
NRT rules have many of the same features and capabilities as scheduled analytics rules. The full set of alert enrichment capabilities is available: you can map entities and surface custom details, and you can configure dynamic content for alert details. You can choose how alerts are grouped into incidents, you can temporarily suppress the running of a query after it generates a result, and you can define automation rules and playbooks to run in response to alerts and incidents generated from the rule.
sentinel Notebook Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebook-get-started.md
description: Walk through the Getting Started Guide For Microsoft Sentinel ML No
Previously updated : 01/09/2023 Last updated : 03/08/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel
MSTICPy reduces the amount of code that customers need to write for Microsoft Se
- Visualization tools using event timelines, process trees, and geo mapping. - Advanced analyses, such as time series decomposition, anomaly detection, and clustering.
-The steps in this article describe how to run the **Getting Started Guide for Microsoft Sentinel ML Notebooks** notebook in your Azure ML workspace via Microsoft Sentinel. You can also use this article as guidance for performing similar steps to run notebooks in other environments, including locally.
+The steps in this article describe how to run the **Getting Started Guide for Microsoft Sentinel ML Notebooks** notebook in your Azure Machine Learning workspace via Microsoft Sentinel. You can also use this article as guidance for performing similar steps to run notebooks in other environments, including locally.
For more information, see [Use notebooks to power investigations](hunting.md#use-notebooks-to-power-investigations) and [Use Jupyter notebooks to hunt for security threats](notebooks.md).
-> [!NOTE]
-> Several Microsoft Sentinel notebooks do not use MSTICPy, such as the **Credential Scanner** notebooks, or the PowerShell and C# examples. Notebooks that do not use MSTICpy do not need the MSTICPy configuration described in this article.
->
+Several Microsoft Sentinel notebooks don't use MSTICPy, such as the **Credential Scanner** notebooks, or the PowerShell and C# examples. Notebooks that don't use MSTICPy don't need the MSTICPy configuration described in this article.
+ ## Prerequisites
+Before you begin, make sure you have the required permissions and resources.
+ - To use notebooks in Microsoft Sentinel, make sure that you have the required permissions. For more information, see [Manage access to Microsoft Sentinel notebooks](notebooks.md#manage-access-to-microsoft-sentinel-notebooks). -- To perform the steps in this article, you'll need Python 3.6 or later. In Azure ML you can use either a Python 3.8 kernel (recommended) or a Python 3.6 kernel.
+- To perform the steps in this article, you need Python 3.6 or later. In Azure Machine Learning, you can use either a Python 3.8 kernel (recommended) or a Python 3.6 kernel.
-- This notebook uses the [MaxMind GeoLite2](https://www.maxmind.com) geolocation lookup service for IP addresses. To use the MaxMind GeoLite2 service, you'll need an account key. You can sign up for a free account and key at the [Maxmind signup page](https://www.maxmind.com/en/geolite2/signup).
+- This notebook uses the [MaxMind GeoLite2](https://www.maxmind.com) geolocation lookup service for IP addresses. To use the MaxMind GeoLite2 service, you need an account key. You can sign up for a free account and key at the [Maxmind signup page](https://www.maxmind.com/en/geolite2/signup).
-- This notebook uses [VirusTotal](https://www.virustotal.com) (VT) as a threat intelligence source. To use VirusTotal threat intelligence lookup, you'll need a VirusTotal account and API key.
+- This notebook uses [VirusTotal](https://www.virustotal.com) (VT) as a threat intelligence source. To use VirusTotal threat intelligence lookup, you need a VirusTotal account and API key.
You can sign up for a free VT account at the [VirusTotal getting started page](https://developers.virustotal.com/v3.0/reference#getting-started). If you're already a VirusTotal user, you can use your existing key.
For more information, see [Use notebooks to power investigations](hunting.md#use
This procedure describes how to launch your notebook and initialize MSTICPy.
-1. In Microsoft Sentinel, select **Notebooks** from the left.
-1. From the **Templates** tab, select **A Getting Started Guide For Microsoft Sentinel ML Notebooks** > **Save notebook** to save it to your Azure ML workspace.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Notebooks**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Notebooks**.
- Select **Launch notebook** to run the notebook. The notebook contains a series of cells:
+1. From the **Templates** tab, select **A Getting Started Guide For Microsoft Sentinel ML Notebooks**.
+1. Select **Create from template**.
+1. Edit the name and select the Azure Machine Learning workspace as appropriate.
+1. Select **Save** to save it to your Azure Machine Learning workspace.
+
+1. Select **Launch notebook** to run the notebook. The notebook contains a series of cells:
- *Markdown* cells contain text and graphics with instructions for using the notebook
- - *Code* cells contain executable code that perform the notebook functions
+ - *Code* cells contain executable code that performs the notebook functions
- **Reading and running code cells**
+1. Read and run the code cells in order. Skipping cells or running them out of order might cause errors later in the notebook.
- Read and run the code cells in order. Skipping cells or running them out of order may cause errors later in the notebook.
+ Run each cell by selecting the play button to the left of each cell. Depending on the function being performed, the code in the cell might run quickly, or it might take a few seconds to complete.
- Run each cell by selecting the play button to the left of each cell. Depending on the function being performed, the code in the cell may run very quickly, or it may take a few seconds to complete.
+ When the cell is running, the play button changes to a loading spinner, and a status of `Executing` is displayed at the bottom of the cell, together with the elapsed time.
- When running, the play button changes to a loading spinner, and a status of `Executing` is displayed at the bottom of the cell, together with the elapsed time.
+ If your notebook doesn't seem to be working as described, restart the kernel and run the notebook from the beginning. For example, if any cell in the **Getting Started Guide** notebook takes longer than a minute to run, try restarting the kernel and re-running the notebook.
- > [!TIP]
- > If your notebook doesn't seem to be working as described, restart the kernel and run the notebook from the beginning. For example, if any cell in the **Getting Started Guide** notebook takes longer than a minute to run, try restarting the kernel and re-running the notebook.
- >
- > The **Getting Started Guide** notebook includes instructions for the basic use of Jupyter notebooks, including restarting the Jupyter kernel.
- >
+ The **Getting Started Guide** notebook includes instructions for the basic use of Jupyter notebooks, including restarting the Jupyter kernel.
- After you've completed reading and running the cells in the **What is a Jupyter Notebook** section, you're ready to start the configuration tasks, beginning in the **Setting up the notebook environment** section.
+ After you complete reading and running the cells in the **What is a Jupyter Notebook** section, you're ready to start the configuration tasks, beginning in the **Setting up the notebook environment** section.
1. Run the first code cell in the **Setting up the notebook environment** section of your notebook, which includes the following code:
This procedure describes how to launch your notebook and initialize MSTICPy.
pd.set_option("display.html.table_schema", False) ```
- The initialization status is shown in the output. Configuration warnings about missing settings in the `Missing msticpyconfig.yaml` file are expected because you haven't configured anything yet.
-
-> [!NOTE]
-> Most Microsoft Sentinel notebooks start with a MSTICpy initialization cell that:
->
-> - Defines the minimum versions for Python and MSTICPy the notebook requires.
-> - Ensures that the latest version of MSTICPy is installed.
-> - Imports and runs the `init_notebook` function.
->
+ The initialization status is shown in the output. Configuration warnings such as `Missing msticpyconfig.yaml` are expected because you haven't configured anything yet.
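
Most Microsoft Sentinel notebooks start with an initialization cell of this kind. The following is a minimal sketch rather than the notebook's exact cell, and it assumes MSTICPy 2.x, where `init_notebook` is exposed at the package level:

```python
# Minimal sketch of a typical initialization cell (not the notebook's exact code).
# init_notebook checks Python/MSTICPy versions, loads msticpyconfig.yaml if present,
# and imports commonly used MSTICPy classes into the notebook namespace.
import msticpy as mp

mp.init_notebook()
```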
## Create your configuration file
After the basic initialization, you're ready to create your configuration file w
Many Microsoft Sentinel notebooks connect to external services such as [VirusTotal](https://www.virustotal.com) (VT) to collect and enrich data. To connect to these services you need to set and store configuration details, such as authentication tokens. Keeping this data in your configuration file saves you from having to enter authentication tokens and workspace details each time you use a notebook.
-MSTICPy uses a **msticpyconfig.yaml** for storing a wide range of configuration details. By default, a **msticpyconfig.yaml** file is generated by the notebook initialization function. If you [cloned this notebook from the Microsoft Sentinel portal](#run-and-initialize-the-getting-started-guide-notebook), the configuration file will be populated with Microsoft Sentinel workspace data. This data is read from a **config.json** file, created in the Azure ML workspace when you launch your notebook. For more information, see the [MSTICPy Package Configuration documentation](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html).
+MSTICPy uses a **msticpyconfig.yaml** for storing a wide range of configuration details. By default, a **msticpyconfig.yaml** file is generated by the notebook initialization function. If you [cloned this notebook from the Microsoft Sentinel portal](#run-and-initialize-the-getting-started-guide-notebook), the configuration file is populated with Microsoft Sentinel workspace data. This data is read from a **config.json** file, created in the Azure Machine Learning workspace when you launch your notebook. For more information, see the [MSTICPy Package Configuration documentation](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html).
-The following sections describe how to add additional configuration details to the **msticpyconfig.yaml** file.
+The following sections describe how to add more configuration details to the **msticpyconfig.yaml** file.
-> [!NOTE]
-> If you run the *Getting Started Guide* notebook again, and already have a minimally-configured **msticpyconfig.yaml** file, the `init_notebook` function does not overwrite or modify your existing file.
->
+If you run the *Getting Started Guide* notebook again and already have a minimally configured **msticpyconfig.yaml** file, the `init_notebook` function doesn't overwrite or modify your existing file.
-> [!TIP]
-> At any point in time, select the **-Help** drop-down menu in the MSTICPy configuration tool for more instructions and links to detailed documentation.
->
+At any point in time, select the **-Help** drop-down menu in the MSTICPy configuration tool for more instructions and links to detailed documentation.
### Display the MSTICPy settings editor
The following sections describe how to add additional configuration details to t
The automatically created **msticpyconfig.yaml** file, shown in the settings editor, contains two entries in the Microsoft Sentinel section. These are both populated with details of the Microsoft Sentinel workspace that the notebook was cloned from. One entry has the name of your workspace and the other is named **Default**.
- MSTICPy allows you to store configurations for multiple Microsoft Sentinel workspaces and switch between them. The **Default** entry allows you to authenticate to your "home" workspace by default, without having to name it explicitly. If you add additional workspaces you can configure any one of them to be the **Default** entry.
+ MSTICPy allows you to store configurations for multiple Microsoft Sentinel workspaces and switch between them. The **Default** entry allows you to authenticate to your "home" workspace by default, without having to name it explicitly. If you add other workspaces, you can configure any one of them to be the **Default** entry.
- > [!NOTE]
- > In the Azure ML environment, the settings editor might take 10-20 seconds to appear.
+ In the Azure Machine Learning environment, the settings editor might take 10-20 seconds to appear.
1. Verify your current settings and select **Save Settings**.
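
The cell that displays the editor is elided in this excerpt. A minimal sketch, assuming MSTICPy's `MpConfigEdit` class from `msticpy.config`:

```python
# Sketch: display the MSTICPy settings editor (assumes msticpy.config.MpConfigEdit)
from msticpy.config import MpConfigEdit

mpedit = MpConfigEdit()  # loads the current msticpyconfig.yaml
mpedit                   # render the editor UI in the notebook output
```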
The following sections describe how to add additional configuration details to t
This procedure describes how to store your [VirusTotal API key](#prerequisites) in the **msticpyconfig.yaml** file. You can opt to upload the API key to Azure Key Vault, but you must configure the Key Vault settings first. For more information, see [Configure Key Vault settings](#configure-key-vault-settings).
-**To add VirusTotal details in the MSTICPy settings editor**:
+To add VirusTotal details in the MSTICPy settings editor, complete the following steps.
1. Enter the following code in a code cell and run:
This procedure describes how to store your [VirusTotal API key](#prerequisites)
1. Select **Update**, and then select **Save Settings** at the bottom of the settings editor.
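
The settings-editor cell for this procedure is elided above. A plausible sketch, assuming the `mpedit` editor object created earlier and the editor's `set_tab` method:

```python
# Sketch: open the settings editor on the TI Providers tab (tab name assumed)
mpedit.set_tab("TI Providers")
mpedit
```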
-> [!TIP]
-> For more information about other supported threat intelligence providers, see [Threat intelligence providers](https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html) in the MSTICPy documentation and [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md).
->
+For more information about other supported threat intelligence providers, see [Threat intelligence providers](https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html) in the MSTICPy documentation and [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md).
+ ### Add GeoIP provider settings This procedure describes how to store a [MaxMind GeoLite2 account key](#prerequisites) in the **msticpyconfig.yaml** file, which allows your notebook to use geolocation lookup services for IP addresses.
-**To add GeoIP provider settings in the MSTICPy settings editor**:
+To add GeoIP provider settings in the MSTICPy settings editor, complete the following steps.
1. Enter the following code in an empty code cell and run:
This procedure describes how to store a [MaxMind GeoLite2 account key](#prerequi
- On Windows, this folder is mapped to the **%USERPROFILE%/.msticpy**. - On Linux or macOS, this path is mapped to the **.msticpy** folder in your home folder.
-> [!TIP]
-> For more information about other supported geolocation lookup services, see the [MSTICPy GeoIP Providers documentation](https://msticpy.readthedocs.io/en/latest/data_acquisition/GeoIPLookups.html).
->
+
+For more information about other supported geolocation lookup services, see the [MSTICPy GeoIP Providers documentation](https://msticpy.readthedocs.io/en/latest/data_acquisition/GeoIPLookups.html).
### Configure Azure Cloud settings
If your organization doesn't use the Azure public cloud, you must specify this i
### Validate settings
-Select **Validate settings** in the settings editor.
+1. Select **Validate settings** in the settings editor.
-Warning messages about missing configurations are expected, but you shouldn't have any for threat intelligence provider or GeoIP provider settings.
+ Warning messages about missing configurations are expected, but you shouldn't have any for threat intelligence provider or GeoIP provider settings.
-Depending on your environment, you may also need to [Configure Key Vault settings](#configure-key-vault-settings) or [Specify the Azure cloud](#specify-the-azure-cloud-and-azure-authentication-methods).
+1. Depending on your environment, you might also need to [Configure Key Vault settings](#configure-key-vault-settings) or [Specify the Azure cloud](#specify-the-azure-cloud-and-azure-authentication-methods).
-If you need to make any changes because of the validation, make those changes and then select **Save Settings**.
+1. If you need to make any changes because of the validation, make those changes and then select **Save Settings**.
-When you're done, select the **Close** button to hide the validation output.
+1. When you're done, select the **Close** button to hide the validation output.
For more information, see: [Advanced configurations for Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebooks-msticpy-advanced.md)
msticpy.settings.refresh_config()
## Test your notebook
-Now that you've initialized your environment and configured basic settings for your workspace, use the MSTICPy `QueryProvider` class to test the notebook. `QueryProvider` queries a data source, in this case your Microsoft Sentinel workspace, and makes the queried data available to view and analyze in your notebook.
+Now that you initialized your environment and configured basic settings for your workspace, use the MSTICPy `QueryProvider` class to test the notebook. `QueryProvider` queries a data source, in this case, your Microsoft Sentinel workspace, and makes the queried data available to view and analyze in your notebook.
-Use the following procedures to create an instance of the `QueryProvider` class, authenticate to Microsoft Sentinel from your notebook, and view and run queries with a variety of different parameter options.
+Use the following procedures to create an instance of the `QueryProvider` class, authenticate to Microsoft Sentinel from your notebook, and view and run queries with various parameter options.
-> [!TIP]
-> You can have multiple instances of `QueryProvider` loaded for use with multiple Microsoft Sentinel workspaces or other data providers such as Microsoft Defender for Endpoint.
->
+You can have multiple instances of `QueryProvider` loaded for use with multiple Microsoft Sentinel workspaces or other data providers such as Microsoft Defender for Endpoint.
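
For example, the following sketch loads two provider instances; the `MDE` environment name is taken from MSTICPy's documented data environments and might differ in your version:

```python
# Sketch: separate QueryProvider instances for different data sources
qry_sentinel = QueryProvider("AzureSentinel")   # Microsoft Sentinel workspace
qry_mde = QueryProvider("MDE")                  # Microsoft Defender for Endpoint
```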
### Load the QueryProvider
To load the `QueryProvider` for `AzureSentinel`, proceed to the cell with the f
qry_prov = QueryProvider("AzureSentinel") ```
-> [!NOTE]
-> If you see a warning `Runtime dependency of PyGObject is missing` when loading the Microsoft Sentinel driver, see the [Error: *Runtime dependency of PyGObject is missing*](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/%22Runtime-dependency-of-PyGObject-is-missing%22-error).
+If you see a warning `Runtime dependency of PyGObject is missing` when loading the Microsoft Sentinel driver, see the [Error: *Runtime dependency of PyGObject is missing*](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/%22Runtime-dependency-of-PyGObject-is-missing%22-error).
This warning doesn't impact notebook functionality.
->
### Authenticate to your Microsoft Sentinel workspace from your notebook
-In Azure ML notebooks, the authentication defaults to using the credentials you used to authenticate to the Azure ML workspace.
+In Azure Machine Learning notebooks, the authentication defaults to using the credentials you used to authenticate to the Azure Machine Learning workspace.
-**Authenticate by using managed identity**
+To authenticate by using managed identity, complete the following steps.
-Run the following code to authenticate to your Sentinel workspace.
+1. Run the following code to authenticate to your Microsoft Sentinel workspace.
```python # Get the default Microsoft Sentinel workspace details from msticpyconfig.yaml
Run the following code to authenticate to your Sentinel workspace.
qry_prov.connect(ws_config) ```
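
The middle of that cell is truncated above. A complete sketch, assuming MSTICPy's `WorkspaceConfig` class:

```python
# Sketch: authenticate the query provider with workspace details from msticpyconfig.yaml
from msticpy.common.wsconfig import WorkspaceConfig

ws_config = WorkspaceConfig()   # loads the "Default" workspace entry
qry_prov.connect(ws_config)     # authenticates and connects to the workspace
```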
-Output similar to the following is displayed in your notebook:
+1. Review the output. It's similar to the following image.
:::image type="content" source="media/notebook-get-started/authorization-connected-workspace.png" alt-text="Screenshot that shows authentication to Azure that ends with a connected message.":::
To avoid having to re-authenticate if you restart the kernel or run another note
The Azure CLI component on the Compute instance caches a *refresh token* that it can reuse until the token times out. MSTICPy automatically uses Azure CLI credentials, if they're available.
-To authenticate using Azure CLI enter the following into an empty cell and run it:
+To authenticate using Azure CLI, enter the following command into an empty cell and run it:
```azurecli !az login ```
-> [!NOTE]
-> You will need to re-authenticate if you restart your Compute instance or switch to a different instance. For more information, see [Caching credentials with Azure CLI](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/Caching-credentials-with-Azure-CLI) section in the Microsoft Sentinel Notebooks GitHub repository wiki.
->
+You need to re-authenticate if you restart your Compute instance or switch to a different instance. For more information, see the [Caching credentials with Azure CLI](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/Caching-credentials-with-Azure-CLI) section in the Microsoft Sentinel Notebooks GitHub repository wiki.
### View the Microsoft Sentinel workspace data schema and built-in MSTICPy queries
MSTICPy also includes many built-in queries available for you to run. List avail
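
The schema-browsing cell isn't shown in this excerpt. A short sketch, assuming the query provider's `schema` property:

```python
# Sketch: list the first ten table names in the connected workspace schema
print(list(qry_prov.schema.keys())[:10])
```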
**To view a sample of available queries**:
-Proceed to the next cell, with the following code, and run it. You can omit the `[::5]` to list all queries.
+1. Proceed to the next cell, with the following code, and run it. You can omit the `[::5]` to list all queries.
-```python
-# Get a sample of available queries
-print(qry_prov.list_queries()[::5]) # showing a sample - remove "[::5]" for whole list
-```
-
-The following output appears:
-
-```output
-Sample of queries
-=================
-['Azure.get_vmcomputer_for_host', 'Azure.list_azure_activity_for_account', 'AzureNetwork.az_net_analytics', 'AzureNetwork.get_heartbeat_for_ip', 'AzureSentinel.get_bookmark_by_id', 'Heartbeatget_heartbeat_for_host', 'LinuxSyslog.all_syslog', 'LinuxSyslog.list_logon_failures', 'LinuxSyslog.sudo_activity', 'MultiDataSource.get_timeseries_decompose', 'Network.get_host_for_ip','Office365.list_activity_for_ip', 'SecurityAlert.list_alerts_for_ip', 'ThreatIntelligence.list_indicators_by_filepath', 'WindowsSecurity.get_parent_process', 'WindowsSecurity.list_host_events','WindowsSecurity.list_hosts_matching_commandline', 'WindowsSecurity.list_other_events']
-```
+ ```python
+ # Get a sample of available queries
+ print(qry_prov.list_queries()[::5]) # showing a sample - remove "[::5]" for whole list
+ ```
-**To get help about a query by passing `?` as a parameter**:
+1. Review the output.
-```python
-# Get help about a query by passing "?" as a parameter
-qry_prov.Azure.list_all_signins_geo("?")
-```
+ ```output
+ Sample of queries
+ =================
+ ['Azure.get_vmcomputer_for_host', 'Azure.list_azure_activity_for_account', 'AzureNetwork.az_net_analytics', 'AzureNetwork.get_heartbeat_for_ip', 'AzureSentinel.get_bookmark_by_id', 'Heartbeat.get_heartbeat_for_host', 'LinuxSyslog.all_syslog', 'LinuxSyslog.list_logon_failures', 'LinuxSyslog.sudo_activity', 'MultiDataSource.get_timeseries_decompose', 'Network.get_host_for_ip', 'Office365.list_activity_for_ip', 'SecurityAlert.list_alerts_for_ip', 'ThreatIntelligence.list_indicators_by_filepath', 'WindowsSecurity.get_parent_process', 'WindowsSecurity.list_host_events', 'WindowsSecurity.list_hosts_matching_commandline', 'WindowsSecurity.list_other_events']
+ ```
-The following output appears:
+1. To get help about a query, pass `?` as a parameter:
-```output
-Help for 'list_all_signins_geo' query
-=====================================
-Query: list_all_signins_geo
-Data source: AzureSentinel
-Gets Signin data used by morph charts
-
-Parameters
--
-add_query_items: str (optional)
- Additional query clauses
-end: datetime (optional)
- Query end time
-start: datetime (optional)
- Query start time
- (default value is: -5)
-table: str (optional)
- Table name
- (default value is: SigninLogs)
-Query:
- {table} | where TimeGenerated >= datetime({start}) | where TimeGenerated <= datetime({end}) | extend Result = iif(ResultType==0, "Sucess", "Failed") | extend Latitude = tostring(parse_json(tostring(LocationDetails.geoCoordinates)).latitude) | extend Longitude = tostring(parse_json(tostring(LocationDetails.geoCoordinates)).longitude)
-```
+ ```python
+ # Get help about a query by passing "?" as a parameter
+ qry_prov.Azure.list_all_signins_geo("?")
+ ```
-**To view both tables and queries in a scrollable, filterable list**:
+1. Review the output.
+
+ ```output
+ Help for 'list_all_signins_geo' query
+ =====================================
+ Query: list_all_signins_geo
+ Data source: AzureSentinel
+ Gets Signin data used by morph charts
+
+ Parameters
+ -
+ add_query_items: str (optional)
+ Additional query clauses
+ end: datetime (optional)
+ Query end time
+ start: datetime (optional)
+ Query start time
+ (default value is: -5)
+ table: str (optional)
+ Table name
+ (default value is: SigninLogs)
+ Query:
+ {table} | where TimeGenerated >= datetime({start}) | where TimeGenerated <= datetime({end}) | extend Result = iif(ResultType==0, "Sucess", "Failed") | extend Latitude = tostring(parse_json(tostring(LocationDetails.geoCoordinates)).latitude) | extend Longitude = tostring(parse_json(tostring(LocationDetails.geoCoordinates)).longitude)
+ ```
-Proceed to the next cell, with the following code, and run it:
+1. To view both tables and queries in a scrollable, filterable list, proceed to the next cell, with the following code, and run it.
-```python
-qry_prov.browse_queries()
-```
+ ```python
+ qry_prov.browse_queries()
+ ```
-For the selected query, all required and optional parameters are displayed, together with the full text of the query. For example:
+1. For the selected query, all required and optional parameters are displayed, together with the full text of the query. For example:
+ :::image type="content" source="media/notebook-get-started/view-tables-queries-in-list.png" alt-text="Screenshot of tables and queries displayed in a scrollable, filterable list.":::
+
+While you can't run queries from the browser, you can copy and paste the example at the end of each query to run elsewhere in the notebook.
For more information, see [Running a pre-defined query](https://msticpy.readthedocs.io/en/latest/data_acquisition/DataProviders.html#running-a-pre-defined-query) in the MSTICPy documentation.
-> [!NOTE]
-> While you can't run queries from the browser, you can copy and paste the example at the end of each query to run elsewhere in the notebook.
->
- ### Run queries with time parameters Most queries require time parameters. Date/time strings are tedious to type in, and modifying them in multiple places can be error-prone. Each query provider has default start and end time parameters for queries. These time parameters are used by default, whenever time parameters are called for. You can change the default time range by opening the `query_time` control. The changes remain in effect until you change them again.
-Proceed to the next cell, with the following code, and run it:
+1. Proceed to the next cell, with the following code, and run it:
-```python
-# Open the query time control for your query provider
-qry_prov.query_time
-```
+ ```python
+ # Open the query time control for your query provider
+ qry_prov.query_time
+ ```
-Set the `start` and `end` times as needed. For example:
+1. Set the `start` and `end` times as needed. For example:
+ :::image type="content" source="media/notebook-get-started/set-time-parameters.png" alt-text="Screenshot of setting default time parameters for queries.":::
### Run a query using the built-in time range
-Query results return as a [Pandas DataFrame](https://pandas.pydata.org), which is a tabular data structure, like a spreadsheet or database table. You can use [pandas functions](https://pandas.pydata.org/docs/user_guide/10min.html) to perform extra filtering and analysis on the query results.
-
-The following code cell runs a query using the query provider default time settings. You can change this range, and run the code cell again to query for the new time range.
+Query results are returned as a [Pandas DataFrame](https://pandas.pydata.org), which is a tabular data structure, like a spreadsheet or database table. Use [pandas functions](https://pandas.pydata.org/docs/user_guide/10min.html) to perform extra filtering and analysis on the query results.
-```python
-# The time parameters are taken from the qry_prov time settings
-# but you can override this by supplying explict "start" and "end" datetimes
-signins_df = qry_prov.Azure.list_all_signins_geo()
+1. Run the following code cell. It runs a query using the query provider default time settings. You can change this range, and run the code cell again to query for the new time range.
-# display first 5 rows of any results
-# If there is no data, just the column headings display
-signins_df.head()
-```
+ ```python
+ # The time parameters are taken from the qry_prov time settings
+ # but you can override this by supplying explicit "start" and "end" datetimes
+ signins_df = qry_prov.Azure.list_all_signins_geo()
+
+ # display first 5 rows of any results
+ # If there is no data, just the column headings display
+ signins_df.head()
+ ```
-The output displays the first five rows of results. For example:
+1. Review the output. It displays the first five rows of results. For example:
+ :::image type="content" source="media/notebook-get-started/run-query-with-built-in-time-range.png" alt-text="Screenshot of a query run with the built-in time range.":::
-If there's no data, only the column headings display.
+ If there's no data, only the column headings display.
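
Because the results are a pandas DataFrame, you can filter and aggregate them directly. A short sketch; the `Result` column name comes from the query shown earlier:

```python
# Sketch: pandas filtering on the query results
failed_signins = signins_df[signins_df["Result"] == "Failed"]
print(f"Failed sign-ins: {len(failed_signins)}")
```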
### Run a query using a custom time range
-You can also create a new query time object and pass it to a query as a parameter, which allows you to run a one-off query for a different time range, without affecting the query provider defaults.
+You can also create a new query time object and pass it to a query as a parameter. That allows you to run a one-off query for a different time range, without affecting the query provider defaults.
```python # Create and display a QueryTime control.
time_range = nbwidgets.QueryTime()
time_range ```
-After you've set the desired time range, you can pass the time range to the query function, running the following code in a separate cell from the previous code:
+After you set the desired time range, pass it to the query function by running the following code in a separate cell from the previous code:
```python signins_df = qry_prov.Azure.list_all_signins_geo(time_range)
signins_df = qry_prov.Azure.list_all_signins_geo(start=q_start, end=q_end)
### Customize your queries
-You can customize the built-in queries by adding additional query logic, or run complete queries using the `exec_query` function.
+You can customize the built-in queries by adding more query logic, or run complete queries using the `exec_query` function.
For example, most built-in queries support the `add_query_items` parameter, which you can use to append filters or other operations to the queries.
For example, most built-in queries support the `add_query_items` parameter, whic
) ```
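
The full cell is elided above. A hypothetical example of `add_query_items`, with a made-up filter clause:

```python
# Hypothetical example: append a KQL filter to a built-in query via add_query_items
failed_signins_df = qry_prov.Azure.list_all_signins_geo(
    add_query_items='| where Result == "Failed" | take 10'
)
failed_signins_df.head()
```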
-1. Pass a full KQL query string to the query provider. The query runs against the connected workspace, and the data returns as a panda DataFrame. Run:
+1. Pass a full Kusto Query Language (KQL) query string to the query provider. The query runs against the connected workspace, and the data returns as a pandas DataFrame. Run:
```python # Define your query
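
The rest of that cell is truncated above. A complete sketch using `exec_query`, with an illustrative KQL query string:

```python
# Sketch: run an ad hoc KQL query string with exec_query
query = """
SigninLogs
| where TimeGenerated > ago(1d)
| summarize SigninCount = count() by ResultType
"""
signin_counts_df = qry_prov.exec_query(query)
signin_counts_df.head()
```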
For more information, see:
- The [MSTICPy query reference](https://msticpy.readthedocs.io/en/latest/data_acquisition/DataQueries.html) - [Running MSTICPy pre-defined queries](https://msticpy.readthedocs.io/en/latest/data_acquisition/DataProviders.html#running-an-pre-defined-query)
-### Test VirusTotal and GeoLite2
-
-**To check for an IP address in VirusTotal data**:
-
-To use threat intelligence to see if an IP address appears in VirusTotal data, run the cell with the following code:
+### Test VirusTotal
-```python
-# Create your TI provider - note you can re-use the TILookup provider ('ti') for
-# subsequent queries - you don't have to create it for each query
-ti = TILookup()
+1. To use threat intelligence to see if an IP address appears in VirusTotal data, run the cell with the following code:
-# Look up an IP address
-ti_resp = ti.lookup_ioc("85.214.149.236")
+ ```python
+ # Create your TI provider - note you can re-use the TILookup provider ('ti') for
+ # subsequent queries - you don't have to create it for each query
+ ti = TILookup()
+
+ # Look up an IP address
+ ti_resp = ti.lookup_ioc("85.214.149.236")
+
+ ti_df = ti.result_to_df(ti_resp)
+ ti.browse_results(ti_df, severities="all")
+ ```
-ti_df = ti.result_to_df(ti_resp)
-ti.browse_results(ti_df, severities="all")
-```
+1. Review the output. For example:
-The output shows details about the results. For example:
+ :::image type="content" source="media/notebook-get-started/test-virustotal-ip.png" alt-text="Screenshot of an IP address appearing in VirusTotal data.":::
+1. Scroll down to view full results.
-Make sure to scroll down to view full results. For more information, see [Threat Intel Lookups in MSTICPy](https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html).
+For more information, see [Threat Intel Lookups in MSTICPy](https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html).
-**To test geolocation IP lookup**:
+### Test geolocation IP lookup
-To get geolocation details for an IP address using the MaxMind service, run the cell with the following code:
+1. To get geolocation details for an IP address using the MaxMind service, run the cell with the following code:
-```python
-# create an instance of the GeoLiteLookup provider - this
-# can be re-used for subsequent queries.
-geo_ip = GeoLiteLookup()
-raw_res, ip_entity = geo_ip.lookup_ip("85.214.149.236")
-display(ip_entity[0])
-```
-
-The output shows geolocation information for the IP address. For example:
+ ```python
+ # create an instance of the GeoLiteLookup provider - this
+ # can be re-used for subsequent queries.
+ geo_ip = GeoLiteLookup()
+ raw_res, ip_entity = geo_ip.lookup_ip("85.214.149.236")
+ display(ip_entity[0])
+ ```
-```output
-ipaddress
-{ 'AdditionalData': {},
- 'Address': '85.214.149.236',
- 'Location': { 'AdditionalData': {},
- 'CountryCode': 'DE',
- 'CountryName': 'Germany',
- 'Latitude': 51.2993,
- 'Longitude': 9.491,
- 'Type': 'geolocation',
- 'edges': set()},
- 'ThreatIntelligence': [],
- 'Type': 'ipaddress',
- 'edges': set()}
-```
+1. Review the output. For example:
+
+ ```output
+ ipaddress
+ { 'AdditionalData': {},
+ 'Address': '85.214.149.236',
+ 'Location': { 'AdditionalData': {},
+ 'CountryCode': 'DE',
+ 'CountryName': 'Germany',
+ 'Latitude': 51.2993,
+ 'Longitude': 9.491,
+ 'Type': 'geolocation',
+ 'edges': set()},
+ 'ThreatIntelligence': [],
+ 'Type': 'ipaddress',
+ 'edges': set()}
+ ```
-> [!NOTE]
-> The first time you run this code, you should see the GeoLite driver downloading its database.
->
+The first time you run this code, you should see the GeoLite driver downloading its database.
For more information, see [MSTICPy GeoIP Providers](https://msticpy.readthedocs.io/en/latest/data_acquisition/GeoIPLookups.html).
For more information, see [MSTICPy GeoIP Providers](https://msticpy.readthedocs.
This section is relevant only when storing secrets in Azure Key Vault.
-When you store secrets in Azure Key Vault, you'll need to create the Key Vault first, in the [Azure global KeyVault management portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.KeyVault%2Fvaults).
+When you store secrets in Azure Key Vault, you need to create the Key Vault first in the [Azure global Key Vault management portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.KeyVault%2Fvaults).
-Required settings are all values that you get from the Vault properties, although some may have different names. For example:
+Required settings are all values that you get from the Vault properties, although some might have different names. For example:
- **VaultName** is shown at the top left of the Azure Key Vault **Properties** screen - **TenantId** is shown as **Directory ID**
The **Use KeyRing** option is selected by default, and lets you cache Key Vault
> In our case, the *compute* is the Jupyter hub server, where the notebook kernel is running, and not necessarily the machine that your browser is running on. If you are using Azure ML, the *compute* will be the Azure ML Compute instance you have selected. Keyring does its caching on the host where the notebook kernel is running. >
-**To add Key Vault settings in the MSTICPy settings editor**:
+To add Key Vault settings in the MSTICPy settings editor, complete the following steps.
1. Proceed to the next cell, with the following code, and run it:
The **Use KeyRing** option is selected by default, and lets you cache Key Vault
### Test Key Vault
-To test your key vault, check to see if you can connect and view your secrets. If you haven't added a secret, you won't see any details. If you need to, add a test secret from the Azure Key Vault portal to the vault, and check that it shows in Microsoft Sentinel.
+To test your key vault, check whether you can connect and view your secrets. If you didn't add a secret, you don't see any details. If you need to, add a test secret from the Azure Key Vault portal to the vault, and check that it appears in Microsoft Sentinel.
For example:
mpconfig.show_kv_secrets()
> Also, delete cached copies of the notebook. For example, look in the **.ipynb_checkpoints** sub-folder of your notebook directory, and delete any copies of this notebook found. Saving the notebook with a cleared output should overwrite the checkpoint copy. >
-After you have Key Vault configured, you can use the **Upload to KV** button in the Data Providers and TI Providers sections to move the selected setting to the Vault. MSTICPy will generate a default name for the secret based on the path of the setting, such as `TIProviders-VirusTotal-Args-AuthKey`.
+After you have Key Vault configured, you can use the **Upload to KV** button in the Data Providers and TI Providers sections to move the selected setting to the Vault. MSTICPy generates a default name for the secret based on the path of the setting, such as `TIProviders-VirusTotal-Args-AuthKey`.
-If the value is successfully uploaded, the contents of the **Value** field in the settings editor is deleted and the underlying setting is replaced with a placeholder value. MSTICPy will use this to indicate that it should automatically generate the Key Vault path when trying to retrieve the key.
+If the value is successfully uploaded, the contents of the **Value** field in the settings editor are deleted and the underlying setting is replaced with a placeholder value. MSTICPy uses this value to indicate that it should automatically generate the Key Vault path when trying to retrieve the key.
-If you already have the required secrets stored in a Key Vault you can enter the secret name in the **Value** field. If the secret is not stored in your default Vault (the values specified in the [Key Vault](https://msticpy.readthedocs.io/en/latest/getting_started/SettingsEditor.html#key-vault) section), you can specify a path of **VaultName/SecretName**.
+If you already have the required secrets stored in a Key Vault, you can enter the secret name in the **Value** field. If the secret isn't stored in your default Vault (the values specified in the [Key Vault](https://msticpy.readthedocs.io/en/latest/getting_started/SettingsEditor.html#key-vault) section), you can specify a path of **VaultName/SecretName**.
-Fetching settings from a Vault in a different tenant is not currently supported. For more information, see [Specifying secrets as Key Vault secrets](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html#specifying-secrets-as-key-vault-secrets).
+Fetching settings from a Vault in a different tenant isn't currently supported. For more information, see [Specifying secrets as Key Vault secrets](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html#specifying-secrets-as-key-vault-secrets).
## Specify the Azure cloud and Azure authentication methods
-If you are using a sovereign or government Azure cloud, rather than the public or global Azure cloud, you must select the appropriate cloud in your settings. For most organizations the global cloud is the default.
+If you're using a sovereign or government Azure cloud, rather than the public or global Azure cloud, you must select the appropriate cloud in your settings. For most organizations, the global cloud is the default.
You can also use these Azure settings to define default preferences for the Azure authentication type.
-**To specify Azure cloud and Azure authentication methods**:
+To specify Azure cloud and Azure authentication methods, complete the following steps.
1. Proceed to the next cell, with the following code, and run it:
You can also use these Azure settings to define default preferences for the Azur
1. Select one or more of the following methods: - **env** to store your Azure Credentials in environment variables.
- - **msi** to use Managed Service Identity, which is an identity assigned to the host or virtual machine where the Jupyter hub is running. MSI is not currently supported in Azure ML Compute instances.
+ - **msi** to use Managed Service Identity, which is an identity assigned to the host or virtual machine where the Jupyter hub is running. MSI isn't currently supported in Azure Machine Learning Compute instances.
- **cli** to use credentials from an authenticated Azure CLI session. - **interactive** to use the interactive device authorization flow using a [one-time device code](#authenticate-to-your-microsoft-sentinel-workspace-from-your-notebook).
- > [!TIP]
- > In most cases, we recommend selecting multiple methods, such as both **cli** and **interactive**. Azure authentication will try each of the configured methods in the order listed above until one succeeds.
- >
+ In most cases, we recommend selecting multiple methods, such as both **cli** and **interactive**. Azure authentication tries each of the configured methods in the order listed until one succeeds.
1. Select **Save** and then **Save Settings**.
-For example:
+ For example:
+ :::image type="content" source="media/notebook-get-started/settings-for-azure-gov-cloud.png" alt-text="Screenshot of settings defined for the Azure Government cloud.":::
## Next steps
You can also try out other notebooks stored in the [Microsoft Sentinel Notebooks
- [Machine Learning examples](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/9bba6bb9007212fca76169c3d9a29df2da95582d/Machine%20Learning%20in%20Notebooks%20Examples.ipynb) - The [Entity Explorer series](https://github.com/Azure/Azure-Sentinel-Notebooks/) of notebooks, which allow for a deep drill-down into details about a host, account, IP address, and other entities.
-> [!TIP]
-> If you use the notebook described in this article in another Jupyter environment, you can use any kernel that supports Python 3.6 or later.
->
-> To use MSTICPy notebooks outside of Microsoft Sentinel and Azure Machine Learning (ML), you'll also need to configure your Python environment. Install Python 3.6 or later with the Anaconda distribution, which includes many of the required packages.
->
+If you use the notebook described in this article in another Jupyter environment, you can use any kernel that supports Python 3.6 or later.
+
+To use MSTICPy notebooks outside of Microsoft Sentinel and Azure Machine Learning (ML), you also need to configure your Python environment. Install Python 3.6 or later with the Anaconda distribution, which includes many of the required packages.
### More reading on MSTICPy and notebooks
sentinel Notebooks Hunt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-hunt.md
description: Launch and run notebooks with the Microsoft Sentinel hunting capabi
- Previously updated : 01/05/2023 Last updated : 03/08/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ #Customer intent: As a security analyst, I want to deploy and launch a Jupyter notebook to hunt for security threats.
Last updated 01/05/2023
As part of your security investigations and hunting, launch and run Jupyter notebooks to programmatically analyze your data.
-In this how-to guide, you'll create an Azure Machine Learning (ML) workspace, launch notebook from Sentinel portal to your Azure ML workspace, and run code in the notebook.
+In this article, you create an Azure Machine Learning workspace, launch a notebook from Microsoft Sentinel to your Azure Machine Learning workspace, and run code in the notebook.
++
+<a name ="create-an-azure-ml-workspace-from-microsoft-sentinel"></a>
## Prerequisites
-We recommend that you learn about Microsoft Sentinel notebooks in general before completing the steps in this article. See [Use Jupyter notebooks to hunt for security threats](notebooks.md).
+We recommend that you learn about Microsoft Sentinel notebooks before completing the steps in this article. See [Use Jupyter notebooks to hunt for security threats](notebooks.md).
To use Microsoft Sentinel notebooks, you must have the following roles and permissions:
To use Microsoft Sentinel notebooks, you must have the following roles and permi
|**Microsoft Sentinel** |- The **Microsoft Sentinel Contributor** role, in order to save and launch notebooks from Microsoft Sentinel | |**Azure Machine Learning** |- A resource group-level **Owner** or **Contributor** role, to create a new Azure Machine Learning workspace if needed. <br>- A **Contributor** role on the Azure Machine Learning workspace where you run your Microsoft Sentinel notebooks. <br><br>For more information, see [Manage access to an Azure Machine Learning workspace](../machine-learning/how-to-assign-roles.md). |
-## Create an Azure ML workspace from Microsoft Sentinel
+## Create an Azure Machine Learning workspace from Microsoft Sentinel
-To create your workspace, select one of the following tabs, depending on whether you'll be using a public or private endpoint.
+To create your workspace, select one of the following tabs, depending on whether you're using a public or private endpoint.
-- We recommend using a *public endpoint* if your Microsoft Sentinel workspace has one, to avoid potential issues in the network communication.-- If you want to use an Azure ML workspace in a virtual network, use a *private endpoint*.
+- We recommend that you use a *public endpoint* when your Microsoft Sentinel workspace has one, to avoid potential issues in the network communication.
+- If you want to use an Azure Machine Learning workspace in a virtual network, use a *private endpoint*.
# [Public endpoint](#tab/public-endpoint)
-1. From the Azure portal, go to **Microsoft Sentinel** > **Threat management** > **Notebooks** and then select **Create a new AML workspace**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Notebooks**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Notebooks**.
+
+1. Select **Configure Azure Machine Learning** > **Create a new AML workspace**.
1. Enter the following details, and then select **Next**.
To create your workspace, select one of the following tabs, depending on whether
|**Resource group**|Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution.| |**Workspace name**|Enter a unique name that identifies your workspace. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others.| |**Region**|Select the location closest to your users and the data resources to create your workspace.|
- |**Storage account**| A storage account is used as the default datastore for the workspace. You may create a new Azure Storage resource or select an existing one in your subscription.|
- |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.|
- |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.|
- |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
+ |**Storage account**| A storage account is used as the default datastore for the workspace. You might create a new Azure Storage resource or select an existing one in your subscription.|
+ |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You might create a new Azure Key Vault resource or select an existing one in your subscription.|
+ |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You might create a new Azure Application Insights resource or select an existing one in your subscription.|
+ |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you might choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
1. On the **Networking** tab, select **Enable public access from all networks**.
To create your workspace, select one of the following tabs, depending on whether
# [Private endpoint](#tab/private-endpoint)
-The steps in this procedure reference specific articles in the Azure Machine Learning documentation when relevant. For more information, see [How to create a secure Azure ML workspace](../machine-learning/tutorial-create-secure-workspace.md).
+The steps in this procedure reference specific articles in the Azure Machine Learning documentation when relevant. For more information, see [How to create a secure Azure Machine Learning workspace](../machine-learning/tutorial-create-secure-workspace.md).
+
+1. Create a virtual machine (VM) jump box within a virtual network. Since the virtual network restricts access from the public internet, the jump box is used as a way to connect to resources behind the virtual network.
-1. Create a VM jump box within a VNet. Since the VNet restricts access from the public internet, the jump box is used as a way to connect to resources behind the VNet.
+1. Access the jump box, and then go to your Microsoft Sentinel workspace. We recommend using [Azure Bastion](../bastion/bastion-overview.md) to access the VM.
-1. Access the jump box, and then go to your Microsoft Sentinel workspace. We recommend using [Azure Bastion](../bastion/bastion-overview.md) to access the VM.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Notebooks**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Notebooks**.
-1. In Microsoft Sentinel, select **Threat management** > **Notebooks** and then select **Create a new AML workspace**.
+1. Select **Configure Azure Machine Learning** > **Create a new AML workspace**.
1. Enter the following details, and then select **Next**.
The steps in this procedure reference specific articles in the Azure Machine Lea
|**Resource group**|Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution.| |**Workspace name**|Enter a unique name that identifies your workspace. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others.| |**Region**|Select the location closest to your users and the data resources to create your workspace.|
- |**Storage account**| A storage account is used as the default datastore for the workspace. You may create a new Azure Storage resource or select an existing one in your subscription.|
- |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.|
- |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.|
- |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
+ |**Storage account**| A storage account is used as the default datastore for the workspace. You might create a new Azure Storage resource or select an existing one in your subscription.|
+ |**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You might create a new Azure Key Vault resource or select an existing one in your subscription.|
+ |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You might create a new Azure Application Insights resource or select an existing one in your subscription.|
+ |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you might choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
-1. On the **Networking** tab, select **Disable public access and use private endpoint**. Make sure to use the same VNet as you have in the VM jump box. For example:
+1. On the **Networking** tab, select **Disable public access and use private endpoint**. Make sure to use the same virtual network as you have in the VM jump box. For example:
:::image type="content" source="media/notebooks/create-private-endpoint.png" alt-text="Screenshot of the Create private endpoint page in Microsoft Sentinel." lightbox="media/notebooks/create-private-endpoint.png":::
The steps in this procedure reference specific articles in the Azure Machine Lea
It can take several minutes to create your workspace in the cloud. During this time, the workspace **Overview** page shows the current deployment status, and updates when the deployment is complete.
-1. In the Azure Machine Learning studio, on the **Compute** page, create a new compute. On the **Advanced Settings** tab, make sure to select the same VNet that you'd used for your VM jump box. For more information, see [Create and manage an Azure Machine Learning compute instance](../machine-learning/how-to-create-compute-instance.md?tabs=python).
+1. In the Azure Machine Learning studio, on the **Compute** page, create a new compute. On the **Advanced Settings** tab, make sure to select the same virtual network that you'd used for your VM jump box. For more information, see [Create and manage an Azure Machine Learning compute instance](../machine-learning/how-to-create-compute-instance.md?tabs=python).
-1. Configure your network traffic to access Azure ML from behind a firewall. For more information, see [Configure inbound and outbound network traffic](../machine-learning/how-to-access-azureml-behind-firewall.md?tabs=ipaddress%2cpublic).
+1. Configure your network traffic to access Azure Machine Learning from behind a firewall. For more information, see [Configure inbound and outbound network traffic](../machine-learning/how-to-access-azureml-behind-firewall.md?tabs=ipaddress%2cpublic).
Continue with one of the following sets of steps:
Continue with one of the following sets of steps:
- Clone and launch notebooks from Microsoft Sentinel to Azure Machine Learning - Upload notebooks to Azure Machine Learning manually
- - Clone the [Microsoft Sentinel notebooks GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) on the Azure Machine learning terminal
+ - Clone the [Microsoft Sentinel notebooks GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) on the Azure Machine Learning terminal
- **If you have another private link that uses a different VNET**, do the following:
For more information, see:
-After your deployment is complete, you can go back to the Microsoft Sentinel **Notebooks** and launch notebooks from your new Azure ML workspace.
+After your deployment is complete, go back to **Notebooks** in Microsoft Sentinel and launch notebooks from your new Azure Machine Learning workspace.
If you have multiple notebooks, make sure to select a default AML workspace to use when launching your notebooks. For example: :::image type="content" source="media/notebooks/default-machine-learning.png" alt-text="Select a default AML workspace for your notebooks.":::
-## Launch a notebook in your Azure ML workspace
+## Launch a notebook in your Azure Machine Learning workspace
-After you've created an AML workspace, start launching your notebooks in your Azure ML workspace, from Microsoft Sentinel.
+After you create an Azure Machine Learning workspace, launch your notebooks in that workspace from Microsoft Sentinel.
-
-1. From the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Notebooks** > **Templates**, where you can see notebooks that Microsoft Sentinel provides.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Notebooks**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Notebooks**.
+1. Select the **Templates** tab to see the notebooks that Microsoft Sentinel provides.
1. Select a notebook to view its description, required data types, and data sources.
- When you've found the notebook you want to use, select **Create from template** and **Save** to clone it into your own workspace.
+1. When you find the notebook you want to use, select **Create from template** and **Save** to clone it into your own workspace.
- Edit the name as needed. If the notebook already exists in your workspace, you can overwrite the existing notebook or create a new one. By default, your notebook will be saved in /Users/<Your_User_Name>/ directory of selected AML workspace.
+1. Edit the name as needed. If the notebook already exists in your workspace, overwrite the existing notebook or create a new one. By default, your notebook is saved in the /Users/<Your_User_Name>/ directory of the selected AML workspace.
:::image type="content" source="media/notebooks/save-notebook.png" alt-text="Save a notebook to clone it to your own workspace.":::
After you've created an AML workspace, start launching your notebooks in your Az
Only you can see and use the compute instances you create. Your user files are stored separately from the VM and are shared among all compute instances in the workspace.
- If you are creating a new compute instance in order to test your notebooks, create your compute instance with the **General Purpose** category.
+ If you're creating a new compute instance in order to test your notebooks, create your compute instance with the **General Purpose** category.
- The kernel is also shown at the top right of your Azure ML window. If the kernel you need isn't selected, select a different version from the dropdown list.
+ The kernel is also shown at the top right of your Azure Machine Learning window. If the kernel you need isn't selected, select a different version from the dropdown list.
-1. Once your notebook server is created and started, you can starting running your notebook cells. In each cell, select the **Run** icon to run your notebook code.
+1. Once your notebook server is created and started, run your notebook cells. In each cell, select the **Run** icon to run your notebook code.
For more information, see [Command mode shortcuts.](../machine-learning/how-to-run-jupyter-notebooks.md)
print("2 + 2 =", y)
```
-The sample code shown above produces this output:
+The sample code produces this output:
```python Congratulations, you just ran this code cell
The output is:
## Download all Microsoft Sentinel notebooks
-This section describes how to use Git to download all the notebooks available in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/), from inside a Microsoft Sentinel notebook, directly to your Azure ML workspace.
+This section describes how to use Git to download all the notebooks available in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/), from inside a Microsoft Sentinel notebook, directly to your Azure Machine Learning workspace.
-Having Microsoft Sentinel notebooks stored in your Azure ML workspace allows you to keep them updated easily.
+Storing the Microsoft Sentinel notebooks in your Azure Machine Learning workspace allows you to keep them updated easily.
1. From a Microsoft Sentinel notebook, enter the following code into an empty cell, and then run the cell:
Having Microsoft Sentinel notebooks stored in your Azure ML workspace allows you
!git clone https://github.com/Azure/Azure-Sentinel-Notebooks.git azure-sentinel-nb ```
- A copy of the GitHub repository contents is created in the **azure-Sentinel-nb** directory on your user folder in your Azure ML workspace.
+ A copy of the GitHub repository contents is created in the **azure-sentinel-nb** directory in your user folder in your Azure Machine Learning workspace.
1. Copy the notebooks you want from this folder to your working directory.
Having Microsoft Sentinel notebooks stored in your Azure ML workspace allows you
!cd azure-sentinel-nb && git pull ```
-## Next steps
--- [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)-- [Integrate notebooks with Azure Synapse (Public preview)](notebooks-with-synapse.md)-
-Other resources:
-- Use notebooks shared in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) as useful tools, illustrations, and code samples that you can use when developing your own notebooks.--- Submit feedback, suggestions, requests for features, contributed notebooks, bug reports or improvements and additions to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue or fork and upload a contribution.--- Learn more about using notebooks in threat hunting and investigation by exploring some notebook templates, such as [Credential Scan on Azure Log Analytics](https://www.youtube.com/watch?v=OWjXee8o04M) and Guided Investigation - Process Alerts.-
- Find more notebook templates in the Microsoft Sentinel > **Notebooks** > **Templates** tab.
--- **Find more notebooks** in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks):-
- - The [`Example-Notebooks`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/example-notebooks) directory includes sample notebooks that are saved with data that you can use to show intended output.
-
- - The [`HowTos`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/how-tos) directory includes notebooks that describe concepts such as setting your default Python version, creating Microsoft Sentinel bookmarks from a notebook, and more.
-
-For more information, see:
--- [Create your first Microsoft Sentinel notebook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/creating-your-first-microsoft-sentinel-notebook/ba-p/2977745) (Blog series)
+## Related content
-- [Tutorial: Microsoft Sentinel notebooks - Getting started](https://www.youtube.com/results?search_query=azazure+sentinel+notebooks) (Video)-- [Tutorial: Edit and run Jupyter notebooks without leaving Azure ML studio](https://www.youtube.com/watch?v=AAj-Fz0uCNk) (Video)-- [Webinar: Microsoft Sentinel notebooks fundamentals](https://www.youtube.com/watch?v=rewdNeX6H94)-- [Proactively hunt for threats](hunting.md)-- [Use bookmarks to save interesting information while hunting](bookmarks.md)-- [Jupyter, msticpy, and Microsoft Sentinel](https://msticpy.readthedocs.io/en/latest/getting_started/JupyterAndAzureSentinel.html)
+- [Jupyter notebooks with Microsoft Sentinel hunting capabilities](notebooks.md)
+- [Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks.md
Title: Use notebooks with Microsoft Sentinel for security hunting
-description: Learn about Jupyter notebooks with the Microsoft Sentinel hunting capabilities.
+ Title: Jupyter notebooks with Microsoft Sentinel hunting capabilities
+description: Learn about Jupyter notebooks in Microsoft Sentinel for security hunting.
Previously updated : 01/05/2023 Last updated : 03/07/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
-# Use Jupyter notebooks to hunt for security threats
+# Jupyter notebooks with Microsoft Sentinel hunting capabilities
Jupyter notebooks combine full programmability with a huge collection of libraries for machine learning, visualization, and data analysis. These attributes make Jupyter a compelling tool for security investigation and hunting. The foundation of Microsoft Sentinel is the data store; it combines high-performance querying, dynamic schema, and scales to massive data volumes. The Azure portal and all Microsoft Sentinel tools use a common API to access this data store. The same API is also available for external tools such as [Jupyter](https://jupyter.org/) notebooks and Python. + ## When to use Jupyter notebooks While many common tasks can be carried out in the portal, Jupyter extends the scope of what you can do with this data. For example, use notebooks to: -- **Perform analytics** that aren't provided out-of-the box in Microsoft Sentinel, such as some Python machine learning features-- **Create data visualizations** that aren't provided out-of-the box in Microsoft Sentinel, such as custom timelines and process trees
+- **Perform analytics** that aren't provided out of the box in Microsoft Sentinel, such as some Python machine learning features
+- **Create data visualizations** that aren't provided out of the box in Microsoft Sentinel, such as custom timelines and process trees
- **Integrate data sources** outside of Microsoft Sentinel, such as an on-premises data set.
-We've integrated the Jupyter experience into the Azure portal, making it easy for you to create and run notebooks to analyze your data. The *Kqlmagic* library provides the glue that lets you take [KQL](https://kusto.azurewebsites.net/docs/kusto/query/https://docsupdatetracker.net/index.html) queries from Microsoft Sentinel and run them directly inside a notebook.
+We integrated the Jupyter experience into the Azure portal, making it easy for you to create and run notebooks to analyze your data. The *Kqlmagic* library provides the glue that lets you take Kusto Query Language (KQL) queries from Microsoft Sentinel and run them directly inside a notebook.
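For illustration, a typical Kqlmagic flow in a notebook looks like the following sketch. The workspace ID is a placeholder, and the connection-string syntax can vary across Kqlmagic versions, so treat this as a shape rather than a recipe:

```python
# Load Kqlmagic, connect to the Log Analytics workspace behind
# Microsoft Sentinel, then run a KQL query inline (placeholder values).
%reload_ext Kqlmagic
%kql loganalytics://code().workspace("<your-workspace-id>")
%kql SecurityAlert | take 5
```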
Several notebooks, developed by some of Microsoft's security analysts, are packaged with Microsoft Sentinel: - Some of these notebooks are built for a specific scenario and can be used as-is. - Others are intended as samples to illustrate techniques and features that you can copy or adapt for use in your own notebooks.
-Other notebooks may also be imported from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/).
+Import other notebooks from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/).
## How Jupyter notebooks work
The Microsoft Sentinel notebooks use many popular Python libraries such as *pand
- Statistics and numerical computing - Machine learning and deep learning
-To avoid having to type or paste complex and repetitive code into notebook cells, most Python notebooks rely on third-party libraries called *packages*. To use a package in a notebook, you need to both install and import the package. Azure ML Compute has most common packages pre-installed. Make sure that you import the package, or the relevant part of the package, such as a module, file, function, or class.
+To avoid having to type or paste complex and repetitive code into notebook cells, most Python notebooks rely on third-party libraries called *packages*. To use a package in a notebook, you need to both install and import the package. Azure Machine Learning Compute has most common packages pre-installed. Make sure that you import the package, or the relevant part of the package, such as a module, file, function, or class.
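For example, an illustrative cell like this one installs a package into the kernel and then imports only the part the notebook needs (the package and names here are examples, not requirements):

```python
# Install once per environment, then import only what the notebook uses.
%pip install pandas

# Import a single class from the package rather than the whole namespace.
from pandas import DataFrame

events = DataFrame({"Account": ["alice", "bob"], "LogonCount": [3, 7]})
```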
Microsoft Sentinel notebooks use a Python package called [MSTICPy](https://github.com/Microsoft/msticpy/), which is a collection of cybersecurity tools for data retrieval, analysis, enrichment, and visualization. MSTICPy tools are designed specifically to help with creating notebooks for hunting and investigation and we're actively working on new features and improvements. For more information, see: - [MSTIC Jupyter and Python Security Tools documentation](https://msticpy.readthedocs.io/)-- [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)
+- [Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)
- [Advanced configurations for Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebooks-msticpy-advanced.md) ## Find notebooks
-From the Azure portal, go to **Microsoft Sentinel** > **Threat management** > **Notebooks**, to see notebooks that Microsoft Sentinel provides. For more notebooks built by Microsoft or contributed from the community, go to [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/).
+In Microsoft Sentinel, select **Notebooks** to see notebooks that Microsoft Sentinel provides. Learn more about using notebooks in threat hunting and investigation by exploring notebook templates like **Credential Scan on Azure Log Analytics** and **Guided Investigation - Process Alerts**.
+
+For more notebooks built by Microsoft or contributed from the community, go to [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks/). Use notebooks shared in the Microsoft Sentinel GitHub repository as useful tools, illustrations, and code samples that you can use when developing your own notebooks.
+
+- The [`Sample-Notebooks`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/example-notebooks) directory includes sample notebooks that are saved with data that you can use to show intended output.
+
+- The [`HowTos`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/how-tos) directory includes notebooks that describe concepts such as setting your default Python version, creating Microsoft Sentinel bookmarks from a notebook, and more.
## Manage access to Microsoft Sentinel notebooks To use Jupyter notebooks in Microsoft Sentinel, you must first have the right permissions, depending on your user role.
-While you can run Microsoft Sentinel notebooks in JupyterLab or Jupyter classic, in Microsoft Sentinel, notebooks are run on an [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) (Azure ML) platform. To run notebooks in Microsoft Sentinel, you must have appropriate access to both Microsoft Sentinel workspace and an [Azure ML workspace](../machine-learning/concept-workspace.md).
+While you can run Microsoft Sentinel notebooks in JupyterLab or Jupyter classic, in Microsoft Sentinel, notebooks are run on an [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) platform. To run notebooks in Microsoft Sentinel, you must have appropriate access to both Microsoft Sentinel workspace and an [Azure Machine Learning workspace](../machine-learning/concept-workspace.md).
|Permission |Description | |||
-|**Microsoft Sentinel permissions** | Like other Microsoft Sentinel resources, to access notebooks on Microsoft Sentinel Notebooks blade, a Microsoft Sentinel Reader, Microsoft Sentinel Responder, or Microsoft Sentinel Contributor role is required. <br><br>For more information, see [Permissions in Microsoft Sentinel](roles.md).|
-|**Azure Machine Learning permissions** | An Azure Machine Learning workspace is an Azure resource. Like other Azure resources, when a new Azure Machine Learning workspace is created, it comes with default roles. You can add users to the workspace and assign them to one of these built-in roles. For more information, see [Azure Machine Learning default roles](../machine-learning/how-to-assign-roles.md) and [Azure built-in roles](../role-based-access-control/built-in-roles.md). <br><br> **Important**: Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a workspace may not have owner access to the resource group that contains the workspace. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md). <br><br>If you're an owner of an Azure ML workspace, you can add and remove roles for the workspace and assign roles to users. For more information, see:<br> - [Azure portal](../role-based-access-control/role-assignments-portal.md)<br> - [PowerShell](../role-based-access-control/role-assignments-powershell.md)<br> - [Azure CLI](../role-based-access-control/role-assignments-cli.md)<br> - [REST API](../role-based-access-control/role-assignments-rest.md)<br> - [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)<br> - [Azure Machine Learning CLI ](../machine-learning/how-to-assign-roles.md#manage-workspace-access)<br><br>If the built-in roles are insufficient, you can also create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that workspace. You can make the role available at a specific workspace level, a specific resource group level, or a specific subscription level. For more information, see [Create custom role](../machine-learning/how-to-assign-roles.md#create-custom-role). |
-
-## Next steps
--- [Hunt for security threats with Jupyter notebooks](notebooks-hunt.md)-- [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)-- [Integrate notebooks with Azure Synapse (Public preview)](notebooks-with-synapse.md)
+|**Microsoft Sentinel permissions** | Like other Microsoft Sentinel resources, to access notebooks in Microsoft Sentinel, a Microsoft Sentinel Reader, Microsoft Sentinel Responder, or Microsoft Sentinel Contributor role is required. <br><br>For more information, see [Permissions in Microsoft Sentinel](roles.md).|
+|**Azure Machine Learning permissions** | An Azure Machine Learning workspace is an Azure resource. Like other Azure resources, when a new Azure Machine Learning workspace is created, it comes with default roles. You can add users to the workspace and assign them to one of these built-in roles. For more information, see [Azure Machine Learning default roles](../machine-learning/how-to-assign-roles.md) and [Azure built-in roles](../role-based-access-control/built-in-roles.md). <br><br> **Important**: Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a workspace might not have owner access to the resource group that contains the workspace. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md). <br><br>If you're an owner of an Azure Machine Learning workspace, you can add and remove roles for the workspace and assign roles to users. For more information, see:<br> - [Azure portal](../role-based-access-control/role-assignments-portal.md)<br> - [PowerShell](../role-based-access-control/role-assignments-powershell.md)<br> - [Azure CLI](../role-based-access-control/role-assignments-cli.md)<br> - [REST API](../role-based-access-control/role-assignments-rest.md)<br> - [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)<br> - [Azure Machine Learning CLI ](../machine-learning/how-to-assign-roles.md#manage-workspace-access)<br><br>If the built-in roles are insufficient, you can also create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that workspace. You can make the role available at a specific workspace level, a specific resource group level, or a specific subscription level. For more information, see [Create custom role](../machine-learning/how-to-assign-roles.md#create-custom-role). |
-Other resources:
-- Use notebooks shared in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks) as useful tools, illustrations, and code samples that you can use when developing your own notebooks.
+## Submit feedback for a notebook
-- Submit feedback, suggestions, requests for features, contributed notebooks, bug reports or improvements and additions to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue or fork and upload a contribution.
+Submit feedback, requests for features, bug reports, or improvements to existing notebooks. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue, or fork and upload a contribution.
-- Learn more about using notebooks in threat hunting and investigation by exploring some notebook templates, such as [Credential Scan on Azure Log Analytics](https://www.youtube.com/watch?v=OWjXee8o04M) and Guided Investigation - Process Alerts.
+## Related content
- Find more notebook templates in the Microsoft Sentinel > **Notebooks** > **Templates** tab.
--- **Find more notebooks** in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel-Notebooks):-
- - The [`Sample-Notebooks`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/example-notebooks) directory includes sample notebooks that are saved with data that you can use to show intended output.
-
- - The [`HowTos`](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/tutorials-and-examples/how-tos) directory includes notebooks that describe concepts such as setting your default Python version, creating Microsoft Sentinel bookmarks from a notebook, and more.
+- [Hunt for security threats with Jupyter notebooks](notebooks-hunt.md)
+- [Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md)
+- [Proactively hunt for threats](hunting.md)
+- [Keep track of data during hunting with Microsoft Sentinel](bookmarks.md)
-For more information, see:
+For blogs, videos, and other resources, see:
- [Create your first Microsoft Sentinel notebook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/creating-your-first-microsoft-sentinel-notebook/ba-p/2977745) (Blog series)--- [Tutorial: Microsoft Sentinel notebooks - Getting started](https://www.youtube.com/results?search_query=azazure+sentinel+notebooks) (Video)-- [Tutorial: Edit and run Jupyter notebooks without leaving Azure ML studio](https://www.youtube.com/watch?v=AAj-Fz0uCNk) (Video)-- [Webinar: Microsoft Sentinel notebooks fundamentals](https://www.youtube.com/watch?v=rewdNeX6H94)-- [Proactively hunt for threats](hunting.md)-- [Use bookmarks to save interesting information while hunting](bookmarks.md)
+- [Tutorial: Microsoft Sentinel notebooks - Getting started](https://www.youtube.com/watch?v=SaEQJfoe8Io) (Video)
+- [Tutorial: Edit and run Jupyter notebooks without leaving Azure Machine Learning studio](https://www.youtube.com/watch?v=AAj-Fz0uCNk) (Video)
+- [Detect Credential Leaks using Azure Sentinel Notebooks](https://www.youtube.com/watch?v=OWjXee8o04M) (Video)
+- [Webinar: Microsoft Sentinel notebooks fundamentals](https://www.youtube.com/watch?v=rewdNeX6H94) (Video)
- [Jupyter, msticpy, and Microsoft Sentinel](https://msticpy.readthedocs.io/en/latest/getting_started/JupyterAndAzureSentinel.html)
sentinel Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/overview.md
Microsoft Sentinel natively incorporates proven Azure services, like Log Analyti
## Collect data by using data connectors
-To on-board Microsoft Sentinel, you first need to [connect to your data sources](connect-data-sources.md).
+To onboard Microsoft Sentinel, you first need to [connect to your data sources](configure-data-connector.md).
Microsoft Sentinel comes with many connectors for Microsoft solutions that are available out of the box and provide real-time integration. Some of these connectors include:
Microsoft Sentinel comes with many connectors for Microsoft solutions that are a
Microsoft Sentinel has built-in connectors to the broader security and applications ecosystems for non-Microsoft solutions. You can also use common event format, Syslog, or REST-API to connect your data sources with Microsoft Sentinel.
-For more information, see [Find your data connector](data-connectors-reference.md).
+For more information, see the following articles:
+- [Microsoft Sentinel data connectors](connect-data-sources.md)
+- [Find your data connector](data-connectors-reference.md)
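As one hedged illustration of the REST-based path, the following sketch uses the `azure-monitor-ingestion` Python client to push custom records into a Log Analytics workspace. The endpoint, rule ID, and stream name are placeholders, and a data collection endpoint and rule must already exist:

```python
# Sketch only: send custom events via the Logs Ingestion API.
# All identifiers below are placeholders for your own resources.
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

client = LogsIngestionClient(
    endpoint="https://<your-dce>.ingest.monitor.azure.com",
    credential=DefaultAzureCredential(),
)

client.upload(
    rule_id="<data-collection-rule-immutable-id>",
    stream_name="Custom-MyEvents_CL",
    logs=[{"TimeGenerated": "2024-04-01T00:00:00Z", "Message": "example event"}],
)
```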
+ ## Create interactive reports by using workbooks
After you [onboard to Microsoft Sentinel](quickstart-onboard.md), monitor your d
Workbooks display differently in Microsoft Sentinel than in Azure Monitor. But it may be useful for you to see how to [create a workbook in Azure Monitor](../azure-monitor/visualize/workbooks-create-workbook.md). Microsoft Sentinel allows you to create custom workbooks across your data. Microsoft Sentinel also comes with built-in workbook templates to allow you to quickly gain insights across your data as soon as you connect a data source. Workbooks are intended for SOC engineers and analysts of all tiers to visualize data.
sentinel Playbook Triggers Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/playbook-triggers-actions.md
Title: Use triggers and actions in Microsoft Sentinel playbooks | Microsoft Docs description: Learn in greater depth how to give your playbooks access to the information in your Microsoft Sentinel alerts and incidents and use that information to take remedial actions.- Previously updated : 11/09/2021-++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++ # Use triggers and actions in Microsoft Sentinel playbooks
For an introduction to playbooks, see [Automate threat response with playbooks i
For the complete specification of the Microsoft Sentinel connector, see the [Logic Apps connector documentation](/connectors/azuresentinel/). + ## Permissions required | Roles \ Connector components | Triggers | "Get" actions | Update incident,<br>add a comment |
You can supply the following JSON code to generate the schema. The code shows th
This will create a **For each** loop, since an incident contains an array of alerts.
-1. Click on the **Use sample payload to generate schema** link.
+1. Select the **Use sample payload to generate schema** link.
![Select 'use sample payload to generate schema' link](./media/playbook-triggers-actions/generate-schema-link.png)
-1. Supply a sample payload. You can find a sample payload by looking in Log Analytics (the **Logs** blade) for another instance of this alert, and copying the custom details object (under **Extended Properties**). In the screenshot below, we used the JSON code shown above.
+1. Supply a sample payload. You can find a sample payload by looking in Log Analytics for another instance of this alert, and copying the custom details object (under **Extended Properties**). Access Log Analytics data either in the **Logs** page in the Azure portal or the **Advanced hunting** page in the Defender portal. In the screenshot below, we used the JSON code shown above; a hypothetical payload of the same shape is sketched after this step.
![Enter sample JSON payload.](./media/playbook-triggers-actions/sample-payload.png)
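To make the expected shape concrete, here's a hypothetical custom-details object of the kind you might paste as the sample payload, shaped here as a dictionary of string arrays. Every field name below is invented for illustration:

```python
# Hypothetical custom-details payload (field names are invented).
import json

sample_custom_details = {
    "DestinationIp": ["203.0.113.10"],
    "SourceIp": ["10.0.0.7"],
    "AccountName": ["contoso-admin"],
}

print(json.dumps(sample_custom_details, indent=2))
```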
sentinel Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/restore.md
Title: Restore archived logs from search - Microsoft Sentinel
description: Learn how to restore archived logs from search job results. Previously updated : 01/20/2022 Last updated : 03/03/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Restore archived logs from search
Restore data from an archived log to use in high performing queries and analytic
Before you restore data in an archived log, see [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md) and [Restore in Azure Monitor](../azure-monitor/logs/restore.md). ## Restore archived log data
-To restore archived log data in Microsoft Sentinel, specify the table and time range for the data you want to restore. Within a few minutes, the log data is available within the Log Analytics workspace. Then you can use the data in high-performance queries that support full KQL.
+To restore archived log data in Microsoft Sentinel, specify the table and time range for the data you want to restore. Within a few minutes, the log data is available within the Log Analytics workspace. Then you can use the data in high-performance queries that support full Kusto Query Language (KQL).
You can restore archived data directly from the **Search** page or from a saved search.
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-1. Under **General**, select **Search**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Search**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Search**.
1. Restore log data in one of two ways: - At the top of **Search** page, select **Restore**. :::image type="content" source="media/restore/search-page-restore.png" alt-text="Screenshot of restore button at the top of the search page.":::
You can restore archived data directly from the **Search** page or from a saved
View the status and results of the log data restore by going to the **Restoration** tab. You can view the restored data when the status of the restore job shows **Data Available**.
-1. In your Microsoft Sentinel workspace, select **Search** > **Restoration**.
+1. In Microsoft Sentinel, select **Search** > **Restoration**.
:::image type="content" source="media/restore/restoration-tab.png" alt-text="Screenshot of the restoration tab on the search page.":::
View the status and results of the log data restore by going to the **Restoratio
To save costs, we recommend you delete the restored table when you no longer need it. When you delete a restored table, Azure doesn't delete the underlying source data.
-1. In your Microsoft Sentinel workspace, select **Search** > **Restoration**.
+1. In Microsoft Sentinel, select **Search** > **Restoration**.
1. Identify the table you want to delete. 1. Select **Delete** for that table row.
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Title: Roles and permissions in Microsoft Sentinel
description: Learn how Microsoft Sentinel assigns permissions to users using Azure role-based access control, and identify the allowed actions for each role. Previously updated : 09/29/2023 Last updated : 03/07/2024 +
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Roles and permissions in Microsoft Sentinel This article explains how Microsoft Sentinel assigns permissions to user roles and identifies the allowed actions for each role. Microsoft Sentinel uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. This article is part of the [Deployment guide for Microsoft Sentinel](deploy-overview.md).
-Use Azure RBAC to create and assign roles within your security operations team to grant appropriate access to Microsoft Sentinel. The different roles give you fine-grained control over what Microsoft Sentinel users can see and do. Azure roles can be assigned in the Microsoft Sentinel workspace directly (see note below), or in a subscription or resource group that the workspace belongs to, which Microsoft Sentinel inherits.
+Use Azure RBAC to create and assign roles within your security operations team to grant appropriate access to Microsoft Sentinel. The different roles give you fine-grained control over what Microsoft Sentinel users can see and do. Azure roles can be assigned in the Microsoft Sentinel workspace directly, or in a subscription or resource group that the workspace belongs to, which Microsoft Sentinel inherits.
+ ## Roles and permissions for working in Microsoft Sentinel
+Grant the appropriate access to the data in your workspace by using built-in roles. You might need to grant more roles or specific permissions depending on a user's job tasks.
++ ### Microsoft Sentinel-specific roles
-**All Microsoft Sentinel built-in roles grant read access to the data in your Microsoft Sentinel workspace.**
+All Microsoft Sentinel built-in roles grant read access to the data in your Microsoft Sentinel workspace.
- [**Microsoft Sentinel Reader**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-reader) can view data, incidents, workbooks, and other Microsoft Sentinel resources. -- [**Microsoft Sentinel Responder**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) can, in addition to the above, manage incidents (assign, dismiss, etc.).
+- [**Microsoft Sentinel Responder**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) can, in addition to the permissions for Microsoft Sentinel Reader, manage incidents, such as assigning, dismissing, and changing incidents.
-- [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) can, in addition to the above, install and update solutions from content hub, create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.
+- [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) can, in addition to the permissions for Microsoft Sentinel Responder, install and update solutions from the content hub, and create and edit Microsoft Sentinel resources like workbooks, analytics rules, and more.
- [**Microsoft Sentinel Playbook Operator**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) can list, view, and manually run playbooks. - [**Microsoft Sentinel Automation Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-automation-contributor) allows Microsoft Sentinel to add playbooks to automation rules. It isn't meant for user accounts.
-> [!NOTE]
->
-> - For best results, assign these roles to the **resource group** that contains the Microsoft Sentinel workspace. This way, the roles apply to all the resources that support Microsoft Sentinel, as those resources should also be placed in the same resource group.
->
-> - As another option, assign the roles directly to the Microsoft Sentinel **workspace** itself. If you do this, you must also assign the same roles to the SecurityInsights **solution resource** in that workspace. You might need to assign them to other resources as well, and you will need to constantly manage role assignments to resources.
+For best results, assign these roles to the **resource group** that contains the Microsoft Sentinel workspace. This way, the roles apply to all the resources that support Microsoft Sentinel, as those resources should also be placed in the same resource group.
+
+As another option, assign the roles directly to the Microsoft Sentinel **workspace** itself. If you do that, you must assign the same roles to the SecurityInsights **solution resource** in that workspace. You might also need to assign them to other resources, and continually manage role assignments to the resources.
### Other roles and permissions
Users with particular job requirements might need to be assigned other roles or
- **Allow guest users to assign incidents**
- If a guest user needs to be able to assign incidents, you need to assign the [**Directory Reader**](../active-directory/roles/permissions-reference.md#directory-readers) to the user, in addition to the **Microsoft Sentinel Responder** role. Note that the Directory Reader role is *not* an Azure role but a Microsoft Entra role, and that regular (non-guest) users have this role assigned by default.
+ If a guest user needs to be able to assign incidents, you need to assign the [**Directory Reader**](../active-directory/roles/permissions-reference.md#directory-readers) role to the user, in addition to the **Microsoft Sentinel Responder** role. The Directory Reader role isn't an Azure role but a Microsoft Entra role, and regular (nonguest) users have this role assigned by default.
- **Create and delete workbooks**
Users with particular job requirements might need to be assigned other roles or
### Azure and Log Analytics roles you might see assigned
-When you assign Microsoft Sentinel-specific Azure roles, you might come across other Azure and Log Analytics roles that might have been assigned to users for other purposes. Note that these roles grant a wider set of permissions that include access to your Microsoft Sentinel workspace and other resources:
+When you assign Microsoft Sentinel-specific Azure roles, you might come across other Azure and Log Analytics roles that might be assigned to users for other purposes. These roles grant a wider set of permissions that include access to your Microsoft Sentinel workspace and other resources:
- **Azure roles:** [Owner](../role-based-access-control/built-in-roles.md#owner), [Contributor](../role-based-access-control/built-in-roles.md#contributor), and [Reader](../role-based-access-control/built-in-roles.md#reader). Azure roles grant access across all your Azure resources, including Log Analytics workspaces and Microsoft Sentinel resources.
After understanding how roles and permissions work in Microsoft Sentinel, you ca
| | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run and modify playbooks. | | **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks | -
-> [!TIP]
-> More roles might be required depending on the data you ingest or monitor. For example, Microsoft Entra roles may be required, such as the Global Administrator or Security Administrator roles, to set up data connectors for services in other Microsoft portals.
->
+More roles might be required depending on the data you ingest or monitor. For example, Microsoft Entra roles might be required, such as the Global Administrator or Security Administrator roles, to set up data connectors for services in other Microsoft portals.
## Resource-based access control
-You might have some users who need to access only specific data in your Microsoft Sentinel workspace, but shouldn't have access to the entire Microsoft Sentinel environment. For example, you might want to provide a non-security operations (non-SOC) team with access to the Windows event data for the servers they own.
+You might have some users who need to access only specific data in your Microsoft Sentinel workspace, but shouldn't have access to the entire Microsoft Sentinel environment. For example, you might want to provide a team outside of security operations with access to the Windows event data for the servers they own.
-In such cases, we recommend that you configure your role-based access control (RBAC) based on the resources that are allowed to your users, instead of providing them with access to the Microsoft Sentinel workspace or specific Microsoft Sentinel features. This method is also known as setting up resource-context RBAC. [Learn more about RBAC](resource-context-rbac.md)
+In such cases, we recommend that you configure your role-based access control (RBAC) based on the resources that are allowed to your users, instead of providing them with access to the Microsoft Sentinel workspace or specific Microsoft Sentinel features. This method is also known as setting up resource-context RBAC. For more information, see [Manage access to Microsoft Sentinel data by resource](resource-context-rbac.md).
## Next steps
sentinel Deploy Data Connector Agent Container Other Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container-other-methods.md
- Title: Microsoft Sentinel solution for SAP® applications - manually deploy and configure the SAP data connector agent container using the command line
-description: This article shows you how to manually deploy the container that hosts the SAP data connector agent, using the Azure command line interface, in order to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
---- Previously updated : 01/03/2024--
-# Manually deploy and configure the container hosting the SAP data connector agent
-
-This article shows you how to use the Azure command line interface to deploy the container that hosts the SAP data connector agent, and create new SAP systems under the agent. You use this connector agent to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.
-
-Other ways to deploy the container and create SAP systems using the Azure portal or a *kickstart* script are described in [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md). These other methods make use of an Azure Key Vault to store SAP credentials, and are highly preferred over the method described here. You should use the manual deployment method only if none of the other options are available to you.
-
-## Deployment milestones
-
-Deployment of the Microsoft Sentinel Solution for SAP is divided into the following sections
-
-1. [Deployment overview](deployment-overview.md)
-
-1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
-
-1. [Prepare SAP environment](preparing-sap.md)
-
-1. [Deploy the Microsoft Sentinel solution for SAP® applications from the content hub](deploy-sap-security-content.md)
-
-1. **Deploy data connector agent (*You are here*)**
-
-1. [Configure Microsoft Sentinel Solution for SAP](deployment-solution-configuration.md)
-
-1. Optional deployment steps
- - [Configure auditing](configure-audit.md)
- - [Configure SAP data connector to use SNC](configure-snc.md)
-
-## Data connector agent deployment overview
-
-Read about the [deployment process](deploy-data-connector-agent-container.md#data-connector-agent-deployment-overview).
-
-## Prerequisites
-
-Read about the [prerequisites for deploying the agent container](deploy-data-connector-agent-container.md#prerequisites).
-
-## Deploy the data connector agent container manually
-
-1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-
-1. Install [Docker](https://www.docker.com/) on the VM, following the [recommended deployment steps](https://docs.docker.com/engine/install/) for the chosen operating system.
-
-1. Use the following commands (replacing `<SID>` with the name of the SAP instance) to create a folder to store the container configuration and metadata, and to download a sample systemconfig.json file (for older versions use the systemconfig.ini file) into that folder.
-
- ```bash
- sid=<SID>
- mkdir -p /opt/sapcon/$sid
- cd /opt/sapcon/$sid
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.json
- ```
-
- For agent versions released before June 22, 2023, use systemconfig.ini instead of systemconfig.json. Substitute the following line for the last line in the previous code block.
-
- ```bash
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
- ```
-
-1. Edit the systemconfig.json file (or the systemconfig.ini file for older agent versions) to [configure the relevant settings](reference-systemconfig.md).
-
-1. Run the following commands (replacing `<SID>` with the name of the SAP instance) to retrieve the latest container image, create a new container, and configure it to start automatically.
-
- ```bash
- sid=<SID>
- docker pull mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
- docker create --restart unless-stopped --name sapcon-$sid mcr.microsoft.com/azure-sentinel/solutions/sapcon
- ```
-
-1. Run the following command to copy the SDK into the container. Replace `<SID>` with the name of the SAP instance and `<sdkfilename>` with the full filename of the SAP NetWeaver SDK.
-
- ```bash
- sdkfile=<sdkfilename>
- sid=<SID>
- docker cp $sdkfile sapcon-$sid:/sapcon-app/inst/
- ```
-
-1. Run the following command (replacing `<SID>` with the name of the SAP instance) to start the container.
-
- ```bash
- sid=<SID>
- docker start sapcon-$sid
- ```
-
-
-## Next steps
-
-Once the connector is deployed, proceed to deploy Microsoft Sentinel Solution for SAP content:
-> [!div class="nextstepaction"]
-> [Deploy the solution content from the content hub](deploy-sap-security-content.md)
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
Title: Microsoft Sentinel solution for SAP® applications - deploy and configure the SAP data connector agent container
+ Title: Microsoft Sentinel solution for SAP applications - deploy and configure the SAP data connector agent container
description: This article shows you how to use the Azure portal to deploy the container that hosts the SAP data connector agent, in order to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel Solution for SAP.--++ Previously updated : 01/02/2024 Last updated : 04/01/2024 # Deploy and configure the container hosting the SAP data connector agent
-This article shows you how to deploy the container that hosts the SAP data connector agent, and how to use it to create connections to your SAP systems. This two-step process is required to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel solution for SAP® applications.
+This article shows you how to deploy the container that hosts the SAP data connector agent, and how to use it to create connections to your SAP systems. This two-step process is required to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel solution for SAP applications.
The recommended method to deploy the container and create connections to SAP systems is via the Azure portal. This method is explained in the article, and also demonstrated in [this video on YouTube](https://www.youtube.com/watch?v=bg0vmUvcQ5Q). Also shown in this article is a way to accomplish these objectives by calling a *kickstart* script from the command line.
-Alternatively, you can deploy the data connector agent manually by issuing individual commands from the command line, as described in [this article](deploy-data-connector-agent-container-other-methods.md).
+Alternatively, you can manually deploy the data connector agent's Docker container, such as in a Kubernetes cluster. For more information, open a support ticket.
> [!IMPORTANT]
-> Deploying the container and creating connections to SAP systems via the Azure portal is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Deploying the container and creating connections to SAP systems via the Azure portal is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Deployment milestones
Deployment of the Microsoft Sentinel solution for SAP® applications is divided
1. [Deploy the Microsoft Sentinel solution for SAP® applications from the content hub](deploy-sap-security-content.md)
-1. **Deploy data connector agent (*You are here*)**
+1. **Deploy the data connector agent (*You are here*)**
1. [Configure Microsoft Sentinel solution for SAP® applications](deployment-solution-configuration.md)
Deployment of the Microsoft Sentinel solution for SAP® applications is divided
## Data connector agent deployment overview
-For the Microsoft Sentinel solution for SAP® applications to operate correctly, you must first get your SAP data into Microsoft Sentinel. To accomplish this, you need to deploy the solution's SAP data connector agent.
+For the Microsoft Sentinel solution for SAP applications to operate correctly, you must first get your SAP data into Microsoft Sentinel. To accomplish this, you need to deploy the solution's SAP data connector agent.
-The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. We recommend that you install and configure this container using the Azure portal (in PREVIEW); however, you can choose to deploy the container using a *kickstart* script, or to [deploy the container manually](deploy-data-connector-agent-container-other-methods.md#deploy-the-data-connector-agent-container-manually).
+The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. We recommend that you install and configure this container using the Azure portal (in PREVIEW); however, you can choose to deploy the container using a *kickstart* script. If you want to deploy the agent's Docker container manually, such as in a Kubernetes cluster, open a support ticket for more details.
The agent connects to your SAP system to pull logs and other data from it, then sends those logs to your Microsoft Sentinel workspace. To do this, the agent has to authenticate to your SAP system&mdash;that's why you created a user and a role for the agent in your SAP system in the previous step.
You have a few choices of how and where to store your agent configuration inform
For any of these scenarios, you have the extra option to authenticate using SAP's Secure Network Communication (SNC) and X.509 certificates. This option provides a higher level of authentication security, but it's only a practical option in a limited set of scenarios.
-Ideally, your SAP configuration and authentication secrets can and should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
+Deploying the data connector agent container includes the following steps:
-- **A container on an Azure VM** can use an Azure [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to seamlessly access Azure Key Vault. Select the [**Managed identity** tab](deploy-data-connector-agent-container.md?tabs=managed-identity#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container using managed identity.
+1. [Create the virtual machine and set up access to your SAP system credentials](#create-a-virtual-machine-and-configure-access-to-your-credentials). This procedure may need to be performed by another team in your organization, but must be completed before the other procedures in this article.
- In the event that a system-assigned managed identity can't be used, the container can also authenticate to Azure Key Vault using a [Microsoft Entra ID registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md), or, as a last resort, a [**configuration file**](deploy-data-connector-agent-container.md?tabs=config-file#deploy-the-data-connector-agent-container).
+1. [Set up and deploy the data connector agent](#deploy-the-data-connector-agent).
-- **A container on an on-premises VM**, or **a VM in a third-party cloud environment**, can't use Azure managed identity, but can authenticate to Azure Key Vault using a [Microsoft Entra ID registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md). Select the [**Registered application** tab below](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container.--- If for some reason a registered-application service principal can't be used, you can use a [**configuration file**](reference-systemconfig.md), though this is not preferred.
+1. [Configure the agent to connect to an SAP system](#connect-to-a-new-sap-system).
## Prerequisites
-Before you deploy the data connector agent, make sure you have done the following:
+Before you deploy the data connector agent, make sure that you have all the deployment prerequisites in place. For more information, see [Prerequisites for deploying Microsoft Sentinel solution for SAP applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md).
+
+Also, if you plan to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), take the relevant preparatory steps. For more information, see [Deploy the Microsoft Sentinel for SAP data connector by using SNC](configure-snc.md).
+
+## Create a virtual machine and configure access to your credentials
+
+Ideally, your SAP configuration and authentication secrets should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
+
+- **A container on an Azure VM** can use an Azure [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to seamlessly access Azure Key Vault.
-- Follow the [Prerequisites for deploying Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md).-- If you plan to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), [take the preparatory steps for deploying the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md).-- Set up a Key Vault, using either a [managed identity](deploy-data-connector-agent-container.md?tabs=managed-identity#create-key-vault) or a [registered application](deploy-data-connector-agent-container.md?tabs=registered-application#create-key-vault) (links are to the procedures shown below). Make sure you have the necessary permissions.
- - If your circumstances do not allow for using Azure Key Vault, create a [**configuration file**](reference-systemconfig.md) to use instead.
-- For more information on these options, see the [overview section](#data-connector-agent-deployment-overview).
  If a system-assigned managed identity can't be used, the container can also authenticate to Azure Key Vault using a [Microsoft Entra ID registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md), or, as a last resort, a **configuration file**.
-## Deploy the data connector agent container
+- **A container on an on-premises VM**, or **a VM in a third-party cloud environment**, can't use Azure managed identity, but can authenticate to Azure Key Vault using a [Microsoft Entra ID registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md).
-This section has three steps:
-- In the first step, you [create the virtual machine and set up your access to your SAP system credentials](#create-virtual-machine-and-configure-access-to-your-credentials). (This step may need to be performed by other appropriate personnel, but it must be done first. See [Prerequisites](#prerequisites).)-- In the second step, you [set up and deploy the data connector agent](#deploy-the-data-connector-agent).-- In the third step, you configure the agent to [connect to an SAP system](#connect-to-a-new-sap-system).
+- If for some reason a registered-application service principal can't be used, you can use a [**configuration file**](reference-systemconfig.md), though this is not preferred.
+
+> [!NOTE]
+> This procedure may need to be performed by another team in your organization, but must be performed before the other procedures in this article.
+>
-### Create virtual machine and configure access to your credentials
+Select one of the following tabs, depending on how you plan to store and access your authentication credentials and configuration data.
-# [Managed identity](#tab/managed-identity)
+# [Managed identity](#tab/create-managed-identity)
-#### Create a managed identity with an Azure VM
+### Create a managed identity with an Azure VM
1. Run the following command to **Create a VM** in Azure (substitute actual names from your environment for the `<placeholders>`):
This section has three steps:
1. Copy the **systemAssignedIdentity** GUID, as it will be used in the coming steps. This is your **managed identity**.
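
    For reference, here's a minimal Azure CLI sketch of these two steps. The resource names and image are placeholders, and `--assign-identity` with no value requests a system-assigned identity:

    ```azurecli
    # Create a VM with a system-assigned managed identity (illustrative values)
    az vm create \
      --resource-group <ResourceGroupName> \
      --name <VMName> \
      --image Ubuntu2204 \
      --generate-ssh-keys \
      --assign-identity

    # Print the identity's object ID (the systemAssignedIdentity GUID)
    az vm show \
      --resource-group <ResourceGroupName> \
      --name <VMName> \
      --query identity.principalId --output tsv
    ```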
-# [Registered application](#tab/registered-application)
+# [Registered application](#tab/create-registered-application)
-#### Register an application to create an application identity
+### Register an application to create an application identity
1. Run the following command from the Azure command line to **create and register an application**:
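
    For example, a sketch of one way to do this with the Azure CLI, where the display name is a placeholder. The `az ad sp create-for-rbac` command creates the app registration, its service principal, and a client secret in a single call:

    ```azurecli
    # Create and register an application; the output includes appId, password, and tenant
    az ad sp create-for-rbac --name <AppRegistrationName>
    ```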
This section has three steps:
1. Before proceeding any further, create a virtual machine on which to deploy the agent. You can create this machine in Azure, in another cloud, or on-premises.
-# [Configuration file](#tab/config-file)
+# [Configuration file](#tab/create-config-file)
-#### Create a configuration file
+### Use a configuration file
Key Vault is the recommended method to store your authentication credentials and configuration data.
-If you are prevented from using Azure Key Vault, you can use a configuration file instead. See the appropriate reference file:
+If you are prevented from using Azure Key Vault, you can use a configuration file instead:
-- [Systemconfig.ini file reference](reference-systemconfig.md) (for agent versions deployed before June 22, 2023).-- [Systemconfig.json file reference](reference-systemconfig-json.md) (for versions deployed June 22 or later).
+1. Create a virtual machine on which to deploy the agent.
+1. Continue with deploying the data connector agent using the configuration file. For more information, see [Command line options](#command-line-options).
-Once you have the file prepared, but before proceeding any further, create a virtual machine on which to deploy the agent. Then, skip the Key Vault steps below and go directly to the step after them&mdash;[Deploy the data connector agent](#deploy-the-data-connector-agent).
+The configuration file is generated during the agent deployment. For more information, see:
+
+- [Systemconfig.json file reference](reference-systemconfig-json.md) (for agent versions deployed June 22, 2023, or later).
+- [Systemconfig.ini file reference](reference-systemconfig.md) (for agent versions deployed before June 22, 2023).
-#### Create Key Vault
+### Create a key vault
+
+This procedure describes how to create a key vault to store your agent configuration information, including your SAP authentication secrets. If you'll be using an existing key vault, skip directly to [step 2](#step2).
-1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`):
- (If you'll be using an existing key vault, ignore this step.)
+**To create your key vault**:
+
+1. Run the following commands, substituting actual names for the `<placeholder>` values:
    ```azurecli
    az keyvault create \
      --name <KeyVaultName> \
      --resource-group <KeyVaultResourceGroupName>
    ```
-1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these when you assign the key vault access policy and run the deployment script in the coming steps.
+1. <a name="step2"></a>Copy the name of your key vault and the name of its resource group. You'll need these when you assign the key vault access permissions and run the deployment script in the next steps.
-#### Assign a key vault access policy
+### Assign key vault access permissions
-1. Run the following command to **assign a key vault access policy** to the identity that you created and copied above (substitute actual names for the `<placeholders>`). Choose the appropriate tab for the type of identity you created to see the relevant command.
+1. In your key vault, assign the following Azure role-based access control or vault access policy permissions on the secrets scope to the [identity that you created and copied earlier](#create-a-virtual-machine-and-configure-access-to-your-credentials).
- # [Managed identity](#tab/managed-identity)
+ |Permission model |Permissions required |
+ |---|---|
+ |**Azure role-based access control** | Key Vault Secrets User |
+ |**Vault access policy** | `get`, `list` |
- Run this command to assign the access policy to your VM's **system-assigned managed identity**:
+ Use the options in the portal to assign the permissions, or run one of the following commands to assign key vault secrets permissions to your identity, substituting actual names for the `<placeholder>` values. Select the tab for the type of identity you created.
- ```azurecli
- az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --object-id <VM system-assigned identity> --secret-permissions get list set
- ```
+ # [Assign managed identity permissions](#tab/perms-managed-identity)
- This policy will allow the VM to list, read, and write secrets from/to the key vault.
+ Run one of the following commands, depending on your preferred Key Vault permission model, to assign key vault secrets permissions to your VM's system-assigned managed identity. The policy specified in the commands allows the VM to list and read secrets from the key vault.
- # [Registered application](#tab/registered-application)
+ - **Azure role-based access control permission model**:
- Run this command to assign the access policy to a **registered application identity**:
+    ```azurecli
+    az role assignment create --assignee-object-id <ManagedIdentityId> --role "Key Vault Secrets User" --scope /subscriptions/<KeyVaultSubscriptionId>/resourceGroups/<KeyVaultResourceGroupName>/providers/Microsoft.KeyVault/vaults/<KeyVaultName>
+    ```
- ```azurecli
- az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --spn <appId> --secret-permissions get list set
- ```
+ - **Vault access policy permission model**:
+
+    ```azurecli
+ az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --object-id <ManagedIdentityId> --secret-permissions get list
+ ```
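
    Optionally, verify from the VM itself that the managed identity can read secrets. A quick check, assuming the Azure CLI is installed on the VM:

    ```azurecli
    # Sign in as the VM's system-assigned managed identity
    az login --identity

    # Listing secrets succeeds only if the permissions above took effect
    az keyvault secret list --vault-name <KeyVaultName> --output table
    ```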
+
+ # [Assign registered application permissions](#tab/perms-registered-application)
+
+ Run one of the following commands, depending on your preferred Key Vault permission model, to assign key vault secrets permissions to the registered application identity that your VM uses. The policy specified in the commands allows the VM to list and read secrets from the key vault.
- This policy will allow the VM to list, read, and write secrets from/to the key vault.
+ - **Azure role-based access control permission model**:
- # [Configuration file](#tab/config-file)
+    ```azurecli
+ az role assignment create --assignee-object-id <ServicePrincipalObjectId> --role "Key Vault Secrets User" --scope /subscriptions/<KeyVaultSubscriptionId>/resourceGroups/<KeyVaultResourceGroupName>/providers/Microsoft.KeyVault/vaults/<KeyVaultName>
+ ```
- Move on, nothing to see here...
+ To find the object ID of the app registration's service principal, go to the Microsoft Entra ID portal's **Enterprise applications** page. Search for the name of the app registration there, and copy the **Object ID** value.
+
+ > [!IMPORTANT]
+ > Do not confuse the object ID from the **Enterprise Applications** page with the app registration's object ID found on the **App registrations** page. Only the object ID from the **Enterprise applications** page will work.
+
+ - **Vault access policy permission model**:
+
+    ```azurecli
+ az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --spn <ApplicationId> --secret-permissions get list
+ ```
+
+ To find the application (client) ID of the app registration, go to the Microsoft Entra ID portal's **App registrations** page. Search for the name of the app registration and copy the **Application (client) ID** value.
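
    You can also look up both values from the command line instead of the portal. A sketch, assuming a recent Azure CLI where the service principal's object ID is returned in the `id` property:

    ```azurecli
    # Application (client) ID of the app registration (use with --spn)
    az ad app list --display-name <AppRegistrationName> --query "[0].appId" --output tsv

    # Object ID of the service principal, as shown on the Enterprise applications page
    # (use with --assignee-object-id)
    az ad sp list --display-name <AppRegistrationName> --query "[0].id" --output tsv
    ```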
+1. In the same key vault, assign the following Azure role-based access control or vault access policy permissions on the secrets scope to the user configuring the data connector agent:
+
+ |Permission model |Permissions required |
+ |---|---|
+ |**Azure role-based access control** | Key Vault Secrets Officer |
+ |**Vault access policy** | `get`, `list`, `set`, `delete` |
+
+ Use the options in the portal to assign the permissions, or run one of the following commands to assign key vault secrets permissions to the user, substituting actual names for the `<placeholder>` values:
+
+ - **Azure role-based access control permission model**:
+
+    ```azurecli
+ az role assignment create --role "Key Vault Secrets Officer" --assignee <UserPrincipalName> --scope /subscriptions/<KeyVaultSubscriptionId>/resourceGroups/<KeyVaultResourceGroupName>/providers/Microsoft.KeyVault/vaults/<KeyVaultName>
+ ```
-### Deploy the data connector agent
+ - **Vault access policy permission model**:
+
+    ```azurecli
+    az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --upn <UserPrincipalName> --secret-permissions get list set delete
+    ```
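
    To confirm that the user's permissions took effect, you can round-trip a throwaway secret; a sketch, where the secret name is a placeholder:

    ```azurecli
    az keyvault secret set --vault-name <KeyVaultName> --name test-secret --value test
    az keyvault secret list --vault-name <KeyVaultName> --output table
    az keyvault secret delete --vault-name <KeyVaultName> --name test-secret
    ```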
+
+## Deploy the data connector agent
Now that you've created a VM and a Key Vault, your next step is to create a new agent and connect to one of your SAP systems.
Now that you've created a VM and a Key Vault, your next step is to create a new
1. **Download or transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download)** to the machine.
-# [Azure portal (Preview)](#tab/azure-portal/managed-identity)
+Use one of the following sets of procedures, depending on whether you're using a managed identity or a registered application to access your key vault, and whether you're using the Azure portal or the command line to deploy the agent:
+
+- [Azure portal options (Preview)](#azure-portal-options-preview)
+- [Command line options](#command-line-options)
+
+> [!TIP]
+> The Azure portal can only be used with an Azure key vault. If you're using a configuration file instead, use the relevant [command line option](#command-line-options).
+>
+
+### Azure portal options (Preview)
+
+Select one of the following tabs, depending on the type of identity you're using to access your key vault.
> [!NOTE] > If you previously installed SAP connector agents manually or using the kickstart scripts, you can't configure or manage those agents in the Azure portal. If you want to use the portal to configure and update agents, you must reinstall your existing agents using the portal.
-Create a new agent through the Azure portal, authenticating with a managed identity:
+# [Deploy with a managed identity](#tab/deploy-azure-managed-identity)
+
+This procedure describes how to create a new agent through the Azure portal, authenticating with a managed identity:
1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
-1. In the search bar, type *SAP*.
+1. In the search bar, enter *SAP*.
1. Select **Microsoft Sentinel for SAP** from the search results, and select **Open connector page**. 1. To collect data from an SAP system, you must follow these two steps:
-
+ 1. [Create a new agent](#create-a-new-agent) 1. [Connect the agent to a new SAP system](#connect-to-a-new-sap-system)
Create a new agent through the Azure portal, authenticating with a managed ident
1. Under **Create a collector agent** on the right, define the agent details:
+ |Name |Description |
+ |---|---|
+ |**Agent name** | Enter an agent name. The name can include the following characters: <ul><li> a-z<li> A-Z<li>0-9<li>_ (underscore)<li>. (period)<li>- (dash)</ul> |
+ |**Subscription** / **Key vault** | Select the **Subscription** and **Key vault** from their respective drop-downs. |
+ |**NWRFC SDK zip file path on the agent VM** | Enter the path in your VM that contains the SAP NetWeaver Remote Function Call (RFC) Software Development Kit (SDK) archive (.zip file). For example, */src/test/NWRFC.zip*. |
+ |**Enable SNC connection support** |Select to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC). <br><br>If you select this option, enter the path that contains the `sapgenpse` binary and `libsapcrypto.so` library, under **SAP Cryptographic Library path on the agent VM**. |
+ |**Authentication to Azure Key Vault** | To authenticate to your key vault using a managed identity, leave the default **Managed Identity** option selected. <br><br>You must have the managed identity set up ahead of time. For more information, see [Create a virtual machine and configure access to your credentials](#create-a-virtual-machine-and-configure-access-to-your-credentials). |
+
+ > [!NOTE]
+ > If you want to use an SNC connection, make sure to select **Enable SNC connection support** at this stage as you can't go back and enable an SNC connection after you finish deploying the agent. For more information, see [Deploy the Microsoft Sentinel for SAP data connector by using SNC](configure-snc.md).
+
+ For example:
+ :::image type="content" source="media/deploy-data-connector-agent-container/create-agent-managed-id.png" alt-text="Screenshot of the Create a collector agent area.":::
- - Enter the **Agent name**. The agent name can include these characters:
- - a-z
- - A-Z
- - 0-9
- - _ (underscore)
- - . (period)
- - \- (dash)
+1. Select **Create** and review the recommendations before you complete the deployment:
- - Select the **Subscription** and **Key Vault** from their respective drop-downs.
+ :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment.":::
- - Under **NWRFC SDK zip file path on the agent VM**, type the path in your VM that contains the SAP NetWeaver Remote Function Call (RFC) Software Development Kit (SDK) archive (.zip file). For example, */src/test/NWRFC.zip*.
+1. <a name="role"></a>Deploying the SAP data connector agent requires that you grant your agent's VM identity specific permissions to the Microsoft Sentinel workspace, using the **Microsoft Sentinel Business Applications Agent Operator** role.
- - To ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), select **Enable SNC connection support**. If you select this option, enter the path that contains the `sapgenpse` binary and `libsapcrypto.so` library, under **SAP Cryptographic Library path on the agent VM**.
-
- > [!NOTE]
- > Make sure that you select **Enable SNC connection support** at this stage if you want to use an SNC connection. You can't go back and enable an SNC connection after you finish deploying the agent.
-
- Learn more about [deploying the connector over a SNC connection](configure-snc.md).
+ To run the command in this step, you must be a resource group owner on your Microsoft Sentinel workspace. If you aren't a resource group owner on your workspace, this procedure can also be performed after the agent deployment is complete.
- - To authenticate to your key vault using a managed identity, leave the default option **Managed Identity**, selected. You must have the managed identity set up ahead of time, as mentioned in the [prerequisites](#prerequisites).
+ Copy the **Role assignment command** from step 1 and run it on your agent VM, replacing the `Object_ID` placeholder with your VM identity object ID. For example:
-1. Select **Create** and review the recommendations before you complete the deployment:
+ :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment-role.png" alt-text="Screenshot of the Copy icon for the command from step 1.":::
- :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment.":::
+ To find your VM identity object ID in Azure, go to **Enterprise applications** > **All applications**, and select your VM name. Copy the value of the **Object ID** field to use with your copied command.
-1. Under **Just one step before we finish**, select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to **Agent command**. After you've copied the command line, select **Close**.
+ This command assigns the **Microsoft Sentinel Business Applications Agent Operator** Azure role to your VM's managed identity, scoped only to the specified agent's data in the workspace.
- The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
+ > [!IMPORTANT]
+ > Assigning the **Microsoft Sentinel Business Applications Agent Operator** role via the CLI assigns the role only on the scope of the specified agent's data in the workspace. This is the most secure, and therefore recommended option.
+ >
+ > If you must assign the role [via the Azure portal](/azure/role-based-access-control/role-assignments-portal?tabs=delegate-condition), we recommend assigning the role on a small scope, such as only on the Microsoft Sentinel workspace.
- At this stage, the agent's **Health** status is **"Incomplete installation. Please follow the instructions"**. Once the agent is installed successfully, the status changes to **Agent healthy**. This update can take up to 10 minutes.
+1. Select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to the **Agent command** in step 2. For example:
+
+ :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment-agent.png" alt-text="Screenshot of the Agent command to copy in step 2.":::
+
+1. After you've copied the command line, select **Close**.
+
+ The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
+
+ At this stage, the agent's **Health** status is **"Incomplete installation. Please follow the instructions"**. Once the agent is installed successfully, the status changes to **Agent healthy**. This update can take up to 10 minutes. For example:
:::image type="content" source="media/deploy-data-connector-agent-container/installation-status.png" alt-text="Screenshot of the health statuses of API-based collector agents on the SAP data connector page." lightbox="media/deploy-data-connector-agent-container/installation-status.png":::
- The table displays the agent name and health status for only those agents you deploy via the Azure portal. Agents deployed using the command line will not be displayed here.
+ > [!NOTE]
+ > The table displays the agent name and health status for only those agents you deploy via the Azure portal. Agents deployed using the [command line](#command-line-options) aren't displayed here.
+ >
+
+1. On the VM where you plan to install the agent, open a terminal and run the **Agent command** that you copied in the previous step.
+
+ The script updates the OS components and installs the Azure CLI, Docker software, and other required utilities, such as jq, netcat, and curl.
-1. In your target VM (the VM where you plan to install the agent), open a terminal and run the command you copied in the previous step.
+ Supply additional parameters to the script as needed to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl). You can supply additional parameters to the script to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
-
If you need to copy your command again, select **View** :::image type="content" source="media/deploy-data-connector-agent-container/view-icon.png" border="false" alt-text="Screenshot of the View icon."::: to the right of the **Health** column and copy the command next to **Agent command** on the bottom right.
-#### Connect to a new SAP system
+### Connect to a new SAP system
-Anyone adding a new connection to an SAP system must have write permission to the [Key Vault where the SAP credentials are stored](#create-key-vault). See [Prerequisites](#prerequisites).
+Anyone adding a new connection to an SAP system must have write permission to the [key vault where the SAP credentials are stored](#create-a-key-vault). For more information, see [Create a virtual machine and configure access to your credentials](#create-a-virtual-machine-and-configure-access-to-your-credentials).
1. In the **Configuration** area, select **Add new system (Preview)**.
Anyone adding a new connection to an SAP system must have write permission to th
Learn more about how to [monitor your SAP system health](../monitor-sap-system-health.md).
-# [Azure portal (Preview)](#tab/azure-portal/registered-application)
+# [Deploy with a registered application](#tab/deploy-azure-registered-application)
-> [!NOTE]
-> If you previously installed SAP connector agents manually or using the kickstart scripts, you can't configure or manage those agents in the Azure portal. If you want to use the portal to configure and update agents, you must reinstall your existing agents using the portal.
-
-Create a new agent through the Azure portal, authenticating with a Microsoft Entra ID registered application:
+This procedure describes how to create a new agent through the Azure portal, authenticating with a Microsoft Entra ID registered application.
1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
-1. In the search bar, type *SAP*.
+1. In the search bar, enter *SAP*.
1. Select **Microsoft Sentinel for SAP** from the search results, and select **Open connector page**. 1. To collect data from an SAP system, you must follow these two steps:
-
+ 1. [Create a new agent](#create-a-new-agent-1) 1. [Connect the agent to a new SAP system](#connect-to-a-new-sap-system-1)
Create a new agent through the Azure portal, authenticating with a Microsoft Ent
1. Under **Create a collector agent** on the right, define the agent details:
+ |Name |Description |
+ |---|---|
+ |**Agent name** | Enter an agent name. The name can include the following characters: <ul><li> a-z<li> A-Z<li>0-9<li>_ (underscore)<li>. (period)<li>- (dash)</ul> |
+ |**Subscription** / **Key vault** | Select the **Subscription** and **Key vault** from their respective drop-downs. |
+ |**NWRFC SDK zip file path on the agent VM** | Enter the path in your VM that contains the SAP NetWeaver Remote Function Call (RFC) Software Development Kit (SDK) archive (.zip file). For example, */src/test/NWRFC.zip*. |
+ |**Enable SNC connection support** |Select to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC). <br><br>If you select this option, enter the path that contains the `sapgenpse` binary and `libsapcrypto.so` library, under **SAP Cryptographic Library path on the agent VM**. |
+ |**Authentication to Azure Key Vault** | To authenticate to your key vault using a registered application, select **Application Identity**. <br><br>You must have the registered application (application identity) set up ahead of time. For more information, see [Create a virtual machine and configure access to your credentials](#create-a-virtual-machine-and-configure-access-to-your-credentials). |
+
+ > [!NOTE]
+ > If you want to use an SNC connection, make sure to select **Enable SNC connection support** at this stage as you can't go back and enable an SNC connection after you finish deploying the agent. For more information, see [Deploy the Microsoft Sentinel for SAP data connector by using SNC](configure-snc.md).
+
+ For example:
+ :::image type="content" source="media/deploy-data-connector-agent-container/create-agent-app-id.png" alt-text="Screenshot of the Create a collector agent area.":::
- - Enter the **Agent name**. The agent name can include these characters:
- - a-z
- - A-Z
- - 0-9
- - _ (underscore)
- - . (period)
- - \- (dash)
+1. Select **Create** and review the recommendations before you complete the deployment:
- - Select the **Subscription** and **Key Vault** from their respective drop-downs.
+ :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment.":::
- - Under **NWRFC SDK zip file path on the agent VM**, type the path in your VM that contains the SAP NetWeaver Remote Function Call (RFC) Software Development Kit (SDK) archive (.zip file). For example, */src/test/NWRFC.zip*.
+1. Deploying the SAP data connector agent requires that you grant your agent's VM identity specific permissions to the Microsoft Sentinel workspace, using the **Microsoft Sentinel Business Applications Agent Operator** role.
- - To ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC), select **Enable SNC connection support**. If you select this option, enter the path that contains the `sapgenpse` binary and `libsapcrypto.so` library, under **SAP Cryptographic Library path on the agent VM**.
-
- > [!NOTE]
- > Make sure that you select **Enable SNC connection support** at this stage if you want to use an SNC connection. You can't go back and enable an SNC connection after you finish deploying the agent.
-
- Learn more about [deploying the connector over a SNC connection](configure-snc.md).
+ To run the command in this step, you must be a resource group owner on your Microsoft Sentinel workspace. If you aren't a resource group owner on your workspace, this procedure can also be performed after the agent deployment is complete.
- - To authenticate to your key vault using a registered application, select **Application Identity**. You must have the registered application (application identity) set up ahead of time, as mentioned in the [prerequisites](#prerequisites).
+ Copy the **Role assignment command** from step 1 and run it on your agent VM, replacing the `Object_ID` placeholder with your VM identity object ID. For example:
-1. Select **Create** and review the recommendations before you complete the deployment:
+ :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment-role.png" alt-text="Screenshot of the Copy icon for the command from step 1.":::
- :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment.png" alt-text="Screenshot of the final stage of the agent deployment.":::
+ To find your VM identity object ID in Azure, go to **Enterprise applications** > **All applications**, and select your application name. Copy the value of the **Object ID** field to use with your copied command.
+
+ This command assigns the **Microsoft Sentinel Business Applications Agent Operator** Azure role to your VM's application identity, scoped only to the specified agent's data in the workspace.
+
+ > [!IMPORTANT]
+ > Assigning the **Microsoft Sentinel Business Applications Agent Operator** role via the CLI assigns the role only on the scope of the specified agent's data in the workspace. This is the most secure, and therefore recommended option.
+ >
+ > If you must assign the role [via the Azure portal](/azure/role-based-access-control/role-assignments-portal?tabs=delegate-condition), we recommend assigning the role on a small scope, such as only on the Microsoft Sentinel workspace.
+
+1. Select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to the **Agent command** in step 2. For example:
-1. Under **Just one step before we finish**, select **Copy** :::image type="content" source="media/deploy-data-connector-agent-container/copy-icon.png" alt-text="Screenshot of the Copy icon." border="false"::: next to **Agent command**. After you've copied the command line, select **Close**.
+ :::image type="content" source="media/deploy-data-connector-agent-container/finish-agent-deployment-agent.png" alt-text="Screenshot of the Agent command to copy in step 2.":::
+
+1. After you've copied the command line, select **Close**.
The relevant agent information is deployed into Azure Key Vault, and the new agent is visible in the table under **Add an API based collector agent**.
Create a new agent through the Azure portal, authenticating with a Microsoft Ent
:::image type="content" source="media/deploy-data-connector-agent-container/installation-status.png" alt-text="Screenshot of the health statuses of API-based collector agents on the SAP data connector page." lightbox="media/deploy-data-connector-agent-container/installation-status.png":::
- The table displays the agent name and health status for only those agents you deploy via the Azure portal. Agents deployed using the command line will not be displayed here.
+ The table displays the agent name and health status for only those agents you deploy via the Azure portal. Agents deployed using the [command line](#command-line-options) aren't displayed here.
+
+1. On the VM where you plan to install the agent, open a terminal and run the **Agent command** that you copied in the previous step.
-1. In your target VM (the VM where you plan to install the agent), open a terminal and run the command you copied in the previous step.
+ The script updates the OS components and installs the Azure CLI, Docker software, and other required utilities, such as jq, netcat, and curl.
+
+ Supply additional parameters to the script as needed to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl). You can supply additional parameters to the script to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
-
If you need to copy your command again, select **View** :::image type="content" source="media/deploy-data-connector-agent-container/view-icon.png" border="false" alt-text="Screenshot of the View icon."::: to the right of the **Health** column and copy the command next to **Agent command** on the bottom right.
-#### Connect to a new SAP system
+### Connect to a new SAP system
-Anyone adding a new connection to an SAP system must have write permission to the [Key Vault where the SAP credentials are stored](#create-key-vault). See [Prerequisites](#prerequisites).
+Anyone adding a new connection to an SAP system must have write permission to the [key vault where the SAP credentials are stored](#create-a-key-vault). For more information, see [Create a virtual machine and configure access to your credentials](#create-a-virtual-machine-and-configure-access-to-your-credentials).
1. In the **Configuration** area, select **Add new system (Preview)**.
Anyone adding a new connection to an SAP system must have write permission to th
Learn more about how to [monitor your SAP system health](../monitor-sap-system-health.md).
-# [Azure portal (Preview)](#tab/azure-portal/config-file)
++
-**The Azure portal can only be used with Azure Key Vault.**
+### Command line options
-To use the command line to create an agent using a config file, see [these instructions](?tabs=config-file%2Ccommand-line#deploy-the-data-connector-agent).
+Select one of the following tabs, depending on the type of identity you're using to access your key vault:
-# [Command line script](#tab/command-line/managed-identity)
+# [Deploy with a managed identity](#tab/deploy-cli-managed-identity)
Create a new agent using the command line, authenticating with a managed identity:
Create a new agent using the command line, authenticating with a managed identit
The process has been successfully completed, thank you! ```
- Note the Docker container name in the script output. You'll use it in the next step.
+ Note the Docker container name in the script output. To see the list of Docker containers on your VM, run:
+
+ ```bash
+ docker ps -a
+ ```
+
+ You'll use the name of the Docker container in the next step.
+
+1. Deploying the SAP data connector agent requires that you grant your agent's VM identity specific permissions to the Microsoft Sentinel workspace, using the **Microsoft Sentinel Business Applications Agent Operator** role.
+
+ To run the command in this step, you must be a resource group owner on your Microsoft Sentinel workspace. If you aren't a resource group owner on your workspace, this procedure can also be performed later on.
+
+ Assign the **Microsoft Sentinel Business Applications Agent Operator** role to the VM's identity:
+
+ 1. <a name="agent-id-managed"></a>Get the agent ID by running the following command, replacing the `<container_name>` placeholder with the name of the Docker container that you created with the kickstart script:
-1. Run the following command to **configure the Docker container to start automatically**.
+ ```bash
+    docker inspect <container_name> | grep -oP '"SENTINEL_AGENT_GUID=\K[^"]+'
+ ```
+
+ For example, an agent ID returned might be `234fba02-3b34-4c55-8c0e-e6423ceb405b`.
++
+ 1. Assign the **Microsoft Sentinel Business Applications Agent Operator** role by running the following command:
```bash
- docker update --restart unless-stopped <container-name>
+ az role assignment create --assignee <OBJ_ID> --role "Microsoft Sentinel Business Applications Agent Operator" --scope /subscriptions/<SUB_ID>/resourcegroups/<RESOURCE_GROUP_NAME>/providers/microsoft.operationalinsights/workspaces/<WS_NAME>/providers/Microsoft.SecurityInsights/BusinessApplicationAgents/<AGENT_IDENTIFIER>
```
- To view a list of the available containers use the command: `docker ps -a`.
+ Replace placeholder values as follows:
-# [Command line script](#tab/command-line/registered-application)
+ |Placeholder |Value |
+ |---|---|
+ |`<OBJ_ID>` | Your VM identity object ID. <br><br> To find your VM identity object ID in Azure, go to **Enterprise applications** > **All applications**, and select your VM name. Copy the value of the **Object ID** field to use with your copied command. |
+ |`<SUB_ID>` | Your Microsoft Sentinel workspace subscription ID |
+ |`<RESOURCE_GROUP_NAME>` | Your Microsoft Sentinel workspace resource group name |
+ |`<WS_NAME>` | Your Microsoft Sentinel workspace name |
+ |`<AGENT_IDENTIFIER>` | The agent ID displayed after running the command in the [previous step](#agent-id-managed). |
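
    To confirm the assignment, you can list the identity's role assignments across all scopes; a sketch:

    ```azurecli
    az role assignment list --assignee <OBJ_ID> --all --output table
    ```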
+
+1. To configure the Docker container to start automatically, run the following command, replacing the `<container-name>` placeholder with the name of your container:
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
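
    To verify that the restart policy was applied, you can inspect the container; a quick check with the standard Docker CLI:

    ```bash
    docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' <container-name>
    # Expected output: unless-stopped
    ```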
+
+# [Deploy with a registered application](#tab/deploy-cli-registered-application)
Create a new agent using the command line, authenticating with a Microsoft Entra ID registered application:
Create a new agent using the command line, authenticating with a Microsoft Entra
The process has been successfully completed, thank you! ```
- Note the Docker container name in the script output. You'll use it in the next step.
+ Note the Docker container name in the script output. To see the list of Docker containers on your VM, run:
+
+ ```bash
+ docker ps -a
+ ```
+
+ You'll use the name of the Docker container in the next step.
+
+1. Deploying the SAP data connector agent requires that you grant your agent's VM identity specific permissions to the Microsoft Sentinel workspace, using the **Microsoft Sentinel Business Applications Agent Operator** role.
+
+ To run the command in this step, you must be a resource group owner on your Microsoft Sentinel workspace. If you aren't a resource group owner on your workspace, this step can also be performed later on.
+
+ Assign the **Microsoft Sentinel Business Applications Agent Operator** role to the VM's identity:
-1. Run the following command to **configure the Docker container to start automatically**.
+ 1. <a name="agent-id-application"></a>Get the agent ID by running the following command, replacing the `<container_name>` placeholder with the name of the Docker container that you created with the kickstart script:
+
+ ```bash
+ docker inspect <container_name> | grep -oP '"SENTINEL_AGENT_GUID=\K[^"]+'
+ ```
+
+ For example, an agent ID returned might be `234fba02-3b34-4c55-8c0e-e6423ceb405b`.
+
+ 1. Assign the **Microsoft Sentinel Business Applications Agent Operator** role by running the following command:
+
+ ```bash
+ az role assignment create --assignee <OBJ_ID> --role "Microsoft Sentinel Business Applications Agent Operator" --scope /subscriptions/<SUB_ID>/resourcegroups/<RESOURCE_GROUP_NAME>/providers/microsoft.operationalinsights/workspaces/<WS_NAME>/providers/Microsoft.SecurityInsights/BusinessApplicationAgents/<AGENT_IDENTIFIER>
+ ```
+
+ Replace placeholder values as follows:
+
+ |Placeholder |Value |
+ |---|---|
+ |`<OBJ_ID>` | Your VM identity object ID. <br><br> To find your VM identity object ID in Azure, go to **Enterprise applications** > **All applications**, and select your application name. Copy the value of the **Object ID** field to use with your copied command. |
+ |`<SUB_ID>` | Your Microsoft Sentinel workspace subscription ID |
+ |`<RESOURCE_GROUP_NAME>` | Your Microsoft Sentinel workspace resource group name |
+ |`<WS_NAME>` | Your Microsoft Sentinel workspace name |
+ |`<AGENT_IDENTIFIER>` | The agent ID displayed after running the command in the [previous step](#agent-id-application). |
+
+1. To configure the Docker container to start automatically, run the following command, replacing the `<container-name>` placeholder with the name of your container:
    ```bash
    docker update --restart unless-stopped <container-name>
    ```
- To view a list of the available containers use the command: `docker ps -a`.
-# [Command line script](#tab/command-line/config-file)
+# [Deploy with a configuration file](#tab/deploy-cli-config-file)
1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
Create a new agent using the command line, authenticating with a Microsoft Entra
The process has been successfully completed, thank you! ```
- Note the Docker container name in the script output. You'll use it in the next step.
+ Note the Docker container name in the script output. To see the list of Docker containers on your VM, run:
+
+ ```bash
+ docker ps -a
+ ```
+
+ You'll use the name of the Docker container in the next step.
++
+1. Deploying the SAP data connector agent requires that you grant your agent's VM identity specific permissions to the Microsoft Sentinel workspace, using the **Microsoft Sentinel Business Applications Agent Operator** role.
+
+ To run the command in this step, you must be a resource group owner on your Microsoft Sentinel workspace. If you aren't a resource group owner on your workspace, this step can also be performed later on.
+
+ Assign the **Microsoft Sentinel Business Applications Agent Operator** role to the VM's identity:
+
+ 1. <a name="agent-id-file"></a>Get the agent ID by running the following command, replacing the `<container_name>` placeholder with the name of the Docker container that you created with the kickstart script:
+
+ ```bash
+ docker inspect <container_name> | grep -oP '"SENTINEL_AGENT_GUID=\K[^"]+'
+ ```
+
+ For example, an agent ID returned might be `234fba02-3b34-4c55-8c0e-e6423ceb405b`.
+
-1. Run the following command to **configure the Docker container to start automatically**.
+ 1. Assign the **Microsoft Sentinel Business Applications Agent Operator** role by running the following command:
```bash
- docker update --restart unless-stopped <container-name>
+ az role assignment create --assignee <OBJ_ID> --role "Microsoft Sentinel Business Applications Agent Operator" --scope /subscriptions/<SUB_ID>/resourcegroups/<RESOURCE_GROUP_NAME>/providers/microsoft.operationalinsights/workspaces/<WS_NAME>/providers/Microsoft.SecurityInsights/BusinessApplicationAgents/<AGENT_IDENTIFIER>
```
- To view a list of the available containers use the command: `docker ps -a`.
+ Replace placeholder values as follows:
+
+ |Placeholder |Value |
+ |---|---|
+ |`<OBJ_ID>` | Your VM identity object ID. <br><br> To find your VM identity object ID in Azure, go to **Enterprise applications** > **All applications**, and select your VM or application name, depending on whether you're using a managed identity or a registered application. <br><br>Copy the value of the **Object ID** field to use with your copied command. |
+ |`<SUB_ID>` | Your Microsoft Sentinel workspace subscription ID |
+ |`<RESOURCE_GROUP_NAME>` | Your Microsoft Sentinel workspace resource group name |
+ |`<WS_NAME>` | Your Microsoft Sentinel workspace name |
+ |`<AGENT_IDENTIFIER>` | The agent ID displayed after running the command in the [previous step](#agent-id-file). |
++
+1. Run the following command to configure the Docker container to start automatically, replacing the `<container-name>` placeholder with the name of your container:
+
+ ```bash
+ docker update --restart unless-stopped <container-name>
+ ```
sentinel Deployment Attack Disrupt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-attack-disrupt.md
+
+ Title: Automatic attack disruption for SAP | Microsoft Sentinel
+description: Learn about deploying automatic attack disruption for SAP with the unified security operations platform.
+++ Last updated : 04/01/2024
+appliesto: Microsoft Sentinel in the Azure portal and the Microsoft Defender portal
+
+#customerIntent: As a security engineer, I want to deploy automatic attack disruption for SAP in the Microsoft Defender portal.
++
+# Automatic attack disruption for SAP (Preview)
+
+Microsoft Defender XDR correlates millions of individual signals to identify active ransomware campaigns or other sophisticated attacks in the environment with high confidence. While an attack is in progress, Defender XDR disrupts the attack by automatically containing compromised assets that the attacker is using through automatic attack disruption. Automatic attack disruption limits lateral movement early on and reduces the overall impact of an attack, from associated costs to loss of productivity. At the same time, it leaves security operations teams in complete control of investigating, remediating, and bringing assets back online.
+
+When you add a new SAP system to Microsoft Sentinel, your default configuration includes attack disruption functionality in the unified SOC platform. This article describes how to ensure that your SAP system is ready to support automatic attack disruption for SAP in the Microsoft Defender portal.
+
+For more information, see [Automatic attack disruption in Microsoft Defender XDR](/microsoft-365/security/defender/automatic-attack-disruption).
++
+## Attack disruption with the unified security operations platform
+
+Attack disruption for SAP is configured by updating your data connector agent version and ensuring that the relevant role is applied. However, attack disruption itself surfaces only in the unified security operations platform in the Microsoft Defender portal.
+
+To use attack disruption for SAP, make sure that you configured the integration between Microsoft Sentinel and Microsoft Defender XDR. For more information, see [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard) and [Microsoft Sentinel in the Microsoft Defender portal (preview)](../microsoft-sentinel-defender-portal.md).
+
+## Required SAP data connector agent version and role
+
+Attack disruption for SAP requires that you have:
+
+- A Microsoft Sentinel SAP data connector agent, version 88020708 or higher.
+- The **Microsoft Sentinel Business Applications Agent Operator** Azure role assigned to the identity of your data connector agent VM.
+
+**To use attack disruption for SAP**, deploy a new agent or update your current agent to the latest version. For more information, see:
+
+- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md)
+- [Update Microsoft Sentinel's SAP data connector agent](update-sap-data-connector.md)
+
+**To verify your current agent version**, run the following query from the Microsoft Sentinel **Logs** page:
+
+```kusto
+SAP_HeartBeat_CL
+| where sap_client_category_s !contains "AH"
+| summarize arg_max(TimeGenerated, agent_ver_s), make_set(system_id_s) by agent_id_g
+| project
+ TimeGenerated,
+ SAP_Data_Connector_Agent_guid = agent_id_g,
+ Connected_SAP_Systems_Ids = set_system_id_s,
+ Current_Agent_Version = agent_ver_s
+```
+
+If the identity of your data connector agent VM isn't yet assigned to the **Microsoft Sentinel Business Applications Agent Operator** role as part of the deployment process, assign the role manually. For more information, see [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md#role).
+
+## Related content
+
+- [Automatic attack disruption in Microsoft Defender XDR](/microsoft-365/security/defender/automatic-attack-disruption)
+- [Microsoft Sentinel in the Microsoft Defender portal (preview)](../microsoft-sentinel-defender-portal.md)
+- [Prerequisites for deploying Microsoft Sentinel solution for SAP applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md)
+- [Deploy Microsoft Sentinel solution for SAP applications](deployment-overview.md)
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Title: Deploy SAP Change Requests (CRs) and configure authorization
+ Title: Configure SAP authorizations and deploy optional SAP Change Requests (CRs)
description: This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems. Previously updated : 03/10/2023 Last updated : 03/27/2024
-# Deploy SAP Change Requests and configure authorization
-This article shows you how to deploy SAP Change Requests (CRs), which prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.
+# Configure SAP authorizations and deploy optional SAP Change Requests
-> [!IMPORTANT]
-> - This article presents a [**step-by-step guide**](#deploy-crs) to deploying the relevant CRs. It's recommended for SOC engineers or implementers who may not necessarily be SAP experts.
-> - Experienced SAP administrators that are familiar with the CR deployment process may prefer to get the appropriate CRs directly from the [**SAP environment validation steps**](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of the guide and deploy them. Note that the *NPLK900271* CR deploys a sample role, and the administrator may prefer to manually define the role according to the information in the [**Required ABAP authorizations**](#required-abap-authorizations) section below.
+This article describes how to prepare your environment for the installation of the SAP agent so that it can properly connect to your SAP systems. Preparation includes configuring required SAP authorizations and, optionally, deploying extra SAP change requests (CRs).
-## Required and optional CRs
-
-This article discusses the installation of the following CRs:
-
-|CR |Required/optional |Description |
-||||
-|NPLK900271 |Required |This CR creates and configures a role. Alternatively, you can load the authorizations directly from a file. [Review how to create and configure a role](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required). |
-|NPLK900201 or NPLK900202 |Optional |[Retrieves additional information from SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#retrieve-additional-information-from-sap-optional). You select one of these CRs according to your SAP version. |
-
-## Prerequisites
-
-1. Make sure you've copied the details of the **SAP system version**, **System ID (SID)**, **System number**, **Client number**, **IP address**, **administrative username** and **password** before beginning the deployment process. For the following example, the following details are assumed:
-
- - **SAP system version:** `SAP ABAP Platform 1909 Developer edition`
- - **SID:** `A4H`
- - **System number:** `00`
- - **Client number:** `001`
- - **IP address:** `192.168.136.4`
- - **Administrator user:** `a4hadm`, however, the SSH connection to the SAP system is established with `root` user credentials.
-1. Review the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) to determine which CRs to install.
-1. If you installed the NPLK900202 [optional CR](#required-and-optional-crs) used to retrieve additional information, make sure you've installed the [relevant SAP note](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#deploy-sap-note-optional).
+- [!INCLUDE [unified-soc-preview-without-alert](../includes/unified-soc-preview-without-alert.md)]
## Deployment milestones
Track your SAP solution deployment journey through this series of articles:
1. [Deploy the solution content from the content hub](deploy-sap-security-content.md)
-1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
+1. [Deploy the data connector agent](deploy-data-connector-agent-container.md)
1. [Configure Microsoft Sentinel solution for SAP® applications](deployment-solution-configuration.md)
Track your SAP solution deployment journey through this series of articles:
- [Deploy SAP connector manually](sap-solution-deploy-alternate.md) - [Select SAP ingestion profiles](select-ingestion-profiles.md)
-To deploy the CRs, follow the steps outlined below. The steps below may differ according to the version of the SAP system and should be considered for demonstration purposes only.
+## Configure the Microsoft Sentinel role
-## Deploy CRs
+1. Upload role authorizations from the [**/MSFTSEN/SENTINEL_RESPONDER**](https://aka.ms/SAP_Sentinel_Responder_Role) file in GitHub.
-> [!NOTE]
->
-> It is *strongly recommended* that the deployment of SAP CRs be carried out by an experienced SAP system administrator.
+ This creates the **/MSFTSEN/SENTINEL_RESPONDER** role, which includes all the authorizations required to retrieve logs from the SAP systems and run [attack disruption response actions](https://aka.ms/attack-disrupt-defender).
-### Set up the files
+ Alternately, create a role manually with the relevant authorizations required for the logs you want to ingest. For more information, see [Required ABAP authorizations](#required-abap-authorizations). The examples in this procedure use the **/MSFTSEN/SENTINEL_RESPONDER** name.
-1. Sign in to the SAP system using SSH.
-
-1. Transfer the CR files to the SAP system. Learn more about [the CRs in this step](#required-and-optional-crs).
+1. The next step is to generate an active role profile for Microsoft Sentinel to use. Run the **PFCG** transaction:
- Alternatively, you can download the files directly onto the SAP system from the SSH prompt. Use the following commands:
+ In the **SAP Easy Access** screen, enter `PFCG` in the field in the upper left corner of the screen and then press **ENTER**.
- - Download NPLK900271 (required)
- ```bash
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900271.NPL
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900271.NPL
- ```
-
- Alternatively, you can [load these authorizations directly from a file](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required).
-
- - Download NPLK900202 (optional)
- ```bash
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL
- ```
-
- - Download NPLK900201 (optional)
- ```bash
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL
- ```
-
- Note that each CR consists of two files, one beginning with K and one with R.
-
-1. Change the ownership of the files to user *`<sid>`adm* and group *sapsys*. (Substitute your SAP system ID for `<sid>`.)
-
- ```bash
- chown <sid>adm:sapsys *.NPL
- ```
-
- In our example:
- ```bash
- chown a4hadm:sapsys *.NPL
- ```
-
-1. Copy the cofiles (those beginning with *K*) to the `/usr/sap/trans/cofiles` folder. Preserve the permissions while copying, using the `cp` command with the `-p` switch.
-
- ```bash
- cp -p K*.NPL /usr/sap/trans/cofiles/
- ```
-
-1. Copy the data files (those beginning with R) to the `/usr/sap/trans/data` folder. Preserve the permissions while copying, using the `cp` command with the `-p` switch.
-
- ```bash
- cp -p R*.NPL /usr/sap/trans/data/
- ```
-
-### Import the CRs
-
-1. Launch the **SAP Logon** application and sign in to the SAP GUI console.
-
-1. Run the **STMS_IMPORT** transaction:
-
- In the **SAP Easy Access** screen, type `STMS_IMPORT` in the field in the upper left corner of the screen and press the **Enter** key.
-
- :::image type="content" source="media/preparing-sap/stms-import.png" alt-text="Screenshot of running the S T M S import transaction.":::
-
-1. In the **Import Queue** window that appears, select **More > Extras > Other Requests > Add**.
-
- :::image type="content" source="media/preparing-sap/import-queue-add.png" alt-text="Screenshot of adding an import queue.":::
-
-1. In the **Add Transport Requests to Import Queue** pop-up that appears, select the **Transp. Request** field.
-
-1. The **Transport requests** window will appear and display a list of CRs available to be deployed. Select a CR and select the green checkmark button.
-
-1. Back in the **Add Transport Request to Import Queue** window, select **Continue** (the green checkmark) or press the Enter key.
-
-1. In the **Add Transport Request** confirmation dialog, select **Yes**.
-
-1. If you plan to deploy more CRs, repeat the procedure in the preceding 5 steps for the remaining CRs.
-
-1. In the **Import Queue** window, select the relevant Transport Request once, and then select **F9** or **Select/Deselect Request** icon.
-
-1. If you have remaining Transport Requests to add to the deployment, repeat step 9.
-
-1. Select the Import Requests icon:
-
- :::image type="content" source="media/preparing-sap/import-requests.png" alt-text="Screenshot of importing all requests." lightbox="media/preparing-sap/import-requests-lightbox.png":::
-
-1. In **Start Import** window, select the **Target Client** field.
-
-1. The **Input Help..** dialog will appear. Select the number of the client you want to deploy the CRs to (`001` in our example), then select the green checkmark to confirm.
-
-1. Back in the **Start Import** window, select the **Options** tab, mark the **Ignore Invalid Component Version** checkbox, and select the green checkmark to confirm.
-
- :::image type="content" source="media/preparing-sap/start-import.png" alt-text="Screenshot of the start import window.":::
-
-1. In the **Start import** confirmation dialog, select **Yes** to confirm the import.
-
-1. Back in the **Import Queue** window, select **Refresh**, wait until the import operation completes and the import queue shows as empty.
-
-1. To review the import status, in the **Import Queue** window select **More > Go To > Import History**.
-
- :::image type="content" source="media/preparing-sap/import-history.png" alt-text="Screenshot of import history.":::
-
-1. If you deployed the *NPLK900202* CR, it is expected to display a **Warning**. Select the entry to verify that the warnings displayed are of type "Table \<tablename\> was activated."
-
- The CRs and versions in the screenshots below may change according to your installed CR version.
-
- :::image type="content" source="media/preparing-sap/import-status.png" alt-text="Screenshot of import status display." lightbox="media/preparing-sap/import-status-lightbox.png":::
-
- :::image type="content" source="media/preparing-sap/import-warning.png" alt-text="Screenshot of import warning message display.":::
-
-## Configure Sentinel role
-
-After the *NPLK900271* CR is deployed, a **/MSFTSEN/SENTINEL_CONNECTOR** role is created in SAP. If the role is created manually, it may bear a different name.
-
-In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNECTOR**.
-
-The next step is to generate an active role profile for Microsoft Sentinel to use.
-
-1. Run the **PFCG** transaction:
-
- In the **SAP Easy Access** screen, type `PFCG` in the field in the upper left corner of the screen and press the **Enter** key.
-
-1. In the **Role Maintenance** window, type the role name `/MSFTSEN/SENTINEL_CONNECTOR` in the **Role** field and select the **Change** button (the pencil).
-
- :::image type="content" source="media/preparing-sap/change-role-change.png" alt-text="Screenshot of choosing a role to change.":::
+1. In the **Role Maintenance** window, type the role name `/MSFTSEN/SENTINEL_RESPONDER` in the **Role** field and select the **Change** button (the pencil).
1. In the **Change Roles** window that appears, select the **Authorizations** tab. 1. In the **Authorizations** tab, select **Change Authorization Data**.
- :::image type="content" source="media/preparing-sap/change-role-change-auth-data.png" alt-text="Screenshot of changing authorization data.":::
- 1. In the **Information** popup, read the message and select the green checkmark to confirm. 1. In the **Change Role: Authorizations** window, select **Generate**.
- :::image type="content" source="media/preparing-sap/change-role-authorizations.png" alt-text="Screenshot of generating authorizations." lightbox="media/preparing-sap/change-role-authorizations-lightbox.png":::
- See that the **Status** field has changed from **Unchanged** to **generated**. 1. Select **Back** (to the left of the SAP logo at the top of the screen). 1. Back in the **Change Roles** window, verify that the **Authorizations** tab displays a green box, then select **Save**.
- :::image type="content" source="media/preparing-sap/change-role-save.png" alt-text="Screenshot of saving changed role.":::
- ### Create a user The Microsoft Sentinel solution for SAP® applications requires a user account to connect to your SAP system. Use the following instructions to create a user account and assign it to the role that you created in the previous step.
-In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNECTOR**.
+In the examples shown here, we use the role name **/MSFTSEN/SENTINEL_RESPONDER**.
1. Run the **SU01** transaction:
- In the **SAP Easy Access** screen, type `SU01` in the field in the upper left corner of the screen and press the **Enter** key.
+ In the **SAP Easy Access** screen, enter `SU01` in the field in the upper left corner of the screen and press **ENTER**.
1. In the **User Maintenance: Initial Screen** screen, type in the name of the new user in the **User** field and select **Create Technical User** from the button bar. 1. In the **Maintain Users** screen, select **System** from the **User Type** drop-down list. Create and enter a complex password in the **New Password** and **Repeat Password** fields, then select the **Roles** tab.
-1. In the **Roles** tab, in the **Role Assignments** section, enter the full name of the role - `/MSFTSEN/SENTINEL_CONNECTOR` in our example - and press **Enter**.
+1. In the **Roles** tab, in the **Role Assignments** section, enter the full name of the role - `/MSFTSEN/SENTINEL_RESPONDER` in our example - and press **Enter**.
+ After pressing **Enter**, verify that the right-hand side of the **Role Assignments** section populates with data, such as **Change Start Date**.
In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNEC
### Required ABAP authorizations
-The following table lists the ABAP authorizations required to ensure that SAP logs can be correctly retrieved by the account used by Microsoft Sentinel's SAP data connector.
+This section lists the ABAP authorizations required to ensure that the SAP user account used by Microsoft Sentinel's SAP data connector can correctly retrieve logs from the SAP systems and run [attack disruption response actions](https://aka.ms/attack-disrupt-defender).
-The required authorizations are listed here by log type. Only the authorizations listed for the types of logs you plan to ingest into Microsoft Sentinel are required.
+The required authorizations are listed here by their purpose. You only need the authorizations that are listed for the kinds of logs you want to bring into Microsoft Sentinel and the attack disruption response actions you want to apply.
> [!TIP]
-> To create a role with all the required authorizations, deploy the SAP *NPLK900271* CR on the SAP system, or load the role authorizations from the [MSFTSEN_SENTINEL_CONNECTOR_ROLE_V0.0.27.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file. This CR creates the **/MSFTSEN/SENTINEL_CONNECTOR** role that has all the necessary permissions for the data connector to operate.
-> Alternatively, you can create a role that has minimal permissions by deploying the *NPLK900268* CR, or loading the role authorizations from the [MSFTSEN_SENTINEL_AGENT_BASIC_ROLE_V0.0.1.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file. This CR or authorizations file creates the **/MSFTSEN/SENTINEL_AGENT_BASIC** role. This role has the minimal required permissions for the data connector to operate. Note that if you choose to deploy this role, you might need to update it frequently.
+> To create a role with all the required authorizations, load the role authorizations from the [**/MSFTSEN/SENTINEL_RESPONDER**](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/SAP/Sample%20Authorizations%20Role%20File/MSFTSEN_SENTINEL_RESPONDER) file.
+>
+> Alternately, to enable only log retrieval, without attack disruption response actions, deploy the SAP *NPLK900271* CR on the SAP system to create the **/MSFTSEN/SENTINEL_CONNECTOR** role, or load the role authorizations from the [**/MSFTSEN/SENTINEL_CONNECTOR**](https://aka.ms/SAP_Sentinel_Connector_Role) file.
-| Authorization Object | Field | Value |
+| Authorization object | Field | Value |
| -- | -- | -- | | **All logs** | | | | S_RFC | RFC_TYPE | Function Module |
The required authorizations are listed here by log type. Only the authorizations
| **SNC Data** | | | | S_TABU_NAM | TABLE | SNCSYSACL | | S_TABU_NAM | TABLE | USRACL |
+|<a name=attack-disrupt></a>**Attack disruption response actions** | | |
+|S_RFC |RFC_TYPE |Function Module |
+|S_RFC |RFC_NAME |BAPI_USER_LOCK |
+|S_RFC |RFC_NAME |BAPI_USER_UNLOCK |
+|S_RFC |RFC_NAME |TH_DELETE_USER <br>Contrary to its name, this function doesn't delete users; it ends the active user session. |
+|S_USER_GRP |CLASS |* <br>We recommend replacing S_USER_GRP CLASS with the relevant classes in your organization that represent dialog users. |
+|S_USER_GRP |ACTVT |03 |
+|S_USER_GRP |ACTVT |05 |
If needed, you can [remove the user role and the optional CR installed on your ABAP system](deployment-solution-configuration.md#remove-the-user-role-and-the-optional-cr-installed-on-your-abap-system).
+## Deploy optional CRs
+
+This section presents a step-by-step guide to deploying extra, optional CRs. It's intended for SOC engineers or implementers who might not necessarily be SAP experts.
+
+Experienced SAP administrators who are familiar with the CR deployment process might prefer to get the appropriate CRs directly from the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of the guide and deploy them.
+
+We strongly recommend that an experienced SAP system administrator deploy the SAP CRs.
+
+The following table describes the optional CRs available to deploy:
+
+|CR |Description |
+|||
+|**NPLK900271** |Creates and configures a sample role with the basic authorizations required to allow the SAP data connector to connect to your SAP system. Alternatively, you can load authorizations directly from a file or manually define the role according to the logs you want to ingest. <br><br>For more information, see [Required ABAP authorizations](#required-abap-authorizations) and [Create and configure a role (required)](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required). |
+|**NPLK900201** or **NPLK900202** |[Retrieves additional information from SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#retrieve-additional-information-from-sap-optional). Select one of these CRs according to your SAP version. |
+
+### Prerequisites for deploying CRs
+
+1. Make sure you've copied the details of the **SAP system version**, **System ID (SID)**, **System number**, **Client number**, **IP address**, **administrative username**, and **password** before beginning the deployment process. The examples in this procedure assume the following details:
+
+ - **SAP system version:** `SAP ABAP Platform 1909 Developer edition`
+ - **SID:** `A4H`
+ - **System number:** `00`
+ - **Client number:** `001`
+ - **IP address:** `192.168.136.4`
+   - **Administrator user:** `a4hadm`. However, the SSH connection to the SAP system is established with `root` user credentials.
+
+1. Make sure you know which [CR you want to deploy](#deploy-optional-crs).
+
+1. If you're deploying the NPLK900202 CR to retrieve additional information, make sure you've installed the [relevant SAP note](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#deploy-sap-note-optional).
+
+### Set up the files
+
+1. Sign in to the SAP system using SSH.
+
+1. Transfer the CR files to the SAP system, or download the files directly onto the SAP system from the SSH prompt by using the following commands:
+
+ - Download NPLK900271
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900271.NPL
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900271.NPL
+ ```
+
+ Alternatively, you can [load these authorizations directly from a file](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required).
+
+ - Download NPLK900202
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL
+ ```
+
+ - Download NPLK900201
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL
+ ```
+
+ Each CR consists of two files, one beginning with K and one with R.
+
+1. Change the ownership of the files to user *`<sid>`adm* and group *sapsys*. (Substitute your SAP system ID for `<sid>`.)
+
+ ```bash
+ chown <sid>adm:sapsys *.NPL
+ ```
+
+ In our example:
+ ```bash
+ chown a4hadm:sapsys *.NPL
+ ```
+
+1. Copy the cofiles (those beginning with *K*) to the `/usr/sap/trans/cofiles` folder. Preserve the permissions while copying, using the `cp` command with the `-p` switch.
+
+ ```bash
+ cp -p K*.NPL /usr/sap/trans/cofiles/
+ ```
+
+1. Copy the data files (those beginning with R) to the `/usr/sap/trans/data` folder. Preserve the permissions while copying, using the `cp` command with the `-p` switch.
+
+ ```bash
+ cp -p R*.NPL /usr/sap/trans/data/
+ ```
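   To confirm that the files are in place, you can list them as a quick sanity check (output varies by system):

   ```bash
   # Verify the copied cofiles and data files, with ownership and permissions preserved
   ls -l /usr/sap/trans/cofiles/K*.NPL /usr/sap/trans/data/R*.NPL
   ```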
+
+### Import the CRs
+
+1. Launch the **SAP Logon** application and sign in to the SAP GUI console.
+
+1. Run the **STMS_IMPORT** transaction:
+
+ In the **SAP Easy Access** screen, enter `STMS_IMPORT` in the field in the upper left corner of the screen and then press **ENTER**.
+
+ :::image type="content" source="media/preparing-sap/stms-import.png" alt-text="Screenshot of running the STMS import transaction.":::
+
+1. In the **Import Queue** window that appears, select **More > Extras > Other Requests > Add**.
+
+ :::image type="content" source="media/preparing-sap/import-queue-add.png" alt-text="Screenshot of adding an import queue.":::
+
+1. In the **Add Transport Requests to Import Queue** pop-up that appears, select the **Transp. Request** field.
+
+1. The **Transport requests** window appears, displaying a list of CRs available to be deployed. Select a CR, and then select the green checkmark button.
+
+1. Back in the **Add Transport Request to Import Queue** window, select **Continue** (the green checkmark) or press **ENTER**.
+
+1. In the **Add Transport Request** confirmation dialog, select **Yes**.
+
+1. If you plan to deploy more CRs, repeat the procedure in the preceding five steps for the remaining CRs.
+
+1. In the **Import Queue** window, select the relevant Transport Request once, and then press **F9** or select the **Select/Deselect Request** icon.
+
+1. If you have remaining Transport Requests to add to the deployment, repeat step 9.
+
+1. Select the **Import Requests** icon:
+
+ :::image type="content" source="media/preparing-sap/import-requests.png" alt-text="Screenshot of importing all requests." lightbox="media/preparing-sap/import-requests-lightbox.png":::
+
+1. In the **Start Import** window, select the **Target Client** field.
+
+1. The **Input Help..** dialog appears. Select the number of the client you want to deploy the CRs to (`001` in our example), then select the green checkmark to confirm.
+
+1. Back in the **Start Import** window, select the **Options** tab, mark the **Ignore Invalid Component Version** checkbox, and select the green checkmark to confirm.
+
+ :::image type="content" source="media/preparing-sap/start-import.png" alt-text="Screenshot of the start import window.":::
+
+1. In the **Start import** confirmation dialog, select **Yes** to confirm the import.
+
+1. Back in the **Import Queue** window, select **Refresh**, and then wait until the import operation completes and the import queue is empty.
+
+1. To review the import status, in the **Import Queue** window select **More > Go To > Import History**.
+
+ :::image type="content" source="media/preparing-sap/import-history.png" alt-text="Screenshot of import history.":::
+
+1. If you deployed the *NPLK900202* CR, it's expected to display a **Warning**. Select the entry to verify that the warnings displayed are of type "Table \<tablename\> was activated."
+
+ The CRs and versions in the following screenshots might change according to your installed CR version.
+
+ :::image type="content" source="media/preparing-sap/import-status.png" alt-text="Screenshot of import status display." lightbox="media/preparing-sap/import-status-lightbox.png":::
+
+ :::image type="content" source="media/preparing-sap/import-warning.png" alt-text="Screenshot of import warning message display.":::
## Verify that the PAHI table (history of system, database, and SAP parameters) is updated at regular intervals

The SAP PAHI table includes data on the history of the SAP system, the database, and SAP parameters. In some cases, the Microsoft Sentinel solution for SAP® applications can't monitor the SAP PAHI table at regular intervals, due to missing or faulty configuration. For more information, see the [SAP note](https://launchpad.support.sap.com/#/notes/12103) that describes this issue. It's important to update the PAHI table and to monitor it frequently, so that the Microsoft Sentinel solution for SAP® applications can alert on suspicious actions that might happen at any time throughout the day.
If the job exists and is configured correctly, no further steps are needed.
**If the job doesn't exist**:
-1. Log in to your SAP system in the 000 client.
+1. Sign in to your SAP system in the 000 client.
1. Execute the SM36 transaction. 1. Under **Job Name**, type *SAP_COLLECTOR_FOR_PERFMONITOR*.
If the job exists and is configured correctly, no further steps are needed.
## Next steps
-You have now fully prepared your SAP environment. The required CRs have been deployed, a role and profile have been provisioned, and a user account has been created and assigned the proper role profile.
+Your SAP environment is now fully prepared to deploy a data connector agent. A role and profile are provisioned, a user account is created and assigned the relevant role profile, and CRs are deployed as needed for your environment.
-Now you are ready to enable and configure SAP auditing for Microsoft Sentinel.
+Now, you're ready to enable and configure SAP auditing for Microsoft Sentinel.
> [!div class="nextstepaction"] > [Enable and configure SAP auditing for Microsoft Sentinel](configure-audit.md)
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
description: This article lists the prerequisites required for deployment of the
Previously updated : 06/19/2023 Last updated : 03/21/2024+ + # Prerequisites for deploying Microsoft Sentinel solution for SAP® applications This article lists the prerequisites required for deployment of the Microsoft Sentinel solution for SAP® applications. + ## Deployment milestones Track your SAP solution deployment journey through this series of articles:
Track your SAP solution deployment journey through this series of articles:
1. [Deploy the solution content from the content hub](deploy-sap-security-content.md)
-1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
+1. [Deploy the data connector agent](deploy-data-connector-agent-container.md)
1. [Configure Microsoft Sentinel solution for SAP® applications](deployment-solution-configuration.md)
-1. Optional deployment steps
- - [Configure data connector to use SNC](configure-snc.md)
+1. Optional deployment steps
+ - [Configure data connector to use Secure Network Communication (SNC)](configure-snc.md)
- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) - [Configure audit log monitoring rules](configure-audit-log-rules.md) - [Deploy SAP connector manually](sap-solution-deploy-alternate.md)
To successfully deploy the Microsoft Sentinel solution for SAP® applications, y
| Prerequisite | Description |Required/optional | | - | -- |-- | | **Access to Microsoft Sentinel** | Make a note of your Microsoft Sentinel *workspace ID* and *primary key*.<br>You can find these details in Microsoft Sentinel: from the navigation menu, select **Settings** > **Workspace settings** > **Agents management**. Copy the *Workspace ID* and *Primary key* and paste them aside for use during the deployment process. |Required |
-| **Permissions to create Azure resources** | At a minimum, you must have the necessary permissions to deploy solutions from the Microsoft Sentinel content hub. For more information, see the [Microsoft Sentinel content hub catalog](../sentinel-solutions-catalog.md). |- Required if you plan to [deploy the data connector agent via the UI](deploy-data-connector-agent-container.md).<br>- Optional if you plan to [deploy the data connector agent using other methods](deploy-data-connector-agent-container-other-methods.md). |
-| **Permissions to create an Azure key vault or access an existing one** | Use Azure Key Vault to store secrets required to connect to your SAP system (recommended when this is a required prerequisite). For more information, see the [Azure Key Vault documentation](../../key-vault/index.yml). |- Required if you plan to [deploy the data connector agent via the UI](deploy-data-connector-agent-container.md).<br>- Optional if you plan to [deploy the data connector agent using other methods](deploy-data-connector-agent-container-other-methods.md). |
+| **Permissions to create Azure resources** | At a minimum, you must have the necessary permissions to deploy solutions from the Microsoft Sentinel content hub. For more information, see the [Microsoft Sentinel content hub catalog](../sentinel-solutions-catalog.md). |Required |
+| **Permissions to create an Azure key vault or access an existing one** | Use Azure Key Vault to store the secrets required to connect to your SAP system (recommended). For more information, see [Assign key vault access permissions](deploy-data-connector-agent-container.md#assign-key-vault-access-permissions). |Required if you plan to store the SAP system credentials in Azure Key Vault. <br><br>Optional if you plan to store them in a configuration file. For more information, see [Create a virtual machine and configure access to your credentials](deploy-data-connector-agent-container.md#create-a-virtual-machine-and-configure-access-to-your-credentials).|
+| **Permissions to assign a privileged role to the SAP data connector agent** | Deploying the SAP data connector agent requires that you grant your agent's VM identity specific permissions to the Microsoft Sentinel workspace, using the **Microsoft Sentinel Business Applications Agent Operator** role. To grant this role, you need **Owner** permissions on the resource group where your Microsoft Sentinel workspace resides. <br><br>For more information, see [Deploy the data connector agent](deploy-data-connector-agent-container.md#deploy-the-data-connector-agent). | Required. <br>If you don't have **Owner** permissions on the resource group, another user who does have them can perform this step separately, after the agent is fully deployed.|
### System prerequisites | Prerequisite | Description | | - | -- | | **System architecture** | The data connector component of the SAP solution is deployed as a Docker container, and each SAP client requires its own container instance.<br>The container host can be either a physical machine or a virtual machine, can be located either on-premises or in any cloud. <br>The VM hosting the container ***does not*** have to be located in the same Azure subscription as your Microsoft Sentinel workspace, or even in the same Microsoft Entra tenant. |
-| **Virtual machine sizing recommendations** | **Minimum specification**, such as for a lab environment:<br>*Standard_B2s* VM, with:<br>- 2 cores<br>- 4 GB RAM<br><br>**Standard connector** (default):<br>*Standard_D2as_v5* VM or<br>*Standard_D2_v5* VM, with: <br>- 2 cores<br>- 8 GB RAM<br><br>**Multiple connectors**:<br>*Standard_D4as_v5* or<br>*Standard_D4_v5* VM, with: <br>- 4 cores<br>- 16 GB RAM |
+| **Virtual machine sizing recommendations** | **Minimum specification**, such as for a lab environment:<br>*Standard_B2s* VM, with:<br>- Two cores<br>- 4-GB RAM<br><br>**Standard connector** (default):<br>*Standard_D2as_v5* VM or<br>*Standard_D2_v5* VM, with: <br>- Two cores<br>- 8-GB RAM<br><br>**Multiple connectors**:<br>*Standard_D4as_v5* or<br>*Standard_D4_v5* VM, with: <br>- Four cores<br>- 16-GB RAM |
| **Administrative privileges** | Administrative privileges (root) are required on the container host machine. |
-| **Supported Linux versions** | The SAP data connector agent has been tested with the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you may need to [deploy and configure the container manually](deploy-data-connector-agent-container-other-methods.md#deploy-the-data-connector-agent-container-manually) instead of using the kickstart script. |
+| **Supported Linux versions** | The SAP data connector agent is tested with the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you might need to deploy and configure the container manually. For more information, open a support ticket. |
| **Network connectivity** | Ensure that the container host has access to: <br>- Microsoft Sentinel <br>- Azure Key Vault (in deployment scenarios where Azure Key Vault is used to store secrets)<br>- The SAP system via the following TCP ports: *32xx*, *5xx13*, *33xx*, *48xx* (when SNC is used), where *xx* is the SAP instance number. |
-| **Software utilities** | The [SAP data connector deployment script](reference-kickstart.md) installs the following required software on the container host VM (depending on the Linux distribution used, the list may vary slightly): <br>- [Unzip](http://infozip.sourceforge.net/UnZip.html)<br>- [NetCat](https://sectools.org/tool/netcat/)<br>- [Docker](https://www.docker.com/)<br>- [jq](https://stedolan.github.io/jq/)<br>- [curl](https://curl.se/)<br><br>
+| **Software utilities** | The [SAP data connector deployment script](reference-kickstart.md) installs the following required software on the container host VM (depending on the Linux distribution used, the list might vary slightly): <br>- [Unzip](http://infozip.sourceforge.net/UnZip.html)<br>- [NetCat](https://sectools.org/tool/netcat/)<br>- [Docker](https://www.docker.com/)<br>- [jq](https://stedolan.github.io/jq/)<br>- [curl](https://curl.se/) |
+| **Managed identity or service principal** | The latest version of the SAP data connector agent requires a managed identity or service principal to authenticate to Microsoft Sentinel. <br><br>Legacy agents are supported for updates to the latest version, and then must use a managed identity or service principal to continue updating to subsequent versions. |
+ ### SAP prerequisites
To successfully deploy the Microsoft Sentinel solution for SAP® applications, y
| **Supported SAP versions** | The SAP data connector agent supports SAP NetWeaver systems and was tested on [SAP_BASIS versions 731](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) and above. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). | | **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sentinel4sapsdk))<br>Make sure that you also have an SAP user account to access the SAP software download page. | | **SAP system details** | Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address and FQDN hostname<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as `001` |
-| **SAP NetWeaver instance access** | The SAP data connector agent uses one of the following mechanisms to authenticate to the SAP system: <br>- SAP ABAP user/password<br>- A user with an X.509 certificate (This option requires additional configuration steps) |
+| **SAP NetWeaver instance access** | The SAP data connector agent uses one of the following mechanisms to authenticate to the SAP system: <br>- SAP ABAP user/password<br>- A user with an X.509 certificate (This option requires extra configuration steps) |
## SAP environment validation steps
To successfully deploy the Microsoft Sentinel solution for SAP® applications, y
### Create and configure a role (required)
-To allow the SAP data connector to connect to your SAP system, you must create a role. Create the role by deploying CR **NPLK900271** or by loading the role authorizations from the [MSFTSEN_SENTINEL_CONNECTOR_ROLE_V0.0.27.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file.
+To allow the SAP data connector to connect to your SAP system, you must create a role. Create the role by loading the role authorizations from the [**/MSFTSEN/SENTINEL_RESPONDER**](https://aka.ms/SAP_Sentinel_Responder_Role) file.
-> [!NOTE]
-> Alternatively, you can create a role that has minimal permissions by deploying change request *NPLK900268*, or loading the role authorizations from the [MSFTSEN_SENTINEL_AGENT_BASIC_ROLE_V0.0.1.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file.
-> This change request or authorizations file creates the **/MSFTSEN/SENTINEL_AGENT_BASIC** role. This role has the minimal required permissions for the data connector to operate. Note that if you choose to deploy this role, you might need to update it frequently.
-
-Experienced SAP administrators may choose to create the role manually and assign it the appropriate permissions. In such a case, it is not necessary to deploy the CR *NPLK900271*, but you must instead create a role using the recommendations outlined in [Expert: Deploy SAP CRs and deploy required ABAP authorizations](preparing-sap.md#required-abap-authorizations).
+The **/MSFTSEN/SENTINEL_RESPONDER** role includes authorizations for both log retrieval and [attack disruption response actions](https://aka.ms/attack-disrupt-defender). To enable only log retrieval, without attack disruption response actions, either deploy the SAP *NPLK900271* CR on the SAP system, or load the role authorizations from the [**MSFTSEN_SENTINEL_CONNECTOR**](https://aka.ms/SAP_Sentinel_Connector_Role) file. Either option creates the **/MSFTSEN/SENTINEL_CONNECTOR** role, which has all the basic permissions for the data connector to operate.
| SAP BASIS versions | Sample CR | | | | | Any version | *NPLK900271*: [K900271.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900271.NPL), [R900271.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900271.NPL) |
+Experienced SAP administrators might choose to create the role manually and assign it the appropriate permissions. In such cases, make sure to follow the recommended authorizations for each log. For more information, see [Required ABAP authorizations](preparing-sap.md#required-abap-authorizations).
+ ### Retrieve additional information from SAP (optional)
-You can deploy additional CRs from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR) to enable the SAP data connector to retrieve certain information from your SAP system.
+You can deploy extra CRs from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR) to enable the SAP data connector to retrieve certain information from your SAP system.
- **SAP BASIS 7.5 SP12 and above**: Client IP Address information from security audit log - **ANY SAP BASIS version**: DB Table logs, Spool Output log
If you choose to retrieve additional information with the [NPLK900202 optional C
## Next steps
-After verifying that all the prerequisites have been met, proceed to the next step to deploy the required CRs to your SAP system and configure authorization.
+After verifying that all the prerequisites are met, proceed to the next step to deploy the required CRs to your SAP system and configure authorization.
> [!div class="nextstepaction"] > [Deploying SAP CRs and configuring authorization](preparing-sap.md)
sentinel Reference Kickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md
Title: Microsoft Sentinel solution for SAP® applications container kickstart deployment script reference | Microsoft Docs description: Description of command line options available with kickstart deployment script--++ Previously updated : 05/24/2023 Last updated : 04/03/2024 # Kickstart script reference ## Script overview
-Simplify the [deployment of the container hosting the SAP data connector](deploy-data-connector-agent-container.md) by using the provided **Kickstart script** (available at [Microsoft Sentinel solution for SAP® applications GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)), which can also enable different modes of secrets storage, configure SNC, and more.
+Simplify the [deployment of the container hosting the SAP data connector](deploy-data-connector-agent-container.md) by using the provided **Kickstart script** (available at [Microsoft Sentinel solution for SAP® applications GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)), which can also enable different modes of secrets storage, configure Secure Network Communications (SNC), and more.
## Parameter reference
The following parameters are configurable. You can see examples of how these par
**Required:** No. `kvmi` is assumed by default.
-**Explanation:** Specifies whether secrets (username, password, log analytics ID and shared key) should be stored in local configuration file, or in Azure Key Vault. Also controls whether authentication to Azure Key Vault is done using the VM's Azure system-assigned managed identity or a Microsoft Entra registered-application identity.
+**Explanation:** Specifies whether secrets (username, password, Log Analytics workspace ID, and shared key) are stored in a local configuration file or in Azure Key Vault. Also controls whether authentication to Azure Key Vault is done using the VM's Azure system-assigned managed identity or a Microsoft Entra registered-application identity.
If set to `kvmi`, Azure Key Vault is used to store secrets, and authentication to Azure Key Vault is done using the virtual machine's Azure system-assigned managed identity.
-If set to `kvsi`, Azure Key Vault is used to store secrets, and authentication to Azure Key Vault is done using a Microsoft Entra registered-application identity. Usage of `kvsi` mode requires `--appid`, `--appsecret` and `--tenantid` values.
+If set to `kvsi`, Azure Key Vault is used to store secrets, and authentication to Azure Key Vault is done using a Microsoft Entra registered-application identity. Usage of `kvsi` mode requires `--appid`, `--appsecret`, and `--tenantid` values.
-If set to `cfgf`, configuration file stored locally will be used to store secrets.
+If set to `cfgf`, secrets are stored in a local configuration file.
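As a minimal sketch of how this mode combines with other parameters (the script filename and the secret-storage switch name `--keymode` aren't shown in this excerpt and are assumptions; `<APP_ID>`, `<APP_SECRET>`, and `<TENANT_ID>` are placeholders):

```bash
# Hypothetical kvsi-mode invocation: secrets in Azure Key Vault, authenticating
# with a Microsoft Entra registered-application (service principal) identity.
# --appid, --appsecret, and --tenantid are required in kvsi mode.
./sapcon-sentinel-kickstart.sh \
    --keymode kvsi \
    --appid <APP_ID> \
    --appsecret <APP_SECRET> \
    --tenantid <TENANT_ID>
```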
#### ABAP server connection mode
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** No. If not specified, the default is `abap`.
-**Explanation:** Defines whether the data collector agent should connect to the ABAP server directly, or through a message server. Use `abap` to have the agent connect directly to the ABAP server, whose name you can define using the `--abapserver` parameter (though if you don't, [you will still be prompted for it](#abap-server-address)). Use `mserv` to connect through a message server, in which case you **must** specify the `--messageserverhost`, `--messageserverport`, and `--logongroup` parameters.
+**Explanation:** Defines whether the data collector agent should connect to the ABAP server directly, or through a message server. Use `abap` to have the agent connect directly to the ABAP server, whose name you can define using the `--abapserver` parameter (though if you don't, [you're still prompted for it](#abap-server-address)). Use `mserv` to connect through a message server, in which case you **must** specify the `--messageserverhost`, `--messageserverport`, and `--logongroup` parameters.
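For example, a message-server connection might look like the following sketch (the connection-mode switch name `--connectionmode`, the script filename, and the host, port, and group values are all assumptions for illustration):

```bash
# Hypothetical mserv-mode invocation: connect through a message server.
# --messageserverhost, --messageserverport, and --logongroup are required in this mode.
./sapcon-sentinel-kickstart.sh \
    --connectionmode mserv \
    --messageserverhost sapmsg.contoso.local \
    --messageserverport 3600 \
    --logongroup "PUBLIC"
```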
#### Configuration folder location
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<servername>`
-**Required:** No. If the parameter isn't specified and if the [ABAP server connection mode](#abap-server-connection-mode) parameter is set to `abap`, you will be prompted for the server hostname/IP address.
+**Required:** No. If the parameter isn't specified and if the [ABAP server connection mode](#abap-server-connection-mode) parameter is set to `abap`, you're prompted for the server hostname/IP address.
**Explanation:** Used only if the connection mode is set to `abap`, this parameter contains the Fully Qualified Domain Name (FQDN), short name, or IP address of the ABAP server to connect to.
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<system number>`
-**Required:** No. If not specified, user will be prompted for the system number.
+**Required:** No. If not specified, the user is prompted for the system number.
**Explanation:** Specifies the SAP system instance number to connect to.
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<SID>`
-**Required:** No. If not specified, user will be prompted for the system ID.
+**Required:** No. If not specified, the user is prompted for the system ID.
**Explanation:** Specifies the SAP system ID to connect to.
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<client number>`
-**Required:** No. If not specified, user will be prompted for the client number.
+**Required:** No. If not specified, the user is prompted for the client number.
**Explanation:** Specifies the client number to connect to.
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`.
-**Explanation:** Specifies the logon group to use when connecting to a message server. Can be used **only** if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`. If the logon group name contains spaces, they should be passed in double quotes, as in the example `--logongroup "my logon group"`.
+**Explanation:** Specifies the logon group to use when connecting to a message server. Can be used **only** if [ABAP server connection mode](#abap-server-connection-mode) is set to `mserv`. If the logon group name contains spaces, pass it in double quotes, as in the example `--logongroup "my logon group"`.
#### Logon username
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<username>`
-**Required:** No, user will be prompted for username, if **not** using SNC (X.509) for authentication if not supplied.
+**Required:** No. If not supplied, the user is prompted for the username if they are **not** using SNC (X.509) for authentication.
-**Explanation:** Username that will be used to authenticate to ABAP server.
+**Explanation:** Username used to authenticate to the ABAP server.
#### Logon password
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<password>`
-**Required:** No, user will be prompted for password, if **not** using SNC (X.509) for authentication if not supplied. Password input will then be masked.
+**Required:** No. If not supplied, the user is prompted for the password if they're **not** using SNC (X.509) for authentication. Password input is masked.
-**Explanation:** Password that will be used to authenticate to ABAP server.
+**Explanation:** Password used to authenticate to the ABAP server.
#### NetWeaver SDK file location
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<filename>`
-**Required:** No, script will attempt to locate nwrfc*.zip file in the current folder, if not found, user will be prompted to supply a valid NetWeaver SDK archive file.
+**Required:** No. The script attempts to locate the nwrfc*.zip file in the current folder. If it isn't found, the user is prompted to supply a valid NetWeaver SDK archive file.
-**Explanation:** NetWeaver SDK file path. A valid SDK is required for the data collector to operate. For more information see [Prerequisites for deploying Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#table-of-prerequisites).
+**Explanation:** NetWeaver SDK file path. A valid SDK is required for the data collector to operate. For more information, see [Prerequisites for deploying Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#table-of-prerequisites).
#### Enterprise Application ID
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
-**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application ID.
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#create-a-virtual-machine-and-configure-access-to-your-credentials). This parameter specifies the application ID.
#### Enterprise Application secret
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
-**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the application secret.
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#create-a-virtual-machine-and-configure-access-to-your-credentials). This parameter specifies the application secret.
#### Tenant ID
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if [Secret storage location](#secret-storage-location) is set to `kvsi`.
-**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#deploy-the-data-connector-agent-container). This parameter specifies the Microsoft Entra tenant ID.
+**Explanation:** When Azure Key Vault authentication mode is set to `kvsi`, authentication to key vault is done using an [enterprise application (service principal) identity](deploy-data-connector-agent-container.md?tabs=registered-application#create-a-virtual-machine-and-configure-access-to-your-credentials). This parameter specifies the Microsoft Entra tenant ID.
#### Key Vault Name
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<key vaultname>`
-**Required:** No. If [Secret storage location](#secret-storage-location) is set to `kvsi` or `kvmi`, the script will prompt for the value if not supplied.
+**Required:** No. If [Secret storage location](#secret-storage-location) is set to `kvsi` or `kvmi`, the script prompts for the value if not supplied.
-**Explanation:** If [Secret storage location](#secret-storage-location) is set to `kvsi` or `kvmi`, then the key vault name (in FQDN format) should be entered here.
+**Explanation:** If [Secret storage location](#secret-storage-location) is set to `kvsi` or `kvmi`, the key vault name (in FQDN format) should be entered here.
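For illustration only, a deployment that stores secrets in Azure Key Vault with an enterprise application identity might combine the four preceding parameters as follows. This is a hedged sketch: the script name (`sapcon-sentinel-kickstart.sh`) and the switch names (`--keymode`, `--appid`, `--appsecret`, `--tenantid`, `--kvaultname`) are assumptions not defined in this section, and all values are placeholders.
```
# Hedged sketch only: script and switch names are assumptions; replace all
# placeholder values with your own before running.
./sapcon-sentinel-kickstart.sh \
    --keymode kvsi \
    --appid "<enterprise-application-id>" \
    --appsecret "<enterprise-application-secret>" \
    --tenantid "<microsoft-entra-tenant-id>" \
    --kvaultname "<key-vault-name>.vault.azure.net"
```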
#### Log Analytics workspace ID
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<id>`
-**Required:** No. If not supplied, the script will prompt for the workspace ID.
+**Required:** No. If not supplied, the script prompts for the workspace ID.
-**Explanation:** Log Analytics workspace ID where the data collector will send the data to. To locate the workspace ID, locate the Log Analytics workspace in the Azure portal: open Microsoft Sentinel, select **Settings** in the **Configuration** section, select **Workspace settings**, then select **Agents Management**.
+**Explanation:** Log Analytics workspace ID where the data collector sends the data. To locate the workspace ID, locate the Log Analytics workspace in the Azure portal: open Microsoft Sentinel, select **Settings** in the **Configuration** section, select **Workspace settings**, then select **Agents Management**.
#### Log Analytics key
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** `<key>`
-**Required:** No. If not supplied, script will prompt for the workspace key. Input will be masked in this case.
+**Required:** No. If not supplied, the script prompts for the workspace key. Input is masked.
-**Explanation:** Primary or secondary key of the Log Analytics workspace where data collector will send the data to. To locate the workspace Primary or Secondary Key, locate the Log Analytics workspace in Azure portal: open Microsoft Sentinel, select **Settings** in the **Configuration** section, select **Workspace settings**, then select **Agents Management**.
+**Explanation:** Primary or secondary key of the Log Analytics workspace where the data collector sends the data. To locate the workspace primary or secondary key, locate the Log Analytics workspace in the Azure portal: open Microsoft Sentinel, select **Settings** in the **Configuration** section, select **Workspace settings**, then select **Agents Management**.
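As a hedged sketch, the workspace ID and key can be supplied up front to avoid the interactive prompts described above; the switch names shown here are assumptions, since this section doesn't list them.
```
# Hedged sketch only: switch names are assumptions. Supplying both values
# suppresses the interactive prompts for the workspace ID and key.
./sapcon-sentinel-kickstart.sh \
    --loganalyticswsid "<workspace-id>" \
    --loganalyticskey "<primary-or-secondary-key>"
```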
#### Use X.509 (SNC) for authentication
If set to `cfgf`, configuration file stored locally will be used to store secret
**Parameter values:** None
-**Required:** No. If not specified, username and password will be used for authentication. If specified, `--cryptolib`, `--sapgenpse`, combination of either `--client-cert` and `--client-key`, or `--client-pfx` and `--client-pfx-passwd` as well as `--server-cert`, and in certain cases `--cacert` switches is required.
+**Required:** No. If not specified, the username and password are used for authentication. If specified, the `--cryptolib` and `--sapgenpse` switches are required, together with either `--client-cert` and `--client-key` or `--client-pfx` and `--client-pfx-passwd`, as well as `--server-cert` and, in certain cases, `--cacert`.
-**Explanation:** Switch specifies that X.509 authentication will be used to connect to ABAP server, rather than username/password authentication. See [SNC configuration documentation](configure-snc.md) for more information.
+**Explanation:** Specifies that X.509 authentication is used to connect to the ABAP server, rather than username/password authentication. For more information, see [Deploy the Microsoft Sentinel for SAP data connector by using SNC](configure-snc.md).
#### SAP Cryptographic library path
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** Yes, if `--use-snc` is specified **and** the certificate is issued by an enterprise certification authority.
-**Explanation:** If the certificate is self-signed, it has no issuing CA, so there is no trust chain that needs to be validated. If the certificate is issued by an enterprise CA, the issuing CA certificate and any higher-level CA certificates need to be validated. Use separate instances of the `--cacert` switch for each CA in the trust chain, and supply the full filenames of the public certificates of the enterprise certificate authorities.
+**Explanation:** If the certificate is self-signed, it has no issuing CA, so there's no trust chain that needs to be validated. If the certificate is issued by an enterprise CA, the issuing CA certificate and any higher-level CA certificates need to be validated. Use separate instances of the `--cacert` switch for each CA in the trust chain, and supply the full filenames of the public certificates of the enterprise certificate authorities.
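Putting the SNC switches together, an X.509 deployment against an enterprise CA might look like the following hedged sketch. The switches are the ones named in this section; the script name and all file paths are assumptions, and `--cacert` is repeated once per CA in the trust chain.
```
# Hedged sketch only: the script name and all paths are assumptions.
./sapcon-sentinel-kickstart.sh \
    --use-snc \
    --cryptolib /opt/sap/cryptolib/libsapcrypto.so \
    --sapgenpse /opt/sap/cryptolib/sapgenpse \
    --client-cert /opt/sap/certs/client.crt \
    --client-key /opt/sap/certs/client.key \
    --server-cert /opt/sap/certs/server.crt \
    --cacert /opt/sap/certs/issuing-ca.crt \
    --cacert /opt/sap/certs/root-ca.crt
```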
#### Client PFX certificate path
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** No
-**Explanation:** Containers, that cannot establish connection to Microsoft Azure services directly and require connection via a proxy server require `--http-proxy` switch to define proxy url for the container. Format of the proxy url is `http://hostname:port`.
+**Explanation:** Containers that can't establish a connection to Microsoft Azure services directly and require connection via a proxy server require the `--http-proxy` switch to define the proxy URL for the container. The format of the proxy URL is `http://hostname:port`.
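For example, a container behind a corporate proxy might be deployed with a switch like the following; the script name and proxy address in this hedged sketch are assumptions used for illustration.
```
# Hedged sketch only: proxy hostname and port are placeholders.
./sapcon-sentinel-kickstart.sh --http-proxy http://proxy.example.com:8080
```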
#### Host Based Networking
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** No.
-**Explanation:** If this switch is specified, the agent will use host-based networking configuration. This can solve internal DNS resolution issues in some cases.
+**Explanation:** If this switch is specified, the agent uses host-based networking. This can solve internal DNS resolution issues in some cases.
#### Confirm all prompts
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** No
-**Explanation:** If `--confirm-all-prompts` switch is specified, script will not pause for any user confirmations and will only prompt if user input is required. Use `--confirm-all-prompts` switch to achieve a zero-touch deployment.
+**Explanation:** If the `--confirm-all-prompts` switch is specified, the script doesn't pause for any user confirmations and only prompts if user input is required. Use the `--confirm-all-prompts` switch to achieve a zero-touch deployment.
#### Use preview build of the container
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** No
-**Explanation:** By default, container deployment kickstart script deploys the container with :latest tag. Public preview features are published to :latest-preview tag. To ensure container deployment script uses public preview version of the container, specify the `--preview` switch.
+**Explanation:** By default, the container deployment kickstart script deploys the container with the `:latest` tag. Public preview features are published to the `:latest-preview` tag. To ensure that the container deployment script uses the public preview version of the container, specify the `--preview` switch.
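Combining the two preceding switches gives a zero-touch deployment of the preview container build; as elsewhere, the script name in this hedged sketch is an assumption.
```
# Hedged sketch only: deploy the :latest-preview build without pausing for
# user confirmations.
./sapcon-sentinel-kickstart.sh --confirm-all-prompts --preview
```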
## Next steps
sentinel Sap Audit Controls Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-audit-controls-workbook.md
In the **Select a rule to configure** table, you'll see the list of activated an
- The counts and graph lines of **Incidents** and **Alerts** generated by each rule are displayed. (Identical counts suggest that [alert grouping is disabled](../detect-threats-custom.md#alert-grouping).) -- Also shown are columns indicating that the rule's [incident creation setting is enabled](../detect-threats-custom.md#incident-settings) (the **Incidents** column), and what the source of the rule is (the **Source** column)&mdash;*Gallery*, *Content hub*, or *Custom*.
+- Also shown are columns indicating whether the rule's [incident creation setting is enabled](../detect-threats-custom.md#configure-the-incident-creation-settings) (the **Incidents** column), and what the source of the rule is (the **Source** column)&mdash;*Gallery*, *Content hub*, or *Custom*.
-- If the **Recommended configuration** for that rule is "As alert only," then you should consider [disabling the incident creation setting](../detect-threats-custom.md#incident-settings) in the rule (see below).
+- If the **Recommended configuration** for that rule is "As alert only," then you should consider [disabling the incident creation setting](../detect-threats-custom.md#configure-the-incident-creation-settings) in the rule (see below).
- When you select a rule, a details panel appears with information about the rule.
sentinel Update Sap Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/update-sap-data-connector.md
Title: Update Microsoft Sentinel's SAP data connector agent description: This article shows you how to update an already existing SAP data connector to its latest version.--++ Previously updated : 12/31/2022 Last updated : 03/27/2024+ # Update Microsoft Sentinel's SAP data connector agent
This article shows you how to update an already existing Microsoft Sentinel for
To get the latest features, you can [enable automatic updates](#automatically-update-the-sap-data-connector-agent-preview) for the SAP data connector agent, or [manually update the agent](#manually-update-sap-data-connector-agent).
-Note that the automatic or manual updates described in this article are relevant to the SAP connector agent only, and not to the Microsoft Sentinel Solution for SAP. To successfully update the solution, your agent needs to be up to date. The solution is updated separately.
+The automatic or manual updates described in this article are relevant to the SAP connector agent only, and not to the Microsoft Sentinel solution for SAP. To successfully update the solution, your agent needs to be up to date. The solution is updated separately.
++
+## Prerequisites
+
+Before you start, make sure that you have all the prerequisites for deploying the Microsoft Sentinel solution for SAP applications.
+
+For more information, see [Prerequisites for deploying Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md).
## Automatically update the SAP data connector agent (Preview)
You can choose to enable automatic updates for the connector agent on [all exist
### Enable automatic updates on all existing containers
-To enable automatic updates on all existing containers (all containers with a connected SAP agent), run the following command on the collector VM:
+To enable automatic updates on all existing containers (all containers with a connected SAP agent), run the following command on the collector machine:
``` wget -O sapcon-sentinel-auto-update.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-auto-update.sh && bash ./sapcon-sentinel-auto-update.sh
wget -O sapcon-sentinel-auto-update.sh https://raw.githubusercontent.com/Azure/A
The command creates a cron job that runs daily and checks for updates. If the job detects a new version of the agent, it updates the agent on all containers that exist when you run the command above. If a container is running a Preview version that is newer than the latest version (the version that the job installs), the job doesn't update that container.
-If you add containers after you run the cron job, the new containers are not automatically updated. To update these containers, in the */opt/sapcon/[SID or Agent GUID]/settings.json* file, define the `auto_update` parameter for each of the containers as `true`.
+If you add containers after you run the cron job, the new containers aren't updated automatically. To update these containers, in the */opt/sapcon/[SID or Agent GUID]/settings.json* file, define the `auto_update` parameter for each of the containers as `true`.
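As a hedged sketch, you could flip that parameter with `jq` (an assumption; any editor works), using the settings file path given above with a placeholder SID:
```
# Hedged sketch only: "A4H" is a placeholder SID and the use of jq is an
# assumption; edit the file any way you like.
sudo jq '.auto_update = true' /opt/sapcon/A4H/settings.json > /tmp/settings.json \
    && sudo mv /tmp/settings.json /opt/sapcon/A4H/settings.json
```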
The logs for this update are under */var/log/sapcon-sentinel-register-autoupdate.log*.
sentinel Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/search-jobs.md
Title: Search across long time spans in large datasets - Microsoft Sentinel
description: Learn how to use search jobs to search large datasets. Previously updated : 10/17/2022 Last updated : 03/07/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Search across long time spans in large datasets Use a search job when you start an investigation to find specific events in logs up to seven years ago. You can search events across all your logs, including events in Analytics, Basic, and Archived log plans. Filter and look for events that match your criteria.
-For more information on search job concepts and limitations, see [Start an investigation by searching large datasets](investigate-large-datasets.md) and [Search jobs in Azure Monitor](../azure-monitor/logs/search-jobs.md).
+- For more information on search job concepts and limitations, see [Start an investigation by searching large datasets](investigate-large-datasets.md) and [Search jobs in Azure Monitor](../azure-monitor/logs/search-jobs.md).
+
+- Search jobs across certain data sets might incur extra charges. For more information, see [Microsoft Sentinel pricing page](billing.md).
+ ## Start a search job
-Go to **Search** in Microsoft Sentinel to enter your search criteria.
+Go to **Search** in Microsoft Sentinel from the Azure portal or the Microsoft Defender portal to enter your search criteria. Depending on the size of the target dataset, search times vary. While most search jobs take a few minutes to complete, searches across massive datasets can run for up to 24 hours.
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-1. Under **General**, select **Search**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Search**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Search**.
1. Select the **Table** menu and choose a table for your search. 1. In the **Search** box, enter a search term.
+ #### [Azure portal](#tab/azure-portal)
:::image type="content" source="media/search-jobs/search-job-criteria.png" alt-text="Screenshot of search page with search criteria of administrator, time range last 90 days, and table selected." lightbox="media/search-jobs/search-job-criteria.png":::
-1. Click the **Run search** link to open the advanced KQL editor and a preview of the results for a seven day time range.
-
- :::image type="content" source="media/search-jobs/search-job-advanced-kql.png" alt-text="Screenshot of KQL editor with the initial search and the results for a seven day period." lightbox="media/search-jobs/search-job-advanced-kql.png":::
+ #### [Defender portal](#tab/defender-portal)
+ :::image type="content" source="media/search-jobs/search-job-defender-portal.png" alt-text="Screenshot of search page with search criteria of administrator, time range last 90 days, and table selected." lightbox="media/search-jobs/search-job-defender-portal.png":::
+
+1. Select **Start** to open the advanced Kusto Query Language (KQL) editor and a preview of the results for a set time range.
-1. You can modify the KQL and see an updated preview of the search results by selecting **Run**.
+1. Change the KQL query as needed and select **Run** to get an updated preview of the search results.
- :::image type="content" source="media/search-jobs/search-job-advanced-kql-revise.png" alt-text="Screenshot of KQL editor with revised search." lightbox="media/search-jobs/search-job-advanced-kql-revise.png":::
+ :::image type="content" source="media/search-jobs/search-job-advanced-kql-edit.png" alt-text="Screenshot of KQL editor with revised search.":::
-1. Once you're satisfied with the query and the search results preview, click on the 3 dots **...** > toggle the **Search job mode** switch > click the **Search job** button.
+1. When you're satisfied with the query and the search results preview, select the ellipsis **...** and toggle **Search job mode** on.
:::image type="content" source="media/search-jobs/search-job-advanced-kql-ellipsis.png" alt-text="Screenshot of KQL editor with revised search with ellipsis highlighted for Search job mode." lightbox="media/search-jobs/search-job-advanced-kql-ellipsis.png":::- 1. Select the appropriate **Time range**.
+1. Resolve any KQL issues indicated by a squiggly red line in the editor.
+1. When you're ready to start the search job, select **Search job**.
+1. Enter a new table name to store the search job results.
+1. Select **Run a search job**.
- :::image type="content" source="media/search-jobs/search-job-advanced-kql-custom-time-range.png" alt-text="Screenshot of KQL editor with revised search, and custom time range." lightbox="media/search-jobs/search-job-advanced-kql-custom-time-range.png":::
-
-1. Make sure to resolve any KQL issues indicated by a squiggly red line in the editor. When you're ready to start the search job, select **Search**.
-
-1. Enter a new table name where the search job results will be stored > click **Run a search job**.
-
- When the search job starts, wait for a notification, and the **Done** button to be available. Once the notification is displayed, click **Done** to close the search pane and return to the search overview page to view the job status.
-
-1. Wait for your search job to be completed. Depending on the size of the target dataset, search times vary. While most search jobs take a few minutes to complete, searches across massive data sets that run up to 24 hours are also supported. Search jobs across certain data sets may incur extra charges. Refer to the [Microsoft Sentinel pricing page](billing.md) for more information.
+1. Wait for the notification **Search job is done** to view the results.
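Search jobs can also be created programmatically through the Azure Monitor Tables API covered in the linked Azure Monitor article. The following is a hedged curl sketch, not the procedure documented here: the resource names are placeholders, the `api-version` might differ in your environment, and the one firm requirement from the linked article is that the results table name ends in `_SRCH`.
```
# Hedged sketch only: all names are placeholders; see the Azure Monitor
# search jobs article for the authoritative API reference.
TOKEN=$(az account get-access-token --query accessToken --output tsv)
curl -X PUT \
  "https://management.azure.com/subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/tables/AdminSearch_SRCH?api-version=2021-12-01-preview" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"searchResults": {
        "query": "SecurityEvent | where EventData has \"administrator\"",
        "startSearchTime": "2024-01-01T00:00:00Z",
        "endSearchTime": "2024-03-31T00:00:00Z"}}}'
```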
## View search job results View the status and results of your search job by going to the **Saved Searches** tab.
-1. In your Microsoft Sentinel workspace, select **Search** > **Saved Searches**.
+1. In Microsoft Sentinel, select **Search** > **Saved Searches**.
1. On the search card, select **View search results**. :::image type="content" source="media/search-jobs/view-search-results.png" alt-text="Screenshot that shows the link to view search results at the bottom of the search job card." lightbox="media/search-jobs/view-search-results.png":::
-1. By default, you see all the results that match your original search criteria.
+ By default, you see all the results that match your original search criteria.
-1. To refine the list of results returned from the search table, click the **Add filter** button.
+1. To refine the list of results returned from the search table, select **Add filter**.
- :::image type="content" source="media/search-jobs/search-results-filter.png" alt-text="Screenshot that shows search job results with added filters." lightbox="media/search-jobs/search-results-filter.png":::
-
-1. As you're reviewing your search job results, click **Add bookmark**, or select the bookmark icon to preserve a row. Adding a bookmark allows you to tag events, add notes, and attach these events to an incident for later reference.
+1. As you're reviewing your search job results, select **Add bookmark**, or select the bookmark icon to preserve a row. Adding a bookmark allows you to tag events, add notes, and attach these events to an incident for later reference.
:::image type="content" source="media/search-jobs/search-results-add-bookmark.png" alt-text="Screenshot that shows search job results with a bookmark in the process of being added." lightbox="media/search-jobs/search-results-add-bookmark.png":::
-1. Click the **Columns** button and select the checkbox next to columns you'd like to add to the results view.
-
-1. Add the *Bookmarked* filter to only show preserved entries. Click the **View all bookmarks** button to go the **Hunting** page where you can add a bookmark to an existing incident.
+1. Select **Columns** and select the checkbox next to the columns you'd like to add to the results view.
+1. Add the **Bookmarked** filter to only show preserved entries.
+1. Select **View all bookmarks** to go to the **Hunting** page where you can add a bookmark to an existing incident.
## Next steps
-To learn more, see the following topics.
+To learn more, see the following articles.
- [Hunt with bookmarks](bookmarks.md) - [Restore archived logs](restore.md)
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
Title: Microsoft Sentinel content hub catalog | Microsoft Docs
-description: This article lists the solutions currently available in the content hub for Microsoft Sentinel and where to find the full list of solutions.
+description: Learn about domain specific solutions available in the content hub for Microsoft Sentinel and where to find the full list of solutions.
Previously updated : 08/08/2023 Last updated : 03/01/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Microsoft Sentinel content hub catalog
This article helps you find the full list of the solutions available in Microsof
When you deploy a solution, the security content included with the solution, such as data connectors, playbooks, or workbooks, are available in the relevant views for the content. For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md). + ## All solutions for Microsoft Sentinel To get the full list of all solutions available in Microsoft Sentinel, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=solution-templates&page=1&search=sentinel). Search for a specific product solution or provider. Filter by **Product Type** = **Solution Templates** to see solutions for Microsoft Sentinel.
sentinel Sentinel Solutions Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-delete.md
Title: Delete installed Microsoft Sentinel out-of-the-box content and solutions
-description: Remove solutions and content you've deployed in Microsoft Sentinel.
+description: Remove solutions and content you deployed in Microsoft Sentinel.
Previously updated : 06/22/2023 Last updated : 03/01/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Delete installed Microsoft Sentinel out-of-the-box content and solutions
-If you've installed a Microsoft Sentinel out-of-the-box solution, you can remove content items from the solution or delete the installed solution. If you later need to restore deleted content items, select **Reinstall** on the solution. Similarly, you can restore the solution by re-installing the solution.
+If you installed a Microsoft Sentinel out-of-the-box solution, you can remove content items from the solution or delete the installed solution. If you later need to restore deleted content items, select **Reinstall** on the solution. Similarly, you can restore a deleted solution by reinstalling it.
+ ## Delete content items Delete content items for an installed solution deployed by the content hub.
-1. In the content hub, select an installed solution where the version is 2.0.0 or higher.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
+1. Select an installed solution where the version is 2.0.0 or higher.
1. On the solutions details page, select **Manage**. 1. Select the content item or items you want to delete.
-1. Select **Delete items**.
+1. Select **Delete**.
:::image type="content" source="media/sentinel-solutions-delete/manage-solution-delete-item.png" alt-text="Screenshot of solution with content items selected for deletion.":::
Delete a solution and the related content templates from the content hub or in t
1. On the solutions details page, select **Delete**. 1. Select **Yes** to delete the solution and the templates.
- :::image type="content" source="media/sentinel-solutions-delete/manage-solution-delete.png" alt-text="Screenshot of the delete confirmation prompt.":::
+ :::image type="content" source="media/sentinel-solutions-delete/manage-solution-delete.png" alt-text="Screenshot of the confirmation prompt to delete the solution.":::
To restore an out-of-the-box solution from the content hub, select the solution and **Install**.
-## Next steps
+## Related articles
- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md) - [About Microsoft Sentinel content and solutions](sentinel-solutions.md)
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
Title: Discover and deploy Microsoft Sentinel out-of-the-box content from Conten
description: Learn how to find and deploy Sentinel packaged solutions containing data connectors, analytics rules, hunting queries, workbooks, and other content. Previously updated : 02/15/2024 Last updated : 03/01/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Discover and manage Microsoft Sentinel out-of-the-box content
The Microsoft Sentinel Content hub is your centralized location to discover and
If you're a partner who wants to create your own solution, see the [Microsoft Sentinel Solutions Build Guide](https://aka.ms/sentinelsolutionsbuildguide) for solution authoring and publishing. + ## Prerequisites In order to install, update, and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
For more information about other roles and permissions supported for Microsoft S
The content hub offers the best way to find new content or manage the solutions you already installed.
-1. For Microsoft Sentinel in the [Azure portal](https://portal.microsoft.com), under **Content management**, select **Content hub**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
- The **Content hub** page displays a searchable grid or list of solutions and standalone content.
+ The **Content hub** page displays a searchable grid or a list of solutions and standalone content.
1. Filter the list displayed, either by selecting specific values from the filters, or entering any part of a content name or description in the **Search** field.
The content hub offers the best way to find new content or manage the solutions
Each content item shows categories that apply to it, and solutions show the types of content included. For example, in the following image, the **Cisco Umbrella** solution lists one of its categories as **Security - Cloud Security**, and indicates it includes a data connector, analytics rules, hunting queries, playbooks, and more.
- :::image type="content" source="./media/sentinel-solutions-deploy/solutions-list.png" alt-text="Screenshot of the Microsoft Sentinel content hub.":::
+
+ #### [Azure portal](#tab/azure-portal)
+ :::image type="content" source="./media/sentinel-solutions-deploy/solutions-list.png" alt-text="Screenshot of the Microsoft Sentinel content hub in the Azure portal.":::
+
+ #### [Defender portal](#tab/defender-portal)
+ :::image type="content" source="./media/sentinel-solutions-deploy/solutions-list-defender.png" alt-text="Screenshot of the Microsoft Sentinel content hub in the Defender portal.":::
## Install or update content
sentinel Sentinel Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions.md
Title: About Microsoft Sentinel content and solutions | Microsoft Docs
-description: This article describes Microsoft Sentinel content and solutions, which customers can use to find data analysis tools packaged together with data connectors.
+description: Learn about Microsoft Sentinel content and solutions that include data analysis tools packaged together with data connectors.
Previously updated : 06/22/2023 Last updated : 03/01/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# About Microsoft Sentinel content and solutions
Microsoft Sentinel *content* is Security Information and Event Management (SIEM)
Content in Microsoft Sentinel includes any of the following types: - **[Data connectors](connect-data-sources.md)** provide log ingestion from different sources into Microsoft Sentinel-- **[Parsers](normalization-about-parsers.md)** provide log formatting/transformation into [ASIM](normalization.md) formats, supporting usage across various Microsoft Sentinel content types and scenarios
+- **[Parsers](normalization-about-parsers.md)** provide log formatting/transformation into [Advanced Security Information Model (ASIM)](normalization.md) formats, supporting usage across various Microsoft Sentinel content types and scenarios
- **[Workbooks](get-visibility.md)** provide monitoring, visualization, and interactivity with data in Microsoft Sentinel, highlighting meaningful insights for users - **[Analytics rules](detect-threats-built-in.md)** provide alerts that point to relevant SOC actions via incidents - **[Hunting queries](hunting.md)** are used by SOC teams to proactively hunt for threats in Microsoft Sentinel
Microsoft Sentinel offers these content types as *solutions* and *standalone* it
You can either customize out-of-the-box (OOTB) content for your own needs, or you can create your own solution with content to share with others in the community. For more information, see the [Microsoft Sentinel Solutions Build Guide](https://aka.ms/sentinelsolutionsbuildguide) for solutions' authoring and publishing. + ## Discover and manage Microsoft Sentinel content Use the Microsoft Sentinel **Content hub** to centrally discover and install out-of-the-box (OOTB) content. The Microsoft Sentinel Content hub provides in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical OOTB solutions and content in Microsoft Sentinel. -- In the **Content hub**, filter by [categories](#categories-for-microsoft-sentinel-out-of-the-box-content-and-solutions) and other parameters, or use the powerful text search, to find the content that works best for your organization's needs. The **Content hub** also indicates the [support model](#support-models-for-microsoft-sentinel-out-of-the-box-content-and-solutions) applied to each piece of content, as some content is maintained by Microsoft and others are maintained by partners or the community.
+- Filter by [categories](#categories-for-microsoft-sentinel-out-of-the-box-content-and-solutions) and other parameters, or use the powerful text search, to find the content that works best for your organization's needs.
+
+ The **Content hub** also indicates the [support model](#support-models-for-microsoft-sentinel-out-of-the-box-content-and-solutions) applied to each piece of content, as some content is maintained by Microsoft and others are maintained by partners or the community.
+
+- Manage updates for out-of-the-box content in the **Content hub**. Or, for custom content, manage updates from the **Repositories** page. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
- Manage [updates for out-of-the-box content](sentinel-solutions-deploy.md#install-or-update-content) via the Microsoft Sentinel **Content hub**, and for custom content via the **Repositories** page.
+- Customize out-of-the-box content for your own needs, or create custom content, including analytics rules, hunting queries, notebooks, workbooks, and more.
-- Customize out-of-the-box content for your own needs, or create custom content, including analytics rules, hunting queries, notebooks, workbooks, and more. Manage your custom content directly in your Microsoft Sentinel workspace, via the [Microsoft Sentinel API](/rest/api/securityinsights/), or in your own source control repository, via the Microsoft Sentinel [Repositories](ci-cd.md) page.
+ Manage your custom content directly in your Microsoft Sentinel workspace by using the Microsoft Sentinel API or from your own source control repository. For more information, see [Microsoft Sentinel API](/rest/api/securityinsights/) and [Deploy custom content from your repository](ci-cd.md).
### Why content hub solutions?
Microsoft Sentinel out-of-the-box content can be applied with one or more of the
| **Compliance** | Compliance product, services, and protocols | | **DevOps** | Development operations tools and services | | **Identity** | Identity service providers and integrations |
-| **Internet of Things (IoT)** | IoT, OT devices and infrastructure, industrial control services |
+| **Internet of Things (IoT)** | IoT, operational technology (OT) devices and infrastructure, and industrial control services |
| **IT Operations**| Products and services managing IT | | **Migration** | Migration enablement products, services, and | | **Networking** | Network products, services, and tools | | **Platform** | Microsoft Sentinel generic or framework components, Cloud infrastructure, and platform| | **Security - Others** | Other security products and services with no other clear category | | **Security - Threat Intelligence** | Threat intelligence platforms, feeds, products, and services |
-| **Security - Threat Protection** | Threat protection, email protection, and XDR and endpoint protection products and services |
+| **Security - Threat Protection** | Threat protection, email protection, extended detection and response (XDR), and endpoint protection products and services |
| **Security - 0-day Vulnerability** | Specialized solutions for zero-day vulnerability attacks like [Nobelium](../security/fundamentals/recover-from-identity-compromise.md) | | **Security - Automation (SOAR)** | Security automations, SOAR (Security Operations and Automated Responses), security operations, and incident response products and services. | | **Security - Cloud Security** | CASB (Cloud Access Service Broker), CWPP (Cloud workload protection platforms), CSPM (Cloud security posture management and other Cloud Security products and services |
Both Microsoft and other organizations author Microsoft Sentinel out-of-the-box
| Support model | Description | | - | -- |
-| **Microsoft-supported**| Applies to: <br>- Content/solutions where Microsoft is the data provider, where relevant, and author. <br> - Some Microsoft-authored content/solutions for non-Microsoft data sources. <br><br> Microsoft supports and maintains content/solutions in this support model in accordance with [Microsoft Azure Support Plans](https://azure.microsoft.com/support/options/#overview). <br>Partners or the Community support content/solutions that are authored by any party other than Microsoft.|
+| **Microsoft-supported**| Applies to: <br>- Content/solutions where Microsoft is the data provider, where relevant, and author. <br> - Some Microsoft-authored content/solutions for non-Microsoft data sources. <br><br> Microsoft supports and maintains content/solutions in this support model in accordance with [Microsoft Azure Support Plans](https://azure.microsoft.com/support/options/#overview). <br>Partners or the Community support content or solutions authored by any party other than Microsoft.|
|**Partner-supported** | Applies to content/solutions authored by parties other than Microsoft. <br><br> The partner company provides support or maintenance for these pieces of content/solutions. The partner company can be an Independent Software Vendor, a Managed Service Provider (MSP/MSSP), a Systems Integrator (SI), or any organization whose contact information is provided on the Microsoft Sentinel page for the selected content/solutions.<br><br> For any issues with a partner-supported solution, contact the specified support contact.|
-|**Community-supported** |Applies to content/solutions authored by Microsoft or partner developers that don't have listed contacts for support and maintenance in Microsoft Sentinel.<br><br> For questions or issues with these solutions, [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters). |
+|**Community-supported** |Applies to content or solutions authored by Microsoft or partner developers without listed contacts for support and maintenance in Microsoft Sentinel.<br><br> For questions or issues with these solutions, [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters). |
## Content sources for Microsoft Sentinel content and solutions
Each piece of content or solution has one of the following content sources:
||| |**Content hub** |Solutions deployed by the Content hub that support lifecycle management | |**Standalone** |Standalone content deployed by the Content hub that is automatically kept up-to-date |
-|**Custom** |Content or solutions you've customized in your workspace |
-|**Gallery content** |Content from the feature galleries that don't support lifecycle management. This content source is retiring soon. For more information see [OOTB content centralization changes](sentinel-content-centralize.md). |
+|**Custom** |Content or solutions you customized in your workspace |
+|**Gallery content** |Content from the feature galleries, which doesn't support lifecycle management. This content source is retiring soon. For more information, see [OOTB content centralization changes](sentinel-content-centralize.md). |
|**Repositories** |Content or solutions from a repository connected to your workspace | ## Next steps
-After you've learned about Microsoft Sentinel content, discover and install solutions and standalone content from the **Content hub** in your Microsoft Sentinel workspace.
+Discover and install solutions and standalone content from the **Content hub** in your Microsoft Sentinel workspace.
For more information, see:
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
# Microsoft Sentinel skill-up training
-This article walks you through a level 400 training to help you skill up on Microsoft Sentinel. The training comprises 21 modules that present relevant product documentation, blog posts, and other resources.
+This article walks you through a level 400 training to help you skill up on Microsoft Sentinel. The training comprises 21 self-paced modules that present relevant product documentation, blog posts, and other resources.
The modules listed here are split into five parts following the life cycle of a Security Operation Center (SOC):
Use Microsoft Sentinel, Microsoft Defender for Cloud, and Microsoft Defender XDR
* View the Better Together webinar ["OT and IOT attack detection, investigation, and response."](https://youtu.be/S8DlZmzYO2s)
-#### To monitor your multi-cloud workloads
+#### To monitor your multicloud workloads
The cloud is (still) new and often not monitored as extensively as on-premises workloads. Read this [presentation](https://techcommunity.microsoft.com/gxcuf89792/attachments/gxcuf89792/AzureSentinelBlog/243/1/L400-P2%20Use%20cases.pdf) to learn how Microsoft Sentinel can help you close the cloud monitoring gap across your clouds.
You can also send the alerts from Microsoft Sentinel to your third-party SIEM or
#### For MSSPs Because it eliminates the setup cost and is location agnostic, Microsoft Sentinel is a popular choice for providing SIEM as a service. You'll find a [list of MISA (Microsoft Intelligent Security Association) member-managed security service providers (MSSPs) that use Microsoft Sentinel](https://www.microsoft.com/security/blog/2020/07/14/microsoft-intelligent-security-association-managed-security-service-providers/). Many other MSSPs, especially regional and smaller ones, use Microsoft Sentinel but aren't MISA members.
-To start your journey as an MSSP, read the [Microsoft Sentinel Technical Playbooks for MSSPs](https://aka.ms/azsentinelmssp). More information about MSSP support is included in the next module, which covers cloud architecture and multi-tenant support.
+To start your journey as an MSSP, read the [Microsoft Sentinel Technical Playbooks for MSSPs](https://aka.ms/azsentinelmssp). More information about MSSP support is included in the next module, which covers cloud architecture and multitenant support.
## Part 2: Architecting and deploying
sentinel Soc Ml Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-ml-anomalies.md
Title: Use customizable anomalies to detect threats in Microsoft Sentinel | Micr
description: This article explains how to use the new customizable anomaly detection capabilities in Microsoft Sentinel. Previously updated : 11/02/2022 Last updated : 03/17/2024
## What are customizable anomalies?
-With attackers and defenders constantly fighting for advantage in the cybersecurity arms race, attackers are always finding ways to evade detection. Inevitably, though, attacks will still result in unusual behavior in the systems being attacked. Microsoft Sentinel's customizable, machine learning-based anomalies can identify this behavior with analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve detections, investigations, and threat hunting:
+With attackers and defenders constantly fighting for advantage in the cybersecurity arms race, attackers are always finding ways to evade detection. Inevitably, though, attacks still result in unusual behavior in the systems being attacked. Microsoft Sentinel's customizable, machine learning-based anomalies can identify this behavior with analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve detections, investigations, and threat hunting:
-- **Additional signals to improve detection**: Security analysts can use anomalies to detect new threats and make existing detections more effective. A single anomaly is not a strong signal of malicious behavior, but when combined with several anomalies that occur at different points on the kill chain, their cumulative effect is much stronger. Security analysts can enhance existing detections as well by making the unusual behavior identified by anomalies a condition for alerts to be fired.
+- **Additional signals to improve detection**: Security analysts can use anomalies to detect new threats and make existing detections more effective. A single anomaly is not a strong signal of malicious behavior, but a combination of several anomalies at different points on the kill chain sends a clear message. Security analysts can make existing detection alerts more accurate by conditioning them on the identification of anomalous behavior.
- **Evidence during investigations**: Security analysts also can use anomalies during investigations to help confirm a breach, find new paths for investigating it, and assess its potential impact. These efficiencies reduce the time security analysts spend on investigations. -- **The start of proactive threat hunts**: Threat hunters can use anomalies as context to help determine whether their queries have uncovered suspicious behavior. When the behavior is suspicious, the anomalies also point toward potential paths for further hunting. These clues provided by anomalies reduce both the time to detect a threat and its chance to cause harm.
+- **The start of proactive threat hunts**: Threat hunters can use anomalies as context to help determine whether their queries uncovered suspicious behavior. When the behavior is suspicious, the anomalies also point toward potential paths for further hunting. These clues provided by anomalies reduce both the time to detect a threat and its chance to cause harm.
-Anomalies can be powerful tools, but they are notoriously very noisy. They typically require a lot of tedious tuning for specific environments or complex post-processing. Microsoft Sentinel customizable anomaly templates are tuned by our data science team to provide out-of-the box value, but should you need to tune them further, the process is simple and requires no knowledge of machine learning. The thresholds and parameters for many of the anomalies can be configured and fine-tuned through the already familiar analytics rule user interface. The performance of the original threshold and parameters can be compared to the new ones within the interface and further tuned as necessary during a testing, or flighting, phase. Once the anomaly meets the performance objectives, the anomaly with the new threshold or parameters can be promoted to production with the click of a button. Microsoft Sentinel customizable anomalies enable you to get the benefit of anomalies without the hard work.
+Anomalies can be powerful tools, but they are notoriously noisy. They typically require a lot of tedious tuning for specific environments, or complex post-processing. Customizable anomaly templates are tuned by Microsoft Sentinel's data science team to provide out-of-the-box value. If you need to tune them further, the process is simple and requires no knowledge of machine learning. The thresholds and parameters for many of the anomalies can be configured and fine-tuned through the already familiar analytics rule user interface. The performance of the original threshold and parameters can be compared to the new ones within the interface and further tuned as necessary during a testing, or flighting, phase. Once the anomaly meets the performance objectives, the anomaly with the new threshold or parameters can be promoted to production with the click of a button. Microsoft Sentinel customizable anomalies enable you to get the benefit of anomaly detection without the hard work.
## UEBA anomalies
-Some of the anomalies detected by Microsoft Sentinel come from its [User and Entity Behavior Analytics (UEBA) engine](identify-threats-with-entity-behavior-analytics.md), which detects anomalies based on dynamic baselines created for each entity across various data inputs. Each entity's baseline behavior is set according to its own historical activities, those of its peers, and those of the organization as a whole. Anomalies can be triggered by the correlation of different attributes such as action type, geo-location, device, resource, ISP, and more.
+Some of Microsoft Sentinel's anomaly detections come from its [User and Entity Behavior Analytics (UEBA) engine](identify-threats-with-entity-behavior-analytics.md), which detects anomalies based on each entity's baseline historical behavior across various environments. Each entity's baseline behavior is set according to its own historical activities, those of its peers, and those of the organization as a whole. Anomalies can be triggered by the correlation of different attributes such as action type, geo-location, device, resource, ISP, and more.
## Next steps
sentinel Surface Custom Details In Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/surface-custom-details-in-alerts.md
Last updated 04/26/2022
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Surface custom event details in alerts in Microsoft Sentinel
-## Introduction
- [Scheduled query analytics rules](detect-threats-custom.md) analyze **events** from data sources connected to Microsoft Sentinel, and produce **alerts** when the contents of these events are significant from a security perspective. These alerts are further analyzed, grouped, and filtered by Microsoft Sentinel's various engines and distilled into **incidents** that warrant a SOC analyst's attention. However, when the analyst views the incident, only the properties of the component alerts themselves are immediately visible. Getting to the actual content - the information contained in the events - requires doing some digging. Using the **custom details** feature in the **analytics rule wizard**, you can surface event data in the alerts that are constructed from those events, making the event data part of the alert properties. In effect, this gives you immediate event content visibility in your incidents, enabling you to triage, investigate, draw conclusions, and respond with much greater speed and efficiency. The procedure detailed below is part of the analytics rule creation wizard. It's treated here independently to address the scenario of adding or changing custom details in an existing analytics rule. + ## How to surface custom event details
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
+1. Open the **Analytics** page in the portal through which you access Microsoft Sentinel:
+
+ # [Azure portal](#tab/azure)
+
+ From the **Configuration** section of the Microsoft Sentinel navigation menu, select **Analytics**.
+
+ # [Defender portal](#tab/defender)
+
+ From the Microsoft Defender navigation menu, expand **Microsoft Sentinel**, then **Configuration**. Select **Analytics**.
+
+
1. Select a scheduled query rule and click **Edit**. Or create a new rule by clicking **Create > Scheduled query rule** at the top of the screen.
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
Title: Threat intelligence integration in Microsoft Sentinel description: Learn about the different ways threat intelligence feeds are integrated with and used by Microsoft Sentinel. - Previously updated : 3/28/2022+ Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Threat intelligence integration in Microsoft Sentinel
Microsoft Sentinel gives you a few different ways to [use threat intelligence fe
## TAXII threat intelligence feeds
-To connect to TAXII threat intelligence feeds, follow the instructions to [connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md), together with the data supplied by each vendor linked below. You may need to contact the vendor directly to obtain the necessary data to use with the connector.
+To connect to TAXII threat intelligence feeds, follow the instructions to [connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md), together with the data supplied by each vendor. You may need to contact the vendor directly to obtain the necessary data to use with the connector.
### Accenture Cyber Threat Intelligence -- [Learn about Accenture CTI integration with Microsoft Sentinel](https://www.accenture.com/us-en/services/security/cyber-defense).
+- [Learn about Accenture Cyber Threat Intelligence (CTI) integration with Microsoft Sentinel](https://www.accenture.com/us-en/services/security/cyber-resilience).
### Cybersixgill Darkfeed -- [Learn about Cybersixgill integration with Microsoft Sentinel @Cybersixgill](https://www.cybersixgill.com/partners/azure-sentinel/)-- To connect Microsoft Sentinel to Cybersixgill TAXII Server and get access to Darkfeed, [contact Cybersixgill](mailto://azuresentinel@cybersixgill.com) to obtain the API Root, Collection ID, Username and Password.
+- [Learn about Cybersixgill integration with Microsoft Sentinel](https://www.cybersixgill.com/partners/azure-sentinel/).
+- To connect Microsoft Sentinel to the Cybersixgill TAXII Server and get access to Darkfeed, [contact azuresentinel@cybersixgill.com](mailto:azuresentinel@cybersixgill.com) to obtain the API Root, Collection ID, Username, and Password.
### ESET - [Learn about ESET's threat intelligence offering](https://www.eset.com/int/business/services/threat-intelligence/).-- To connect Microsoft Sentinel to the ESET TAXII server, obtain the API root URL, Collection ID, Username and Password from your ESET account. Then follow the [general instructions](connect-threat-intelligence-taxii.md) and [ESET's knowledge base article](https://support.eset.com/en/kb8314-eset-threat-intelligence-with-ms-azure-sentinel).
+- To connect Microsoft Sentinel to the ESET TAXII server, obtain the API root URL, Collection ID, Username, and Password from your ESET account. Then follow the [general instructions](connect-threat-intelligence-taxii.md) and [ESET's knowledge base article](https://support.eset.com/en/kb8314-eset-threat-intelligence-with-ms-azure-sentinel).
### Financial Services Information Sharing and Analysis Center (FS-ISAC)
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
### IBM X-Force -- [Learn more about IBM X-Force integration](https://www.ibm.com/security/xforce)
+- [Learn more about IBM X-Force integration](https://www.ibm.com/security/xforce).
### IntSights -- [Learn more about the IntSights integration with Microsoft Sentinel @IntSights](https://intsights.com/resources/intsights-microsoft-azure-sentinel)
+- [Learn more about the IntSights integration with Microsoft Sentinel](https://intsights.com/resources/intsights-microsoft-azure-sentinel).
- To connect Microsoft Sentinel to the IntSights TAXII Server, obtain the API Root, Collection ID, Username and Password from the IntSights portal after you configure a policy of the data you wish to send to Microsoft Sentinel. ### Kaspersky -- [Learn about Kaspersky integration with Microsoft Sentinel](https://support.kaspersky.com/15908)
+- [Learn about Kaspersky integration with Microsoft Sentinel](https://support.kaspersky.com/15908).
### Pulsedive -- [Learn about Pulsedive integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-pulsedive-feed-into-microsoft-sentinel/ba-p/3478953)
+- [Learn about Pulsedive integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-pulsedive-feed-into-microsoft-sentinel/ba-p/3478953).
### ReversingLabs -- [Learn about ReversingLabs TAXII integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-reversinglab-s-ransomware-feed-into-microsoft-sentinel/ba-p/3423937)
+- [Learn about ReversingLabs TAXII integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/import-reversinglab-s-ransomware-feed-into-microsoft-sentinel/ba-p/3423937).
### Sectrio -- [Learn more about Sectrio integration](https://sectrio.com/threat-intelligence/)-- [Step by step process for integrating Sectrio's TI feed into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-bring-threat-intelligence-from-sectrio-using/ba-p/2964648)
+- [Learn more about Sectrio integration](https://sectrio.com/threat-intelligence/).
+- [Step by step process for integrating Sectrio's TI feed into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-bring-threat-intelligence-from-sectrio-using/ba-p/2964648).
### SEKOIA.IO -- [Learn about SEKOIA.IO integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bring-threat-intelligence-from-sekoia-io-using-taxii-data/ba-p/3302497)
+- [Learn about SEKOIA.IO integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bring-threat-intelligence-from-sekoia-io-using-taxii-data/ba-p/3302497).
### ThreatConnect -- [Learn more about STIX and TAXII @ThreatConnect](https://threatconnect.com/stix-taxii/)-- [TAXII Services documentation @ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii_2.1.html)
+- [Learn more about STIX and TAXII at ThreatConnect](https://threatconnect.com/stix-taxii/).
+- [See TAXII Services documentation at ThreatConnect](https://docs.threatconnect.com/en/latest/rest_api/taxii/taxii_2.1.html).
## Integrated threat intelligence platform products
-To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions to [connect Threat Intelligence platforms to Microsoft Sentinel](connect-threat-intelligence-tip.md). The second part of these instructions calls for you to enter information into your TIP solution. See the links below for more information.
+To connect to Threat Intelligence Platform (TIP) feeds, see [connect Threat Intelligence platforms to Microsoft Sentinel](connect-threat-intelligence-tip.md). The following solutions describe what additional information is needed.
### Agari Phishing Defense and Brand Protection
To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions
### MISP Open Source Threat Intelligence Platform

- Push threat indicators from MISP to Microsoft Sentinel using the TI upload indicators API with [MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/).
-- Azure Marketplace link for [MISP2Sentinel](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-misp2sentinel?tab=Overview).
+- Find [MISP2Sentinel](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-misp2sentinel?tab=Overview) in the Azure Marketplace.
- [Learn more about the MISP Project](https://www.misp-project.org/).

### Palo Alto Networks MineMeld
For more information about how to find and manage the solutions, see [Discover a
### HYAS Insight

-- Find and enable incident enrichment playbooks for [HYAS Insight](https://www.hyas.com/hyas-insight) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/HYAS/Playbooks). Search for subfolders beginning with "Enrich-Sentinel-Incident-HYAS-Insight-".
+- Find and enable incident enrichment playbooks for [HYAS Insight](https://www.hyas.com/hyas-insight) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/HYAS/Playbooks). Search for subfolders beginning with `Enrich-Sentinel-Incident-HYAS-Insight-`.
- See the HYAS Insight Logic App [connector documentation](/connectors/hyasinsight/).

### Microsoft Defender Threat Intelligence
For more information about how to find and manage the solutions, see [Discover a
### Recorded Future Security Intelligence Platform

-- Find and enable incident enrichment playbooks for [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with "RecordedFuture_".
+- Find and enable incident enrichment playbooks for [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with `RecordedFuture_`.
- See the Recorded Future Logic App [connector documentation](/connectors/recordedfuturev2/).

### ReversingLabs TitaniumCloud
For more information about how to find and manage the solutions, see [Discover a
### Virus Total

-- Find and enable incident enrichment playbooks for [Virus Total](https://developers.virustotal.com/v3.0/reference) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/VirusTotal/Playbooks). Search for subfolders beginning with "Get-VTURL".
+- Find and enable incident enrichment playbooks for [Virus Total](https://developers.virustotal.com/v3.0/reference) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/VirusTotal/Playbooks). Search for subfolders beginning with `Get-VTURL`.
- See the Virus Total Logic App [connector documentation](/connectors/virustotal/).

## Next steps
sentinel Troubleshoot Analytics Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshoot-analytics-rules.md
+
+ Title: Troubleshooting analytics rules in Microsoft Sentinel
+description: Learn how to deal with certain known issues that can affect analytics rules, and understand the meaning of AUTO DISABLED.
++++ Last updated : 03/26/2024++
+# Troubleshooting analytics rules in Microsoft Sentinel
+
+This article explains how to deal with certain issues that might arise with the execution of [scheduled analytics rules](detect-threats-custom.md) in Microsoft Sentinel.
+
+## Issue: No events appear in query results
+
+When **event grouping** is set to **trigger an alert for each event**, query results viewed at a later time might appear to be missing or different from what you expected. For example, you might view a query's results at a later time when investigating a related incident, and as part of that investigation decide to pivot back to the query's earlier results.
+
+Results are automatically saved with the alerts. However, if the results are too large, no results are saved, and no data appears when you view the query results again.
+
+In cases where there's [ingestion delay](ingestion-delay.md), or the query isn't deterministic due to aggregation, the alert's result might differ from the result you see by running the query manually.
+
+To address this problem, Microsoft Sentinel adds the **OriginalQuery** field to the query results when a rule uses this event grouping setting. Here's a comparison of the existing **Query** field and the new field:
+
+ | Field name | Contains | Running the query in this field<br>results in... |
+ | - | :-: | :-: |
+ | **Query** | The compressed record of the event that generated this instance of the alert. | The event that generated this instance of the alert;<br>limited to 10 kilobytes. |
+ | **OriginalQuery** | The original query as written in the analytics&nbsp;rule. | The most recent event in the timeframe in which the query runs, that fits the parameters defined by the query. |
+
+ In other words, the **OriginalQuery** field behaves as the **Query** field does under the default event grouping setting.
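As an illustration, here's a minimal KQL sketch of how you might pull both fields out of saved alerts. It assumes the **Query** and **OriginalQuery** values surface as properties of the **SecurityAlert** table's **ExtendedProperties** column; the exact property names can vary by rule configuration.

```kusto
// A minimal sketch, assuming the Query and OriginalQuery values
// appear in the ExtendedProperties column of the SecurityAlert table.
SecurityAlert
| where TimeGenerated > ago(7d)
| extend props = todynamic(ExtendedProperties)
| project TimeGenerated, AlertName,
          Query = tostring(props.Query),
          OriginalQuery = tostring(props.OriginalQuery)
```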
+
+## Issue: A scheduled rule failed to execute, or appears with AUTO DISABLED added to the name
+
+It's a rare occurrence that a scheduled query rule fails to run, but it can happen. Microsoft Sentinel classifies failures up front as either transient or permanent, based on the specific type of the failure and the circumstances that led to it.
+
+### Transient failure
+
+A transient failure occurs due to a circumstance that's temporary and soon returns to normal, at which point the rule execution succeeds. Some examples of failures that Microsoft Sentinel classifies as transient are:
+
+- A rule query takes too long to run and times out.
+- Connectivity issues between data sources and Log Analytics, or between Log Analytics and Microsoft Sentinel.
+- Any other new and unknown failure is considered transient.
+
+In the event of a transient failure, Microsoft Sentinel retries the rule after predetermined, ever-increasing intervals, up to a point. After that, the rule runs again only at its next scheduled time. A rule is never autodisabled due to a transient failure.
+
+### Permanent failure&mdash;rule autodisabled
+
+A permanent failure occurs when the conditions that allow the rule to run change in a way that can't be restored without human intervention. The following are some examples of failures that are classified as permanent:
+
+- The target workspace (on which the rule query operated) was deleted.
+- The target table (on which the rule query operated) was deleted.
+- Microsoft Sentinel was removed from the target workspace.
+- A function used by the rule query is no longer valid; it was either modified or removed.
+- Permissions to one of the data sources of the rule query were changed ([see example](#permanent-failure-due-to-lost-access-across-subscriptionstenants)).
+- One of the data sources of the rule query was deleted.
+
+**In the event of a predetermined number of consecutive permanent failures, of the same type and on the same rule,** Microsoft Sentinel stops trying to execute the rule, and also takes the following steps:
+
+1. Disables the rule.
+1. Adds the words **"AUTO DISABLED"** to the beginning of the rule's name.
+1. Adds the reason for the failure (and the disabling) to the rule's description.
+
+To easily find any autodisabled rules, sort the rule list by name. Autodisabled rules appear at or near the top of the list.
+
+SOC managers should check the rule list regularly for the presence of autodisabled rules.
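To complement manual checks, here's a sketch of a health query, assuming the analytics rule health monitoring feature is enabled so that rule run results land in the *SentinelHealth* table; the exact resource type and status labels are assumptions and may vary in your workspace.

```kusto
// A sketch, assuming rule health data flows into SentinelHealth;
// adjust the type and status values to match your workspace.
SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType has "Analytics"
| where Status == "Failure"
| project TimeGenerated, SentinelResourceName, Status, Description
```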
+
+### Permanent failure due to resource drain
+
+Another kind of permanent failure occurs due to an **improperly built query** that causes the rule to consume **excessive computing resources** and risks being a performance drain on your systems. When Microsoft Sentinel identifies such a rule, it takes the same three steps mentioned for the other types of permanent failures&mdash;disables the rule, prepends **"AUTO DISABLED"** to the rule name, and adds the reason for the failure to the description.
+
+To re-enable the rule, you must address the issues in the query that cause it to use too many resources. See the following articles for best practices to optimize your Kusto queries:
+
+- [Query best practices - Azure Data Explorer](/azure/data-explorer/kusto/query/best-practices)
+- [Optimize log queries in Azure Monitor](../azure-monitor/logs/query-optimization.md)
+
+Also see [Useful resources for working with Kusto Query Language in Microsoft Sentinel](kusto-resources.md) for further assistance.
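For example, one common fix is to filter early and on specific columns rather than scanning broadly. The following is illustrative only, using the standard *SecurityEvent* table:

```kusto
// Illustrative only: time-scope first, then filter specific columns,
// instead of scanning every column with a bare search operator.
SecurityEvent
| where TimeGenerated > ago(1h)      // narrow the time range first
| where EventID == 4625              // filter on a specific column
| summarize FailedLogons = count() by Account, Computer
```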
+
+### Permanent failure due to lost access across subscriptions/tenants
+
+One particular example of when a permanent failure could occur due to a permissions change on a data source ([see the list](#permanent-failurerule-autodisabled)) concerns the case of a managed security service provider (MSSP)&mdash;or any other scenario where analytics rules query across subscriptions or tenants.
+
+When you create an analytics rule, an access permissions token is applied to the rule and saved along with it. This token ensures that the rule can access the workspace that contains the tables referenced by the rule's query, and that this access is maintained even if the rule's creator loses access to that workspace.
+
+There's one exception, however: when a rule is created to access workspaces in other subscriptions or tenants, as in the case of an MSSP, Microsoft Sentinel takes extra security measures to prevent unauthorized access to customer data. These kinds of rules have the credentials of the user that created the rule applied to them, instead of an independent access token. When the user no longer has access to the other tenant, the rule stops working.
+
+If you operate Microsoft Sentinel in a cross-subscription or cross-tenant scenario, and if one of your analysts or engineers loses access to a particular workspace, any rules created by that user will stop working. You'll get a health monitoring message regarding "insufficient access to resource", and the rule will be [autodisabled according to the procedure described previously](#permanent-failurerule-autodisabled).
+
+## Next steps
+
+For more information, see:
+
+- [Tutorial: Investigate incidents with Microsoft Sentinel](investigate-cases.md)
+- [Navigate and investigate incidents in Microsoft Sentinel - Preview](investigate-incidents.md)
+- [Classify and analyze data using entities in Microsoft Sentinel](entities.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+
+Also, learn from an example of using custom analytics rules when [monitoring Zoom](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) with a [custom connector](create-custom-connector.md).
sentinel Tutorial Enrich Ip Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-enrich-ip-information.md
Title: Tutorial - Automatically check and record IP address reputation in incident in Microsoft Sentinel description: In this tutorial, learn how to use Microsoft Sentinel automation rules and playbooks to automatically check IP addresses in your incidents against a threat intelligence source and record each result in its relevant incident.-- Previously updated : 12/05/2022++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Tutorial: Automatically check and record IP address reputation information in incidents
When you complete this tutorial, you'll be able to:
> * Create an automation rule to invoke the playbook
> * See the results of your automated process

## Prerequisites
+
To complete this tutorial, make sure you have:

- An Azure subscription. Create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
To complete this tutorial, make sure you have:
- A (free) [VirusTotal account](https://www.virustotal.com/gui/my-apikey) will suffice for this tutorial. A production implementation requires a VirusTotal Premium account.
-## Sign in to the Azure portal and Microsoft Sentinel
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the Search bar, search for and select **Microsoft Sentinel**.
-
-1. Search for and select your workspace from the list of available Microsoft Sentinel workspaces.
-
-1. On the **Microsoft Sentinel | Overview** page, select **Automation** from the navigation menu, under **Configuration**.
-
## Create a playbook from a template

Microsoft Sentinel includes ready-made, out-of-the-box playbook templates that you can customize and use to automate a large number of basic SecOps objectives and scenarios. Let's find one to enrich the IP address information in our incidents.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
+
1. From the **Automation** page, select the **Playbook templates (Preview)** tab.

1. Filter the list of templates by tag:
Microsoft Sentinel includes ready-made, out-of-the-box playbook templates that y
1. Clear the **Select all** checkbox, then mark the **Enrichment** checkbox. Select **OK**.
+ For example:
+ :::image type="content" source="media/tutorial-enrich-ip-information/1-filter-playbook-template-list.png" alt-text="Screenshot of list of playbook templates to be filtered by tags." lightbox="media/tutorial-enrich-ip-information/1-filter-playbook-template-list.png"::: 1. Select the **IP Enrichment - Virus Total report** template, and select **Create playbook** from the details pane.
Here's where we do that.
### Authorize Log Analytics connection
-The next action is a **Condition** that determines the rest of the for-each loop's actions based on the outcome of the IP address report. It analyzes the **Reputation** score given to the IP address in the report. A score higher than 0 indicates the address is harmless; a score lower than 0 indicates it's malicious.
+The next action is a **Condition** that determines the rest of the for-each loop's actions based on the outcome of the IP address report. It analyzes the **Reputation** score given to the IP address in the report. A score higher than **0** indicates the address is harmless; a score lower than **0** indicates it's malicious.
:::image type="content" source="media/tutorial-enrich-ip-information/12-reputation-condition.png" alt-text="Screenshot of condition action in logic app designer.":::
sentinel Tutorial Extract Incident Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-extract-incident-entities.md
Title: Extract incident entities with non-native actions description: In this tutorial, you extract entity types with action types that aren't native to Microsoft Sentinel, and save these actions in a playbook to use for SOC automation.-- Previously updated : 02/28/2023++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Tutorial: Extract incident entities with non-native actions
In this tutorial, you learn how to:
> * Parse the results in a JSON file.
> * Create the values as dynamic content for future use.
+
## Prerequisites

To complete this tutorial, make sure you have:
To complete this tutorial, make sure you have:
## Create a playbook with an incident trigger
-1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
-1. On the left, select **Automation**, and on the top left of the **Automation** page, select **Create** > **Playbook with incident trigger**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
+
+1. On the **Automation** page, select **Create** > **Playbook with incident trigger**.
1. In the **Create playbook** wizard, under **Basics**, select the subscription and resource group, and give the playbook a name.

1. Select **Next: Connections >**.
- Under **Connections**, the **Microsoft Sentinel - Connect with managed identity** connection should be visible.
+ Under **Connections**, the **Microsoft Sentinel - Connect with managed identity** connection should be visible. For example:
:::image type="content" source="media/tutorial-extract-incident-entities/create-playbook.png" alt-text="Screenshot of creating a new playbook with an incident trigger.":::
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
Title: Tutorial - Automate threat response in Microsoft Sentinel description: Use this tutorial to help you use playbooks together with automation rules in Microsoft Sentinel to automate your incident response and remediate security threats.-- Previously updated : 05/09/2023++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Tutorial: Respond to threats by using playbooks with automation rules in Microsoft Sentinel
This tutorial shows you how to use playbooks together with automation rules to a
> This tutorial provides basic guidance for a top customer task: creating automation to triage incidents. For more information, see our **How-to** section, such as [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md) and [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md).
>
+
## What are automation rules and playbooks?

[Automation rules](automate-incident-handling-with-automation-rules.md) help you triage incidents in Microsoft Sentinel. You can use them to automatically assign incidents to the right personnel, close noisy incidents or known [false positives](false-positives.md), change their severity, and add tags. They are also the mechanism by which you can run playbooks in response to incidents or alerts.
Get a more complete and detailed introduction to automating threat response usin
Follow these steps to create a new playbook in Microsoft Sentinel:
+#### [Azure portal](#tab/azure-portal)
:::image type="content" source="./media/tutorial-respond-threats-playbook/add-new-playbook.png" alt-text="Screenshot of the menu selection for adding a new playbook in the Automation screen." lightbox="media/tutorial-respond-threats-playbook/add-new-playbook.png":::
-1. From the **Microsoft Sentinel** navigation menu, select **Automation**.
+#### [Defender portal](#tab/defender-portal)
+++
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
1. From the top menu, select **Create**.
Regardless of which trigger you chose to create your playbook with in the previo
1. If you want to monitor this playbook's activity for diagnostic purposes, mark the **Enable diagnostics logs in Log Analytics** check box, and choose your **Log Analytics workspace** from the drop-down list.
- 1. If your playbooks need access to protected resources that are inside or connected to an Azure virtual network, [you may need to use an integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). If so, mark the **Associate with integration service environment** check box, and select the desired ISE from the drop-down list.
+ 1. If your playbooks need access to protected resources that are inside or connected to an Azure virtual network, [you might need to use an integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). If so, mark the **Associate with integration service environment** check box, and select the desired ISE from the drop-down list.
1. Select **Next: Connections >**.
To use a playbook to respond automatically to an **entire incident** or to an **
To create an automation rule:
-1. From the **Automation** blade in the Microsoft Sentinel navigation menu, select **Create** from the top menu and then **Automation rule**.
+1. From the **Automation** page in the Microsoft Sentinel navigation menu, select **Create** from the top menu and then **Automation rule**.
:::image type="content" source="./media/tutorial-respond-threats-playbook/add-new-rule.png" alt-text="Screenshot showing how to add a new automation rule."::: 1. The **Create new automation rule** panel opens. Enter a name for your rule.
+ Your options differ depending on whether your workspace is onboarded to the unified security operations platform. For example:
+
+ #### [Onboarded workspaces](#tab/after-onboarding)
+
+ :::image type="content" source="./media/tutorial-respond-threats-playbook/create-automation-rule-onboarded.png" alt-text="Screenshot showing the automation rule creation wizard.":::
+
+ #### [Workspaces that aren't onboarded](#tab/before-onboarding)
+ :::image type="content" source="./media/tutorial-respond-threats-playbook/create-automation-rule.png" alt-text="Screenshot showing the automation rule creation wizard.":::
+
+
1. **Trigger:** Select the appropriate trigger according to the circumstance for which you're creating the automation rule&mdash;**When incident is created**, **When incident is updated**, or **When alert is created**.

1. **Conditions:**
- 1. Incidents can have two possible sources: they can be created inside Microsoft Sentinel, and they can also be [imported from&mdash;and synchronized with&mdash;Microsoft Defender XDR](microsoft-365-defender-sentinel-integration.md).
-
- If you selected one of the incident triggers and you want the automation rule to take effect only on incidents sourced in Microsoft Sentinel, or alternatively in Microsoft Defender XDR, specify the source in the **If Incident provider equals** condition. (This condition will be displayed only if an incident trigger is selected.)
+
+ 1. If your workspace is not yet onboarded to the unified security operations platform, incidents can have two possible sources:
+
+ - Incidents can be created inside Microsoft Sentinel
+ - Incidents can be [imported from&mdash;and synchronized with&mdash;Microsoft Defender XDR](microsoft-365-defender-sentinel-integration.md).
+
+ If you selected one of the incident triggers and you want the automation rule to take effect only on incidents sourced in Microsoft Sentinel, or alternatively in Microsoft Defender XDR, specify the source in the **If Incident provider equals** condition.
+
+ This condition is displayed only if an incident trigger is selected and your workspace isn't onboarded to the unified security operations platform.
1. For all trigger types, if you want the automation rule to take effect only on certain analytics rules, specify which ones by modifying the **If Analytics rule name contains** condition.

1. Add any other conditions you want to determine whether this automation rule will run. Select **+ Add** and choose [conditions or condition groups](add-advanced-conditions-to-automation-rules.md) from the drop-down list. The list of conditions is populated by alert detail and entity identifier fields.

1. **Actions:**
- 1. Since you're using this automation rule to run a playbook, choose the **Run playbook** action from the drop-down list. You'll then be prompted to choose from a second drop-down list that shows the available playbooks. An automation rule can run only those playbooks that start with the same trigger (incident or alert) as the trigger defined in the rule, so only those playbooks will appear in the list.<a name="permissions-to-run-playbooks"></a>
+ 1. Since you're using this automation rule to run a playbook, choose the **Run playbook** action from the drop-down list. You'll then be prompted to choose from a second drop-down list that shows the available playbooks. An automation rule can run only those playbooks that start with the same trigger (incident or alert) as the trigger defined in the rule, so only those playbooks will appear in the list. <a name="permissions-to-run-playbooks"></a>
<a name="explicit-permissions"></a>
To create an automation rule:
> 1. In the **Settings** blade, select the **Settings** tab, then the **Playbook permissions** expander.
> 1. Select the **Configure permissions** button to open the **Manage permissions** panel mentioned above, and continue as described there.
>
- > - If, in an **MSSP** scenario, you want to [run a playbook in a customer tenant](automate-incident-handling-with-automation-rules.md#permissions-in-a-multi-tenant-architecture) from an automation rule created while signed into the service provider tenant, you must grant Microsoft Sentinel permission to run the playbook in ***both tenants***. In the **customer** tenant, follow the instructions for the multi-tenant deployment in the preceding bullet point. In the **service provider** tenant, you must add the **Azure Security Insights** app in your Azure Lighthouse onboarding template:
+ > - If, in an **MSSP** scenario, you want to [run a playbook in a customer tenant](automate-incident-handling-with-automation-rules.md#permissions-in-a-multitenant-architecture) from an automation rule created while signed into the service provider tenant, you must grant Microsoft Sentinel permission to run the playbook in ***both tenants***. In the **customer** tenant, follow the instructions for the multi-tenant deployment in the preceding bullet point. In the **service provider** tenant, you must add the **Azure Security Insights** app in your Azure Lighthouse onboarding template:
> 1. From the Azure portal, go to **Microsoft Entra ID**.
> 1. Select **Enterprise Applications**.
> 1. Select **Application Type** and filter on **Microsoft Applications**.
To create an automation rule:
1. Enter a number under **Order** to determine where in the sequence of automation rules this rule will run.
-1. Click **Apply**. You're done!
+1. Select **Apply**. You're done!
[Discover other ways](automate-incident-handling-with-automation-rules.md#creating-and-managing-automation-rules) to create automation rules.
You can also manually run a playbook on demand, whether in response to alerts, i
### Run a playbook manually on an alert
+This procedure is not supported in the unified security operations platform.
+
+In the Azure portal, select one of the following tabs as needed for your environment:
+
# [NEW Incident details page](#tab/incidents)

1. In the **Incidents** page, select an incident.
-1. Select **View full details** at the bottom of the incident details pane.
+ In the Azure portal, select **View full details** at the bottom of the incident details pane to open the incident details page.
1. In the incident details page, in the **Incident timeline** widget, choose the alert you want to run the playbook on. Select the three dots at the end of the alert's line and choose **Run playbook** from the pop-up menu.
You can also manually run a playbook on demand, whether in response to alerts, i
1. In the **Incidents** page, select an incident.
-1. Select **View full details** at the bottom of the incident details pane.
+ In the Azure portal, select **View full details** at the bottom of the incident details pane to open the incident details page.
1. In the incident details page, select the **Alerts** tab, choose the alert you want to run the playbook on, and select the **View playbooks** link at the end of the line of that alert.
You can see the run history for playbooks on an alert by selecting the **Runs**
### Run a playbook manually on an incident (Preview)
-1. In the **Incidents** page, select an incident.
+This procedure differs depending on whether you're working in Microsoft Sentinel or in the unified security operations platform. Select the relevant tab for your environment:
++
+# [Azure portal](#tab/azure)
+
+1. In the **Incidents** page, select an incident.
1. From the incident details pane that appears on the right, select **Actions > Run playbook (Preview)**. (Selecting the three dots at the end of the incident's line on the grid or right-clicking the incident will display the same list as the **Action** button.)

1. The **Run playbook on incident** panel opens on the right. You'll see a list of all playbooks configured with the **Microsoft Sentinel Incident** Logic Apps trigger that you have access to.
- > [!NOTE]
- > If you don't see the playbook you want to run in the list, it means Microsoft Sentinel doesn't have permissions to run playbooks in that resource group ([see the note above](#explicit-permissions)). To grant those permissions, select **Settings** from the main menu, choose the **Settings** tab, expand the **Playbook permissions** expander, and select **Configure permissions**. In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and select **Apply**.
+ If you don't see the playbook you want to run in the list, it means Microsoft Sentinel doesn't have permissions to run playbooks in that resource group ([see the note above](#explicit-permissions)).
+
+ To grant those permissions, select **Settings** > **Settings** > **Playbook permissions** > **Configure permissions**. In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and select **Apply**.
1. Select **Run** on the line of a specific playbook to run it immediately.
-You can see the run history for playbooks on an incident by selecting the **Runs** tab on the **Run playbook on incident** panel. It might take a few seconds for any just-completed run to appear in the list. Selecting a specific run will open the full run log in Logic Apps.
+# [Microsoft Defender portal](#tab/microsoft-defender)
+
+1. In the **Incidents** page, select an incident.
+
+1. From the incident details pane that appears on the right, select **Run Playbook**.
+
+1. The **Run playbook on incident** panel opens on the right, with all related playbooks for the selected incident. In the **Actions** column, select **Run playbook** for the playbook you want to run immediately.
+
+The **Actions** column might also show one of the following statuses:
+
+|Status |Description and action required |
+|||
+|<a name="missing-perms"></a>**Missing permissions** | You must have the *Microsoft Sentinel playbook operator* role on any resource group containing playbooks you want to run. If you're missing permissions, we recommend you contact an admin to grant you with the relevant permissions. <br><br>For more information, see [Permissions required to work with playbooks](automate-responses-with-playbooks.md#permissions-required).|
+|<a name="grant-perms"></a>**Grant permission** | Microsoft Sentinel is missing the *Microsoft Sentinel Automation Contributor* role, which is required to run playbooks on incidents. In such cases, select **Grant permission** to open the **Manage permissions** pane. The **Manage permissions** pane is filtered by default to the selected playbook's resource group. Select the resource group and then select **Apply** to grant the required permissions. <br><br>You must be an *Owner* or a *User access administrator* on the resource group to which you want to grant Microsoft Sentinel permissions. If you're missing permissions, the resource group is greyed out and you won't be able to select it. In such cases, we recommend you contact an admin to grant you with the relevant permissions. <br><br>For more information, see the note above](#explicit-permissions). |
+++
+View the run history for playbooks on an incident by selecting the **Runs** tab on the **Run playbook on incident** panel. It might take a few seconds for any just-completed run to appear in the list. Selecting a specific run will open the full run log in Logic Apps.
### Run a playbook manually on an entity (Preview)
+This procedure is not supported in the unified security operations platform.
+
1. Select an entity in one of the following ways, depending on your originating context:

   **If you're in an incident's details page (new version):**
You can see the run history for playbooks on an incident by selecting the **Runs
**If you're in the Investigation graph:**

1. Select an entity in the graph.
1. Select the **Run playbook (Preview)** button in the entity side panel.

- For some entity types, you may have to select the **Entity actions** button and from the resulting menu select **Run playbook (Preview)**.
+ For some entity types, you might have to select the **Entity actions** button and from the resulting menu select **Run playbook (Preview)**.

**If you're proactively hunting for threats:**

1. From the **Entity behavior** screen, select an entity from the lists on the page, or search for and select another entity.
You can see the run history for playbooks on a given entity by selecting the **R
In this tutorial, you learned how to use playbooks and automation rules in Microsoft Sentinel to respond to threats.

- Learn more about [authenticating playbooks to Microsoft Sentinel](authenticate-playbooks-to-sentinel.md)
- Learn more about [using triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md)
-- Learn more about
- Learn how to [proactively hunt for threats](hunting.md) using Microsoft Sentinel.
sentinel Ueba Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ueba-reference.md
Last updated 06/28/2022
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
# Microsoft Sentinel UEBA reference

This reference article lists the input data sources for the User and Entity Behavior Analytics service in Microsoft Sentinel. It also describes the enrichments that UEBA adds to entities, providing needed context to alerts and incidents.
+
## UEBA data sources

These are the data sources from which the UEBA engine collects and analyzes data to train its ML models and set behavioral baselines for users, devices, and other entities. UEBA then looks at data from these sources to find anomalies and glean insights.
These are the data sources from which the UEBA engine collects and analyzes data
This section describes the enrichments UEBA adds to Microsoft Sentinel entities, along with all their details, that you can use to focus and sharpen your security incident investigations. These enrichments are displayed on [entity pages](entity-pages.md#how-to-use-entity-pages) and can be found in the following Log Analytics tables, the contents and schema of which are listed below:

-- The **BehaviorAnalytics** table is where UEBA's output information is stored.
+- The **[BehaviorAnalytics](#behavioranalytics-table)** table is where UEBA's output information is stored.
The following three dynamic fields from the BehaviorAnalytics table are described in the [entity enrichments dynamic fields](#entity-enrichments-dynamic-fields) section below.
This section describes the enrichments UEBA adds to Microsoft Sentinel entities,
<a name="baseline-explained"></a>User activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its defined lookback period from which the dynamic baseline is derived. The lookback period is specified in the [**Baseline**](#activityinsights-field) column in this table. -- The **IdentityInfo** table is where identity information synchronized to UEBA from Microsoft Entra ID (and from on-premises Active Directory via Microsoft Defender for Identity) is stored.
+- The **[IdentityInfo](#identityinfo-table)** table is where identity information synchronized to UEBA from Microsoft Entra ID (and from on-premises Active Directory via Microsoft Defender for Identity) is stored.
### BehaviorAnalytics table
The following table describes the behavior analytics data displayed on each [ent
| **ActivityInsights** | dynamic | The contextual analysis of activity based on our profiling ([details below](#activityinsights-field)). |
| **InvestigationPriority** | int | The anomaly score, between 0-10 (0=benign, 10=highly anomalous). |
--
### Entity enrichments dynamic fields

> [!NOTE]
While the initial synchronization may take a few days, once the data is fully sy
> [!NOTE]
-> Currently, only built-in roles are supported.
+> - Currently, only built-in roles are supported.
>
-> Data about deleted groups, where a user was removed from a group, is not currently supported.
+> - Data about deleted groups, where a user was removed from a group, is not currently supported.
>-
-The following table describes the user identity data included in the **IdentityInfo** table in Log Analytics.
-
-| Field | Type | Description |
-| - | -- | - |
-| **AccountCloudSID** | string | The Microsoft Entra security identifier of the account. |
-| **AccountCreationTime** | datetime | The date the user account was created (UTC). |
-| **AccountDisplayName** | string | The display name of the user account. |
-| **AccountDomain** | string | The domain name of the user account. |
-| **AccountName** | string | The user name of the user account. |
-| **AccountObjectId** | string | The Microsoft Entra object ID for the user account. |
-| **AccountSID** | string | The on-premises security identifier of the user account. |
-| **AccountTenantId** | string | The Microsoft Entra tenant ID of the user account. |
-| **AccountUPN** | string | The user principal name of the user account. |
-| **AdditionalMailAddresses** | dynamic | The additional email addresses of the user. |
-| **AssignedRoles** | dynamic | The Microsoft Entra roles the user account is assigned to. |
-| **BlastRadius** | string | A calculation based on the position of the user in the org tree and the user's Microsoft Entra roles and permissions. <br>Possible values: *Low, Medium, High* |
-| **ChangeSource** | string | The source of the latest change to the entity. <br>Possible values:<br>- *AzureActiveDirectory*<br>- *ActiveDirectory*<br>- *UEBA*<br>- *Watchlist*<br>- *FullSync* |
-| **City** | string | The city of the user account. |
-| **Country** | string | The country of the user account. |
-| **DeletedDateTime** | datetime | The date and time the user was deleted. |
-| **Department** | string | The department of the user account. |
-| **GivenName** | string | The given name of the user account. |
-| **GroupMembership** | dynamic | Microsoft Entra groups where the user account is a member. |
-| **IsAccountEnabled** | bool | An indication as to whether the user account is enabled in Microsoft Entra ID or not. |
-| **JobTitle** | string | The job title of the user account. |
-| **MailAddress** | string | The primary email address of the user account. |
-| **Manager** | string | The manager alias of the user account. |
-| **OnPremisesDistinguishedName** | string | The Microsoft Entra ID distinguished name (DN). A distinguished name is a sequence of relative distinguished names (RDN), connected by commas. |
-| **Phone** | string | The phone number of the user account. |
-| **SourceSystem** | string | The system where the user is managed. <br>Possible values:<br>- *AzureActiveDirectory*<br>- *ActiveDirectory*<br>- *Hybrid* |
-| **State** | string | The geographical state of the user account. |
-| **StreetAddress** | string | The office street address of the user account. |
-| **Surname** | string | The surname of the user. account. |
-| **TenantId** | string | The tenant ID of the user. |
-| **TimeGenerated** | datetime | The time when the event was generated (UTC). |
-| **Type** | string | The name of the table. |
-| **UserAccountControl** | dynamic | Security attributes of the user account in the AD domain. <br> Possible values (may contain more than one):<br>- *AccountDisabled*<br>- *HomedirRequired*<br>- *AccountLocked*<br>- *PasswordNotRequired*<br>- *CannotChangePassword*<br>- *EncryptedTextPasswordAllowed*<br>- *TemporaryDuplicateAccount*<br>- *NormalAccount*<br>- *InterdomainTrustAccount*<br>- *WorkstationTrustAccount*<br>- *ServerTrustAccount*<br>- *PasswordNeverExpires*<br>- *MnsLogonAccount*<br>- *SmartcardRequired*<br>- *TrustedForDelegation*<br>- *DelegationNotAllowed*<br>- *UseDesKeyOnly*<br>- *DontRequirePreauthentication*<br>- *PasswordExpired*<br>- *TrustedToAuthenticationForDelegation*<br>- *PartialSecretsAccount*<br>- *UseAesKeys* |
-| **UserState** | string | The current state of the user account in Microsoft Entra ID.<br>Possible values:<br>- *Active*<br>- *Disabled*<br>- *Dormant*<br>- *Lockout* |
-| **UserStateChangedOn** | datetime | The date of the last time the account state was changed (UTC). |
-| **UserType** | string | The user type. |
+> - There are actually two versions of the *IdentityInfo* table: one serving Microsoft Sentinel, in the *Log Analytics* schema, and the other serving the Microsoft Defender portal via Microsoft Defender for Identity, in what's known as the *Advanced hunting* schema. Both versions of this table are fed by Microsoft Entra ID, but the Log Analytics version adds a few fields.
+>
+> [The unified security operations platform in the Defender portal](https://go.microsoft.com/fwlink/p/?linkid=2263690) uses the *Advanced hunting* version of this table, so, to minimize the differences between the versions, most of the fields unique to the Log Analytics version are gradually being added to the *Advanced hunting* version as well. Regardless of which portal you're using Microsoft Sentinel in, you'll have access to nearly all the same information, though there may be a small time lag in synchronization between the versions.
+
+The following table describes the user identity data included in the **IdentityInfo** table in Log Analytics in the Azure portal. The fourth column shows the corresponding fields in the *Advanced hunting* version of the table, which Microsoft Sentinel uses in the Defender portal. Field names in boldface in the fourth column are named differently in the *Advanced hunting* schema than in the Microsoft Sentinel Log Analytics version.
+
+| Field name in<br>*Log Analytics* schema | Type | Description | Field name in<br>*Advanced hunting* schema |
+| - | -- | - | |
+| **AccountCloudSID** | string | The Microsoft Entra security identifier of the account. | **CloudSid** |
+| **AccountCreationTime** | datetime | The date the user account was created (UTC). | **CreatedDateTime** |
+| **AccountDisplayName** | string | The display name of the user account. | AccountDisplayName |
+| **AccountDomain** | string | The domain name of the user account. | AccountDomain |
+| **AccountName** | string | The user name of the user account. | AccountName |
+| **AccountObjectId** | string | The Microsoft Entra object ID for the user account. | AccountObjectId |
+| **AccountSID** | string | The on-premises security identifier of the user account. | AccountSID |
+| **AccountTenantId** | string | The Microsoft Entra tenant ID of the user account. | -- |
+| **AccountUPN** | string | The user principal name of the user account. | AccountUPN |
+| **AdditionalMailAddresses** | dynamic | The additional email addresses of the user. | -- |
+| **AssignedRoles** | dynamic | The Microsoft Entra roles the user account is assigned to. | AssignedRoles |
+| **BlastRadius** | string | A calculation based on the position of the user in the org tree and the user's Microsoft Entra roles and permissions. <br>Possible values: *Low, Medium, High* | -- |
+| **ChangeSource** | string | The source of the latest change to the entity. <br>Possible values: <li>*AzureActiveDirectory*<li>*ActiveDirectory*<li>*UEBA*<li>*Watchlist*<li>*FullSync* | ChangeSource |
+| **CompanyName** | string | The company name to which the user belongs. | -- |
+| **City** | string | The city of the user account. | City |
+| **Country** | string | The country of the user account. | Country |
+| **DeletedDateTime** | datetime | The date and time the user was deleted. | -- |
+| **Department** | string | The department of the user account. | Department |
+| **GivenName** | string | The given name of the user account. | GivenName |
+| **GroupMembership** | dynamic | Microsoft Entra groups where the user account is a member. | -- |
+| **IsAccountEnabled** | bool | An indication as to whether the user account is enabled in Microsoft Entra ID or not. | IsAccountEnabled |
+| **JobTitle** | string | The job title of the user account. | JobTitle |
+| **MailAddress** | string | The primary email address of the user account. | **EmailAddress** |
+| **Manager** | string | The manager alias of the user account. | Manager |
+| **OnPremisesDistinguishedName** | string | The Microsoft Entra ID distinguished name (DN). A distinguished name is a sequence of relative distinguished names (RDN), connected by commas. | **DistinguishedName** |
+| **Phone** | string | The phone number of the user account. | Phone |
+| **SourceSystem** | string | The system where the user is managed. <br>Possible values: <li>*AzureActiveDirectory*<li>*ActiveDirectory*<li>*Hybrid* | **SourceProvider** |
+| **State** | string | The geographical state of the user account. | State |
+| **StreetAddress** | string | The office street address of the user account. | **Address** |
+| **Surname** | string | The surname of the user account. | Surname |
+| **TenantId** | string | The tenant ID of the user. | -- |
+| **TimeGenerated** | datetime | The time when the event was generated (UTC). | **Timestamp** |
+| **Type** | string | The name of the table. | -- |
+| **UserAccountControl** | dynamic | Security attributes of the user account in the AD domain. <br> Possible values (may contain more than one):<li>*AccountDisabled*<li>*HomedirRequired*<li>*AccountLocked*<li>*PasswordNotRequired*<li>*CannotChangePassword*<li>*EncryptedTextPasswordAllowed*<li>*TemporaryDuplicateAccount*<li>*NormalAccount*<li>*InterdomainTrustAccount*<li>*WorkstationTrustAccount*<li>*ServerTrustAccount*<li>*PasswordNeverExpires*<li>*MnsLogonAccount*<li>*SmartcardRequired*<li>*TrustedForDelegation*<li>*DelegationNotAllowed*<li>*UseDesKeyOnly*<li>*DontRequirePreauthentication*<li>*PasswordExpired*<li>*TrustedToAuthenticationForDelegation*<li>*PartialSecretsAccount*<li>*UseAesKeys* | -- |
+| **UserState** | string | The current state of the user account in Microsoft Entra ID.<br>Possible values:<li>*Active*<li>*Disabled*<li>*Dormant*<li>*Lockout* | -- |
+| **UserStateChangedOn** | datetime | The date of the last time the account state was changed (UTC). | -- |
+| **UserType** | string | The user type. | -- |
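As an example of putting this data to work, here's a hedged KQL sketch that enriches anomalous UEBA activity with the user's blast radius and department. It assumes UEBA is enabled and uses the Log Analytics schema described above:

```kusto
// A sketch, assuming UEBA is enabled and both tables are populated
// in the Log Analytics schema described in this article.
BehaviorAnalytics
| where TimeGenerated > ago(1d)
| where InvestigationPriority >= 5
| join kind=leftouter (
    IdentityInfo
    | summarize arg_max(TimeGenerated, *) by AccountUPN  // latest record per user
    | project AccountUPN, BlastRadius, Department
) on $left.UserPrincipalName == $right.AccountUPN
| project TimeGenerated, UserPrincipalName, ActivityType,
          InvestigationPriority, BlastRadius, Department
```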
## Next steps
sentinel Understand Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/understand-threat-intelligence.md
Title: Understand threat intelligence in Microsoft Sentinel
+ Title: Understand threat intelligence
+ description: Understand how threat intelligence feeds are connected to, managed, and used in Microsoft Sentinel to analyze data, detect threats, and enrich alerts. - Previously updated : 5/23/2023+ Last updated : 3/06/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Understand threat intelligence in Microsoft Sentinel
-Microsoft Sentinel is a cloud native Security Information and Event Management (SIEM) solution with the ability to quickly pull threat intelligence from numerous sources.
+Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) solution with the ability to quickly pull threat intelligence from numerous sources.
+ ## Introduction to threat intelligence
Tagging threat indicators is an easy way to group them together to make them eas
:::image type="content" source="media/understand-threat-intelligence/threat-intel-tagging-indicators.png" alt-text="Apply tags to threat indicators" lightbox="media/understand-threat-intelligence/threat-intel-tagging-indicators.png":::
-To validate your indicators and view your successfully imported threat indicators, regardless of the source, go to the **Logs** page. In this Log Analytics view, the **ThreatIntelligenceIndicator** table under the **Microsoft Sentinel** group is where all your Microsoft Sentinel threat indicators are stored. This table is the basis for threat intelligence queries performed by other Microsoft Sentinel features such as **Analytics** and **Workbooks**.
+Validate your indicators and view your successfully imported threat indicators from the Microsoft Sentinel-enabled Log Analytics workspace. The **ThreatIntelligenceIndicator** table under the **Microsoft Sentinel** schema is where all your Microsoft Sentinel threat indicators are stored. This table is the basis for threat intelligence queries performed by other Microsoft Sentinel features such as **Analytics** and **Workbooks**.
-Here is an example view of the **Logs** page with a basic query for threat indicators.
+Here is an example view of a basic query for threat indicators.
:::image type="content" source="media/understand-threat-intelligence/logs-page-ti-table.png" alt-text="Screenshot shows the logs page with a sample query of the ThreatIntelligenceIndicator table." lightbox="media/understand-threat-intelligence/logs-page-ti-table.png":::
For more details on using threat indicators in your analytics rules, see [Use th
Microsoft provides access to its threat intelligence through the **Microsoft Defender Threat Intelligence Analytics** rule. For more information on how to take advantage of this rule, which generates high-fidelity alerts and incidents, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).

## Workbooks provide insights about your threat intelligence
sentinel Use Matching Analytics To Detect Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-matching-analytics-to-detect-threats.md
description: This article explains how to detect threats with Microsoft generated threat intelligence in Microsoft Sentinel. Previously updated : 03/27/2023 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#Customer intent: As a SOC analyst, I want to match my security data with Microsoft threat intelligence so I can generate high fidelity alerts and incidents.
# Use matching analytics to detect threats
Use the following steps to triage through the incidents generated by the **Micro
1. Observe the indicator details. When a match is found, the indicator is published to the Log Analytics **ThreatIntelligenceIndicator** table and displayed in the **Threat Intelligence** page. For any indicators published from this rule, the source is defined as **Microsoft Defender Threat Intelligence Analytics**.
-For example, in the **ThreatIntelligenceIndicators** log:
+For example, in the **ThreatIntelligenceIndicator** table:
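A sketch of such a query, using the source value described above, might look like this:

```kusto
// A sketch: return the latest version of each indicator published
// by the matching analytics rule.
ThreatIntelligenceIndicator
| where SourceSystem == "Microsoft Defender Threat Intelligence Analytics"
| summarize arg_max(TimeGenerated, *) by IndicatorId
```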
In the **Threat Intelligence** page:
-## Get additional context from Microsoft Defender Threat Intelligence
+## Get more context from Microsoft Defender Threat Intelligence
Along with high-fidelity alerts and incidents, some MDTI indicators include a link to a reference article in the MDTI community portal.
Along with high fidelity alerts and incidents, some MDTI indicators include a li
For more information, see the [MDTI portal](https://ti.defender.microsoft.com) and [What is Microsoft Defender Threat Intelligence?](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti)
-## Next steps
+## Related content
In this article, you learned how to connect threat intelligence produced by Microsoft to generate alerts and incidents. For more information about threat intelligence in Microsoft Sentinel, see the following articles:
sentinel Use Playbook Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-playbook-templates.md
Title: Create and customize Microsoft Sentinel playbooks from templates | Microsoft Docs description: This article shows how to create playbooks from and work with playbook templates, to customize them to fit your needs.- Previously updated : 06/21/2023-++ Last updated : 03/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
++
# Create and customize Microsoft Sentinel playbooks from content templates
This article helps you understand how to:
> [!IMPORTANT] > > **Playbook templates** are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
+ ## Explore playbook templates
-In Microsoft Sentinel, select **Content hub** and then select **Content type** to filter for **Playbook**. This filtered view lists all the solutions and standalone content that include one or more playbook templates. Install the solution or standalone content to get the template.
+For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Content management** > **Content hub** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
+
+On the **Content hub** page, select **Content type** to filter for **Playbook**. This filtered view lists all the solutions and standalone content that include one or more playbook templates. Install the solution or standalone content to get the template.
-Then, in Microsoft Sentinel, select **Automation** and then the **Playbook templates** tab to view the installed templates.
+Then, select the **Configuration** > **Automation** > **Playbook templates** tab to view the installed templates.
:::image type="content" source="media/use-playbook-templates/gallery.png" alt-text="Screenshot of the playbooks gallery." lightbox="media/use-playbook-templates/gallery.png":::
sentinel Use Threat Indicators In Analytics Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-threat-indicators-in-analytics-rules.md
description: This article explains how to generate alerts and incidents with threat intelligence indicators in Microsoft Sentinel. Previously updated : 8/30/2022 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#Customer intent: As a SOC analyst, I want to connect the threat intelligence available to analytics rules so I can generate alerts and incidents.
# Use threat indicators in analytics rules
Below is an example of how to enable and configure a rule to generate security a
You can leave the default settings or change them to meet your requirements, and you can define incident-generation settings on the **Incident settings** tab. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md). When you are finished, select the **Automated response** tab.
-1. Configure any automation youΓÇÖd like to trigger when a security alert is generated from this analytics rule. Automation in Microsoft Sentinel is done using combinations of **automation rules** and **playbooks** powered by Azure Logic Apps. To learn more, see this [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](./tutorial-respond-threats-playbook.md). When finished, select the **Next: Review >** button to continue.
+1. Configure any automation you'd like to trigger when a security alert is generated from this analytics rule. Automation in Microsoft Sentinel is done using combinations of **automation rules** and **playbooks** powered by Azure Logic Apps. To learn more, see this [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](./tutorial-respond-threats-playbook.md). When finished, select the **Next: Review >** button to continue.
1. When you see the message that the rule validation has passed, select the **Create** button and you are finished.
-You can find your enabled rules in the **Active rules** tab of the **Analytics** section of Microsoft Sentinel. You can edit, enable, disable, duplicate, or delete the active rule from there. The new rule runs immediately upon activation, and from then on will run on its defined schedule.
+## Review your rules
+
+Find your enabled rules in the **Active rules** tab of the **Analytics** section of Microsoft Sentinel. Edit, enable, disable, duplicate, or delete the active rule from there. The new rule runs immediately upon activation, and then runs on its defined schedule.
According to the default settings, each time the rule runs on its schedule, any results found will generate a security alert. Security alerts in Microsoft Sentinel can be viewed in the **Logs** section of Microsoft Sentinel, in the **SecurityAlert** table under the **Microsoft Sentinel** group. In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents, which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. You can find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md).
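For example, here's a minimal KQL sketch for reviewing recent alerts in the **SecurityAlert** table; the 24-hour window and the grouping columns are illustrative choices, not requirements:

```kusto
// Count alerts raised in the last 24 hours, grouped by rule name and severity
SecurityAlert
| where TimeGenerated > ago(24h)
| summarize AlertCount = count() by AlertName, AlertSeverity
| order by AlertCount desc
```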
-Since analytic rules constrain lookups beyond 14 days, Microsoft Sentinel refreshes indicators every 12 days to make sure they are available for matching purposes through the analytic rules.
+> [!NOTE]
+> Since analytic rules constrain lookups beyond 14 days, Microsoft Sentinel refreshes indicators every 12 days to make sure they are available for matching purposes through the analytic rules.
-## Next steps
+## Related content
In this article, you learned how to use threat intelligence indicators to detect threats. For more about threat intelligence in Microsoft Sentinel, see the following articles:
sentinel Watchlists Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-create.md
Title: Create watchlists - Microsoft Sentinel
-description: Create watchlist in Microsoft Sentinel for allowlists or blocklists, to enrich event data, and help investigate threats.
+ Title: Create new watchlists
+
+description: Create watchlist in Microsoft Sentinel for allowlists or blocklists, to enrich event data, and help investigate threats.
Previously updated : 12/06/2023 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#Customer intent: As a SOC analyst, I want to correlate data from meaningful data sources I provide with events so I can watch for more relationships with better visibility.
# Create watchlists in Microsoft Sentinel
Watchlists in Microsoft Sentinel allow you to correlate data from a data source
Upload a watchlist file from a local folder or from your Azure Storage account. To create a watchlist file, you have the option to download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
-Local file uploads are currently limited to files of up to 3.8 MB in size. A file that's over 3.8 MB in size and up to 500 MB is considered a [large watchlist](#create-a-large-watchlist-from-file-in-azure-storage-preview) Upload the file to an Azure Storage account. Before you create a watchlist, review the [limitations of watchlists](watchlists.md).
-
-When you create a watchlist, the watchlist name and alias must each be between 3 and 64 characters. The first and last characters must be alphanumeric. But you can include whitespaces, hyphens, and underscores in between the first and last characters.
+Local file uploads are currently limited to files of up to 3.8 MB in size. A file that's over 3.8 MB in size and up to 500 MB is considered a [large watchlist](#create-a-large-watchlist-from-file-in-azure-storage-preview); upload a file of that size to an Azure Storage account instead. Before you create a watchlist, review the [limitations of watchlists](watchlists.md#limitations-of-watchlists).
> [!IMPORTANT] > The features for watchlist templates and the ability to create a watchlist from a file in Azure Storage are currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> >
+>
+> [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]
+ ## Upload a watchlist from a local folder You have two ways to upload a CSV file from your local machine to create a watchlist.
You have two ways to upload a CSV file from your local machine to create a watch
If you didn't use a watchlist template to create your file,
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
-1. Under **Configuration**, select **Watchlist**.
+1. Select **+ New**.
+
+ #### [Azure portal](#tab/azure-portal)
+
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of add watchlist option on watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
-1. Select **+ Add new**.
+ #### [Defender portal](#tab/defender-portal)
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of add watchlist option on watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new-defender.png" alt-text="Screenshot of add watchlist option on watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new-defender.png":::
+
1. On the **General** page, provide the name, description, and alias for the watchlist.
If you didn't use a watchlist template to create your file,
1. Select **Next: Review and Create**.
- :::image type="content" source="./media/watchlists-create/sentinel-watchlist-source.png" alt-text="Screenshot of the watchlist source tab." lightbox="./media/watchlists-create/sentinel-watchlist-source.png":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-source.png" alt-text="Screenshot showing the watchlist source tab." lightbox="./media/watchlists-create/sentinel-watchlist-source.png":::
1. Review the information, verify that it's correct, wait for the **Validation passed** message, and then select **Create**.
It might take several minutes for the watchlist to be created and the new data t
To create the watchlist from a template you populated,
-1. From appropriate workspace in Microsoft Sentinel, select **Watchlist**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
1. Select the tab **Templates (Preview)**.
Create a shared access signature URL for Microsoft Sentinel to retrieve the watc
### Step 3: Add Azure to the CORS tab
-Before using a SAS URI, add the azure portal to the Cross Origin Resource Sharing (CORS).
+Before you use the SAS URL, add the Azure portal as an allowed origin in the storage account's cross-origin resource sharing (CORS) settings.
1. Go to the storage account settings, **Resource sharing** page. 1. Select the **Blob service** tab.
For more information, see [CORS support for Azure Storage](/rest/api/storageserv
### Step 4: Add the watchlist to a workspace
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-
-1. Under **Configuration**, select **Watchlist**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
-1. Select **+ Add new**.
+1. Select **+ New**.
:::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of the add watchlist on the watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
It might take a while for a large watchlist to be created and the new data to be
View the status by selecting the watchlist in your workspace.
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-
-1. Under **Configuration**, select **Watchlist**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
1. On the **My Watchlists** tab, select the watchlist.
Each built-in watchlist template has its own set of data listed in the CSV file
To download one of the watchlist templates,
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-
-1. Under **Configuration**, select **Watchlist**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
1. Select the tab **Templates (Preview)**.
To download one of the watchlist templates,
If you delete and recreate a watchlist, you might see both the deleted and recreated entries in Log Analytics within the five-minute SLA for data ingestion. If you see these entries together in Log Analytics for a longer period of time, submit a support ticket.
-## Next steps
+## Related content
To learn more about Microsoft Sentinel, see the following articles:
sentinel Watchlists Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-manage.md
Title: Edit watchlist - Microsoft Sentinel
-description: Edit or add items to watchlists in Microsoft Sentinel watchlists.
+ Title: Edit watchlists - Microsoft Sentinel
+description: Learn how to edit and add more items to Microsoft Sentinel watchlists to keep them up to date.
Previously updated : 1/04/2022 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#Customer intent: As a security analyst, I want to edit or bulk edit my watchlists so I can keep them up to date.
# Manage watchlists in Microsoft Sentinel We recommend you edit an existing watchlist instead of deleting and recreating a watchlist. Log Analytics has a five-minute SLA for data ingestion. If you delete and recreate a watchlist, you might see both the deleted and recreated entries in Log Analytics during this five-minute window. If you see these duplicate entries in Log Analytics for a longer period of time, submit a support ticket. + ## Edit a watchlist item Edit a watchlist to edit or add an item to the watchlist.
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-1. Under **Configuration**, select **Watchlist**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
1. Select the watchlist you want to edit. 1. On the details pane, select **Update watchlist** > **Edit watchlist items**.
Edit a watchlist to edit or add an item to the watchlist.
1. To add a new item to your watchlist, 1. Select **Add new**.
- :::image type="content" source="./media/watchlists-manage/sentinel-watchlist-edit-add-new.png" alt-text="Screenshot of the add new button at the top of the edit watchlist items page.":::
+ :::image type="content" source="./media/watchlists-manage/sentinel-watchlist-edit-add-new.png" alt-text="Screenshot of the new button at the top of the edit watchlist items page.":::
- 1. Fill in the fields in the **Add watchlist item** panel.
+ 1. Fill in the fields of the **Add watchlist item** panel.
1. At the bottom of that panel, select **Add**. ## Bulk update a watchlist
The updated watchlist file you upload must contain the search key field used by
To bulk update a watchlist,
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-1. Under **Configuration**, select **Watchlist**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
1. Select the watchlist you want to edit. 1. On the details pane, select **Update watchlist** > **Bulk update**.
To bulk update a watchlist,
1. If you get an error, fix the issue in the file. Then select **Reset** and try the file upload again. 1. Select **Next: Review and update** > **Update**.
-## Next steps
+## Related content
To learn more about Microsoft Sentinel, see the following articles:
sentinel Watchlists Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-queries.md
Title: Build queries or rules with watchlists - Microsoft Sentinel
-description: Use watchlists in searches or detection rules for Microsoft Sentinel.
+description: Use watchlists in KQL search queries or detection rules with built-in functions for Microsoft Sentinel.
Previously updated : 01/05/2023 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#Customer intent: As a SOC analyst, I want to incorporate my watchlists with advanced hunting or detection rules so I can use data I provide in meaningful ways for my security monitoring.
# Build queries or detection rules with watchlists in Microsoft Sentinel Query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. When you create a watchlist, you define the *SearchKey*. The search key is the name of a column in your watchlist that you expect to use as a join with other data or as a frequent object of searches.
-For optimal query performance, use **Searchkey** as the key for joins in your queries.
+For optimal query performance, use **SearchKey** as the key for joins in your queries.
+ ## Build queries with watchlists To use a watchlist in a search query, write a Kusto query that uses the _GetWatchlist('watchlist-name') function and uses **SearchKey** as the key for your join.
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-1. Under **Configuration**, select **Watchlist**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Watchlist**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Watchlist**.
1. Select the watchlist you want to use. 1. Select **View in Logs**.
To use a watchlist in search query, write a Kusto query that uses the _GetWatchl
1. Write a query that uses the _GetWatchlist('watchlist-name') function and uses **SearchKey** as the key for your join.
- For example, the following example query joins the `RemoteIPCountry` column in the `Heartbeat` table with the search key defined for the watchlist named mywatchlist.
+ For example, the following example query joins the `RemoteIPCountry` column in the `Heartbeat` table with the search key defined for the watchlist named `mywatchlist`.
```kusto
Heartbeat
| lookup kind=leftouter _GetWatchlist('mywatchlist') on $left.RemoteIPCountry == $right.SearchKey
```
To use a watchlist in search query, write a Kusto query that uses the _GetWatchl
To use watchlists in analytics rules, create a rule using the _GetWatchlist('watchlist-name') function in the query.
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
1. Under **Configuration**, select **Analytics**. 1. Select **Create** and the type of rule you want to create. 1. On the **General** tab, enter the appropriate information. 1. On the **Set rule logic** tab, under **Rule query** use the `_GetWatchlist('<watchlist>')` function in the query.
- For example, let's say you have a watchlist named “ipwatchlist” that you created from a CSV file with the following values:
+ For example, let's say you have a watchlist named `ipwatchlist` that you created from a CSV file with the following values:
- |IPAddress,Location |
+ |`IPAddress,Location` |
||
- | 10.0.100.11,Home |
- |172.16.107.23,Work |
- |10.0.150.39,Home |
- |172.20.32.117,Work |
+ |`10.0.100.11,Home` |
+ |`172.16.107.23,Work` |
+ |`10.0.150.39,Home` |
+ |`172.20.32.117,Work` |
The CSV file looks something like the following image. :::image type="content" source="./media/watchlists-queries/create-watchlist.png" alt-text="Screenshot of four items in a CSV file that's used for the watchlist.":::
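The rule query could then reference the watchlist through its search key. Here's a minimal sketch, assuming the `Heartbeat` table and its `ComputerIP` column as the event source:

```kusto
// Keep only events whose source IP appears in the watchlist
let watchlist = (_GetWatchlist('ipwatchlist') | project IPAddress);
Heartbeat
| where ComputerIP in (watchlist)
```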
To use watchlists in analytics rules, create a rule using the _GetWatchlist('wat
1. Complete the rest of the tabs in the **Analytics rule wizard**.
-Watchlists are refreshed in your workspace every 12 days, updating the `TimeGenerated` field.. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md#query-scheduling-and-alert-threshold).
+Watchlists are refreshed in your workspace every 12 days, updating the `TimeGenerated` field. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
## View list of watchlist aliases You might need to see a list of watchlist aliases to identify a watchlist to use in a query or analytics rule.
-1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
-1. Under **General**, select **Logs**.
-1. If you see a list of queries, close the **Queries** window.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Logs**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Investigation & response** > **Hunting** > **Advanced hunting**.
1. On the **New Query** page, run the following query: `_GetWatchlistAlias`. 1. Review the list of aliases in the **Results** tab. :::image type="content" source="./media/watchlists-queries/sentinel-watchlist-alias.png" alt-text="Screenshot that shows a list of watchlists." lightbox="./media/watchlists-queries/sentinel-watchlist-alias.png":::
-## Next steps
+## Related content
In this document, you learned how to use watchlists in Microsoft Sentinel to enrich data and improve investigations. To learn more about Microsoft Sentinel, see the following articles:
sentinel Watchlists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists.md
Title: What is a watchlist - Microsoft Sentinel
-description: Learn what watchlists are in Microsoft and when to use them.
+ Title: What is a watchlist
+
+description: Learn how watchlists allow you to correlate data with events and when to use them in Microsoft Sentinel.
- Previously updated : 01/05/2023+ Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Use watchlists in Microsoft Sentinel
Use watchlists to help you with the following scenarios:
Before you create a watchlist, be aware of the following limitations:
+- When you create a watchlist, the watchlist name and alias must each be between 3 and 64 characters. The first and last characters must be alphanumeric, but you can include whitespace, hyphens, and underscores in between.
- The use of watchlists should be limited to reference data, as they aren't designed for large data volumes. - The **total number of active watchlist items** across all watchlists in a single workspace is currently limited to **10 million**. Deleted watchlist items don't count against this total. If you require the ability to reference large data volumes, consider ingesting them using [custom logs](../azure-monitor/agents/data-sources-custom-logs.md) instead. - Watchlists are refreshed in your workspace every 12 days, updating the `TimeGenerated` field.
For more information, see the following articles:
## Watchlists in queries for searches and detection rules
-Query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. When you create a watchlist, you define the *SearchKey*. The search key is the name of a column in your watchlist that you expect to use as a join with other data or as a frequent object of searches. For example, suppose you have a server watchlist that contains country names and their respective two-letter country codes. You expect to use the country codes often for search or joins. So you use the country code column as the search key.
+Query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. When you create a watchlist, you define the *SearchKey*. The search key is the name of a column in your watchlist that you expect to use as a join with other data or as a frequent object of searches. For example, suppose you have a server watchlist that contains country names and their respective two-letter country codes. You expect to use the country codes often for searches or joins. So you use the country code column as the search key.
-The following example query joins the `RemoteIPCountry` column in the `Heartbeat` table with the search key defined for the watchlist named mywatchlist.
+The following example query joins the `RemoteIPCountry` column in the `Heartbeat` table with the search key defined for the watchlist named `mywatchlist`.
```kusto
Heartbeat
| lookup kind=leftouter _GetWatchlist('mywatchlist') on $left.RemoteIPCountry == $right.SearchKey
```
The following example query joins the `RemoteIPCountry` column in the `Heartbeat
Let's look at some other example queries.
-Suppose you want to use a watchlist in an analytics rule. You create a watchlist called “ipwatchlist” that includes columns for "IPAddress" and "Location". You define "IPAddress" as the search key.
+Suppose you want to use a watchlist in an analytics rule. You create a watchlist called `ipwatchlist` that includes columns for `IPAddress` and `Location`. You define `IPAddress` as the **SearchKey**.
- |IPAddress,Location |
+ |`IPAddress,Location` |
||
- | 10.0.100.11,Home |
- |172.16.107.23,Work |
- |10.0.150.39,Home |
- |172.20.32.117,Work |
+ |`10.0.100.11,Home` |
+ |`172.16.107.23,Work` |
+ |`10.0.150.39,Home` |
+ |`172.20.32.117,Work` |
To only include events from IP addresses in the watchlist, you might use a query where the watchlist is used as a variable or where the watchlist is used inline.
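Both forms are sketched here; the `Heartbeat` table and its `ComputerIP` column are assumed as the event source:

```kusto
// Watchlist as a variable
let watchlist = (_GetWatchlist('ipwatchlist') | project IPAddress);
Heartbeat
| where ComputerIP in (watchlist)
```

```kusto
// Watchlist used inline
Heartbeat
| where ComputerIP in ((_GetWatchlist('ipwatchlist') | project IPAddress))
```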
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: Learn about the latest new features and announcement in Microsoft S
Previously updated : 03/11/2024 Last updated : 04/03/2024 # What's new in Microsoft Sentinel
This article lists recent features added for Microsoft Sentinel, and new feature
The listed features were released in the last three months. For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/azure-sentinel/bg-p/AzureSentinelBlog/label-name/What's%20New). -
-> [!TIP]
-> Get notified when this page is updated by copying and pasting the following URL into your feed reader:
->
-> `https://aka.ms/sentinel/rss`
+ Get notified when this page is updated by copying and pasting the following URL into your feed reader:
+`https://aka.ms/sentinel/rss`
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+## April 2024
+
+- [Unified security operations platform in the Microsoft Defender portal (preview)](#unified-security-operations-platform-in-the-microsoft-defender-portal-preview)
+- [Microsoft Sentinel now generally available (GA) in Azure China 21Vianet](#microsoft-sentinel-now-generally-available-ga-in-azure-china-21vianet)
+
+### Unified security operations platform in the Microsoft Defender portal (preview)
+
+The unified security operations platform in the Microsoft Defender portal is now available. This release brings together the full capabilities of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Copilot in Microsoft Defender. For more information, see the following resources:
+
+- Blog announcement: [Unified security operations platform with Microsoft Sentinel and Microsoft Defender XDR](https://aka.ms/unified-soc-announcement)
+- [Microsoft Sentinel in the Microsoft Defender portal](https://go.microsoft.com/fwlink/p/?linkid=2263690)
+- [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard)
+- [Microsoft Security Copilot in Microsoft Defender XDR](/microsoft-365/security/defender/security-copilot-in-microsoft-365-defender)
+
+### Microsoft Sentinel now generally available (GA) in Azure China 21Vianet
+
+Microsoft Sentinel is now generally available (GA) in Azure China 21Vianet. Individual features might still be in public preview, as listed on [Microsoft Sentinel feature support for Azure commercial/other clouds](feature-availability.md).
+
+For more information, see also [Geographical availability and data residency in Microsoft Sentinel](geographical-availability-data-residency.md).
+ ## March 2024 - [SIEM migration experience now generally available (GA)](#siem-migration-experience-now-generally-available-ga)
Use analytics rules together with the [Microsoft Sentinel solution for SAP® app
For more information, see [Microsoft Sentinel solution for SAP® applications data reference](sap/sap-solution-log-reference.md) and [Handle false positives in Microsoft Sentinel](false-positives.md).
-## November 2023
--- [Take advantage of Microsoft Defender for Cloud integration with Microsoft Defender XDR (Preview)](#take-advantage-of-microsoft-defender-for-cloud-integration-with-microsoft-defender-xdr-preview)-- [Near-real-time rules now generally available](#near-real-time-rules-now-generally-available)-- [Elevate your cybersecurity intelligence with enrichment widgets (Preview)](#elevate-your-cybersecurity-intelligence-with-enrichment-widgets-preview)-
-### Take advantage of Microsoft Defender for Cloud integration with Microsoft Defender XDR (Preview)
-
-Microsoft Defender for Cloud is now [integrated with Microsoft Defender XDR](../defender-for-cloud/release-notes.md#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview), formerly known as Microsoft 365 Defender. This integration, currently **in Preview**, allows Defender XDR to collect alerts from Defender for Cloud and create Defender XDR incidents from them.
-
-Thanks to this integration, Microsoft Sentinel customers who have enabled [Defender XDR incident integration](microsoft-365-defender-sentinel-integration.md) will now be able to ingest and synchronize Defender for Cloud incidents, with all their alerts, through Microsoft Defender XDR.
-
-To support this integration, Microsoft has added a new **Tenant-based Microsoft Defender for Cloud (Preview)** connector. This connector will allow Microsoft Sentinel customers to receive Defender for Cloud alerts and incidents across their entire tenants, without having to monitor and maintain the connector's enrollment to all their Defender for Cloud subscriptions.
-
-This connector can be used to ingest Defender for Cloud alerts, regardless of whether you have Defender XDR incident integration enabled.
--- Learn more about [Microsoft Defender for Cloud integration with Microsoft Defender XDR](../defender-for-cloud/release-notes.md#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview).-- Learn more about [ingesting Defender for Cloud incidents into Microsoft Sentinel](ingest-defender-for-cloud-incidents.md).
-<!--
-- Learn how to [connect the tenant-based Defender for Cloud data connector](connect-defender-for-cloud-tenant.md) (in Preview).>-
-### Near-real-time rules now generally available
-
-Microsoft Sentinel’s [near-real-time analytics rules](near-real-time-rules.md) are now generally available (GA). These highly responsive rules provide up-to-the-minute threat detection by running their queries at intervals just one minute apart.
--- [Learn more about near-real-time rules](near-real-time-rules.md).-- [Create and work with near-real-time rules](create-nrt-rules.md).-
-<a name="visualize-data-with-enrichment-widgets-preview"></a>
-### Elevate your cybersecurity intelligence with enrichment widgets (Preview)
-
-Enrichment widgets in Microsoft Sentinel are dynamic components designed to provide you with in-depth, actionable intelligence about entities. They integrate external and internal content and data from various sources, offering a comprehensive understanding of potential security threats. These widgets serve as a powerful enhancement to your cybersecurity toolkit, offering both depth and breadth in information analysis.
-
-Widgets are already available in Microsoft Sentinel today (in Preview). They currently appear for IP entities, both on their full [entity pages](entity-pages.md) and on their [entity info panels](incident-investigation.md) that appear in Incident pages. These widgets show you valuable information about the entities, from both internal and third-party sources.
-
-**What makes widgets essential in Microsoft Sentinel?**
--- **Real-time updates:** In the ever-evolving cybersecurity landscape, real-time data is of paramount importance. Widgets provide live updates, ensuring that your analysts are always looking at the most recent data.--- **Integration:** Widgets are seamlessly integrated into Microsoft Sentinel data sources, drawing from their vast reservoir of logs, alerts, and intelligence. This integration means that the visual insights presented by widgets are backed by the robust analytical power of Microsoft Sentinel.-
-In essence, widgets are more than just visual aids. They are powerful analytical tools that, when used effectively, can greatly enhance the speed and efficiency of threat detection, investigation, and response.
--- [Enable the enrichment widgets experience in Microsoft Sentinel](enable-enrichment-widgets.md)-
-## October 2023
--- [Microsoft Applied Skill - Configure SIEM security operations using Microsoft Sentinel](#microsoft-applied-skill-available-for-microsoft-sentinel)-- [Changes to the documentation table of contents](#changes-to-the-documentation-table-of-contents)-
-### Microsoft Applied Skill available for Microsoft Sentinel
-
-This month Microsoft Worldwide Learning announced [Applied Skills](https://techcommunity.microsoft.com/t5/microsoft-learn-blog/announcing-microsoft-applied-skills-the-new-credentials-to/ba-p/3775645) to help you acquire the technical skills you need to reach your full potential. Microsoft Sentinel is included in the initial set of credentials offered! This credential is based on the learning path with the same name.
-- **Learning path** - [Configure SIEM security operations using Microsoft Sentinel](/training/paths/configure-security-information-event-management-operations-using-microsoft-sentinel/)
- <br>Learn at your own pace, and the modules require you to have your own Azure subscription.
-- **Applied Skill** - [Configure SIEM security operations using Microsoft Sentinel](/credentials/applied-skills/configure-siem-security-operations-using-microsoft-sentinel/)
- <br>A 2 hour assessment is contained in a sandbox virtual desktop. You are provided an Azure subscription with some features already configured.
-
-### Changes to the documentation table of contents
-
-We've made some significant changes in how the Microsoft Sentinel documentation is organized in the table of contents on the left-hand side of the library. Two important things to know:
--- Bookmarked links persist. Unless we retire an article, your saved and shared links to Microsoft Sentinel articles still work.-- Articles used to be divided by concepts, how-tos, and tutorials. Now, the articles are organized by lifecycle or scenario with the related concepts, how-tos, and tutorials in those buckets.-
-We hope these changes to the organization makes your exploration of Microsoft Sentinel documentation more intuitive!
-
-## September 2023
--- [Improve SOX compliance with new workbook for SAP](#improve-sox-compliance-with-new-workbook-for-sap)-
-### Improve SOX compliance with new workbook for SAP
-
-The **SAP Audit Controls workbook** is now provided to you as part of the [Microsoft Sentinel solution for SAP® applications](./sap/solution-overview.md).
-
-This workbook helps you check your SAP® environment's security controls for compliance with your chosen control framework, be it [SOX](https://www.bing.com/search?q=sox+compliance+IT+security&qs=n&form=QBRE&sp=-1&lq=0&pq=sox+compliance+it+security&sc=8-26&sk=&cvid=3ACE338C88CE43368A223D4DB7FC35E6&ghsh=0&ghacc=0&ghpl=), [NIST](https://www.nist.gov/cyberframework/framework), or a custom framework of your choice.
-
-The workbook provides tools for you to assign analytics rules in your environment to specific security controls and control families, monitor and categorize the incidents generated by the SAP solution-based analytics rules, and report on your compliance.
-
-Learn more about the [**SAP Audit Controls workbook**](./sap/sap-audit-controls-workbook.md).
-
-## August 2023
--- [New incident investigation experience is now GA](#new-incident-investigation-experience-is-now-ga)-- [Updated MISP2Sentinel solution utilizes the new upload indicators API.](#updated-misp2sentinel-solution)-- [New and improved entity pages](#new-and-improved-entity-pages)-
-### New incident investigation experience is now GA
-
-Microsoft Sentinel's comprehensive [incident investigation and case management experience](incident-investigation.md) is now generally available in both commercial and government clouds. This experience includes the revamped incident page, which itself includes displays of the incident's entities, insights, and similar incidents for comparison. The new experience also includes an incident log history and a task list.
-
-Also generally available are the similar incidents widget and the ability to add entities to your threat intelligence list of indicators of compromise (IoCs).
--- Learn more about [investigating incidents](investigate-incidents.md) in Microsoft Sentinel.-
-### Updated MISP2Sentinel solution
-
-The open source threat intelligence sharing platform, MISP, has an updated solution to push indicators to Microsoft Sentinel. This notable solution utilizes the new upload indicators API to take advantage of workspace granularity and align the MISP ingested TI to STIX-based properties.
-
-Learn more about the implementation details from the [MISP blog entry for MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/).
-
-### New and improved entity pages
-
-Microsoft Sentinel now provides you enhanced and enriched entity pages and panels, giving you more security information on user accounts, full entity data to enrich your incident context, and a reduction in latency for a faster, smoother experience.
-- Read more about these changes in this blog post: [Taking Entity Investigation to the Next Level: Microsoft Sentinel’s Upgraded Entity Pages](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/taking-entity-investigation-to-the-next-level-microsoft-sentinel/ba-p/3878382).--- Learn more about [entities in Microsoft Sentinel](entities.md).- ## Next steps > [!div class="nextstepaction"]
sentinel Work With Anomaly Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-anomaly-rules.md
Title: Work with anomaly detection analytics rules in Microsoft Sentinel | Microsoft Docs
+ Title: Work with anomaly detection analytics rules in Microsoft Sentinel
description: This article explains how to view, create, manage, assess, and fine-tune anomaly detection analytics rules in Microsoft Sentinel. Previously updated : 11/02/2022 Last updated : 03/17/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+ # Work with anomaly detection analytics rules in Microsoft Sentinel Microsoft Sentinel’s [customizable anomalies feature](soc-ml-anomalies.md) provides [built-in anomaly templates](detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules are enabled, or activated, by default, so they will generate anomalies out-of-the-box. You can find and query these anomalies in the **Anomalies** table in the **Logs** section. + ## View customizable anomaly rule templates
-You can now find anomaly rules displayed in a grid in the **Anomalies** tab in the **Analytics** page. The list can be filtered by the following criteria:
+You can now find anomaly rules displayed in a grid in the **Anomalies** tab in the **Analytics** page.
+
+1. For users of Microsoft Sentinel in the Azure portal, select **Analytics** from the Microsoft Sentinel navigation menu.
+
+ For users of the unified security operations platform in the Microsoft Defender portal, select **Microsoft Sentinel > Configuration > Analytics** from the Microsoft Defender navigation menu.
+
+1. On the **Analytics** page, select the **Anomalies** tab.
-- **Status** - whether the rule is enabled or disabled.
+1. To filter the list by one or more of the following criteria, select **Add filter** and choose accordingly.
-- **Tactics** - the MITRE ATT&CK framework tactics covered by the anomaly.
+ - **Status** - whether the rule is enabled or disabled.
-- **Techniques** - the MITRE ATT&CK framework techniques covered by the anomaly.
+ - **Tactics** - the MITRE ATT&CK framework tactics covered by the anomaly.
-- **Data sources** - the type of logs that need to be ingested and analyzed for the anomaly to be defined.
+ - **Techniques** - the MITRE ATT&CK framework techniques covered by the anomaly.
-When you select a rule, you will see the following information in the details pane:
+ - **Data sources** - the type of logs that need to be ingested and analyzed for the anomaly to be defined.
-- **Description** explains how the anomaly works and the data it requires.
+1. Select a rule and view the following information in the details pane:
-- **Tactics and techniques** are the MITRE ATT&CK framework tactics and techniques covered by the anomaly.
+ - **Description** explains how the anomaly works and the data it requires.
-- **Parameters** are the configurable attributes for the anomaly.
+ - **Tactics and techniques** are the MITRE ATT&CK framework tactics and techniques covered by the anomaly.
-- **Threshold** is a configurable value that indicates the degree to which an event must be unusual before an anomaly is created.
+ - **Parameters** are the configurable attributes for the anomaly.
-- **Rule frequency** is the time between log processing jobs that find the anomalies.
+ - **Threshold** is a configurable value that indicates the degree to which an event must be unusual before an anomaly is created.
-- **Rule status** tells you whether the rule runs in **Production** or **Flighting** (staging) mode when enabled.
+ - **Rule frequency** is the time between log processing jobs that find the anomalies.
-- **Anomaly version** shows the version of the template that is used by a rule. If you want to change the version used by a rule that is already active, you must recreate the rule.
+ - **Rule status** tells you whether the rule runs in **Production** or **Flighting** (staging) mode when enabled.
+
+ - **Anomaly version** shows the version of the template that is used by a rule. If you want to change the version used by a rule that is already active, you must recreate the rule.
The rules that come with Microsoft Sentinel out of the box cannot be edited or deleted. To customize a rule, you must first create a duplicate of the rule, and then customize the duplicate. [See the complete instructions](#tune-anomaly-rules).
The rules that come with Microsoft Sentinel out of the box cannot be edited or d
> > 1. You can submit feedback to Microsoft on your experience with customizable anomalies. -- ## Assess the quality of anomalies You can see how well an anomaly rule is performing by reviewing a sample of the anomalies created by a rule over the last 24-hour period.
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
+1. For users of Microsoft Sentinel in the Azure portal, select **Analytics** from the Microsoft Sentinel navigation menu.
+
+ For users of the unified security operations platform in the Microsoft Defender portal, select **Microsoft Sentinel > Configuration > Analytics** from the Microsoft Defender navigation menu.
1. On the **Analytics** page, select the **Anomalies** tab.
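To inspect the raw records behind that sample, you can also query the **Anomalies** table directly. Here's a quick sketch; the rule name is a placeholder, and the filter assumes the table's `RuleName` column:

```kusto
// Sample anomalies produced by one rule over the last 24 hours
Anomalies
| where TimeGenerated > ago(24h)
| where RuleName == "<your anomaly rule name>"
| take 100
```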
sentinel Work With Threat Indicators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-threat-indicators.md
Title: Work with threat indicators in Microsoft Sentinel
+ Title: Work with threat indicators
+ description: This article explains how to view, create, manage, and visualize threat intelligence indicators in Microsoft Sentinel. Previously updated : 8/30/2022 Last updated : 3/14/2024
+appliesto:
+ - Microsoft Sentinel in the Azure portal
+ - Microsoft Sentinel in the Microsoft Defender portal
+
+#customer intent: As a security analyst, I want to use threat intelligence so I can power my threat detections.
# Work with threat indicators in Microsoft Sentinel
Integrate threat intelligence (TI) into Microsoft Sentinel through the following
- **Visualize key information** about your imported threat intelligence in Microsoft Sentinel with the **Threat Intelligence workbook**. + ## View your threat indicators in Microsoft Sentinel ### Find and view your indicators in the Threat intelligence page
This procedure describes how to view and manage your indicators in the **Threat
**To view your threat intelligence indicators in the Threat intelligence page**:
-1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
-
-1. Select the workspace where you imported threat indicators.
-
-1. From the **Threat Management** section on the left, select the **Threat Intelligence** page.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
1. From the grid, select the indicator for which you want to view more details. The indicator's details appear on the right, showing information such as confidence levels, tags, threat types, and more.
This procedure describes how to view and manage your indicators in the **Threat
1. IP and domain name indicators are enriched with extra GeoLocation and WhoIs data, providing more context for investigations where the selected indicator is found.
- For example:
+For example:
+
+#### [Azure portal](#tab/azure-portal)
++
+#### [Defender portal](#tab/defender-portal)
- :::image type="content" source="media/work-with-threat-indicators/geolocation-whois-ti.png" alt-text="Screenshot of the Threat intelligence page with an indicator showing GeoLocation and WhoIs data." lightbox="media/work-with-threat-indicators/geolocation-whois-ti.png":::
++ > [!IMPORTANT] > GeoLocation and WhoIs enrichment is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Imported threat indicators are listed in the **Microsoft Sentinel > ThreatIntell
**To view your threat intelligence indicators in Logs**:
-1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
-
-1. Select the workspace to which you’ve imported threat indicators using either threat intelligence data connector.
-
-1. Select **Logs** from the **General** section of the Microsoft Sentinel menu.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Logs**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Investigation & response** > **Hunting** > **Advanced hunting**.
1. The **ThreatIntelligenceIndicator** table is located under the **Microsoft Sentinel** group. 1. Select the **Preview data** icon (the eye) next to the table name and select the **See in query editor** button to run a query that will show records from this table.
- Your results should look similar to the sample threat indicator shown below:
+ Your results should look similar to the sample threat indicator shown in this screenshot:
:::image type="content" source="media/work-with-threat-indicators/ti-table-results.png" alt-text="Screenshot shows sample ThreatIntelligenceIndicator table results with the details expanded." lightbox="media/work-with-threat-indicators/ti-table-results.png":::
The **Threat intelligence** page also allows you to create threat indicators dir
### Create a new indicator
-1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
-
-1. Choose the **workspace** to which you’ve imported threat indicators using either threat intelligence data connector.
-
-1. Select **Threat Intelligence** from the Threat Management section of the Microsoft Sentinel menu.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
1. Select the **Add new** button from the menu bar at the top of the page.
Workbooks provide powerful interactive dashboards that give you insights into al
There is also a rich community of [Azure Monitor workbooks on GitHub](https://github.com/microsoft/Application-Insights-Workbooks) to download more templates and contribute your own templates.
-## Next steps
+## Related content
In this article, you learned all the ways to work with threat intelligence indicators throughout Microsoft Sentinel. For more about threat intelligence in Microsoft Sentinel, see the following articles: - [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md). - Connect Microsoft Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md).-- [Connect threat intelligence platforms](./connect-threat-intelligence-tip.md) to Microsoft Sentinel. - See which [TIPs, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.
sentinel Workspace Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/workspace-manager.md
Last updated 04/24/2023
-# Centrally manage multiple Microsoft Sentinel workspaces with workspace manager
+# Centrally manage multiple Microsoft Sentinel workspaces with workspace manager (Preview)
Learn how to centrally manage multiple Microsoft Sentinel workspaces within one or more Azure tenants with workspace manager. This article takes you through provisioning and usage of workspace manager. Whether you're a global enterprise or a Managed Security Services Provider (MSSP), workspace manager helps you operate at scale efficiently.
Here are the active content types supported with workspace
- Hunting and Livestream queries - Workbooks
+> [!IMPORTANT]
+> Support for workspace manager is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
++ ## Prerequisites - You need at least two Microsoft Sentinel workspaces. One workspace to manage from and at least one other workspace to be managed.
service-bus-messaging Service Bus Outages Disasters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-outages-disasters.md
Title: Insulate Azure Service Bus applications against outages and disasters description: This article provides techniques to protect applications against a potential Azure Service Bus outage. Previously updated : 12/15/2022 Last updated : 04/02/2024 # Best practices for insulating applications against Service Bus outages and disasters
Mission-critical applications must operate continuously, even in the presence of
An outage is defined as the temporary unavailability of Azure Service Bus. The outage can affect some components of Service Bus, such as a messaging store, or even the entire datacenter. After the problem has been fixed, Service Bus becomes available again. Typically, an outage doesn't cause loss of messages or other data. An example of a component failure is the unavailability of a particular messaging store. An example of a datacenter-wide outage is a power failure of the datacenter, or a faulty datacenter network switch. An outage can last from a few minutes to a few days.
-A disaster is defined as the permanent loss of a Service Bus scale unit or datacenter. The datacenter may or may not become available again. Typically a disaster causes loss of some or all messages or other data. Examples of disasters are fire, flooding, or earthquake.
+A disaster is defined as the permanent loss of a Service Bus scale unit or datacenter. The datacenter might or might not become available again. Typically a disaster causes loss of some or all messages or other data. Examples of disasters are fire, flooding, or earthquake.
## Protection against outages and disasters - premium tier High availability and disaster recovery concepts are built right into the Azure Service Bus **premium** tier, both within the same region (via availability zones) and across different regions (via geo-disaster Recovery).
Service Bus **premium** tier supports geo-disaster recovery, at the namespace le
### Availability zones
-The Service Bus **premium** tier supports [availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If applications see transient disconnects from Service Bus, the [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus) in the SDK will automatically reconnect to Service Bus.
+The Service Bus **premium** tier supports [availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If applications see transient disconnects from Service Bus, the [retry logic](/azure/architecture/best-practices/retry-service-specific#service-bus) in the SDK automatically reconnects to Service Bus.
When you use availability zones, **both metadata and data (messages)** are replicated across data centers in the availability zone. > [!NOTE] > The availability zones support for the premium tier is only available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
-When you create a premium tier namespace through the portal, the support for availability zones (if available in the selected region) is automatically enabled for the namespace. When creating a premium tier namespace through other mechanisms, such as [ARM / Bicep templates](/azure/templates/microsoft.servicebus/namespaces#sbnamespaceproperties), [CLI](/cli/azure/servicebus/namespace?#az-servicebus-namespace-create-optional-parameters), or [PowerShell](/powershell/module/az.servicebus/new-azservicebusnamespace#-zoneredundant), the property `zoneRedundant` needs to be explicitly set to `true` to enable availability zones (if available in the selected region). There's no additional cost for using this feature and you can't disable or enable this feature after namespace creation.
+When you create a premium tier namespace through the portal, the support for availability zones (if available in the selected region) is automatically enabled for the namespace. When you create a premium tier namespace through other mechanisms, such as [Azure Resource Manager / Bicep templates](/azure/templates/microsoft.servicebus/namespaces#sbnamespaceproperties), [CLI](/cli/azure/servicebus/namespace?#az-servicebus-namespace-create-optional-parameters), or [PowerShell](/powershell/module/az.servicebus/new-azservicebusnamespace#-zoneredundant), the property `zoneRedundant` needs to be explicitly set to `true` to enable availability zones (if available in the selected region). There's no extra cost for using this feature and you can't disable or enable this feature after namespace creation.
## Protection against outages and disasters - standard tier
-To achieve resilience against datacenter outages when using the standard messaging pricing tier, Service Bus supports two approaches: **active** and **passive** replication. For each approach, if a given queue or topic must remain accessible in the presence of a datacenter outage, you can create it in both namespaces. Both entities can have the same name. For example, a primary queue can be reached under **contosoPrimary.servicebus.windows.net/myQueue**, while its secondary counterpart can be reached under **contosoSecondary.servicebus.windows.net/myQueue**.
+To achieve resilience against datacenter outages with the standard messaging pricing tier, you could use **active** or **passive** replication. For each approach, if a given queue or topic must remain accessible in the presence of a datacenter outage, you can create it in both namespaces. Both entities can have the same name. For example, a primary queue can be reached under **contosoPrimary.servicebus.windows.net/myQueue**, while its secondary counterpart can be reached under **contosoSecondary.servicebus.windows.net/myQueue**.
>[!NOTE] > The **active replication** and **passive replication** setup are general purpose solutions and not specific features of Service Bus.
A client receives messages from both queues. Because there's a chance that the r
In general, passive replication is more economical than active replication because in most cases only one operation is performed. Latency, throughput, and monetary cost are identical to the non-replicated scenario.
-When using passive replication, in the following scenarios, messages can be lost or received twice:
+When you use passive replication, in the following scenarios, messages can be lost or received twice:
-* **Message delay or loss**: Assume that the sender successfully sent a message m1 to the primary queue, and then the queue becomes unavailable before the receiver receives m1. The sender sends a subsequent message m2 to the secondary queue. If the primary queue is temporarily unavailable, the receiver receives m1 after the queue becomes available again. In case of a disaster, the receiver may never receive m1.
+* **Message delay or loss**: Assume that the sender successfully sent a message m1 to the primary queue, and then the queue becomes unavailable before the receiver receives m1. The sender sends a subsequent message m2 to the secondary queue. If the primary queue is temporarily unavailable, the receiver receives m1 after the queue becomes available again. When a disaster happens, the receiver might never receive m1.
* **Duplicate reception**: Assume that the sender sends a message m to the primary queue. Service Bus successfully processes m but fails to send a response. After the send operation times out, the sender sends an identical copy of m to the secondary queue. If the receiver is able to receive the first copy of m before the primary queue becomes unavailable, the receiver receives both copies of m at approximately the same time. If the receiver isn't able to receive the first copy of m before the primary queue becomes unavailable, the receiver initially receives only the second copy of m, but then receives a second copy of m when the primary queue becomes available. The [Azure Messaging Replication Tasks with .NET Core][Azure Messaging Replication Tasks with .NET Core] sample demonstrates replication of messages between namespaces.
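The failover logic for passive replication is straightforward to express in client code. The following is a minimal sketch using the `Azure.Messaging.ServiceBus` SDK; the sender objects and queue names are assumptions for illustration, and the stable `MessageId` is what allows the receiving side to recognize a message that landed in both queues.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Minimal sketch of a passive-replication send path. The two senders are
// assumed to target the same queue name in the primary and secondary
// namespaces (for example, contosoPrimary and contosoSecondary).
async Task SendWithFailoverAsync(
    ServiceBusSender primarySender,
    ServiceBusSender secondarySender,
    BinaryData body)
{
    // A stable MessageId lets the receiving side detect duplicates if the
    // message ends up in both queues.
    var message = new ServiceBusMessage(body) { MessageId = Guid.NewGuid().ToString() };
    try
    {
        await primarySender.SendMessageAsync(message);
    }
    catch (ServiceBusException)
    {
        // The primary queue is unavailable; fall back to the secondary queue.
        await secondarySender.SendMessageAsync(message);
    }
}
```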
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
Title: Best practices for improving performance using Azure Service Bus description: Describes how to use Service Bus to optimize performance when exchanging brokered messages. Previously updated : 06/30/2023 Last updated : 04/02/2024 ms.devlang: csharp
The benchmarking sample doesn't use any advanced features, so the throughput you
#### Compute considerations
-Using certain Service Bus features may require compute utilization that may decrease the expected throughput. Some of these features are -
+Using certain Service Bus features requires compute utilization that can decrease the expected throughput. Some of these features are:
1. Sessions. 2. Fanning out to multiple subscriptions on a single topic.
You can also utilize Azure Monitor to [automatically scale the Service Bus names
### Sharding across namespaces
-While scaling up Compute (Messaging Units) allocated to the namespace is an easier solution, it **may not** provide a linear increase in the throughput. It's because of Service Bus internals (storage, network, etc.), which may be limiting the throughput.
+While scaling up Compute (Messaging Units) allocated to the namespace is an easier solution, it **might not** provide a linear increase in the throughput. It's because of Service Bus internals (storage, network, etc.), which might be limiting the throughput.
-The cleaner solution in this case is to shard your entities (queues, and topics) across different Service Bus Premium namespaces. You may also consider sharding across different namespaces in different Azure regions.
+The cleaner solution in this case is to shard your entities (queues, and topics) across different Service Bus Premium namespaces. You can also consider sharding across different namespaces in different Azure regions.
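As an illustration, a sharded send path can be as simple as hashing a partition key onto a set of clients, one per namespace. The following is a minimal sketch; the namespace and queue names are placeholders, and `Azure.Identity` is assumed for authentication.

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Minimal sketch: one ServiceBusClient per premium namespace, with a hash
// of a partition key choosing the shard. Names are placeholders.
var clients = new[]
{
    new ServiceBusClient("<namespace-1>.servicebus.windows.net", new DefaultAzureCredential()),
    new ServiceBusClient("<namespace-2>.servicebus.windows.net", new DefaultAzureCredential()),
};

ServiceBusSender PickSender(string partitionKey) =>
    clients[(int)((uint)partitionKey.GetHashCode() % (uint)clients.Length)]
        .CreateSender("<queue-name>");
```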
## Protocols Service Bus enables clients to send and receive messages via one of three protocols:
Service Bus enables clients to send and receive messages via one of three protoc
2. Service Bus Messaging Protocol (SBMP) 3. Hypertext Transfer Protocol (HTTP)
-AMQP is the most efficient, because it maintains the connection to Service Bus. It also implements [batching](#batching-store-access) and [prefetching](#prefetching). Unless explicitly mentioned, all content in this article assumes the use of AMQP or SBMP.
+AMQP is the most efficient, because it maintains the connection to Service Bus. It also implements batching and [prefetching](#prefetching). Unless explicitly mentioned, all content in this article assumes the use of AMQP or SBMP.
> [!IMPORTANT] > The SBMP protocol is only available for .NET Framework. AMQP is the default for .NET Standard.
For more information on minimum .NET Standard platform support, see [.NET implem
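For illustration, the transport is chosen on the client options when the connection is created. The following is a minimal sketch with a placeholder namespace name; AMQP over TCP is the SDK default, and AMQP over WebSockets is shown only as the explicit opt-in.

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Minimal sketch: AMQP over TCP (port 5671) is the SDK default. AMQP over
// WebSockets (port 443) can be opted into when outbound 5671 is blocked.
var client = new ServiceBusClient(
    "<your-namespace>.servicebus.windows.net",
    new DefaultAzureCredential(),
    new ServiceBusClientOptions
    {
        TransportType = ServiceBusTransportType.AmqpWebSockets
    });
```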
## Reusing factories and clients # [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
-The Service Bus clients that interact with the service, such as [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient), [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender), [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver), and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor), should be registered for dependency injection as singletons (or instantiated once and shared). ServiceBusClient can be registered for dependency injection with the [ServiceBusClientBuilderExtensions](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/servicebus/Azure.Messaging.ServiceBus/src/Compatibility/ServiceBusClientBuilderExtensions.cs).
+The Service Bus clients that interact with the service, such as [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient), [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender), [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver), and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor), should be registered for dependency injection as singletons (or instantiated once and shared). ServiceBusClient can be registered for dependency injection with the [ServiceBusClientBuilderExtensions](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/servicebus/Azure.Messaging.ServiceBus/src/Compatibility/ServiceBusClientBuilderExtensions.cs).
We recommend that you don't close or dispose these clients after sending or receiving each message. Closing or disposing the entity-specific objects (ServiceBusSender/Receiver/Processor) results in tearing down the link to the Service Bus service. Disposing the ServiceBusClient results in tearing down the connection to the Service Bus service.
-This guidance doesn't apply to the [ServiceBusSessionReceiver](/dotnet/api/azure.messaging.servicebus.servicebussessionreceiver), as its lifetime is the same as the session itself. For applications working with the `ServiceBusSessionReceiver`, it's recommended to use a singleton instance of the `ServiceBusClient` to accept each session, which spans a new `ServiceBusSessionReceiver` bound to that session. Once the application finishes processing that session, it should dispose the associated `ServiceBusSessionReceiver`.
+This guidance doesn't apply to the [ServiceBusSessionReceiver](/dotnet/api/azure.messaging.servicebus.servicebussessionreceiver), as its lifetime is the same as the session itself. For applications working with the `ServiceBusSessionReceiver`, it's recommended to use a singleton instance of the `ServiceBusClient` to accept each session, which spans a new `ServiceBusSessionReceiver` bound to that session. Once the application finishes processing that session, it should dispose the associated `ServiceBusSessionReceiver`.
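A minimal registration sketch for the singleton guidance above, assuming the `Microsoft.Extensions.Azure` package and a placeholder connection string, looks like this:

```csharp
using Microsoft.Extensions.Azure;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Registers ServiceBusClient as a singleton so the AMQP connection is
// created once and shared for the lifetime of the application.
services.AddAzureClients(builder =>
{
    builder.AddServiceBusClient("<service-bus-connection-string>");
});
```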
# [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk)
When setting the receive mode to `ReceiveAndDelete`, both steps are combined in
Service Bus doesn't support transactions for receive-and-delete operations. Also, peek-lock semantics are required for any scenarios in which the client wants to defer or [dead-letter](service-bus-dead-letter-queues.md) a message.
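The receive mode is chosen when the receiver is created. The following is a minimal sketch, assuming an existing `ServiceBusClient` named `client` and a placeholder queue name; peek-lock remains the default.

```csharp
using Azure.Messaging.ServiceBus;

// Minimal sketch: opt in to receive-and-delete when higher throughput
// matters more than redelivery after a failure. PeekLock is the default.
ServiceBusReceiver receiver = client.CreateReceiver(
    "<queue-name>",
    new ServiceBusReceiverOptions
    {
        ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete
    });
```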
-## Batching store access
-
-To increase the throughput of a queue, topic, or subscription, Service Bus batches multiple messages when it writes to its internal store.
--- When you enable batching on a queue, writing messages into the store, and deleting messages from the store are batched. -- When you enable batching on a topic, writing messages into the store are batched. -- When you enable batching on a subscription, deleting messages from the store are batched. -- When batched store access is enabled for an entity, Service Bus delays a store write operation for that entity by up to 20 ms.-
-> [!NOTE]
-> There is no risk of losing messages with batching, even if there is a Service Bus failure at the end of a 20ms batching interval.
-
-Additional store operations that occur during this interval are added to the batch. Batched store access only affects **Send** and **Complete** operations; receive operations aren't affected. Batched store access is a property on an entity. Batching occurs across all entities that enable batched store access.
-
-When you create a new queue, topic or subscription, batched store access is enabled by default.
--
-# [Azure.Messaging.ServiceBus SDK](#tab/net-standard-sdk-2)
-To disable batched store access, you need an instance of a `ServiceBusAdministrationClient`. Create a `CreateQueueOptions` from a queue description that sets the `EnableBatchedOperations` property to `false`.
-
-```csharp
-var options = new CreateQueueOptions(path)
-{
- EnableBatchedOperations = false
-};
-var queue = await administrationClient.CreateQueueAsync(options);
-```
--
-# [Microsoft.Azure.ServiceBus SDK](#tab/net-standard-sdk)
-
-To disable batched store access, you need an instance of a `ManagementClient`. Create a queue from a queue description that sets the `EnableBatchedOperations` property to `false`.
-
-```csharp
-var queueDescription = new QueueDescription(path)
-{
- EnableBatchedOperations = false
-};
-var queue = await managementClient.CreateQueueAsync(queueDescription);
-```
-
-For more information, see the following articles:
-- [QueueDescription.EnableBatchedOperations property](/dotnet/api/microsoft.azure.servicebus.management.queuedescription.enablebatchedoperations)-- [SubscriptionDescription.EnabledBatchedOperations property](/dotnet/api/microsoft.azure.servicebus.management.subscriptiondescription.enablebatchedoperations)
-* [TopicDescription.EnableBatchedOperations](/dotnet/api/microsoft.azure.servicebus.management.topicdescription.enablebatchedoperations)
---
-Batched store access doesn't affect the number of billable messaging operations. It's a property of a queue, topic, or subscription. It's independent of the receive mode and the protocol that's used between a client and the Service Bus service.
- ## Prefetching
-[Prefetching](service-bus-prefetch.md) enables the queue or subscription client to load additional messages from the service when it receives messages. The client stores these messages in a local cache. The size of the cache is determined by the `ServiceBusReceiver.PrefetchCount` properties. Each client that enables prefetching maintains its own cache. A cache isn't shared across clients. If the client starts a receive operation and its cache is empty, the service transmits a batch of messages. The size of the batch equals the size of the cache or 256 KB, whichever is smaller. If the client starts a receive operation and the cache contains a message, the message is taken from the cache.
+[Prefetching](service-bus-prefetch.md) enables the queue or subscription client to load additional messages from the service when it receives messages. The client stores these messages in a local cache. The size of the cache is determined by the `ServiceBusReceiver.PrefetchCount` property. Each client that enables prefetching maintains its own cache. A cache isn't shared across clients. If the client starts a receive operation and its cache is empty, the service transmits a batch of messages. If the client starts a receive operation and the cache contains a message, the message is taken from the cache.
When a message is prefetched, the service locks the prefetched message. With the lock, the prefetched message can't be received by a different receiver. If the receiver can't complete the message before the lock expires, the message becomes available to other receivers. The prefetched copy of the message remains in the cache. The receiver that consumes the expired cached copy receives an exception when it tries to complete that message. By default, the message lock expires after 60 seconds. This value can be extended to 5 minutes. To prevent the consumption of expired messages, set the cache size smaller than the number of messages that a client can consume within the lock timeout interval.
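A minimal configuration sketch for the `PrefetchCount` property, assuming an existing `ServiceBusClient` named `client` and a placeholder queue name:

```csharp
using Azure.Messaging.ServiceBus;

// Minimal sketch: size the prefetch cache below what the receiver can
// complete within the lock duration to avoid consuming expired copies.
ServiceBusReceiver receiver = client.CreateReceiver(
    "<queue-name>",
    new ServiceBusReceiverOptions { PrefetchCount = 50 });

// Each receive call drains the local cache first, then the service.
var messages = await receiver.ReceiveMessagesAsync(maxMessages: 10);
```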
While using these approaches together, consider the following cases:
* Prefetch should be greater than or equal to the number of messages you're expecting to receive from `ReceiveMessagesAsync`. * Prefetch can be up to n/3 times the number of messages processed per second, where n is the default lock duration.
-There are some challenges with having a greedy approach, that is, keeping the prefetch count high, because it implies that the message is locked to a particular receiver. We recommend that you try out prefetch values that are between the thresholds mentioned above, and identify what fits.
+There are some challenges with having a greedy approach, that is, keeping the prefetch count high, because it implies that the message is locked to a particular receiver. We recommend that you try out prefetch values that are between the thresholds mentioned earlier, and identify what fits.
## Multiple queues or topics
If a single queue or topic can't handle the expected number of messages, use mul
More queues or topics mean that you have more entities to manage at deployment time. From a scalability perspective, there really isn't too much of a difference that you would notice, as Service Bus already spreads the load across multiple logs internally, so whether you use two queues or topics or six won't make a material difference.
-The tier of service you use impacts performance predictability. If you choose **Standard** tier, throughput and latency are best effort over a shared multi-tenant infrastructure. Other tenants on the same cluster may impact your throughput. If you choose **Premium**, you get resources that give you predictable performance, and your multiple queues or topics get processed out of that resource pool. For more information, see [Pricing tiers](#pricing-tier).
+The tier of service you use impacts performance predictability. If you choose **Standard** tier, throughput and latency are best effort over a shared multitenant infrastructure. Other tenants on the same cluster can impact your throughput. If you choose **Premium**, you get resources that give you predictable performance, and your multiple queues or topics get processed out of that resource pool. For more information, see [Pricing tiers](#pricing-tier).
## Partitioned namespaces When you use [partitioned premium tier namespaces](service-bus-partitioning.md), multiple partitions with lower messaging units (MU) give you a better performance over a single partition with higher MUs.
Goal: Minimize latency of a queue or topic. The number of senders and receivers
Goal: Maximize throughput of a queue or topic with a large number of senders. Each sender sends messages with a moderate rate. The number of receivers is small.
-Service Bus enables up to 1000 concurrent connections to a messaging entity. This limit is enforced at the namespace level, and queues, topics, or subscriptions are capped by the limit of concurrent connections per namespace. For queues, this number is shared between senders and receivers. If all 1000 connections are required for senders, replace the queue with a topic and a single subscription. A topic accepts up to 1000 concurrent connections from senders. The subscription accepts an additional 1000 concurrent connections from receivers. If more than 1000 concurrent senders are required, the senders should send messages to the Service Bus protocol via HTTP.
+Service Bus enables up to 1,000 concurrent connections to a messaging entity. This limit is enforced at the namespace level, and queues, topics, or subscriptions are capped by the limit of concurrent connections per namespace. For queues, this number is shared between senders and receivers. If all 1,000 connections are required for senders, replace the queue with a topic and a single subscription. A topic accepts up to 1,000 concurrent connections from senders. The subscription accepts an extra 1,000 concurrent connections from receivers. If more than 1,000 concurrent senders are required, the senders should send messages to Service Bus via the HTTP protocol.
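As an illustration of the queue-to-topic swap, the entities can be created with the administration client. The following is a minimal sketch with placeholder names, assuming the `Azure.Messaging.ServiceBus.Administration` namespace:

```csharp
using Azure.Messaging.ServiceBus.Administration;

// Minimal sketch: a topic plus a single subscription behaves like a queue
// but gives senders and receivers separate connection quotas.
var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");
await adminClient.CreateTopicAsync("<topic-name>");
await adminClient.CreateSubscriptionAsync("<topic-name>", "<subscription-name>");
```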
To maximize throughput, follow these steps:
To maximize throughput, follow these steps:
Goal: Maximize the receive rate of a queue or subscription with a large number of receivers. Each receiver receives messages at a moderate rate. The number of senders is small.
-Service Bus enables up to 1000 concurrent connections to an entity. If a queue requires more than 1000 receivers, replace the queue with a topic and multiple subscriptions. Each subscription can support up to 1000 concurrent connections. Alternatively, receivers can access the queue via the HTTP protocol.
+Service Bus enables up to 1,000 concurrent connections to an entity. If a queue requires more than 1,000 receivers, replace the queue with a topic and multiple subscriptions. Each subscription can support up to 1,000 concurrent connections. Alternatively, receivers can access the queue via the HTTP protocol.
To maximize throughput, follow these guidelines:
To maximize throughput, try the following steps:
* Use asynchronous operations to take advantage of client-side batching. * Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic.
-* Set the prefetch count to 20 times the expected receive rate in seconds. This count reduces the number of Service Bus client protocol transmissions.
+* Set the prefetch count to 20 times the expected rate at which messages are received. This count reduces the number of Service Bus client protocol transmissions.
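To illustrate the client-side batching mentioned in the earlier list, the following is a minimal sketch assuming an existing `ServiceBusSender` named `sender`:

```csharp
using Azure.Messaging.ServiceBus;

// Minimal sketch: accumulate messages into a size-aware batch and send
// them in a single client-to-service transfer.
using ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();
for (int i = 0; i < 100; i++)
{
    // TryAddMessage returns false once the batch hits the size limit.
    if (!batch.TryAddMessage(new ServiceBusMessage($"message {i}")))
    {
        break;
    }
}
await sender.SendMessagesAsync(batch);
```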
service-connector Quickstart Cli Aks Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-aks-connection.md
Use the Azure CLI command to create a service connection to a Blob Storage with
* **AKS cluster name:** the name of your AKS cluster that connects to the target service. * **Target service resource group name:** the resource group name of the Blob Storage. * **Storage account name:** the account name of your Blob Storage.
-* **User-assigned identity subscription ID:** the subscription ID of the user assigned identity that used to create workload identity
-* **User-assigned identity client ID:** the client ID of the user assigned identity used to create workload identity
+* **User-assigned identity resource ID:** the resource ID of the user-assigned identity that is used to create the workload identity
```azurecli az aks connection create storage-blob \
- --workload-identity client-id="<your-user-assigned-identity-client-id>" subs-id="<your-user-assigned-identity-subscription-id>"
+ --workload-identity <user-identity-resource-id>
``` > [!NOTE]
-> If you don't have a Blob Storage, you can run `az aks connection create storage-blob --new --workload-identity client-id="<your-user-assigned-identity-client-id>" subs-id="<your-user-assigned-identity-subscription-id>"` to provision a new one and get connected to your function app straightaway.
+> If you don't have a Blob Storage, you can run `az aks connection create storage-blob --new --workload-identity <user-identity-resource-id>` to provision a new one and get connected to your AKS cluster straightaway.
service-connector Tutorial Python Aks Storage Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-storage-workload-identity.md
Learn how to create a pod in an AKS cluster, which talks to an Azure storage acc
1. Create a resource group for this tutorial.
-```azurecli
-az group create \
- --name MyResourceGroup \
- --location eastus
-```
+ ```azurecli
+ az group create \
+ --name MyResourceGroup \
+ --location eastus
+ ```
1. Create an AKS cluster with the following command, or refer to the [tutorial](../aks/learn/quick-kubernetes-deploy-cli.md). You create the service connection and pod definition, and deploy the sample application to this cluster.
az group create \
--node-count 1 ```
-1. connect to the cluster with the following command.
+1. Connect to the cluster with the following command.
```azurecli az aks get-credentials \
az group create \
1. Create an Azure storage account with the following command, or refer to the [tutorial](../storage/common/storage-account-create.md). This is the target service that the AKS cluster connects to and that the sample application interacts with.
-```azurecli
-az storage account create \
- --resource-group MyResourceGroup \
- --name MyStorageAccount \
- --location eastus \
- --sku Standard_LRS
-```
+ ```azurecli
+ az storage account create \
+ --resource-group MyResourceGroup \
+ --name MyStorageAccount \
+ --location eastus \
+ --sku Standard_LRS
+ ```
1. Create an Azure container registry with the following command, or refer to the [tutorial](../container-registry/container-registry-get-started-portal.md). The registry hosts the container image of the sample application, which will be consumed by the AKS pod definition.
-```azurecli
-az acr create \
- --resource-group MyResourceGroup \
- --name MyRegistry \
- --sku Standard
-```
-
-And enable anonymous pull so that AKS cluster can consume the images in the registry.
+ ```azurecli
+ az acr create \
+ --resource-group MyResourceGroup \
+ --name MyRegistry \
+ --sku Standard
+ ```
+ Then enable anonymous pull so that the AKS cluster can consume the images in the registry.
-```azurecli
-az acr update \
- --resource-group MyResourceGroup \
- --name MyRegistry \
- --anonymous-pull-enabled
-```
+ ```azurecli
+ az acr update \
+ --resource-group MyResourceGroup \
+ --name MyRegistry \
+ --anonymous-pull-enabled
+ ```
1. Create a user-assigned managed identity with the following command, or refer to the [tutorial](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities). The user-assigned managed identity is used in service connection creation to enable workload identity for AKS workloads.
-```azurecli
-az identity create \
- --resource-group MyResourceGroup \
- --name MyIdentity
-```
+ ```azurecli
+ az identity create \
+ --resource-group MyResourceGroup \
+ --name MyIdentity
+ ```
## Create service connection with Service Connector
Provide the following information as prompted:
* **AKS cluster name:** the name of your AKS cluster that connects to the target service. * **Target service resource group name:** the resource group name of the Azure storage account. * **Storage account name:** the Azure storage account that is connected.
-* **User-assigned identity subscription ID:** the subscription ID of the user-assigned identity used to create workload identity.
-* **User-assigned identity client ID:** the client ID of the user-assigned identity used to create workload identity.
+* **User-assigned identity resource ID:** the resource ID of the user-assigned identity used to create workload identity.
Provide the following information as prompted:
1. Build and push the images to your container registry using the Azure CLI [`az acr build`](/cli/azure/acr#az_acr_build) command.
-```azurecli
-az acr build --registry <MyRegistry> --image sc-demo-storage-identity:latest ./
-```
+ ```azurecli
+ az acr build --registry <MyRegistry> --image sc-demo-storage-identity:latest ./
+ ```
1. View the images in your container registry using the [`az acr repository list`](/cli/azure/acr/repository#az_acr_repository_list) command.
-```azurecli
-az acr repository list --name <MyRegistry> --output table
-```
+ ```azurecli
+ az acr repository list --name <MyRegistry> --output table
+ ```
## Run application and test connection
service-fabric How To Migrate Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-migrate-transport-layer-security.md
+
+ Title: How to migrate to TLS (Transport Layer Security) 1.3 for Service Fabric
+description: A how-to guide for migrating to TLS version 1.3 for classic and managed Service Fabric clusters.
+++++ Last updated : 03/29/2024++
+# How to migrate to TLS (Transport Layer Security) 1.3 for Service Fabric
+
+This article explains how to enable TLS 1.3 on Service Fabric clusters. TLS helps secure the HTTP endpoints of your clusters.
+
+If you only use certificates and don't need to define endpoints for your clusters, you only need to enable exclusive authentication mode. To enable this mode, you only need to complete the [Upgrade to the latest Service Fabric runtime version](#upgrade-to-the-latest-service-fabric-runtime-version) and [Enable exclusive authentication mode](#enable-exclusive-authentication-mode) steps. Callouts are made when appropriate. If you later decide to enable token-based authentication, you need to complete the skipped steps.
+
+> [!NOTE]
+> The steps in this article are for Windows machines. Linux isn't supported at this time.
+
+> [!NOTE]
+> Support for TLS 1.3 was introduced with Service Fabric version 10.1CU2 (10.1.1951.9590). However, TLS 1.3 won't be enabled for Service Fabric Transport endpoints of user applications running in Windows 8 compatibility mode. In this scenario, Windows 10 compatibility mode must be declared in the Windows application manifests for TLS 1.3 to be successfully enabled.
+
+## Prerequisites
+
+1. Determine the Service Fabric runtime version of your cluster. You can see your cluster's runtime version by signing in to the Azure portal, viewing your cluster in the Service Fabric Explorer, or connecting to your cluster via PowerShell.
+1. Ensure all the nodes in your cluster are upgraded to Windows Server 2022.
+ * For managed clusters, you can follow the steps outlined in the [Modify the OS SKU for a node type section of the Service Fabric managed cluster node types how-to guide](how-to-managed-cluster-modify-node-type.md#modify-the-os-sku-for-a-node-type).
+ * For classic clusters, you can follow the steps outlined in [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md).
+1. Determine if you use token-based authentication. You can check in the portal or review your cluster's manifest in the Service Fabric Explorer. If you do use token-based authentication, Microsoft Entra ID settings appear in the cluster manifest.
+
+Once you complete these prerequisite steps, you're ready to enable TLS 1.3 on your Service Fabric clusters.
+
+## Upgrade to the latest Service Fabric runtime version
+
+In this step, you upgrade your cluster's runtime version to the latest version, which supports TLS 1.3.
+
+Follow the steps in [Upgrade the Service Fabric version that runs on your cluster](service-fabric-cluster-upgrade-windows-server.md). When you're finished, return to this article.
+
+If you don't use token-based authentication for authenticating to your clusters, you should skip the next two steps. Instead, proceed to [Enable exclusive authentication mode](#enable-exclusive-authentication-mode). If you use token-based authentication for authenticating to your clusters, proceed to the next step.
+
+## Define a new HTTP endpoint
+
+> [!NOTE]
+> If you don't use token-based authentication for authenticating to your clusters, you should skip this step and the next. Instead, proceed to [Enable exclusive authentication mode](#enable-exclusive-authentication-mode).
+
+In this step, you define a new HTTP endpoint to use for token-based authentication to your cluster. You must define a new endpoint because TLS 1.3 doesn't easily support mixed mode authentication, where both X.509 certificates and OAuth 2.0 bearer tokens are accepted on the same endpoint. Service Fabric cluster management endpoints typically use mixed mode authentication, so enabling TLS 1.3 without creating a new endpoint would break your cluster management endpoints.
+
+Define a new endpoint exclusively dedicated to token-based authentication and do so for each node type in your cluster. In the following JSON snippet, we demonstrate how to define an endpoint in a cluster manifest with an example port number of 19079:
+
+```json
+"nodeTypes": [
+ {
+ "name": "parameters('vmNodeType0Name')]",
+ ...
+ "httpGatewayTokenAuthEndpointPort": "19079",
+ ...
+ }
+]
+```
+
+You can use any port number. It should be the same value throughout the cluster and should be selected from the range of ports reserved for the Service Fabric runtime.
+
+To deploy the new endpoint, you have two options:
+* Upgrade the configuration of an existing cluster using the new manifest
+* Define the endpoint at deployment time of a new cluster
+
+### Cluster configuration upgrade for an existing cluster
+
+You can follow the steps in [Customize cluster settings using Resource Manager templates section of the Upgrade the configuration of a cluster in Azure](service-fabric-cluster-config-upgrade-azure.md#customize-cluster-settings-using-resource-manager-templates). When editing the JSON in step 4, make sure to update the `properties` element to include the new endpoint definition previously detailed in a sample JSON snippet.
+
+### Deploy a new cluster
+
+You can follow the steps in the appropriate quickstart for the type of Service Fabric cluster you use. Make sure to edit the template to include the new endpoint definition previously detailed in the sample JSON snippet.
+* [Service Fabric managed clusters quickstart](quickstart-managed-cluster-template.md)
+* [Service Fabric classic clusters quickstart](quickstart-cluster-template.md)
+
+## Migrate to the new token authentication endpoint
+
+> [!NOTE]
+> If you don't use token-based authentication for authenticating to your clusters, you should skip this step and should've skipped the previous step. Instead, proceed to [Enable exclusive authentication mode](#enable-exclusive-authentication-mode).
+
+In this step, you need to find and update all clients that used token-based authentication to target the new token authentication endpoint. These clients might include scripts, code, or services. Any clients still addressing the old gateway port break when the port starts accepting TLS 1.3 connections. Also note that this port could be parameterized or have a different value than the Service Fabric-defined default.
+
+Some examples of changes that need to be made:
+* Microsoft Entra ID applications
+* Any scripts that reference the existing endpoint
+* Load Balancer (LB) inbound Network Address Translation (NAT), Health Probe, and LB rules that reference the existing endpoint
+* Network Security Group (NSG) rules
+
+You also need to migrate traffic that requires token-based authentication to the new endpoint.
+
+## Enable exclusive authentication mode
+
+In this step, you enable exclusive authentication mode. As a safety mechanism, TLS 1.3 isn't offered on the default HTTP gateway endpoint until the cluster owner enables exclusive authentication mode.
+
+`enableHttpGatewayExclusiveAuthMode` is a new setting with a default value of `false`. You need to set this new setting to `true`. If you use token-based authentication, you can set `enableHttpGatewayExclusiveAuthMode` at the same time as the new endpoint definition in the previous steps. The steps are presented sequentially only to minimize the chance of breakage.
+
+> [!WARNING]
+> If users aren't fully migrated to the new set of endpoints, this is a breaking change.
+
+> [!IMPORTANT]
+> The Service Fabric runtime blocks enabling the exclusive authentication mode if token-based authentication is enabled on your cluster but a separate endpoint for token-based authentication isn't yet specified.
+>
+> However, nothing in the cluster can detect breaks in external clients that attempt to authenticate using tokens against the newly exclusive default HTTP gateway port.
+
+After you introduce this new setting to your cluster's configuration, you'll lose token-based access to the previous endpoint. You can access Service Fabric Explorer via the new port you defined if you completed the [Define a new HTTP endpoint step](#define-a-new-http-endpoint).
+
+To update the `enableHttpGatewayExclusiveAuthMode` setting, you have two options:
+* Upgrade the configuration of an existing cluster using the new manifest
+* Define the endpoint at deployment time of a new cluster
+
+### Cluster configuration upgrade for an existing cluster
+
+You can follow the steps in [Customize cluster settings using Resource Manager templates section of the Upgrade the configuration of a cluster in Azure](service-fabric-cluster-config-upgrade-azure.md#customize-cluster-settings-using-resource-manager-templates). When editing the JSON in step 4, make sure to update the `properties` element to include the new setting shown in the following JSON snippet.
+
+```json
+"properties": {
+    "enableHttpGatewayExclusiveAuthMode": true
+}
+```
+
+### Deploy a new cluster
+
+You can follow the steps in the appropriate quickstart for the type of Service Fabric cluster you use. Make sure to edit the template to include the new setting previously detailed in the sample JSON snippet.
+* [Service Fabric managed clusters quickstart](quickstart-managed-cluster-template.md)
+* [Service Fabric classic clusters quickstart](quickstart-cluster-template.md)
+
+## Next steps
+
+There aren't any specific steps you need to complete after migrating your cluster to TLS 1.3. However, some useful related articles are included in the following links:
+* [X.509 Certificate-based authentication in Service Fabric clusters](cluster-security-certificates.md)
+* [Manage certificates in Service Fabric clusters](cluster-security-certificate-management.md)
static-web-apps Enterprise Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/enterprise-edge.md
Title: Enterprise-grade edge in Azure Static Web Apps
-description: Learn about Azure Static Web Apps enterprise-grade edge
+description: Learn about Azure Static Web Apps enterprise-grade edge.
az staticwebapp enterprise-edge enable -n my-static-webapp -g my-resource-group
## Limitations -- Private Endpoint can't be used with enterprise-grade edge.
+- Private Endpoint can't be used with enterprise-grade edge.
+- Custom domains configured using A Records (DNS) aren't supported with enterprise-grade edge.
## Next steps
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
For container-level permissions, you can choose which containers you want to gra
| List | l | <li>List content within container</li><li>List content within directory</li> | | Delete | d | <li>Delete file/directory</li> | | Create | c | <li>Upload file if file doesn't exist</li><li>Create directory if directory doesn't exist</li> |
-| Modify Ownership | o | <li>Change owner or group for file/directory</li> |
+| Modify Ownership | o | <li>Change the owning user or owning group for file/directory</li> |
| Modify Permissions | p | <li>Change permissions for file/directory</li> | When performing write operations on blobs in sub directories, Read permission is required to open the directory and access blob properties. ## ACLs
-For directory or blob level permissions, you can change owner, group, and mode that are used by ADLS Gen2 ACLs. Most SFTP clients expose commands for changing these properties. The following table describes common commands in more detail.
+For directory or blob level permissions, you can change owning user, owning group, and mode that are used by ADLS Gen2 ACLs. Most SFTP clients expose commands for changing these properties. The following table describes common commands in more detail.
| Command | Required Container Permission | Description | ||||
-| chown | o | <li>Change owner for file/directory</li><li>Must specify numeric ID</li> |
-| chgrp | o | <li>Change group for file/directory</li><li>Must specify numeric ID</li> |
+| chown | o | <li>Change owning user for file/directory</li><li>Must specify numeric ID</li> |
+| chgrp | o | <li>Change owning group for file/directory</li><li>Must specify numeric ID</li> |
| chmod | p | <li>Change permissions/mode for file/directory</li><li>Must specify POSIX style octal permissions</li> |
-The IDs required for changing owner and group are part of new properties for Local Users. The following table describes each new Local User property in more detail.
+The IDs required for changing owning user and owning group are part of new properties for Local Users. The following table describes each new Local User property in more detail.
| Property | Description | |||
-| UserId | <li>Unique identifier for the Local User within the storage account</li><li>Generated by default when the Local User is created</li><li>Used for setting owner on file/directory</li> |
-| GroupId | <li>Identifer for a group of Local Users</li> |
+| UserId | <li>Unique identifier for the Local User within the storage account</li><li>Generated by default when the Local User is created</li><li>Used for setting owning user on file/directory</li> |
+| GroupId | <li>Identifier for a group of Local Users</li><li>Used for setting owning group on file/directory</li> |
| AllowAclAuthorization | <li>Allow authorizing this Local User's requests with ACLs</li> | Once the desired ACLs have been configured and the Local User enables `AllowAclAuthorization`, they may use ACLs to authorize their requests. Similar to RBAC, container permissions can interoperate with ACLs. Only if the local user doesn't have sufficient container permissions will ACLs be evaluated. To learn more, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md).
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
Previously updated : 12/30/2022 Last updated : 03/22/2024
Communication between a client application and an Azure Storage account is encry
Azure Storage currently supports three versions of the TLS protocol: 1.0, 1.1, and 1.2. Azure Storage uses TLS 1.2 on public HTTPS endpoints, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+> [!TIP]
+> Azure Storage relies on the Windows implementation of SSL, which isn't based on OpenSSL and therefore isn't exposed to OpenSSL-related vulnerabilities.
+ Azure Storage accounts permit clients to send and receive data with the oldest version of TLS, TLS 1.0, and above. To enforce stricter security measures, you can configure your storage account to require that clients send and receive data with a newer version of TLS. If a storage account requires a minimum version of TLS, then any requests made with an older version will fail. This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to continuously manage secure TLS for your storage accounts.
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
# StorSimple 8100 and 8600 migration to Azure File Sync
-The StorSimple 8000 series is represented by either the 8100 or the 8600 physical, on-premises appliances and their cloud service components. StorSimple 8010 and 8020 virtual appliances are also covered in this migration guide. It's possible to migrate the data from either of these appliances to Azure file shares with optional Azure File Sync. Azure File Sync is the default and strategic long-term Azure service that replaces the StorSimple on-premises functionality. This article provides the necessary background knowledge and migration steps for a successful migration to Azure File Sync.
+The StorSimple 8000 series includes either the 8100 or the 8600 physical, on-premises appliances and their cloud service components. StorSimple 8010 and 8020 virtual appliances are also covered in this migration guide. It's possible to migrate the data from either of these appliances to Azure file shares with optional Azure File Sync. Azure File Sync is the default and strategic long-term Azure service that replaces the StorSimple on-premises functionality. This article provides the necessary background knowledge and migration steps for a successful migration to Azure File Sync.
> [!NOTE] > The StorSimple Service (including the StorSimple Device Manager for 8000 and 1200 series and StorSimple Data Manager) has reached the end of support. The end of support for StorSimple was published in 2019 on the [Microsoft LifeCycle Policy](/lifecycle/products/?terms=storsimple) and [Azure Communications](https://azure.microsoft.com/updates/storsimpleeol/) pages. Additional notifications were sent via email and posted on the Azure portal and in the [StorSimple overview](../../storsimple/storsimple-overview.md). Contact [Microsoft Support](https://azure.microsoft.com/support/create-ticket/) for additional details.
This section contains the steps you should take at the beginning of your migrati
- Selecting storage tier - Selecting storage redundancy options - Selecting direct-share-access vs. Azure File Sync
- - StorSimple Service Data Encryption Key & Serial Number
+ - StorSimple Service Data Encryption Key and Serial Number
- StorSimple Volume Backup migration
- - Mapping StorSimple volumes & shares to Azure file shares
+ - Mapping StorSimple volumes and shares to Azure file shares
- Grouping shares inside Azure file shares - Mapping considerations - Migration planning worksheet
This section contains the steps you should take at the beginning of your migrati
### Inventory
-When you begin planning your migration, first identify all the StorSimple appliances and volumes you need to migrate. After you've done that, you can decide on the best migration path for you.
+When you begin planning your migration, first identify all the StorSimple appliances and volumes you need to migrate. Afterwards, you can decide on the best migration path.
* StorSimple physical appliances (8000 series) use this migration guide.
-* Virtual appliances, [StorSimple 1200 series, use a different migration guide](storage-files-migration-storsimple-1200.md).
+* StorSimple virtual appliances (1200 series) use a [different migration guide](storage-files-migration-storsimple-1200.md).
### Migration cost summary Migrations to Azure file shares from StorSimple volumes via migration jobs in a StorSimple Data Manager resource are free of charge. Other costs might be incurred during and after a migration:
-* **Network egress:** Your StorSimple files live in a storage account within a specific Azure region. If you provision the Azure file shares you migrate into a storage account that's located in the same Azure region, no egress cost will occur. You can move your files to a storage account in a different region as part of this migration. In that case, egress costs will apply to you.
-* **Azure file share transactions:** When files are copied into an Azure file share (as part of a migration or outside of one), transaction costs apply as files and metadata are being written. As a best practice, start your Azure file share on the transaction optimized tier during the migration. Switch to your desired tier after the migration is finished. The following phases will call this out at the appropriate point.
-* **Change an Azure file share tier:** Changing the tier of an Azure file share costs transactions. In most cases, it will be more cost efficient to follow the advice from the previous point.
-* **Storage cost:** When this migration starts copying files into an Azure file share, storage is consumed and billed. Migrated backups will become [Azure file share snapshots](storage-snapshots-files.md). File share snapshots only consume storage capacity for the differences they contain.
-* **StorSimple:** Until you have a chance to deprovision the StorSimple devices and storage accounts, StorSimple cost for storage, backups, and appliances will continue to occur.
+* **Network egress:** Your StorSimple files live in a storage account within a specific Azure region. If you provision the Azure file shares you migrate into a storage account in the same Azure region, no egress costs occur. However, if you move your files to a storage account in a different region as part of this migration, egress costs will apply.
+* **Azure file share transactions:** When files are copied into an Azure file share (as part of a migration or outside of one), transaction costs apply as files and metadata are being written. As a best practice, start your Azure file share on the transaction optimized tier during the migration. Switch to your desired tier after the migration is finished. The phases described in this article call this out at the appropriate point.
+* **Change an Azure file share tier:** Changing the tier of an Azure file share costs transactions. In most cases, it is more cost efficient to follow the advice from the previous point.
+* **Storage cost:** When this migration starts copying files into an Azure file share, storage is consumed and billed. Migrated backups become [Azure file share snapshots](storage-snapshots-files.md). File share snapshots only consume storage capacity for the differences they contain.
+* **StorSimple:** Until you deprovision the StorSimple devices and storage accounts, StorSimple costs for storage, backups, and appliances will continue to accrue.
### Direct-share-access vs. Azure File Sync
-Azure file shares open up a whole new world of opportunities for structuring your file services deployment. An Azure file share is just an SMB share in the cloud that you can set up to have users access directly over the SMB protocol with the familiar Kerberos authentication and existing NTFS permissions (file and folder ACLs) working natively. Learn more about [identity-based access to Azure file shares](storage-files-active-directory-overview.md).
+Azure file shares open up a new world of opportunities for structuring your file services deployment. An Azure file share is an SMB share in the cloud that you can set up to have users access directly over the SMB protocol with the familiar Kerberos authentication and existing NTFS permissions (file and folder ACLs) working natively. Learn more about [identity-based access to Azure file shares](storage-files-active-directory-overview.md).
An alternative to direct access is [Azure File Sync](../file-sync/file-sync-planning.md). Azure File Sync is a direct analog for StorSimple's ability to cache frequently used files on-premises.
Azure File Sync is a Microsoft cloud service, based on two main components:
* File synchronization and cloud tiering to create a performance access cache on any Windows Server. * File shares as native storage in Azure that can be accessed over multiple protocols like SMB and file REST.
-Azure file shares retain important file fidelity aspects on stored files like attributes, permissions, and timestamps. With Azure file shares, there's no longer a need for an application or service to interpret the files and folders stored in the cloud. You can access them natively over familiar protocols and clients like Windows File Explorer. Azure file shares allow you to store general-purpose file server data and application data in the cloud. Backup of an Azure file share is a built-in functionality and can be further enhanced by Azure Backup.
+Azure file shares retain important file fidelity aspects like attributes, permissions, and timestamps. With Azure file shares, there's no longer a need for an application or service to interpret the files and folders stored in the cloud. You can access them natively over familiar protocols and clients. Azure file shares allow you to store general-purpose file server data and application data in the cloud.
This article focuses on the migration steps. If you want to learn more about Azure File Sync before migrating, see the following articles:
-* [Azure File Sync overview](../file-sync/file-sync-planning.md "Overview")
+* [Azure File Sync planning guide](../file-sync/file-sync-planning.md)
* [Azure File Sync deployment guide](../file-sync/file-sync-deployment-guide.md) ### StorSimple service data encryption key
-When you first set up your StorSimple appliance, it generated a "service data encryption key" and instructed you to securely store the key. This key is used to encrypt all data in the associated Azure storage account where the StorSimple appliance stores your files.
+When you first set up your StorSimple appliance, it generated a service data encryption key and instructed you to securely store the key. This key is used to encrypt all data in the associated Azure storage account where the StorSimple appliance stores your files.
-The "service data encryption key" is necessary for a successful migration. Now is a good time to retrieve this key from your records, one for each of the appliances in your inventory.
+The service data encryption key is necessary for a successful migration. Retrieve this key from your records, one for each of the appliances in your inventory.
If you can't find the keys in your records, you can generate a new key from the appliance. Each appliance has a unique encryption key.
If you can't find the keys in your records, you can generate a new key from the
> When you're deciding how to connect to your StorSimple appliance, consider the following: > > * Connecting through an HTTPS session is the most secure and recommended option.
-> * Connecting directly to the device serial console is secure, but connecting to the serial console over network switches is not.
+> * Connecting directly to the device serial console is secure, but connecting to the serial console over network switches isn't.
> * HTTP session connections are an option but are *not encrypted*. They're not recommended unless they're used within in a closed, trusted network. ### Known limitations
-The StorSimple Data Manager and Azure file shares have a few limitations you should consider before you begin your migration, as they can prevent a migration:
-* Only NTFS volumes from your StorSimple appliance are supported. ReFS volumes are not supported.
-* Any volume placed on [Windows Server Dynamic Disks](/troubleshoot/windows-server/backup-and-storage/best-practices-using-dynamic-disks) is not supported. (deprecated before Windows Server 2012)
+The StorSimple Data Manager and Azure file shares have a few limitations you should consider before you begin, as they can prevent a migration:
+
+* Only NTFS volumes from your StorSimple appliance are supported. ReFS volumes aren't supported.
+* Any volume placed on [Windows Server Dynamic Disks](/troubleshoot/windows-server/backup-and-storage/best-practices-using-dynamic-disks) isn't supported.
* The service doesn't work with volumes that are BitLocker encrypted or have [Data Deduplication](/windows-server/storage/data-deduplication/understand) enabled. * Corrupted StorSimple backups can't be migrated.
-* Special networking options, such as firewalls or private endpoint-only communication can't be enabled on either the source storage account where StorSimple backups are stored, nor on the target storage account that holds your Azure file shares.
-
+* Special networking options, such as firewalls or private endpoint-only communication, can't be enabled on either the source storage account where StorSimple backups are stored, nor on the target storage account that holds your Azure file shares.
### File fidelity
-If none of the limitations in [Known limitations](#known-limitations) prevent a migration. There are still limitations on what can be stored in Azure file shares that you need to be aware of.
-_File fidelity_ refers to the multitude of attributes, timestamps, and data that compose a file. In a migration, file fidelity is a measure of how well the information on the source (StorSimple volume) can be translated (migrated) to the target (Azure file share).
-[Azure Files supports a subset](/rest/api/storageservices/set-file-properties) of the [NTFS file properties](/windows/win32/fileio/file-attribute-constants). ACLs, common metadata, and some timestamps will be migrated. The following items won't prevent a migration but will cause per-item issues during a migration:
+If none of the limitations in [Known limitations](#known-limitations) prevent a migration, there are still limitations on what can be stored in Azure file shares.
+
+File fidelity refers to the multitude of attributes, timestamps, and data that compose a file. In a migration, file fidelity is a measure of how well the information on the source (StorSimple volume) can be translated (migrated) to the target Azure file share.
+
+[Azure Files supports a subset](/rest/api/storageservices/set-file-properties) of the [NTFS file properties](/windows/win32/fileio/file-attribute-constants). Windows ACLs, common metadata, and some timestamps are migrated.
+
+The following items won't prevent a migration but will cause per-item issues during a migration:
-* Timestamps: File change time will not be set - it is currently read-only over the REST protocol. Last access timestamp on a file will not be moved, it currently isn't a supported attribute on files stored in an Azure file share.
-* [Alternative Data Streams](/openspecs/windows_protocols/ms-fscc/b134f29a-6278-4f3f-904f-5e58a713d2c5) can't be stored in Azure file shares. Files holding Alternate Data Streams will be copied, but Alternate Data Streams will be stripped from the file in the process.
-* Symbolic links, hard links, junctions, and reparse points are skipped during a migration. The migration copy logs will list each skipped item and a reason.
-* EFS encrypted files will fail to copy. Copy logs will show the item failed to copy with "Access is denied".
-* Corrupt files are skipped. The copy logs may list different errors for each item that is corrupt on the StorSimple disk: "The request failed due to a fatal device hardware error" or "The file or directory is corrupted or unreadable" or "The access control list (ACL) structure is invalid".
+* Timestamps: File change time won't be set. It's currently read-only over the REST protocol. Last access timestamp on a file won't be moved, as it isn't a supported attribute on files stored in an Azure file share.
+* [Alternative Data Streams](/openspecs/windows_protocols/ms-fscc/b134f29a-6278-4f3f-904f-5e58a713d2c5) can't be stored in Azure file shares. Files holding Alternate Data Streams will be copied, but Alternate Data Streams are stripped from the file in the process.
+* Symbolic links, hard links, junctions, and reparse points are skipped during a migration. The migration copy logs list each skipped item and a reason.
+* EFS encrypted files fail to copy. Copy logs show the item failed to copy with "Access is denied".
+* Corrupt files are skipped. The copy logs might list different errors for each item that is corrupt on the StorSimple disk: "The request failed due to a fatal device hardware error" or "The file or directory is corrupted or unreadable" or "The access control list (ACL) structure is invalid".
* Individual files larger than 4 TiB are skipped.
-* File path lengths need to be equal to or fewer than 2048 characters. Files and folders with longer paths will be skipped.
-* Reparse points will be skipped. Any Microsoft Data Deduplication / SIS reparse points or those of third parties cannot be resolved by the migration engine and prevent a migration of the affected files and folders.
+* File path lengths must be equal to or fewer than 2048 characters. Files and folders with longer paths are skipped.
+* Reparse points are skipped. Any Microsoft Data Deduplication / SIS reparse points or those of third parties can't be resolved by the migration engine and will prevent a migration of the affected files and folders.
The [troubleshooting section](#troubleshooting) at the end of this article has more details for item level and migration job level error codes and where possible, their mitigation options. ### StorSimple volume backups StorSimple offers differential backups on the volume level. Azure file shares also have this ability, called share snapshots.
-Your migration jobs can only move backups, not data from the live volume. So the most recent backup should always be on the list of backups moved in a migration.
-Decide if you need to move any older backups during your migration.
-Best practice is to keep this list as small as possible, so your migration jobs complete faster.
+Your migration jobs can only move backups, never data from the live volume. Therefore the most recent backup is closest to the live data and thus should always be part of the list of backups to be moved in a migration.
-To identify critical backups that must be migrated, make a checklist of your backup policies. For instance:
-* The most recent backup. (Note: The most recent backup should always be part of this list.)
+Decide if you need to move any older backups during your migration. It's a best practice to keep this list as small as possible so your migration jobs complete faster.
+
+To identify critical backups that must be migrated, make a checklist of your backup policies. For example:
+
+* The most recent backup.
* One backup a month for 12 months.
-* One backup a year for three years.
+* One backup a year for three years.
-Later on, when you create your migration jobs, you can use this list to identify the exact StorSimple volume backups that must be migrated to satisfy the requirements on your list.
+When you create your migration jobs, you can use this list to identify the exact StorSimple volume backups that must be migrated to satisfy your requirements.
-> [!CAUTION]
-> Selecting more than **50** StorSimple volume backups is not supported.
-> Your migration jobs can only move backups, never data from the live volume. Therefore the most recent backup is closest to the live data and thus should always be part of the list of backups to be moved in a migration.
+It's best to suspend all StorSimple backup retention policies before you select a backup for migration. Migrating your backups can take several days or weeks. StorSimple offers backup retention policies that delete backups. Backups you've selected for this migration might get deleted before they've had a chance to be migrated.
> [!CAUTION]
-> It's best to suspend all StorSimple backup retention policies before you select a backup for migration. </br>Migrating your backups takes several days or weeks. StorSimple offers backup retention policies that will delete backups. Backups you have selected for this migration may get deleted before they had a chance to be migrated.
+> Selecting more than **50** StorSimple volume backups isn't supported.
### Map your existing StorSimple volumes to Azure file shares
Later on, when you create your migration jobs, you can use this list to identify
### Number of storage accounts
-Your migration will likely benefit from a deployment of multiple storage accounts that each hold a smaller number of Azure file shares.
+Your migration will likely benefit from deploying multiple storage accounts that each hold a smaller number of Azure file shares.
-If your file shares are highly active (utilized by many users or applications), two Azure file shares might reach the performance limit of your storage account. Because of this, the best practice is to migrate to multiple storage accounts, each with their own individual file shares, and typically no more than two or three shares per storage account.
+If your file shares are highly active (utilized by many users or applications), two Azure file shares might reach the performance limit of your storage account. Because of this, it's often better to migrate to multiple storage accounts, each with its own individual file shares, and typically no more than two or three shares per storage account. A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares into the same storage account if those shares hold archival data.
-A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares into the same storage account, if you have archival shares in them.
+These considerations apply more to [direct cloud access](#direct-share-access-vs-azure-file-sync) (through an Azure VM or service) than to Azure File Sync. If you plan to exclusively use Azure File Sync on these shares, grouping several into a single Azure storage account is fine. In the future, you might want to lift and shift an app into the cloud that would then directly access a file share, a scenario that benefits from higher IOPS and throughput. Or you could start using a service in Azure that would also benefit from them.
-These considerations apply more to [direct cloud access](#direct-share-access-vs-azure-file-sync) (through an Azure VM or service) than to Azure File Sync. If you plan to exclusively use Azure File Sync on these shares, grouping several into a single Azure storage account is fine. In the future, you may want to lift and shift an app into the cloud that would then directly access a file share, this scenario would benefit from having higher IOPS and throughput. Or you could start using a service in Azure that would also benefit from having higher IOPS and throughput.
-
-If you've made a list of your shares, map each share to the storage account where it will reside.
+After making a list of your shares, map each share to the storage account where it will reside. Decide on an Azure region, and ensure each storage account and Azure File Sync resource matches the region you selected.
> [!IMPORTANT]
-> Decide on an Azure region, and ensure each storage account and Azure File Sync resource matches the region you selected.
> Don't configure network and firewall settings for the storage accounts now. Making these configurations at this point would make a migration impossible. Configure these Azure storage settings after the migration is complete. ### Storage account settings
-There are many configurations you can make on a storage account. The following checklist should be used for confirming your storage account configurations. You can change for instance the networking configuration after your migration is complete.
+There are many configurations you can make on a storage account. Use the following checklist to confirm your storage account configurations. You can change the networking configuration after your migration is complete.
> [!div class="checklist"]
-> * Large file shares: Enabled - Large file shares improve performance and allow you to store up to 100TiB in a share. This setting applies to target storage accounts with Azure file shares.
-> * Firewall and virtual networks: Disabled - do not configure any IP restrictions or limit storage account access to a specific VNET. The public endpoint of the storage account is used during the migration. All IP addresses from Azure VMs must be allowed. It's best to configure any firewall rules on the storage account after the migration. Configure both, your source and target storage accounts this way.
-> * Private Endpoints: Supported - You can enable private endpoints but the public endpoint is used for the migration and must remain available. This consideration applies to both, your source and target storage accounts.
+> * Large file shares: Enabled - Large file shares improve performance and allow you to store up to 100 TiB in a share. This setting applies to target storage accounts with Azure file shares.
+> * Firewall and virtual networks: Disabled - don't configure any IP restrictions or limit storage account access to a specific virtual network. The public endpoint of the storage account is used during the migration. All IP addresses from Azure VMs must be allowed. It's best to configure any firewall rules on the storage account after the migration. Configure both your source and target storage accounts this way.
+> * Private Endpoints: Supported - You can enable private endpoints, but the public endpoint is used for the migration and must remain available. This applies to both your source and target storage accounts.
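If you'd rather confirm these settings from the command line, the following Az PowerShell sketch checks them. It's a minimal sketch: the resource group and account names are placeholders, and it assumes the Az PowerShell module is installed and signed in.

```powershell
# Minimal sketch (hypothetical names): confirm a target storage account
# matches the checklist above.
$account = Get-AzStorageAccount -ResourceGroupName "StorSimpleMigration-RG" -Name "migrationtarget01"

# Large file shares should report "Enabled".
$account.LargeFileSharesState

# The firewall default action should be "Allow" during the migration;
# "Deny" means IP or virtual network restrictions are in place.
(Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "StorSimpleMigration-RG" -Name "migrationtarget01").DefaultAction
```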
### Phase 1 summary At the end of Phase 1: * You have a good overview of your StorSimple devices and volumes.
-* The Data Manager service is ready to access your StorSimple volumes in the cloud because you've retrieved your "service data encryption key" for each StorSimple device.
+* The Data Manager service is ready to access your StorSimple volumes in the cloud because you've retrieved your service data encryption key for each StorSimple device.
* You have a plan for which volumes and backups (if any beyond the most recent) need to be migrated. * You know how to map your volumes to the appropriate number of Azure file shares and storage accounts.
This section discusses considerations around deploying the different resource ty
### Deploy storage accounts
-You'll likely need to deploy several Azure storage accounts. Each one will hold a smaller number of Azure file shares, as per your deployment plan, completed in the previous section of this article. Go to the Azure portal to [deploy your planned storage accounts](../common/storage-account-create.md#create-a-storage-account). Consider adhering to the following basic settings for any new storage account.
+You'll likely need to deploy several Azure storage accounts. Each one will hold a smaller number of Azure file shares, as per your deployment plan. Go to the Azure portal to [deploy your planned storage accounts](../common/storage-account-create.md#create-a-storage-account). Consider adhering to the following basic settings for any new storage account.
> [!IMPORTANT]
-> Do not configure network and firewall settings for your storage accounts now. Making those configurations at this point would make a migration impossible. Configure these Azure storage settings after the migration is complete.
+> Don't configure network and firewall settings for your storage accounts now. Making those configurations at this point would make a migration impossible. Configure these Azure storage settings after the migration is complete.
#### Subscription
-You can use the same subscription you used for your StorSimple deployment or a different one. The only limitation is that your subscription must be in the same Microsoft Entra tenant as the StorSimple subscription. Consider moving the StorSimple subscription to the appropriate tenant before you start a migration. You can only move the entire subscription, individual StorSimple resources can't be moved to a different tenant or subscription.
+You can use the same subscription you used for your StorSimple deployment, or you can use a different one. The only limitation is that your subscription must be in the same Microsoft Entra tenant as the StorSimple subscription. Consider moving the StorSimple subscription to the appropriate tenant before you start a migration. You can only move the entire subscription, as individual StorSimple resources can't be moved to a different tenant or subscription.
#### Resource group
-Resource groups are assisting with organization of resources and admin management permissions. Find out more about [resource groups in Azure](../../azure-resource-manager/management/manage-resource-groups-portal.md#what-is-a-resource-group).
+Resource groups in Azure help you organize resources and manage admin permissions. [Find out more](../../azure-resource-manager/management/manage-resource-groups-portal.md#what-is-a-resource-group).
#### Storage account name
-The name of your storage account will become part of a URL and has certain character limitations. In your naming convention, consider that storage account names have to be unique in the world, allow only lowercase letters and numbers, require between 3 to 24 characters, and don't allow special characters like hyphens or underscores. For more information, see [Azure storage resource naming rules](../../azure-resource-manager/management/resource-name-rules.md#microsoftstorage).
+The name of your storage account will become part of a URL used to access your file share and has certain character limitations. In your naming convention, consider that storage account names must be globally unique, allow only lowercase letters and numbers, must be between 3 and 24 characters, and don't allow special characters like hyphens or underscores. See [Azure storage resource naming rules](../../azure-resource-manager/management/resource-name-rules.md#microsoftstorage).
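To check whether a candidate name follows these rules and is still available before you deploy, you can use a quick lookup like this sketch (the name shown is a placeholder):

```powershell
# Minimal sketch (hypothetical name): returns NameAvailable = False with a
# reason if the name is taken or violates the naming rules.
Get-AzStorageAccountNameAvailability -Name "stormigrationtarget01"
```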
#### Location
-The location or Azure region of a storage account is very important. If you use Azure File Sync, all of your storage accounts must be in the same region as your Storage Sync Service resource. The Azure region you pick should be close or central to your local servers and users. After your resource has been deployed, you can't change its region.
+The Azure region of a storage account is important. If you use Azure File Sync, all your storage accounts must be in the same region as your Storage Sync Service resource. The Azure region you pick should be close or central to your local servers and users. After you deploy your resource, you can't change its region.
-You can pick a different region from where your StorSimple data (storage account) currently resides.
-
-> [!IMPORTANT]
-> If you pick a different region from your current StorSimple storage account location, [egress charges will apply](https://azure.microsoft.com/pricing/details/bandwidth) during the migration. Data will leave the StorSimple region and enter your new storage account region. No bandwidth charges apply if you stay within the same Azure region.
+You can pick a different region from where your StorSimple data (storage account) currently resides. However, if you do, [egress charges will apply](https://azure.microsoft.com/pricing/details/bandwidth) during the migration. Data will leave the StorSimple region and enter your new storage account region. No bandwidth charges apply if you stay within the same Azure region.
#### Performance You have the option to pick premium storage (SSD) for Azure file shares or standard storage. Standard storage includes [several tiers for a file share](storage-how-to-create-file-share.md#change-the-tier-of-an-azure-file-share). Standard storage is the right option for most customers migrating from StorSimple.
-Still not sure?
- * Choose premium storage if you need the [performance of a premium Azure file share](understanding-billing.md#provisioned-model). * Choose standard storage for general-purpose file server workloads, which includes hot data and archive data. Also choose standard storage if the only workload on the share in the cloud will be Azure File Sync. * For premium file shares, choose *File shares* in the create storage account wizard. #### Replication
-There are several replication settings available. Learn more about the different replication types.
-
-Only choose from either of the following two options:
+There are several replication settings available. Only choose from the following two options:
* *Locally redundant storage (LRS)*. * *Zone redundant storage (ZRS)*, which isn't available in all Azure regions. > [!NOTE]
-> Only LRS and ZRS redundancy types are compatible with the large 100 TiB capacity Azure file shares.
-
-Geo redundant storage (GRS) in all variations is currently not supported. You can switch your redundancy type later, and switch to GRS when support for it arrives in Azure.
+> Geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS) aren't supported.
#### Enable 100 TiB capacity file shares :::row::: :::column:::
- :::image type="content" source="media/storage-files-how-to-create-large-file-share/large-file-shares-advanced-enable.png" alt-text="An image showing the Advanced tab in the Azure portal for the creation of a storage account.":::
+    :::image type="content" source="media/storage-files-how-to-create-large-file-share/large-file-shares-advanced-enable.png" alt-text="Screenshot of the Advanced tab in the Azure portal for creating a storage account.":::
:::column-end::: :::column:::
- Under the **Advanced** section of the new storage account wizard in the Azure portal, you can enable **Large file shares** support in this storage account. If this option isn't available to you, you most likely selected the wrong redundancy type. Ensure you only select LRS or ZRS for this option to become available.
+    Under the **Advanced** section of the new storage account wizard in the Azure portal, you can enable **Large file shares** support in this storage account. If this option isn't available, you most likely selected the wrong redundancy type. Ensure you select only LRS or ZRS for this option to become available.
:::column-end::: :::row-end:::
-Opting for the large, 100 TiB capacity file shares has several benefits:
+Using large file shares has several benefits:
-* Your performance is greatly increased as compared to the smaller 5 TiB-capacity file shares (for example, 10 times the IOPS).
-* Your migration will finish significantly faster.
-* You ensure that a file share will have enough capacity to hold all the data you'll migrate into it, including the storage capacity differential backups require.
+* Performance is greatly increased compared to the smaller 5 TiB file shares (for example, 10 times the IOPS).
+* Your migration will finish faster.
+* You ensure that a file share has enough capacity to hold all the data you'll migrate into it, including the storage capacity that differential backups require.
* Future growth is covered. > [!IMPORTANT]
-> Do not apply special networking to your storage account before or during your migration. The public endpoint must be accessible on source and target storage accounts. No limiting to specific IP ranges or VNETs is supported. You can change the storage account networking configurations after the migration.
+> Don't apply special networking to your storage account before or during your migration. The public endpoint must be accessible on source and target storage accounts. Limiting to specific IP ranges or virtual networks isn't supported. You can change the storage account networking configurations after the migration.
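If you script the deployment instead of using the portal wizard, a minimal sketch might look like the following. All names and the region are placeholders; it creates a standard account with the compatible LRS redundancy and large file share support enabled at creation time.

```powershell
# Minimal sketch (hypothetical names and region): create a standard,
# LRS-redundant storage account with large file share support.
New-AzStorageAccount -ResourceGroupName "StorSimpleMigration-RG" `
    -Name "migrationtarget01" `
    -Location "westus" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -EnableLargeFileShare
```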
### Azure file shares
-After your storage accounts are created, go to the **File share** section of the storage account and deploy the appropriate number of Azure file shares as per your migration plan from Phase 1. Consider adhering to the following basic settings for your new file shares in Azure.
+After creating your storage accounts, go to the **File share** section of each storage account and deploy the appropriate number of Azure file shares per your migration plan from Phase 1. Consider adhering to the following basic settings for your new file shares in Azure.
:::row::: :::column:::
After your storage accounts are created, go to the **File share** section of the
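You can also create the shares from the command line. Here's a minimal sketch with placeholder names that provisions a share with a 100 TiB (102,400 GiB) quota:

```powershell
# Minimal sketch (hypothetical names): create an Azure file share with a
# 100 TiB quota in the target storage account.
New-AzRmStorageShare -ResourceGroupName "StorSimpleMigration-RG" `
    -StorageAccountName "migrationtarget01" `
    -Name "share1" `
    -QuotaGiB 102400
```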
### StorSimple Data Manager
-The Azure resource that will hold your migration jobs is called a **StorSimple Data Manager**. Select **New resource**, and search for it. Then select **Create**.
+The Azure resource that holds your migration jobs is called a **StorSimple Data Manager**. Select **New resource**, and search for it. Then select **Create**.
-This temporary resource is used for orchestration. You deprovision it after your migration completes. It should be deployed in the same subscription, resource group, and region as your StorSimple storage account.
+This temporary resource is used for orchestration. You deprovision it after your migration completes. Make sure to deploy it in the same subscription, resource group, and region as your StorSimple storage account.
### Azure File Sync With Azure File Sync, you can add on-premises caching of the most frequently accessed files. Similar to the caching abilities of StorSimple, the Azure File Sync cloud tiering feature offers local-access latency in combination with improved control over the available cache capacity on the Windows Server instance and multi-site sync. If having an on-premises cache is your goal, then in your local network, prepare a Windows Server VM (physical servers and failover clusters are also supported) with sufficient direct-attached storage capacity. > [!IMPORTANT]
-> Don't set up Azure File Sync yet. It's best to set up Azure File Sync after the migration of your share is complete. Deploying Azure File Sync shouldn't start before Phase 4 of a migration.
+> Don't set up Azure File Sync yet. Deploying Azure File Sync shouldn't start before Phase 4 of a migration.
### Phase 2 summary
At the end of Phase 2, you'll have deployed your storage accounts and all Azure
## Phase 3: Create and run a migration job
-This section describes how to set up a migration job and carefully map the directories on a StorSimple volume that should be copied into the target Azure file share you select.
+This section describes how to set up a migration job and map the directories on a StorSimple volume that should be copied into the target Azure file share you select.
:::row::: :::column:::
To get started, go to your StorSimple Data Manager, find **Job definitions** on
There are important aspects around choosing backups that need to be migrated: -- Your migration jobs can only move backups, not live volume data. So the most recent backup is closest to the live data and should always be on the list of backups moved in a migration. When you open the Backup selection dialog, it is selected by default.-- Make sure your latest backup is recent to keep the delta to the live share as small as possible. It could be worth manually triggering and completing another volume backup before creating a migration job. A small delta to the live share will improve your migration experience. If this delta can be zero = no more changes to the StorSimple volume happened after the newest backup was taken in your list - then Phase 5: User cut-over will be drastically simplified and sped up.-- Backups must be played back into the Azure file share **from oldest to newest**. An older backup cannot be "sorted into" the list of backups on the Azure file share after a migration job has run. Therefore you must ensure that your list of backups is complete *before* you create a job. -- This list of backups in a job cannot be modified once the job is created - even if the job never ran.-- In order to select backups, the StorSimple volume you want to migrate must be online.
+* Your migration jobs can only move backups, not live volume data. So the most recent backup is closest to the live data and should always be on the list of backups moved in a migration. When you open the Backup selection dialog, it's selected by default.
+* Make sure your latest backup is recent to keep the delta to the live share as small as possible. It could be worth manually triggering and completing another volume backup before creating a migration job. A small delta to the live share improves your migration experience. If this delta can be zero, meaning that no more changes happened to the StorSimple volume after the newest backup in your list was taken, then the user cut-over will be drastically simplified and sped up.
+* Backups must be played back into the Azure file share **from oldest to newest**. An older backup can't be "sorted into" the list of backups on the Azure file share after running a migration job. Therefore you must ensure that your list of backups is complete *before* you create a job.
+* This list of backups in a job can't be modified once the job is created, even if the job never ran.
+* In order to select backups, the StorSimple volume you want to migrate must be online.
:::row::: :::column:::
There are important aspects around choosing backups that need to be migrated:
:::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-annotated.png" alt-text="An image showing that the upper half of the blade for selecting backups lists all available backups. A selected backup will be grayed-out in this list and added to a second list on the lower half of the blade. There it can also be deleted again." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-annotated.png"::: :::column-end::: :::column:::
- When the backup selection blade opens, it is separated into two lists. In the first list, all available backups are displayed. You can expand and narrow the result set by filtering for a specific time range. (see next section) </br></br>A selected backup will display as grayed-out and it is added to a second list on the lower half of the blade. The second list displays all the backups selected for migration. A backup selected in error can also be removed again.
+    When the backup selection blade opens, it's separated into two lists. The first list displays all available backups. You can expand and narrow the result set by filtering for a specific time range (see the next section). </br></br>A selected backup displays as grayed out and is added to a second list on the lower half of the blade. The second list displays all the backups selected for migration. A backup selected in error can also be removed again.
> [!CAUTION]
- > You must select **all** backups you wish to migrate. You cannot add older backups later on. You cannot modify the job to change your selection once the job is created.
+ > You must select **all** backups you wish to migrate. You can't add older backups later. You can't modify the job to change your selection once the job is created.
:::column-end::: :::row-end::: :::row:::
There are important aspects around choosing backups that need to be migrated:
:::row-end::: > [!CAUTION]
-> Selecting more than 50 StorSimple volume backups is not supported. Jobs with a large number of backups may fail. Make sure your backup retention policies don't delete a selected backup before it got a chance to be migrated!
+> Selecting more than 50 StorSimple volume backups isn't supported. Jobs with a large number of backups might fail. Make sure your backup retention policies don't delete a selected backup before it has a chance to be migrated!
### Directory mapping
A mapping is expressed from left to right: [\source path] \> [\target path].
|**\|** or RETURN (new line) | Separator of two folder-mapping instructions. </br>Alternatively, you can omit this character and select **Enter** to get the next mapping expression on its own line. | ### Examples+ Moves the content of folder *User data* to the root of the target file share: ``` console \User data > \
Sorts multiple source locations into a new directory structure:
Invalid target path overlap example:</br> *\\folder > \\*</br> *\\folder2 > \\*</br>
-* Source folders that don't exist will be ignored.
-* Folder structures that don't exist on the target will be created.
+* Source folders that don't exist are ignored.
+* Folder structures that don't exist on the target are created.
* Like Windows, folder names are case-insensitive but case-preserving. > [!NOTE]
Sorts multiple source locations into a new directory structure:
### Run a migration job
-Your migration jobs are listed under *Job definitions* in the Data Manager resource you've deployed to a resource group.
-From the list of job definitions, select the job you want to run.
+Your migration jobs are listed under *Job definitions* in the Data Manager resource you've deployed to a resource group. From the list of job definitions, select the job you want to run.
In the job blade that opens, you can see your job's current status and a list of backups you've selected. The list of backups is sorted from oldest to newest and will be migrated to your Azure file share in this order.
In the job blade that opens, you can see your job's current status and a list of
:::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-never-ran-focused.png" alt-text="Screenshot of the migration job blade with a highlight around the command to start the job. It also displays the selected backups scheduled for migration." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-never-ran.png"::: :::column-end::: :::column:::
- Initially, the migration job will have the status: **Never ran**. </br>When you are ready, you can start this migration job. (Select the image for a version with higher resolution.) </br> When a backup was successfully migrated, an automatic Azure file share snapshot will be taken. The original backup date of your StorSimple backup will be placed in the *Comments* section of the Azure file share snapshot. Utilizing this field will allow you to see when the data was originally backed up as compared to the time the file share snapshot was taken.
+    Initially, the migration job has the status **Never ran**. </br>When you're ready, start the migration job. (Select the image for a higher-resolution version.) </br> When a backup is successfully migrated, an automatic Azure file share snapshot is taken. The original backup date of your StorSimple backup is placed in the *Comments* section of the Azure file share snapshot. This field lets you see when the data was originally backed up, compared to the time the file share snapshot was taken.
:::column-end::: :::row-end::: > [!CAUTION]
-> Backups must be processed from oldest to newest. Once a migration job is created, you can't change the list of selected StorSimple volume backups. Don't start the job if the list of Backups is incorrect or incomplete. Delete the job and make a new one with the correct backups selected. For each selected backup, check your retention schedules. Backups may get deleted by one or more of your retention policies before they got a chance to be migrated!
+> Backups must be processed from oldest to newest. Once a migration job is created, you can't change the list of selected StorSimple volume backups. Don't start the job if the list of backups is incorrect or incomplete. Delete the job and make a new one with the correct backups selected. For each selected backup, check your retention schedules. Backups might get deleted by one or more of your retention policies before they have a chance to be migrated!
### Per-item errors
-The migration jobs have two columns in the list of backups that list any issues that may have occurred during the copy:
+The migration jobs have two columns in the list of backups that list any issues that might have occurred during the copy:
-* Copy errors </br>This column lists files or folders that should have been copied but weren't. These errors are often recoverable. When a backup lists item issues in this column, review the copy logs. If you need to migrate these files, select **Retry backup**. This option will become available once the backup finished processing. The [Managing a migration job](#manage-a-migration-job) section explains your options in more detail.
-* Unsupported files </br>This column lists files or folders that can't be migrated. Azure Storage has limitations in file names, path lengths, and file types that currently or logically can't be stored in an Azure file share. A migration job won't pause for these kinds of errors. Retrying migration of the backup won't change the result. When a backup lists item issues in this column, review the copy logs and take note. If such issues arise in your last backup and you found in the copy log that the failure was due to a file name, path length or other issue you have influence over, you may want to remedy the issue in the live StorSImple volume, take a StorSimple volume backup and create a new migration job with just that backup. You will then migrate this remedied namespace and it will become the most recent / live version of the Azure file share. This is a manual and time consuming process. Review the copy logs carefully and evaluate if it's worth it.
+* Copy errors </br>This column lists files or folders that should have been copied but weren't. These errors are often recoverable. When a backup lists item issues in this column, review the copy logs. If you need to migrate these files, select **Retry backup**. This option becomes available once the backup finishes processing. The [Managing a migration job](#manage-a-migration-job) section explains your options in more detail.
+* Unsupported files </br>This column lists files or folders that can't be migrated. Azure Storage has limitations in file names, path lengths, and file types that currently or logically can't be stored in an Azure file share. A migration job won't pause for these kinds of errors. Retrying migration of the backup won't change the result. When a backup lists item issues in this column, review the copy logs and take note. If such issues arise in your last backup and the copy log shows that the failure was due to a file name, path length, or other issue you have influence over, you might want to remedy the issue in the live StorSimple volume, take a StorSimple volume backup, and create a new migration job with just that backup. You can then migrate this remedied namespace and it will become the most recent / live version of the Azure file share. This is a manual and time-consuming process. Review the copy logs carefully and evaluate whether it's worth it.
-These copy logs are *\*.csv* files listing namespace items succeeded and items that failed to get copied. The errors are further split into the previously discussed categories.
-From the log file location, you can find logs for failed files by searching for "failed". The result should be a set of logs for files that failed to copy. Sort these logs by size. There may be extra logs produced at 17 bytes in size. They are empty and can be ignored. With a sort, you can focus on the logs with content.
+These copy logs are *\*.csv* files listing the namespace items that succeeded and the items that failed to copy. The errors are further split into the previously discussed categories. From the log file location, you can find logs for failed files by searching for "failed". The result should be a set of logs for files that failed to copy. Sort these logs by size. There might be extra logs produced at 17 bytes in size. They're empty and can be ignored. With a sort, you can focus on the logs with content.
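A small PowerShell sketch of this triage, assuming the copy logs have been downloaded to a local folder (the path is a placeholder):

```powershell
# Minimal sketch (hypothetical log folder): list the copy logs for failed
# items, largest first, skipping the empty 17-byte files.
Get-ChildItem -Path "C:\MigrationLogs" -Filter "*failed*.csv" -Recurse |
    Where-Object { $_.Length -gt 17 } |
    Sort-Object Length -Descending |
    Select-Object FullName, Length
```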
The same process applies for log files recording successful copies. ### Manage a migration job Migration jobs have the following states:
-* **Never ran** </br>A new job, that has been defined but never ran before.
+
+* **Never ran** </br>A new job that has been defined but never run.
* **Waiting** </br>A job in this state is waiting for resources to be provisioned in the migration service. It will automatically switch to a different state when ready.
-* **Failed** </br>A failed job hit a fatal error that prevents it from processing more backups. A job is not expected to enter this state. A support request is the best course of action.
-* **Canceled** / **Canceling**</br>Either and entire migration job or individual backups within the job can be canceled. Canceled backups won't be processed, a canceled migration job will stop processing more backups. Expect that canceling a job will take a long time. This doesn't prevent you from creating a new job. The best course of action is patience to let a job fully arrive in the **Canceled** state. You can either ignore failed / canceled jobs or delete them at a later time. You won't have to delete jobs before you can delete the Data Manager resource at the end of your StorSimple migration.
+* **Failed** </br>A failed job hit a fatal error that prevents it from processing more backups. A job isn't expected to enter this state. A support request is the best course of action.
+* **Canceled** / **Canceling**</br>Either an entire migration job or individual backups within the job can be canceled. Canceled backups won't be processed, and a canceled migration job stops processing more backups. Expect that canceling a job will take a long time. This doesn't prevent you from creating a new job. The best course of action is to let a job fully arrive in the **Canceled** state. You can either ignore failed / canceled jobs or delete them later. You won't have to delete jobs before you can delete the Data Manager resource at the end of your StorSimple migration.
:::row:::
Migration jobs have the following states:
:::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-running-focused.png" alt-text="Screenshot of the migration job blade with a large status icon on the top in the running state." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-running.png"::: :::column-end::: :::column:::
- **Running** </br></br>A running job is currently processing a backup. Refer to the table on the bottom half of the blade to see which backup is currently being processed and which ones might have been migrated already. </br>Already migrated backups have a column with a link to a copy log. If there are any errors reported for a backup, you should review its copy log.
+ **Running** </br></br>A running job is currently processing a backup. Refer to the table on the bottom half of the blade to see which backup is currently being processed and which ones might have been migrated already. </br>Already migrated backups have a column with a link to a copy log. If a backup reports any errors, you should review its copy log.
:::column-end::: :::row-end::: :::row:::
Migration jobs have the following states:
:::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-paused-focused.png" alt-text="Screenshot of the migration job blade with a large status icon on the top in the paused state." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-paused.png"::: :::column-end::: :::column:::
- **Paused** </br></br>A migration job is paused when there is a decision needed. This condition enables two command buttons on the top of the blade: </br>Choose **Retry backup** when the backup shows files that were supposed to move but didn't (*Copy error* column). </br>Choose **Skip backup** when the backup is missing (was deleted by policy since you created the migration job) or when the backup is corrupt. You can find detailed error information in the blade that opens when you click on the failed backup. </br></br>When you *skip* or *retry* the current backup, the migration service will create a new snapshot in your target Azure file share. You may want to delete the previous one later, it is likely incomplete.
+    **Paused** </br></br>A migration job is paused when a decision is needed. This condition enables two command buttons on the top of the blade: </br>Choose **Retry backup** when the backup shows files that were supposed to move but didn't (*Copy error* column). </br>Choose **Skip backup** when the backup is missing (was deleted by policy since you created the migration job) or when the backup is corrupt. You can find detailed error information in the blade that opens when you select the failed backup. </br></br>When you *skip* or *retry* the current backup, the migration service will create a new snapshot in your target Azure file share. You might want to delete the previous one later, as it's likely incomplete.
:::column-end::: :::row-end::: :::row:::
Migration jobs have the following states:
#### Run jobs in parallel
-You will likely have multiple StorSimple volumes, each with their own shares that need to be migrated to an Azure file share. It's important that you understand how much you can do in parallel. There are limitations that aren't enforced in the user experience and will either degrade or inhibit a complete migration if jobs are executed at the same time.
+You will likely have multiple StorSimple volumes, each with its own shares that must be migrated to an Azure file share. It's important that you understand how much you can do in parallel. There are limitations that aren't enforced in the user experience and will either degrade or inhibit a complete migration if jobs are executed at the same time.
There are no limits in defining migration jobs. You can define the same StorSimple source volume, the same Azure file share, across the same or different StorSimple appliances. However, running them has limitations: * Only one migration job with the same StorSimple source volume can run at the same time. * Only one migration job with the same target Azure file share can run at the same time.
-* Before starting the next job, you ensured that any of the previous jobs are in the `copy stage` and show progress of moving files for at least 30 Minutes.
-* You can run up to four migration jobs in parallel per StorSimple device manager, as long as you also abide by the previous rules.
+* Before starting the next job, ensure that any of the previous jobs are in the `copy stage` and show progress of moving files for at least 30 minutes.
+* You can run up to four migration jobs in parallel per StorSimple device manager, as long as you abide by the previous rules.
-When you attempt to start a migration job, the previous rules are checked. If there are jobs running, you may not be able to start the current job. You'll receive an alert that lists the name of currently running job(s) that must finish before you can start the new job.
+When you attempt to start a migration job, the previous rules are checked. If there are jobs running, you might not be able to start a new job. You'll receive an alert that lists the names of the currently running jobs that must finish before you can start the new job.
> [!TIP]
-> It's a good idea to regularly check your migration jobs in the *Job definition* tab of your *Data Manager* resource, to see if any of them have paused and need your input to complete.
+> It's a good idea to regularly check your migration jobs in the *Job definition* tab of your *Data Manager* resource to see if any of them have paused and need your input to complete.
### Phase 3 summary
At the end of Phase 3, you'll have run at least one of your migration jobs from
There are two main strategies for accessing your Azure file shares: * **Azure File Sync**: [Deploy Azure File Sync](#deploy-azure-file-sync) to an on-premises Windows Server instance. Azure File Sync has all the advantages of a local cache, just like StorSimple.
-* **Direct-share-access**: [Deploy direct-share-access](#deploy-direct-share-access). Use this strategy if your access scenario for a given Azure file share won't benefit from local caching, or you no longer have an ability to host an on-premises Windows Server instance. Here, your users and apps will continue to access SMB shares over the SMB protocol. These shares are no longer on an on-premises server but directly in the cloud.
+* **Direct-share-access**: [Deploy direct-share-access](#deploy-direct-share-access). Use this strategy if your access scenario for a given Azure file share won't benefit from local caching, or if you no longer have the ability to host an on-premises Windows Server instance. Here, your users and apps will continue to access SMB shares over the SMB protocol. These shares are no longer on an on-premises server but directly in the cloud.
You should have already decided which option is best for you in [Phase 1](#phase-1-prepare-for-migration) of this guide.
The remainder of this section focuses on deployment instructions.
- Deploying Azure File Sync - Deploy the Azure File Sync cloud resource - Deploy an on-premises Windows Server instance
- - Preparing the Windows Server instance for file sync
+ - Preparing the Windows Server instance for Azure File Sync
- Configuring Azure File Sync on the Windows Server instance - Monitoring initial sync - Testing Azure File Sync
It's time to deploy a part of Azure File Sync.
1. Deploy the Azure File Sync agent on your on-premises server. 1. Register the server with the cloud resource.
-Don't create any sync groups yet. Setting up sync with an Azure file share should only occur after your migration jobs to an Azure file share have completed. If you started using Azure File Sync before your migration completed, it would make your migration unnecessarily difficult since you couldn't easily tell when it was time to initiate a cut-over.
+Don't create any sync groups yet. Setting up sync with an Azure file share should only occur after your migration jobs to an Azure file share have completed. If you start using Azure File Sync before your migration completes, it will make your migration unnecessarily difficult because you won't be able to easily tell when it's time to initiate a cut-over.
#### Deploy the Azure File Sync cloud resource
Your registered on-premises Windows Server instance must be ready and connected
[!INCLUDE [storage-files-migration-configure-sync](../../../includes/storage-files-migration-configure-sync.md)] > [!IMPORTANT]
-> Be sure to turn on cloud tiering. Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the cloud, yet have the full namespace available. Locally interesting data is also cached locally for fast, local access performance. Another reason to turn on cloud tiering at this step is that we don't want to sync file content at this stage. Only the namespace should be moving at this time.
+> Be sure to turn on cloud tiering. Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the cloud, yet have the full namespace available. Frequently accessed data is also cached locally for fast performance. Another reason to turn on cloud tiering at this step is that we don't want to sync file content at this stage. Only the namespace should be moving at this time.
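When you reach that step, creating the server endpoint with cloud tiering turned on might look like this sketch. Every name and path is a placeholder, and it assumes the Az.StorageSync PowerShell module is installed:

```powershell
# Minimal sketch (hypothetical names, Az.StorageSync module assumed):
# create a server endpoint with cloud tiering enabled so that initially
# only the namespace, not file content, is kept on the server.
$server = Get-AzStorageSyncServer -ResourceGroupName "StorSimpleMigration-RG" `
    -StorageSyncServiceName "MigrationSyncService"

New-AzStorageSyncServerEndpoint -ResourceGroupName "StorSimpleMigration-RG" `
    -StorageSyncServiceName "MigrationSyncService" `
    -SyncGroupName "SyncGroup1" `
    -Name "Server1-D-Share1" `
    -ServerResourceId $server.ResourceId `
    -ServerLocalPath "D:\Share1" `
    -CloudTiering `
    -VolumeFreeSpacePercent 20
```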
### Deploy direct-share-access
Your registered on-premises Windows Server instance must be ready and connected
:::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
- The video references dedicated documentation for some topics:
+ The video references dedicated documentation for the following topics. Note that Azure Active Directory is now Microsoft Entra ID. For more info, see [New name for Azure AD](https://aka.ms/azureadnewname).
* [Identity overview](storage-files-active-directory-overview.md) * [How to domain join a storage account](storage-files-identity-auth-active-directory-enable.md)
At the end of this phase, you've created and run multiple migration jobs in your
## Phase 5: User cut-over
-This phase is all about wrapping up your migration:
+In this phase, you'll complete your migration:
* Plan your downtime. * Catch up with any changes your users and apps produced on the StorSimple side while the migration jobs in Phase 3 were running.
This phase is all about wrapping up your migration:
This migration approach requires some downtime for your users and apps. The goal is to keep downtime to a minimum. The following considerations can help: * Keep your StorSimple volumes available while running your migration jobs.
-* When you've finished running your data migration jobs for a share, it's time to remove user access (at least write access) from the StorSimple volumes or shares. A final RoboCopy will catch up your Azure file share. Then you can cut over your users. Where you run RoboCopy depends on whether you chose to use Azure File Sync or direct-share-access. The upcoming section on RoboCopy covers that subject.
+* When you've finished running your data migration jobs for a share, it's time to remove user access (at least write access) from the StorSimple volumes or shares. A final RoboCopy will catch up your Azure file share. Then you can cut over your users. Where you run RoboCopy depends on whether you chose to use Azure File Sync or direct-share-access. The upcoming section covers that subject.
* After you've completed the RoboCopy catch-up, you're ready to expose the new location to your users by either the Azure file share directly or an SMB share on a Windows Server instance with Azure File Sync. Often a DFS-N deployment will help accomplish a cut-over quickly and efficiently. It will keep your existing share addresses consistent and repoint to a new location that contains your migrated files and folders.
-For archival data, it is a fully viable approach to take downtime on your StorSimple volume (or subfolder), take one more StorSimple volume backup, migrate and then open up the migration destination for access by users and apps. This will spare you the need for a catch-up RoboCopy as described in this section. However, this approach comes at the cost of a prolonged downtime window that might stretch to several days or longer depending on the number of files and backups you need to migrate. This is likely only an option for archival workloads that can do without write access for prolonged periods of time.
+For archival data, it's a fully viable approach to take downtime on your StorSimple volume (or subfolder), take one more StorSimple volume backup, migrate, and then open up the migration destination for access by users and apps. This will spare you the need for a catch-up RoboCopy. However, this approach comes at the cost of a prolonged downtime window that might stretch to several days or longer depending on the number of files and backups you need to migrate. This is likely only an option for archival workloads that can do without write access for prolonged periods of time.
### Determine when your namespace has fully synced to your server
-When you use Azure File Sync for an Azure file share, it's important that you determine your entire namespace has finished downloading to the server *before* you start any local RoboCopy. The time it takes to download your namespace depends on the number of items in your Azure file share. There are two methods for determining whether your namespace has fully arrived on the server.
+When you use Azure File Sync for an Azure file share, it's important to determine that your entire namespace has finished downloading to the server *before* you start any local RoboCopy. The time it takes to download your namespace depends on the number of items in your Azure file share. There are two methods for determining whether your namespace has fully arrived on the server.
#### Azure portal You can use the Azure portal to see when your namespace has fully arrived. * Sign in to the Azure portal, and go to your sync group. Check the sync status of your sync group and server endpoint.
-* The interesting direction is download. If the server endpoint is newly provisioned, it will show **Initial sync**, which indicates the namespace is still coming down.
-After that state changes to anything but **Initial sync**, your namespace will be fully populated on the server. You can now proceed with a local RoboCopy.
+* The direction that matters here is download. If the server endpoint is newly provisioned, it shows **Initial sync**, which indicates the namespace is still coming down. After that state changes to anything but **Initial sync**, your namespace is fully populated on the server.
+
+You can now proceed with a local RoboCopy.
#### Windows Server Event Viewer
At this point, there are differences between your on-premises Windows Server ins
1. You need to catch up with the changes that users or apps produced on the StorSimple side while the migration was ongoing. 1. For cases where you use Azure File Sync: The StorSimple appliance has a populated cache versus the Windows Server instance with just a namespace and no file content stored locally at this time. The final RoboCopy can help jump-start your local Azure File Sync cache by pulling over as much locally cached file content as is available and can fit on the Azure File Sync server.
-1. Some files might have been left behind by the migration job because of invalid characters. If so, copy them to the Azure File Sync-enabled Windows Server instance. Later on, you can adjust them so that they will sync. If you don't use Azure File Sync for a particular share, you're better off renaming the files with invalid characters on the StorSimple volume. Then run the RoboCopy directly against the Azure file share.
+1. Some files might have been left behind by the migration job because of invalid characters. If so, copy them to the Azure File Sync-enabled Windows Server instance. Later, you can adjust them so that they will sync. If you don't use Azure File Sync for a particular share, you're better off renaming the files with invalid characters on the StorSimple volume. Then run the RoboCopy directly against the Azure file share.
> [!WARNING]
-> Robocopy in Windows Server 2019 currently experiences an issue that will cause files tiered by Azure File Sync on the target server to be recopied from the source and re-uploaded to Azure when using the /MIR function of robocopy. It is imperative that you use Robocopy on a Windows Server other than 2019. A preferred choice is Windows Server 2016. This note will be updated should the issue be resolved via Windows Update.
+> Robocopy in Windows Server 2019 experienced an issue that caused files tiered by Azure File Sync on the target server to be recopied from the source and re-uploaded to Azure when using the `/MIR` function. We recommend running Robocopy on a Windows Server other than 2019, such as Windows Server 2016.
> [!WARNING] > You *must not* start the RoboCopy before the server has the namespace for an Azure file share downloaded fully. For more information, see [Determine when your namespace has fully downloaded to your server](#determine-when-your-namespace-has-fully-synced-to-your-server).
RoboCopy has several parameters. The following example showcases a finished comm
When you configure source and target locations of the RoboCopy command, make sure you review the structure of the source and target to ensure they match. If you used the directory-mapping feature of the migration job, your root-directory structure might be different from the structure of your StorSimple volume. If that's the case, you might need multiple RoboCopy jobs, one for each subdirectory. If you're unsure whether the command will perform as expected, you can use the */L* parameter, which simulates the command without actually making any changes.
-This RoboCopy command uses /MIR, so it won't move files that are the same (tiered files, for instance). But if you get the source and target path wrong, /MIR also purges directory structures on your Windows Server instance or Azure file share that aren't present on the StorSimple source path. They must match exactly for the RoboCopy job to reach its intended goal of updating your migrated content with the latest changes made while the migration is ongoing.
+This RoboCopy command uses `/MIR`, so it won't move files that are the same (tiered files, for instance). But if you get the source and target path wrong, `/MIR` also purges directory structures on your Windows Server instance or Azure file share that aren't present on the StorSimple source path. They must match exactly for the RoboCopy job to reach its intended goal of updating your migrated content with the latest changes made while the migration is ongoing.
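Before the real run, you can simulate the command. Here's a minimal sketch with placeholder paths; */L* lists what */MIR* would copy and, importantly, what it would delete, without making changes:

```powershell
# Minimal sketch (hypothetical paths): dry-run the mirror first.
# /MIR mirrors source to target (including deletions), /L only lists the
# planned actions, /LOG writes the results to a log file for review.
robocopy "\\storsimple\Share1" "D:\Share1" /MIR /L /LOG:"C:\MigrationLogs\robocopy-dryrun.log"
```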
Consult the RoboCopy log file to see if files have been left behind. If issues exist, fix them, and rerun the RoboCopy command. Don't deprovision any StorSimple resources before you fix outstanding issues for files or folders you care about.
When using the StorSimple Data Manager migration service, either an entire migra
|||-| |**Backup** |*Could not find a backup for the parameters specified* |The backup selected for the job run is not found at the time of "Estimation" or "Copy". Ensure that the backup is still present in the StorSimple backup catalog. Sometimes automatic backup retention policies delete backups between selecting them for migration and actually running the migration job for this backup. Consider disabling any backup retention schedules before starting a migration. | |**Estimation </br> Configure compute** |*Installation of encryption keys failed* |Your *Service Data Encryption Key* is incorrect. Review the [encryption key section in this article](#storsimple-service-data-encryption-key) for more details and help retrieving the correct key. |
-| |*Batch error* |It is possible that starting up all the internal infrastructure required to perform a migration runs into an issue. Multiple other services are involved in this process. These problems generally resolve themselves when you attempt to run the job again. |
+| |*Batch error* |It's possible that starting up all the internal infrastructure required to perform a migration runs into an issue. Multiple other services are involved in this process. These problems generally resolve themselves when you attempt to run the job again. |
| |*StorSimple Manager encountered an internal error. Wait for a few minutes and then try the operation again. If the issue persists, contact Microsoft Support. (Error code: 1074161829)* |This generic error has multiple causes, but one known cause is that the StorSimple device manager reached the limit of 50 appliances. Check if the most recently run jobs in the device manager have suddenly started to fail with this error, which would suggest this is the problem. The mitigation for this particular issue is to remove any offline StorSimple 8001 appliances created and used by the Data Manager Service. You can file a support ticket or delete them manually in the portal. Make sure to only delete offline 8001 series appliances. | |**Estimating Files** |*Clone volume job failed* |This error most likely indicates that you specified a backup that was somehow corrupted. The migration service can't mount or read it. You can try mounting the backup manually or open a support ticket. | | |*Cannot proceed as volume is in non-NTFS format* |Only NTFS volumes without deduplication enabled can be used by the migration service. If you have a differently formatted volume, like ReFS or a third-party format, the migration service won't be able to migrate it. See the [Known limitations](#known-limitations) section. |
When using the StorSimple Data Manager migration service, either an entire migra
| |*Timed out* |The estimation phase failing with a timeout is typically an issue with either the StorSimple appliance, or the source Volume Backup being slow and sometimes even corrupt. If re-running the backup doesn't work, then filing a support ticket is your best course of action. |
| |*Could not find file &lt;path&gt; </br>Could not find a part of the path* |The job definition allows you to provide a source sub-path. This error is shown when that path does not exist. For instance: *\Share1 > \Share\Share1* </br> In this example you've specified *\Share1* as a sub-path in the source, mapping to another sub-path in the target. However, the source path does not exist (was misspelled?). Note: Windows is case preserving but not case sensitive. That means specifying *\Share1* and *\share1* is equivalent. Also: Target paths that don't exist will be automatically created. |
| |*This request is not authorized to perform this operation* |This error shows when the source StorSimple storage account or the target storage account with the Azure file share has a firewall setting enabled. You must allow traffic over the public endpoint and not restrict it with further firewall rules. Otherwise the Data Transformation Service will be unable to access either storage account, even if you authorized it. Disable any firewall rules and re-run the job. |
-|**Copying Files** |*The account being accessed does not support HTTP* |This is an Azure Files bug that is being fixed. The temporary mitigation is to disable internet routing on the target storage account or use the Microsoft routing endpoint. |
-| |*The specified share is full* |If the target is a premium Azure file share, ensure you have provisioned sufficient capacity for the share. Temporary over-provisioning is a common practice. If the target is a standard Azure file share, check that the target share has the "large file share" feature enabled. Standard storage is growing as you use the share. However, if you use a legacy storage account as a target, you might encounter a 5 TiB share limit. You will have to manually enable the ["Large file share"](storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account) feature. Fix the limits on the target and re-run the job. |
+|**Copying Files** |*The account being accessed does not support HTTP* |Disable internet routing on the target storage account or use the Microsoft routing endpoint. |
+| |*The specified share is full* |If the target is a premium Azure file share, ensure that you've provisioned sufficient capacity for the share. Temporary over-provisioning is a common practice. If the target is a standard Azure file share, check that the target share has the "large file share" feature enabled. Standard storage is growing as you use the share. However, if you use a legacy storage account as a target, you might encounter a 5 TiB share limit. You will have to manually enable the ["Large file share"](storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account) feature. Fix the limits on the target and re-run the job. |
### Item level errors
During the copy phase of a migration job run, individual namespace items (files
| |*Not a valid Win32 FileTime. Parameter name: fileTime* |In this case, the file can be accessed but can't be evaluated for copy because a timestamp the migration engine depends on is either corrupted or was written by an application in an incorrect format. There is not much you can do, because you can't change the timestamp in the backup. If retaining this file is important, you can manually copy the latest version (from the last backup containing this file), fix the timestamp, and then move it to the target Azure file share. This option doesn't scale very well but is an option for high-value files where you want to have at least one version retained in your target. |
| |*-2146232798 </br>Safe handle has been closed* |Often a transient error. Rerun the job if there are too many failures. If there are only a few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
| |*-2147024413 </br>Fatal device hardware error* |This is a rare error and not actually reported for a physical device, but rather the 8001 series virtualized appliances used by the migration service. The appliance ran into an issue. Files with this error won't stop the migration from proceeding to the next backup. That makes it hard for you to perform a manual copy or retry the backup that contains files with this error. If the files left behind are very important or there is a large number of files, you may need to start the migration of all backups again. Open a support ticket for further investigation. |
-|**Delete </br>(Mirror purging)** |*The specified directory is not empty.* |This error occurs when the migration mode is set to *mirror* and the process that removes items from the Azure file share ran into an issue that prevented it from deleting items. Deletion happens only in the live share, not from previous snapshots. The deletion is necessary because the affected files are not in the current backup and thus must be removed from the live share before the next snapshot. There are two options: Option 1: mount the target Azure file share and delete the files with this error manually. Option 2: you can ignore these errors and continue processing the next backup with an expectation that the target is not identical to source and has some extra items that weren't in the original StorSimple backup. |
-| |*Bad request* |This error indicates that the source file has certain characteristics that could not be copied to the Azure file share. Most notably there could be invisible control characters in a file name or 1 byte of a double byte character in the file name or file path. You can use the copy logs to get path names, copy the files to a temporary location, rename the paths to remove the unsupported characters, and then robocopy again to the Azure file share. You can then resume the migration by skipping to the next backup to be processed. |
--
+|**Delete </br>(Mirror purging)** |*The specified directory is not empty.* |This error occurs when the migration mode is set to *mirror* and the process that removes items from the Azure file share ran into an issue that prevented it from deleting items. Deletion happens only in the live share, not from previous snapshots. The deletion is necessary because the affected files are not in the current backup and thus must be removed from the live share before the next snapshot. There are two options: Option 1: mount the target Azure file share and delete the files with this error manually. Option 2: you can ignore these errors and continue processing the next backup with an expectation that the target isn't identical to source and has some extra items that weren't in the original StorSimple backup. |
+| |*Bad request* |This error indicates that the source file has certain characteristics that couldn't be copied to the Azure file share. Most notably there could be invisible control characters in a file name or 1 byte of a double byte character in the file name or file path. You can use the copy logs to get path names, copy the files to a temporary location, rename the paths to remove the unsupported characters, and then robocopy again to the Azure file share. You can then resume the migration by skipping to the next backup to be processed. |
## Next steps
-* Get more familiar with [Azure File Sync: aka.ms/AFS](../file-sync/file-sync-planning.md).
* Understand the flexibility of [cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md) policies.
* [Enable Azure Backup](../../backup/backup-afs.md#configure-backup-from-the-file-share-pane) on your Azure file shares to schedule snapshots and define backup retention schedules.
* If you see in the Azure portal that some files are permanently not syncing, review the [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json) for steps to resolve these issues.
update-manager Guidance Migration Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md
description: Patching guidance overview for Microsoft Configuration Manager to A
Previously updated : 09/18/2023 Last updated : 04/03/2024
-# Guidance on migrating Azure VMs from Microsoft Configuration Manager to Azure Update Manager
+# Guidance on migrating virtual machines from Microsoft Configuration Manager to Azure Update Manager
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article provides a guide to start using Azure Update Manager (for update management) for Azure virtual machines that are currently using Microsoft Configuration Manager (MCM).
+This article provides a guide to start using Azure Update Manager (for update management) for virtual machines that are currently using Microsoft Configuration Manager (MCM).
-Microsoft Configuration Manager (MCM), previously known as System Center Configuration Manager (SCCM), helps you to manage PCs and servers, keep software up to date, set configuration and security policies, and monitor system status.
+Before initiating migration, you need to understand mapping between System Center components and equivalent services in Azure.
-MCM supports several [cloud services](/mem/configmgr/core/understand/use-cloud-services) that can supplement on-premises infrastructure and can help solve business problems such as:
-- How to manage clients that roam onto the internet.-- How to provide content resources to isolated clients or resources on the intranet, outside your firewall.-- How to scale out infrastructure when the physical hardware isn't available or isn't logically placed to support your needs.-
-Customers [extend and migrate an on-premises site to Azure](/mem/configmgr/core/support/azure-migration-tool) and create Azure virtual machines (VMs) for Configuration Manager and install the various site roles with default settings. The validation of new roles and removal of the on-premises site system role enables MCM to provide all the on-premises capabilities and experiences in Azure. For more information, see [Configuration Manager on Azure FAQ](/mem/configmgr/core/understand/configuration-manager-on-azure).
+| **System Center Component** | **Azure equivalent service** |
+| | |
+| System Center Operations Manager (SCOM) | Azure Monitor SCOM Managed Instance |
+| System Center Configuration Manager (SCCM), now called Microsoft Configuration Manager (MCM) | Azure Update Manager, </br> Change Tracking and Inventory, </br> Guest Config, </br> Azure Automation, </br> Desired State Configuration (DSC), </br> Azure Security Center |
+| System Center Virtual Machine Manager (SCVMM) | Arc enabled System Center VMM |
+| System Center Data Protection Manager (SCDPM) | Arc enabled DPM |
+| System Center Orchestrator (SCORCH) | Azure Automation |
+| System Center Service Manager (SCSM) | - |
+> [!NOTE]
+> As part of your migration journey, we recommend the following options:
+> 1. Fully migrate your virtual machines to Azure and replace System Center with Azure native services.
+> 1. Take a hybrid approach, where both Azure and on-premises virtual machines are managed using Azure native services and System Center is replaced. For on-premises virtual machines, the capabilities of the Azure platform are extended via Azure Arc.
## Migrate to Azure Update Manager
+MCM helps you to manage PCs and servers, keep software up to date, set configuration and security policies, and monitor system status. MCM offers [multiple features and capabilities](/mem/configmgr/core/plan-design/changes/features-and-capabilities), and software [update management](/mem/configmgr/sum/understand/software-updates-introduction) is one of them.
-MCM offers [multiple features and capabilities](/mem/configmgr/core/plan-design/changes/features-and-capabilities) and software [update management](/mem/configmgr/sum/understand/software-updates-introduction) is one of these.By using MCM in Azure, you can continue with the existing investments in MCM and processes to manage update cycle for Windows VMs.
-
-**Specifically for update management or patching**, as per your requirements, you can also use the native [Azure Update Manager](overview.md) to manage and govern update compliance for Windows and Linux machines across your deployments in a consistent manner. Unlike MCM that needs maintaining Azure virtual machines for hosting the different Configuration Manager roles. Azure Update Manager is designed as a standalone Azure service to provide SaaS experience on Azure to manage hybrid environments. You don't need license to use Azure Update Manager.
+Specifically for update management or patching, as per your requirements, you can use the native [Azure Update Manager](overview.md) to manage and govern update compliance for Windows and Linux machines across your deployments in a consistent manner. Unlike MCM, which requires you to maintain Azure virtual machines to host the different Configuration Manager roles, Azure Update Manager is designed as a standalone Azure service that provides a SaaS experience on Azure to manage hybrid environments. You don't need a license to use Azure Update Manager.
> [!NOTE]
-> Azure Update Manager does not provide migration support for Azure VMs in MCM. For example, configurations.
+> - To manage clients/devices, Intune is the recommended Microsoft solution.
+> - Azure Update Manager does not provide migration support for Azure VMs in MCM. For example, configurations.
## Software update management capability map
The following table maps the **software update management capabilities** of MCM
Synchronize software updates between sites (Central Admin site, Primary, Secondary sites) | The top site (either central admin site or stand-alone primary site) connects to Microsoft Update to retrieve software updates. [Learn more](/mem/configmgr/sum/understand/software-updates-introduction). After the top sites are synchronized, the child sites are synchronized. | There's no hierarchy of machines in Azure and therefore all machines connected to Azure receive updates from the source repository.
Synchronize software updates/check for updates (retrieve patch metadata) | You can scan for updates periodically by setting configuration on the Software update point. [Learn more](/mem/configmgr/sum/get-started/synchronize-software-updates#to-schedule-software-updates-synchronization) | You can enable periodic assessment to enable scan of patches every 24 hours. [Learn more](assessment-options.md)|
Configuring classifications/products to synchronize/scan/assess | You can choose the update classifications (security or critical updates) to synchronize/scan/assess. [Learn more](/mem/configmgr/sum/get-started/configure-classifications-and-products) | There's no such capability here. The entire software metadata is scanned. |
-Deploy software updates (install patches) | Provides three modes of deploying updates: </br> Manual deployment </br> Automatic deployment </br> Phased deployment [Learn more](/mem/configmgr/sum/deploy-use/deploy-software-updates) | Manual deployment is mapped to deploy [one-time updates](deploy-updates.md) and Automatic deployment is mapped to [scheduled updates](scheduled-patching.md) (The [Automatic Deployment Rules (ADRs)](/mem/configmgr/sum/deploy-use/automatically-deploy-software-updates#BKMK_CreateAutomaticDeploymentRule)) can be mapped to schedules. There's no phased deployment option.
+Deploy software updates (install patches) | Provides three modes of deploying updates: </br> Manual deployment </br> Automatic deployment </br> Phased deployment [Learn more](/mem/configmgr/sum/deploy-use/deploy-software-updates) | - Manual deployment is mapped to deploy [one-time updates](deploy-updates.md) </br> - Automatic deployment is mapped to scheduled updates </br> - There's no phased deployment option.
+| Deploy software updates on Windows and Linux machines (in Azure, on-premises, or other clouds) | SCCM helps manage tracking and applying software updates to Windows machines. (Linux machines aren't currently supported.) | Azure Update Manager supports software updates on both Windows and Linux machines. |
++
+## Guidance to use Azure Update Manager on MCM managed machines
+
+As a first step in an MCM user's journey toward Azure Update Manager, you need to enable Azure Update Manager on your existing MCM-managed servers (that is, ensure that Azure Update Manager and MCM coexist). The following sections address a few challenges that you might encounter in this first step.
+
+### Overview of current MCM setup
+
+A WSUS server is typically configured as part of the initial MCM setup, because the MCM client uses the WSUS server to scan for first-party updates. Third-party update content is published to this WSUS server as well. Azure Update Manager can scan and install updates from WSUS, and we recommend using the WSUS server configured as part of the MCM setup to make Azure Update Manager work alongside MCM.
+
+### First party updates
+
+For Azure Update Manager to scan and install first-party updates (Windows and Microsoft updates), you should start approving the required updates in the configured WSUS server. You do this by [configuring an auto approval rule in WSUS](/windows-server/administration/windows-server-update-services/deploy/3-approve-and-deploy-updates-in-wsus#32-configure-auto-approval-rules), similar to the rules you might already have configured on the MCM server. A scripted sketch follows.
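As an illustration only, the same kind of approval could be scripted with the WSUS cmdlets on the WSUS server; this is a sketch, and the target group name is a placeholder:

```powershell
# Sketch: approve all currently unapproved security updates for a target group.
# Assumes the UpdateServices PowerShell module available on the WSUS server.
Get-WsusUpdate -Approval Unapproved -Classification Security |
    Approve-WsusUpdate -Action Install -TargetGroupName "All Computers"
```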
+
+
+### Third party updates
+
+Third-party updates should work as expected with Azure Update Manager, provided you've already configured MCM for third-party patching and it can successfully patch third-party updates. Ensure that you continue to publish third-party updates to WSUS from MCM ([Step 3 in Enable third-party updates](/mem/configmgr/sum/deploy-use/third-party-software-updates#publish-and-deploy-third-party-software-updates)). After you publish to WSUS, Azure Update Manager can detect and install these updates from the WSUS server.
## Manage software updates using Azure Update Manager
For the third party software patching, Azure Update Manager should be connected
### Do I need to configure WSUS to use Azure Update Manager?

WSUS is a way to manage patches. Azure Update Manager refers to whichever endpoint it's pointed to (Windows Update, Microsoft Update, or WSUS).
+
+### Should I deploy the monthly patch through MCM?
+
+No. Approving patches in WSUS each month, or configuring Automatic Deployment Rules (ADRs) to approve them, is enough for patches to be scanned and installed on your servers.
+
+### How can Azure Update Manager be used to manage on-premises virtual machines?
+
+Azure Update Manager can be used on-premises by using Azure Arc. Azure Arc is a bridge that extends the Azure platform to help you build applications and services with the flexibility to run across datacenters, at the edge, and in multicloud environments. Azure Arc VM management lets you provision and manage Windows and Linux VMs hosted on-premises. This feature enables IT admins to manage Arc VMs by using Azure management tools, including Azure portal, Azure CLI, Azure PowerShell, and Azure Resource Manager (ARM) templates.
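For example, a single on-premises machine can be connected with the Az.ConnectedMachine PowerShell module; this is a sketch run on the machine itself, and the resource group, machine name, and region are placeholders:

```powershell
# Sketch: onboard the local machine to Azure Arc so Azure Update Manager can manage it.
Connect-AzConnectedMachine -ResourceGroupName "rg-arc-servers" -Name "onprem-vm01" -Location "eastus"
```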
++
## Next steps

- [An overview on Azure Update Manager](overview.md)
- [Check update compliance](view-updates.md)
- [Deploy updates now (on-demand) for single machine](deploy-updates.md)
- [Schedule recurring updates](scheduled-patching.md)
+- [An overview of Azure Arc-enabled servers](../azure-arc/servers/overview.md)
+
update-manager Periodic Assessment At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/periodic-assessment-at-scale.md
Previously updated : 02/27/2024 Last updated : 04/03/2024
This article describes how to enable Periodic Assessment for your machines at sc
You can monitor the compliance of resources under **Compliance** and remediation status under **Remediation** on the Azure Policy home page.

> [!NOTE]
-> Currently, Periodic assessment policies don't support specialized, migrated, and restored images. However, they work for both marketplace and generalized gallery images. If you are facing failures during remediation see, [remediation failures for gallery images](troubleshoot.md#policy-remediation-tasks-are-failing-for-gallery-images-and-for-images-with-encrypted-disks) for more information.
--
+> - Periodic assessment policies work for all supported image types. If you're facing failures during remediation, see [remediation failures for gallery images](troubleshoot.md#policy-remediation-tasks-are-failing-for-gallery-images-and-for-images-with-encrypted-disks) for more information.
+> - Run a remediation task post create [for issues with auto remediation of specialized, migrated and restored images during create](troubleshoot.md#periodic-assessment-isnt-getting-set-correctly-when-the-periodic-assessment-policy-is-used-during-create-for-specialized-migrated-and-restored-vms).
## Enable Periodic Assessment for your Azure Arc-enabled machines by using Azure Policy
update-manager Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md
Title: Troubleshoot known issues with Azure Update Manager description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Previously updated : 02/27/2024 Last updated : 04/03/2024
To review the logs related to all actions performed by the extension, on Windows
+## Periodic assessment isn't getting set correctly when the periodic assessment policy is used during create for specialized, migrated, and restored VMs
+
+### Cause
+Periodic assessment isn't getting set correctly during create for specialized, migrated, and restored VMs because of the way the current modify policy is designed. Post-creation, the policy will show these resources as non-compliant on the compliance dashboard.
+
+### Resolution
+
+Run a remediation task post-create to remediate newly created resources. For more information, see [Remediate non-compliant resources with Azure Policy](../governance/policy/how-to/remediate-resources.md). A sketch of starting such a task follows.
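As a sketch with the Az.PolicyInsights module (the remediation name and assignment ID are placeholders):

```powershell
# Sketch: start a remediation task for the periodic assessment policy assignment.
Start-AzPolicyRemediation -Name "remediate-periodic-assessment" `
    -PolicyAssignmentId "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>"
```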
++ ## Policy remediation tasks are failing for gallery images and for images with encrypted disks ### Issue
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 03/18/2024 Last updated : 04/03/2024
A rollout may take several weeks before the agent is available in all environmen
| Release | Latest version | |--|--|
-| Production | 1.0.8297.800 |
+| Production | 1.0.8431.2300 |
| Validation | 1.0.8431.1500 |

> [!TIP]
> The Azure Virtual Desktop Agent is automatically installed when adding session hosts in most scenarios. If you need to install the agent manually, you can download it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
+## Version 1.0.8431.2300
+
+*Published: April 2024*
+
+In this update, we made the following changes:
+
+- Fixed an issue with App Attach diagnostics that caused the agent to always report timeout exceptions. Now the agent only reports timeout exceptions to diagnostics when app attach registration is unsuccessful.
+
+- General improvements and bug fixes.
+ ## Version 1.0.8431.1500 (validation) *Published: March 2024*
virtual-machines Concepts Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/concepts-restore-points.md
For Azure VM Linux VMs, restore points support the list of Linux [distributions
## Other limitations

- Restore points are supported only for managed disks.
-- Ultra-disks, Ephemeral OS disks, and Shared disks aren't supported.
+- Ephemeral OS disks and shared disks aren't supported in either consistency mode.
- Restore points APIs require an API of version 2021-03-01 or later for application consistency.
- Restore points APIs require an API of version 2021-03-01 or later for crash consistency. (in preview)
- A maximum of 500 VM restore points can be retained at any time for a VM, irrespective of the number of restore point collections.
virtual-machines How To Enable Write Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/how-to-enable-write-accelerator.md
There are limits of Azure Premium Storage VHDs per VM that can be supported by W
| M32ms, M32ls, M32ts, M32s, M32dms_v2, M32ms_v2 | 4 | 5000 |
| M16ms, M16s | 2 | 2500 |
| M8ms, M8s | 1 | 1250 |
+| Standard_M12s_v3, Standard_M12ds_v3 | 1 | 5000 |
+| Standard_M24s_v3, Standard_M24ds_v3 | 2 | 5000 |
+| Standard_M48s_1_v3, Standard_M48ds_1_v3 | 4 | 5000 |
+| Standard_M96s_1_v3, Standard_M96ds_1_v3, Standard_M96s_2_v3, Standard_M96ds_2_v3 | 8 | 10000 |
+| Standard_M176s_3_v3, Standard_M176ds_3_v3, Standard_M176s_4_v3, Standard_M176ds_4_v3 | 16 | 20000 |
The IOPS limits are per VM and *not* per disk. All Write Accelerator disks share the same IOPS limit per VM. Attached disks can't exceed the Write Accelerator IOPS limit for a VM. For example, even though the attached disks can do 30,000 IOPS, the system doesn't allow the disks to go above 20,000 IOPS for M416ms_v2.
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
This scope is integrated with [Update Manager](../update-center/overview.md), wh
- The value of **Repeat** should be at least 6 hours.
- The start time for a schedule should be at least 10 minutes after the schedule's creation time.
->[!IMPORTANT]
-> The minimum maintenance window has been increased from 1 hour 10 minutes to 1 hour 30 minutes, while the minimum repeat value has been set to 6 hours for new schedules. **Please note that your existing schedules will not get impacted; however, we strongly recommend updating existing schedules to include these new changes.**
+>[!NOTE]
+> 1. The minimum maintenance window has been increased from 1 hour 10 minutes to 1 hour 30 minutes, and the minimum repeat value has been set to 6 hours for new schedules. **Your existing schedules aren't impacted; however, we strongly recommend updating them to reflect these new values.**
+> 2. The combined length of the resource group name and the maintenance configuration name should be less than 128 characters.
In rare cases, if the platform catch-up host update window happens to coincide with the guest (VM) patching window, and the guest patching window doesn't get sufficient time to run after the host update, the system shows the **Schedule timeout, waiting for an ongoing update to complete the resource** error, because the platform allows only a single update at a time. To learn more about this topic, check out [Update Manager and scheduled patching](../update-center/scheduled-patching.md)
-> [!NOTE]
-> 1. The count of characters of Resource Group name along with Maintenance Configuration name should be less than 128 characters
-> 2. If you move a VM to a different resource group or subscription, the scheduled patching for the VM stops working as this scenario is currently unsupported by the system. You can delete the older association of the moved VM and create the new association to include the moved VMs in a maintenance configuration.
+> [!IMPORTANT]
+> If you move a resource to a different resource group or subscription, scheduled patching for the resource stops working, because this scenario is currently unsupported by the system. The team is working to provide this capability, but in the meantime, as a workaround for a resource you want to move (in static scope):
+> 1. Remove the resource's assignment.
+> 2. Move the resource to the different resource group or subscription.
+> 3. Recreate the resource's assignment.
+> In the dynamic scope, the steps are similar, but after removing the assignment in step 1, you need to initiate or wait for the next scheduled run. This action prompts the system to completely remove the assignment, enabling you to proceed with steps 2 and 3.
+> If you miss any of these steps, you can reassign the resource to its original assignment and then repeat the steps sequentially. A sketch of the static-scope workaround follows this note.
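A sketch of the static-scope workaround with the Az.Maintenance PowerShell module; every name below is a placeholder:

```powershell
# 1. Remove the existing assignment before the move.
Remove-AzConfigurationAssignment -ResourceGroupName "rg-old" -ResourceName "vm01" `
    -ResourceType "virtualMachines" -ProviderName "Microsoft.Compute" `
    -ConfigurationAssignmentName "patch-schedule-assignment"

# 2. Move the VM to the new resource group or subscription (portal, CLI, or Move-AzResource).

# 3. Recreate the assignment against the moved resource.
$config = Get-AzMaintenanceConfiguration -ResourceGroupName "rg-maint" -Name "patch-schedule"
New-AzConfigurationAssignment -ResourceGroupName "rg-new" -ResourceName "vm01" `
    -ResourceType "virtualMachines" -ProviderName "Microsoft.Compute" `
    -ConfigurationAssignmentName "patch-schedule-assignment" -Location "eastus" `
    -MaintenanceConfigurationId $config.Id
```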
## Shut Down Machines
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
Due to increased GPU memory I/O footprint, the NC A100 v4 requires the use of [G
-| Size | vCPU | Memory (GiB) | Temp Disk (GiB) | NVMe Disks | GPU | GPU Memory (GiB) | Max data disks | Max uncached disk throughput (IOPS / MBps) | Max NICs/network bandwidth (MBps) |
+| Size | vCPU | Memory (GiB) | Temp Disk<sup>1</sup> (GiB) | NVMe Disks<sup>2</sup> | GPU<sup>3</sup> | GPU Memory (GiB) | Max data disks | Max uncached disk throughput (IOPS / MBps) | Max NICs/network bandwidth (MBps) |
|---|---|---|---|---|---|---|---|---|---|
| Standard_NC24ads_A100_v4 | 24 | 220 | 64 | 960 GB | 1 | 80 | 8 | 30000/1000 | 2/20,000 |
| Standard_NC48ads_A100_v4 | 48 | 440 | 128 | 2x960 GB | 2 | 160 | 16 | 60000/2000 | 4/40,000 |
| Standard_NC96ads_A100_v4 | 96 | 880 | 256 | 4x960 GB | 4 | 320 | 32 | 120000/4000 | 8/80,000 |
-1 GPU = one A100 card <br>
-1. Local NVMe disk is coming as RAM and it needs to be manually formatted in newly deployed VM.
+<sup>1</sup> NC A100 v4 series VMs have a standard SCSI-based temp resource disk for OS paging/swap file use. This ensures the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data on it is lost when you stop or deallocate the VM.
-> [!NOTE]
-> Local NVMe disks are ephemeral, and any data stored on these disks will be lost if the VM is stopped or deallocated.
+<sup>2</sup> Local NVMe disks are ephemeral; data on these disks is lost if you stop or deallocate your VM. The local NVMe disks are presented raw and need to be manually formatted in a newly deployed VM (see the sketch after these notes).
+
+<sup>3</sup> 1 GPU = one A100 80 GB PCIe GPU card <br>
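One possible way to format the raw NVMe disks on a newly deployed VM is a sketch like the following, using the built-in Storage module:

```powershell
# Sketch: initialize and format every raw local NVMe disk on a new VM.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
```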
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Troubleshoot Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/troubleshoot-maintenance-configurations.md
Due to a previous bug, the system patch operation couldn't perform validation, a
## Open Issues
+#### Schedule Patching stops working after the resource is moved
+
+If you move a resource to a different resource group or subscription, scheduled patching for the resource stops working, because this scenario is currently unsupported by the system. The team is working to provide this capability, but in the meantime, as a workaround for a resource you want to move (in static scope):
+1. Remove the resource's assignment.
+2. Move the resource to the different resource group or subscription.
+3. Recreate the resource's assignment.
+In the dynamic scope, the steps are similar, but after removing the assignment in step 1, you need to initiate or wait for the next scheduled run. This action prompts the system to completely remove the assignment, enabling you to proceed with steps 2 and 3.
+
+If you miss any of these steps, you can reassign the resource to its original assignment and then repeat the steps sequentially.
+
#### Schedule didn't trigger

If a resource has two maintenance configurations with the same trigger time and an install patch configuration, and both are assigned to the same VM/resource, only one policy triggers. This is a known bug, and it's rarely observed. To mitigate this issue, adjust the start time of the maintenance configuration.
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
description: Name resolution scenarios for Azure IaaS, hybrid solutions, between
Previously updated : 04/27/2023 Last updated : 04/02/2024
If you provide your own DNS solution, it needs to:
Suppose you need to perform name resolution from your web app built by using App Service, linked to a virtual network, to VMs in the same virtual network. In addition to setting up a custom DNS server that has a DNS forwarder that forwards queries to Azure (virtual IP 168.63.129.16), perform the following steps:
-1. Enable virtual network integration for your web app, if not done already, as described in [Integrate your app with a virtual network](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-
-1. In the Azure portal, for the App Service plan hosting the web app, select **Sync Network** under **Networking**, **Virtual Network Integration**.
-
- ![Screenshot of virtual network name resolution](./media/virtual-networks-name-resolution-for-vms-and-role-instances/webapps-dns.png)
+Enable virtual network integration for your web app, if not done already, as described in [Integrate your app with a virtual network](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
If you need to perform name resolution from your vnet-linked web app (built by using App Service) to VMs in a different vnet that is **not linked** to the same private zone, use custom DNS servers or [Azure DNS Private Resolvers](../dns/dns-private-resolver-overview.md) on both vnets.
To use custom DNS servers:
* Enable virtual network integration for your web app to link to the source virtual network, following the instructions in [Integrate your app with a virtual network](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-* In the Azure portal, for the App Service plan hosting the web app, select **Sync Network** under **Networking**, **Virtual Network Integration**.
- To use an Azure DNS Private Resolver, see [Ruleset links](../dns/private-resolver-endpoints-rulesets.md#ruleset-links). ## Specify DNS servers
vpn-gateway About Zone Redundant Vnet Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-zone-redundant-vnet-gateways.md
These SKUs are available in Azure regions that have Azure availability zones. Fo
Coexistence of both VPN and ExpressRoute gateways in the same virtual network is supported. However, you should reserve a /27 IP address range for the gateway subnet.
+### Which configuration, zone-redundant or zonal, is recommended to achieve the highest availability for the virtual network gateway infrastructure?
+
+Zone-redundant. With this configuration, the virtual network gateway instances are spread across Azure availability zones, removing a single Azure availability zone as a single point of failure.
+
Zonal deployments should be configured only if the target application is highly latency-sensitive and requires all Azure resources to be deployed to the same availability zone.
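For illustration, zone redundancy is selected through an AZ gateway SKU at creation time. This sketch assumes a prebuilt gateway IP configuration in `$ipConfig` that references a Standard public IP; all names are placeholders:

```powershell
# Sketch: create a zone-redundant VPN gateway by choosing an AZ SKU.
# The Standard public IP behind $ipConfig is zone-redundant unless pinned to one zone.
New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" `
    -Location "eastus" -IpConfigurations $ipConfig `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku "VpnGw2AZ"
```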
+ ## Next steps [Create a zone-redundant virtual network gateway](create-zone-redundant-vnet-gateway.md)
vpn-gateway Monitor Vpn Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/monitor-vpn-gateway-reference.md
Metrics in Azure Monitor are numerical values that describe some aspect of a sys
| **BGP Peer Status** | Count | 5 minutes | Average BGP connectivity status per peer and per instance. |
| **BGP Routes Advertised** | Count | 5 minutes | Number of routes advertised per peer and per instance. |
| **BGP Routes Learned** | Count | 5 minutes | Number of routes learned per peer and per instance. |
-| **Gateway Inbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows flowing into a VPN Gateway. Limit is 250k flows. |
-| **Gateway Outbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows flowing out of a VPN Gateway. Limit is 250k flows. |
+| **Gateway Inbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows (protocol, local IP address, remote IP address, local port, and remote port) flowing into a VPN Gateway. Limit is 250k flows. |
+| **Gateway Outbound Flows** | Count | 5 minutes | Number of distinct 5-tuple flows (protocol, local IP address, remote IP address, local port, and remote port) flowing out of a VPN Gateway. Limit is 250k flows. |
| **Gateway P2S Bandwidth** | Bytes/s | 1 minute | Average combined bandwidth utilization of all point-to-site connections on the gateway. |
| **Gateway S2S Bandwidth** | Bytes/s | 5 minutes | Average combined bandwidth utilization of all site-to-site connections on the gateway. |
| **P2S Connection Count** | Count | 1 minute | Count of point-to-site connections on the gateway. |
Metrics in Azure Monitor are numerical values that describe some aspect of a sys
| **Tunnel MMSA Count** | Count | 5 minutes | Number of main mode security associations present. |
| **Tunnel Peak PPS** | Count | 5 minutes | Max number of packets per second per tunnel. |
| **Tunnel QMSA Count** | Count | 5 minutes | Number of quick mode security associations present. |
-| **Tunnel Total Flow Count** | Count | 5 minutes | Number of distinct 3-tuple flows created per tunnel. |
+| **Tunnel Total Flow Count** | Count | 5 minutes | Number of distinct 3-tuple flows (protocol, local IP address, remote IP address) created per tunnel. |
| **User Vpn Route Count** | Count | 5 minutes | Number of user VPN routes configured on the VPN Gateway. |
| **VNet Address Prefix Count** | Count | 5 minutes | Number of virtual network address prefixes that are used/advertised by the gateway. |
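As an example of reading one of these metrics with Azure PowerShell; the resource ID is a placeholder, and the metric name string `TunnelAverageBandwidth` is an assumption for the per-tunnel bandwidth metric:

```powershell
# Sketch: read a per-tunnel gateway metric at a 5-minute grain.
Get-AzMetric -ResourceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworkGateways/<gateway>" `
    -MetricName "TunnelAverageBandwidth" -TimeGrain 00:05:00
```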
vpn-gateway Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/packet-capture.md
To complete a packet capture, you need to provide a valid SAS (or Shared Access
1. The packet capture (pcap) file will be stored in the specified account.
+> [!NOTE]
+> Avoid the use of Azure-generated containers, such as `$logs`. Containers that start with `$` are typically internal containers, and only the service that creates them should use them. For instance, `$logs` is used by the Azure Storage service to write logs related to the storage account.
## Packet capture - PowerShell

The following examples show PowerShell commands that start and stop packet captures. For more information on parameter options, see [Start-AzVirtualnetworkGatewayPacketCapture](/powershell/module/az.network/start-azvirtualnetworkgatewaypacketcapture).
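A sketch of such a start/stop pair (gateway name, resource group, and the SAS URL are placeholders):

```powershell
# Sketch: start, then stop, a packet capture on a VPN gateway.
Start-AzVirtualNetworkGatewayPacketCapture -ResourceGroupName "TestRG1" -Name "VNet1GW"

Stop-AzVirtualNetworkGatewayPacketCapture -ResourceGroupName "TestRG1" -Name "VNet1GW" `
    -SasUrl "<sas-url-for-the-storage-container>"
```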
vpn-gateway Vpn Gateway Troubleshoot Vpn Point To Site Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md
description: Learn to troubleshoot and solve common point-to-site connection pro
Previously updated : 02/13/2023 Last updated : 04/03/2024 # Troubleshooting: Azure point-to-site connection problems This article lists common point-to-site connection problems that you might experience. It also discusses possible causes and solutions for these problems.
-## VPN client error: A certificate could not be found
+## VPN client error: A certificate couldn't be found
### Symptom
To resolve this problem, follow these steps:
1. Open Certificate
-2. Make sure that the following certificates are in the correct location:
+1. Make sure that the following certificates are in the correct location:
| Certificate | Location |
| - | - |
For more information about how to install the client certificate, see [Generate
> [!NOTE]
> When you import the client certificate, do not select the **Enable strong private key protection** option.
-## The network connection between your computer and the VPN server could not be established because the remote server is not responding
+## The network connection between your computer and the VPN server couldn't be established because the remote server isn't responding
### Symptom
-When you try and connect to an Azure virtual network gateway using IKEv2 on Windows, you get the following error message:
+When you try to connect to an Azure virtual network gateway using IKEv2 on Windows, you get the following error message:
**The network connection between your computer and the VPN server could not be established because the remote server is not responding**

### Cause
-
- The problem occurs if the version of Windows does not have support for IKE fragmentation
-
+
+The problem occurs if the version of Windows doesn't have support for IKE fragmentation.
+ ### Solution
-IKEv2 is supported on Windows 10 and Server 2016. However, in order to use IKEv2, you must install updates and set a registry key value locally. OS versions prior to Windows 10 are not supported and can only use SSTP.
+IKEv2 is supported on Windows 10 and Server 2016. However, in order to use IKEv2, you must install updates and set a registry key value locally. OS versions prior to Windows 10 aren't supported and can only use SSTP.
-To prepare Windows 10 , or Server 2016 for IKEv2:
+To prepare Windows 10, or Server 2016 for IKEv2:
1. Install the update.
To prepare Windows 10 , or Server 2016 for IKEv2:
| Windows 10 Version 1703 | January 17, 2018 | [KB4057144](https://support.microsoft.com/help/4057144/windows-10-update-kb4057144) |
| Windows 10 Version 1709 | March 22, 2018 | [KB4089848](https://www.catalog.update.microsoft.com/search.aspx?q=kb4089848) |
-
-2. Set the registry key value. Create or set `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\ IKEv2\DisableCertReqPayload` REG_DWORD key in the registry to 1.
+1. Set the registry key value. Create or set the `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\IKEv2\DisableCertReqPayload` REG_DWORD key in the registry to 1.
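For example, from an elevated PowerShell session (a sketch; back up the registry before making changes):

```powershell
# Sketch: create the IKEv2 key if needed and set DisableCertReqPayload to 1.
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\RasMan\IKEv2"
New-Item -Path $path -Force | Out-Null
Set-ItemProperty -Path $path -Name "DisableCertReqPayload" -Value 1 -Type DWord
```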
## VPN client error: The message received was unexpected or badly formatted
When you try to connect to an Azure virtual network by using the VPN client, you
This problem occurs if one of the following conditions is true:

- The use of user-defined routes (UDR) with a default route on the Gateway Subnet is set incorrectly.
-- The root certificate public key is not uploaded into the Azure VPN gateway.
+- The root certificate public key isn't uploaded into the Azure VPN gateway.
- The key is corrupted or expired.

### Solution
This problem occurs if one of the following conditions is true:
To resolve this problem, follow these steps: 1. Remove UDR on the Gateway Subnet. Make sure UDR forwards all traffic properly.
-2. Check the status of the root certificate in the Azure portal to see whether it was revoked. If it is not revoked, try to delete the root certificate and reupload. For more information, see [Create certificates](vpn-gateway-howto-point-to-site-classic-azure-portal.md#generatecerts).
+1. Check the status of the root certificate in the Azure portal to see whether it was revoked. If it isn't revoked, try to delete the root certificate and reupload. For more information, see [Create certificates](vpn-gateway-howto-point-to-site-classic-azure-portal.md#generatecerts).
## VPN client error: A certificate chain processed but terminated
When you try to connect to an Azure virtual network by using the VPN client, you
| Azuregateway-*GUID*.cloudapp.net | Current User\Trusted Root Certification Authorities|
| AzureGateway-*GUID*.cloudapp.net, AzureRoot.cer | Local Computer\Trusted Root Certification Authorities|
-2. If the certificates are already in the location, try to delete the certificates and reinstall them. The **azuregateway-*GUID*.cloudapp.net** certificate is in the VPN client configuration package that you downloaded from the Azure portal. You can use file archivers to extract the files from the package.
+1. If the certificates are already in the location, try to delete the certificates and reinstall them. The **azuregateway-*GUID*.cloudapp.net** certificate is in the VPN client configuration package that you downloaded from the Azure portal. You can use file archivers to extract the files from the package.
-## File download error: Target URI is not specified
+## File download error: Target URI isn't specified
### Symptom
When you try to connect to an Azure virtual network by using the VPN client, you
### Cause
-This problem might occur if you are trying to open the site-to-point VPN connection by using a shortcut.
+This problem might occur if you're trying to open the point-to-site VPN connection by using a shortcut.
### Solution

Open the VPN package directly instead of opening it from the shortcut.
-## Cannot install the VPN client
+## Can't install the VPN client
### Cause
An additional certificate is required to trust the VPN gateway for your virtual
Extract the VPN client configuration package, and find the .cer file. To install the certificate, follow these steps: 1. Open mmc.exe.
-2. Add the **Certificates** snap-in.
-3. Select the **Computer** account for the local computer.
-4. Right-click the **Trusted Root Certification Authorities** node. Click **All-Task** > **Import**, and browse to the .cer file you extracted from the VPN client configuration package.
-5. Restart the computer.
-6. Try to install the VPN client.
+1. Add the **Certificates** snap-in.
+1. Select the **Computer** account for the local computer.
+1. Right-click the **Trusted Root Certification Authorities** node. Click **All-Task** > **Import**, and browse to the .cer file you extracted from the VPN client configuration package.
+1. Restart the computer.
+1. Try to install the VPN client.
## Azure portal error: Failed to save the VPN gateway, and the data is invalid
This problem might occur if the root certificate public key that you uploaded co
### Solution
-Make sure that the data in the certificate does not contain invalid characters, such as line breaks (carriage returns). The entire value should be one long line. The following text is a sample of the certificate:
+Make sure that the data in the certificate doesn't contain invalid characters, such as line breaks (carriage returns). The entire value should be one long line. The following text is a sample of the certificate:
```text --BEGIN CERTIFICATE--
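If the root certificate was created with PowerShell, a single-line base64 value can be produced directly; in this sketch the subject name is a placeholder for your own root certificate:

```powershell
# Sketch: emit the public certificate data as one unbroken base64 line.
$cert = Get-ChildItem "Cert:\CurrentUser\My" |
    Where-Object { $_.Subject -eq "CN=P2SRootCert" } | Select-Object -First 1
[Convert]::ToBase64String($cert.RawData)
```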
This problem occurs because the name of the certificate contains an invalid char
When you try to download the VPN client configuration package, you receive the following error message: **Failed to download the file. Error details: error 503. The server is busy.**
-
+ ### Solution This error can be caused by a temporary network problem. Try to download the VPN package again after a few minutes.
-## Azure VPN Gateway upgrade: All Point to Site clients are unable to connect
+## Azure VPN Gateway upgrade: All point-to-site clients are unable to connect
### Cause
If the certificate is more than 50 percent through its lifetime, the certificate
### Solution
-To resolve this problem, re-download and redeploy the Point to Site package on all clients.
+To resolve this problem, redownload and redeploy the point-to-site package on all clients.
## Too many VPN clients connected at once

The maximum number of allowable connections is reached. You can see the total number of connected clients in the Azure portal.
-## VPN client cannot access network file shares
+## VPN client can't access network file shares
### Symptom
-The VPN client has connected to the Azure virtual network. However, the client cannot access network shares.
+The VPN client has connected to the Azure virtual network. However, the client can't access network shares.
### Cause
-The SMB protocol is used for file share access. When the connection is initiated, the VPN client adds the session credentials and the failure occurs. After the connection is established, the client is forced to use the cache credentials for Kerberos authentication. This process initiates queries to the Key Distribution Center (a domain controller) to get a token. Because the client connects from the Internet, it might not be able to reach the domain controller. Therefore, the client cannot fail over from Kerberos to NTLM.
+The SMB protocol is used for file share access. When the connection is initiated, the VPN client adds the session credentials, and the failure occurs. After the connection is established, the client is forced to use the cache credentials for Kerberos authentication. This process initiates queries to the Key Distribution Center (a domain controller) to get a token. Because the client connects from the Internet, it might not be able to reach the domain controller. Therefore, the client can't fail over from Kerberos to NTLM.
-The only time that the client is prompted for a credential is when it has a valid certificate (with SAN=UPN) issued by the domain to which it is joined. The client also must be physically connected to the domain network. In this case, the client tries to use the certificate and reaches out to the domain controller. Then the Key Distribution Center returns a "KDC_ERR_C_PRINCIPAL_UNKNOWN" error. The client is forced to fail over to NTLM.
+The only time that the client is prompted for a credential is when it has a valid certificate (with SAN=UPN) issued by the domain to which it's joined. The client also must be physically connected to the domain network. In this case, the client tries to use the certificate and reaches out to the domain controller. Then the Key Distribution Center returns a "KDC_ERR_C_PRINCIPAL_UNKNOWN" error. The client is forced to fail over to NTLM.
### Solution
To work around the problem, disable the caching of domain credentials from the f
`HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\DisableDomainCreds - Set the value to 1` -
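For example, from an elevated PowerShell session (a sketch; back up the registry before changing `Lsa` values):

```powershell
# Sketch: disable caching of domain credentials.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
    -Name "DisableDomainCreds" -Value 1 -Type DWord
```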
-## Cannot find the point-to-site VPN connection in Windows after reinstalling the VPN client
+## Can't find the point-to-site VPN connection in Windows after reinstalling the VPN client
### Symptom
-You remove the point-to-site VPN connection and then reinstall the VPN client. In this situation, the VPN connection is not configured successfully. You do not see the VPN connection in the **Network connections** settings in Windows.
+You remove the point-to-site VPN connection and then reinstall the VPN client. In this situation, the VPN connection isn't configured successfully. You don't see the VPN connection in the **Network connections** settings in Windows.
### Solution To resolve the problem, delete the old VPN client configuration files from **C:\Users\UserName\AppData\Roaming\Microsoft\Network\Connections\<VirtualNetworkId>**, and then run the VPN client installer again.
-## Point-to-site VPN client cannot resolve the FQDN of the resources in the local domain
+## Point-to-site VPN client can't resolve the FQDN of the resources in the local domain
### Symptom
-When the client connects to Azure by using point-to-site VPN connection, it cannot resolve the FQDN of the resources in your local domain.
+When the client connects to Azure by using point-to-site VPN connection, it can't resolve the FQDN of the resources in your local domain.
### Cause
-Point-to-site VPN client normally uses Azure DNS servers that are configured in the Azure virtual network. The Azure DNS servers take precedence over the local DNS servers that are configured in the client (unless the metric of the Ethernet interface is lower), so all DNS queries are sent to the Azure DNS servers. If the Azure DNS servers do not have the records for the local resources, the query fails.
+Point-to-site VPN client normally uses Azure DNS servers that are configured in the Azure virtual network. The Azure DNS servers take precedence over the local DNS servers that are configured in the client (unless the metric of the Ethernet interface is lower), so all DNS queries are sent to the Azure DNS servers. If the Azure DNS servers don't have the records for the local resources, the query fails.
### Solution
-To resolve the problem, make sure that the Azure DNS servers that used on the Azure virtual network can resolve the DNS records for local resources. To do this, you can use DNS Forwarders or Conditional forwarders. For more information, see [Name resolution using your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
+To resolve the problem, make sure that the Azure DNS servers used on the Azure virtual network can resolve the DNS records for local resources. To do this, you can use DNS forwarders or conditional forwarders, as in the sketch that follows. For more information, see [Name resolution using your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
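For instance, on a Windows DNS server used by the virtual network, a conditional forwarder for the local domain could look like this sketch; the zone name and on-premises DNS IP are placeholders:

```powershell
# Sketch: forward queries for the on-premises domain to the local DNS servers.
# Assumes the DnsServer PowerShell module on the DNS server.
Add-DnsServerConditionalForwarderZone -Name "corp.contoso.com" -MasterServers 10.10.0.4
```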
-## The point-to-site VPN connection is established, but you still cannot connect to Azure resources
+## The point-to-site VPN connection is established, but you still can't connect to Azure resources
### Cause
-This problem may occur if VPN client does not get the routes from Azure VPN gateway.
+This problem might occur if VPN client doesn't get the routes from Azure VPN gateway.
### Solution

To resolve this problem, [reset Azure VPN gateway](./reset-gateway.md). To make sure that the new routes are being used, the Point-to-Site VPN clients must be downloaded again after virtual network peering has been successfully configured.
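A sketch of the reset with Azure PowerShell; the gateway and resource group names are placeholders:

```powershell
# Sketch: reset the VPN gateway so reconnecting clients pick up the new routes.
$gw = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $gw
```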
-## Error: "The revocation function was unable to check revocation because the revocation server was offline.(Error 0x80092013)"
+## Error: "The revocation function was unable to check revocation because the revocation server was offline. (Error 0x80092013)"
### Causes
-This error message occurs if the client cannot access http://crl3.digicert.com/ssca-sha2-g1.crl and http://crl4.digicert.com/ssca-sha2-g1.crl. The revocation check requires access to these two sites. This problem typically happens on the client that has proxy server configured. In some environments, if the requests are not going through the proxy server, it will be denied at the Edge Firewall.
+
+This error message occurs if the client can't access http://crl3.digicert.com/ssca-sha2-g1.crl and http://crl4.digicert.com/ssca-sha2-g1.crl. The revocation check requires access to these two sites. This problem typically happens on a client that has a proxy server configured. In some environments, if the requests aren't going through the proxy server, they're denied at the edge firewall.
### Solution
Make sure that RADIUS server is configured correctly. For More information, see
### Cause
-Root certificate had not been installed. The root certificate is installed in the client's **Trusted certificates** store.
+Root certificate hasn't been installed. The root certificate is installed in the client's **Trusted certificates** store.
## VPN Client Error: The remote connection was not made because the attempted VPN tunnels failed. (Error 800)
The NIC driver is outdated.
Update the NIC driver:

1. Click **Start**, type **Device Manager**, and select it from the list of results. If you're prompted for an administrator password or confirmation, type the password or provide confirmation.
-2. In the **Network adapters** categories, find the NIC that you want to update.
-3. Double-click the device name, select **Update driver**, select **Search automatically for updated driver software**.
-4. If Windows doesn't find a new driver, you can try looking for one on the device manufacturer's website and follow their instructions.
-5. Restart the computer and try the connection again.
+1. In the **Network adapters** categories, find the NIC that you want to update.
+1. Double-click the device name, select **Update driver**, select **Search automatically for updated driver software**.
+1. If Windows doesn't find a new driver, you can try looking for one on the device manufacturer's website and follow their instructions.
+1. Restart the computer and try the connection again.
## VPN Client Error: Dialing VPN connection \<VPN Connection Name\>, Status = VPN Platform did not trigger connection
-You may also see the following error in Event Viewer from RasClient: "The user \<User\> dialed a connection named \<VPN Connection Name\> which has failed. The error code returned on failure is 1460."
+You might also see the following error in Event Viewer from RasClient: "The user \<User\> dialed a connection named \<VPN Connection Name\> which has failed. The error code returned on failure is 1460."
### Cause
-The Azure VPN Client does not have the "Background apps" App Permission enabled in App Settings for Windows.
+The Azure VPN Client doesn't have the "Background apps" App Permission enabled in App Settings for Windows.
### Solution
1. In Windows, go to **Settings** > **Privacy** > **Background apps**.
-2. Toggle the "Let apps run in the background" to On
+1. Toggle **Let apps run in the background** to **On**.
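If you need to verify the toggle without opening the Settings UI, the global state is commonly reflected in the registry on recent Windows 10 builds. This mapping is an assumption, not a documented contract, so treat it as a hint only:

```powershell
# Assumed mapping: GlobalUserDisabled = 1 blocks background apps; 0 or absent allows them.
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\BackgroundAccessApplications'
(Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).GlobalUserDisabled
```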
## Error: 'File download error Target URI is not specified'
The Azure VPN gateway type must be VPN and the VPN type must be **RouteBased**.
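You can check both properties from Azure PowerShell; the gateway and resource group names below are placeholders:

```powershell
# VpnType must report RouteBased for point-to-site to work.
Get-AzVirtualNetworkGateway -Name 'VNet1GW' -ResourceGroupName 'TestRG1' |
    Select-Object Name, GatewayType, VpnType
```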
### Cause
-This problem can be caused by the previous VPN client installations.
+This problem can be caused by previous VPN client installations.
### Solution
-Delete the old VPN client configuration files from **C:\Users\UserName\AppData\Roaming\Microsoft\Network\Connections\<VirtualNetworkId>** and run the VPN client installer again.
+Delete the old VPN client configuration files from **C:\Users\UserName\AppData\Roaming\Microsoft\Network\Connections\<VirtualNetworkId>** and run the VPN client installer again.
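A sketch of the cleanup in PowerShell; `$env:APPDATA` resolves to `C:\Users\UserName\AppData\Roaming`, and the virtual network ID placeholder must be replaced with your own value:

```powershell
# Replace the placeholder with your virtual network ID before running.
$vnetId = '<VirtualNetworkId>'
Remove-Item -Path "$env:APPDATA\Microsoft\Network\Connections\$vnetId" -Recurse -Force
```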
-## The VPN client hibernates or sleep after some time
+## I can't resolve records in Private DNS Zones using Private Resolver from point-to-site clients
+
+### Symptom
+
+When you're using the Azure-provided DNS server (168.63.129.16) on the virtual network, point-to-site clients can't resolve records in private DNS zones (including private endpoints).
+
+### Cause
+
+The Azure DNS server IP address (168.63.129.16) is reachable only from within the Azure platform.
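To confirm the behavior from a connected client, you can query the Private Resolver inbound endpoint directly. Both the record name and the inbound endpoint IP below are placeholders for values from your environment:

```powershell
# Query the Private Resolver inbound endpoint directly; this succeeds only if the
# endpoint is reachable from the point-to-site client.
Resolve-DnsName -Name 'myserver.privatelink.database.windows.net' -Server '10.0.0.4'
```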
### Solution
-Check the sleep and hibernate settings in the computer that the VPN client is running on.
+The following steps help you resolve records from private DNS zones:
+
+Configuring the [Private Resolver](https://github.com/MicrosoftDocs/azure-docs-pr/blob/ef411d08c2f3ba57c8b5495e5ad39067921ef4b9/azure/dns/dns-private-resolver-overview) inbound IP address as a custom DNS server on the virtual network helps you resolve records in private DNS zones (including those created from private endpoints). Note that the private DNS zones must be associated with the virtual network that has the Private Resolver.
+
+By default, DNS servers that are configured on a virtual network are pushed to point-to-site clients that connect through the VPN gateway. Configuring the Private Resolver inbound IP address as a custom DNS server on the virtual network therefore automatically pushes that IP address to clients as the VPN DNS server, and you can seamlessly resolve records from private DNS zones (including private endpoints).
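A minimal Azure PowerShell sketch of that configuration, assuming placeholder names and an inbound endpoint at 10.0.0.4:

```powershell
# Point the virtual network's custom DNS at the Private Resolver inbound endpoint.
$vnet = Get-AzVirtualNetwork -Name 'VNet1' -ResourceGroupName 'TestRG1'
$vnet.DhcpOptions.DnsServers = @('10.0.0.4')   # inbound endpoint IP (placeholder)
Set-AzVirtualNetwork -VirtualNetwork $vnet
```

Point-to-site clients pick up the new DNS server on their next connection, so reconnect after making the change.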
web-application-firewall Protect Api Hosted Apim By Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/protect-api-hosted-apim-by-waf.md
Azure WAF detection mode is used for testing and validating the policy. Detectio
## Restrict APIM access through the Azure Front Door only
Requests routed through the Front Door include headers specific to your Front Door configuration. You can configure the
-[API Management policy reference](../../api-management/api-management-policies.md#access-restriction-policies) as an inbound APIM policy to filter incoming requests based on the unique value of the X-Azure-FDID HTTP request header that is sent to API Management. This header value is the Azure Front Door ID, which is available on the AFD Overview page.
+[check-header policy](../../api-management/api-management-policies.md#authentication-and-authorization) as an inbound APIM policy to filter incoming requests based on the unique value of the X-Azure-FDID HTTP request header that is sent to API Management. This header value is the Azure Front Door ID, which is available on the AFD Overview page.
1. Copy the Front Door ID from the AFD overview page.
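As an illustration, the policy can be applied from Azure PowerShell by submitting the policy XML. The resource group, service name, and Front Door ID GUID below are placeholders, and this sketch sets the policy at the global scope:

```powershell
# check-header rejects any request whose X-Azure-FDID doesn't match your Front Door ID.
$policy = @"
<policies>
  <inbound>
    <base />
    <check-header name="X-Azure-FDID" failed-check-httpcode="403" failed-check-error-message="Access denied." ignore-case="false">
      <value>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</value>
    </check-header>
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
"@
$ctx = New-AzApiManagementContext -ResourceGroupName 'TestRG1' -ServiceName 'myapim'
Set-AzApiManagementPolicy -Context $ctx -Policy $policy
```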