Updates from: 07/23/2024 01:11:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
workspace("AD-B2C-TENANT1").AuditLogs
## Change the data retention period
-Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. By default, logs are retained for 30 days, but retention duration can be increased to up to two years. Learn how to [manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/cost-logs.md). After you select the pricing tier, you can [Change the data retention period](../azure-monitor/logs/data-retention-archive.md).
+Azure Monitor Logs are designed to scale and support collecting, indexing, and storing massive amounts of data per day from any source in your enterprise or deployed in Azure. By default, logs are retained for 30 days, but retention duration can be increased to up to two years. Learn how to [manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/cost-logs.md). After you select the pricing tier, you can [Change the data retention period](../azure-monitor/logs/data-retention-configure.md).
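If you manage the workspace from the command line, retention can also be adjusted with the Azure CLI. The following is a minimal sketch, assuming a Log Analytics workspace named `AD-B2C-TENANT1-workspace` in a resource group named `azure-monitor-rg` (both placeholder names):

```azurecli
# Sketch: raise the workspace retention period to 90 days.
# The resource group and workspace names are placeholders for your own values.
az monitor log-analytics workspace update \
  --resource-group azure-monitor-rg \
  --workspace-name AD-B2C-TENANT1-workspace \
  --retention-time 90
```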
## Disable monitoring data collection
ai-studio Create Azure Ai Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-azure-ai-resource.md
To grant users permissions:
Hub networking settings can be set during resource creation or changed in the **Networking** tab in the Azure portal view. Creating a new hub provisions a built-in Managed Virtual Network, which streamlines and automates your network isolation configuration. The Managed Virtual Network settings are applied to all projects created within a hub.
-At hub creation, select between the networking isolation modes: **Public**, **Private with Internet Outbound**, and **Private with Approved Outbound**. To secure your resource, select either **Private with Internet Outbound** or Private with Approved Outbound for your networking needs. For the private isolation modes, a private endpoint should be created for inbound access. For more information on network isolation, see [Managed virtual network isolation](configure-managed-network.md). To create a secure hub, see [Create a secure hub](create-secure-ai-hub.md).
+At hub creation, select between the networking isolation modes: **Public**, **Private with Internet Outbound**, and **Private with Approved Outbound**. To secure your resource, select either **Private with Internet Outbound** or **Private with Approved Outbound** for your networking needs. For the private isolation modes, a private endpoint should be created for inbound access. For more information on network isolation, see [Managed virtual network isolation](configure-managed-network.md). To create a secure hub, see [Create a secure hub](create-secure-ai-hub.md).
At hub creation in the Azure portal, the associated Azure AI services, Storage account, Key vault, Application Insights, and Container registry resources are also created. These resources are listed on the Resources tab during creation.
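For a scripted setup, the hub and its isolation mode can also be created with the `az ml` extension. The following is a hedged sketch, assuming the `--kind hub` and `--managed-network` parameters are available in your installed extension version; the resource names and location are placeholders:

```azurecli
# Sketch: create a hub whose managed virtual network allows internet outbound traffic.
az ml workspace create \
  --kind hub \
  --name my-ai-hub \
  --resource-group my-resource-group \
  --location westus2 \
  --managed-network allow_internet_outbound
```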
ai-studio Prompt Flow Tools Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md
The following table provides an index of tools in prompt flow.
| Tool name | Description | Package name |
-||--|-|--|
+||--|-|
| [LLM](./llm-tool.md) | Use large language models (LLM) with the Azure OpenAI Service for tasks such as text completion or chat. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Python](./python-tool.md) | Run Python code. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
The following table provides an index of tools in prompt flow.
| [Content Safety (Text)](./content-safety-tool.md) | Use Azure AI Content Safety to detect harmful content. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Embedding](./embedding-tool.md) | Use Azure OpenAI embedding models to create an embedding vector that represents the input text. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Serp API](./serp-api-tool.md) | Use Serp API to obtain search results from a specific search engine. | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Index Lookup](./index-lookup-tool.md) | Search a vector-based query for relevant results using one or more text queries. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Index Lookup](./index-lookup-tool.md)<sup>1</sup> | Search a vector-based query for relevant results using one or more text queries. | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
<sup>1</sup> The Index Lookup tool replaces the three deprecated legacy index tools: Vector Index Lookup, Vector DB Lookup, and Faiss Index Lookup. If you have a flow that contains one of those tools, follow the [migration steps](./index-lookup-tool.md#migrate-from-legacy-tools-to-the-index-lookup-tool) to upgrade your flow.
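If you author flows locally, the packages listed in the table can be installed up front. A minimal sketch, using the package names from the table above:

```bash
# Install the prompt flow tool packages referenced in the table.
pip install promptflow-tools promptflow-vectordb
```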
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azur
Previously updated : 07/09/2024 Last updated : 07/20/2024
For more information on Kubernetes storage classes for Azure Files, see [Kuberne
- mfsymlinks - cache=strict - actimeo=30
+ - nobrl # disable sending byte range lock requests to the server; useful for applications that have challenges with POSIX locks
parameters: skuName: Premium_LRS ```
mountOptions:
- mfsymlinks - cache=strict - actimeo=30
+ - nobrl # disable sending byte range lock requests to the server; useful for applications that have challenges with POSIX locks
parameters: skuName: Premium_LRS ```
Kubernetes needs credentials to access the file share created in the previous st
- mfsymlinks - cache=strict - nosharesock
- - nobrl
+ - nobrl # disable sending byte range lock requests to the server; useful for applications that have challenges with POSIX locks
``` 2. Create the persistent volume using the [`kubectl create`][kubectl-create] command.
spec:
volumeAttributes: secretName: azure-secret # required shareName: aksshare # required
- mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock' # optional
+ mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock,nobrl' # optional
``` 2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
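After the persistent volume is created, you can confirm that `nobrl` and the other mount options were recorded on it. A small sketch, assuming a PersistentVolume named `azurefile-pv` (a placeholder name):

```bash
# Print the mount options stored on the PersistentVolume spec.
kubectl get pv azurefile-pv -o jsonpath='{.spec.mountOptions}'
```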
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
Title: Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS) description: Learn how to migrate from in-tree persistent volume to the Container Storage Interface (CSI) driver in an Azure Kubernetes Service (AKS) cluster. Previously updated : 01/11/2024 Last updated : 07/20/2024
Migration from in-tree to CSI is supported by creating a static volume:
- mfsymlinks - cache=strict - nosharesock
- - nobrl
+ - nobrl # disable sending byte range lock requests to the server; useful for applications that have challenges with POSIX locks
``` 5. Create a file named *azurefile-mount-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume* using the following code.
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
Microsoft provides best-effort support for [the latest version of Dapr and two p
You can run Azure CLI commands to retrieve a list of available versions in [a cluster](/cli/azure/k8s-extension/extension-types#az-k8s-extension-extension-types-list-versions-by-cluster) or [a location](/cli/azure/k8s-extension/extension-types#az-k8s-extension-extension-types-list-versions-by-location).
-To view a list of the stable Dapr versions available to your managed AKS cluster, run the following command:
-
-```azurecli
-az k8s-extension extension-types list-versions-by-cluster --resource-group myResourceGroup --cluster-name myCluster --cluster-type managedClusters --extension-type microsoft.dapr --release-train stable
-```
-
-To see the latest stable Dapr version available to your managed AKS cluster, run the following:
-
-```azurecli
-az k8s-extension extension-types list-versions-by-cluster --resource-group myResourceGroup --cluster-name myCluster --cluster-type managedClusters --extension-type microsoft.dapr --release-train stable --show-latest
-```
-
-To view a list of the stable Dapr versions available _by location_:
-1. [Make sure you've registered the `ExtenstionTypes` feature to your Azure subscription.](./dapr.md#register-the-extenstiontypes-feature-to-your-azure-subscription)
-1. Run the following command.
-
-```azurecli
-az k8s-extension extension-types list-versions-by-location --location westus --extension-type microsoft.dapr
-```
+[Learn how to view and target the latest stable Dapr versions available to your managed AKS cluster.](./dapr.md#viewing-the-latest-stable-dapr-versions-available)
### Runtime support
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
[Dapr](./dapr-overview.md) simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. With Dapr's sidecar architecture, you can keep your code platform agnostic while tackling challenges around building microservices, like: - Calling other services reliably and securely - Building event-driven apps with pub/sub-- Building applications that are portable across multiple cloud services and hosts (for example, Kubernetes vs. a VM)
+- Building applications that are portable across multiple cloud services and hosts (for example, Kubernetes vs. a virtual machine)
> [!NOTE] > If you plan on installing Dapr in a Kubernetes production environment, see the [Dapr guidelines for production usage][kubernetes-production] documentation page.
Once Dapr is installed on your cluster, you can begin to develop using the Dapr
## Prerequisites - An Azure subscription. [Don't have one? Create a free account.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)-- Install the latest version of the [Azure CLI][install-cli].
+- The latest version of the [Azure CLI][install-cli].
- An existing [AKS cluster][deploy-cluster] or connected [Arc-enabled Kubernetes cluster][arc-k8s-cluster].-- [An Azure Kubernetes Service RBAC Admin role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-rbac-admin)
+- [An Azure Kubernetes Service Role-Based Access Control Admin role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-rbac-admin)
Select how you'd like to install, deploy, and configure the Dapr extension.
az extension update --name k8s-extension
### Register the `KubernetesConfiguration` resource provider
-If you haven't previously used cluster extensions, you may need to register the resource provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+If you aren't already using cluster extensions, you may need to register the resource provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
```azurecli-interactive az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table
Create the Dapr extension, which installs Dapr on your AKS or Arc-enabled Kubern
For example, install the latest version of Dapr via the Dapr extension on your AKS cluster: ```azurecli az k8s-extension create --cluster-type managedClusters \
---cluster-name myAKSCluster \
---resource-group myResourceGroup \
+--cluster-name <myAKSCluster> \
+--resource-group <myResourceGroup> \
--name dapr \ --extension-type Microsoft.Dapr \ --auto-upgrade-minor-version false ```
-### Configuring automatic updates to Dapr control plane
+### Keep your managed AKS cluster updated to the latest version
+
+Based on your environment (dev, test, or production), you can keep up to date with the latest stable Dapr versions.
+
+#### Choosing a release train
+
+When configuring the extension, you can choose to install Dapr from a particular release train. Specify one of the two release train values:
+
+| Value | Description |
+| -- | -- |
+| `stable` | Default. |
+| `dev` | Early releases that can contain experimental features. Not suitable for production. |
+
+For example:
+
+```azurecli
+--release-train stable
+```
+
+#### Configuring automatic updates to Dapr control plane
> [!WARNING]
-> You can enable automatic updates to the Dapr control plane only in dev or test environments. Auto-upgrade is not suitable for production environments.
+> Auto-upgrade is not suitable for production environments. Only enable automatic updates to the Dapr control plane in dev or test environments. [Learn how to manually upgrade to the latest Dapr version for production environments.](#viewing-the-latest-stable-dapr-versions-available)
If you install Dapr without specifying a version, `--auto-upgrade-minor-version` *is automatically enabled*, configuring the Dapr control plane to automatically update its minor version on new releases.
You can disable auto-update by specifying the `--auto-upgrade-minor-version` par
--auto-upgrade-minor-version true ```
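For reference, a sketch of toggling this setting on an extension that is already installed, assuming the `az k8s-extension update` command and placeholder resource names:

```azurecli
# Sketch: re-enable minor-version auto-upgrade on an existing Dapr extension.
az k8s-extension update --cluster-type managedClusters \
--cluster-name <myAKSCluster> \
--resource-group <myResourceGroup> \
--name dapr \
--auto-upgrade-minor-version true
```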
-### Targeting a specific Dapr version
+#### Viewing the latest stable Dapr versions available
-> [!NOTE]
-> Dapr is supported with a rolling window, including only the current and previous versions. It is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr, you may have to do intermediate upgrades to get to a supported version.
+To upgrade to the latest Dapr version in a production environment, you need to manually upgrade. Start by viewing a list of the stable Dapr versions available to your managed AKS cluster. Run the following command:
+
+```azurecli
+az k8s-extension extension-types list-versions-by-cluster --resource-group <myResourceGroup> --cluster-name <myCluster> --cluster-type managedClusters --extension-type microsoft.dapr --release-train stable
+```
-The same command-line argument is used for installing a specific version of Dapr or rolling back to a previous version. Set `--auto-upgrade-minor-version` to `false` and `--version` to the version of Dapr you wish to install. If the `version` parameter is omitted, the extension installs the latest version of Dapr. For example, to use Dapr 1.11.2:
+To see the latest stable Dapr version available to your managed AKS cluster, run the following command:
```azurecli
-az k8s-extension create --cluster-type managedClusters \
---cluster-name myAKSCluster \
---resource-group myResourceGroup \
---name dapr \
---extension-type Microsoft.Dapr \
---auto-upgrade-minor-version false \
---version 1.11.2
+az k8s-extension extension-types list-versions-by-cluster --resource-group <myResourceGroup> --cluster-name <myCluster> --cluster-type managedClusters --extension-type microsoft.dapr --release-train stable --show-latest
```
-### Choosing a release train
+To view a list of the stable Dapr versions available _by location_:
+1. [Make sure you've registered the `ExtensionTypes` feature to your Azure subscription.](./dapr.md#register-the-extenstiontypes-feature-to-your-azure-subscription)
+1. Run the following command.
-When configuring the extension, you can choose to install Dapr from a particular release train. Specify one of the two release train values:
+```azurecli
+az k8s-extension extension-types list-versions-by-location --location westus --extension-type microsoft.dapr
+```
-| Value | Description |
-| -- | -- |
-| `stable` | Default. |
-| `dev` | Early releases, can contain experimental features. Not suitable for production. |
+[Next, manually update Dapr to the latest stable version.](#targeting-a-specific-dapr-version)
-For example:
+#### Targeting a specific Dapr version
+
+> [!NOTE]
+> Dapr is supported with a rolling window, including only the current and previous versions. It is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr, you may have to do intermediate upgrades to get to a supported version.
+
+The same command-line argument is used for installing a specific version of Dapr or rolling back to a previous version. Set `--auto-upgrade-minor-version` to `false` and `--version` to the version of Dapr you wish to install. If the `version` parameter is omitted, the extension installs the latest version of Dapr. For example, to use Dapr 1.13.5:
```azurecli
---release-train stable
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name <myAKSCluster> \
+--resource-group <myResourceGroup> \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version false \
+--version 1.13.5
``` # [Bicep](#tab/bicep)
For example:
### Register the `KubernetesConfiguration` resource provider
-If you haven't previously used cluster extensions, you may need to register the resource provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+If you aren't already using cluster extensions, you may need to register the resource provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
```azurecli-interactive az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table
az feature show --namespace Microsoft.KubernetesConfiguration --name ExtensionTy
## Deploy the Dapr extension on your AKS or Arc-enabled Kubernetes cluster
-Create a Bicep template similar to the following example to deploy the Dapr extension to your existing cluster.
+Create a Bicep template similar to the following example and deploy the Dapr extension to your existing cluster.
```bicep @description('The name of the Managed Cluster resource.')
resource daprExtension 'Microsoft.KubernetesConfiguration/extensions@2022-11-01'
} ```
-Set the following variables, changing the values below to your actual resource group and cluster names.
+Set the following variables, replacing the placeholder values with your actual resource group and cluster names.
```azurecli-interactive
-MY_RESOURCE_GROUP=myResourceGroup
-MY_AKS_CLUSTER=myAKScluster
+MY_RESOURCE_GROUP=<myResourceGroup>
+MY_AKS_CLUSTER=<myAKSCluster>
``` Deploy the Bicep template using the `az deployment group` command.
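A minimal sketch of that deployment step, assuming the template is saved locally as *dapr-extension.bicep* and exposes a `clusterName` parameter (both assumptions about your template):

```azurecli
# Sketch: deploy the Bicep template using the variables set above.
az deployment group create \
  --resource-group $MY_RESOURCE_GROUP \
  --template-file dapr-extension.bicep \
  --parameters clusterName=$MY_AKS_CLUSTER
```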
When configuring the extension, you can choose to install Dapr from a particular
| Value | Description |
| -- | -- |
| `stable` | Default. |
-| `dev` | Early releases, can contain experimental features. Not suitable for production. |
+| `dev` | Early releases that can contain experimental features. Not suitable for production. |
For example:
Troubleshoot Dapr errors via the [common Dapr issues and solutions guide][dapr-t
If you need to delete the extension and remove Dapr from your AKS cluster, you can use the following command: ```azurecli
-az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSCluster --cluster-type managedClusters --name dapr
+az k8s-extension delete --resource-group <myResourceGroup> --cluster-name <myAKSCluster> --cluster-type managedClusters --name dapr
```
-Or simply remove the Bicep template.
+Or you can remove the Bicep template.
## Next Steps
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
See [Create diagnostic settings](../azure-monitor/essentials/diagnostic-settings
> > - Disable kube-audit logging when not required. > - Enable collection from *kube-audit-admin*, which excludes the get and list audit events.
-> - Enable resource-specific logs as described below and configure `AKSAudit` table as [basic logs](../azure-monitor/logs/basic-logs-configure.md).
+> - Enable resource-specific logs as described below and configure `AKSAudit` table as [basic logs](../azure-monitor/logs/logs-table-plans.md).
> > See [Monitor Kubernetes clusters using Azure services and cloud native tools](../azure-monitor/containers/monitor-kubernetes.md) for further recommendations and [Cost optimization and Azure Monitor](../azure-monitor/best-practices-cost.md) for further strategies to reduce your monitoring costs.
AKS supports either [Azure diagnostics mode](../azure-monitor/essentials/resourc
Resource-specific mode is recommended for AKS for the following reasons: - Data is easier to query because it's in individual tables dedicated to AKS.-- Supports configuration as [basic logs](../azure-monitor/logs/basic-logs-configure.md) for significant cost savings.
+- Supports configuration as [basic logs](../azure-monitor/logs/logs-table-plans.md) for significant cost savings.
For more information on the difference between collection modes including how to change an existing setting, see [Select the collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode).
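As a rough illustration of the *kube-audit-admin* recommendation above, the following is a hedged Azure CLI sketch that sends only that category to a Log Analytics workspace; the setting name and resource IDs are placeholders, and you would add categories and table-plan configuration per the linked guidance:

```azurecli
# Sketch: collect only kube-audit-admin logs into a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name aks-audit-admin \
  --resource <aks-cluster-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category":"kube-audit-admin","enabled":true}]'
```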
Azure Monitor Container Insights provides a schema for container logs known as C
- PodName - PodNamespace
-In addition, this schema is compatible with [Basic Logs](../azure-monitor/logs/basic-logs-configure.md?tabs=portal-1#set-a-tables-log-data-plan) data plan, which offers a low-cost alternative to standard analytics logs. The Basic log data plan lets you save on the cost of ingesting and storing high-volume verbose logs in your Log Analytics workspace for debugging, troubleshooting, and auditing, but not for analytics and alerts. For more information, see [Manage tables in a Log Analytics workspace](../azure-monitor/logs/manage-logs-tables.md?tabs=azure-portal).
+In addition, this schema is compatible with [Basic Logs](../azure-monitor/logs/logs-table-plans.md?tabs=portal-1#set-the-table-plan) data plan, which offers a low-cost alternative to standard analytics logs. The Basic log data plan lets you save on the cost of ingesting and storing high-volume verbose logs in your Log Analytics workspace for debugging, troubleshooting, and auditing, but not for analytics and alerts. For more information, see [Manage tables in a Log Analytics workspace](../azure-monitor/logs/manage-logs-tables.md?tabs=azure-portal).
ContainerLogV2 is the recommended approach and is the default schema for customers onboarding container insights with Managed Identity Auth using ARM, Bicep, Terraform, Policy, and Azure portal. For more information about how to enable ContainerLogV2 through either the cluster's Data Collection Rule (DCR) or ConfigMap, see [Enable the ContainerLogV2 schema](../azure-monitor/containers/container-insights-logs-schema.md?tabs=configure-portal#enable-the-containerlogv2-schema). ## Visualization
app-service Configure Authentication User Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-user-identities.md
For all language frameworks, App Service makes the claims in the incoming token
||--|
| `X-MS-CLIENT-PRINCIPAL` | A Base64 encoded JSON representation of available claims. For more information, see [Decoding the client principal header](#decoding-the-client-principal-header). |
| `X-MS-CLIENT-PRINCIPAL-ID` | An identifier for the caller set by the identity provider. |
-| `X-MS-CLIENT-PRINCIPAL-NAME` | A human-readable name for the caller set by the identity provider, e.g. Email Address, User Principal Name. |
+| `X-MS-CLIENT-PRINCIPAL-NAME` | A human-readable name for the caller set by the identity provider, such as email address or user principal name. |
| `X-MS-CLIENT-PRINCIPAL-IDP` | The name of the identity provider used by App Service Authentication. |
Provider tokens are also exposed through similar headers. For example, Microsoft Entra also sets `X-MS-TOKEN-AAD-ACCESS-TOKEN` and `X-MS-TOKEN-AAD-ID-TOKEN` as appropriate.
> [!NOTE]
-> Different language frameworks may present these headers to the app code in different formats, such as lowercase or title case.
+> Different language frameworks might present these headers to the app code in different formats, such as lowercase or title case.
-Code that is written in any language or framework can get the information that it needs from these headers. [Decoding the client principal header](#decoding-the-client-principal-header) covers this process. For some frameworks, the platform also provides extra options that may be more convenient.
+Code that is written in any language or framework can get the information that it needs from these headers. [Decoding the client principal header](#decoding-the-client-principal-header) covers this process. For some frameworks, the platform also provides extra options that might be more convenient.
### Decoding the client principal header
-`X-MS-CLIENT-PRINCIPAL` contains the full set of available claims as Base64 encoded JSON. These claims go through a default claims-mapping process, so some may have different names than you would see if processing the token directly. The decoded payload is structured as follows:
+`X-MS-CLIENT-PRINCIPAL` contains the full set of available claims as Base64 encoded JSON. These claims go through a default claims-mapping process, so some might have different names than you would see if processing the token directly. The decoded payload is structured as follows:
```json {
Code that is written in any language or framework can get the information that i
||||
| `auth_typ` | string | The name of the identity provider used by App Service Authentication. |
| `claims` | array of objects | An array of objects representing the available claims. Each object contains `typ` and `val` properties. |
-| `typ` | string | The name of the claim. This may have been subject to default claims mapping and could be different from the corresponding claim contained in a token. |
+| `typ` | string | The name of the claim. It might be subject to default claims mapping and could be different from the corresponding claim contained in a token. |
| `val` | string | The value of the claim. |
| `name_typ` | string | The name claim type, which is typically a URI providing scheme information about the `name` claim if one is defined. |
| `role_typ` | string | The role claim type, which is typically a URI providing scheme information about the `role` claim if one is defined. |
-To process this header, your app will need to decode the payload and iterate through the `claims` array to find the claims of interest. It may be convenient to convert these into a representation used by the app's language framework. Here's an example of this process in C# that constructs a [ClaimsPrincipal](/dotnet/api/system.security.claims.claimsprincipal) type for the app to use:
+To process this header, your app needs to decode the payload and iterate through the `claims` array to find the claims of interest. It might be convenient to convert them into a representation used by the app's language framework. Here's an example of this process in C# that constructs a [ClaimsPrincipal](/dotnet/api/system.security.claims.claimsprincipal) type for the app to use:
```csharp using System;
public static class ClaimsPrincipalParser
### Framework-specific alternatives
-For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth).
+For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java-security.md#authenticate-users-easy-auth).
For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` isn't populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. For more information, see [Working with client identities in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities).
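For quick troubleshooting outside of app code, the same header can be decoded with standard shell tools. A sketch, assuming `jq` is installed and the shell variable `CLIENT_PRINCIPAL` holds the raw Base64 header value (both assumptions):

```bash
# Decode the X-MS-CLIENT-PRINCIPAL header and list each claim's type and value.
echo "$CLIENT_PRINCIPAL" | base64 --decode | jq '.claims[] | {typ, val}'
```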
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
Other language stacks, likewise, get the app settings as environment variables a
- [Node.js](configure-language-nodejs.md#access-environment-variables) - [PHP](configure-language-php.md#access-environment-variables) - [Python](configure-language-python.md#access-app-settings-as-environment-variables)-- [Java](configure-language-java.md#configure-data-sources)
+- [Java](configure-language-java-data-sources.md)
- [Custom containers](configure-custom-container.md#configure-environment-variables) App settings are always encrypted when stored (encrypted-at-rest).
At runtime, connection strings are available as environment variables, prefixed
* Notification Hub: `NOTIFICATIONHUBCONNSTR_` * Service Bus: `SERVICEBUSCONNSTR_` * Event Hub: `EVENTHUBCONNSTR_`
-* Document Db: `DOCDBCONNSTR_`
+* Document DB: `DOCDBCONNSTR_`
* Redis Cache: `REDISCACHECONNSTR_` >[!Note]
For example, a MySQL connection string named *connectionstring1* can be accessed
- [Node.js](configure-language-nodejs.md#access-environment-variables) - [PHP](configure-language-php.md#access-environment-variables) - [Python](configure-language-python.md#access-environment-variables)-- [Java](configure-language-java.md#configure-data-sources)
+- [Java](configure-language-java-data-sources.md)
- [Custom containers](configure-custom-container.md#configure-environment-variables) Connection strings are always encrypted when stored (encrypted-at-rest).
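A short sketch of both sides of this flow, assuming placeholder app, resource group, and database values:

```azurecli
# Sketch: define a MySQL connection string named connectionstring1 on the app.
az webapp config connection-string set \
  --name <app-name> \
  --resource-group <group-name> \
  --connection-string-type MySql \
  --settings connectionstring1='Server=<server>;Database=<db>;Uid=<user>;Pwd=<password>'

# At runtime the app reads it from the prefixed environment variable,
# for example MYSQLCONNSTR_connectionstring1.
```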
It's not possible to edit connection strings in bulk by using a JSON file with A
- [Node.js](configure-language-nodejs.md) - [PHP](configure-language-php.md) - [Python](configure-language-python.md)-- [Java](configure-language-java.md)
+- [Java](configure-language-java-deploy-run.md)
<a name="alwayson"></a>
app-service Configure Language Java Apm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-apm.md
+
+ Title: Configure APM platforms for Tomcat, JBoss, or Java SE apps
+description: Learn how to configure APM platforms, such as Application Insights, NewRelic, and AppDynamics, for Tomcat, JBoss, or Java SE apps on Azure App Service.
+keywords: azure app service, web app, windows, oss, java, tomcat, jboss, spring boot, quarkus
+ms.devlang: java
+ Last updated : 07/17/2024+
+zone_pivot_groups: app-service-java-hosting
+adobe-target: true
++++
+# Configure APM platforms for Tomcat, JBoss, or Java SE apps in Azure App Service
+
+This article shows how to connect Java applications deployed on Azure App Service with Azure Monitor Application Insights, NewRelic, and AppDynamics application performance monitoring (APM) platforms.
++
+## Configure Application Insights
+
+Azure Monitor Application Insights is a cloud native application monitoring service that enables customers to observe failures, bottlenecks, and usage patterns to improve application performance and reduce mean time to resolution (MTTR). With a few clicks or CLI commands, you can enable monitoring for your Node.js or Java apps, autocollecting logs, metrics, and distributed traces, which eliminates the need to include an SDK in your app. For more information about the available app settings for configuring the agent, see the [Application Insights documentation](../azure-monitor/app/java-standalone-config.md).
+
+# [Azure portal](#tab/portal)
+
+To enable Application Insights from the Azure portal, go to **Application Insights** on the left-side menu and select **Turn on Application Insights**. By default, a new Application Insights resource with the same name as your web app is used. You can choose to use an existing Application Insights resource, or change the name. Select **Apply** at the bottom.
+
+# [Azure CLI](#tab/cli)
+
+To enable Application Insights via the Azure CLI, you need to create an Application Insights resource and set a couple of app settings to connect Application Insights to your web app.
+
+1. Enable the Application Insights extension
+
+ ```azurecli
+ az extension add -n application-insights
+ ```
+
+2. Create an Application Insights resource using the following CLI command. Replace the placeholders with your desired resource name and group.
+
+ ```azurecli
+ az monitor app-insights component create --app <resource-name> -g <resource-group> --location westus2 --kind web --application-type web
+ ```
+
+    Note the values for `connectionString` and `instrumentationKey`; you'll need these values in the next step.
+
+ > [!NOTE]
+ > To retrieve a list of other locations, run `az account list-locations`.
+
+3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step.
+
+ # [Windows](#tab/windows)
+
+ ```azurecli
+ az webapp config appsettings set -n <webapp-name> -g <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentationKey>" "APPLICATIONINSIGHTS_CONNECTION_STRING=<connectionString>" "ApplicationInsightsAgent_EXTENSION_VERSION=~3" "XDT_MicrosoftApplicationInsights_Mode=default" "XDT_MicrosoftApplicationInsights_Java=1"
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```azurecli
+ az webapp config appsettings set -n <webapp-name> -g <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentationKey>" "APPLICATIONINSIGHTS_CONNECTION_STRING=<connectionString>" "ApplicationInsightsAgent_EXTENSION_VERSION=~3" "XDT_MicrosoftApplicationInsights_Mode=default"
+ ```
+
+
+++
+## Configure New Relic
+
+# [Windows](#tab/windows)
+
+1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup)
+2. Download the Java agent from NewRelic. It has a file name similar to *newrelic-java-x.x.x.zip*.
+3. Copy your license key; you need it to configure the agent later.
+4. Use the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console) to create a new directory */home/site/wwwroot/apm*.
+5. Upload the unpacked NewRelic Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/newrelic*.
+6. Modify the YAML file at */home/site/wwwroot/apm/newrelic/newrelic.yml* and replace the placeholder license value with your own license key.
+7. In the Azure portal, browse to your application in App Service and create a new Application Setting.
+
+ ::: zone pivot="java-javase"
+
+ Create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
+
+ ::: zone-end
+
+ ::: zone pivot="java-tomcat"
+
+ Create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
+
+ ::: zone-end
+
+ ::: zone pivot="java-jboss"
+
+ For **JBoss EAP**, `[TODO]`.
+
+ ::: zone-end
+
+# [Linux](#tab/linux)
+
+1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup)
+2. Download the Java agent from NewRelic. It has a file name similar to *newrelic-java-x.x.x.zip*.
+3. Copy your license key; you need it to configure the agent later.
+4. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*.
+5. Upload the unpacked NewRelic Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/newrelic*.
+6. Modify the YAML file at */home/site/wwwroot/apm/newrelic/newrelic.yml* and replace the placeholder license value with your own license key.
+7. In the Azure portal, browse to your application in App Service and create a new Application Setting.
+
+ ::: zone pivot="java-javase"
+
+ Create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
+
+ ::: zone-end
+
+ ::: zone pivot="java-tomcat"
+
+ Create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
+
+ ::: zone-end
+
+ ::: zone pivot="java-jboss"
+
+ For **JBoss EAP**, `[TODO]`.
+
+ ::: zone-end
+++
+> [!NOTE]
+> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
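These settings can also be applied without the portal. A sketch with the Azure CLI, assuming a Tomcat app and the agent path used above (the web app and resource group names are placeholders):

```azurecli
# Sketch: point Tomcat at the New Relic agent through an app setting.
az webapp config appsettings set -n <webapp-name> -g <resource-group> \
  --settings "CATALINA_OPTS=-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar"
```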
+
+## Configure AppDynamics
+
+# [Windows](#tab/windows)
+
+1. Create an AppDynamics account at [AppDynamics.com](https://www.appdynamics.com/community/register/)
+2. Download the Java agent from the AppDynamics website. The file name is similar to *AppServerAgent-x.x.x.xxxxx.zip*
+3. Use the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console) to create a new directory */home/site/wwwroot/apm*.
+4. Upload the Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/appdynamics*.
+5. In the Azure portal, browse to your application in App Service and create a new Application Setting.
+
+ ::: zone pivot="java-javase"
+
+ Create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name. If you already have an environment variable for `JAVA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+
+ ::: zone-end
+
+ ::: zone pivot="java-tomcat"
+
+ Create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name. If you already have an environment variable for `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+
+ ::: zone-end
+
+ ::: zone pivot="java-jboss"
+
+ For **JBoss EAP**, `[TODO]`.
+
+ ::: zone-end
+
+# [Linux](#tab/linux)
+
+1. Create an AppDynamics account at [AppDynamics.com](https://www.appdynamics.com/community/register/)
+2. Download the Java agent from the AppDynamics website. The file name is similar to *AppServerAgent-x.x.x.xxxxx.zip*
+3. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*.
+4. Upload the Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/appdynamics*.
+5. In the Azure portal, browse to your application in App Service and create a new Application Setting.
+
+ ::: zone pivot="java-javase"
+
+ Create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name. If you already have an environment variable for `JAVA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+
+ ::: zone-end
+
+ ::: zone pivot="java-tomcat"
+
+ Create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name. If you already have an environment variable for `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+
+ ::: zone-end
+
+ ::: zone pivot="java-jboss"
+
+ For **JBoss EAP**, `[TODO]`.
+
+ ::: zone-end
+++
+## Configure Datadog
+
+# [Windows](#tab/windows)
+* The configuration options are different depending on which Datadog site your organization is using. See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/)
+
+# [Linux](#tab/linux)
+* The configuration options are different depending on which Datadog site your organization is using. See the official [Datadog Integration for Azure Documentation](https://docs.datadoghq.com/integrations/azure/)
+++
+## Configure Dynatrace
+
+# [Windows](#tab/windows)
+* Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/)
+
+# [Linux](#tab/linux)
+* Dynatrace provides an [Azure Native Dynatrace Service](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/). To monitor Azure App Services using Dynatrace, see the official [Dynatrace for Azure documentation](https://www.dynatrace.com/monitoring/technologies/azure-monitoring/)
+++
+## Next steps
+
+Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation.
+
+- [App Service Linux FAQ](faq-app-service-linux.yml)
+- [Environment variables and app settings reference](reference-app-settings.md)
app-service Configure Language Java Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-data-sources.md
+
+ Title: Configure data sources for Tomcat, JBoss, or Java SE apps
+description: Learn how to configure data sources for Tomcat, JBoss, or Java SE apps on Azure App Service, including native Windows and Linux container variants.
+keywords: azure app service, web app, windows, oss, java, tomcat, jboss
+ms.devlang: java
+ Last updated : 07/17/2024+
+zone_pivot_groups: app-service-java-hosting
+adobe-target: true
++++
+# Configure data sources for a Tomcat, JBoss, or Java SE app in Azure App Service
+
+This article shows how to configure data sources in a Java SE, Tomcat, or JBoss app in App Service.
++
+## Configure the data source
++
+To connect to data sources in Spring Boot applications, we suggest creating connection strings and injecting them into your *application.properties* file.
+
+1. In the "Configuration" section of the App Service page, set a name for the string, paste your JDBC connection string into the value field, and set the type to "Custom". You can optionally set this connection string as a slot setting.
+
+ This connection string is accessible to our application as an environment variable named `CUSTOMCONNSTR_<your-string-name>`. For example, `CUSTOMCONNSTR_exampledb`.
+
+2. In your *application.properties* file, reference this connection string with the environment variable name. For our example, we would use the following code:
+
+ ```yml
+ app.datasource.url=${CUSTOMCONNSTR_exampledb}
+ ```
+
+For more information, see the [Spring Boot documentation on data access](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html) and [externalized configurations](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html).
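The connection string can also be created from the Azure CLI instead of the portal. A sketch, assuming the string is named `exampledb` as in the example above; the app name, resource group, and JDBC URL are placeholders:

```azurecli
# Sketch: create a custom connection string exposed as CUSTOMCONNSTR_exampledb.
az webapp config connection-string set \
  --name <app-name> \
  --resource-group <group-name> \
  --connection-string-type Custom \
  --settings exampledb='jdbc:postgresql://<host>:5432/<database>?user=<user>&password=<password>'
```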
+++
+> [!TIP]
+> By default, the Linux Tomcat containers can automatically configure shared data sources for you in the Tomcat server. The only thing for you to do is add an app setting that contains a valid JDBC connection string to an Oracle, SQL Server, PostgreSQL, or MySQL database (including the connection credentials), and App Service automatically adds the corresponding shared database to */usr/local/tomcat/conf/context.xml* for you, using an appropriate driver available in the container. For an end-to-end scenario using this approach, see [Tutorial: Build a Tomcat web app with Azure App Service on Linux and MySQL](tutorial-java-tomcat-mysql-app.md).
+
+These instructions apply to all database connections. You need to fill placeholders with your chosen database's driver class name and JAR file. The following table lists driver class names and driver downloads for common databases.
+
+| Database | Driver Class Name | JDBC Driver |
+||--||
+| PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download/) |
+| MySQL | `com.mysql.jdbc.Driver` | [Download](https://dev.mysql.com/downloads/connector/j/) (Select "Platform Independent") |
+| SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server#download) |
+
+To configure Tomcat to use Java Database Connectivity (JDBC) or the Java Persistence API (JPA), first customize the `CATALINA_OPTS` environment variable that is read in by Tomcat at start-up. Set these values through an app setting in the [App Service Maven plugin](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md):
+
+```xml
+<appSettings>
+ <property>
+ <name>CATALINA_OPTS</name>
+ <value>"$CATALINA_OPTS -Ddbuser=${DBUSER} -Ddbpassword=${DBPASSWORD} -DconnURL=${CONNURL}"</value>
+ </property>
+</appSettings>
+```
+
+Or set the environment variables in the **Configuration** > **Application Settings** page in the Azure portal.
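Or set them with the Azure CLI. A sketch that mirrors the Maven plugin block above, with placeholder app, resource group, and database values:

```azurecli
# Sketch: set CATALINA_OPTS so Tomcat resolves the data source properties at startup.
az webapp config appsettings set -n <app-name> -g <group-name> \
  --settings 'CATALINA_OPTS=-Ddbuser=<db-user> -Ddbpassword=<db-password> -DconnURL=<jdbc-url>'
```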
+
+Next, determine if the data source should be available to one application or to all applications running in the Tomcat servlet container.
+
+### Application-level data sources
+
+1. Create a *context.xml* file in the *META-INF/* directory of your project. Create the *META-INF/* directory if it doesn't exist.
+
+2. In *context.xml*, add a `Context` element to link the data source to a JNDI address. Replace the `driverClassName` placeholder with your driver's class name from the table above.
+
+ ```xml
+ <Context>
+ <Resource
+ name="jdbc/dbconnection"
+ type="javax.sql.DataSource"
+ url="${connURL}"
+ driverClassName="<insert your driver class name>"
+ username="${dbuser}"
+ password="${dbpassword}"
+ />
+ </Context>
+ ```
+
+3. Update your application's *web.xml* to use the data source in your application.
+
+ ```xml
+ <resource-env-ref>
+ <resource-env-ref-name>jdbc/dbconnection</resource-env-ref-name>
+ <resource-env-ref-type>javax.sql.DataSource</resource-env-ref-type>
+ </resource-env-ref>
+ ```
+
+### Shared server-level resources
+
+# [Windows](#tab/windows)
+
+You can't directly modify a Tomcat installation for server-wide configuration because the installation location is read-only. To make server-level configuration changes to your Windows Tomcat installation, the simplest way is to do the following on app start:
+
+1. Copy Tomcat to a local directory (`%LOCAL_EXPANDED%`) and use that as `CATALINA_BASE` (see [Tomcat documentation on this variable](https://tomcat.apache.org/tomcat-10.1-doc/introduction.html)).
+1. Add your shared data sources to `%LOCAL_EXPANDED%\tomcat\conf\server.xml` using XSL transform.
+
+#### Add a startup file
+
+Create a file named `startup.cmd` in the `%HOME%\site\wwwroot` directory. This file runs automatically before the Tomcat server starts. The file should have the following content:
+
+```dos
+C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File %HOME%\site\configure.ps1
+```
+
+#### Add the PowerShell configuration script
+
+Next, add the configuration script called *configure.ps1* to the *%HOME%\site* directory with the following code:
+
+```powershell
+# Locations of xml and xsl files
+$target_xml="$Env:LOCAL_EXPANDED\tomcat\conf\server.xml"
+$target_xsl="$Env:HOME\site\server.xsl"
+
+# Define the transform function
+# Useful if transforming multiple files
+function TransformXML{
+ param ($xml, $xsl, $output)
+
+ if (-not $xml -or -not $xsl -or -not $output)
+ {
+ return 0
+ }
+
+ Try
+ {
+ $xslt_settings = New-Object System.Xml.Xsl.XsltSettings;
+ $XmlUrlResolver = New-Object System.Xml.XmlUrlResolver;
+ $xslt_settings.EnableScript = 1;
+
+ $xslt = New-Object System.Xml.Xsl.XslCompiledTransform;
+ $xslt.Load($xsl,$xslt_settings,$XmlUrlResolver);
+ $xslt.Transform($xml, $output);
+ }
+
+ Catch
+ {
+ $ErrorMessage = $_.Exception.Message
+ $FailedItem = $_.Exception.ItemName
+ echo 'Error'$ErrorMessage':'$FailedItem':' $_.Exception;
+ return 0
+ }
+ return 1
+}
+
+# Start here
+
+# Check for marker file indicating that config has already been done
+if(Test-Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker"){
+ return 0
+}
+
+# Delete previous Tomcat directory if it exists
+# In case previous config isn't completed or a new config should be forcefully installed
+if(Test-Path "$Env:LOCAL_EXPANDED\tomcat"){
+    Remove-Item "$Env:LOCAL_EXPANDED\tomcat" -Recurse
+}
+
+md -Path "$Env:LOCAL_EXPANDED\tomcat"
+
+# Copy Tomcat to local
+# Using the environment variable $AZURE_TOMCAT90_HOME uses the 'default' version of Tomcat
+New-Item "$Env:LOCAL_EXPANDED\tomcat" -ItemType Directory
+Copy-Item -Path "$Env:AZURE_TOMCAT90_HOME\*" "$Env:LOCAL_EXPANDED\tomcat" -Recurse
+
+# Perform the required customization of Tomcat
+$success = TransformXML -xml $target_xml -xsl $target_xsl -output $target_xml
+
+# Mark that the operation was a success if successful
+if($success){
+ New-Item -Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
+}
+```
+
+This PowerShell script completes the following steps:
+
+1. Check whether a custom Tomcat copy exists already. If it does, the startup script can end here.
+2. Copy Tomcat locally.
+3. Add shared data sources to the custom Tomcat's configuration using XSL transform.
+4. Indicate that configuration was successfully completed.
+
+#### Add XSL transform file
+
+A common use case for customizing the built-in Tomcat installation is to modify the `server.xml`, `context.xml`, or `web.xml` Tomcat configuration files. App Service already modifies these files to provide platform features. To continue to use these features, it's important to preserve the content of these files when you make changes to them. To accomplish this, use an [XSL transformation (XSLT)](https://www.w3schools.com/xml/xsl_intro.asp).
+
+Add an XSL transform file called *server.xsl* to the *%HOME%\site* directory. You can use the following XSL transform code to add a new connector node to `server.xml`. The *identity transform* at the beginning preserves the original contents of the configuration file.
+
+```xml
+<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+ <xsl:output method="xml" indent="yes"/>
+
+ <!-- Identity transform: this ensures that the original contents of the file are included in the new file -->
+ <!-- Ensure that your transform files include this block -->
+ <xsl:template match="@* | node()" name="Copy">
+ <xsl:copy>
+ <xsl:apply-templates select="@* | node()"/>
+ </xsl:copy>
+ </xsl:template>
+
+ <xsl:template match="@* | node()" mode="insertConnector">
+ <xsl:call-template name="Copy" />
+ </xsl:template>
+
+ <xsl:template match="comment()[not(../Connector[@scheme = 'https']) and
+ contains(., '&lt;Connector') and
+ (contains(., 'scheme=&quot;https&quot;') or
+ contains(., &quot;scheme='https'&quot;))]">
+ <xsl:value-of select="." disable-output-escaping="yes" />
+ </xsl:template>
+
+ <xsl:template match="Service[not(Connector[@scheme = 'https'] or
+ comment()[contains(., '&lt;Connector') and
+ (contains(., 'scheme=&quot;https&quot;') or
+ contains(., &quot;scheme='https'&quot;))]
+ )]
+ ">
+ <xsl:copy>
+ <xsl:apply-templates select="@* | node()" mode="insertConnector" />
+ </xsl:copy>
+ </xsl:template>
+
+  <!-- Add the new connector after the last existing Connector if there's one -->
+ <xsl:template match="Connector[last()]" mode="insertConnector">
+ <xsl:call-template name="Copy" />
+
+ <xsl:call-template name="AddConnector" />
+ </xsl:template>
+
+ <!-- ... or before the first Engine if there's no existing Connector -->
+ <xsl:template match="Engine[1][not(preceding-sibling::Connector)]"
+ mode="insertConnector">
+ <xsl:call-template name="AddConnector" />
+
+ <xsl:call-template name="Copy" />
+ </xsl:template>
+
+ <xsl:template name="AddConnector">
+ <!-- Add new line -->
+ <xsl:text>&#xa;</xsl:text>
+ <!-- This is the new connector -->
+ <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
+ maxThreads="150" scheme="https" secure="true"
+ keystoreFile="${{user.home}}/.keystore" keystorePass="changeit"
+ clientAuth="false" sslProtocol="TLS" />
+ </xsl:template>
+
+</xsl:stylesheet>
+```
+
+#### Set `CATALINA_BASE` app setting
+
+The platform also needs to know where your custom version of Tomcat is installed. You can set the installation's location in the `CATALINA_BASE` app setting.
+
+You can use the Azure CLI to change this setting:
+
+```azurecli
+ az webapp config appsettings set -g $MyResourceGroup -n $MyUniqueApp --settings CATALINA_BASE="%LOCAL_EXPANDED%\tomcat"
+```
+
+Or, you can manually change the setting in the Azure portal:
+
+1. Go to **Settings** > **Configuration** > **Application settings**.
+1. Select **New Application Setting**.
+1. Use these values to create the setting:
+ 1. **Name**: `CATALINA_BASE`
+ 1. **Value**: `"%LOCAL_EXPANDED%\tomcat"`
+
+#### Finalize configuration
+
+Finally, you place the driver JARs in the Tomcat classpath and restart your App Service. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR:
+
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --target-path <jar-name>.jar
+```
+
+# [Linux](#tab/linux)
+
+Adding a shared, server-level data source requires you to edit Tomcat's server.xml. The most reliable way to do this is as follows:
+
+1. Upload a [startup script](./faq-app-service-linux.yml) and set the path to the script in **Configuration** > **Startup Command**. You can upload the startup script using [FTP](deploy-ftp.md).
+
+Your startup script applies an [xsl transform](https://www.w3schools.com/xml/xsl_intro.asp) to the server.xml file and outputs the resulting xml file to `/home/tomcat/conf/server.xml`. The startup script should install libxslt via apk. Your xsl file and startup script can be uploaded via FTP. The following is an example startup script.
+
+```sh
+# Install libxslt. Also copy the transform file to /home/tomcat/conf/
+apk add --update libxslt
+
+# Usage: xsltproc --output output.xml style.xsl input.xml
+xsltproc --output /home/tomcat/conf/server.xml /home/tomcat/conf/transform.xsl /usr/local/tomcat/conf/server.xml
+```
+
+The following example XSL file adds a new connector node to the Tomcat server.xml.
+
+```xml
+<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+ <xsl:output method="xml" indent="yes"/>
+
+ <xsl:template match="@* | node()" name="Copy">
+ <xsl:copy>
+ <xsl:apply-templates select="@* | node()"/>
+ </xsl:copy>
+ </xsl:template>
+
+ <xsl:template match="@* | node()" mode="insertConnector">
+ <xsl:call-template name="Copy" />
+ </xsl:template>
+
+ <xsl:template match="comment()[not(../Connector[@scheme = 'https']) and
+ contains(., '&lt;Connector') and
+ (contains(., 'scheme=&quot;https&quot;') or
+ contains(., &quot;scheme='https'&quot;))]">
+ <xsl:value-of select="." disable-output-escaping="yes" />
+ </xsl:template>
+
+ <xsl:template match="Service[not(Connector[@scheme = 'https'] or
+ comment()[contains(., '&lt;Connector') and
+ (contains(., 'scheme=&quot;https&quot;') or
+ contains(., &quot;scheme='https'&quot;))]
+ )]
+ ">
+ <xsl:copy>
+ <xsl:apply-templates select="@* | node()" mode="insertConnector" />
+ </xsl:copy>
+ </xsl:template>
+
+  <!-- Add the new connector after the last existing Connector if there's one -->
+ <xsl:template match="Connector[last()]" mode="insertConnector">
+ <xsl:call-template name="Copy" />
+
+ <xsl:call-template name="AddConnector" />
+ </xsl:template>
+
+ <!-- ... or before the first Engine if there's no existing Connector -->
+ <xsl:template match="Engine[1][not(preceding-sibling::Connector)]"
+ mode="insertConnector">
+ <xsl:call-template name="AddConnector" />
+
+ <xsl:call-template name="Copy" />
+ </xsl:template>
+
+ <xsl:template name="AddConnector">
+ <!-- Add new line -->
+ <xsl:text>&#xa;</xsl:text>
+ <!-- This is the new connector -->
+ <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
+ maxThreads="150" scheme="https" secure="true"
+ keystoreFile="${{user.home}}/.keystore" keystorePass="changeit"
+ clientAuth="false" sslProtocol="TLS" />
+ </xsl:template>
+
+</xsl:stylesheet>
+```
+
+#### Finalize configuration
+
+Finally, place the driver JARs in the Tomcat classpath and restart your App Service.
+
+1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR:
+
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --target-path <jar-name>.jar
+```
+
+If you created a server-level data source, restart the App Service Linux application. Tomcat resets `CATALINA_BASE` to `/home/tomcat` and uses the updated configuration.
+++++
+There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): uploading the JDBC driver, adding the JDBC driver as a module, and registering a data source that uses the module. App Service is a stateless hosting service, so the commands that add the module and register the data source must be scripted and applied each time the container starts.
+
+1. Obtain your database's JDBC driver.
+2. Create an XML module definition file for the JDBC driver. The following example shows a module definition for PostgreSQL.
+
+ ```xml
+ <?xml version="1.0" ?>
+ <module xmlns="urn:jboss:module:1.1" name="org.postgres">
+ <resources>
+ <!-- ***** IMPORTANT : REPLACE THIS PLACEHOLDER *******-->
+ <resource-root path="/home/site/deployments/tools/postgresql-42.2.12.jar" />
+ </resources>
+ <dependencies>
+ <module name="javax.api"/>
+ <module name="javax.transaction.api"/>
+ </dependencies>
+ </module>
+ ```
+
+1. Put your JBoss CLI commands into a file named `jboss-cli-commands.cli`. The JBoss commands must add the module and register it as a data source. The following example shows the JBoss CLI commands for PostgreSQL.
+
+ ```bash
+ #!/usr/bin/env bash
+ module add --name=org.postgres --resources=/home/site/deployments/tools/postgresql-42.2.12.jar --module-xml=/home/site/deployments/tools/postgres-module.xml
+
+ /subsystem=datasources/jdbc-driver=postgres:add(driver-name="postgres",driver-module-name="org.postgres",driver-class-name=org.postgresql.Driver,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
+
+ data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=${POSTGRES_CONNECTION_URL,env.POSTGRES_CONNECTION_URL:jdbc:postgresql://db:5432/postgres} --user-name=${POSTGRES_SERVER_ADMIN_FULL_NAME,env.POSTGRES_SERVER_ADMIN_FULL_NAME:postgres} --password=${POSTGRES_SERVER_ADMIN_PASSWORD,env.POSTGRES_SERVER_ADMIN_PASSWORD:example} --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker
+ ```
+
+1. Create a startup script, `startup_script.sh`, that calls the JBoss CLI commands. The following example shows how to call your `jboss-cli-commands.cli`. Later, you'll configure App Service to run this script when the container starts.
+
+ ```bash
+ $JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/jboss-cli-commands.cli
+ ```
+
+1. Using an FTP client of your choice, upload your JDBC driver, `jboss-cli-commands.cli`, `startup_script.sh`, and the module definition to `/site/deployments/tools/`.
+2. Configure your site to run `startup_script.sh` when the container starts. In the Azure portal, navigate to **Configuration** > **General Settings** > **Startup Command**, set the startup command field to `/home/site/deployments/tools/startup_script.sh`, and then select **Save**. Alternatively, you can set the startup command with the Azure CLI, as shown in the sketch that follows.
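+
+The following is a minimal sketch of setting the same startup command with the Azure CLI:
+
+```azurecli-interactive
+az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "/home/site/deployments/tools/startup_script.sh"
+```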
+
+To confirm that the data source was added to the JBoss server, SSH into your web app and run `$JBOSS_HOME/bin/jboss-cli.sh --connect`. Once you're connected to JBoss, run the `/subsystem=datasources:read-resource` command to print a list of the data sources.
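+
+For example, a sketch of running the same check noninteractively over SSH, assuming the `--command` option is available in your JBoss EAP version:
+
+```bash
+# Connect to the local management interface and list the configured data sources.
+$JBOSS_HOME/bin/jboss-cli.sh --connect --command="/subsystem=datasources:read-resource"
+```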
++
+## Next steps
+
+Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation.
+
+- [App Service Linux FAQ](faq-app-service-linux.yml)
+- [Environment variables and app settings reference](reference-app-settings.md)
app-service Configure Language Java Deploy Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-deploy-run.md
+
+ Title: Deploy and configure Tomcat, JBoss, or Java SE apps
+description: Learn how to deploy Tomcat, JBoss, or Java SE apps to run on Azure App Service and perform common tasks like setting Java versions and configuring logging.
+keywords: azure app service, web app, windows, oss, java, tomcat, jboss, spring boot, quarkus
+ms.devlang: java
+ Last updated : 07/17/2024+
+zone_pivot_groups: app-service-java-hosting
+adobe-target: true
++++
+# Deploy and configure a Tomcat, JBoss, or Java SE app in Azure App Service
+
+This article shows you the most common deployment and runtime configuration for Java apps in App Service. If you've never used Azure App Service, you should read through the [Java quickstart](quickstart-java.md) first. General questions about using App Service that aren't specific to Java development are answered in the [App Service FAQ](faq-configuration-and-management.yml).
++
+## Show Java version
+
+# [Windows](#tab/windows)
+
+To show the current Java version, run the following command in the [Cloud Shell](https://shell.azure.com):
+
+```azurecli-interactive
+az webapp config show --name <app-name> --resource-group <resource-group-name> --query "[javaVersion, javaContainer, javaContainerVersion]"
+```
+
+To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com):
+
+```azurecli-interactive
+az webapp list-runtimes --os windows | grep java
+```
+
+# [Linux](#tab/linux)
+
+To show the current Java version, run the following command in the [Cloud Shell](https://shell.azure.com):
+
+```azurecli-interactive
+az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
+```
+
+To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com):
+
+```azurecli-interactive
+az webapp list-runtimes --os linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
+```
+++
+For more information on version support, see [App Service language runtime support policy](language-support-policy.md).
+
+## Deploying your app
+
+### Build Tools
+
+#### Maven
+
+With the [Maven Plugin for Azure Web Apps](https://github.com/microsoft/azure-maven-plugins/tree/develop/azure-webapp-maven-plugin), you can prepare your Maven Java project for Azure Web App easily with one command in your project root:
+
+```shell
+mvn com.microsoft.azure:azure-webapp-maven-plugin:2.13.0:config
+```
+
+This command adds an `azure-webapp-maven-plugin` plugin and related configuration by prompting you to select an existing Azure Web App or create a new one. During configuration, it attempts to detect whether your application should be deployed to Java SE, Tomcat, or (Linux only) JBoss EAP. Then you can deploy your Java app to Azure using the following command:
+
+```shell
+mvn package azure-webapp:deploy
+```
+
+Here's a sample configuration in `pom.xml`:
+
+```xml
+<plugin>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-webapp-maven-plugin</artifactId>
+ <version>2.11.0</version>
+ <configuration>
+ <subscriptionId>111111-11111-11111-1111111</subscriptionId>
+ <resourceGroup>spring-boot-xxxxxxxxxx-rg</resourceGroup>
+ <appName>spring-boot-xxxxxxxxxx</appName>
+ <pricingTier>B2</pricingTier>
+ <region>westus</region>
+ <runtime>
+ <os>Linux</os>
+ <webContainer>Java SE</webContainer>
+ <javaVersion>Java 17</javaVersion>
+ </runtime>
+ <deployment>
+ <resources>
+ <resource>
+ <type>jar</type>
+ <directory>${project.basedir}/target</directory>
+ <includes>
+ <include>*.jar</include>
+ </includes>
+ </resource>
+ </resources>
+ </deployment>
+ </configuration>
+</plugin>
+```
+
+#### Gradle
+
+1. Set up the [Gradle Plugin for Azure Web Apps](https://github.com/microsoft/azure-gradle-plugins/tree/master/azure-webapp-gradle-plugin) by adding the plugin to your `build.gradle`:
+
+ ```groovy
+ plugins {
+ id "com.microsoft.azure.azurewebapp" version "1.10.0"
+ }
+ ```
+
+1. Configure your web app details. The corresponding Azure resources are created if they don't exist.
+Here's a sample configuration. For details, refer to this [document](https://github.com/microsoft/azure-gradle-plugins/wiki/Webapp-Configuration).
+
+ ```groovy
+ azurewebapp {
+ subscription = '<your subscription id>'
+ resourceGroup = '<your resource group>'
+ appName = '<your app name>'
+ pricingTier = '<price tier like 'P1v2'>'
+ region = '<region like 'westus'>'
+ runtime {
+ os = 'Linux'
+ webContainer = 'Tomcat 10.0' // or 'Java SE' if you want to run an executable jar
+ javaVersion = 'Java 17'
+ }
+ appSettings {
+ <key> = <value>
+ }
+ auth {
+ type = 'azure_cli' // support azure_cli, oauth2, device_code and service_principal
+ }
+ }
+ ```
+
+1. Deploy with one command.
+
+ ```shell
+ gradle azureWebAppDeploy
+ ```
+
+### IDEs
+
+Azure provides a seamless Java App Service development experience in popular Java IDEs, including:
+
+- *VS Code*: [Java Web Apps with Visual Studio Code](https://code.visualstudio.com/docs/java/java-webapp#_deploy-web-apps-to-the-cloud)
+- *IntelliJ IDEA*: [Create a Hello World web app for Azure App Service using IntelliJ](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app)
+- *Eclipse*: [Create a Hello World web app for Azure App Service using Eclipse](/azure/developer/java/toolkit-for-eclipse/create-hello-world-web-app)
+
+### Kudu API
++
+To deploy .jar files to Java SE, use the `/api/publish` endpoint of the Kudu site. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
+
+> [!NOTE]
+> Your .jar application must be named `app.jar` for App Service to identify and run your application. The [Maven plugin](#maven) does this for you automatically during deployment. If you don't wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script doesn't run from the directory into which it's placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
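+
+If you prefer the Azure CLI, the following is a minimal sketch of publishing a JAR (the path to the JAR is illustrative):
+
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path target/app.jar --type jar
+```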
+++
+To deploy .war files to Tomcat, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
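+
+For example, a sketch of posting a WAR to this endpoint with curl, authenticating with your deployment credentials (the placeholders are illustrative):
+
+```bash
+curl -X POST -u '<username>:<password>' --data-binary @"<war-file-path>" https://<app-name>.scm.azurewebsites.net/api/wardeploy
+```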
+++
+To deploy .war files to JBoss, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
+
+To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application is deployed to the context root defined in your application's configuration. For example, if the context root of your app is `<context-root>myapp</context-root>`, then you can browse the site at the `/myapp` path: `http://my-app-name.azurewebsites.net/myapp`. If you want your web app to be served in the root path, ensure that your app sets the context root to the root path: `<context-root>/</context-root>`. For more information, see [Setting the context root of a web application](https://docs.jboss.org/jbossas/guides/webguide/r2/en/html/ch06.html).
++
+Don't deploy your .war or .jar using FTP. The FTP tool is designed to upload startup scripts, dependencies, or other runtime files. It's not the optimal choice for deploying web apps.
+
+## Logging and debugging apps
+
+Performance reports, traffic visualizations, and health checkups are available for each app through the Azure portal. For more information, see [Azure App Service diagnostics overview](overview-diagnostics.md).
+
+### Stream diagnostic logs
+
+# [Windows](#tab/windows)
++
+# [Linux](#tab/linux)
++++
+For more information, see [Stream logs in Cloud Shell](troubleshoot-diagnostic-logs.md#in-cloud-shell).
+
+### SSH console access in Linux
++
+### Linux troubleshooting tools
+
+The built-in Java images are based on the [Alpine Linux](https://alpine-linux.readthedocs.io/en/latest/getting_started.html) operating system. Use the `apk` package manager to install any troubleshooting tools or commands.
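+
+For example, a quick sketch over SSH (the package names are illustrative):
+
+```sh
+apk update
+apk add curl vim
+```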
+
+### Java Profiler
+
+All Java runtimes on Azure App Service come with the JDK Flight Recorder for profiling Java workloads. You can use it to record JVM, system, and application events and troubleshoot problems in your applications.
+
+To learn more about the Java Profiler, visit the [Azure Application Insights documentation](/azure/azure-monitor/app/java-standalone-profiler).
+
+### Flight Recorder
+
+All Java runtimes on App Service come with the Java Flight Recorder. You can use it to record JVM, system, and application events and troubleshoot problems in your Java applications.
+
+# [Windows](#tab/windows)
+
+#### Timed Recording
+
+To take a timed recording, you need the PID (Process ID) of the Java application. To find the PID, open a browser to your web app's SCM site at `https://<your-site-name>.scm.azurewebsites.net/ProcessExplorer/`. This page shows the running processes in your web app. Find the process named "java" in the table and copy the corresponding PID (Process ID).
+
+Next, open the **Debug Console** in the top toolbar of the SCM site and run the following command. Replace `<pid>` with the process ID you copied earlier. This command starts a 30-second profiler recording of your Java application and generates a file named `timed_recording_example.jfr` in the `C:\home` directory.
+
+```
+jcmd <pid> JFR.start name=TimedRecording settings=profile duration=30s filename="C:\home\timed_recording_example.JFR"
+```
+
+# [Linux](#tab/linux)
+
+SSH into your App Service and run the `jcmd` command to see a list of all the Java processes running. In addition to jcmd itself, you should see your Java application running with a process ID number (pid).
+
+```shell
+078990bbcd11:/home# jcmd
+Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true
+147 sun.tools.jcmd.JCmd
+116 /home/site/wwwroot/app.jar
+```
+
+Execute the following command to start a 30-second recording of the JVM. It profiles the JVM and creates a JFR file named *jfr_example.jfr* in the home directory. (Replace 116 with the pid of your Java app.)
+
+```shell
+jcmd 116 JFR.start name=MyRecording settings=profile duration=30s filename="/home/jfr_example.jfr"
+```
+
+During the 30-second interval, you can validate the recording is taking place by running `jcmd 116 JFR.check`. The command shows all recordings for the given Java process.
+
+#### Continuous Recording
+
+You can use Java Flight Recorder to continuously profile your Java application with minimal impact on runtime performance. To do so, run the following Azure CLI command to create an App Setting named JAVA_OPTS with the necessary configuration. The contents of the JAVA_OPTS App Setting are passed to the `java` command when your app is started.
+
+```azurecli
+az webapp config appsettings set -g <your_resource_group> -n <your_app_name> --settings JAVA_OPTS=-XX:StartFlightRecording=disk=true,name=continuous_recording,dumponexit=true,maxsize=1024m,maxage=1d
+```
+
+Once the recording starts, you can dump the current recording data at any time using the `JFR.dump` command.
+
+```shell
+jcmd <pid> JFR.dump name=continuous_recording filename="/home/recording1.jfr"
+```
+++
+#### Analyze `.jfr` files
+
+Use [FTPS](deploy-ftp.md) to download your JFR file to your local machine. To analyze the JFR file, download and install [Java Mission Control](https://www.oracle.com/java/technologies/javase/products-jmc8-downloads.html). For instructions on Java Mission Control, see the [JMC documentation](https://docs.oracle.com/en/java/java-components/jdk-mission-control/) and the [installation instructions](https://www.oracle.com/java/technologies/javase/jmc8-install.html).
+
+### App logging
+
+# [Windows](#tab/windows)
+
+Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. Logging to the local App Service filesystem instance is disabled 12 hours after you enable it. If you need longer retention, configure the application to write output to a Blob storage container.
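+
+For example, a sketch that turns on filesystem application logging with the Azure CLI (the accepted parameter values can vary by CLI version):
+
+```azurecli-interactive
+az webapp log config --name <app-name> --resource-group <resource-group-name> --application-logging filesystem
+```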
++
+Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
++
+# [Linux](#tab/linux)
+
+Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. If you need longer retention, configure the application to write output to a Blob storage container.
++
+Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
++
+Azure Blob Storage logging for Linux-based apps can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
+++
+If your application uses [Logback](https://logback.qos.ch/) or [Log4j](https://logging.apache.org/log4j) for tracing, you can forward these traces for review into Azure Application Insights using the logging framework configuration instructions in [Explore Java trace logs in Application Insights](/previous-versions/azure/azure-monitor/app/deprecated-java-2x#explore-java-trace-logs-in-application-insights).
+
+> [!NOTE]
+> Due to known vulnerability [CVE-2021-44228](https://logging.apache.org/log4j/2.x/security.html), be sure to use Log4j version 2.16 or later.
+
+## Customization and tuning
+
+Azure App Service supports out-of-the-box tuning and customization through the Azure portal and CLI. Review the following articles for non-Java-specific web app configuration:
+
+- [Configure app settings](configure-common.md#configure-app-settings)
+- [Set up a custom domain](app-service-web-tutorial-custom-domain.md)
+- [Configure TLS/SSL bindings](configure-ssl-bindings.md)
+- [Add a CDN](../cdn/cdn-add-to-web-app.md)
+- [Configure the Kudu site](https://github.com/projectkudu/kudu/wiki/Configurable-settings#linux-on-app-service-settings)
+
+### Copy App Content Locally
+
+Set the app setting `JAVA_COPY_ALL` to `true` to copy your app contents to the local worker from the shared file system. This setting helps address file-locking issues.
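+
+For example, a minimal sketch using the Azure CLI:
+
+```azurecli-interactive
+az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings JAVA_COPY_ALL=true
+```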
+
+### Set Java runtime options
+
+To set allocated memory or other JVM runtime options, create an [app setting](configure-common.md#configure-app-settings) named `JAVA_OPTS` with the options. App Service passes this setting as an environment variable to the Java runtime when it starts.
++
+In the Azure portal, under **Application Settings** for the web app, create a new app setting named `JAVA_OPTS` that includes other settings, such as `-Xms512m -Xmx1024m`.
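+
+A sketch of creating the same setting with the Azure CLI (the heap values are illustrative):
+
+```azurecli-interactive
+az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings JAVA_OPTS="-Xms512m -Xmx1024m"
+```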
+++
+In the Azure portal, under **Application Settings** for the web app, create a new app setting named `CATALINA_OPTS` that includes other settings, such as `-Xms512m -Xmx1024m`.
++
+To configure the app setting from the Maven plugin, add setting/value tags in the Azure plugin section. The following example sets a specific minimum and maximum Java heap size:
+
+```xml
+<appSettings>
+ <property>
+ <name>JAVA_OPTS</name>
+ <value>-Xms1024m -Xmx1024m</value>
+ </property>
+</appSettings>
+```
++
+> [!NOTE]
+> You don't need to create a web.config file when using Tomcat on Windows App Service.
++
+Developers running a single application with one deployment slot in their App Service plan can use the following options:
+
+- B1 and S1 instances: `-Xms1024m -Xmx1024m`
+- B2 and S2 instances: `-Xms3072m -Xmx3072m`
+- B3 and S3 instances: `-Xms6144m -Xmx6144m`
+- P1v2 instances: `-Xms3072m -Xmx3072m`
+- P2v2 instances: `-Xms6144m -Xmx6144m`
+- P3v2 instances: `-Xms12800m -Xmx12800m`
+- P1v3 instances: `-Xms6656m -Xmx6656m`
+- P2v3 instances: `-Xms14848m -Xmx14848m`
+- P3v3 instances: `-Xms30720m -Xmx30720m`
+- I1 instances: `-Xms3072m -Xmx3072m`
+- I2 instances: `-Xms6144m -Xmx6144m`
+- I3 instances: `-Xms12800m -Xmx12800m`
+- I1v2 instances: `-Xms6656m -Xmx6656m`
+- I2v2 instances: `-Xms14848m -Xmx14848m`
+- I3v2 instances: `-Xms30720m -Xmx30720m`
+
+When tuning application heap settings, review your App Service plan details and take into account multiple applications and deployment slot needs to find the optimal allocation of memory.
+
+### Turn on web sockets
+
+Turn on support for web sockets in the Azure portal in the **Application settings** for the application. You need to restart the application for the setting to take effect.
+
+Turn on web socket support using the Azure CLI with the following command:
+
+```azurecli-interactive
+az webapp config set --name <app-name> --resource-group <resource-group-name> --web-sockets-enabled true
+```
+
+Then restart your application:
+
+```azurecli-interactive
+az webapp stop --name <app-name> --resource-group <resource-group-name>
+az webapp start --name <app-name> --resource-group <resource-group-name>
+```
+
+### Set default character encoding
+
+In the Azure portal, under **Application Settings** for the web app, create a new app setting named `JAVA_OPTS` with value `-Dfile.encoding=UTF-8`.
+
+Alternatively, you can configure the app setting using the App Service Maven plugin. Add the setting name and value tags in the plugin configuration:
+
+```xml
+<appSettings>
+ <property>
+ <name>JAVA_OPTS</name>
+ <value>-Dfile.encoding=UTF-8</value>
+ </property>
+</appSettings>
+```
++
+### Pre-Compile JSP files
+
+To improve performance of Tomcat applications, you can compile your JSP files before deploying to App Service. You can use the [Maven plugin](https://sling.apache.org/components/jspc-maven-plugin/plugin-info.html) provided by Apache Sling, or use this [Ant build file](https://tomcat.apache.org/tomcat-9.0-doc/jasper-howto.html#Web_Application_Compilation).
++
++
+## Choosing a Java runtime version
+
+App Service lets you choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new minor versions become available. In most cases, production apps should use pinned patch JVM versions to prevent unanticipated outages during a patch version autoupdate. All Java web apps use 64-bit JVMs; this isn't configurable.
++
+If you're using Tomcat, you can choose to pin the patch version of Tomcat. On Windows, you can pin the patch versions of the JVM and Tomcat independently. On Linux, you can pin the patch version of Tomcat; the patch version of the JVM is also pinned but isn't separately configurable.
++
+If you choose to pin the minor version, you need to periodically update the JVM minor version on the app. To ensure that your application runs on the newer minor version, create a staging slot and increment the minor version on the staging slot. Once you confirm the application runs correctly on the new minor version, you can swap the staging and production slots.
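+
+The following is a sketch of pinning versions with the Azure CLI. The version strings shown are examples only; run `az webapp list-runtimes` to see the values your CLI accepts.
+
+```azurecli-interactive
+# Windows apps: pin the JVM and Tomcat versions independently (example values).
+az webapp config set --resource-group <resource-group-name> --name <app-name> --java-version 17 --java-container Tomcat --java-container-version 10.0
+
+# Linux apps: the runtime is a single string (example value).
+az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "TOMCAT|10.0-java17"
+```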
++
+## Clustering
+
+App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To enable clustering, your web app must be [integrated with a virtual network](overview-vnet-integration.md). When the web app is integrated with a virtual network, it restarts, and the JBoss EAP installation automatically starts up with a clustered configuration. The JBoss EAP instances communicate over the subnet specified in the virtual network integration, using the ports shown in the `WEBSITES_PRIVATE_PORTS` environment variable at runtime. You can disable clustering by creating an app setting named `WEBSITE_DISABLE_CLUSTERING` with any value.
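+
+For example, a minimal sketch of disabling clustering with the Azure CLI:
+
+```azurecli-interactive
+az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings WEBSITE_DISABLE_CLUSTERING=true
+```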
+
+> [!NOTE]
+> If you're enabling your virtual network integration with an ARM template, you need to manually set the property `vnetPrivatePorts` to a value of `2`. If you enable virtual network integration from the CLI or Portal, this property is set for you automatically.
+
+When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start obtains read/write permissions on the cluster membership file. Other instances read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file.
+
+> [!Note]
+> You can avoid JBoss clustering timeouts by [cleaning up obsolete discovery files during your app startup](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/JBOSS/avoid_timeouts_obsolete_nodes.md).
+
+The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](../availability-zones/migrate-app-service.md). The JBoss EAP clustering feature is compatible with the zone redundancy feature.
+
+### Autoscale Rules
+
+When configuring autoscale rules for horizontal scaling, it's important to remove instances incrementally (one at a time) to ensure each removed instance can transfer its activity (such as handling a database transaction) to another member of the cluster. When configuring your autoscale rules in the portal to scale down, use the following options (a CLI sketch follows the list):
+
+- **Operation**: "Decrease count by"
+- **Cool down**: "5 minutes" or greater
+- **Instance count**: 1
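+
+The following is a sketch of an equivalent scale-in rule created with the Azure CLI; the autoscale setting name, metric, and threshold are illustrative:
+
+```azurecli-interactive
+az monitor autoscale rule create --resource-group <resource-group-name> --autoscale-name <autoscale-setting-name> --condition "CpuPercentage < 30 avg 10m" --scale in 1 --cooldown 5
+```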
+
+You don't need to add instances incrementally when scaling out; you can add multiple instances to the cluster at a time.
+
+## App Service plans
+
+<a id="jboss-eap-hardware-options"></a>
+
+JBoss EAP is only available on the Premium v3 and Isolated v2 App Service plan types. Customers that created a JBoss EAP site on a different tier during the public preview should scale up to a Premium v3 or Isolated v2 tier to avoid unexpected behavior.
+++
+## Tomcat baseline configuration
+
+> [!NOTE]
+> This section applies to Linux only.
+
+Java developers can customize the server settings, troubleshoot issues, and deploy applications to Tomcat with confidence if they know about the server.xml file and configuration details of Tomcat. Possible customizations include:
+
+* Customizing Tomcat configuration: By understanding the server.xml file and Tomcat's configuration details, you can fine-tune the server settings to match the needs of your applications.
+* Debugging: When an application is deployed on a Tomcat server, developers need to know the server configuration to debug any issues that might arise. This includes checking the server logs, examining the configuration files, and identifying any errors that might be occurring.
+* Troubleshooting Tomcat issues: Inevitably, Java developers encounter issues with their Tomcat server, such as performance problems or configuration errors. By understanding the server.xml file and Tomcat's configuration details, developers can quickly diagnose and troubleshoot these issues, which can save time and effort.
+* Deploying applications to Tomcat: To deploy a Java web application to Tomcat, developers need to know how to configure the server.xml file and other Tomcat settings. Understanding these details is essential for deploying applications successfully and ensuring that they run smoothly on the server.
+
+When you create an app with built-in Tomcat to host your Java workload (a WAR file or a JAR file), there are certain settings that you get out of the box for Tomcat configuration. You can refer to the [Official Apache Tomcat Documentation](https://tomcat.apache.org/) for detailed information, including the default configuration for Tomcat Web Server.
+
+Additionally, there are certain transformations that are further applied on top of the server.xml for the Tomcat distribution at startup. These are transformations to the Connector, Host, and Valve settings.
+
+These transforms apply to the latest versions of Tomcat (8.5.58 and 9.0.38 onward). Older versions of Tomcat don't use transforms and might behave differently as a result.
+
+### Connector
+
+```xml
+<Connector port="${port.http}" address="127.0.0.1" maxHttpHeaderSize="16384" compression="on" URIEncoding="UTF-8" connectionTimeout="${site.connectionTimeout}" maxThreads="${catalina.maxThreads}" maxConnections="${catalina.maxConnections}" protocol="HTTP/1.1" redirectPort="8443"/>
+ ```
+* `maxHttpHeaderSize` is set to `16384`
+* `URIEncoding` is set to `UTF-8`
+* `connectionTimeout` is set to `WEBSITE_TOMCAT_CONNECTION_TIMEOUT`, which defaults to `240000`
+* `maxThreads` is set to `WEBSITE_CATALINA_MAXTHREADS`, which defaults to `200`
+* `maxConnections` is set to `WEBSITE_CATALINA_MAXCONNECTIONS`, which defaults to `10000`
+
+> [!NOTE]
+> The `connectionTimeout`, `maxThreads`, and `maxConnections` settings can be tuned with app settings.
+
+Following are example CLI commands that you might use to alter the values of `connectionTimeout`, `maxThreads`, or `maxConnections`:
+
+```azurecli-interactive
+az webapp config appsettings set --resource-group myResourceGroup --name myApp --settings WEBSITE_TOMCAT_CONNECTION_TIMEOUT=120000
+```
+```azurecli-interactive
+az webapp config appsettings set --resource-group myResourceGroup --name myApp --settings WEBSITE_CATALINA_MAXTHREADS=100
+```
+```azurecli-interactive
+az webapp config appsettings set --resource-group myResourceGroup --name myApp --settings WEBSITE_CATALINA_MAXCONNECTIONS=5000
+```
+Additionally, the Connector uses the address of the container instead of 127.0.0.1.
+
+### Host
+
+```xml
+<Host appBase="${site.appbase}" xmlBase="${site.xmlbase}" unpackWARs="${site.unpackwars}" workDir="${site.tempdir}" errorReportValveClass="com.microsoft.azure.appservice.AppServiceErrorReportValve" name="localhost" autoDeploy="true">
+```
+
+* `appBase` is set to `AZURE_SITE_APP_BASE`, which defaults to local `WebappsLocalPath`
+* `xmlBase` is set to `AZURE_SITE_HOME`, which defaults to `/site/wwwroot`
+* `unpackWARs` is set to `AZURE_UNPACK_WARS`, which defaults to `true`
+* `workDir` is set to `JAVA_TMP_DIR`, which defaults to `TMP`
+* `errorReportValveClass` uses our custom error report valve
+
+### Valve
+
+```xml
+<Valve prefix="site_access_log.${catalina.instance.name}" pattern="%h %l %u %t &quot;%r&quot; %s %b %D %{x-arr-log-id}i" directory="${site.logdir}/http/RawLogs" maxDays="${site.logRetentionDays}" className="org.apache.catalina.valves.AccessLogValve" suffix=".txt"/>
+ ```
+* `directory` is set to `AZURE_LOGGING_DIR`, which defaults to `home\logFiles`
+* `maxDays` is set to `WEBSITE_HTTPLOGGING_RETENTION_DAYS`, which defaults to `0` (forever)
+
+On Linux, the configuration has all of the same customizations, plus:
+
+* Adds some error and reporting pages to the valve:
+
+ ```xml
+ <xsl:attribute name="appServiceErrorPage">
+ <xsl:value-of select="'${appService.valves.appServiceErrorPage}'"/>
+ </xsl:attribute>
+
+ <xsl:attribute name="showReport">
+ <xsl:value-of select="'${catalina.valves.showReport}'"/>
+ </xsl:attribute>
+
+ <xsl:attribute name="showServerInfo">
+ <xsl:value-of select="'${catalina.valves.showServerInfo}'"/>
+ </xsl:attribute>
+ ```
++
+## Next steps
+
+Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation.
+
+- [App Service Linux FAQ](faq-app-service-linux.yml)
+- [Environment variables and app settings reference](reference-app-settings.md)
app-service Configure Language Java Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-security.md
+
+ Title: Configure security for Tomcat, JBoss, or Java SE apps
+description: Learn how to configure security for Tomcat, JBoss, or Java SE apps on Azure App Service, such as authentication, Key Vault references, and Java key store.
+keywords: azure app service, web app, windows, oss, java, tomcat, jboss
+ms.devlang: java
+ Last updated : 07/17/2024+
+zone_pivot_groups: app-service-java-hosting
+adobe-target: true
++++
+# Configure security for a Tomcat, JBoss, or Java SE app in Azure App Service
+
+This article shows how to configure Java-specific security settings in App Service. Java applications running in App Service have the same set of [security best practices](../security/fundamentals/paas-applications-using-app-services.md) as other applications.
++
+## Authenticate users (Easy Auth)
+
+Set up app authentication in the Azure portal with the **Authentication and Authorization** option. From there, you can enable authentication using Microsoft Entra ID or social sign-ins like Facebook, Google, or GitHub. Azure portal configuration only works when configuring a single authentication provider. For more information, see [Configure your App Service app to use Microsoft Entra sign-in](configure-authentication-provider-aad.md) and the related articles for other identity providers. If you need to enable multiple sign-in providers, follow the instructions in [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md).
++
+Spring Boot developers can use the [Microsoft Entra Spring Boot starter](/java/azure/spring-framework/configure-spring-boot-starter-java-app-with-azure-active-directory) to secure applications using familiar Spring Security annotations and APIs. Be sure to increase the maximum header size in your *application.properties* file. We suggest a value of `16384`.
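+
+With Spring Boot 2.x, this is typically the `server.max-http-header-size` property (for example, `server.max-http-header-size=16384`); in Spring Boot 3.x, the equivalent property is `server.max-http-request-header-size`.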
+++
+Your Tomcat application can access the user's claims directly from the servlet by casting the Principal object to a Map object. The `Map` object maps each claim type to a collection of the claims for that type. In the following code example, `request` is an instance of `HttpServletRequest`.
+
+```java
+Map<String, Collection<String>> map = (Map<String, Collection<String>>) request.getUserPrincipal();
+```
+
+Now you can inspect the `Map` object for any specific claim. For example, the following code snippet iterates through all the claim types and prints the contents of each collection.
+
+```java
+for (Object key : map.keySet()) {
+    Object value = map.get(key);
+    // Each value is a collection of claims for that claim type.
+    if (value != null && value instanceof Collection) {
+        Collection claims = (Collection) value;
+        for (Object claim : claims) {
+            System.out.println(claim);
+        }
+    }
+}
+```
+
+To sign out users, use the `/.auth/ext/logout` path. To perform other actions, see the documentation on [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md). There's also official documentation on the Tomcat [HttpServletRequest interface](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletRequest.html) and its methods. The following servlet methods are also hydrated based on your App Service configuration:
+
+```java
+public boolean isSecure()
+public String getRemoteAddr()
+public String getRemoteHost()
+public String getScheme()
+public int getServerPort()
+```
+
+To disable this feature, create an Application Setting named `WEBSITE_AUTH_SKIP_PRINCIPAL` with a value of `1`. To disable all servlet filters added by App Service, create a setting named `WEBSITE_SKIP_FILTERS` with a value of `1`.
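+
+For example, a minimal sketch of creating one of these settings with the Azure CLI:
+
+```azurecli-interactive
+az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings WEBSITE_AUTH_SKIP_PRINCIPAL=1
+```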
+++
+For JBoss EAP, see the Tomcat tab.
++
+## Configure TLS/SSL
+
+To upload an existing TLS/SSL certificate and bind it to your application's domain name, follow the instructions in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). You can also configure the app to enforce TLS/SSL.
+
+## Use KeyVault References
+
+[Azure KeyVault](../key-vault/general/overview.md) provides centralized secret management with access policies and audit history. You can store secrets (such as passwords or connection strings) in KeyVault and access these secrets in your application through environment variables.
+
+First, follow the instructions for [granting your app access to a key vault](app-service-key-vault-references.md#grant-your-app-access-to-a-key-vault) and [making a KeyVault reference to your secret in an Application Setting](app-service-key-vault-references.md#source-app-settings-from-key-vault). You can validate that the reference resolves to the secret by printing the environment variable while remotely accessing the App Service terminal.
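+
+For example, a sketch of creating an app setting that resolves a Key Vault secret; the setting name `DB_PASSWORD` and the vault and secret names are illustrative:
+
+```azurecli-interactive
+az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings DB_PASSWORD="@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"
+```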
++
+For Spring configuration files, see this documentation on [externalized configurations](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html).
+
+To inject these secrets in your Spring configuration file, use environment variable injection syntax (`${MY_ENV_VAR}`).
+++
+To inject these secrets in your Tomcat configuration file, use environment variable injection syntax (`${MY_ENV_VAR}`).
++
+## Use the Java key store in Linux
+
+By default, any public or private certificates [uploaded to App Service Linux](configure-ssl-certificate.md) are loaded into the respective Java key stores as the container starts. After uploading your certificate, you'll need to restart your App Service for it to be loaded into the Java key store. Public certificates are loaded into the key store at `$JRE_HOME/lib/security/cacerts`, and private certificates are stored in `$JRE_HOME/lib/security/client.jks`.
+
+More configuration might be necessary for encrypting your JDBC connection with certificates in the Java key store. Refer to the documentation for your chosen JDBC driver.
+
+- [PostgreSQL](https://jdbc.postgresql.org/documentation/ssl/)
+- [SQL Server](/sql/connect/jdbc/connecting-with-ssl-encryption)
+- [MongoDB](https://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/ssl/)
+- [Cassandra](https://docs.datastax.com/en/developer/java-driver/4.3/)
+
+### Initialize the Java key store in Linux
+
+To initialize the `java.security.KeyStore` object, load the key store file with its password. The default password for both key stores is `changeit`.
+
+```java
+// Load the public certificates from the default trust store (cacerts).
+KeyStore trustStore = KeyStore.getInstance("jks");
+trustStore.load(
+    new FileInputStream(System.getenv("JRE_HOME") + "/lib/security/cacerts"),
+    "changeit".toCharArray());
+
+// Load the private certificates from client.jks.
+KeyStore clientStore = KeyStore.getInstance("pkcs12");
+clientStore.load(
+    new FileInputStream(System.getenv("JRE_HOME") + "/lib/security/client.jks"),
+    "changeit".toCharArray());
+```
+
+### Manually load the key store in Linux
+
+You can load certificates manually to the key store. Create an app setting, `SKIP_JAVA_KEYSTORE_LOAD`, with a value of `1` to disable App Service from loading the certificates into the key store automatically. All public certificates uploaded to App Service via the Azure portal are stored under `/var/ssl/certs/`. Private certificates are stored under `/var/ssl/private/`.
+
+You can interact with the Java Key Tool by [opening an SSH connection](configure-linux-open-ssh-session.md) to your App Service and running the `keytool` command. See the [Key Tool documentation](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html) for a list of commands. For more information on the KeyStore API, see [the official documentation](https://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html).
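+
+For example, a sketch of inspecting the default trust store and importing one of the certificates that App Service placed under `/var/ssl/certs/` (the alias and file name are illustrative):
+
+```sh
+# List the certificates in the default Java trust store (default password "changeit").
+keytool -list -keystore $JRE_HOME/lib/security/cacerts -storepass changeit
+
+# Import a public certificate into the trust store.
+keytool -importcert -alias <alias> -file /var/ssl/certs/<certificate-file>.der -keystore $JRE_HOME/lib/security/cacerts -storepass changeit
+```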
+
+## Next steps
+
+Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation.
+
+- [App Service Linux FAQ](faq-app-service-linux.yml)
+- [Environment variables and app settings reference](reference-app-settings.md)
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
- Title: Configure Java apps
-description: Learn how to configure Java apps to run on Azure App Service. This article shows the most common configuration tasks.
-keywords: azure app service, web app, windows, oss, java, tomcat, jboss
- Previously updated : 04/12/2024-
-zone_pivot_groups: app-service-platform-windows-linux
-adobe-target: true
----
-# Configure a Java app for Azure App Service
-
-> [!NOTE]
-> For Spring applications, we recommend using Azure Spring Apps. However, you can still use Azure App Service as a destination. See [Java Workload Destination Guidance](https://aka.ms/javadestinations) for advice.
-
-Azure App Service lets Java developers to quickly build, deploy, and scale their Java SE, Tomcat, and JBoss EAP web applications on a fully managed service. Deploy applications with Maven plugins, from the command line, or in editors like IntelliJ, Eclipse, or Visual Studio Code.
-
-This guide provides key concepts and instructions for Java developers using App Service. If you've never used Azure App Service, you should read through the [Java quickstart](quickstart-java.md) first. General questions about using App Service that aren't specific to Java development are answered in the [App Service FAQ](faq-configuration-and-management.yml).
-
-## Show Java version
--
-To show the current Java version, run the following command in the [Cloud Shell](https://shell.azure.com):
-
-```azurecli-interactive
-az webapp config show --name <app-name> --resource-group <resource-group-name> --query "[javaVersion, javaContainer, javaContainerVersion]"
-```
-
-To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com):
-
-```azurecli-interactive
-az webapp list-runtimes --os windows | grep java
-```
---
-To show the current Java version, run the following command in the [Cloud Shell](https://shell.azure.com):
-
-```azurecli-interactive
-az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
-```
-
-To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com):
-
-```azurecli-interactive
-az webapp list-runtimes --os linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
-```
--
-For more information on version support, see [App Service language runtime support policy](language-support-policy.md).
-
-## Deploying your app
-
-### Build Tools
-
-#### Maven
-
-With the [Maven Plugin for Azure Web Apps](https://github.com/microsoft/azure-maven-plugins/tree/develop/azure-webapp-maven-plugin), you can prepare your Maven Java project for Azure Web App easily with one command in your project root:
-
-```shell
-mvn com.microsoft.azure:azure-webapp-maven-plugin:2.11.0:config
-```
-
-This command adds an `azure-webapp-maven-plugin` plugin and related configuration by prompting you to select an existing Azure Web App or create a new one. Then you can deploy your Java app to Azure using the following command:
-
-```shell
-mvn package azure-webapp:deploy
-```
-
-Here's a sample configuration in `pom.xml`:
-
-```xml
-<plugin>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>azure-webapp-maven-plugin</artifactId>
- <version>2.11.0</version>
- <configuration>
- <subscriptionId>111111-11111-11111-1111111</subscriptionId>
- <resourceGroup>spring-boot-xxxxxxxxxx-rg</resourceGroup>
- <appName>spring-boot-xxxxxxxxxx</appName>
- <pricingTier>B2</pricingTier>
- <region>westus</region>
- <runtime>
- <os>Linux</os>
- <webContainer>Java SE</webContainer>
- <javaVersion>Java 11</javaVersion>
- </runtime>
- <deployment>
- <resources>
- <resource>
- <type>jar</type>
- <directory>${project.basedir}/target</directory>
- <includes>
- <include>*.jar</include>
- </includes>
- </resource>
- </resources>
- </deployment>
- </configuration>
-</plugin>
-```
-
-#### Gradle
-
-1. Set up the [Gradle Plugin for Azure Web Apps](https://github.com/microsoft/azure-gradle-plugins/tree/master/azure-webapp-gradle-plugin) by adding the plugin to your `build.gradle`:
-
- ```groovy
- plugins {
- id "com.microsoft.azure.azurewebapp" version "1.7.1"
- }
- ```
-
-1. Configure your web app details. The corresponding Azure resources are created if they don't exist.
-Here's a sample configuration, for details, refer to this [document](https://github.com/microsoft/azure-gradle-plugins/wiki/Webapp-Configuration).
-
- ```groovy
- azurewebapp {
- subscription = '<your subscription id>'
- resourceGroup = '<your resource group>'
- appName = '<your app name>'
- pricingTier = '<price tier like 'P1v2'>'
- region = '<region like 'westus'>'
- runtime {
- os = 'Linux'
- webContainer = 'Tomcat 9.0' // or 'Java SE' if you want to run an executable jar
- javaVersion = 'Java 8'
- }
- appSettings {
- <key> = <value>
- }
- auth {
- type = 'azure_cli' // support azure_cli, oauth2, device_code and service_principal
- }
- }
- ```
-
-1. Deploy with one command.
-
- ```shell
- gradle azureWebAppDeploy
- ```
-
-### IDEs
-
-Azure provides seamless Java App Service development experience in popular Java IDEs, including:
-- *VS Code*: [Java Web Apps with Visual Studio Code](https://code.visualstudio.com/docs/java/java-webapp#_deploy-web-apps-to-the-cloud)
-- *IntelliJ IDEA*:[Create a Hello World web app for Azure App Service using IntelliJ](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app)
-- *Eclipse*:[Create a Hello World web app for Azure App Service using Eclipse](/azure/developer/java/toolkit-for-eclipse/create-hello-world-web-app)
-
-### Kudu API
-
-#### Java SE
-
-To deploy .jar files to Java SE, use the `/api/publish/` endpoint of the Kudu site. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
-
-> [!NOTE]
-> Your .jar application must be named `app.jar` for App Service to identify and run your application. The [Maven plugin](#maven) does this for you automatically during deployment. If you don't wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script doesn't run from the directory into which it's placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
-
-#### Tomcat
-
-To deploy .war files to Tomcat, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
--
-#### JBoss EAP
-
-To deploy .war files to JBoss, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
-
-To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application is deployed to the context root defined in your application's configuration. For example, if the context root of your app is `<context-root>myapp</context-root>`, then you can browse the site at the `/myapp` path: `http://my-app-name.azurewebsites.net/myapp`. If you want your web app to be served in the root path, ensure that your app sets the context root to the root path: `<context-root>/</context-root>`. For more information, see [Setting the context root of a web application](https://docs.jboss.org/jbossas/guides/webguide/r2/en/html/ch06.html).
--
-Don't deploy your .war or .jar using FTP. The FTP tool is designed to upload startup scripts, dependencies, or other runtime files. It's not the optimal choice for deploying web apps.
-
-## Logging and debugging apps
-
-Performance reports, traffic visualizations, and health checkups are available for each app through the Azure portal. For more information, see [Azure App Service diagnostics overview](overview-diagnostics.md).
-
-### Stream diagnostic logs
------
-For more information, see [Stream logs in Cloud Shell](troubleshoot-diagnostic-logs.md#in-cloud-shell).
--
-### SSH console access
--
-### Troubleshooting tools
-
-The built-in Java images are based on the [Alpine Linux](https://alpine-linux.readthedocs.io/en/latest/getting_started.html) operating system. Use the `apk` package manager to install any troubleshooting tools or commands.
--
-### Java Profiler
-
-All Java runtimes on Azure App Service come with the JDK Flight Recorder for profiling Java workloads. You can use it to record JVM, system, and application events and troubleshoot problems in your applications.
-
-To learn more about the Java Profiler, visit the [Azure Application Insights documentation](/azure/azure-monitor/app/java-standalone-profiler).
-
-### Flight Recorder
-
-All Java runtimes on App Service come with the Java Flight Recorder. You can use it to record JVM, system, and application events and troubleshoot problems in your Java applications.
--
-#### Timed Recording
-
-To take a timed recording, you need the PID (Process ID) of the Java application. To find the PID, open a browser to your web app's SCM site at `https://<your-site-name>.scm.azurewebsites.net/ProcessExplorer/`. This page shows the running processes in your web app. Find the process named "java" in the table and copy the corresponding PID (Process ID).
-
-Next, open the **Debug Console** in the top toolbar of the SCM site and run the following command. Replace `<pid>` with the process ID you copied earlier. This command starts a 30-second profiler recording of your Java application and generate a file named `timed_recording_example.jfr` in the `C:\home` directory.
-
-```
-jcmd <pid> JFR.start name=TimedRecording settings=profile duration=30s filename="C:\home\timed_recording_example.JFR"
-```
--
-SSH into your App Service and run the `jcmd` command to see a list of all the Java processes running. In addition to jcmd itself, you should see your Java application running with a process ID number (pid).
-
-```shell
-078990bbcd11:/home# jcmd
-Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true
-147 sun.tools.jcmd.JCmd
-116 /home/site/wwwroot/app.jar
-```
-
-Execute the following command to start a 30-second recording of the JVM. It profiles the JVM and creates a JFR file named *jfr_example.jfr* in the home directory. (Replace 116 with the pid of your Java app.)
-
-```shell
-jcmd 116 JFR.start name=MyRecording settings=profile duration=30s filename="/home/jfr_example.jfr"
-```
-
-During the 30-second interval, you can validate the recording is taking place by running `jcmd 116 JFR.check`. The command shows all recordings for the given Java process.
-
-#### Continuous Recording
-
-You can use Java Flight Recorder to continuously profile your Java application with minimal impact on runtime performance. To do so, run the following Azure CLI command to create an App Setting named JAVA_OPTS with the necessary configuration. The contents of the JAVA_OPTS App Setting are passed to the `java` command when your app is started.
-
-```azurecli
-az webapp config appsettings set -g <your_resource_group> -n <your_app_name> --settings JAVA_OPTS=-XX:StartFlightRecording=disk=true,name=continuous_recording,dumponexit=true,maxsize=1024m,maxage=1d
-```
-
-Once the recording starts, you can dump the current recording data at any time using the `JFR.dump` command.
-
-```shell
-jcmd <pid> JFR.dump name=continuous_recording filename="/home/recording1.jfr"
-```
--
-#### Analyze `.jfr` files
-
-Use [FTPS](deploy-ftp.md) to download your JFR file to your local machine. To analyze the JFR file, download and install [Java Mission Control](https://www.oracle.com/java/technologies/javase/products-jmc8-downloads.html). For instructions on Java Mission Control, see the [JMC documentation](https://docs.oracle.com/en/java/java-components/jdk-mission-control/) and the [installation instructions](https://www.oracle.com/java/technologies/javase/jmc8-install.html).
-
-### App logging
--
-Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. Logging to the local App Service filesystem instance is disabled 12 hours after it's configured. If you need longer retention, configure the application to write output to a Blob storage container. Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
--
-Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az-webapp-log-config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. If you need longer retention, configure the application to write output to a Blob storage container. Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
-
-Azure Blob Storage logging for Linux based apps can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
--
-If your application uses [Logback](https://logback.qos.ch/) or [Log4j](https://logging.apache.org/log4j) for tracing, you can forward these traces for review into Azure Application Insights using the logging framework configuration instructions in [Explore Java trace logs in Application Insights](/previous-versions/azure/azure-monitor/app/deprecated-java-2x#explore-java-trace-logs-in-application-insights).
-
-> [!NOTE]
-> Due to known vulnerability [CVE-2021-44228](https://logging.apache.org/log4j/2.x/security.html), be sure to use Log4j version 2.16 or later.
-
-## Customization and tuning
-
-Azure App Service supports out of the box tuning and customization through the Azure portal and CLI. Review the following articles for non-Java-specific web app configuration:
-- [Configure app settings](configure-common.md#configure-app-settings)
-- [Set up a custom domain](app-service-web-tutorial-custom-domain.md)
-- [Configure TLS/SSL bindings](configure-ssl-bindings.md)
-- [Add a CDN](../cdn/cdn-add-to-web-app.md)
-- [Configure the Kudu site](https://github.com/projectkudu/kudu/wiki/Configurable-settings#linux-on-app-service-settings)
-
-### Copy App Content Locally
-
-Set the app setting `JAVA_COPY_ALL` to `true` to copy your app contents to the local worker from the shared file system. This setting helps address file-locking issues.
-
-### Set Java runtime options
-
-To set allocated memory or other JVM runtime options, create an [app setting](configure-common.md#configure-app-settings) named `JAVA_OPTS` with the options. App Service passes this setting as an environment variable to the Java runtime when it starts.
-
-In the Azure portal, under **Application Settings** for the web app, create a new app setting named `JAVA_OPTS` for Java SE or `CATALINA_OPTS` for Tomcat that includes other settings, such as `-Xms512m -Xmx1204m`.
-
-To configure the app setting from the Maven plugin, add setting/value tags in the Azure plugin section. The following example sets a specific minimum and maximum Java heap size:
-
-```xml
-<appSettings>
- <property>
- <name>JAVA_OPTS</name>
- <value>-Xms1024m -Xmx1024m</value>
- </property>
-</appSettings>
-```
--
-> [!NOTE]
-> You don't need to create a web.config file when using Tomcat on Windows App Service.
--
-Developers running a single application with one deployment slot in their App Service plan can use the following options:
-- B1 and S1 instances: `-Xms1024m -Xmx1024m`
-- B2 and S2 instances: `-Xms3072m -Xmx3072m`
-- B3 and S3 instances: `-Xms6144m -Xmx6144m`
-- P1v2 instances: `-Xms3072m -Xmx3072m`
-- P2v2 instances: `-Xms6144m -Xmx6144m`
-- P3v2 instances: `-Xms12800m -Xmx12800m`
-- P1v3 instances: `-Xms6656m -Xmx6656m`
-- P2v3 instances: `-Xms14848m -Xmx14848m`
-- P3v3 instances: `-Xms30720m -Xmx30720m`
-- I1 instances: `-Xms3072m -Xmx3072m`
-- I2 instances: `-Xms6144m -Xmx6144m`
-- I3 instances: `-Xms12800m -Xmx12800m`
-- I1v2 instances: `-Xms6656m -Xmx6656m`
-- I2v2 instances: `-Xms14848m -Xmx14848m`
-- I3v2 instances: `-Xms30720m -Xmx30720m`
-
-When tuning application heap settings, review your App Service plan details and take into account multiple applications and deployment slot needs to find the optimal allocation of memory.
-
-### Turn on web sockets
-
-Turn on support for web sockets in the Azure portal in the **Application settings** for the application. You need to restart the application for the setting to take effect.
-
-Turn on web socket support using the Azure CLI with the following command:
-
-```azurecli-interactive
-az webapp config set --name <app-name> --resource-group <resource-group-name> --web-sockets-enabled true
-```
-
-Then restart your application:
-
-```azurecli-interactive
-az webapp stop --name <app-name> --resource-group <resource-group-name>
-az webapp start --name <app-name> --resource-group <resource-group-name>
-```
-
-### Set default character encoding
-
-In the Azure portal, under **Application Settings** for the web app, create a new app setting named `JAVA_OPTS` with value `-Dfile.encoding=UTF-8`.
-
-Alternatively, you can configure the app setting using the App Service Maven plugin. Add the setting name and value tags in the plugin configuration:
-
-```xml
-<appSettings>
- <property>
- <name>JAVA_OPTS</name>
- <value>-Dfile.encoding=UTF-8</value>
- </property>
-</appSettings>
-```
-
-### Pre-Compile JSP files
-
-To improve performance of Tomcat applications, you can compile your JSP files before deploying to App Service. You can use the [Maven plugin](https://sling.apache.org/components/jspc-maven-plugin/plugin-info.html) provided by Apache Sling, or use this [Ant build file](https://tomcat.apache.org/tomcat-9.0-doc/jasper-howto.html#Web_Application_Compilation).
-
-## Secure applications
-
-Java applications running in App Service have the same set of [security best practices](../security/fundamentals/paas-applications-using-app-services.md) as other applications.
-
-### Authenticate users (Easy Auth)
-
-Set up app authentication in the Azure portal with the **Authentication and Authorization** option. From there, you can enable authentication using Microsoft Entra ID or social sign-ins like Facebook, Google, or GitHub. Azure portal configuration only works when configuring a single authentication provider. For more information, see [Configure your App Service app to use Microsoft Entra sign-in](configure-authentication-provider-aad.md) and the related articles for other identity providers. If you need to enable multiple sign-in providers, follow the instructions in [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md).
-
-#### Java SE
-
-Spring Boot developers can use the [Microsoft Entra Spring Boot starter](/java/azure/spring-framework/configure-spring-boot-starter-java-app-with-azure-active-directory) to secure applications using familiar Spring Security annotations and APIs. Be sure to increase the maximum header size in your *application.properties* file. We suggest a value of `16384`.
-
-#### Tomcat
-
-Your Tomcat application can access the user's claims directly from the servlet by casting the Principal object to a Map object. The `Map` object maps each claim type to a collection of the claims for that type. In the following code example, `request` is an instance of `HttpServletRequest`.
-
-```java
-Map<String, Collection<String>> map = (Map<String, Collection<String>>) request.getUserPrincipal();
-```
-
-Now you can inspect the `Map` object for any specific claim. For example, the following code snippet iterates through all the claim types and prints the contents of each collection.
-
-```java
-for (Object key : map.keySet()) {
-    Object value = map.get(key);
-    if (value != null && value instanceof Collection) {
-        Collection claims = (Collection) value;
-        for (Object claim : claims) {
-            System.out.println(claim);
-        }
-    }
-}
-```
-
-To sign out users, use the `/.auth/ext/logout` path. To perform other actions, see the documentation on [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md). There's also official documentation on the Tomcat [HttpServletRequest interface](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletRequest.html) and its methods. The following servlet methods are also hydrated based on your App Service configuration:
-
-```java
-public boolean isSecure()
-public String getRemoteAddr()
-public String getRemoteHost()
-public String getScheme()
-public int getServerPort()
-```
-
-To disable this feature, create an Application Setting named `WEBSITE_AUTH_SKIP_PRINCIPAL` with a value of `1`. To disable all servlet filters added by App Service, create a setting named `WEBSITE_SKIP_FILTERS` with a value of `1`.
-
-### Configure TLS/SSL
-
-To upload an existing TLS/SSL certificate and bind it to your application's domain name, follow the instructions in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). You can also configure the app to enforce TLS/SSL.
-
-### Use KeyVault References
-
-[Azure KeyVault](../key-vault/general/overview.md) provides centralized secret management with access policies and audit history. You can store secrets (such as passwords or connection strings) in KeyVault and access these secrets in your application through environment variables.
-
-First, follow the instructions for [granting your app access to a key vault](app-service-key-vault-references.md#grant-your-app-access-to-a-key-vault) and [making a KeyVault reference to your secret in an Application Setting](app-service-key-vault-references.md#source-app-settings-from-key-vault). You can validate that the reference resolves to the secret by printing the environment variable while remotely accessing the App Service terminal.
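As an illustration, an app setting that resolves to a Key Vault secret can be created from the Azure CLI like the following sketch (the setting name, vault name, and secret name are hypothetical):

```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings "DB_PASSWORD=@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"
```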
-
-To inject these secrets in your Spring or Tomcat configuration file, use environment variable injection syntax (`${MY_ENV_VAR}`). For Spring configuration files, see this documentation on [externalized configurations](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html).
--
-### Use the Java Key Store
-
-By default, any public or private certificates [uploaded to App Service Linux](configure-ssl-certificate.md) are loaded into the respective Java Key Stores as the container starts. After uploading your certificate, you'll need to restart your App Service for it to be loaded into the Java Key Store. Public certificates are loaded into the Key Store at `$JRE_HOME/lib/security/cacerts`, and private certificates are stored in `$JRE_HOME/lib/security/client.jks`.
-
-More configuration might be necessary for encrypting your JDBC connection with certificates in the Java Key Store. Refer to the documentation for your chosen JDBC driver.
-
-- [PostgreSQL](https://jdbc.postgresql.org/documentation/ssl/)
-- [SQL Server](/sql/connect/jdbc/connecting-with-ssl-encryption)
-- [MongoDB](https://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/ssl/)
-- [Cassandra](https://docs.datastax.com/en/developer/java-driver/4.3/)
-
-#### Initialize the Java Key Store
-
-To initialize the `java.security.KeyStore` object, load the keystore file with its password. The default password for both key stores is `changeit`.
-
-```java
-// Load the public certificates from the default key store (cacerts)
-KeyStore cacerts = KeyStore.getInstance("jks");
-cacerts.load(
-    new FileInputStream(System.getenv("JRE_HOME")+"/lib/security/cacerts"),
-    "changeit".toCharArray());
-
-// Load the private certificates from the client key store (client.jks)
-KeyStore clientStore = KeyStore.getInstance("pkcs12");
-clientStore.load(
-    new FileInputStream(System.getenv("JRE_HOME")+"/lib/security/client.jks"),
-    "changeit".toCharArray());
-```
-
-#### Manually load the key store
-
-You can load certificates manually to the key store. Create an app setting, `SKIP_JAVA_KEYSTORE_LOAD`, with a value of `1` to prevent App Service from loading the certificates into the key store automatically. All public certificates uploaded to App Service via the Azure portal are stored under `/var/ssl/certs/`. Private certificates are stored under `/var/ssl/private/`.
-
-You can interact or debug the Java Key Tool by [opening an SSH connection](configure-linux-open-ssh-session.md) to your App Service and running the command `keytool`. See the [Key Tool documentation](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html) for a list of commands. For more information on the KeyStore API, see [the official documentation](https://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html).
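For example, from an SSH session you might list the loaded public certificates with a command along these lines (assuming the default `changeit` password):

```bash
keytool -list -keystore $JRE_HOME/lib/security/cacerts -storepass changeit
```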
--
-## Configure APM platforms
-
-This section shows how to connect Java applications deployed on Azure App Service with Azure Monitor Application Insights, NewRelic, and AppDynamics application performance monitoring (APM) platforms.
-
-### Configure Application Insights
-
-Azure Monitor Application Insights is a cloud native application monitoring service that enables customers to observe failures, bottlenecks, and usage patterns to improve application performance and reduce mean time to resolution (MTTR). With a few clicks or CLI commands, you can enable monitoring for your Node.js or Java apps, autocollecting logs, metrics, and distributed traces, eliminating the need to include an SDK in your app. For more information about the available app settings for configuring the agent, see the [Application Insights documentation](../azure-monitor/app/java-standalone-config.md).
-
-#### Azure portal
-
-To enable Application Insights from the Azure portal, go to **Application Insights** on the left-side menu and select **Turn on Application Insights**. By default, a new Application Insights resource with the same name as your web app is used. You can choose to use an existing Application Insights resource, or change the name. Select **Apply** at the bottom.
-
-#### Azure CLI
-
-To enable Application Insights via the Azure CLI, you need to create an Application Insights resource and set a couple of app settings to connect Application Insights to your web app.
-
-1. Enable the Application Insights extension.
-
- ```azurecli
- az extension add -n application-insights
- ```
-
-2. Create an Application Insights resource using the following CLI command. Replace the placeholders with your desired resource name and group.
-
- ```azurecli
- az monitor app-insights component create --app <resource-name> -g <resource-group> --location westus2 --kind web --application-type web
- ```
-
-    Note the values for `connectionString` and `instrumentationKey`; you'll need these values in the next step.
-
- > To retrieve a list of other locations, run `az account list-locations`.
--
-3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step.
-
- ```azurecli
- az webapp config appsettings set -n <webapp-name> -g <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentationKey>" "APPLICATIONINSIGHTS_CONNECTION_STRING=<connectionString>" "ApplicationInsightsAgent_EXTENSION_VERSION=~3" "XDT_MicrosoftApplicationInsights_Mode=default" "XDT_MicrosoftApplicationInsights_Java=1"
- ```
---
-3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step.
-
- ```azurecli
- az webapp config appsettings set -n <webapp-name> -g <resource-group> --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentationKey>" "APPLICATIONINSIGHTS_CONNECTION_STRING=<connectionString>" "ApplicationInsightsAgent_EXTENSION_VERSION=~3" "XDT_MicrosoftApplicationInsights_Mode=default"
- ```
--
-### Configure New Relic
--
-1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup).
-2. Download the Java agent from NewRelic. It has a file name similar to *newrelic-java-x.x.x.zip*.
-3. Copy your license key; you need it to configure the agent later.
-4. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*.
-5. Upload the unpacked NewRelic Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/newrelic*.
-6. Modify the YAML file at */home/site/wwwroot/apm/newrelic/newrelic.yml* and replace the placeholder license value with your own license key.
-7. In the Azure portal, browse to your application in App Service and create a new Application Setting.
-
- - For **Java SE** apps, create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
- - For **Tomcat**, create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
---
-1. Create a NewRelic account at [NewRelic.com](https://newrelic.com/signup).
-2. Download the Java agent from NewRelic. It has a file name similar to *newrelic-java-x.x.x.zip*.
-3. Copy your license key; you'll need it to configure the agent later.
-4. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*.
-5. Upload the unpacked NewRelic Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/newrelic*.
-6. Modify the YAML file at */home/site/wwwroot/apm/newrelic/newrelic.yml* and replace the placeholder license value with your own license key.
-7. In the Azure portal, browse to your application in App Service and create a new Application Setting.
-
- - For **Java SE** apps, create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
- - For **Tomcat**, create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`.
--
-> [!NOTE]
-> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
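For example, if `JAVA_OPTS` already contains heap settings, the combined value set from the Azure CLI might look like the following sketch (placeholder names; the heap flags stand in for whatever value you already have):

```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings "JAVA_OPTS=-Xms1024m -Xmx1024m -javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar"
```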
-
-### Configure AppDynamics
--
-1. Create an AppDynamics account at [AppDynamics.com](https://www.appdynamics.com/community/register/).
-2. Download the Java agent from the AppDynamics website. The file name is similar to *AppServerAgent-x.x.x.xxxxx.zip*.
-3. Use the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console) to create a new directory */home/site/wwwroot/apm*.
-4. Upload the Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/appdynamics*.
-5. In the Azure portal, browse to your application in App Service and create a new Application Setting.
-
- - For **Java SE** apps, create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name.
- - For **Tomcat** apps, create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name.
---
-1. Create an AppDynamics account at [AppDynamics.com](https://www.appdynamics.com/community/register/).
-2. Download the Java agent from the AppDynamics website. The file name is similar to *AppServerAgent-x.x.x.xxxxx.zip*.
-3. [SSH into your App Service instance](configure-linux-open-ssh-session.md) and create a new directory */home/site/wwwroot/apm*.
-4. Upload the Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/appdynamics*.
-5. In the Azure portal, browse to your application in App Service and create a new Application Setting.
-
- - For **Java SE** apps, create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name.
- - For **Tomcat** apps, create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/appdynamics/javaagent.jar -Dappdynamics.agent.applicationName=<app-name>` where `<app-name>` is your App Service name.
--
-> [!NOTE]
-> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
-
-## Configure data sources
-
-### Java SE
-
-To connect to data sources in Spring Boot applications, we suggest creating connection strings and injecting them into your *application.properties* file.
-
-1. In the "Configuration" section of the App Service page, set a name for the string, paste your JDBC connection string into the value field, and set the type to "Custom". You can optionally set this connection string as a slot setting. (An equivalent Azure CLI command is sketched after this list.)
-
- This connection string is accessible to our application as an environment variable named `CUSTOMCONNSTR_<your-string-name>`. For example, `CUSTOMCONNSTR_exampledb`.
-
-2. In your *application.properties* file, reference this connection string with the environment variable name. For our example, we would use the following.
-
-    ```properties
- app.datasource.url=${CUSTOMCONNSTR_exampledb}
- ```
-
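A sketch of an equivalent Azure CLI command for step 1 (the string name and JDBC URL are placeholders):

```azurecli-interactive
az webapp config connection-string set --resource-group <resource-group-name> --name <app-name> --connection-string-type Custom --settings exampledb="<jdbc-connection-string>"
```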
-For more information, see the [Spring Boot documentation on data access](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html) and [externalized configurations](https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html).
--
-### Tomcat
-
-These instructions apply to all database connections. You need to fill placeholders with your chosen database's driver class name and JAR file. The following table lists driver class names and download links for common databases.
-
-| Database | Driver Class Name | JDBC Driver |
-||--||
-| PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download/) |
-| MySQL | `com.mysql.jdbc.Driver` | [Download](https://dev.mysql.com/downloads/connector/j/) (Select "Platform Independent") |
-| SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server#download) |
-
-To configure Tomcat to use Java Database Connectivity (JDBC) or the Java Persistence API (JPA), first customize the `CATALINA_OPTS` environment variable that is read in by Tomcat at start-up. Set these values through an app setting in the [App Service Maven plugin](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md):
-
-```xml
-<appSettings>
- <property>
- <name>CATALINA_OPTS</name>
- <value>"$CATALINA_OPTS -Ddbuser=${DBUSER} -Ddbpassword=${DBPASSWORD} -DconnURL=${CONNURL}"</value>
- </property>
-</appSettings>
-```
-
-Or set the environment variables in the **Configuration** > **Application Settings** page in the Azure portal.
-
-Next, determine if the data source should be available to one application or to all applications running on the Tomcat servlet.
-
-#### Application-level data sources
-
-1. Create a *context.xml* file in the *META-INF/* directory of your project. Create the *META-INF/* directory if it doesn't exist.
-
-2. In *context.xml*, add a `Context` element to link the data source to a JNDI address. Replace the `driverClassName` placeholder with your driver's class name from the table above.
-
- ```xml
- <Context>
- <Resource
- name="jdbc/dbconnection"
- type="javax.sql.DataSource"
- url="${connURL}"
- driverClassName="<insert your driver class name>"
- username="${dbuser}"
- password="${dbpassword}"
- />
- </Context>
- ```
-
-3. Update your application's *web.xml* to use the data source in your application.
-
- ```xml
- <resource-env-ref>
- <resource-env-ref-name>jdbc/dbconnection</resource-env-ref-name>
- <resource-env-ref-type>javax.sql.DataSource</resource-env-ref-type>
- </resource-env-ref>
- ```
-
-#### Shared server-level resources
-
-Tomcat installations on App Service on Windows exist in shared space on the App Service Plan. You can't directly modify a Tomcat installation for server-wide configuration. To make server-level configuration changes to your Tomcat installation, you must copy Tomcat to a local folder, in which you can modify Tomcat's configuration.
-
-##### Automate creating custom Tomcat on app start
-
-You can use a startup script to perform actions before a web app starts. The startup script for customizing Tomcat needs to complete the following steps:
-
-1. Check whether Tomcat was already copied and configured locally. If it was, the startup script can end here.
-2. Copy Tomcat locally.
-3. Make the required configuration changes.
-4. Indicate that configuration was successfully completed.
-
-For Windows apps, create a file named `startup.cmd` or `startup.ps1` in the `wwwroot` directory. This file runs automatically before the Tomcat server starts.
-
-Here's a PowerShell script that completes these steps:
-
-```powershell
- # Check for marker file indicating that config has already been done
- if(Test-Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker"){
- return 0
- }
-
- # Delete previous Tomcat directory if it exists
- # In case previous config isn't completed or a new config should be forcefully installed
- if(Test-Path "$Env:LOCAL_EXPANDED\tomcat"){
- Remove-Item "$Env:LOCAL_EXPANDED\tomcat" -Recurse
- }
-
- # Copy Tomcat to local
- # Using the environment variable $AZURE_TOMCAT90_HOME uses the 'default' version of Tomcat
- New-Item "$Env:LOCAL_EXPANDED\tomcat" -ItemType Directory
- Copy-Item -Path "$Env:AZURE_TOMCAT90_HOME\*" -Destination "$Env:LOCAL_EXPANDED\tomcat" -Recurse
-
- # Perform the required customization of Tomcat
- {... customization ...}
-
- # Mark that the operation was a success
- New-Item -Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
-```
-
-##### Transforms
-
-A common use case for customizing a Tomcat version is to modify the `server.xml`, `context.xml`, or `web.xml` Tomcat configuration files. App Service already modifies these files to provide platform features. To continue to use these features, it's important to preserve the content of these files when you make changes to them. To accomplish this, we recommend that you use an [XSL transformation (XSLT)](https://www.w3schools.com/xml/xsl_intro.asp). Use an XSL transform to make changes to the XML files while preserving the original contents of the file.
-
-###### Example XSLT file
-
-This example transform adds a new connector node to `server.xml`. Note the *Identity Transform*, which preserves the original contents of the file.
-
-```xml
- <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
- <xsl:output method="xml" indent="yes"/>
-
- <!-- Identity transform: this ensures that the original contents of the file are included in the new file -->
- <!-- Ensure that your transform files include this block -->
- <xsl:template match="@* | node()" name="Copy">
- <xsl:copy>
- <xsl:apply-templates select="@* | node()"/>
- </xsl:copy>
- </xsl:template>
-
- <xsl:template match="@* | node()" mode="insertConnector">
- <xsl:call-template name="Copy" />
- </xsl:template>
-
- <xsl:template match="comment()[not(../Connector[@scheme = 'https']) and
- contains(., '&lt;Connector') and
- (contains(., 'scheme=&quot;https&quot;') or
- contains(., &quot;scheme='https'&quot;))]">
- <xsl:value-of select="." disable-output-escaping="yes" />
- </xsl:template>
-
- <xsl:template match="Service[not(Connector[@scheme = 'https'] or
- comment()[contains(., '&lt;Connector') and
- (contains(., 'scheme=&quot;https&quot;') or
- contains(., &quot;scheme='https'&quot;))]
- )]
- ">
- <xsl:copy>
- <xsl:apply-templates select="@* | node()" mode="insertConnector" />
- </xsl:copy>
- </xsl:template>
-
- <!-- Add the new connector after the last existing Connnector if there's one -->
- <xsl:template match="Connector[last()]" mode="insertConnector">
- <xsl:call-template name="Copy" />
-
- <xsl:call-template name="AddConnector" />
- </xsl:template>
-
- <!-- ... or before the first Engine if there's no existing Connector -->
- <xsl:template match="Engine[1][not(preceding-sibling::Connector)]"
- mode="insertConnector">
- <xsl:call-template name="AddConnector" />
-
- <xsl:call-template name="Copy" />
- </xsl:template>
-
- <xsl:template name="AddConnector">
- <!-- Add new line -->
- <xsl:text>&#xa;</xsl:text>
- <!-- This is the new connector -->
- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
- maxThreads="150" scheme="https" secure="true"
- keystoreFile="${{user.home}}/.keystore" keystorePass="changeit"
- clientAuth="false" sslProtocol="TLS" />
- </xsl:template>
-
- </xsl:stylesheet>
-```
-
-###### Function for XSL transform
-
-PowerShell has built-in tools for transforming XML files by using XSL transforms. The following script is an example function that you can use in `startup.ps1` to perform the transform:
-
-```powershell
- function TransformXML{
- param ($xml, $xsl, $output)
-
- if (-not $xml -or -not $xsl -or -not $output)
- {
- return 0
- }
-
- Try
- {
- $xslt_settings = New-Object System.Xml.Xsl.XsltSettings;
- $XmlUrlResolver = New-Object System.Xml.XmlUrlResolver;
- $xslt_settings.EnableScript = 1;
-
- $xslt = New-Object System.Xml.Xsl.XslCompiledTransform;
- $xslt.Load($xsl,$xslt_settings,$XmlUrlResolver);
- $xslt.Transform($xml, $output);
-
- }
-
- Catch
- {
- $ErrorMessage = $_.Exception.Message
- $FailedItem = $_.Exception.ItemName
- Write-Host 'Error'$ErrorMessage':'$FailedItem':' $_.Exception;
- return 0
- }
- return 1
- }
-```
-
-##### App settings
-
-The platform also needs to know where your custom version of Tomcat is installed. You can set the installation's location in the `CATALINA_BASE` app setting.
-
-You can use the Azure CLI to change this setting:
-
-```azurecli
- az webapp config appsettings set -g $MyResourceGroup -n $MyUniqueApp --settings CATALINA_BASE="%LOCAL_EXPANDED%\tomcat"
-```
-
-Or, you can manually change the setting in the Azure portal:
-
-1. Go to **Settings** > **Configuration** > **Application settings**.
-1. Select **New Application Setting**.
-1. Use these values to create the setting:
- 1. **Name**: `CATALINA_BASE`
- 1. **Value**: `"%LOCAL_EXPANDED%\tomcat"`
-
-##### Example startup.ps1
-
-The following example script copies a custom Tomcat to a local folder, performs an XSL transform, and indicates that the transform was successful:
-
-```powershell
- # Locations of xml and xsl files
- $target_xml="$Env:LOCAL_EXPANDED\tomcat\conf\server.xml"
- $target_xsl="$Env:HOME\site\server.xsl"
-
- # Define the transform function
- # Useful if transforming multiple files
- function TransformXML{
- param ($xml, $xsl, $output)
-
- if (-not $xml -or -not $xsl -or -not $output)
- {
- return 0
- }
-
- Try
- {
- $xslt_settings = New-Object System.Xml.Xsl.XsltSettings;
- $XmlUrlResolver = New-Object System.Xml.XmlUrlResolver;
- $xslt_settings.EnableScript = 1;
-
- $xslt = New-Object System.Xml.Xsl.XslCompiledTransform;
- $xslt.Load($xsl,$xslt_settings,$XmlUrlResolver);
- $xslt.Transform($xml, $output);
- }
-
- Catch
- {
- $ErrorMessage = $_.Exception.Message
- $FailedItem = $_.Exception.ItemName
- echo 'Error'$ErrorMessage':'$FailedItem':' $_.Exception;
- return 0
- }
- return 1
- }
-
- # Check for marker file indicating that config has already been done
- if(Test-Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker"){
- return 0
- }
-
- # Delete previous Tomcat directory if it exists
- # In case previous config isn't completed or a new config should be forcefully installed
- if(Test-Path "$Env:LOCAL_EXPANDED\tomcat"){
-        Remove-Item "$Env:LOCAL_EXPANDED\tomcat" -Recurse
- }
-
- md -Path "$Env:LOCAL_EXPANDED\tomcat"
-
- # Copy Tomcat to local
- # Using the environment variable $AZURE_TOMCAT90_HOME uses the 'default' version of Tomcat
- Copy-Item -Path "$Env:AZURE_TOMCAT90_HOME\*" "$Env:LOCAL_EXPANDED\tomcat" -Recurse
-
- # Perform the required customization of Tomcat
- $success = TransformXML -xml $target_xml -xsl $target_xsl -output $target_xml
-
- # Mark that the operation was a success if successful
- if($success){
- New-Item -Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
- }
-```
-
-#### Finalize configuration
-
-Finally, you place the driver JARs in the Tomcat classpath and restart your App Service. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR:
-
-```azurecli-interactive
-az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --target-path <jar-name>.jar
-```
-----
-### Tomcat
-
-These instructions apply to all database connections. You need to fill placeholders with your chosen database's driver class name and JAR file. The following table lists driver class names and download links for common databases.
-
-| Database | Driver Class Name | JDBC Driver |
-||--||
-| PostgreSQL | `org.postgresql.Driver` | [Download](https://jdbc.postgresql.org/download/) |
-| MySQL | `com.mysql.jdbc.Driver` | [Download](https://dev.mysql.com/downloads/connector/j/) (Select "Platform Independent") |
-| SQL Server | `com.microsoft.sqlserver.jdbc.SQLServerDriver` | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server#download) |
-
-To configure Tomcat to use Java Database Connectivity (JDBC) or the Java Persistence API (JPA), first customize the `CATALINA_OPTS` environment variable that is read in by Tomcat at start-up. Set these values through an app setting in the [App Service Maven plugin](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-webapp-maven-plugin/README.md):
-
-```xml
-<appSettings>
- <property>
- <name>CATALINA_OPTS</name>
- <value>"$CATALINA_OPTS -Ddbuser=${DBUSER} -Ddbpassword=${DBPASSWORD} -DconnURL=${CONNURL}"</value>
- </property>
-</appSettings>
-```
-
-Or set the environment variables in the **Configuration** > **Application Settings** page in the Azure portal.
-
-Next, determine if the data source should be available to one application or to all applications running on the Tomcat servlet.
-
-#### Application-level data sources
-
-1. Create a *context.xml* file in the *META-INF/* directory of your project. Create the *META-INF/* directory if it doesn't exist.
-
-2. In *context.xml*, add a `Context` element to link the data source to a JNDI address. Replace the `driverClassName` placeholder with your driver's class name from the table above.
-
- ```xml
- <Context>
- <Resource
- name="jdbc/dbconnection"
- type="javax.sql.DataSource"
- url="${connURL}"
- driverClassName="<insert your driver class name>"
- username="${dbuser}"
- password="${dbpassword}"
- />
- </Context>
- ```
-
-3. Update your application's *web.xml* to use the data source in your application.
-
- ```xml
- <resource-env-ref>
- <resource-env-ref-name>jdbc/dbconnection</resource-env-ref-name>
- <resource-env-ref-type>javax.sql.DataSource</resource-env-ref-type>
- </resource-env-ref>
- ```
-
-#### Shared server-level resources
-
-Adding a shared, server-level data source requires you to edit Tomcat's server.xml. First, upload a [startup script](./faq-app-service-linux.yml) and set the path to the script in **Configuration** > **Startup Command**. You can upload the startup script using [FTP](deploy-ftp.md).
-
-Your startup script performs an [XSL transform](https://www.w3schools.com/xml/xsl_intro.asp) on the default server.xml file at `/usr/local/tomcat/conf/server.xml` and outputs the resulting XML file to `/home/tomcat/conf/server.xml`. The startup script should install libxslt via apk. You can upload your XSL file and startup script via FTP. Below is an example startup script.
-
-```sh
-# Install libxslt. Also copy the transform file to /home/tomcat/conf/
-apk add --update libxslt
-
-# Usage: xsltproc --output output.xml style.xsl input.xml
-xsltproc --output /home/tomcat/conf/server.xml /home/tomcat/conf/transform.xsl /usr/local/tomcat/conf/server.xml
-```
-
-The following example XSL file adds a new connector node to the Tomcat server.xml.
-
-```xml
-<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
- <xsl:output method="xml" indent="yes"/>
-
- <xsl:template match="@* | node()" name="Copy">
- <xsl:copy>
- <xsl:apply-templates select="@* | node()"/>
- </xsl:copy>
- </xsl:template>
-
- <xsl:template match="@* | node()" mode="insertConnector">
- <xsl:call-template name="Copy" />
- </xsl:template>
-
- <xsl:template match="comment()[not(../Connector[@scheme = 'https']) and
- contains(., '&lt;Connector') and
- (contains(., 'scheme=&quot;https&quot;') or
- contains(., &quot;scheme='https'&quot;))]">
- <xsl:value-of select="." disable-output-escaping="yes" />
- </xsl:template>
-
- <xsl:template match="Service[not(Connector[@scheme = 'https'] or
- comment()[contains(., '&lt;Connector') and
- (contains(., 'scheme=&quot;https&quot;') or
- contains(., &quot;scheme='https'&quot;))]
- )]
- ">
- <xsl:copy>
- <xsl:apply-templates select="@* | node()" mode="insertConnector" />
- </xsl:copy>
- </xsl:template>
-
- <!-- Add the new connector after the last existing Connnector if there's one -->
- <xsl:template match="Connector[last()]" mode="insertConnector">
- <xsl:call-template name="Copy" />
-
- <xsl:call-template name="AddConnector" />
- </xsl:template>
-
- <!-- ... or before the first Engine if there's no existing Connector -->
- <xsl:template match="Engine[1][not(preceding-sibling::Connector)]"
- mode="insertConnector">
- <xsl:call-template name="AddConnector" />
-
- <xsl:call-template name="Copy" />
- </xsl:template>
-
- <xsl:template name="AddConnector">
- <!-- Add new line -->
- <xsl:text>&#xa;</xsl:text>
- <!-- This is the new connector -->
- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
- maxThreads="150" scheme="https" secure="true"
- keystoreFile="${{user.home}}/.keystore" keystorePass="changeit"
- clientAuth="false" sslProtocol="TLS" />
- </xsl:template>
-
-</xsl:stylesheet>
-```
-
-#### Finalize configuration
-
-Finally, place the driver JARs in the Tomcat classpath and restart your App Service.
-
-1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR:
-
-```azurecli-interactive
-az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --target-path <jar-name>.jar
-```
-
-If you created a server-level data source, restart the App Service Linux application. Tomcat will reset `CATALINA_BASE` to `/home/tomcat` and use the updated configuration.
-
-### JBoss EAP Data Sources
-
-There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): uploading the JDBC driver, adding the JDBC driver as a module, and registering the module. App Service is a stateless hosting service, so the configuration commands for adding and registering the data source module must be scripted and applied as the container starts.
-
-1. Obtain your database's JDBC driver.
-2. Create an XML module definition file for the JDBC driver. The following example shows a module definition for PostgreSQL.
-
- ```xml
- <?xml version="1.0" ?>
- <module xmlns="urn:jboss:module:1.1" name="org.postgres">
- <resources>
- <!-- ***** IMPORTANT : REPLACE THIS PLACEHOLDER *******-->
- <resource-root path="/home/site/deployments/tools/postgresql-42.2.12.jar" />
- </resources>
- <dependencies>
- <module name="javax.api"/>
- <module name="javax.transaction.api"/>
- </dependencies>
- </module>
- ```
-
-3. Put your JBoss CLI commands into a file named `jboss-cli-commands.cli`. The JBoss commands must add the module and register it as a data source. The following example shows the JBoss CLI commands for PostgreSQL.
-
- ```bash
- #!/usr/bin/env bash
- module add --name=org.postgres --resources=/home/site/deployments/tools/postgresql-42.2.12.jar --module-xml=/home/site/deployments/tools/postgres-module.xml
-
- /subsystem=datasources/jdbc-driver=postgres:add(driver-name="postgres",driver-module-name="org.postgres",driver-class-name=org.postgresql.Driver,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
-
- data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=${POSTGRES_CONNECTION_URL,env.POSTGRES_CONNECTION_URL:jdbc:postgresql://db:5432/postgres} --user-name=${POSTGRES_SERVER_ADMIN_FULL_NAME,env.POSTGRES_SERVER_ADMIN_FULL_NAME:postgres} --password=${POSTGRES_SERVER_ADMIN_PASSWORD,env.POSTGRES_SERVER_ADMIN_PASSWORD:example} --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker
- ```
-
-4. Create a startup script, `startup_script.sh`, that calls the JBoss CLI commands. The following example shows how to call your `jboss-cli-commands.cli`. Later, you'll configure App Service to run this script when the container starts.
-
- ```bash
- $JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/jboss-cli-commands.cli
- ```
-
-5. Using an FTP client of your choice, upload your JDBC driver, `jboss-cli-commands.cli`, `startup_script.sh`, and the module definition to `/site/deployments/tools/`.
-6. Configure your site to run `startup_script.sh` when the container starts. In the Azure portal, navigate to **Configuration** > **General Settings** > **Startup Command**. Set the startup command field to `/home/site/deployments/tools/startup_script.sh`. Select **Save** to apply your changes.
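If you script your configuration, a sketch of setting the same startup command with the Azure CLI (resource names are placeholders):

```azurecli-interactive
az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "/home/site/deployments/tools/startup_script.sh"
```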
-
-To confirm that the data source was added to the JBoss server, SSH into your web app and run `$JBOSS_HOME/bin/jboss-cli.sh --connect`. Once you're connected to JBoss, run the `/subsystem=datasources:read-resource` command to print a list of the data sources.
---
-## Choosing a Java runtime version
-
-App Service allows users to choose the major version of the JVM, such as Java 8 or Java 11, and the patch version, such as 1.8.0_232 or 11.0.5. You can also choose to have the patch version automatically updated as new minor versions become available. In most cases, production apps should use pinned patch JVM versions. This prevents unanticipated outages during a patch version autoupdate. All Java web apps use 64-bit JVMs; this isn't configurable.
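For example, you might list the Java runtime strings that App Service currently offers and then pin a Linux app to one of them (a sketch; `<runtime-string>` is a placeholder for a value returned by the first command):

```azurecli-interactive
# List the runtime strings that App Service currently offers
az webapp list-runtimes

# Pin a Linux app to a specific runtime string
az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "<runtime-string>"
```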
-
-If you're using Tomcat, you can choose to pin the patch version of Tomcat. On Windows, you can pin the patch versions of the JVM and Tomcat independently. On Linux, you can pin the patch version of Tomcat; the patch version of the JVM is also pinned but isn't separately configurable.
-
-If you choose to pin the minor version, you need to periodically update the JVM minor version on the app. To ensure that your application runs on the newer minor version, create a staging slot and increment the minor version on the staging slot. Once you confirm the application runs correctly on the new minor version, you can swap the staging and production slots.
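A sketch of that slot workflow with the Azure CLI (the slot name and other names are placeholders):

```azurecli-interactive
# Create a staging slot that clones the production configuration
az webapp deployment slot create --resource-group <resource-group-name> --name <app-name> --slot staging --configuration-source <app-name>

# After validating the new minor version on the slot, swap it into production
az webapp deployment slot swap --resource-group <resource-group-name> --name <app-name> --slot staging --target-slot production
```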
--
-## JBoss EAP
-
-### Clustering in JBoss EAP
-
-App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To enable clustering, your web app must be [integrated with a virtual network](overview-vnet-integration.md). When the web app is integrated with a virtual network, it restarts, and the JBoss EAP installation automatically starts up with a clustered configuration. The JBoss EAP instances communicate over the subnet specified in the virtual network integration, using the ports shown in the `WEBSITES_PRIVATE_PORTS` environment variable at runtime. You can disable clustering by creating an app setting named `WEBSITE_DISABLE_CLUSTERING` with any value.
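For reference, a sketch of enabling virtual network integration from the Azure CLI (the virtual network and subnet names are placeholders):

```azurecli-interactive
az webapp vnet-integration add --resource-group <resource-group-name> --name <app-name> --vnet <vnet-name> --subnet <subnet-name>
```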
-
-> [!NOTE]
-> If you're enabling your virtual network integration with an ARM template, you need to manually set the property `vnetPrivatePorts` to a value of `2`. If you enable virtual network integration from the CLI or Portal, this property is set for you automatically.
-
-When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start obtains read/write permissions on the cluster membership file. Other instances read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file.
-
-> [!Note]
-> You can avoid JBoss clustering timeouts by [cleaning up obsolete discovery files during your app startup](https://github.com/Azure/app-service-linux-docs/blob/master/HowTo/JBOSS/avoid_timeouts_obsolete_nodes.md).
-
-The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](../availability-zones/migrate-app-service.md). The JBoss EAP clustering feature is compatible with the zone redundancy feature.
-
-#### Autoscale Rules
-
-When configuring autoscale rules for horizontal scaling, it's important to remove instances incrementally (one at a time) to ensure each removed instance can transfer its activity (such as handling a database transaction) to another member of the cluster. When configuring your autoscale rules in the Portal to scale down, use the following options:
-
-- **Operation**: "Decrease count by"
-- **Cool down**: "5 minutes" or greater
-- **Instance count**: 1
-
-You don't need to incrementally add instances (scaling out); you can add multiple instances to the cluster at a time.
-
-### JBoss EAP App Service Plans
-
-<a id="jboss-eap-hardware-options"></a>
-
-JBoss EAP is only available on the Premium v3 and Isolated v2 App Service Plan types. Customers that created a JBoss EAP site on a different tier during the public preview should scale up to a Premium v3 or Isolated v2 tier to avoid unexpected behavior.
-
-## Tomcat Baseline Configuration On App Services
-
-Java developers can customize the server settings, troubleshoot issues, and deploy applications to Tomcat with confidence if they know about the server.xml file and Tomcat's configuration details. Familiarity with these details helps with tasks such as:
-
-* Customizing Tomcat configuration: By understanding the server.xml file and Tomcat's configuration details, developers can fine-tune the server settings to match the needs of their applications.
-* Debugging: When an application is deployed on a Tomcat server, developers need to know the server configuration to debug any issues that might arise. This includes checking the server logs, examining the configuration files, and identifying any errors that might be occurring.
-* Troubleshooting Tomcat issues: Inevitably, Java developers encounter issues with their Tomcat server, such as performance problems or configuration errors. By understanding the server.xml file and Tomcat's configuration details, developers can quickly diagnose and troubleshoot these issues, which can save time and effort.
-* Deploying applications to Tomcat: To deploy a Java web application to Tomcat, developers need to know how to configure the server.xml file and other Tomcat settings. Understanding these details is essential for deploying applications successfully and ensuring that they run smoothly on the server.
-
-When you create an app with built-in Tomcat to host your Java workload (a WAR file or a JAR file), there are certain settings that you get out of the box for Tomcat configuration. You can refer to the [Official Apache Tomcat Documentation](https://tomcat.apache.org/) for detailed information, including the default configuration for Tomcat Web Server.
-
-Additionally, certain transformations are applied on top of the server.xml for the Tomcat distribution upon start. These are transformations to the Connector, Host, and Valve settings.
-
-The latest versions of Tomcat include these server.xml transformations (8.5.58 and 9.0.38 onward). Older versions of Tomcat don't use transforms and might behave differently as a result.
-
-### Connector
-
-```xml
-<Connector port="${port.http}" address="127.0.0.1" maxHttpHeaderSize="16384" compression="on" URIEncoding="UTF-8" connectionTimeout="${site.connectionTimeout}" maxThreads="${catalina.maxThreads}" maxConnections="${catalina.maxConnections}" protocol="HTTP/1.1" redirectPort="8443"/>
- ```
-* `maxHttpHeaderSize` is set to `16384`
-* `URIEncoding` is set to `UTF-8`
-* `connectionTimeout` is set to `WEBSITE_TOMCAT_CONNECTION_TIMEOUT`, which defaults to `240000`
-* `maxThreads` is set to `WEBSITE_CATALINA_MAXTHREADS`, which defaults to `200`
-* `maxConnections` is set to `WEBSITE_CATALINA_MAXCONNECTIONS`, which defaults to `10000`
-
-> [!NOTE]
-> The `connectionTimeout`, `maxThreads`, and `maxConnections` settings can be tuned with app settings.
-
-The following are example CLI commands that you might use to alter the values of `connectionTimeout`, `maxThreads`, or `maxConnections`:
-
-```azurecli-interactive
-az webapp config appsettings set --resource-group myResourceGroup --name myApp --settings WEBSITE_TOMCAT_CONNECTION_TIMEOUT=120000
-```
-```azurecli-interactive
-az webapp config appsettings set --resource-group myResourceGroup --name myApp --settings WEBSITE_CATALINA_MAXTHREADS=100
-```
-```azurecli-interactive
-az webapp config appsettings set --resource-group myResourceGroup --name myApp --settings WEBSITE_CATALINA_MAXCONNECTIONS=5000
-```
-* Connector uses the address of the container instead of 127.0.0.1
-
-### Host
-
-```xml
-<Host appBase="${site.appbase}" xmlBase="${site.xmlbase}" unpackWARs="${site.unpackwars}" workDir="${site.tempdir}" errorReportValveClass="com.microsoft.azure.appservice.AppServiceErrorReportValve" name="localhost" autoDeploy="true">
-```
-
-* `appBase` is set to `AZURE_SITE_APP_BASE`, which defaults to local `WebappsLocalPath`
-* `xmlBase` is set to `AZURE_SITE_HOME`, which defaults to `/site/wwwroot`
-* `unpackWARs` is set to `AZURE_UNPACK_WARS`, which defaults to `true`
-* `workDir` is set to `JAVA_TMP_DIR`, which defaults to `TMP`
-* `errorReportValveClass` uses our custom error report valve
-
-### Valve
-
-```xml
-<Valve prefix="site_access_log.${catalina.instance.name}" pattern="%h %l %u %t &quot;%r&quot; %s %b %D %{x-arr-log-id}i" directory="${site.logdir}/http/RawLogs" maxDays="${site.logRetentionDays}" className="org.apache.catalina.valves.AccessLogValve" suffix=".txt"/>
- ```
-* `directory` is set to `AZURE_LOGGING_DIR`, which defaults to `home\logFiles`
-* `maxDays` is set to `WEBSITE_HTTPLOGGING_RETENTION_DAYS`, which defaults to `0` (retain forever)
-
-On Linux, the configuration has all of the same customizations, plus:
-
-* Adds some error and reporting pages to the valve:
- ```xml
- <xsl:attribute name="appServiceErrorPage">
- <xsl:value-of select="'${appService.valves.appServiceErrorPage}'"/>
- </xsl:attribute>
-
- <xsl:attribute name="showReport">
- <xsl:value-of select="'${catalina.valves.showReport}'"/>
- </xsl:attribute>
-
- <xsl:attribute name="showServerInfo">
- <xsl:value-of select="'${catalina.valves.showServerInfo}'"/>
- </xsl:attribute>
- ```
---
-## Next steps
-
-Visit the [Azure for Java Developers](/java/azure/) center to find Azure quickstarts, tutorials, and Java reference documentation.
-
-- [App Service Linux FAQ](faq-app-service-linux.yml)
-- [Environment variables and app settings reference](reference-app-settings.md)
app-service Deploy Authentication Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-authentication-types.md
Azure App Service lets you deploy your web application code and configuration by
|Run directly from an uploaded ZIP file |Microsoft Entra |[Run your app in Azure App Service directly from a ZIP package](deploy-run-package.md) | |Run directly from external URL |Not applicable (outbound connection) |[Run from external URL instead](deploy-run-package.md#run-from-external-url-instead) | |Azure Web app plugin for Maven (Java) |Microsoft Entra |[Quickstart: Create a Java app on Azure App Service](quickstart-java.md)|
-|Azure WebApp Plugin for Gradle (Java) |Microsoft Entra |[Configure a Java app for Azure App Service](configure-language-java.md)|
+|Azure WebApp Plugin for Gradle (Java) |Microsoft Entra |[Configure a Java app for Azure App Service](configure-language-java-deploy-run.md)|
|Webhooks | Basic authentication |[Web hooks](https://github.com/projectkudu/kudu/wiki/Web-hooks) | |App Service migration assistant |Basic authentication |[Azure App Service migration tools](https://azure.microsoft.com/products/app-service/migration-tools/) | |App Service migration assistant for PowerShell scripts |Basic authentication |[Azure App Service migration tools](https://azure.microsoft.com/products/app-service/migration-tools/) |
app-service Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/getting-started.md
Title: Getting started with Azure App Service
-description: Take the first steps toward working with Azure App Service.
+description: Take the first steps toward working with Azure App Service. This is a longer description that meets the length requirement.
zone_pivot_groups: app-service-getting-started-stacks
# Getting started with Azure App Service
-## Introduction
+[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications.
+ ::: zone pivot="stack-net"
-[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with .NET.
+
+## ASP.NET or ASP.NET Core
+
+Use the following resources to get started with .NET.
| Action | Resources | | | |
zone_pivot_groups: app-service-getting-started-stacks
| **Connect to a database** | - [.NET with Azure SQL Database](./app-service-web-tutorial-dotnet-sqldatabase.md)<br>- [.NET Core with Azure SQL DB](./tutorial-dotnetcore-sqldb-app.md)| | **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=dotnet&pivots=container-linux-vscode)<br>- [Windows - Visual Studio](./quickstart-custom-container.md?tabs=dotnet&pivots=container-windows-vs)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|+ ::: zone-end ::: zone pivot="stack-python"
-[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with Python.
+
+## Python
+
+Use the following resources to get started with Python.
| Action | Resources | | | |
zone_pivot_groups: app-service-getting-started-stacks
| **Connect to a database** | - [Postgres - CLI](./tutorial-python-postgresql-app.md?tabs=flask%2Cwindows&pivots=deploy-azd)<br>- [Postgres - Azure portal](./tutorial-python-postgresql-app.md?tabs=flask%2Cwindows&pivots=deploy-portal)| | **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=python&pivots=container-linux-vscode)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|+ ::: zone-end ::: zone pivot="stack-nodejs"
-[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with Node.js.
+
+## Node.js
+
+Use the following resources to get started with Node.js.
| Action | Resources | | | |
zone_pivot_groups: app-service-getting-started-stacks
| **Connect to a database** | - [MongoDB](./tutorial-nodejs-mongodb-app.md)| | **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=node&pivots=container-linux-vscode)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|+ ::: zone-end ::: zone pivot="stack-java"
-[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with Java.
+
+## Java
+
+Use the following resources to get started with Java.
| Action | Resources | | | |
-| **Create your first Java app** | Using one of the following tools:<br><br>- [Maven deploy with an embedded web server](./quickstart-java.md?pivots=java-maven-quarkus)<br>- [Maven deploy to a Tomcat server](./quickstart-java.md?pivots=java-maven-tomcat)<br>- [Maven deploy to a JBoss server](./quickstart-java.md?pivots=java-maven-jboss) |
-| **Deploy your app** | - [With Maven](configure-language-java.md?pivots=platform-linux#maven)<br>- [With Gradle](configure-language-java.md?pivots=platform-linux#gradle)<br>- [Deploy War](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With popular IDEs (VS Code, IntelliJ, and Eclipse)](configure-language-java.md?pivots=platform-linux#ides)<br>- [Deploy WAR or JAR packages directly](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With GitHub Actions](./deploy-github-actions.md) |
+| **Create your first Java app** | Using one of the following tools:<br><br>- [Maven deploy with an embedded web server](./quickstart-java.md?pivots=java-javase)<br>- [Maven deploy to a Tomcat server](./quickstart-java.md?pivots=java-tomcat)<br>- [Maven deploy to a JBoss server](./quickstart-java.md?pivots=java-jboss) |
+| **Deploy your app** | - [With Maven](configure-language-java-deploy-run.md?pivots=platform-linux#maven)<br>- [With Gradle](configure-language-java-deploy-run.md?pivots=platform-linux#gradle)<br>- [Deploy War](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With popular IDEs (VS Code, IntelliJ, and Eclipse)](configure-language-java-deploy-run.md?pivots=platform-linux#ides)<br>- [Deploy WAR or JAR packages directly](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With GitHub Actions](./deploy-github-actions.md) |
| **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)| | **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)| | **Connect to a database** |- [Java Spring with Cosmos DB](./tutorial-java-spring-cosmosdb.md)| | **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=python&pivots=container-linux-vscode)| | **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|+ ::: zone-end ::: zone pivot="stack-php"
-[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with PHP.
+
+## PHP
+
+Use the following resources to get started with PHP.
| Action | Resources | | | |
zone_pivot_groups: app-service-getting-started-stacks
| **Connect to a database** | - [MySQL with PHP](./tutorial-php-mysql-app.md)| | **Custom containers** |- [Multi-container](./quickstart-multi-container.md)<br>- [Sidecar containers](tutorial-custom-container-sidecar.md)| | **Review best practices** | - [Scale your app]()<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|+ ::: zone-end ## Next steps
app-service Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md
Microsoft and Adoptium builds of OpenJDK are provided and supported on App Servi
--
-If you're [pinned](configure-language-java.md#choosing-a-java-runtime-version) to an older minor version of Java, your app might be using the deprecated [Azul Zulu for Azure](https://devblogs.microsoft.com/java/end-of-updates-support-and-availability-of-zulu-for-azure/) binaries provided through [Azul Systems](https://www.azul.com/). You can keep using these binaries for your app, but any security patches or improvements are available only in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java.
+If you're [pinned](configure-language-java-deploy-run.md#choosing-a-java-runtime-version) to an older minor version of Java, your app might be using the deprecated [Azul Zulu for Azure](https://devblogs.microsoft.com/java/end-of-updates-support-and-availability-of-zulu-for-azure/) binaries provided through [Azul Systems](https://www.azul.com/). You can keep using these binaries for your app, but any security patches or improvements are available only in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java.
Major version updates are provided through new runtime options in Azure App Service. Customers update to these newer versions of Java by configuring their App Service deployment and are responsible for testing and ensuring the major update meets their needs.
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md
To update the image after the app is created, see [Change the Docker image of a c
- [Configure a Node.js app](configure-language-nodejs.md?pivots=platform-linux) - [Configure a PHP app](configure-language-php.md?pivots=platform-linux) - [Configure a Linux Python app](configure-language-python.md)-- [Configure a Java app](configure-language-java.md?pivots=platform-linux)
+- [Configure a Java app](configure-language-java-deploy-run.md?pivots=platform-linux)
- [Configure a custom container](configure-custom-container.md?pivots=container-linux)
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
ms.devlang: java
Last updated 02/10/2024
-zone_pivot_groups: app-service-java-hosting
+zone_pivot_groups: app-service-java-deploy
adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B
# Quickstart: Create a Java app on Azure App Service [!INCLUDE [quickstart-java-linux-maven-pivot.md](./includes/quickstart-jav)] ::: zone-end [!INCLUDE [quickstart-java-windows-maven-pivot.md](./includes/quickstart-jav)] ::: zone-end [!INCLUDE [quickstart-java-windows-maven-pivot.md](./includes/quickstart-jav)]
> [Azure for Java Developers Resources](/java/azure/) > [!div class="nextstepaction"]
-> [Configure your Java app](configure-language-java.md)
+> [Configure your Java app](configure-language-java-deploy-run.md)
> [!div class="nextstepaction"] > [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md)
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
npm install @azure/identity @microsoft/microsoft-graph-client
### Configure authentication information
-Create an object to hold the [authentication settings](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/blob/main/3-WebApp-graphapi-managed-identity/app.js):
- ```javascript // partial code in app.js const appSettings = {
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
The default Quarkus sample application includes tests with database connectivity
Learn more about running Java apps on App Service in the developer guide. > [!div class="nextstepaction"]
-> [Configure a Java app in Azure App Service](configure-language-java.md?pivots=platform-linux)
+> [Configure a Java app in Azure App Service](configure-language-java-deploy-run.md?pivots=platform-linux)
Learn how to secure your app with a custom domain and certificate.
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
yes | cp -rf .prep/* .
Follow these steps to create an Azure Cosmos DB database in your subscription. The TODO list app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
-1. Login to your Azure CLI, and optionally set your subscription if you have more than one connected to your login credentials.
+1. Sign in to your Azure CLI, and optionally set your subscription if you have more than one connected to your sign-in credentials.
```azurecli az login
Then run the script:
source .scripts/set-env-variables.sh ```
-These environment variables are used in `application.properties` in the TODO list app. The fields in the properties file set up a default repository configuration for Spring Data:
+These environment variables are used in `application.properties` in the TODO list app. The fields in the properties file define a default repository configuration for Spring Data:
```properties azure.cosmosdb.uri=${COSMOSDB_URI}
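To illustrate what that configuration drives, here is a minimal sketch of a Spring Data entity and repository. It assumes the `azure-spring-data-cosmos` starter, and the `TodoItem` names and `todoitems` container are hypothetical stand-ins for whatever the sample app actually defines:

```java
package com.example.todo; // hypothetical package; the sample app uses its own names

import org.springframework.data.annotation.Id;

import com.azure.spring.data.cosmos.core.mapping.Container;
import com.azure.spring.data.cosmos.core.mapping.PartitionKey;
import com.azure.spring.data.cosmos.repository.CosmosRepository;

// Entity stored in the Cosmos DB container configured through the properties above.
@Container(containerName = "todoitems")
class TodoItem {
    @Id
    private String id;
    @PartitionKey
    private String owner;
    private String description;

    // Getters and setters omitted for brevity.
}

// Spring Data generates the Cosmos DB-backed implementation at startup,
// using the endpoint, key, and database resolved from the environment variables.
interface TodoItemRepository extends CosmosRepository<TodoItem, String> {
}
```

With a setup along these lines, application code calls standard repository methods instead of hand-written data-access code, and the connection details stay in `application.properties`.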
az group delete --name <your-azure-group-name> --yes
Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
+> [Java in App Service Linux dev guide](configure-language-java-deploy-run.md?pivots=platform-linux)
Learn how to secure your app with a custom domain and certificate.
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
curl https://${WEBAPP_URL}/checklist/1
Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
+> [Java in App Service Linux dev guide](configure-language-java-deploy-run.md?pivots=platform-linux)
Learn how to secure your app with a custom domain and certificate.
app-service Tutorial Java Tomcat Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-mysql-app.md
Here are some other things you can say to fine-tune the answer you get.
Learn more about running Java apps on App Service in the developer guide. > [!div class="nextstepaction"]
-> [Configure a Java app in Azure App Service](configure-language-java.md?pivots=platform-linux)
+> [Configure a Java app in Azure App Service](configure-language-java-deploy-run.md?pivots=platform-linux)
Learn how to secure your app with a custom domain and certificate.
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/configure-alerts.md
Title: How to create alerts for Azure Automation Change Tracking and Inventory
description: This article tells how to configure Azure alerts to notify about the status of changes detected by Change Tracking and Inventory. Previously updated : 11/24/2022 Last updated : 07/22/2024
automation Enable From Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-automation-account.md
Title: Enable Azure Automation Change Tracking and Inventory from Automation acc
description: This article tells how to enable Change Tracking and Inventory from an Automation account. Previously updated : 10/14/2020 Last updated : 07/22/2024
automation Enable From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-portal.md
Title: Enable Azure Automation Change Tracking and Inventory from the Azure port
description: This article tells how to enable the Change Tracking and Inventory feature from the Azure portal. Previously updated : 10/14/2020 Last updated : 07/22/2024
automation Enable From Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-runbook.md
description: This article tells how to enable Change Tracking and Inventory from
Previously updated : 10/14/2020 Last updated : 07/22/2024 # Enable Change Tracking and Inventory from a runbook
automation Enable From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-vm.md
Title: Enable Azure Automation Change Tracking and Inventory from an Azure VM
description: This article tells how to enable Change Tracking and Inventory from an Azure VM. Previously updated : 10/14/2020 Last updated : 07/22/2024
automation Manage Change Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-change-tracking.md
Title: Manage Change Tracking and Inventory in Azure Automation
description: This article tells how to use Change Tracking and Inventory to track software and Microsoft service changes in your environment. Previously updated : 12/10/2020 Last updated : 07/22/2024
automation Manage Inventory Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-inventory-vms.md
description: This article tells how to manage inventory collection from VMs.
keywords: inventory, automation, change, tracking Previously updated : 10/14/2020 Last updated : 07/22/2024 # Manage inventory collection from VMs
automation Manage Scope Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-scope-configurations.md
Title: Limit Azure Automation Change Tracking and Inventory deployment scope
description: This article tells how to work with scope configurations to limit the scope of a Change Tracking and Inventory deployment. Previously updated : 05/27/2021 Last updated : 07/22/2024
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
Title: Azure Automation Change Tracking and Inventory overview
description: This article describes the Change Tracking and Inventory feature, which helps you identify software and Microsoft service changes in your environment. Previously updated : 06/30/2024 Last updated : 07/22/2024
automation Remove Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/remove-feature.md
Title: Remove Azure Automation Change Tracking and Inventory feature
description: This article tells how to stop using Change Tracking and Inventory, and unlink an Automation account from the Log Analytics workspace. Previously updated : 10/14/2020 Last updated : 07/22/2024
automation Remove Vms From Change Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/remove-vms-from-change-tracking.md
description: This article tells how to remove Azure and non-Azure machines from
Previously updated : 10/26/2021 Last updated : 07/22/2024 # Remove machines from Change Tracking and Inventory
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
Title: Supported regions for linked Log Analytics workspace description: This article describes the supported region mappings between an Automation account and a Log Analytics workspace as it relates to certain features of Azure Automation. Previously updated : 02/10/2024 Last updated : 07/22/2024
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
Zone-redundant Premium tier caches are available in the following regions:
| South Central US | West Europe | | | East Asia |
| US Gov Virginia | Sweden Central | | | China North 3 |
| West US 2 | Switzerland North | | | |
-| West US 3 | | | | |
+| West US 3 | Poland Central | | | |
Zone-redundant Enterprise and Enterprise Flash tier caches are available in the following regions:
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
FunctionsProject
You can use a shared [host.json](functions-host-json.md) file to configure the function app. Each function has its own code file (.java) and binding configuration file (function.json).
-You can put more than one function in a project. Avoid putting your functions into separate jars. The `FunctionApp` in the target directory is what gets deployed to your function app in Azure.
+You can have more than one function in a project. However, don't put your functions into separate jars. Using multiple jars in a single function app isn't supported. The `FunctionApp` in the target directory is what gets deployed to your function app in Azure.
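To make that packaging guidance concrete, the following is a minimal sketch, assuming the `azure-functions-java-library` annotations; the package, class, and function names are illustrative rather than taken from the article. It shows two HTTP-triggered functions kept in one project so they build and deploy together as a single function app:

```java
package com.example.functions; // hypothetical package name

import java.util.Optional;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

/** Two functions packaged in one project and deployed as a single function app. */
public class SampleFunctions {

    @FunctionName("Hello")
    public HttpResponseMessage hello(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Hello function processed a request.");
        return request.createResponseBuilder(HttpStatus.OK).body("Hello").build();
    }

    @FunctionName("Goodbye")
    public HttpResponseMessage goodbye(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Goodbye function processed a request.");
        return request.createResponseBuilder(HttpStatus.OK).body("Goodbye").build();
    }
}
```

With a layout like this, the build typically stages both functions under the same `FunctionApp` folder in the target directory, so one deployment updates them both.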
## Triggers and annotations
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
A brief summary of isolation approaches is provided below.
In addition to robust logical compute isolation available by design to all Azure tenants, if you desire physical compute isolation, you can use Azure Dedicated Host or isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer. - **Networking isolation** ΓÇô Azure Virtual Network (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Services can communicate using public IPs or private (VNet) IPs. Communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements. You can use [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md) to achieve network isolation and protect your Azure resources from the Internet while accessing Azure services that have public endpoints. You can use Virtual Network [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules. Moreover, you can use [Private Link](../private-link/private-link-overview.md) to access Azure PaaS services over a private endpoint in your VNet, ensuring that traffic between your VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet. Finally, Azure provides you with options to encrypt data in transit, including [Transport Layer Security (TLS) end-to-end encryption](../application-gateway/ssl-overview.md) of network traffic with [TLS termination using Key Vault certificates](../application-gateway/key-vault-certs.md), [VPN encryption](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) using IPsec, and Azure ExpressRoute encryption using [MACsec with customer-managed keys (CMK) support](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). - **Storage isolation** ΓÇô To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys and services such as Azure Key Vault and Microsoft Entra ID to ensure secure key access and centralized key management. Azure Storage service encryption ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is [encrypted through FIPS 140 validated 256-bit AES encryption](../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) and you can use Key Vault for customer-managed keys (CMK). Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. 
Moreover, Azure Disk encryption may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes managed disks.-- **Security assurance processes and practices** ΓÇô Azure isolation assurance is further enforced by MicrosoftΓÇÖs internal use of the [Security Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
+- **Security assurance processes and practices** ΓÇô Azure isolation assurance is further enforced by Microsoft's internal use of the [Security Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
-In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as you migrate workloads from your on-premises datacenter to the cloud, the delineation of responsibility between you and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, MicrosoftΓÇÖs responsibility ends at the Hypervisor layer, and you're responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. You can use Azure isolation technologies to achieve the desired level of isolation for your applications and data deployed in the cloud.
+In line with the [shared responsibility](../security/fundamentals/shared-responsibility.md) model in cloud computing, as you migrate workloads from your on-premises datacenter to the cloud, the delineation of responsibility between you and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft's responsibility ends at the Hypervisor layer, and you're responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. You can use Azure isolation technologies to achieve the desired level of isolation for your applications and data deployed in the cloud.
Throughout this article, call-out boxes outline important considerations or actions considered to be part of your responsibility. For example, you can use Azure Key Vault to store your secrets, including encryption keys that remain under your control.
Tenant isolation in Microsoft Entra ID involves two primary elements:
- Preventing data leakage and access across tenants, which means that data belonging to Tenant A can't in any way be obtained by users in Tenant B without explicit authorization by Tenant A. - Resource access isolation across tenants, which means that operations performed by Tenant A can't in any way impact access to resources for Tenant B.
-As shown in Figure 2, access via Microsoft Entra ID requires user authentication through a Security Token Service (STS). The authorization system uses information on the userΓÇÖs existence and enabled state through the Directory Services API and Azure RBAC to determine whether the requested access to the target Microsoft Entra instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Microsoft Entra ID further supports logical isolation in Azure through:
+As shown in Figure 2, access via Microsoft Entra ID requires user authentication through a Security Token Service (STS). The authorization system uses information on the user's existence and enabled state through the Directory Services API and Azure RBAC to determine whether the requested access to the target Microsoft Entra instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Microsoft Entra ID further supports logical isolation in Azure through:
- Microsoft Entra instances are discrete containers and there's no relationship between them. - Microsoft Entra data is stored in partitions and each partition has a predetermined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Microsoft Entra services to support identity separation and logical isolation. - Access isn't permitted across Microsoft Entra instances unless the Microsoft Entra instance administrator grants it through federation or provisioning of user accounts from other Microsoft Entra instances.-- Physical access to servers that comprise the Microsoft Entra service and direct access to Microsoft Entra IDΓÇÖs back-end systems is [restricted to properly authorized Microsoft operational roles](./documentation-government-plan-security.md#restrictions-on-insider-access) using the Just-In-Time (JIT) privileged access management system.
+- Physical access to servers that comprise the Microsoft Entra service and direct access to Microsoft Entra ID's back-end systems is [restricted to properly authorized Microsoft operational roles](./documentation-government-plan-security.md#restrictions-on-insider-access) using the Just-In-Time (JIT) privileged access management system.
- Microsoft Entra users have no access to physical assets or locations, and therefore it isn't possible for them to bypass the logical Azure RBAC policy checks. :::image type="content" source="./media/secure-isolation-fig2.png" alt-text="Microsoft Entra logical tenant isolation"::: **Figure 2.** Microsoft Entra logical tenant isolation
-In summary, AzureΓÇÖs approach to logical tenant isolation uses identity, managed through Microsoft Entra ID, as the first logical control boundary for providing tenant-level access to resources and authorization through Azure RBAC.
+In summary, Azure's approach to logical tenant isolation uses identity, managed through Microsoft Entra ID, as the first logical control boundary for providing tenant-level access to resources and authorization through Azure RBAC.
## Data encryption key management Azure has extensive support to safeguard your data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models:
When you create a key vault or managed HSM in an Azure subscription, it's automa
You control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information: - All authenticated REST API requests, including failed requests
- - Operations on the key vault such as creation, deletion, setting access policies, and so on.
- - Operations on keys and secrets in the key vault, including a) creating, modifying, or deleting keys or secrets, and b) signing, verifying, encrypting keys, and so on.
+ - Operations on the key vault such as creation, deletion, setting access policies, and so on.
+ - Operations on keys and secrets in the key vault, including a) creating, modifying, or deleting keys or secrets, and b) signing, verifying, encrypting keys, and so on.
- Unauthenticated requests such as requests that don't have a bearer token, are malformed or expired, or have an invalid token. > [!NOTE]
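To see what ends up in those logs, a single authenticated secret operation is enough. The following is a minimal sketch assuming the `azure-identity` and `azure-security-keyvault-secrets` Java libraries, with a placeholder vault URL and secret name; it isn't part of the article itself:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;

public class VaultLoggingDemo {
    public static void main(String[] args) {
        // Authenticated via Microsoft Entra ID; both calls below surface in the
        // vault's audit logs as authenticated REST API requests.
        SecretClient client = new SecretClientBuilder()
                .vaultUrl("https://<your-vault-name>.vault.azure.net") // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        client.setSecret("demo-secret", "demo-value");            // logged as a secret write
        KeyVaultSecret secret = client.getSecret("demo-secret");  // logged as a secret read
        System.out.println("Retrieved secret version: " + secret.getProperties().getVersion());
    }
}
```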
As mentioned previously, managed HSM supports [importing keys generated](../key-
Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM. ## Compute isolation
-Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that your code ΓÇô whether itΓÇÖs deployed in a PaaS worker role or an IaaS virtual machine ΓÇô executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there's a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as Root VM, which runs the Host OS ΓÇô a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
+Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that your code ΓÇô whether it's deployed in a PaaS worker role or an IaaS virtual machine ΓÇô executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there's a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as Root VM, which runs the Host OS ΓÇô a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
:::image type="content" source="./media/secure-isolation-fig4.png" alt-text="Isolation of Hypervisor, Root VM, and Guest VMs"::: **Figure 4.** Isolation of Hypervisor, Root VM, and Guest VMs Physical servers hosting VMs are grouped into clusters, and they're independently managed by a scaled-out and redundant platform software component called the **[Fabric Controller](../security/fundamentals/isolation-choices.md#the-azure-fabric-controller)** (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads, and it manages unidirectional communication from the Host to virtual machines. Dividing the compute infrastructure into clusters, isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
-The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines used by you and Azure cloud services. The Hypervisor/Host OS pairing uses decades of MicrosoftΓÇÖs experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploits mitigations, and strong security assurance processes.
+The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines used by you and Azure cloud services. The Hypervisor/Host OS pairing uses decades of Microsoft's experience in operating system security, including security focused investments in [Microsoft Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploits mitigations, and strong security assurance processes.
### Management network isolation There are three Virtual Local Area Networks (VLANs) in each compute hardware cluster, as shown in Figure 5:
The Azure Management Console and Management Plane follow strict security archite
- **Management Console (MC)** ΓÇô The MC in Azure Cloud is composed of the Azure portal GUI and the Azure Resource Manager API layers. They both use user credentials to authenticate and authorize all operations. - **Management Plane (MP)** ΓÇô This layer performs the actual management actions and is composed of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor, which has its own Hypervisor Agent to service communication. These layers all use system contexts that are granted the least permissions needed to perform their operations.
-The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs constitute a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes ΓÇô separate FCs exist to manage compute and storage clusters. If you update your applicationΓÇÖs configuration file while running in the MC, the MC communicates through CRP with the FC, and the FC communicates with the FA.
+The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs constitute a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes ΓÇô separate FCs exist to manage compute and storage clusters. If you update your application's configuration file while running in the MC, the MC communicates through CRP with the FC, and the FC communicates with the FA.
CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling you to create and manage virtual machine resources and extensions via simple templates.
-Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications can't bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a userΓÇÖs [OAuth 2.0 authentication to Microsoft Entra ID](../active-directory/develop/v2-oauth2-auth-code-flow.md).
+Communications among various components (for example, Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent more actions. Separate communications channels ensure that communications can't bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure cloud for Hypervisor interaction initiated by a user's [OAuth 2.0 authentication to Microsoft Entra ID](../active-directory/develop/v2-oauth2-auth-code-flow.md).
:::image type="content" source="./media/secure-isolation-fig6.png" alt-text="Management Console and Management Plane interaction for secure management flow" border="false"::: **Figure 6.** Management Console and Management Plane interaction for secure management flow
Figure 6 illustrates the management flow corresponding to a user command to stop
|-|--|--|-| |**1.**|User authenticates via Microsoft Entra ID by providing credentials and is issued a token.|User Credentials|TLS 1.2| |**2.**|Browser presents token to Azure portal to authenticate user. Azure portal verifies token using token signature and valid signing keys.|JSON Web Token (Microsoft Entra ID)|TLS 1.2|
-|**3.**|User issues &#8220;stop VM&#8221; request on Azure portal. Azure portal sends &#8220;stop VM&#8221; request to Azure Resource Manager and presents userΓÇÖs token that was provided by Microsoft Entra ID. Azure Resource Manager verifies token using token signature and valid signing keys and that the user is authorized to perform the requested operation.|JSON Web Token (Microsoft Entra ID)|TLS 1.2|
+|**3.**|User issues &#8220;stop VM&#8221; request on Azure portal. Azure portal sends &#8220;stop VM&#8221; request to Azure Resource Manager and presents user's token that was provided by Microsoft Entra ID. Azure Resource Manager verifies token using token signature and valid signing keys and that the user is authorized to perform the requested operation.|JSON Web Token (Microsoft Entra ID)|TLS 1.2|
|**4.**|Azure Resource Manager requests a token from dSTS server based on the client certificate that Azure Resource Manager has, enabling dSTS to grant a JSON Web Token with the correct identity and roles.|Client Certificate|TLS 1.2| |**5.**|Azure Resource Manager sends request to CRP. Call is authenticated via OAuth using a JSON Web Token representing the Azure Resource Manager system identity from dSTS, thus transition from user to system context.|JSON Web Token (dSTS)|TLS 1.2| |**6.**|CRP validates the request and determines which fabric controller can complete the request. CRP requests a certificate from dSTS based on its client certificate so that it can connect to the specific Fabric Controller (FC) that is the target of the command. Token will grant permissions only to that specific FC if CRP is allowed to communicate to that FC.|Client Certificate|TLS 1.2|
The Azure Hypervisor acts like a micro-kernel, passing all hardware access reque
Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce isolation between partitions. The following fundamental CPU capabilities provide the hardware building blocks for Hypervisor isolation: -- **Second-level address translation** ΓÇô the Hypervisor controls what memory resources a partition is allowed to access by using second-level page tables provided by the CPUΓÇÖs memory management unit (MMU). The CPUΓÇÖs MMU uses second-level address translation under Hypervisor control to enforce protection on memory accesses performed by:
+- **Second-level address translation** ΓÇô the Hypervisor controls what memory resources a partition is allowed to access by using second-level page tables provided by the CPU's memory management unit (MMU). The CPU's MMU uses second-level address translation under Hypervisor control to enforce protection on memory accesses performed by:
- CPU when running under the context of a partition. - I/O devices that are being accessed directly by Guest partitions. - **CPU context** ΓÇô the Hypervisor uses virtualization extensions in the CPU to restrict privileges and CPU context that can be accessed while a Guest partition is running. The Hypervisor also uses these facilities to save and restore state when sharing CPUs between multiple partitions to ensure isolation of CPU state between the partitions.
In addition to robust logical compute isolation available by design to all Azure
> Physical tenant isolation increases deployment cost and may not be required in most scenarios given the strong logical isolation assurances provided by Azure. #### Azure Dedicated Host
-[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place [Windows](../virtual-machines/windows/overview.md), [Linux](../virtual-machines/linux/overview.md), and [SQL Server on Azure](/azure/azure-sql/virtual-machines/) VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organizationΓÇÖs workloads to meet corporate compliance requirements.
+[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place [Windows](../virtual-machines/windows/overview.md), [Linux](../virtual-machines/linux/overview.md), and [SQL Server on Azure](/azure/azure-sql/virtual-machines/) VMs directly into provisioned hosts using whatever configuration best meets your needs. Dedicated Host provides hardware isolation at the physical server level, enabling you to place your Azure VMs on an isolated and dedicated physical server that runs only your organization's workloads to meet corporate compliance requirements.
> [!NOTE] > You can deploy a dedicated host using the **[Azure portal, Azure PowerShell, and Azure CLI](../virtual-machines/dedicated-hosts-how-to.md)**.
Network access to VMs is limited by packet filtering at the network edge, at loa
Azure provides network isolation for each deployment and enforces the following rules: - Traffic between VMs always traverses through trusted packet filters.
- - Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and other OSI Layer-2 traffic from a VM are controlled using rate-limiting and anti-spoofing protection.
- - VMs can't capture any traffic on the network that isn't intended for them.
+ - Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and other OSI Layer-2 traffic from a VM are controlled using rate-limiting and anti-spoofing protection.
+ - VMs can't capture any traffic on the network that isn't intended for them.
- Your VMs can't send traffic to Azure private interfaces and infrastructure services, or to VMs belonging to other customers. Your VMs can only communicate with other VMs owned or controlled by you and with Azure infrastructure service endpoints meant for public communications. - When you put a VM on a VNet, that VM gets its own address space that is invisible, and hence, not reachable from VMs outside of a deployment or VNet (unless configured to be visible via public IP addresses). Your environment is open only through the ports that you specify for public access; if the VM is defined to have a public IP address, then all ports are open for public access. #### Packet flow and network path protection
-AzureΓÇÖs hyperscale network is designed to provide:
+Azure's hyperscale network is designed to provide:
- Uniform high capacity between servers. - Performance isolation between services, including customers.
This section explains how packets flow through the Azure network, and how the to
The Azure network uses [two different IP-address families](/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-technical-details-windows-server#packet-encapsulation): - **Customer address (CA)** is the customer defined/chosen VNet IP address, also referred to as Virtual IP (VIP). The network infrastructure operates using CAs, which are externally routable. All switches and interfaces are assigned CAs, and switches run an IP-based (Layer-3) link-state routing protocol that disseminates only these CAs. This design allows switches to obtain the complete switch-level topology, and forward packets encapsulated with CAs along shortest paths.-- **Provider address (PA)** is the Azure assigned internal fabric address that isn't visible to users and is also referred to as Dynamic IP (DIP). No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses [RFC 1918](https://datatracker.ietf.org/doc/rfc1918/) address space or private address space ΓÇô the provider addresses (PAs) ΓÇô that isn't externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their serversΓÇÖ locations change due to virtual-machine migration or reprovisioning.
+- **Provider address (PA)** is the Azure assigned internal fabric address that isn't visible to users and is also referred to as Dynamic IP (DIP). No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses [RFC 1918](https://datatracker.ietf.org/doc/rfc1918/) address space or private address space ΓÇô the provider addresses (PAs) ΓÇô that isn't externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers' locations change due to virtual-machine migration or reprovisioning.
-Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory systemΓÇÖs resolution service to learn the actual location of the destination and then tunnels the original packet there.
+Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system's resolution service to learn the actual location of the destination and then tunnels the original packet there.
-Azure assigns servers IP addresses that act as names alone, with no topological significance. AzureΓÇÖs VL2 addressing scheme separates these server names (PAs) from their locations (CAs). The crux of offering Layer-2 semantics is having servers believe they share a single large IP subnet ΓÇô that is, the entire PA space ΓÇô with other servers in the same service, while eliminating the Address Resolution Protocol (ARP) and Dynamic Host Configuration Protocol (DHCP) scaling bottlenecks that plague large Ethernet deployments.
+Azure assigns servers IP addresses that act as names alone, with no topological significance. Azure's VL2 addressing scheme separates these server names (PAs) from their locations (CAs). The crux of offering Layer-2 semantics is having servers believe they share a single large IP subnet ΓÇô that is, the entire PA space ΓÇô with other servers in the same service, while eliminating the Address Resolution Protocol (ARP) and Dynamic Host Configuration Protocol (DHCP) scaling bottlenecks that plague large Ethernet deployments.
Figure 9 depicts a sample packet flow where sender S sends packets to destination D via a randomly chosen intermediate switch using IP-in-IP encapsulation. PAs are from 20/8, and CAs are from 10/8. H(ft) denotes a hash of the [5-tuple](https://www.techopedia.com/definition/28190/5-tuple), which is composed of source IP, source port, destination IP, destination port, and protocol type. The ToR translates the PA to the CA, sends to the Intermediate switch, which sends to the destination CA ToR switch, which translates to the destination PA.
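As a purely illustrative sketch of the H(ft) idea, and not Azure's actual implementation, the following shows how hashing the 5-tuple keeps every packet of one flow on the same intermediate switch while spreading different flows across the available switches:

```java
import java.util.Objects;

public class FlowHashDemo {
    /** Deterministically map a flow's 5-tuple onto one of N intermediate switches. */
    static int pickIntermediateSwitch(String srcIp, int srcPort,
                                      String dstIp, int dstPort,
                                      String protocol, int switchCount) {
        int hash = Objects.hash(srcIp, srcPort, dstIp, dstPort, protocol);
        return Math.floorMod(hash, switchCount); // floorMod keeps the index non-negative
    }

    public static void main(String[] args) {
        // Every packet of this flow hashes to the same path; other flows may take different ones.
        int path = pickIntermediateSwitch("10.0.0.4", 49152, "10.0.1.7", 443, "TCP", 4);
        System.out.println("Flow is forwarded via intermediate switch " + path);
    }
}
```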
Figure 9 depicts a sample packet flow where sender S sends packets to destinatio
A server can't send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can **enforce fine-grained isolation policies**. For example, it can enforce a policy that only servers belonging to the same service can communicate with each other. #### Traffic flow patterns
-To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host, and encapsulates them with the CA address of the ToR switch of the destination. Once the packet arrives at the CA (that is, the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToRΓÇÖs CA, decapsulated again, and finally sent to the destination. This approach is depicted in Figure 10 using two possible traffic patterns: 1) external traffic (orange line) traversing over Azure ExpressRoute or the Internet to a VNet, and 2) internal traffic (blue line) between two VNets. Both traffic flows follow a similar pattern to isolate and protect network traffic.
+To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host, and encapsulates them with the CA address of the ToR switch of the destination. Once the packet arrives at the CA (that is, the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToR's CA, decapsulated again, and finally sent to the destination. This approach is depicted in Figure 10 using two possible traffic patterns: 1) external traffic (orange line) traversing over Azure ExpressRoute or the Internet to a VNet, and 2) internal traffic (blue line) between two VNets. Both traffic flows follow a similar pattern to isolate and protect network traffic.
:::image type="content" source="./media/secure-isolation-fig10.png" alt-text="Separation of tenant network traffic using VNets"::: **Figure 10.** Separation of tenant network traffic using VNets
Once all three are verified, the encapsulation is removed and routed to the CA a
Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align to existing industry standards and security practices, and prevent well-known attack vectors including: - **Prevent IP address spoofing** ΓÇô Whenever encapsulated traffic is transmitted by a VNet, the service reverifies the information on the receiving end of the transmission. The traffic is looked up and encapsulated independently at the start of the transmission, and reverified at the receiving endpoint to ensure the transmission was performed appropriately. This verification is done with an internal VNet feature called SpoofGuard, which verifies that the source and destination are valid and allowed to communicate, thereby preventing mismatches in expected encapsulation patterns that might otherwise permit spoofing. The GRE encapsulation processes prevent spoofing as any GRE encapsulation and encryption not done by the Azure network fabric is treated as dropped traffic.-- **Provide network segmentation across customers with overlapping network spaces** ΓÇô Azure VNetΓÇÖs implementation relies on established tunneling standards such as the GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that you're always operating within your unique address space, overlapping address spaces between tenants and the Azure network fabric. Anything that hasn't been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described previously, any encapsulated traffic not performed by the Azure network fabric is discarded.
+- **Provide network segmentation across customers with overlapping network spaces** ΓÇô Azure VNet's implementation relies on established tunneling standards such as the GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that you're always operating within your unique address space, overlapping address spaces between tenants and the Azure network fabric. Anything that hasn't been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described previously, any encapsulated traffic not performed by the Azure network fabric is discarded.
- **Prevent traffic from crossing between VNets** ΓÇô Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing. Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users don't have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Therefore, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic. In addition to these key protections, all unexpected traffic originating from the Internet is dropped by default. Any packet entering the Azure network will first encounter an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic. This basic traffic filtering protects the Azure network from known bad malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real time and historical data analysis, and mitigates attacks on demand.
TLS provides strong authentication, message privacy, and integrity. [Perfect For
- **Site-to-Site** (IPsec/IKE VPN tunnel) ΓÇô A cryptographically protected &#8220;tunnel&#8221; is established between Azure and your internal network, allowing an Azure VM to connect to your back-end resources as though it was directly on that network. This type of connection requires a [VPN device](../vpn-gateway/vpn-gateway-vpn-faq.md#s2s) located on-premises that has an externally facing public IP address assigned to it. You can use Azure [VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) to send encrypted traffic between your VNet and your on-premises infrastructure across the public Internet, for example, a [site-to-site VPN](../vpn-gateway/tutorial-site-to-site-portal.md) relies on IPsec for transport encryption. VPN Gateway supports many encryption algorithms that are FIPS 140 validated. Moreover, you can configure VPN Gateway to use [custom IPsec/IKE policy](../vpn-gateway/vpn-gateway-about-compliance-crypto.md) with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3). - **Point-to-Site** (VPN over SSTP, OpenVPN, and IPsec) ΓÇô A secure connection is established from your individual client computer to your VNet using Secure Socket Tunneling Protocol (SSTP), OpenVPN, or IPsec. As part of the [Point-to-Site VPN](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) configuration, you need to install a certificate and a VPN client configuration package, which allow the client computer to connect to any VM within the VNet. [Point-to-Site VPN](../vpn-gateway/point-to-site-about.md) connections don't require a VPN device or a public facing IP address.
-In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides you with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (for example, Azure VPN Gateway). This enforcement allows you to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-does-my-vpn-tunnel-get-authenticated) whereby Microsoft generates the PSK when the VPN tunnel is created. You can change the autogenerated PSK to your own.
+In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides you with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (for example, Azure VPN Gateway). This enforcement allows you to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for [VNet-to-VNet](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses [Pre-Shared Key (PSK) authentication](../vpn-gateway/vpn-gateway-vpn-faq.md#how-is-my-vpn-tunnel-authenticated) whereby Microsoft generates the PSK when the VPN tunnel is created. You can change the autogenerated PSK to your own.
-**Azure ExpressRoute encryption** – [Azure ExpressRoute](../expressroute/expressroute-introduction.md) allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections don't go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft’s global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it's guaranteed to traverse that private networking infrastructure instead of the public Internet. You can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/) that enable you to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, that is, data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). You can use MACsec to encrypt the physical links between your network devices and Microsoft network devices when you connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering locations.
+**Azure ExpressRoute encryption** – [Azure ExpressRoute](../expressroute/expressroute-introduction.md) allows you to create private connections between Microsoft datacenters and your on-premises infrastructure or colocation facility. ExpressRoute connections don't go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. [ExpressRoute locations](../expressroute/expressroute-locations-providers.md) are the entry points to Microsoft's global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it's guaranteed to traverse that private networking infrastructure instead of the public Internet. You can use ExpressRoute with several data [encryption options](../expressroute/expressroute-about-encryption.md), including [MACsec](https://1.ieee802.org/security/802-1ae/) that enable you to store [MACsec encryption keys in Azure Key Vault](../expressroute/expressroute-about-encryption.md#point-to-point-encryption-by-macsec-faq). MACsec encrypts data at the Media Access Control (MAC) level, that is, data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are [supported for encryption](../expressroute/expressroute-about-encryption.md#which-cipher-suites-are-supported-for-encryption). You can use MACsec to encrypt the physical links between your network devices and Microsoft network devices when you connect to Microsoft via [ExpressRoute Direct](../expressroute/expressroute-erdirect-about.md). ExpressRoute Direct allows for direct fiber connections from your edge to the Microsoft Enterprise edge routers at the peering locations.
You can enable IPsec in addition to MACsec on your ExpressRoute Direct ports, as shown in Figure 11. Using VPN Gateway, you can set up an [IPsec tunnel over Microsoft Peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md) of your ExpressRoute circuit between your on-premises network and your Azure VNet. MACsec secures the physical connection between your on-premises network and Microsoft. IPsec secures the end-to-end connection between your on-premises network and your VNets in Azure. MACsec and IPsec can be enabled independently.
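As an illustration of storing MACsec encryption keys in Azure Key Vault, the following Azure CLI sketch keeps the connectivity association key (CAK) and key name (CKN) as Key Vault secrets and then references them when enabling MACsec on an ExpressRoute Direct port link. Vault, port, and secret names are placeholders, and the MACsec parameter and cipher value names should be verified against your installed CLI version:

```azurecli
# Sketch only: keep the MACsec CAK and CKN as Key Vault secrets, then reference them
# when enabling MACsec on an ExpressRoute Direct port link. All names are placeholders,
# and the MACsec parameter and cipher value names may vary by CLI version.
az keyvault secret set --vault-name MyMACsecVault --name CAK --value "<hex-cak-value>"
az keyvault secret set --vault-name MyMACsecVault --name CKN --value "<hex-ckn-value>"

az network express-route port link update \
  --resource-group MyResourceGroup \
  --port-name MyERDirectPort \
  --name link1 \
  --macsec-cak-secret-identifier "$(az keyvault secret show --vault-name MyMACsecVault --name CAK --query id --output tsv)" \
  --macsec-ckn-secret-identifier "$(az keyvault secret show --vault-name MyMACsecVault --name CKN --query id --output tsv)" \
  --macsec-cipher GcmAes256 \
  --admin-state Enabled
```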
Each Azure [subscription](/azure/cloud-adoption-framework/decision-guides/subscr
- **Shared symmetric keys** – Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. You can rotate and regenerate these keys at any point thereafter without coordination with your applications.
- **Microsoft Entra ID-based authentication** – Access to Azure Storage can be controlled by Microsoft Entra ID, which enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, including Microsoft insiders. More information about Microsoft Entra tenant isolation is available from a white paper [Microsoft Entra Data Security Considerations](https://aka.ms/AADDataWhitePaper).
-- **Shared access signatures (SAS)** – Shared access signatures or “presigned URLs” can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
-- **User delegation SAS** – Delegated authentication is similar to SAS but is [based on Microsoft Entra tokens](/rest/api/storageservices/create-user-delegation-sas) rather than the shared symmetric keys. This approach allows a service that authenticates with Microsoft Entra ID to create a pre signed URL with limited scope and grant temporary access to another user, service, or device.
-- **Anonymous public read access** – You can allow a small portion of your storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level if you desire more stringent control.
+- **Shared access signatures (SAS)** – Shared access signatures or "presigned URLs" can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
+- **User delegation SAS** – Delegated authentication is similar to SAS but is [based on Microsoft Entra tokens](/rest/api/storageservices/create-user-delegation-sas) rather than the shared symmetric keys. This approach allows a service that authenticates with Microsoft Entra ID to create a pre signed URL with limited scope and grant temporary access to another user, service, or device.
+- **Anonymous public read access** – You can allow a small portion of your storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level if you desire more stringent control.
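For example, a user delegation SAS of the kind described in the preceding list can be issued with the Azure CLI after signing in with Microsoft Entra ID. This is a minimal sketch with placeholder account, container, and expiry values:

```azurecli
# Sketch only: issue a user delegation SAS for read access to a container after
# signing in with Microsoft Entra ID. Account, container, and expiry are placeholders;
# user delegation SAS lifetimes are typically limited to at most seven days.
az storage container generate-sas \
  --account-name mystorageaccount \
  --name mycontainer \
  --permissions r \
  --expiry 2025-01-01T00:00Z \
  --auth-mode login \
  --as-user
```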
Azure Storage provides storage for a wide variety of workloads, including:
-- Azure Virtual Machines (disk storage)
-- Big data analytics (HDFS for HDInsight, Azure Data Lake Storage)
-- Storing application state, user data (Blob, Queue, Table storage)
-- Long-term data storage (Azure Archive Storage)
-- Network file shares in the cloud (File storage)
-- Serving web pages on the Internet (static websites)
+- Azure Virtual Machines (disk storage)
+- Big data analytics (HDFS for HDInsight, Azure Data Lake Storage)
+- Storing application state, user data (Blob, Queue, Table storage)
+- Long-term data storage (Azure Archive Storage)
+- Network file shares in the cloud (File storage)
+- Serving web pages on the Internet (static websites)
While Azure Storage supports many different externally facing customer storage scenarios, internally, the physical storage for the above services is managed by a common set of APIs. To provide durability and availability, Azure Storage relies on data replication and data partitioning across storage resources that are shared among tenants. To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers as described in this section.
### Data replication
Your data in an Azure Storage account is [always replicated](../storage/common/storage-redundancy.md) to help ensure durability and high availability. Azure Storage copies your data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. You can typically choose to replicate your data within the same data center, across [availability zones within the same region](../availability-zones/az-overview.md), or across geographically separated regions. Specifically, when creating a storage account, you can select one of the following [redundancy options](../storage/common/storage-redundancy.md#summary-of-redundancy-options):
-- **Locally redundant storage (LRS)** replicates three copies (or the erasure coded equivalent, as described later) of your data within a single data center. A write request to an LRS storage account returns successfully only after the data is written to all three replicas. Each replica resides in separate fault and upgrade domains within a scale unit (set of storage racks within a data center).
-- **Zone-redundant storage (ZRS)** replicates your data synchronously across three storage clusters in a single [region](../availability-zones/az-overview.md#regions). Each storage cluster is physically separated from the others and is in its own [Availability Zone](../availability-zones/az-overview.md#availability-zones) (AZ). A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.
-- **Geo-redundant storage (GRS)** replicates your data to a [secondary (paired) region](../availability-zones/cross-region-replication-azure.md) that is hundreds of kilometers away from the primary region. GRS storage accounts are durable even during a complete regional outage or a disaster in which the primary region isn't recoverable. For a storage account with GRS or RA-GRS enabled, all data is first replicated with LRS. An update is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region using GRS. When data is written to the secondary location, it's also replicated within that location using LRS.
-- **Read-access geo-redundant storage (RA-GRS)** is based on GRS. It provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. With RA-GRS, you can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.
-- **Geo-zone-redundant storage (GZRS)** combines the high availability of ZRS with protection from regional outages as provided by GRS. Data in a GZRS storage account is replicated across three AZs in the primary region and also replicated to a secondary geographic region for protection from regional disasters.
Each Azure region is paired with another region within the same geography, together making a [regional pair](../availability-zones/cross-region-replication-azure.md).
-- **Read-access geo-zone-redundant storage (RA-GZRS)** is based on GZRS. You can optionally enable read access to data in the secondary region with RA-GZRS if your applications need to be able to read data following a disaster in the primary region.
+- **Locally redundant storage (LRS)** replicates three copies (or the erasure coded equivalent, as described later) of your data within a single data center. A write request to an LRS storage account returns successfully only after the data is written to all three replicas. Each replica resides in separate fault and upgrade domains within a scale unit (set of storage racks within a data center).
+- **Zone-redundant storage (ZRS)** replicates your data synchronously across three storage clusters in a single [region](../availability-zones/az-overview.md#regions). Each storage cluster is physically separated from the others and is in its own [Availability Zone](../availability-zones/az-overview.md#availability-zones) (AZ). A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.
+- **Geo-redundant storage (GRS)** replicates your data to a [secondary (paired) region](../availability-zones/cross-region-replication-azure.md) that is hundreds of kilometers away from the primary region. GRS storage accounts are durable even during a complete regional outage or a disaster in which the primary region isn't recoverable. For a storage account with GRS or RA-GRS enabled, all data is first replicated with LRS. An update is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region using GRS. When data is written to the secondary location, it's also replicated within that location using LRS.
+- **Read-access geo-redundant storage (RA-GRS)** is based on GRS. It provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. With RA-GRS, you can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.
+- **Geo-zone-redundant storage (GZRS)** combines the high availability of ZRS with protection from regional outages as provided by GRS. Data in a GZRS storage account is replicated across three AZs in the primary region and also replicated to a secondary geographic region for protection from regional disasters. Each Azure region is paired with another region within the same geography, together making a [regional pair](../availability-zones/cross-region-replication-azure.md).
+- **Read-access geo-zone-redundant storage (RA-GZRS)** is based on GZRS. You can optionally enable read access to data in the secondary region with RA-GZRS if your applications need to be able to read data following a disaster in the primary region.
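The redundancy option is chosen when the storage account is created, for example with the Azure CLI. In this minimal sketch, the account name, resource group, and region are placeholders:

```azurecli
# Sketch only: create a geo-zone-redundant (GZRS) general-purpose v2 storage account.
# Account name, resource group, and region are placeholders.
az storage account create \
  --name mygzrsaccount \
  --resource-group MyResourceGroup \
  --location eastus2 \
  --kind StorageV2 \
  --sku Standard_GZRS
```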
### High-level Azure Storage architecture
Azure Storage production systems consist of storage stamps and the location service (LS), as shown in Figure 12. A storage stamp is a cluster of racks of storage nodes, where each rack is built as a separate fault domain with redundant networking and power. The LS manages all the storage stamps and the account namespace across all stamps. It allocates accounts to storage stamps and manages them across the storage stamps for load balancing and disaster recovery. The LS itself is distributed across two geographic locations for its own disaster recovery ([Calder, et al., 2011](https://sigops.org/s/conferences/sosp/2011/current/2011-Cascais/printable/11-calder.pdf)).
Azure provides extensive options for [data encryption at rest](../security/funda
In general, controlling key access and ensuring efficient bulk encryption and decryption of data is accomplished via the following types of encryption keys (as shown in Figure 16), although other encryption keys can be used as described in *[Storage service encryption](#storage-service-encryption)* section.
-- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is used for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. The DEK is always stored encrypted by the Key Encryption Key (KEK).
-- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by you. This key encryption key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault or Managed HSM. As mentioned previously in *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140 validated hardware security modules (HSMs) to safeguard encryption keys; Managed HSM always uses FIPS 140 validated hardware security modules. These keys aren't exportable and there can be no clear-text version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Microsoft Entra ID. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
+- **Data Encryption Key (DEK)** is a symmetric AES-256 key that is used for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140 validated as part of the [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. The DEK is always stored encrypted by the Key Encryption Key (KEK).
+- **Key Encryption Key (KEK)** is an asymmetric RSA key that is optionally provided by you. This key encryption key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault or Managed HSM. As mentioned previously in *[Data encryption key management](#data-encryption-key-management)* section, Azure Key Vault can use FIPS 140 validated hardware security modules (HSMs) to safeguard encryption keys; Managed HSM always uses FIPS 140 validated hardware security modules. These keys aren't exportable and there can be no clear-text version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Microsoft Entra ID. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.
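As a sketch of this model, a key vault with purge protection and an RSA key suitable for use as a KEK could be created with the Azure CLI. The vault name, key name, and region are placeholders, and your organization's key size and protection requirements may differ:

```azurecli
# Sketch only: create a key vault with purge protection and an RSA key to act as the KEK.
# Vault name, key name, and region are placeholders.
az keyvault create \
  --name my-kek-vault \
  --resource-group MyResourceGroup \
  --location eastus2 \
  --enable-purge-protection true

az keyvault key create \
  --vault-name my-kek-vault \
  --name my-kek \
  --kty RSA \
  --size 2048
```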
:::image type="content" source="./media/secure-isolation-fig16.png" alt-text="Data Encryption Keys are encrypted using your key stored in Azure Key Vault":::
**Figure 16.** Data Encryption Keys are encrypted using your key stored in Azure Key Vault
Azure [Storage service encryption](../storage/common/storage-service-encryption.
However, you can also choose to manage encryption with your own keys by specifying:
-- [Customer-managed key](../storage/common/customer-managed-keys-overview.md) for managing Azure Storage encryption whereby the key is stored in Azure Key Vault. This option provides much flexibility for you to create, rotate, disable, and revoke access to customer-managed keys. You must use Azure Key Vault to store customer-managed keys. Both key vaults and managed HSMs are supported, as described previously in *[Azure Key Vault](#azure-key-vault)* section.
-- [Customer-provided key](../storage/blobs/encryption-customer-provided-keys.md) for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on your premises to meet regulatory compliance requirements. Customer-provided keys enable you to pass an encryption key to Storage service using Blob APIs as part of read or write operations.
+- [Customer-managed key](../storage/common/customer-managed-keys-overview.md) for managing Azure Storage encryption whereby the key is stored in Azure Key Vault. This option provides much flexibility for you to create, rotate, disable, and revoke access to customer-managed keys. You must use Azure Key Vault to store customer-managed keys. Both key vaults and managed HSMs are supported, as described previously in *[Azure Key Vault](#azure-key-vault)* section.
+- [Customer-provided key](../storage/blobs/encryption-customer-provided-keys.md) for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on your premises to meet regulatory compliance requirements. Customer-provided keys enable you to pass an encryption key to Storage service using Blob APIs as part of read or write operations.
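For illustration, the following Azure CLI sketch grants a storage account's managed identity access to a key vault key and then points Storage service encryption at that customer-managed key. Resource names and the vault URI are placeholders, and the key vault access policy model is assumed (Azure RBAC is an alternative):

```azurecli
# Sketch only: resource names and the vault URI are placeholders.
# Give the storage account a system-assigned managed identity.
az storage account update \
  --name mystorageaccount \
  --resource-group MyResourceGroup \
  --assign-identity

# Grant that identity wrap/unwrap access to keys in the vault (access policy model).
principal_id=$(az storage account show \
  --name mystorageaccount \
  --resource-group MyResourceGroup \
  --query identity.principalId --output tsv)

az keyvault set-policy \
  --name my-kek-vault \
  --object-id "$principal_id" \
  --key-permissions get wrapKey unwrapKey

# Point Storage service encryption at the customer-managed key.
az storage account update \
  --name mystorageaccount \
  --resource-group MyResourceGroup \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault "https://my-kek-vault.vault.azure.net" \
  --encryption-key-name my-kek
```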
> [!NOTE]
> You can configure customer-managed keys (CMK) with Azure Key Vault using the **[Azure portal, Azure PowerShell, or Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)**. You can **[use .NET to specify a customer-provided key](../storage/blobs/storage-blob-customer-provided-key.md)** on a request to Blob storage.
Storage service encryption is enabled by default for all new and existing storage accounts and it [can't be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption). As shown in Figure 17, the encryption process uses the following keys to help ensure cryptographic certainty of data isolation at rest:
-- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption, and it's unique per storage account in Azure Storage. It's generated by the Azure Storage service as part of the storage account creation and is used to derive a unique key for each block of data. The Storage Service always encrypts the DEK using either the Stamp Key or a Key Encryption Key if the customer has configured customer-managed key encryption.
-- *Key Encryption Key (KEK)* is an asymmetric RSA (2048 or greater) key managed by the customer and is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault or Managed HSM. It's never exposed directly to the Azure Storage service or other services.
-- *Stamp Key (SK)* is a symmetric AES-256 key managed by Azure Storage. This key is used to protect the DEK when not using a customer-managed key.
+- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption, and it's unique per storage account in Azure Storage. It's generated by the Azure Storage service as part of the storage account creation and is used to derive a unique key for each block of data. The Storage Service always encrypts the DEK using either the Stamp Key or a Key Encryption Key if the customer has configured customer-managed key encryption.
+- *Key Encryption Key (KEK)* is an asymmetric RSA (2048 or greater) key managed by the customer and is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault or Managed HSM. It's never exposed directly to the Azure Storage service or other services.
+- *Stamp Key (SK)* is a symmetric AES-256 key managed by Azure Storage. This key is used to protect the DEK when not using a customer-managed key.
These keys protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure Storage service encryption is enabled by default and it can't be disabled.
Azure Disk encryption does not support Managed HSM or an on-premises key managem
Azure Disk encryption relies on two encryption keys for implementation, as described previously:
-- *Data Encryption Key (DEK)* is a symmetric AES-256 key used to encrypt OS and Data volumes through BitLocker or DM-Crypt. DEK itself is encrypted and stored in an internal location close to the data.
-- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key used to encrypt the Data Encryption Keys. KEK is kept in Azure Key Vault under your control including granting access permissions through Microsoft Entra ID.
+- *Data Encryption Key (DEK)* is a symmetric AES-256 key used to encrypt OS and Data volumes through BitLocker or DM-Crypt. DEK itself is encrypted and stored in an internal location close to the data.
+- *Key Encryption Key (KEK)* is an asymmetric RSA-2048 key used to encrypt the Data Encryption Keys. KEK is kept in Azure Key Vault under your control including granting access permissions through Microsoft Entra ID.
The DEK, encrypted with the KEK, is stored separately and only an entity with access to the KEK can decrypt the DEK. Access to the KEK is guarded by Azure Key Vault where you can choose to store your keys in [FIPS 140 validated hardware security modules](../key-vault/keys/hsm-protected-keys-byok.md).
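For example, Azure Disk Encryption can be enabled on a VM with a KEK by using the Azure CLI. This is a minimal sketch with placeholder resource, vault, and key names:

```azurecli
# Sketch only: enable Azure Disk Encryption on a VM, wrapping the DEK with a KEK
# stored in Key Vault. Resource names are placeholders.
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyVM \
  --disk-encryption-keyvault my-ade-vault \
  --key-encryption-key my-ade-kek \
  --volume-type ALL
```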
You're [always in control of your customer data](https://www.microsoft.com/trust
### Data deletion
Storage is allocated sparsely, which means that when a virtual disk is created, disk space isn't allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. The first time you write data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table. For more information, see [Azure data security – data cleansing and leakage](/archive/blogs/walterm/microsoft-azure-data-security-data-cleansing-and-leakage).
-When you delete a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data, if you provisioned [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage). At the primary location, you can immediately try to access the blob or entity, and you won’t find it in your index, since Azure provides strong consistency for the delete. So, you can verify directly that the data has been deleted.
+When you delete a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data, if you provisioned [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage). At the primary location, you can immediately try to access the blob or entity, and you won't find it in your index, since Azure provides strong consistency for the delete. So, you can verify directly that the data has been deleted.
In Azure Storage, all disk writes are sequential. This approach minimizes the amount of disk “seeks” but requires updating the pointers to objects every time they're written – new versions of pointers are also written sequentially. A side effect of this design is that it isn't possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there's no way to find the deleted value anymore. Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file, and updating all pointers as it goes. It then deletes the oldest log file. Therefore, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
Purge and Destroy operations must be performed using tools and processes approve
In addition to technical implementation details that enable Azure compute, networking, and storage isolation, Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems, as described in the next section.
## Security assurance processes and practices
-Azure isolation assurances are further enforced by Microsoft’s internal use of the [Security Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
+Azure isolation assurances are further enforced by Microsoft's internal use of the [Security Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provides high confidence in the Azure isolation guarantee.
-- **Security Development Lifecycle (SDL)** – The Microsoft SDL introduces security and privacy considerations throughout all phases of the development process, helping developers build highly secure software, address security compliance requirements, and reduce development costs. The guidance, best practices, [tools](https://www.microsoft.com/securityengineering/sdl/resources), and processes in the Microsoft SDL are [practices](https://www.microsoft.com/securityengineering/sdl/practices) used internally to build all Azure services and create more secure products and services. This process is also publicly documented to share Microsoft’s learnings with the broader industry and incorporate industry feedback to create a stronger security development process.
+- **Security Development Lifecycle (SDL)** – The Microsoft SDL introduces security and privacy considerations throughout all phases of the development process, helping developers build highly secure software, address security compliance requirements, and reduce development costs. The guidance, best practices, [tools](https://www.microsoft.com/securityengineering/sdl/resources), and processes in the Microsoft SDL are [practices](https://www.microsoft.com/securityengineering/sdl/practices) used internally to build all Azure services and create more secure products and services. This process is also publicly documented to share Microsoft's learnings with the broader industry and incorporate industry feedback to create a stronger security development process.
- **Tooling and processes** – All Azure code is subject to an extensive set of both static and dynamic analysis tools that identify potential vulnerabilities, ineffective security patterns, memory corruption, user privilege issues, and other critical security problems.
  - *Purpose built fuzzing* – A testing technique used to find security vulnerabilities in software products and services. It consists of repeatedly feeding modified, or fuzzed, data to software inputs to trigger hangs, exceptions, and crashes, which are fault conditions that could be used by an attacker to disrupt or take control of applications and services. The Microsoft SDL recommends [fuzzing](https://www.microsoft.com/research/blog/a-brief-introduction-to-fuzzing-and-why-its-an-important-tool-for-developers/) all attack surfaces of a software product, especially those surfaces that expose a data parser to untrusted data.
- - *Live-site penetration testing* – Microsoft conducts [ongoing live-site penetration testing](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf) to improve cloud security controls and processes, as part of the Red Teaming program described later in this section. Penetration testing is a security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. The tests are conducted against Azure infrastructure and platforms and Microsoft’s own tenants, applications, and data. Your tenants, applications, and data hosted in Azure are never targeted; however, you can conduct [your own penetration testing](../security/fundamentals/pen-testing.md) of your applications deployed in Azure.
- - *Threat modeling* – A core element of the Microsoft SDL. It’s an engineering technique used to help identify threats, attacks, vulnerabilities, and countermeasures that could affect applications and services. [Threat modeling](../security/develop/threat-modeling-tool-getting-started.md) is part of the Azure routine development lifecycle.
- - *Automated build alerting of changes to attack surface area* – [Attack Surface Analyzer](https://github.com/microsoft/attacksurfaceanalyzer) is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to “diff” an operating system's security configuration, before and after a software component is installed. This feature is important because most installation processes require elevated privileges, and once granted, they can lead to unintended system configuration changes.
-- **Mandatory security training** – The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training and any extra training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training. STRIKE offers a series of security training events throughout the year plus STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.
-- **Bug Bounty Program** – Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for you and your data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (for example, encryption, spoofing, hypervisor isolation, elevation of privileges, and so on) to better protect Azure’s infrastructure and your data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 – a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $60,000.
-- **Red Team activities** – Microsoft uses [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve Azure security. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
+ - *Live-site penetration testing* – Microsoft conducts [ongoing live-site penetration testing](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf) to improve cloud security controls and processes, as part of the Red Teaming program described later in this section. Penetration testing is a security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. The tests are conducted against Azure infrastructure and platforms and Microsoft's own tenants, applications, and data. Your tenants, applications, and data hosted in Azure are never targeted; however, you can conduct [your own penetration testing](../security/fundamentals/pen-testing.md) of your applications deployed in Azure.
+ - *Threat modeling* – A core element of the Microsoft SDL. It's an engineering technique used to help identify threats, attacks, vulnerabilities, and countermeasures that could affect applications and services. [Threat modeling](../security/develop/threat-modeling-tool-getting-started.md) is part of the Azure routine development lifecycle.
+ - *Automated build alerting of changes to attack surface area* – [Attack Surface Analyzer](https://github.com/microsoft/attacksurfaceanalyzer) is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to “diff” an operating system's security configuration, before and after a software component is installed. This feature is important because most installation processes require elevated privileges, and once granted, they can lead to unintended system configuration changes.
+- **Mandatory security training** – The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training and any extra training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training. STRIKE offers a series of security training events throughout the year plus STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.
+- **Bug Bounty Program** – Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for you and your data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The [Microsoft Bug Bounty Program](https://www.microsoft.com/msrc/bounty) is designed to supplement and encourage research in relevant technologies (for example, encryption, spoofing, hypervisor isolation, elevation of privileges, and so on) to better protect Azure's infrastructure and your data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 – a significant amount to incentivize participation and vulnerability disclosure. The bounty range for [vulnerability reports on Azure services](https://www.microsoft.com/msrc/bounty-microsoft-azure) is up to $60,000.
+- **Red Team activities** – Microsoft uses [Red Teaming](https://download.microsoft.com/download/C/1/9/C1990DBA-502F-4C2A-848D-392B93D9B9C3/Microsoft_Enterprise_Cloud_Red_Teaming.pdf), a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve Azure security. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
If you're accustomed to a traditional on-premises data center deployment, you would typically conduct a risk assessment to gauge your threat exposure and formulate mitigating measures when migrating to the cloud. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help you with this comparison.
If you're accustomed to a traditional on-premises data center deployment, you wo
A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep other customers from accessing your data or applications. If you're migrating from traditional on-premises physically isolated infrastructure to the cloud, this section addresses concerns that may be of interest to you.
### Physical versus logical security considerations
-Table 6 provides a summary of key security considerations for physically isolated on-premises deployments (bare metal) versus logically isolated cloud-based deployments (Azure). It’s useful to review these considerations prior to examining risks identified to be specific to shared cloud environments.
+Table 6 provides a summary of key security considerations for physically isolated on-premises deployments (bare metal) versus logically isolated cloud-based deployments (Azure). It's useful to review these considerations prior to examining risks identified to be specific to shared cloud environments.
**Table 6.** Key security considerations for physical versus logical isolation
PaaS VMs offer more advanced **protection against persistent malware** infection
#### Side channel attacks
Microsoft has been at the forefront of mitigating **speculative execution side channel attacks** that exploit hardware vulnerabilities in modern processors that use hyper-threading. In many ways, these issues are similar to the Spectre (variant 2) side channel attack, which was disclosed in 2018. Multiple new speculative execution side channel issues were disclosed by both Intel and AMD in 2022. To address these vulnerabilities, Microsoft has developed and optimized Hyper-V **[HyperClear](https://techcommunity.microsoft.com/t5/virtualization/hyper-v-hyperclear-mitigation-for-l1-terminal-fault/ba-p/382429)**, a comprehensive and high performing side channel vulnerability mitigation architecture. HyperClear relies on three main components to ensure strong inter-VM isolation:
-- **Core scheduler** to avoid sharing of a CPU core’s private buffers and other resources.
-- **Virtual-processor address space isolation** to avoid speculative access to another virtual machine’s memory or another virtual CPU core’s private state.
-- **Sensitive data scrubbing** to avoid leaving private data anywhere in hypervisor memory other than within a virtual processor’s private address space so that this data can't be speculatively accessed in the future.
+- **Core scheduler** to avoid sharing of a CPU core's private buffers and other resources.
+- **Virtual-processor address space isolation** to avoid speculative access to another virtual machine's memory or another virtual CPU core's private state.
+- **Sensitive data scrubbing** to avoid leaving private data anywhere in hypervisor memory other than within a virtual processor's private address space so that this data can't be speculatively accessed in the future.
These protections have been deployed to Azure and are available in Windows Server 2016 and later supported releases.
> [!NOTE]
> The Hyper-V HyperClear architecture has proven to be a readily extensible design that helps provide strong isolation boundaries against a variety of speculative execution side channel attacks with negligible impact on performance.
-When VMs belonging to different customers are running on the same physical server, it's the Hypervisor’s job to ensure that they can't learn anything important about what the other customer’s VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. These exploits have received plenty of attention in the academic press where researchers have been seeking to learn more specific information about what's going on in a peer VM.
+When VMs belonging to different customers are running on the same physical server, it's the Hypervisor's job to ensure that they can't learn anything important about what the other customer's VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. These exploits have received plenty of attention in the academic press where researchers have been seeking to learn more specific information about what's going on in a peer VM.
-Of particular interest are efforts to learn the **cryptographic keys of a peer VM** by measuring the timing of certain memory accesses and inferring which cache lines the victim’s VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. In addition to the previously mentioned Hyper-V HyperClear mitigation architecture that's in use by Azure, there are several extra mitigations in Azure that reduce the risk of such an attack:
+Of particular interest are efforts to learn the **cryptographic keys of a peer VM** by measuring the timing of certain memory accesses and inferring which cache lines the victim's VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. In addition to the previously mentioned Hyper-V HyperClear mitigation architecture that's in use by Azure, there are several extra mitigations in Azure that reduce the risk of such an attack:
- The standard Azure cryptographic libraries have been designed to resist such attacks by not having cache access patterns depend on the cryptographic keys being used.
- Azure uses an advanced VM host placement algorithm that is highly sophisticated and nearly impossible to predict, which helps reduce the chances of adversary-controlled VM being placed on the same host as the target VM.
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Check out more code samples:
[Code sample page]: https://samples.azuremaps.com/
[Create a measuring tool sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Create%20a%20measuring%20tool/Create%20a%20measuring%20tool.html
[Create a measuring tool]: https://samples.azuremaps.com/drawing-tools-module/create-a-measuring-tool
-[Draw and search polygon area sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Draw%20and%20search%20polygon%20area/Draw%20and%20search%20polygon%20area.html
[Draw and search polygon area]: https://samples.azuremaps.com/drawing-tools-module/draw-and-search-polygon-area
[Drawing tools events sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Drawing%20tools%20events/Drawing%20tools%20events.html
[Drawing tools events]: https://samples.azuremaps.com/drawing-tools-module/drawing-tools-events
azure-monitor Data Collection Log Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-text.md
The incoming stream of data includes the columns in the following table.
## Custom table
Before you can collect log data from a text file, you must create a custom table in your Log Analytics workspace to receive the data. The table schema must match the data you are collecting, or you must add a transformation to ensure that the output schema matches the table.
->
-> Warning: You shouldn’t use an existing custom log table used by MMA agents. Your MMA agents won't be able to write to the table once the first AMA agent writes to the table. You should create a new table for AMA to use to prevent MMA data loss.
->
+> [!Warning]
+> You shouldn’t use an existing custom log table used by MMA agents. Your MMA agents won't be able to write to the table once the first AMA agent writes to the table. You should create a new table for AMA to use to prevent MMA data loss.
For example, you can use the following PowerShell script to create a custom table with `RawData` and `FilePath`. You wouldn't need a transformation for this table because the schema matches the default schema of the incoming stream.
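The PowerShell script referenced in that article isn't reproduced in this change log. As an alternative sketch under stated assumptions, a custom table with the same two columns could be created with the Azure CLI, assuming the `az monitor log-analytics workspace table create` command available in recent CLI versions; the resource group, workspace, and table names are placeholders:

```azurecli
# Sketch only: create a custom table for Azure Monitor Agent text log collection.
# Resource group, workspace, and table names are placeholders; custom table names end in _CL.
az monitor log-analytics workspace table create \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --name MyTextLogs_CL \
  --columns TimeGenerated=datetime RawData=string FilePath=string
```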
azure-monitor Log Alert Rule Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/log-alert-rule-health.md
This table describes the possible resource health status values for a log search
|Syntax error |The query is failing because of a syntax error.| Review the query and try again.|
|The response size is too large|The query is failing because its response size is too large.|Review your query and the [log queries limits](../service-limits.md#log-queries-and-language).|
|Query consuming too many resources |The query is failing because it's consuming too many resources.|Review your query. View our [best practices for optimizing log queries](../logs/query-optimization.md).|
-|Query validation error|The query is failing because of a validation error. |Check if the table referenced in your query is set to [Compare the Basic and Analytics log data plans](../logs/basic-logs-configure.md#compare-the-basic-and-analytics-log-data-plans), which doesn't support alerts. |
+|Query validation error|The query is failing because of a validation error. |Check if the table referenced in your query is set to the [Basic or Auxiliary table plans](../logs/logs-table-plans.md), which don't support alerts. |
|Workspace not found |The target Log Analytics workspace for this alert rule couldn't be found. |The target specified in the scope of the alert rule was moved, renamed, or deleted. Recreate your alert rule with a valid Log Analytics workspace target.|
|Application Insights resource not found|The target Application Insights resource for this alert rule couldn't be found. |The target specified in the scope of the alert rule was moved, renamed, or deleted. Recreate your alert rule with a valid Log Analytics workspace target. |
|Query is throttled|The query is failing for the rule because of throttling (Error 429). |Review your query and the [log queries limits](../service-limits.md#user-query-throttling). |
This table describes the possible resource health status values for a log search
|NSP validation failed |The query is failing because of NSP validation issues.| Review your network security perimeter rules to ensure your alert rule is correctly configured.|
|Active alerts limit exceeded |Alert evaluation failed due to exceeding the limit of fired (non-resolved) alerts per day. |See [Azure Monitor service limits](../service-limits.md). |
|Dimension combinations limit exceeded | Alert evaluation failed due to exceeding the allowed limit of dimension combination values meeting the threshold.|See [Azure Monitor service limits](../service-limits.md). |
-|Unavailable for unknown reason | Today, the report health status is supported only for rules with a frequency of 15 minutes or lower.| For using Resource Health the fequency should be 5 minutes or lower. |
+|Unavailable for unknown reason | Today, the report health status is supported only for rules with a frequency of 15 minutes or lower.| For using Resource Health, the frequency should be 5 minutes or lower. |
## Add a new resource health alert
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based Application Insights resources allow you to take advantage of th
When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate changes the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
-Your classic resource data persists and is subject to the retention settings on your classic Application Insights resource. All new data ingested post migration is subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level).
+Your classic resource data persists and is subject to the retention settings on your classic Application Insights resource. All new data ingested post migration is subject to the [retention settings](../logs/data-retention-configure.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-configure.md#configure-table-level-retention).
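For example, table-level retention in the target workspace can be adjusted with the Azure CLI. In this sketch, the resource names, the table (`AppTraces` is used here for illustration), and the retention values are placeholders:

```azurecli
# Sketch only: set interactive retention to 90 days and total retention to 365 days
# for one table in a Log Analytics workspace. Resource names and values are placeholders.
az monitor log-analytics workspace table update \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --name AppTraces \
  --retention-time 90 \
  --total-retention-time 365
```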
*The migration process is permanent and can't be reversed.* After you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. After you migrate, you can change the target workspace as often as needed.
If you don't need to migrate an existing resource, and instead want to create a
- Check your current retention settings under **Settings** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource.
> [!NOTE]
- > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#configure-retention-and-archive-at-the-table-level).
+ > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-configure.md?tabs=portal-1%2cportal-2#configure-table-level-retention).
> - If you've selected data retention longer than 90 days on data ingested into the classic Application Insights resource prior to migration, data retention continues to be billed through that Application Insights resource until the data exceeds the retention period.
> - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, use that setting to control the retention days for the telemetry data still saved in your classic resource's storage.
- Understand [workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
There's usually no difference, with two exceptions.
- Application Insights resources that were receiving 1 GB per month free via the legacy Application Insights pricing model don't receive the free data.
- Application Insights resources that were in the basic pricing tier before April 2018 continue to be billed at the same nonregional price point as before April 2018. Application Insights resources created after that time, or those resources converted to be workspace-based, will receive the current regional pricing. For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/).
-The migration to workspace-based Application Insights offers many options to further [optimize cost](../logs/cost-logs.md), including [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers), [dedicated clusters](../logs/cost-logs.md#dedicated-clusters), and [basic logs](../logs/cost-logs.md#basic-logs).
+The migration to workspace-based Application Insights offers many options to further [optimize cost](../logs/cost-logs.md), including [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers), [dedicated clusters](../logs/cost-logs.md#dedicated-clusters), and [Basic and Auxiliary logs](../logs/cost-logs.md#basic-and-auxiliary-table-plans).
### How will telemetry capping work?
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Application Insights is the feature of Azure Monitor for monitoring your cloud n
You can create a resource in Application Insights for each application that you're going to monitor or a single application resource for multiple applications. Whether to use separate resources or a single resource for multiple applications is a fundamental decision of your monitoring strategy. Separate resources can save costs and prevent mixing data from different applications, but a single resource can simplify your monitoring by keeping all relevant telemetry together. See [How many Application Insights resources should I deploy](app/create-workspace-resource.md#how-many-application-insights-resources-should-i-deploy) for criteria to help you make this design decision. When you create the application resource, you must select whether to use classic or workspace-based. See [Create an Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource) to create a classic application.
-See [Workspace-based Application Insights resources](app/create-workspace-resource.md) to create a workspace-based application. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separately from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure).
+See [Workspace-based Application Insights resources](app/create-workspace-resource.md) to create a workspace-based application. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separately from your Log Analytics workspace.
### Configure codeless or code-based monitoring
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
After you apply one or more of these changes to your ConfigMaps, apply it to you
### Configure Basic Logs
-You can save on data ingestion costs on ContainerLog in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+You can save on data ingestion costs on ContainerLog in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/logs-table-plans.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema](container-insights-logs-schema.md).
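As a sketch of what that configuration might look like with the Azure CLI (reusing the placeholder `ContosoRG`/`ContosoWorkspace` names from elsewhere in these articles), you can switch the `ContainerLogV2` table between the Basic and Analytics plans with a table update:

```azurecli
# Switch the ContainerLogV2 table to the Basic table plan (placeholder resource names)
az monitor log-analytics workspace table update --resource-group ContosoRG \
    --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Basic

# Switch it back to the Analytics plan if you later need full query capabilities
az monitor log-analytics workspace table update --resource-group ContosoRG \
    --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Analytics
```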
azure-monitor Container Insights Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logs-schema.md
The following table highlights the key differences between using ContainerLogV2
| - | -- | - |
| Schema | Details at [ContainerLog](/azure/azure-monitor/reference/tables/containerlog). | Details at [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2).<br>Additional columns are:<br>- `ContainerName`<br>- `PodName`<br>- `PodNamespace`<br>- `LogLevel`<sup>1</sup><br>- `KubernetesMetadata`<sup>2</sup> |
| Onboarding | Only configurable through ConfigMap. | Configurable through both ConfigMap and DCR. <sup>3</sup>|
-| Pricing | Only compatible with full-priced analytics logs. | Supports the low cost [basic logs](../logs/basic-logs-configure.md) tier in addition to analytics logs. |
+| Pricing | Only compatible with full-priced analytics logs. | Supports the low cost [basic logs](../logs/logs-table-plans.md) tier in addition to analytics logs. |
| Querying | Requires multiple join operations with inventory tables for standard queries. | Includes additional pod and container metadata to reduce query complexity and join operations. |
| Multiline | Not supported, multiline entries are split into multiple rows. | Support for multiline logging to allow consolidated, single entries for multiline output. |
The following screenshots show multi-line logging for a Go exception stack trace:
## Next steps
-* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2.
+* Configure [Basic Logs](../logs/logs-table-plans.md) for ContainerLogv2.
* Learn how to [query data](./container-insights-log-query.md#container-logs) from ContainerLogV2.
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Once Container insights is enabled for a cluster, perform the following actions
- Container insights collects many of the same metric values as [Prometheus](#enable-scraping-of-prometheus-metrics). You can disable collection of these metrics by configuring Container insights to only collect **Logs and events** as described in [Enable cost optimization settings in Container insights](../containers/container-insights-cost-config.md#enable-cost-settings). This configuration disables the Container insights experience in the Azure portal, but you can use Grafana to visualize Prometheus metrics and Log Analytics to analyze log data collected by Container insights.
- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected.
-- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logs-schema.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/basic-logs-configure.md).
+- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logs-schema.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/logs-table-plans.md).
If you have an existing solution for collection of logs, then follow the guidance for that tool or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) to forward to an alternate system.
azure-monitor Cost Estimate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-estimate.md
Last updated 10/27/2023
# Estimate Azure Monitor costs
-Your Azure Monitor cost will vary significantly based on your expected utilization and configuration. Use the [Azure Monitor Pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to get cost estimates for different features of Azure Monitor based on your particular environment.
+Your Azure Monitor cost varies significantly based on your expected utilization and configuration. Use the [Azure Monitor Pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to get cost estimates for different features of Azure Monitor based on your particular environment.
Since Azure Monitor has [multiple types of charges](cost-usage.md#pricing-model), its calculator has multiple categories. See the sections below for an explanation of these categories and guidance for providing estimates. See [Azure Monitor Pricing](https://azure.microsoft.com/pricing/details/monitor/) for current pricing details.
-Some of the values required by the calculator might be difficult to provide if you're just getting started with Azure Monitor. For example, you might have no idea of the volume of analytics logs generated from the different Azure resources that you intend to monitor. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
+Some of the values required by the calculator might be difficult to provide if you're just getting started with Azure Monitor. For example, you might have no idea of the volume of Analytics logs generated from the different Azure resources that you intend to monitor. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
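For example, one way to measure the observed billable volume is to query the `Usage` table. The following Azure CLI sketch assumes the `log-analytics` CLI extension is installed and that `<workspace-customer-id>` is replaced with your workspace's customer ID (GUID):

```azurecli
# Summarize billable data volume (GB) per data type over the last 30 days (sketch)
az monitor log-analytics query \
    --workspace <workspace-customer-id> --timespan P30D \
    --analytics-query "Usage | where TimeGenerated > ago(30d) | where IsBillable == true | summarize BillableGB = sum(Quantity) / 1000 by DataType | sort by BillableGB desc"
```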
[!INCLUDE [azure-monitor-cost-optimization](../../includes/azure-monitor-cost-optimization.md)]
## Log data ingestion
-This section includes the ingestion and retention of data in your Log Analytics workspaces. This includes such features as Container insights and Application insights in addition to resource logs collected from your Azure resources and agents installed on your virtual machines. This is where the bulk of monitoring costs will typically be incurred in most environments.
+This section includes the ingestion and retention of data in your Log Analytics workspaces. This includes such features as Container insights and Application insights in addition to resource logs collected from your Azure resources and agents installed on your virtual machines. This is where the bulk of monitoring costs are typically incurred in most environments.
| Category | Description |
|:|:|
-| Estimate Data Volume For Monitoring VMs | Data collected from your virtual machines either using VM insights or by creating a DCR to events and performance data. The data collected from each VM will vary significantly depending on your particular collection settings and the workloads running on your virtual machines, so you should validate these estimates in your own environment. |
+| Estimate Data Volume For Monitoring VMs | Data collected from your virtual machines either using VM insights or by creating a DCR to events and performance data. The data collected from each virtual machine varies significantly depending on your collection settings and the workloads running on your virtual machines, so you should validate these estimates in your own environment. |
| Estimate Data Volume Using Container Insights | Data collected from your Kubernetes clusters. The estimate is based on the number of clusters and their configuration. This estimate applies only for metrics and inventory data collected. Container logs (stdout, stderr, and environmental variables) vary significantly based on the log sizes generated by the workload, and they're excluded from this estimate. You should include their volume in the *Analytics Logs* category. |
-| Estimate Data Volume Based On Application Activity | Data collected from your workspace-based applications using Application Insights. The data collected from each application will vary significantly depending on your particular collection settings and applications, so you should validate these estimates in your own environment.
-| Analytics Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Log Analytics tables not configured for [basic logs](logs/basic-logs-configure.md). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
-| Basic Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Log Analytics tables configured for [basic logs](logs/basic-logs-configure.md). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
-| Interactive Data Retention | [Interactive retention](logs/data-retention-archive.md) setting for your Log Analytics workspace. |
-| Data Archive | [Archive](logs/data-retention-archive.md) setting for your Log Analytics workspace. |
-| Basic Logs Search Queries | Estimated number and scanned data of the queries that you expect to run using tables configured for [basic logs](logs/basic-logs-configure.md). |
-| Search Jobs | Estimated number and scanned data of the [search jobs](logs/search-jobs.md) that you expect to run against [archived data](logs/data-retention-archive.md). |
-| Platform logs| Resource logs collected from Azure resources to an Event Hub, Storage account, or a partner. This doesn't include logs sent to your Log Analytics workspace, which are included in the **Analytics Logs** and **Basic Logs** category. This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Estimate Data Volume Based On Application Activity | Data collected from your workspace-based applications using Application Insights. The data collected from each application varies significantly depending on your particular collection settings and applications, so you should validate these estimates in your own environment.
+| Analytics Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Logs tables configured for the [Analytics table plan](../azure-monitor/logs/data-platform-logs.md#table-plans). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Basic Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Logs tables configured for [Basic table plan](../azure-monitor/logs/data-platform-logs.md#table-plans). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Auxiliary Logs | Resource logs collected from non-Azure resources sent to Logs tables configured for [Auxiliary table plan](logs/logs-table-plans.md). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Interactive Data Retention | [Interactive retention](logs/data-retention-configure.md) setting for your Log Analytics workspace. |
+| Data Archive | [Total retention](logs/data-retention-configure.md) setting for your Log Analytics workspace. |
+| Basic and Auxiliary Logs Search Queries | Estimated number and scanned data of the queries that you expect to run using tables configured for the [Basic and Auxiliary table plans](../azure-monitor/logs/data-platform-logs.md#table-plans). |
+| Search Jobs | Estimated number and scanned data of the [search jobs](logs/search-jobs.md) that you expect to run against [data in long-term retention](logs/data-retention-configure.md). |
+| Platform logs| Resource logs collected from Azure resources to an Event Hub, Storage account, or a partner. This doesn't include logs sent to your Log Analytics workspace, which are included in the **Analytics Logs**, **Basic Logs**, and **Auxiliary Logs** categories. This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
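For the **Search Jobs** row, the number of jobs and the volume of data they scan are the cost drivers. As a rough sketch of what such a job looks like with the Azure CLI (placeholder names, query, and dates; the results land in a new table whose name must end in `_SRCH`):

```azurecli
# Run a search job against long-term retention data in the Syslog table (placeholder values)
az monitor log-analytics workspace table search-job create --resource-group ContosoRG \
    --workspace-name ContosoWorkspace --name SyslogErrors_SRCH \
    --search-query "Syslog | where SyslogMessage has 'error'" \
    --start-search-time "2024-01-01T00:00:00Z" --end-search-time "2024-03-31T00:00:00Z" \
    --limit 1000
```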
## Managed Prometheus
This section includes charges for the ingestion and query of Prometheus metrics by your Kubernetes clusters.
azure-monitor Cost Meters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-meters.md
The following table lists the meters used to bill for data ingestion in your Log
| Pricing tier | ServiceName | MeterName | Regional Meter? |
| -- | -- | -- | -- |
+| (any) | Azure Monitor | Auxiliary Logs Data Ingestion | yes |
| (any) | Azure Monitor | Basic Logs Data Ingestion | yes |
| Pay-as-you-go | Log Analytics | Pay-as-you-go Data Ingestion | yes |
| 100 GB/day Commitment Tier | Azure Monitor | 100 GB Commitment Tier Capacity Reservation | yes |
The following table lists the meters used to bill for data ingestion in your Log
| Standalone (legacy tier) | Log Analytics | Pay-as-you-go Data Analyzed | no |
| Standard (legacy tier) | Log Analytics | Standard Data Analyzed | no |
| Premium (legacy tier) | Log Analytics | Premium Data Analyzed | no |
-| (any) | Azure Monitor | Free Benefit - M365 Defender Data Ingestion | yes |
+| (any) | Azure Monitor | Free Benefit - Microsoft 365 Defender Data Ingestion | yes |
The **Standard Data Included per Node** meter is used both for the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance, and also the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud), for workspaces in any pricing tier.
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
Several other features don't have a direct cost, but you instead pay for the ing
| Type | Description |
|:|:|
-| Logs |Ingestion, retention, and export of data in [Log Analytics workspaces](logs/log-analytics-workspace-overview.md) and [legacy Application insights resources](app/convert-classic-resource.md). Log data ingestion will typically be the largest component of Azure Monitor charges for most customers. There's no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
+| Logs |Ingestion, retention, and export of data in [Log Analytics workspaces](logs/log-analytics-workspace-overview.md) and [legacy Application insights resources](app/convert-classic-resource.md). Log data ingestion is the largest component of Azure Monitor charges for most customers. There's no charge for querying this data except in the case of [Basic and Auxiliary logs](../azure-monitor/logs/data-platform-logs.md#table-plans) or [data in long-term retention](logs/data-retention-configure.md).<br><br>Charges for Logs can vary significantly on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there's a charge for the workspace data ingestion and collection. |
| Metrics | There's no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There's a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also
### Automated mails and alerts
Rather than manually analyzing your costs in the Azure portal, you can automate delivery of information using the following methods.
-- **Daily cost analysis emails.** After you configure your Cost Analysis view, you should click **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
+- **Daily cost analysis emails.** After you configure your Cost Analysis view, select **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
- **Budget alerts.** To be notified if there are significant increases in your spending, create a [budget alert](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) for a single workspace or group of workspaces.
### Export usage details
For example, usage from Log Analytics can be found by first filtering on the **M
1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention),
2. **Insight and Analytics** (used by some of the legacy pricing tiers), and
-3. **Azure Monitor** (used by most other Log Analytics features such as Commitment Tiers, Basic Logs ingesting, Data Archive, Search Queries, Search Jobs, etc.)
+3. **Azure Monitor** (used by most other Log Analytics features such as Commitment Tiers, Basic Logs ingestion, Long-Term Retention, Search Queries, Search Jobs, and so on)
Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
There are several approaches to view the benefits a workspace receives from offe
Since a usage export has both the number of units of usage and their cost, you can use this export to see the benefits you're receiving. In the usage export, to see the benefits, filter the *Instance ID* column to your workspace. (To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".) Then filter on the Meter to either of the following 2 meters:
-- **Standard Data Included per Node**: this meter is under the service "Insight and Analytics" and tracks the benefits received when a workspace in either in Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled. Each of these allowances provide a 500 MB/server/day data allowance.
+- **Standard Data Included per Node**: this meter is under the service "Insight and Analytics" and tracks the benefits received when a workspace is either in the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled. Each of these allowances provides a 500 MB/server/day data allowance.
- **Free Benefit - M365 Defender Data Ingestion**: this meter, under the service "Azure Monitor", tracks the benefit from the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
Operation
(This functionality of reporting the benefits used in the `Operation` table started January 27, 2024.)
> [!TIP]
-> If you [increase the data retention](logs/data-retention-archive.md) of the [Operation](/azure/azure-monitor/reference/tables/operation) table, you will be able to view these benefit trends over longer periods.
+> If you [increase the data retention](logs/data-retention-configure.md) of the [Operation](/azure/azure-monitor/reference/tables/operation) table, you will be able to view these benefit trends over longer periods.
>
## Usage and estimated costs
A. Estimated monthly charges based on usage from the past 31 days using the curr
B. Estimated monthly charges using different commitment tiers.<br> C. Billable data ingestion by solution from the past 31 days.
-To explore the data in more detail, click on the icon in the upper-right corner of either chart to work with the query in Log Analytics.
+To explore the data in more detail, select the icon in the upper-right corner of either chart to work with the query in Log Analytics.
:::image type="content" source="logs/media/manage-cost-storage/logs.png" lightbox="logs/media/manage-cost-storage/logs.png" alt-text="Screenshot of log query with Usage table in Log Analytics.":::
Depending on the number of nodes of the suite that your organization purchased,
Workspaces linked to [classic Azure Migrate](/azure/migrate/migrate-services-overview#azure-migrate-versions) receive free data benefits for the data tables related to Azure Migrate (`ServiceMapProcess_CL`, `ServiceMapComputer_CL`, `VMBoundPort`, `VMConnection`, `VMComputer`, `VMProcess`, `InsightsMetrics`). This version of Azure Migrate was retired in February 2024.
-Starting from 1 July 2024, the data benefit for Azure Migrate in Log Analytics will no longer be available. We suggest moving to the [Azure Migrate agentless dependency analysis](/azure/migrate/how-to-create-group-machine-dependencies-agentless). If you continue with agent-based dependency analysis, standard [Azure Monitor charges](https://azure.microsoft.com/pricing/details/monitor/) will apply for the data ingestion that enables dependency visualization.
+Starting from 1 July 2024, the data benefit for Azure Migrate in Log Analytics will no longer be available. We suggest moving to the [Azure Migrate agentless dependency analysis](/azure/migrate/how-to-create-group-machine-dependencies-agentless). If you continue with agent-based dependency analysis, standard [Azure Monitor charges](https://azure.microsoft.com/pricing/details/monitor/) apply for the data ingestion that enables dependency visualization.
## Next steps
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
The Azure Monitor activity log is a platform log that provides insight into subs
For more functionality, create a diagnostic setting to send the activity log to one or more of these locations for the following reasons:
-- Send to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting and for [longer retention of up to 12 years](../logs/data-retention-archive.md).
+- Send to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting and for [longer retention of up to 12 years](../logs/data-retention-configure.md).
- Send to Azure Event Hubs to forward outside of Azure.
- Send to Azure Storage for cheaper, long-term archiving.
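As an illustration of the Log Analytics destination, a subscription-level diagnostic setting can be created with the Azure CLI. This is a sketch only; the setting name, location, workspace resource ID, and category list are placeholders to replace with your own values:

```azurecli
# Send Administrative and Security activity log events to a Log Analytics workspace (placeholder values)
az monitor diagnostic-settings subscription create \
    --name ExportActivityLog \
    --location eastus2 \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace" \
    --logs '[{"category": "Administrative", "enabled": true}, {"category": "Security", "enabled": true}]'
```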
You can also access activity log events by using the following methods:
- Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the activity log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).
- Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the activity log from the CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).
- Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the activity log from a REST client.
## Legacy collection methods
> [!NOTE]
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
There are multiple methods to create transformations depending on the data colle
With transformations, you can send data to multiple destinations in a Log Analytics workspace by using a single DCR. You provide a KQL query for each destination, and the results of each query are sent to their corresponding location. You can send different sets of data to different tables or use multiple queries to send different sets of data to the same table.
-For example, you might send event data into Azure Monitor by using the Logs Ingestion API. Most of the events should be sent an analytics table where it could be queried regularly, while audit events should be sent to a custom table configured for [basic logs](../logs/basic-logs-configure.md) to reduce your cost.
+For example, you might send event data into Azure Monitor by using the Logs Ingestion API. Most of the events should be sent to an analytics table where they can be queried regularly, while audit events should be sent to a custom table configured for [basic logs](../logs/logs-table-plans.md) to reduce your cost.
To use multiple destinations, you must currently either manually create a new DCR or [edit an existing one](data-collection-rule-edit.md). See the [Samples](#samples) section for examples of DCRs that use multiple destinations.
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Last updated 08/16/2023
The Diagnostic Settings Storage Retention feature is being deprecated. To configure retention for logs and metrics sent to an Azure Storage account, use Azure Storage Lifecycle Management. This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) for retention.
-For logs sent to a Log Analytics workspace, retention is set for each table on the **Tables** page of your workspace. For more information on Log Analytics workspace retention, see [Data retention and archive in Azure Monitor Logs](../logs/data-retention-archive.md).
+For logs sent to a Log Analytics workspace, retention is set for each table on the **Tables** page of your workspace. For more information on Log Analytics workspace retention, see [Manage data retention in a Log Analytics workspace](../logs/data-retention-configure.md).
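As a rough sketch of the lifecycle management side, a policy that deletes diagnostic log blobs after a year might look like the following. The storage account name, rule name, prefix, and retention period are placeholders, not prescribed values:

```azurecli
# Delete blobs in containers whose names start with "insights-logs" once they are older than 365 days (sketch)
az storage account management-policy create \
    --account-name contosologs \
    --resource-group ContosoRG \
    --policy '{
      "rules": [
        {
          "enabled": true,
          "name": "delete-diagnostic-logs",
          "type": "Lifecycle",
          "definition": {
            "actions": { "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 365 } } },
            "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "insights-logs" ] }
          }
        }
      ]
    }'
```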
> [!IMPORTANT]
> **Deprecation Timeline.**
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
The following sample output data is from Azure Event Hubs for a resource log:
Send resource logs to Azure Storage to retain them for archiving. After you've created the diagnostic setting, a storage container is created in the storage account as soon as an event occurs in one of the enabled log categories.
> [!NOTE]
-> An alternate strategy for archiving is to send the resource log to a Log Analytics workspace with an [archive policy](../logs/data-retention-archive.md).
+> An alternate to archiving is to send the resource log to a table in your Log Analytics workspace with [low-cost, long-term retention](../logs/data-retention-configure.md).
The blobs within the container use the following naming convention:
azure-monitor Azure Cli Log Analytics Workspace Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-cli-log-analytics-workspace-sample.md
- Title: Managing Azure Monitor Logs in Azure CLI
-description: Learn how to use Azure CLI commands to manage a workspace in Azure Monitor Logs, including how workspaces interact with other Azure services.
- Previously updated : 08/09/2023---
-# Managing Azure Monitor Logs in Azure CLI
-
-Use the Azure CLI commands described here to manage your log analytics workspace in Azure Monitor.
--
-## Create a workspace for Monitor Logs
-
-Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group or use an existing resource group. To create a workspace, use the [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create) command.
-
-```azurecli
-az group create --name ContosoRG --location eastus2
-az monitor log-analytics workspace create --resource-group ContosoRG \
- --workspace-name ContosoWorkspace
-```
-
-For more information about workspaces, see [Azure Monitor Logs overview](./data-platform-logs.md).
-
-## List tables in your workspace
-
-Each workspace contains tables with columns that have multiple rows of data. Each table is defined by a unique set of columns of data provided by the data source.
-
-To see the tables in your workspace, use the [az monitor log-analytics workspace table list](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-list) command:
-
-```azurecli
-az monitor log-analytics workspace table list --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --output table
-```
-
-The output value `table` presents the results in a more readable format. For more information, see [Output formatting](/cli/azure/use-cli-effectively#output-formatting).
-
-To change the retention time for a table, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command:
-
-```azurecli
-az monitor log-analytics workspace table update --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name Syslog --retention-time 45
-```
-
-The retention time is between 30 and 730 days.
-
-For more information about tables, see [Data structure](./log-analytics-workspace-overview.md#data-structure).
-
-## Delete a table
-
-You can delete [Custom Log](logs-ingestion-api-overview.md), [Search Results](search-jobs.md) and [Restored Logs](restore.md) tables.
-
-To delete a table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-data-export-delete) command:
-
-```azurecli
-az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name MySearchTable_SRCH
-```
-
-## Export data from selected tables
-
-You can continuously export data from selected tables to an Azure storage account or Azure Event Hubs. Use the [az monitor log-analytics workspace data-export create](/cli/azure/monitor/log-analytics/workspace/data-export#az-monitor-log-analytics-workspace-data-export-create) command:
-
-```azurecli
-az monitor log-analytics workspace data-export create --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name DataExport --table Syslog \
- --destination /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Storage/storageAccounts/exportaccount \
- --enable
-```
-
-To see your data exports, run the [az monitor log-analytics workspace data-export list](/cli/azure/monitor/log-analytics/workspace/data-export#az-monitor-log-analytics-workspace-data-export-list) command.
-
-```azurecli
-az monitor log-analytics workspace data-export list --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --output table
-```
-
-To delete a data export, run the [az monitor log-analytics workspace data-export delete](/cli/azure/monitor/log-analytics/workspace/data-export#az-monitor-log-analytics-workspace-data-export-delete) command. The `--yes` parameter skips confirmation.
-
-```azurecli
-az monitor log-analytics workspace data-export delete --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name DataExport --yes
-```
-
-For more information about data export, see [Log Analytics workspace data export in Azure Monitor](./logs-data-export.md).
--
-## Manage a linked service
-
-Linked services define a relation from the workspace to another Azure resource. Azure Monitor Logs and Azure resources use this connection in their operations. Example uses of linked services include an automation account and a workspace association to customer-managed keys.
-
-To create a linked service, run the [az monitor log-analytics workspace linked-service create](/cli/azure/monitor/log-analytics/workspace/linked-service#az-monitor-log-analytics-workspace-linked-service-create) command:
-
-```azurecli
-az monitor log-analytics workspace linked-service create --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name linkedautomation \
- --resource-id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Web/sites/ContosoWebApp09
-
-az monitor log-analytics workspace linked-service list --resource-group ContosoRG \
- --workspace-name ContosoWorkspace
-```
-
-To remove a linked service relation, run the [az monitor log-analytics workspace linked-service delete](/cli/azure/monitor/log-analytics/workspace/linked-service#az-monitor-log-analytics-workspace-linked-service-delete) command:
-
-```azurecli
-az monitor log-analytics workspace linked-service delete --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name linkedautomation
-```
-
-For more information, see [az monitor log-analytics workspace linked-service](/cli/azure/monitor/log-analytics/workspace/linked-service).
-
-## Manage linked storage
-
-If you provide and manage your own storage account for log analytics, you can manage it with these Azure CLI commands.
-
-To link your workspace to a storage account, run the [az monitor log-analytics workspace linked-storage create](/cli/azure/monitor/log-analytics/workspace/linked-storage#az-monitor-log-analytics-workspace-linked-storage-create) command:
-
-```azurecli
-az monitor log-analytics workspace linked-storage create --resource-group ContosoRG \
- --workspace-name ContosoWorkspace \
- --storage-accounts /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Storage/storageAccounts/contosostorage \
- --type Alerts
-
-az monitor log-analytics workspace linked-storage list --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --output table
-```
-
-To remove the link to a storage account, run the [az monitor log-analytics workspace linked-storage delete](/cli/azure/monitor/log-analytics/workspace/linked-storage#az-monitor-log-analytics-workspace-linked-storage-delete) command:
-
-```azurecli
-az monitor log-analytics workspace linked-storage delete --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --type Alerts
-```
-
-For more information, see, [Using customer-managed storage accounts in Azure Monitor Log Analytics](./private-storage.md).
-
-## Manage intelligence packs
-
-To see the available intelligence packs, run the [az monitor log-analytics workspace pack list](/cli/azure/monitor/log-analytics/workspace/pack#az-monitor-log-analytics-workspace-pack-list) command. The command also tells you whether the pack is enabled.
-
-```azurecli
-az monitor log-analytics workspace pack list --resource-group ContosoRG \
- --workspace-name ContosoWorkspace
-```
-
-Use the [az monitor log-analytics workspace pack enable](/cli/azure/monitor/log-analytics/workspace/pack#az-monitor-log-analytics-workspace-pack-enable) or [az monitor log-analytics workspace pack disable](/cli/azure/monitor/log-analytics/workspace/pack#az-monitor-log-analytics-workspace-pack-disable) commands:
-
-```azurecli
-az monitor log-analytics workspace pack enable --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name NetFlow
-
-az monitor log-analytics workspace pack disable --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name NetFlow
-```
-
-## Manage saved searches
-
-To create a saved search, run the [az monitor log-analytics workspace saved-search](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-create) command:
-
-```azurecli
-az monitor log-analytics workspace saved-search create --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name SavedSearch01 \
- --category "Log Management" --display-name SavedSearch01 \
- --saved-query "AzureActivity | summarize count() by bin(TimeGenerated, 1h)" --fa Function01 --fp "a:string = value"
-```
-
-View your saved search by using the [az monitor log-analytics workspace saved-search show](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-show) command. See all saved searches by using [az monitor log-analytics workspace saved-search list](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-list).
-
-```azurecli
-az monitor log-analytics workspace saved-search show --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name SavedSearch01
-az monitor log-analytics workspace saved-search list --resource-group ContosoRG \
- --workspace-name ContosoWorkspace
-```
-
-To delete a saved search, run the [az monitor log-analytics workspace saved-search delete](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-delete) command:
-
-```azurecli
-az monitor log-analytics workspace saved-search delete --resource-group ContosoRG \
- --workspace-name ContosoWorkspace --name SavedSearch01 --yes
-```
-
-## Clean up deployment
-
-If you created a resource group to test these commands, you can remove the resource group and all its contents by using the [az group delete](/cli/azure/group#az-group-delete) command:
-
-```azurecli
-az group delete --name ContosoRG
-```
-
-If you want to remove a new workspace from an existing resource group, run the [az monitor log-analytics workspace delete](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-delete) command:
-
-```azurecli
-az monitor log-analytics workspace delete --resource-group ContosoRG
- --workspace-name ContosoWorkspace --yes
-```
-
-Log analytics workspaces have a soft delete option. You can recover a deleted workspace for two weeks after deletion. Run the [az monitor log-analytics workspace recover](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-recover) command:
-
-```azurecli
-az monitor log-analytics workspace recover --resource-group ContosoRG
- --workspace-name ContosoWorkspace
-```
-
-In the delete command, add the `--force` parameter to delete the workspace immediately.
-
-## Azure CLI commands used in this article
-- [az group create](/cli/azure/group#az-group-create)
-- [az group delete](/cli/azure/group#az-group-delete)
-- [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create)
-- [az monitor log-analytics workspace data-export create](/cli/azure/monitor/log-analytics/workspace/data-export#az-monitor-log-analytics-workspace-data-export-create)
-- [az monitor log-analytics workspace data-export delete](/cli/azure/monitor/log-analytics/workspace/data-export#az-monitor-log-analytics-workspace-data-export-delete)
-- [az monitor log-analytics workspace data-export list](/cli/azure/monitor/log-analytics/workspace/data-export#az-monitor-log-analytics-workspace-data-export-list)
-- [az monitor log-analytics workspace delete](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-delete)
-- [az monitor log-analytics workspace linked-service create](/cli/azure/monitor/log-analytics/workspace/linked-service#az-monitor-log-analytics-workspace-linked-service-create)
-- [az monitor log-analytics workspace linked-service delete](/cli/azure/monitor/log-analytics/workspace/linked-service#az-monitor-log-analytics-workspace-linked-service-delete)
-- [az monitor log-analytics workspace linked-storage create](/cli/azure/monitor/log-analytics/workspace/linked-storage#az-monitor-log-analytics-workspace-linked-storage-create)
-- [az monitor log-analytics workspace linked-storage delete](/cli/azure/monitor/log-analytics/workspace/linked-storage#az-monitor-log-analytics-workspace-linked-storage-delete)
-- [az monitor log-analytics workspace pack disable](/cli/azure/monitor/log-analytics/workspace/pack#az-monitor-log-analytics-workspace-pack-disable)
-- [az monitor log-analytics workspace pack enable](/cli/azure/monitor/log-analytics/workspace/pack#az-monitor-log-analytics-workspace-pack-enable)
-- [az monitor log-analytics workspace pack list](/cli/azure/monitor/log-analytics/workspace/pack#az-monitor-log-analytics-workspace-pack-list)
-- [az monitor log-analytics workspace recover](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-recover)
-- [az monitor log-analytics workspace saved-search delete](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-delete)
-- [az monitor log-analytics workspace saved-search list](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-list)
-- [az monitor log-analytics workspace saved-search show](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-show)
-- [az monitor log-analytics workspace saved-search](/cli/azure/monitor/log-analytics/workspace/saved-search#az-monitor-log-analytics-workspace-saved-search-create)
-- [az monitor log-analytics workspace table list](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-list)
-- [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update)
-## Next steps
-
-[Overview of Log Analytics in Azure Monitor](log-analytics-overview.md)
azure-monitor Basic Logs Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-azure-tables.md
+
+ Title: Tables that support the Basic table plan in Azure Monitor Logs
+description: This article lists all tables that support the Basic table plan in Azure Monitor Logs.
++++ Last updated : 07/22/2024++
+# Tables that support the Basic table plan in Azure Monitor Logs
+
+All custom tables created with or migrated to the [Logs ingestion API](logs-ingestion-api-overview.md), and the Azure tables listed below support the [Basic table plan](../logs/logs-table-plans.md).
+
+> [!NOTE]
+> Tables created with the [Data Collector API](data-collector-api.md) don't support the Basic table plan.
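For example, to check a table's current plan and move a custom table created with the Logs ingestion API to the Basic plan, you might run something like the following Azure CLI sketch (the `ContosoAudit_CL` table and the resource names are hypothetical placeholders):

```azurecli
# Check the current table plan (placeholder names)
az monitor log-analytics workspace table show --resource-group ContosoRG \
    --workspace-name ContosoWorkspace --name ContosoAudit_CL --query plan

# Move the custom table to the Basic plan
az monitor log-analytics workspace table update --resource-group ContosoRG \
    --workspace-name ContosoWorkspace --name ContosoAudit_CL --plan Basic
```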
++
+| Service | Table |
+|:|:|
+| Azure Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates)<br>[AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/AADServicePrincipalSignInLogs) |
+| Azure Load Balancing | [ALBHealthEvent](/azure/azure-monitor/reference/tables/ALBHealthEvent) |
+| Azure Databricks | [DatabricksBrickStoreHttpGateway](/azure/azure-monitor/reference/tables/databricksbrickstorehttpgateway)<br>[DatabricksDataMonitoring](/azure/azure-monitor/reference/tables/databricksdatamonitoring)<br>[DatabricksFilesystem](/azure/azure-monitor/reference/tables/databricksfilesystem)<br>[DatabricksDashboards](/azure/azure-monitor/reference/tables/databricksdashboards)<br>[DatabricksCloudStorageMetadata](/azure/azure-monitor/reference/tables/databrickscloudstoragemetadata)<br>[DatabricksPredictiveOptimization](/azure/azure-monitor/reference/tables/databrickspredictiveoptimization)<br>[DatabricksIngestion](/azure/azure-monitor/reference/tables/databricksingestion)<br>[DatabricksMarketplaceConsumer](/azure/azure-monitor/reference/tables/databricksmarketplaceconsumer)<br>[DatabricksLineageTracking](/azure/azure-monitor/reference/tables/databrickslineagetracking)
+| API Management | [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/ApiManagementGatewayLogs)<br>[ApiManagementWebSocketConnectionLogs](/azure/azure-monitor/reference/tables/ApiManagementWebSocketConnectionLogs) |
+| API Management Service| APIMDevPortalAuditDiagnosticLog |
+| Application Gateways | [AGWAccessLogs](/azure/azure-monitor/reference/tables/AGWAccessLogs)<br>[AGWPerformanceLogs](/azure/azure-monitor/reference/tables/AGWPerformanceLogs)<br>[AGWFirewallLogs](/azure/azure-monitor/reference/tables/AGWFirewallLogs) |
+| Application Gateway for Containers | [AGCAccessLogs](/azure/azure-monitor/reference/tables/AGCAccessLogs) |
+| Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
+| Bare Metal Machines | [NCBMSecurityDefenderLogs](/azure/azure-monitor/reference/tables/ncbmsecuritydefenderlogs)<br>[NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) <br>[NCBMBreakGlassAuditLogs](/azure/azure-monitor/reference/tables/ncbmbreakglassauditlogs)|
+| Chaos Experiments | [ChaosStudioExperimentEventLogs](/azure/azure-monitor/reference/tables/ChaosStudioExperimentEventLogs) |
+| Cloud HSM | [CHSMManagementAuditLogs](/azure/azure-monitor/reference/tables/CHSMManagementAuditLogs) |
+| Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) |
+| Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) |
+| Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) |
+| Communication Services | [ACSAdvancedMessagingOperations](/azure/azure-monitor/reference/tables/acsadvancedmessagingoperations)<br>[ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallClientMediaStatsTimeSeries](/azure/azure-monitor/reference/tables/ACSCallClientMediaStatsTimeSeries)<br>[ACSCallClientOperations](/azure/azure-monitor/reference/tables/ACSCallClientOperations)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSCallSummary](/azure/azure-monitor/reference/tables/ACSCallSummary)<br>[ACSJobRouterIncomingOperations](/azure/azure-monitor/reference/tables/ACSJobRouterIncomingOperations)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations)<br>[ACSCallClosedCaptionsSummary](/azure/azure-monitor/reference/tables/acscallclosedcaptionssummary) |
+| Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
+| Cosmos DB | [CDBDataPlaneRequests](/azure/azure-monitor/reference/tables/cdbdataplanerequests)<br>[CDBPartitionKeyStatistics](/azure/azure-monitor/reference/tables/cdbpartitionkeystatistics)<br>[CDBPartitionKeyRUConsumption](/azure/azure-monitor/reference/tables/cdbpartitionkeyruconsumption)<br>[CDBQueryRuntimeStatistics](/azure/azure-monitor/reference/tables/cdbqueryruntimestatistics)<br>[CDBMongoRequests](/azure/azure-monitor/reference/tables/cdbmongorequests)<br>[CDBCassandraRequests](/azure/azure-monitor/reference/tables/cdbcassandrarequests)<br>[CDBGremlinRequests](/azure/azure-monitor/reference/tables/cdbgremlinrequests)<br>[CDBControlPlaneRequests](/azure/azure-monitor/reference/tables/cdbcontrolplanerequests)<br>CDBTableApiRequests |
+| Cosmos DB for MongoDB (vCore) | [VCoreMongoRequests](/azure/azure-monitor/reference/tables/VCoreMongoRequests) |
+| Kubernetes clusters - Azure Arc | [ArcK8sAudit](/azure/azure-monitor/reference/tables/ArcK8sAudit)<br>[ArcK8sAuditAdmin](/azure/azure-monitor/reference/tables/ArcK8sAuditAdmin)<br>[ArcK8sControlPlane](/azure/azure-monitor/reference/tables/ArcK8sControlPlane) |
+| Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) |
+| Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) |
+| DNS Security Policies | [DNSQueryLogs](/azure/azure-monitor/reference/tables/DNSQueryLogs) |
+| Dev Centers | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs)<br>[DevCenterResourceOperationLogs](/azure/azure-monitor/reference/tables/DevCenterResourceOperationLogs)<br>[DevCenterBillingEventLogs](/azure/azure-monitor/reference/tables/DevCenterBillingEventLogs) |
+| Data Transfer | [DataTransferOperations](/azure/azure-monitor/reference/tables/DataTransferOperations) |
+| Event Hubs | [AZMSArchiveLogs](/azure/azure-monitor/reference/tables/AZMSArchiveLogs)<br>[AZMSAutoscaleLogs](/azure/azure-monitor/reference/tables/AZMSAutoscaleLogs)<br>[AZMSCustomerManagedKeyUserLogs](/azure/azure-monitor/reference/tables/AZMSCustomerManagedKeyUserLogs)<br>[AZMSKafkaCoordinatorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaCoordinatorLogs)<br>[AZMSKafkaUserErrorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaUserErrorLogs) |
+| Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) |
+| Health Care APIs | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs)<br>[AHDSDicomDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSDicomDiagnosticLogs)<br>[AHDSDicomAuditLogs](/azure/azure-monitor/reference/tables/AHDSDicomAuditLogs) |
+| Key Vault | [AZKVAuditLogs](/azure/azure-monitor/reference/tables/AZKVAuditLogs)<br>[AZKVPolicyEvaluationDetailsLogs](/azure/azure-monitor/reference/tables/AZKVPolicyEvaluationDetailsLogs) |
+| Kubernetes services | [AKSAudit](/azure/azure-monitor/reference/tables/AKSAudit)<br>[AKSAuditAdmin](/azure/azure-monitor/reference/tables/AKSAuditAdmin)<br>[AKSControlPlane](/azure/azure-monitor/reference/tables/AKSControlPlane) |
+| Log Analytics | [LASummaryLogs](/azure/azure-monitor/reference/tables/LASummaryLogs) |
+| Managed Lustre | [AFSAuditLogs](/azure/azure-monitor/reference/tables/AFSAuditLogs) |
+| Managed NGINX | [NGXOperationLogs](/azure/azure-monitor/reference/tables/ngxoperationlogs) <br>[NGXSecurityLogs](/azure/azure-monitor/reference/tables/ngxsecuritylogs)|
+| Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) |
+| Microsoft Graph | [MicrosoftGraphActivityLogs](/azure/azure-monitor/reference/tables/microsoftgraphactivitylogs) |
+| Monitor | [AzureMetricsV2](/azure/azure-monitor/reference/tables/AzureMetricsV2) |
+| Network Devices (Operator Nexus) | [MNFDeviceUpdates](/azure/azure-monitor/reference/tables/MNFDeviceUpdates)<br>[MNFSystemStateMessageUpdates](/azure/azure-monitor/reference/tables/MNFSystemStateMessageUpdates) <br>[MNFSystemSessionHistoryUpdates](/azure/azure-monitor/reference/tables/mnfsystemsessionhistoryupdates) |
+| Network Managers | [AVNMConnectivityConfigurationChange](/azure/azure-monitor/reference/tables/AVNMConnectivityConfigurationChange)<br>[AVNMIPAMPoolAllocationChange](/azure/azure-monitor/reference/tables/AVNMIPAMPoolAllocationChange) |
+| Nexus Clusters | [NCCKubernetesLogs](/azure/azure-monitor/reference/tables/NCCKubernetesLogs)<br>[NCCVMOrchestrationLogs](/azure/azure-monitor/reference/tables/NCCVMOrchestrationLogs) |
+| Nexus Storage Appliances | [NCSStorageLogs](/azure/azure-monitor/reference/tables/NCSStorageLogs)<br>[NCSStorageAlerts](/azure/azure-monitor/reference/tables/NCSStorageAlerts) |
+| Operator Insights – Data Products | [AOIDatabaseQuery](/azure/azure-monitor/reference/tables/AOIDatabaseQuery)<br>[AOIDigestion](/azure/azure-monitor/reference/tables/AOIDigestion)<br>[AOIStorage](/azure/azure-monitor/reference/tables/AOIStorage) |
+| Redis cache | [ACRConnectedClientList](/azure/azure-monitor/reference/tables/ACRConnectedClientList) |
+| Redis Cache Enterprise | [REDConnectionEvents](/azure/azure-monitor/reference/tables/REDConnectionEvents) |
+| Relays | [AZMSHybridConnectionsEvents](/azure/azure-monitor/reference/tables/AZMSHybridConnectionsEvents) |
+| Security | [SecurityAttackPathData](/azure/azure-monitor/reference/tables/SecurityAttackPathData)<br> [MDCFileIntegrityMonitoringEvents](/azure/azure-monitor/reference/tables/mdcfileintegritymonitoringevents) |
+| Service Bus | [AZMSApplicationMetricLogs](/azure/azure-monitor/reference/tables/AZMSApplicationMetricLogs)<br>[AZMSOperationalLogs](/azure/azure-monitor/reference/tables/AZMSOperationalLogs)<br>[AZMSRunTimeAuditLogs](/azure/azure-monitor/reference/tables/AZMSRunTimeAuditLogs)<br>[AZMSVNetConnectionEvents](/azure/azure-monitor/reference/tables/AZMSVNetConnectionEvents) |
+| Sphere | [ASCAuditLogs](/azure/azure-monitor/reference/tables/ASCAuditLogs)<br>[ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) |
+| Storage | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs)<br>[StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs)<br>[StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs)<br>[StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) |
+| Synapse Analytics | [SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/SynapseSqlPoolExecRequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/SynapseSqlPoolRequestSteps)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/SynapseSqlPoolDmsWorkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/SynapseSqlPoolWaits) |
+| Storage Mover | [StorageMoverJobRunLogs](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs)<br>[StorageMoverCopyLogsFailed](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsFailed)<br>[StorageMoverCopyLogsTransferred](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsTransferred) |
+| Virtual Network Manager | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange)<br>[AVNMRuleCollectionChange](/azure/azure-monitor/reference/tables/AVNMRuleCollectionChange) |
+
+
+## Next steps
+
+- [Manage data retention](../logs/data-retention-configure.md)
+
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Title: Query data from Basic Logs in Azure Monitor
-description: Create a log query using tables configured for Basic logs in Azure Monitor.
+ Title: Query data in a Basic and Auxiliary table in Azure Monitor Logs
+description: This article explains how to query data from Basic and Auxiliary logs tables.
Previously updated : 11/02/2023 Last updated : 07/21/2024
-# Query Basic Logs in Azure Monitor
-Basic Logs tables reduce the cost of ingesting high-volume verbose logs and let you query the data they store using a limited set of log queries. This article explains how to query data from Basic Logs tables.
-
-For more information, see [Set a table's log data plan](basic-logs-configure.md).
+# Query data in a Basic and Auxiliary table in Azure Monitor Logs
+Basic and Auxiliary logs tables reduce the cost of ingesting high-volume verbose logs and let you query the data they store with some limitations. This article explains how to query data from Basic and Auxiliary logs tables.
+For more information about Basic and Auxiliary table plans, see [Azure Monitor Logs Overview: Table plans](data-platform-logs.md#table-plans).
> [!NOTE]
-> Other tools that use the Azure API for querying - for example, Grafana and Power BI - cannot access Basic Logs.
+> Other tools that use the Azure API for querying - for example, Grafana and Power BI - cannot access data in Basic and Auxiliary tables.
[!INCLUDE [log-analytics-query-permissions](../../../includes/log-analytics-query-permissions.md)] ## Limitations
-Queries with Basic Logs are subject to the following limitations:
-### KQL language limits
-Log queries against Basic Logs are optimized for simple data retrieval using a subset of KQL language, including the following operators:
--- [where](/azure/data-explorer/kusto/query/whereoperator)-- [extend](/azure/data-explorer/kusto/query/extendoperator)-- [project](/azure/data-explorer/kusto/query/projectoperator)-- [project-away](/azure/data-explorer/kusto/query/projectawayoperator)-- [project-keep](/azure/data-explorer/kusto/query/project-keep-operator)-- [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator)-- [project-reorder](/azure/data-explorer/kusto/query/projectreorderoperator)-- [parse](/azure/data-explorer/kusto/query/parseoperator)-- [parse-where](/azure/data-explorer/kusto/query/parsewhereoperator)-
-You can use all functions and binary operators within these operators.
-
-### Time range
+
+Queries on data in Basic and Auxiliary tables are subject to the following limitations:
+
+#### Kusto Query Language (KQL) limitations
+
+Queries of data in Basic or Auxiliary tables support all KQL [scalar](/azure/data-explorer/kusto/query/scalar-functions) and [aggregation](/azure/data-explorer/kusto/query/aggregation-functions) functions. However, Basic or Auxiliary table queries are limited to a single table. Therefore, these limitations apply:
+
+- Operators that join data from multiple tables are limited:
+ - [join](/azure/data-explorer/kusto/query/join-operator?pivots=azuremonitor), [find](/azure/data-explorer/kusto/query/find-operator?pivots=azuremonitor), [search](/azure/data-explorer/kusto/query/search-operator), and [externaldata](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) aren't supported.
+ - [lookup](/azure/data-explorer/kusto/query/lookup-operator) and [union](/azure/data-explorer/kusto/query/union-operator?pivots=azuremonitor) are supported, but limited to up to five Analytics tables.
+- [User-defined functions](/azure/data-explorer/kusto/query/functions/user-defined-functions) aren't supported.
+- [Cross-service](/azure/azure-monitor/logs/cross-workspace-query) and [cross-resource](/azure/azure-monitor/logs/cross-workspace-query) queries aren't supported.
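+
+Within these limits, a typical query filters and parses a single Basic or Auxiliary table and, if needed, enriches it from an Analytics table. A minimal sketch, assuming hypothetical custom tables `ContosoDeviceLogs_CL` (Basic) and `DeviceInventory_CL` (Analytics):
+
+```kusto
+// Filter and parse a single Basic table, then enrich it from an Analytics table with lookup.
+// Table and column names here are hypothetical.
+ContosoDeviceLogs_CL
+| where Level == "Error"
+| parse Message with "device=" DeviceId " " *
+| lookup kind=leftouter (DeviceInventory_CL | project DeviceId, Owner) on DeviceId
+| project TimeGenerated, DeviceId, Owner, Message
+```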
++
+#### Time range
Specify the time range in the query header in Log Analytics or in the API call. You can't specify the time range in the query body using a **where** statement.
-### Query context
-Queries with Basic Logs must use a workspace for the scope. You can't run queries using another resource for the scope. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](scope.md).
+#### Query scope
-### Concurrent queries
+Set the Log Analytics workspace as the scope of your query. You can't run queries using another resource for the scope. For more information about query scope, see [Log query scope and time range in Azure Monitor Log Analytics](scope.md).
+
+#### Concurrent queries
You can run two concurrent queries per user.
-### Purge
-You can't [purge personal data](personal-data-mgmt.md#exporting-and-deleting-personal-data) from Basic Logs tables.
+#### Auxiliary log query performance
+
+Queries of data in Auxiliary tables are unoptimized and might take longer to return results than queries you run on Analytics and Basic tables.
+
+#### Purge
+You can't [purge personal data](personal-data-mgmt.md#exporting-and-deleting-personal-data) from Basic and Auxiliary tables.
-## Run a query on a Basic Logs table
-Creating a query using Basic Logs is the same as any other query in Log Analytics. See [Get started with Azure Monitor Log Analytics](./log-analytics-tutorial.md) if you aren't familiar with this process.
+## Run a query on a Basic or Auxiliary table
+Running a query on Basic or Auxiliary tables is the same as querying any other table in Log Analytics. See [Get started with Azure Monitor Log Analytics](./log-analytics-tutorial.md) if you aren't familiar with this process.
# [Portal](#tab/portal-1) In the Azure portal, select **Monitor** > **Logs** > **Tables**.
-In the list of tables, you can identify Basic Logs tables by their unique icon:
-<!-- convertborder later -->
+In the list of tables, you can identify Basic and Auxiliary tables by their unique icon:
+ :::image type="content" source="./media/basic-logs-configure/table-icon.png" lightbox="./media/basic-logs-configure/table-icon.png" alt-text="Screenshot of the Basic Logs table icon in the table list." border="false":::
-You can also hover over a table name for the table information view, which will specify that the table is configured as Basic Logs:
-<!-- convertborder later -->
+You can also hover over a table name for the table information view, which specifies that the table has the Basic or Auxiliary table plan:
+ :::image type="content" source="./media/basic-logs-configure/table-info.png" lightbox="./media/basic-logs-configure/table-info.png" alt-text="Screenshot of the Basic Logs table indicator in the table details." border="false":::
-When you add a table to the query, Log Analytics will identify a Basic Logs table and align the authoring experience accordingly. The following example shows when you attempt to use an operator that isn't supported by Basic Logs.
+When you add a table to the query, Log Analytics identifies a Basic or Auxiliary table and aligns the authoring experience accordingly.
:::image type="content" source="./media/basic-logs-query/query-validator.png" lightbox="./media/basic-logs-query/query-validator.png" alt-text="Screenshot of Query on Basic Logs limitations."::: # [API](#tab/api-1)
-Use **/search** from the [Log Analytics API](api/overview.md) to run a query with Basic Logs using a REST API. This is similar to the [/query](api/request-format.md) API with the following differences:
+Use **/search** from the [Log Analytics API](api/overview.md) to query data in a Basic or Auxiliary table using a REST API. This is similar to the [/query](api/request-format.md) API with the following differences:
-- The query is subject to the language limitations described above.
+- The query is subject to the language limitations described in [Kusto Query Language (KQL) limitations](#kusto-query-language-kql-limitations).
- The time span must be specified in the header of the request and not in the query statement. **Sample Request** ```http
-https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D
+https://api.loganalytics.io/v1/workspaces/{workspaceId}/search?timespan=P1D
``` **Request body**
https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D
## Pricing model
-The charge for a query on Basic Logs is based on the amount of data the query scans, which is influenced by the size of the table and the query's time range. For example, a query that scans three days of data in a table that ingests 100 GB each day, would be charged for 300 GB.
+The charge for a query on Basic and Auxiliary tables is based on the amount of data the query scans, which depends on the size of the table and the query's time range. For example, a query that scans three days of data in a table that ingests 100 GB each day is charged for 300 GB.
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). ## Next steps -- [Learn more about the Basic Logs and Analytics log plans](basic-logs-configure.md).-- [Use a search job to retrieve data from Basic Logs into Analytics Logs where it can be queries multiple times](search-jobs.md).
+- [Learn more about Azure Monitor Logs table plans](data-platform-logs.md#table-plans).
+
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
ms.reviewer: dalek git
# Azure Monitor Logs cost calculations and options
-The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor don't have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs.
+The most significant charges for most Azure Monitor implementations are typically ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor don't have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and the various configuration options that affect your costs.
[!INCLUDE [azure-monitor-cost-optimization](../../../includes/azure-monitor-cost-optimization.md)]
The following [standard columns](log-standard-columns.md) are common to all tabl
### Excluded tables
-Some tables are free from data ingestion charges altogether, including, for example, [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), and [Operation](/azure/azure-monitor/reference/tables/operation). This information will always be indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column, which indicates whether a record was excluded from billing for data ingestion, retention and archive.
+Some tables are free from data ingestion charges altogether, including, for example, [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage), and [Operation](/azure/azure-monitor/reference/tables/operation). This information is always indicated by the [_IsBillable](log-standard-columns.md#_isbillable) column, which shows whether a record was excluded from billing for data ingestion and retention.
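+
+As a quick check, you can group a table's records by `_IsBillable`; a minimal sketch against the excluded `Heartbeat` table:
+
+```kusto
+// Heartbeat is excluded from ingestion billing, so its records report _IsBillable == false.
+Heartbeat
+| summarize Records = count() by _IsBillable
+```
+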
### Charges for other solutions and services
See the documentation for different services and solutions for any unique billin
## Commitment tiers
-In addition to the pay-as-you-go model, Log Analytics has *commitment tiers*, which can save you as much as 30 percent compared to the pay-as-you-go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB per day, at a lower price than pay-as-you-go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. (Overage is billed using the same commitment tier billing meter. For example if a workspace is in the 200 GB/day commitment tier and ingests 300 GB in a day, that usage will be billed as 1.5 units of the 200 GB/day commitment tier.) The commitment tiers have a 31-day commitment period from the time a commitment tier is selected or changed.
+In addition to the pay-as-you-go model, Log Analytics has *commitment tiers*, which can save you as much as 30 percent compared to the pay-as-you-go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB per day, at a lower price than pay-as-you-go pricing. Any usage above the commitment level (overage) is billed at the same price per GB as the current commitment tier. (Overage is billed using the same commitment tier billing meter. For example, if a workspace is in the 200 GB/day commitment tier and ingests 300 GB in a day, that usage is billed as 1.5 units of the 200 GB/day commitment tier.) The commitment tiers have a 31-day commitment period from the time a commitment tier is selected or changed.
- During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period. - At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a lower commitment tier at any time.
Cluster billing starts when the cluster is created, regardless of whether worksp
When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on the cluster's commitment tier. Workspaces associated to a cluster no longer have their own pricing tier. Workspaces can be unlinked from a cluster at any time, and the pricing tier can be changed to per GB.
-If your linked workspace is using the legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's commitment tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
+If your linked workspace is using the legacy Per Node pricing tier, it is billed based on data ingested against the cluster's commitment tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud continue to apply.
-If a cluster is deleted, billing for the cluster will stop even if the cluster is within its 31-day commitment period.
+If a cluster is deleted, billing for the cluster stops even if the cluster is within its 31-day commitment period.
For more information on how to create a dedicated cluster and specify its billing type, see [Create a dedicated cluster](logs-dedicated-clusters.md#create-a-dedicated-cluster).
-## Basic Logs
+## Basic and Auxiliary table plans
-You can configure certain tables in a Log Analytics workspace to use [Basic Logs](basic-logs-configure.md). Data in these tables has a significantly reduced ingestion charge and a limited retention period. There's a charge to search against these tables. Basic Logs are intended for high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts.
+You can configure certain tables in a Log Analytics workspace to use [Basic and Auxiliary table plans](logs-table-plans.md). Data in these tables has a significantly reduced ingestion charge. There's a charge to query data in these tables.
-The charge for searching against Basic Logs is based on the GB of data scanned in performing the search.
+The charge for querying data in Basic and Auxiliary tables is based on the GB of data scanned in performing the search.
-For more information on Basic Logs, including how to configure them and query their data, see [Configure Basic Logs in Azure Monitor](basic-logs-configure.md).
+For more information about the Basic and Auxiliary table plans, see [Azure Monitor Logs overview: Table plans](data-platform-logs.md#table-plans).
-## Log data retention and archive
+## Log data retention
-In addition to data ingestion, there's a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or archived. Archived logs have a reduced retention charge, and there's a charge to search against them. Use archived logs to reduce your costs for data that you must store for compliance or occasional investigation.
+In addition to data ingestion, there's a charge for the retention of data in each Log Analytics workspace. You can set the retention period for the entire workspace or for each table. After this period, the data is either removed or kept in long-term retention. During the long-term retention period, you pay a reduced retention charge, and there's a charge to retrieve the data using a [search job](search-jobs.md). Use long-term retention to reduce your costs for data that you must store for compliance or occasional investigation.
-[Deleting a custom table](create-custom-table.md#delete-a-table) does not remove data associated with that table, so retention and archive charges will continue to apply.
+[Deleting a custom table](create-custom-table.md#delete-a-table) doesn't remove data associated with that table, so interactive and long-term retention charges continue to apply.
-For more information on data retention and archiving, including how to configure these settings and access archived data, see [Configure data retention and archive policies in Azure Monitor Logs](data-retention-archive.md).
+For more information on data retention, including how to configure these settings and access data in long-term retention, see [Manage data retention in a Log Analytics workspace](data-retention-configure.md).
>[!NOTE] >Deleting data from your Log Analytics workspace using the Log Analytics Purge feature doesn't affect your retention costs. To lower retention costs, decrease the retention period for the workspace or for specific tables. ## Search jobs
-Searching against archived logs uses [search jobs](search-jobs.md). Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. Search jobs are billed by the number of GB of data scanned on each day that's accessed to perform the search.
+Retrieve data from long-term retention by running [search jobs](search-jobs.md). Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. Search jobs are billed by the number of GB of data scanned on each day that's accessed to perform the search.
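+
+For illustration, a search job typically runs a simple filter over a long time range and writes the results to a new table with the `_SRCH` suffix; the table and column in this sketch are examples only:
+
+```kusto
+// Illustrative search-job query; billing is based on the amount of data scanned.
+StorageBlobLogs
+| where StatusCode >= 500
+```
+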
## Log data restore
-For situations in which older or archived logs must be intensively queried with the full analytic query capabilities, the [data restore](restore.md) feature is a powerful tool. The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. You can later dismiss the data when you're finished. Log data restore is billed by the amount of data restored, and by the time the restore is kept active. The minimal values billed for any data restore are 2 TB and 12 hours. Data restored of more than 2 TB and/or more than 12 hours in duration is billed on a pro-rated basis.
+When you need to run intensive queries against large volumes of data, or against data in long-term retention, with the full analytic query capabilities, the [data restore](restore.md) feature is a powerful tool. The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. You can later dismiss the data when you're finished. Log data restore is billed by the amount of data restored and by the time the restore is kept active. The minimum values billed for any data restore are 2 TB and 12 hours. Restores of more than 2 TB or longer than 12 hours are billed on a pro-rated basis.
## Log data export
In some scenarios, combining this data can result in cost savings. Typically, th
- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled. - [MDCFileIntegrityMonitoringEvents](/azure/azure-monitor/reference/tables/mdcfileintegritymonitoringevents)
-If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data. If the workspace has Sentinel enabled on it, if Sentinel is using a classic pricing tier, the Defender data allocation applies only for the Log Analytics data ingestion billing, but not the classic Sentinel billing. If Sentinel is using a [simplified pricing tier](/azure/sentinel/enroll-simplified-pricing-tier), the Defender data allocation applies to the unified Sentinel billing. To learn more on how Microsoft Sentinel customers can benefit, please see the [Microsoft Sentinel Pricing page](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
+If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data. If the workspace has Sentinel enabled on it, if Sentinel is using a classic pricing tier, the Defender data allocation applies only for the Log Analytics data ingestion billing, but not the classic Sentinel billing. If Sentinel is using a [simplified pricing tier](/azure/sentinel/enroll-simplified-pricing-tier), the Defender data allocation applies to the unified Sentinel billing. To learn more on how Microsoft Sentinel customers can benefit, see the [Microsoft Sentinel Pricing page](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricin
A list of Azure Monitor billing meter names, including these legacy tiers, is available [here](../cost-meters.md). > [!IMPORTANT]
-> The legacy pricing tiers do not support access to some of the newest features in Log Analytics such as ingesting data as cost-effective Basic Logs.
+> The legacy pricing tiers do not support access to some of the newest features in Log Analytics such as ingesting data to tables with the cost-effective Basic and Auxiliary table plans.
### Free Trial pricing tier
Workspaces in the Free Trial pricing tier have daily data ingestion limited to 5
### Standalone pricing tier
-Usage on the Standalone pricing tier is billed by the ingested data volume. It's reported in the **Log Analytics** service and the meter is named "Data Analyzed." Workspaces in the Standalone pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Standalone pricing tier don't support the use of [Basic Logs](basic-logs-configure.md).
+Usage on the Standalone pricing tier is billed by the ingested data volume. It's reported in the **Log Analytics** service and the meter is named "Data Analyzed." Workspaces in the Standalone pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Standalone pricing tier don't support the use of [Basic and Auxiliary table plans](logs-table-plans.md).
### Per Node pricing tier
-The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. The Per Node pricing tier is a legacy tier which is only available to existing Subscriptions fulfilling the requirement for [legacy pricing tiers](#legacy-pricing-tiers).
+The Per Node pricing tier charges per monitored VM (node) on an hour granularity. For each monitored node, the workspace is allocated 500 MB of data per day that's not billed. This allocation is calculated with hourly granularity and is aggregated at the workspace level each day. Data ingested above the aggregate daily data allocation is billed per GB as data overage. The Per Node pricing tier is a legacy tier, which is only available to existing Subscriptions fulfilling the requirement for [legacy pricing tiers](#legacy-pricing-tiers).
-On your bill, the service will be **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Workspaces in the Per Node pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Per Node pricing tier don't support the use of [Basic Logs](basic-logs-configure.md). Usage is reported on three meters:
+On your bill, the service is **Insight and Analytics** for Log Analytics usage if the workspace is in the Per Node pricing tier. Workspaces in the Per Node pricing tier have user-configurable retention from 30 to 730 days. Workspaces in the Per Node pricing tier don't support the use of [Basic and Auxiliary table plans](logs-table-plans.md). Usage is reported on three meters:
- **Node**: The usage for the number of monitored nodes (VMs) in units of node months. - **Data Overage per Node**: The number of GB of data ingested in excess of the aggregated data allocation.
On your bill, the service will be **Insight and Analytics** for Log Analytics us
### Standard and Premium pricing tiers
-Workspaces cannot be created in or moved to the **Standard** or **Premium** pricing tiers since October 1, 2016. Workspaces already in these pricing tiers can continue to use them, but if a workspace is moved out of these tiers, it can't be moved back. The Standard and Premium pricing tiers have fixed data retention of 30 days and 365 days, respectively. Workspaces in these pricing tiers don't support the use of [Basic Logs](basic-logs-configure.md) and Data Archive. Data ingestion meters on your Azure bill for these legacy tiers are called "Data Analyzed."
+Workspaces can't be created in or moved to the **Standard** or **Premium** pricing tiers since October 1, 2016. Workspaces already in these pricing tiers can continue to use them, but if a workspace is moved out of these tiers, it can't be moved back. The Standard and Premium pricing tiers have fixed data retention of 30 days and 365 days, respectively. Workspaces in these pricing tiers don't support the use of [Basic and Auxiliary table plans](logs-table-plans.md) and long-term data retention. Data ingestion meters on your Azure bill for these legacy tiers are called "Data Analyzed."
### Microsoft Defender for Cloud with legacy pricing tiers
azure-monitor Create Custom Table Auxiliary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table-auxiliary.md
+
+ Title: Set up a table with the Auxiliary plan for low-cost data ingestion and retention in your Log Analytics workspace
+description: Create a custom table with the Auxiliary table plan in your Log Analytics workspace for low-cost ingestion and retention of log data.
++++++ Last updated : 07/21/2024
+# Customer intent: As a Log Analytics workspace administrator, I want to create a custom table with the Auxiliary table plan, so that I can ingest and retain data at a low cost for auditing and compliance.
++
+# Set up a table with the Auxiliary plan in your Log Analytics workspace (Preview)
+
+The [Auxiliary table plan](../logs/data-platform-logs.md#table-plans) lets you ingest and retain data in your Log Analytics workspace at a low cost. Azure Monitor Logs currently supports the Auxiliary table plan on [data collection rule (DCR)-based custom tables](../logs/manage-logs-tables.md#table-type-and-schema) to which you send data you collect using [Azure Monitor Agent](../agents/agents-overview.md) or the [Logs ingestion API](../logs/logs-ingestion-api-overview.md).
+
+This article explains how to create a custom table with the Auxiliary plan in your Log Analytics workspace and set up a data collection rule that sends data to this table.
+
+> [!IMPORTANT]
+> See [public preview limitations](#public-preview-limitations) for supported regions and limitations related to Auxiliary tables and data collection rules.
+
+## Prerequisites
+
+To create a custom table and collect log data, you need:
+
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
+- A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md).
+- All tables in a Log Analytics workspace have a column named `TimeGenerated`. If your raw log data has a `TimeGenerated` property, Azure Monitor uses this value to identify the creation time of the record. For a table with the Auxiliary plan, the `TimeGenerated` column currently supports ISO8601 format only. For information about the `TimeGenerated` format, see [supported ISO 8601 datetime format](/azure/data-explorer/kusto/query/scalar-data-types/datetime#iso-8601).
+
+
+## Create a custom table with the Auxiliary plan
+
+To create a custom table, call the [Tables - Create Or Update API](/rest/api/loganalytics/tables/create-or-update) by using this command:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.OperationalInsights/workspaces/{workspace_name}/tables/{table_name_CL}?api-version=2023-01-01-preview
+```
+
+Provide this payload - update the table name and adjust the columns based on your table schema:
+
+```json
+ {
+ "properties": {
+ "schema": {
+ "name": "table_name_CL",
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "StringProperty",
+ "type": "string"
+ },
+ {
+ "name": "IntProperty",
+ "type": "int"
+ },
+ {
+ "name": "LongProperty",
+ "type": "long"
+ },
+ {
+ "name": "RealProperty",
+ "type": "real"
+ },
+ {
+ "name": "BooleanProperty",
+ "type": "boolean"
+ },
+ {
+ "name": "GuidProperty",
+ "type": "guid"
+ },
+ {
+ "name": "DateTimeProperty",
+ "type": "datetime"
+ }
+ ]
+ },
+ "totalRetentionInDays": 365,
+ "plan": "Auxiliary"
+ }
+}
+```
+
+## Send data to a table with the Auxiliary plan
+
+There are currently two ways to ingest data to a custom table with the Auxiliary plan:
++
+- [Collect logs from a text or JSON file with Azure Monitor Agent](../agents/data-sources-custom-logs.md).
+
+ If you use this method, your custom table must only have two columns - `TimeGenerated` and `RawData` (of type `string`). The data collection rule sends the entirety of each log entry you collect to the `RawData` column, and Azure Monitor Logs automatically populates the `TimeGenerated` column with the time the log is ingested.
+
+- Send data to Azure Monitor using Logs ingestion API.
+
+ To use this method:
+
+ 1. [Create a custom table with the Auxiliary plan](#create-a-custom-table-with-the-auxiliary-plan) as described in this article.
+ 1. Follow the steps described in [Tutorial: Send data to Azure Monitor using Logs ingestion API](../logs/tutorial-logs-ingestion-api.md) to:
+ 1. [Create a Microsoft Entra application](../logs/tutorial-logs-ingestion-api.md#create-microsoft-entra-application).
+ 1. [Create a data collection rule](../logs/tutorial-logs-ingestion-api.md#create-data-collection-rule) using this ARM template.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the data collection rule to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "metadata": {
+          "description": "Specifies the region in which to create the data collection rule. This must be the same region as the destination Log Analytics workspace."
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "The Azure resource ID of the Log Analytics workspace in which you created a custom table with the Auxiliary plan."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2023-03-11",
+ "kind": "Direct",
+ "properties": {
+ "streamDeclarations": {
+ "Custom-table_name_CL": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "StringProperty",
+ "type": "string"
+ },
+ {
+ "name": "IntProperty",
+ "type": "int"
+ },
+ {
+ "name": "LongProperty",
+ "type": "long"
+ },
+ {
+ "name": "RealProperty",
+ "type": "real"
+ },
+ {
+ "name": "BooleanProperty",
+ "type": "boolean"
+ },
+ {
+ "name": "GuidProperty",
+ "type": "guid"
+ },
+ {
+ "name": "DateTimeProperty",
+ "type": "datetime"
+ }
+                    ]
+                }
+            },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "myworkspace"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-table_name_CL"
+ ],
+ "destinations": [
+ "myworkspace"
+ ]
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
+ }
+ ```
+
+ Where:
+ - `myworkspace` is the name of your Log Analytics workspace.
+ - `table_name_CL` is the name of your table.
+ - `columns` includes the same columns you set in [Create a custom table with the Auxiliary plan](#create-a-custom-table-with-the-auxiliary-plan).
+
+ 1. [Grant your application permission to use your DCR](../logs/tutorial-logs-ingestion-api.md#assign-permissions-to-a-dcr).
+
+## Public preview limitations
+
+During public preview, these limitations apply:
+
+- The Auxiliary plan is gradually being rolled out to all regions and is currently supported in:
+
+ | **Region** | **Locations** |
+  |--|--|
+ | **Americas** | Canada Central |
+ | | Central US |
+ | | East US |
+ | | East US 2 |
+ | | West US |
+ | | South Central US |
+ | | North Central US |
+ | **Asia Pacific** | Australia East |
+ | | Australia South East |
+  | | East Asia |
+  | **Europe** | North Europe |
+ | | UK South |
+ | | Germany West Central |
+ | | Switzerland North |
+ | | France Central |
+ | **Middle East** | Israel Central |
++
+- You can set the Auxiliary plan only on data collection rule-based custom tables you create using the [Tables - Create Or Update API](/rest/api/loganalytics/tables/create-or-update).
+- Tables with the Auxiliary plan:
+  - Are currently unbilled. During the preview, there's no charge for ingestion, queries, search jobs, or long-term retention.
+ - Do not support columns with dynamic data.
+ - Have a fixed total retention of 365 days.
+ - Support ISO 8601 datetime format only.
+- A data collection rule that sends data to a table with an Auxiliary plan:
+ - Can only send data to a single table.
+ - Can't include a [transformation](../essentials/data-collection-transformations.md).
+- Ingestion data for Auxiliary tables isn't currently available in the Azure Monitor Logs [Usage table](/azure/azure-monitor/reference/tables/usage). To estimate data ingestion volume, you can count the number of records in your Auxiliary table using this query:
+
+ ```kusto
+ MyTable_CL
+ | summarize count()
+ ```
+
+## Next steps
+
+Learn more about:
+
+- [Azure Monitor Logs table plans](../logs/data-platform-logs.md#table-plans)
+- [Collecting logs with the Log Ingestion API](../logs/logs-ingestion-api-overview.md)
+- [Data collection endpoints](../essentials/data-collection-endpoint-overview.md)
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
To create a custom table, you need:
## Create a custom table
-Azure tables have predefined schemas. To store log data in a different schema, use data collection rules to define how to collect, transform, and send the data to a custom table in your Log Analytics workspace.
+Azure tables have predefined schemas. To store log data in a different schema, use data collection rules to define how to collect, transform, and send the data to a custom table in your Log Analytics workspace. To create a custom table with the Auxiliary plan, see [Set up a table with the Auxiliary plan (Preview)](create-custom-table-auxiliary.md).
> [!IMPORTANT] > Custom tables have a suffix of **_CL**; for example, *tablename_CL*. The Azure portal adds the **_CL** suffix to the table name automatically. When you create a custom table using a different method, you need to add the **_CL** suffix yourself. The *tablename_CL* in the [DataFlows Streams](../essentials/data-collection-rule-structure.md#dataflows) properties in your data collection rules must match the *tablename_CL* name in the Log Analytics workspace.
-> [!NOTE]
-> For information about creating a custom table for logs you ingest with the deprecated Log Analytics agent, also known as MMA or OMS, see [Collect text logs with the Log Analytics agent](../agents/data-sources-custom-logs.md#define-a-custom-log-table).
+> [!WARNING]
+> Table names are used for billing purposes so they should not contain sensitive information.
# [Portal](#tab/azure-portal-1)
-To create a custom table in the Azure portal:
+To create a custom table using the Azure portal:
1. From the **Log Analytics workspaces** menu, select **Tables**.
Use the [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) to cre
There are several types of tables in Azure Monitor Logs. You can delete any table that's not an Azure table, but what happens to the data when you delete the table is different for each type of table.
-For more information, see [What happens to data when you delete a table in a Log Analytics workspace](../logs/data-retention-archive.md#what-happens-to-data-when-you-delete-a-table-in-a-log-analytics-workspace).
+For more information, see [What happens to data when you delete a table in a Log Analytics workspace](../logs/data-retention-configure.md#what-happens-to-data-when-you-delete-a-table-in-a-log-analytics-workspace).
# [Portal](#tab/azure-portal-2)
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
Title: Azure Monitor Logs
-description: Learn the basics of Azure Monitor Logs, which are used for advanced analysis of monitoring data.
+description: This article explains how Azure Monitor Logs works and how people with different monitoring needs and skills can use the basic and advanced capabilities that Azure Monitor Logs offers.
Previously updated : 04/15/2024- Last updated : 07/07/2024++
+# Customer intent: As a new user or decision-maker evaluating Azure Monitor Logs, I want to understand how Azure Monitor Logs addresses my monitoring and analysis needs.
# Azure Monitor Logs overview
-Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources. Several features of Azure Monitor store their data in Logs and present this data in various ways to assist you in monitoring the performance and availability of your cloud and hybrid applications and their supporting components.
-Along with using existing Azure Monitor features, you can analyze Logs data by using a sophisticated query language that's capable of quickly analyzing millions of records. You might perform a simple query that retrieves a specific set of records or perform sophisticated data analysis to identify critical patterns in your monitoring data. Work with log queries and their results interactively by using Log Analytics, use them in alert rules to be proactively notified of issues, or visualize their results in a workbook or dashboard.
+Azure Monitor Logs is a centralized software as a service (SaaS) platform for collecting, analyzing, and acting on telemetry data generated by Azure and non-Azure resources and applications.
+
+You can collect logs, manage data models and costs, and consume different types of data in one [Log Analytics workspace](#log-analytics-workspace), the primary Azure Monitor Logs resource. This means you never have to move data or manage other storage, and you can retain different data types for as long or as little as you need.
+
+This article provides an overview of how Azure Monitor Logs works and explains how it addresses the needs and skills of different personas in an organization.
> [!NOTE]
-> Azure Monitor Logs is one half of the data platform that supports Azure Monitor. The other is [Azure Monitor Metrics](../essentials/data-platform-metrics.md), which stores numeric data in a time-series database. Numeric data is more lightweight than data in Azure Monitor Logs. Azure Monitor Metrics can support near real time scenarios, so it's useful for alerting and fast detection of issues.
->
-> Azure Monitor Metrics can only store numeric data in a particular structure, whereas Azure Monitor Logs can store a variety of data types that have their own structures. You can also perform complex analysis on Azure Monitor Logs data by using log queries, which can't be used for analysis of Azure Monitor Metrics data.
-
-## What can you do with Azure Monitor Logs?
-The following table describes some of the ways that you can use Azure Monitor Logs.
-
-| Capability | Description |
-|:|:|
-| Analyze | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data by using a powerful analysis engine. |
-| Alert | Configure a [log search alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
-| Visualize | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|
-| Get insights | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. |
-| Retrieve | Access log query results from:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/query/azlogs), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
-| Import | Upload logs from a custom app via the [REST API](/azure/azure-monitor/logs/logs-ingestion-api-overview) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/ingestion/azlogs), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme). |
-| Export | Configure [automated export of log data](./logs-data-export.md) to an Azure Storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). |
-| Bring your own analysis | [Analyze data in Azure Monitor Logs using a notebook](../logs/notebooks-azure-monitor-logs.md) to create streamlined, multi-step processes on top of data you collect in Azure Monitor Logs. This is especially useful for purposes such as [building and running machine learning pipelines](../logs/aiops-machine-learning.md#create-your-own-machine-learning-pipeline-on-data-in-azure-monitor-logs), advanced analysis, and troubleshooting guides (TSGs) for Support needs. |
-
+> Azure Monitor Logs is one half of the data platform that supports Azure Monitor. The other is [Azure Monitor Metrics](../essentials/data-platform-metrics.md), which stores numeric data in a time-series database.
-## Data collection
-After you create a [Log Analytics workspace](#log-analytics-workspaces), you must configure sources to send their data. No data is collected automatically.
+## Log Analytics workspace
-This configuration will be different depending on the data source. For example:
+A [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) is a data store that holds tables into which you collect data.
-- [Create diagnostic settings](../essentials/diagnostic-settings.md) to send resource logs from Azure resources to the workspace.-- [Enable VM insights](../vm/vminsights-enable-overview.md) to collect data from virtual machines. -- [Configure data sources on the workspace](../data-sources.md) to collect more events and performance data.
+To address the data storage and consumption needs of various personas who use a Log Analytics workspace, you can:
+
+- [Define table plans](#table-plans) based on your data consumption and cost management needs.
+- [Manage low-cost long-term retention and interactive retention](../logs/data-retention-configure.md) for each table.
+- [Manage access](../logs/manage-access.md) to the workspace and to specific tables.
+- [Use summary rules to aggregate critical data](../logs/summary-rules.md) in summary tables. This lets you optimize data for ease of use and actionable insights, and store raw data in a table with a low-cost table plan for however long you need it (see the sketch after this list).
+- Create ready-to-run [saved queries](../logs/save-query.md), [visualizations](../best-practices-analysis.md#built-in-visualization-tools), and [alerts](../alerts/alerts-create-log-alert-rule.md) tailored to specific personas.
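+
+As an illustration, a summary rule runs an aggregating query on a schedule; a minimal sketch, assuming a hypothetical verbose table named `ContosoAccessLogs_CL`:
+
+```kusto
+// Roll up raw access logs into hourly counts that the summary rule writes to a summary table.
+ContosoAccessLogs_CL
+| summarize RequestCount = count() by bin(TimeGenerated, 1h), StatusCode
+```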
++
+You can also configure network isolation, replicate your workspace across regions, and [design a workspace architecture based on your business needs](../logs/workspace-design.md).
+
+## Kusto Query Language (KQL) and Log Analytics
+
+You retrieve data from a Log Analytics workspace using a [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) query, which is a read-only request to process data and return results. KQL is a powerful tool that can analyze millions of records quickly. Use KQL to explore your logs, transform and aggregate data, discover patterns, identify anomalies and outliers, and more.
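+
+For example, a short query like the following, which assumes the workspace collects the `AzureActivity` table, aggregates a week of activity records into a daily trend:
+
+```kusto
+// Daily count of Azure activity operations over the last seven days.
+AzureActivity
+| where TimeGenerated > ago(7d)
+| summarize Operations = count() by bin(TimeGenerated, 1d), OperationNameValue
+| order by TimeGenerated asc
+```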
+
+Log Analytics is a tool in the Azure portal for running log queries and analyzing their results. [Log Analytics Simple mode](log-analytics-simple-mode.md) lets any user, regardless of their knowledge of KQL, retrieve data from one or more tables with one click. A set of controls lets you explore and analyze the retrieved data using the most popular Azure Monitor Logs functionality in an intuitive, spreadsheet-like experience.
++
+Users who are familiar with KQL can use Log Analytics KQL mode to edit and create queries, which they can then use in Azure Monitor features such as alerts and workbooks, or share with other users.
+
+For a description of Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md). For a walkthrough of using Log Analytics features to create a simple log query and analyze its results, see [Log Analytics tutorial](./log-analytics-tutorial.md).
-> [!IMPORTANT]
-> Most data collection in Logs will incur ingestion and retention costs. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you enable any data collection.
-## Log Analytics workspaces
-Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./workspace-design.md). You must create at least one workspace to use Azure Monitor Logs. For a description of Log Analytics workspaces, see [Log Analytics workspace overview](log-analytics-workspace-overview.md).
+## Built-in insights and custom dashboards, workbooks, and reports
-## Log Analytics
-Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log search alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal.
+Many of Azure Monitor's [ready-to-use, curated Insights experiences](../insights/insights-overview.md) store data in Azure Monitor Logs, and present this data in an intuitive way so you can monitor the performance and availability of your cloud and hybrid applications and their supporting components.
-For a description of Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md). To walk through using Log Analytics features to create a simple log query and analyze its results, see [Log Analytics tutorial](./log-analytics-tutorial.md).
-## Log queries
-Data is retrieved from a Log Analytics workspace through a log query, which is a read-only request to process data and return results. Log queries are written in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). KQL is the same query language that Azure Data Explorer uses.
+You can also [create your own visualizations and reports](../best-practices-analysis.md#built-in-visualization-tools) using workbooks, dashboards, and Power BI.
-You can:
+## Table plans
-- Write log queries in Log Analytics to interactively analyze their results.-- Use them in alert rules to be proactively notified of issues.-- Include their results in workbooks or dashboards.
+You can use one Log Analytics workspace to store any type of log required for any purpose. For example:
-Insights include prebuilt queries to support their views and workbooks.
+- High-volume, verbose data that requires **cheap long-term storage for audit and compliance**
+- App and resource data for **troubleshooting** by developers
+- Key event and performance data for scaling and alerting to ensure ongoing **operational excellence and security**
+- Aggregated long-term data trends for **advanced analytics and machine learning**
-For a list of where log queries are used and references to tutorials and other documentation to get you started, see [Log queries in Azure Monitor](./log-query-overview.md).
-<!-- convertborder later -->
+Table plans let you manage data costs based on how often you use the data in a table and the type of analysis you need the data for.
-## Relationship to Azure Data Explorer
-Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL. For information on KQL, see [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
+The diagram and table below compare the Analytics, Basic, and Auxiliary table plans. For information about interactive and long-term retention, see [Manage data retention in a Log Analytics workspace](../logs/data-retention-configure.md). For information about how to select or modify a table plan, see [Select a table plan](logs-table-plans.md).
++
+| | Analytics | Basic | Auxiliary (Preview) |
+| | | | |
+| Best for | High-value data used for continuous monitoring, real-time detection, and performance analytics. | Medium-touch data needed for troubleshooting and incident response. | Low-touch data, such as verbose logs, and data required for auditing and compliance. |
+| Supported [table types](../logs/manage-logs-tables.md) | All table types | [Azure tables that support Basic logs](basic-logs-azure-tables.md) and DCR-based custom tables | DCR-based custom tables |
+| [Log queries](../logs/get-started-queries.md) | Full query capabilities. | Full Kusto Query Language (KQL) on a single table, which you can extend with data from an Analytics table using [lookup](/azure/data-explorer/kusto/query/lookup-operator). | Full KQL on a single table, which you can extend with data from an Analytics table using [lookup](/azure/data-explorer/kusto/query/lookup-operator). |
+| Query performance | Fast | Fast | Slower<br> Good for auditing. Not optimized for real-time analysis. |
+| [Alerts](../alerts/alerts-overview.md) | ✅ | ❌ | ❌ |
+| [Insights](../insights/insights-overview.md) | ✅ | ❌ | ❌ |
+| [Dashboards](../visualize/tutorial-logs-dashboards.md) | ✅ | ✅ Cost per query for dashboard refreshes not included. | Possible, but slow to refresh, cost per query for dashboard refreshes not included. |
+| [Data export](logs-data-export.md) | ✅ | ❌ | ❌ |
+| [Microsoft Sentinel](../../sentinel/overview.md) | ✅ | ✅ | ✅ |
+| [Search jobs](../logs/search-jobs.md) | ✅ | ✅ | ✅ |
+| [Summary rules](../logs/summary-rules.md) | ✅ | ✅ KQL limited to a single table | ✅ KQL limited to a single table |
+| [Restore](../logs/restore.md) | ✅ | ✅ | ❌ |
+|Query price included |✅ | ❌ | ❌ |
+|Ingestion cost |Standard | Reduced | Minimal |
+| Interactive retention | 30 days (90 days for Microsoft Sentinel and Application Insights).<br> Can be extended to up to two years at a prorated monthly long-term retention charge. | 30 days | 30 days |
+| Total retention | Up to 12 years | Up to 12 years | Up to 12 years*<br>*Public preview limitation: Auxiliary plan total retention is currently fixed at 365 days. |
+
+> [!NOTE]
+> The Auxiliary table plan is in public preview. For current limitations and supported regions, see [Public preview limitations](create-custom-table-auxiliary.md#public-preview-limitations).<br> The Basic and Auxiliary table plans aren't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+
+## Data collection
+
+To collect data from a resource to your Log Analytics workspace:
+
+1. Set up the relevant data collection tool based on the table below.
+1. Decide which data you need to collect from the resource.
+1. Use [transformations](../essentials/data-collection-transformations.md) to remove sensitive data, enrich data or perform calculations, and filter out data you don't need, to reduce costs.
+
+This table lists the tools Azure Monitor provides for collecting data from various resource types.
+
+| Resource type | Data collection tool | Collected data |
+| --- | --- | --- |
+| **Azure** | [Diagnostic settings](../essentials/diagnostic-settings.md) | **Azure tenant** - Microsoft Entra audit logs provide sign-in activity history and audit trail of changes made within a tenant.<br/>**Azure resources** - Logs and performance counters.<br/>**Azure subscription** - Service health records along with records on any configuration changes made to the resources in your Azure subscription. |
+| **Application** | [Application insights](../app/app-insights-overview.md) | Application performance monitoring data. |
+| **Container** |[Container insights](../containers/container-insights-overview.md)| Container performance data. |
+| **Virtual machine** | [Data collection rules](/azure/virtual-machines/monitor-vm#overview-monitor-vm-host-and-guest-metrics-and-logs) | Monitoring data from the guest operating system of Azure and non-Azure virtual machines.|
+| **Non-Azure source** | [Logs Ingestion API](../logs/logs-ingestion-api-overview.md) | File-based logs and any data you collect from a monitored resource.|
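+
+For example, the following sketch creates a diagnostic setting that sends a resource's logs to a Log Analytics workspace. The resource ID, workspace ID, and category group are placeholders; the categories available depend on the resource type:
+
+```azurecli
+# Send resource logs to a Log Analytics workspace (assumes the resource supports the allLogs category group)
+az monitor diagnostic-settings create \
+  --name send-to-workspace \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/<provider>/<resource-type>/<resource-name>" \
+  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
+  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
+```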
++
+> [!IMPORTANT]
+> For most data collection in Logs, you incur ingestion and retention costs. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you enable any data collection.
-The experience of using Log Analytics to work with Azure Monitor queries in the Azure portal is similar to the experience of using the Azure Data Explorer Web UI. You can even [include data from a Log Analytics workspace in an Azure Data Explorer query](/azure/data-explorer/query-monitor-data).
-## Relationship to Azure Sentinel and Microsoft Defender for Cloud
+## Working with Microsoft Sentinel and Microsoft Defender for Cloud
-[Security monitoring](../best-practices-plan.md#security-monitoring-solutions) in Azure is performed by [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md).
+[Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) perform [Security monitoring](../best-practices-plan.md#security-monitoring-solutions) in Azure.
These services store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor.
These services store their data in Azure Monitor Logs so that it can be analyzed
| Service | More information | |:--|:--|
-| Azure Sentinel | <ul><li>[Where Microsoft Sentinel data is stored](../../sentinel/geographical-availability-data-residency.md#where-microsoft-sentinel-data-is-stored)</li><li>[Design your Microsoft Sentinel workspace architecture](../../sentinel/design-your-workspace-architecture.md)</li><li>[Design a Log Analytics workspace architecture](./workspace-design.md)</li><li>[Prepare for multiple workspaces and tenants in Microsoft Sentinel](../../sentinel/prepare-multiple-workspaces.md)</li><li>[Enable Microsoft Sentinel on your Log Analytics workspace](../../sentinel/quickstart-onboard.md).</li><li>[Log management in Microsoft Sentinel](../../sentinel/skill-up-resources.md#module-5-log-management)</li><li>[Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/)</li><li>[Charges for workspaces with Microsoft Sentinel](./cost-logs.md#workspaces-with-microsoft-sentinel)</li></ul> |
+| Microsoft Sentinel | <ul><li>[Where Microsoft Sentinel data is stored](../../sentinel/geographical-availability-data-residency.md#where-microsoft-sentinel-data-is-stored)</li><li>[Design your Microsoft Sentinel workspace architecture](../../sentinel/design-your-workspace-architecture.md)</li><li>[Design a Log Analytics workspace architecture](./workspace-design.md)</li><li>[Prepare for multiple workspaces and tenants in Microsoft Sentinel](../../sentinel/prepare-multiple-workspaces.md)</li><li>[Enable Microsoft Sentinel on your Log Analytics workspace](../../sentinel/quickstart-onboard.md).</li><li>[Log management in Microsoft Sentinel](../../sentinel/skill-up-resources.md#module-5-log-management)</li><li>[Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/)</li><li>[Charges for workspaces with Microsoft Sentinel](./cost-logs.md#workspaces-with-microsoft-sentinel)</li></ul> |
| Microsoft Defender for Cloud | <ul><li>[Continuously export Microsoft Defender for Cloud data](../../defender-for-cloud/continuous-export.md)</li><li>[Data consumption](../../defender-for-cloud/data-security.md#data-consumption)</li><li>[Frequently asked questions about Log Analytics workspaces used with Microsoft Defender for Cloud](../../defender-for-cloud/faq-data-collection-agents.yml)</li><li>[Microsoft Defender for Cloud pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/)</li><li>[Charges for workspaces with Microsoft Defender for Cloud](./cost-logs.md#workspaces-with-microsoft-defender-for-cloud)</li></ul> | ## Next steps
azure-monitor Data Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-configure.md
+
+ Title: Manage data retention in a Log Analytics workspace
+description: Configure retention settings for a table in a Log Analytics workspace in Azure Monitor.
++ Last updated : 7/22/2024
+# Customer intent: As an Azure account administrator, I want to manage data retention for each table in my Log Analytics workspace based on my account's data usage and retention needs.
++
+# Manage data retention in a Log Analytics workspace
+
+A Log Analytics workspace retains data in two states:
+
+* **Interactive retention**: In this state, data is available for monitoring, troubleshooting, and near-real-time analytics.
+* **Long-term retention**: In this low-cost state, data isn't available for table plan features, but can be accessed through [search jobs](../logs/search-jobs.md).
+
+This article explains how Log Analytics workspaces retain data and how to manage the data retention of tables in your workspace.
+
+## Interactive, long-term, and total retention
+
+By default, all tables in a Log Analytics workspace retain data for 30 days, except for [log tables with 90-day default retention](#log-tables-with-90-day-default-retention). During this period - the interactive retention period - you can retrieve the data from the table through queries, and the data is available for visualizations, alerts, and other features and services, based on the table plan.
+
+You can extend the interactive retention period of tables with the Analytics plan to up to two years. The Basic and Auxiliary plans have a fixed interactive retention period of 30 days.
+
+> [!NOTE]
+> You can reduce the interactive retention period of Analytics tables to as little as four days using the API or CLI. However, since 31 days of interactive retention are included in the ingestion price, lowering the retention period below 31 days doesn't reduce costs.
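+
+For example, the following sketch uses the table-level CLI command covered later in this article to lower a single Analytics table's interactive retention to the four-day minimum; the resource group, workspace, and table names are placeholders:
+
+```azurecli
+# Reduce interactive retention for one Analytics table to the 4-day minimum
+az monitor log-analytics workspace table update \
+  --resource-group ContosoRG \
+  --workspace-name ContosoWorkspace \
+  --name AppTraces \
+  --retention-time 4
+```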
++
+To retain data in the same table beyond the interactive retention period, extend the table's total retention to up to 12 years. At the end of the interactive retention period, the data stays in the table for the remainder of the total retention period you configure. During this period - the long-term retention period - run a search job to retrieve the specific data you need from the table and make it available for interactive queries in a search results table.
+++
+## How retention modifications work
+
+When you shorten a table's total retention, Azure Monitor Logs waits 30 days before removing the data, so you can revert the change and avoid data loss if you made an error in configuration.
+
+When you increase total retention, the new retention period applies to all data that was already ingested into the table and wasn't yet removed.
+
+When you change the long-term retention settings of a table with existing data, the change takes effect immediately.
+
+***Example***:
+
+- You have an existing Analytics table with 180 days of interactive retention and no long-term retention.
+- You change the interactive retention to 90 days without changing the total retention period of 180 days.
+- Azure Monitor automatically treats the remaining 90 days of total retention as low-cost, long-term retention, so that data that's 90-180 days old isn't lost.
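+
+The following sketch shows how that example looks with the table-level CLI command described later in this article; the table name is a placeholder:
+
+```azurecli
+# Before: interactive retention 180 days, total retention 180 days (no long-term retention)
+# After: interactive retention 90 days, total retention still 180 days,
+# so the remaining 90 days become low-cost long-term retention
+az monitor log-analytics workspace table update \
+  --resource-group ContosoRG \
+  --workspace-name ContosoWorkspace \
+  --name CustomLog_CL \
+  --retention-time 90 \
+  --total-retention-time 180
+```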
++
+## Permissions required
+
+| Action | Permissions required |
+|:---|:---|
+| Configure default interactive retention for Analytics tables in a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/write` and `microsoft.operationalinsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example |
+| Get retention setting by table for a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/tables/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example |
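+
+For example, you can grant a user the Log Analytics Contributor role scoped to a single workspace with the Azure CLI. This is a sketch; the user and workspace identifiers are placeholders:
+
+```azurecli
+# Assign Log Analytics Contributor on a specific workspace
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Log Analytics Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace"
+```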
+
+## Configure the default interactive retention period of Analytics tables
+
+The default interactive retention period of all tables in a Log Analytics workspace is 30 days. You can change the default interactive period of Analytics tables to up to two years by modifying the workspace-level data retention setting. Basic and Auxiliary tables have a fixed interactive retention period of 30 days.
+
+Changing the default workspace-level data retention setting automatically affects all Analytics tables to which the default setting still applies in your workspace. If you've already changed the interactive retention of a particular table, that table isn't affected when you change the workspace default data retention setting.
+
+> [!IMPORTANT]
+> Workspaces with 30-day retention might keep data for 31 days. If you need to retain data for 30 days only to comply with a privacy policy, configure the default workspace retention to 30 days using the API and update the `immediatePurgeDataOn30Days` workspace property to `true`. This operation is currently only supported using the [Workspaces - Update API](/rest/api/loganalytics/workspaces/update).
+
+# [Portal](#tab/portal-3)
+
+To set the default interactive retention period of Analytics tables within a Log Analytics workspace:
+
+1. From the **Log Analytics workspaces** menu in the Azure portal, select your workspace.
+1. Select **Usage and estimated costs** in the left pane.
+1. Select **Data Retention** at the top of the page.
+
+ :::image type="content" source="media/manage-cost-storage/manage-cost-change-retention-01.png" lightbox="media/manage-cost-storage/manage-cost-change-retention-01.png" alt-text="Screenshot that shows changing the workspace data retention setting.":::
+
+1. Move the slider to increase or decrease the number of days, and then select **OK**.
+
+# [API](#tab/api-3)
+
+To set the default interactive retention period of Analytics tables within a Log Analytics workspace, call the [Workspaces - Create Or Update API](/rest/api/loganalytics/workspaces/create-or-update):
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}?api-version=2023-09-01
+```
+
+**Request body**
+
+The request body includes the values in the following table.
+
+|Name | Type | Description |
+| --- | --- | --- |
+|`properties.retentionInDays` | integer | The workspace data retention in days. Allowed values are per pricing plan. See pricing tiers documentation for details. |
+|`location`|string| The geo-location of the resource.|
+|`properties.features.immediatePurgeDataOn30Days`|boolean|Flag that indicates whether data is immediately removed after 30 days and is nonrecoverable. Applicable only when workspace retention is set to 30 days.|
++
+**Example**
+
+This example sets the workspace's retention to the workspace default of 30 days and ensures that data is immediately removed after 30 days and is nonrecoverable.
+
+**Request**
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}?api-version=2023-09-01
+
+{
+ "properties": {
+ "retentionInDays": 30,
+ "features": {"immediatePurgeDataOn30Days": true}
+ },
+"location": "australiasoutheast"
+}
+```
+
+**Response**
+
+Status code: 200
+
+```http
+{
+ "properties": {
+ ...
+ "retentionInDays": 30,
+ "features": {
+ "legacy": 0,
+ "searchVersion": 1,
+ "immediatePurgeDataOn30Days": true,
+ ...
+ },
+ ...
+```
++
+# [CLI](#tab/cli-3)
+
+To set the default interactive retention period of Analytics tables within a Log Analytics workspace, run the [az monitor log-analytics workspace update](/cli/azure/monitor/log-analytics/workspace/#az-monitor-log-analytics-workspace-update) command and pass the `--retention-time` parameter.
+
+This example sets the workspace's default interactive retention period to 30 days:
+
+```azurecli
+az monitor log-analytics workspace update --resource-group myresourcegroup --retention-time 30 --workspace-name myworkspace
+```
+
+# [PowerShell](#tab/PowerShell-3)
+
+Use the [Set-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/Set-AzOperationalInsightsWorkspace) cmdlet to set the default interactive retention period of Analytics tables within a Log Analytics workspace. This example sets the default interactive retention period to 30 days:
+
+```powershell
+Set-AzOperationalInsightsWorkspace -ResourceGroupName "myResourceGroup" -Name "MyWorkspace" -RetentionInDays 30
+```
++
+## Configure table-level retention
+
+By default, all tables with the Analytics data plan inherit the [Log Analytics workspace's default interactive retention setting](#configure-the-default-interactive-retention-period-of-analytics-tables) and have no long-term retention. You can increase the interactive retention period of Analytics tables to up to 730 days at an [extra cost](https://azure.microsoft.com/pricing/details/monitor/).
+
+To add long-term retention to a table with any data plan, set **total retention** to up to 12 years (4,383 days). The Auxiliary table plan is currently in public preview, during which the plan's total retention is fixed at 365 days.
+
+> [!NOTE]
+> Currently, you can set total retention to up to 12 years through the Azure portal and API. CLI and PowerShell are limited to seven years; support for 12 years will follow.
+
+# [Portal](#tab/portal-1)
+
+To modify the retention setting for a table in the Azure portal:
+
+1. From the **Log Analytics workspaces** menu, select **Tables**.
+
+ The **Tables** screen lists all the tables in the workspace.
+
+1. Select the context menu for the table you want to configure and select **Manage table**.
+
+ :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot that shows the Manage table button for one of the tables in a workspace.":::
+
+1. Configure the interactive retention and total retention settings in the **Data retention settings** section of the table configuration screen.
+
+ :::image type="content" source="media/data-retention-configure/log-analytics-configure-table-retention-auxiliary.png" lightbox="media/data-retention-configure/log-analytics-configure-table-retention-auxiliary.png" alt-text="Screenshot that shows the data retention settings on the table configuration screen.":::
+
+# [API](#tab/api-1)
+
+To modify the retention setting for a table, call the [Tables - Update API](/rest/api/loganalytics/tables/update):
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2022-10-01
+```
+
+You can use either PUT or PATCH, with the following difference:
+
+- The **PUT** API sets `retentionInDays` and `totalRetentionInDays` to the default value if you don't set non-null values.
+- The **PATCH** API doesn't change the `retentionInDays` or `totalRetentionInDays` values if you don't specify values.
+
+**Request body**
+
+The request body includes the values in the following table.
+
+|Name | Type | Description |
+| | | |
+|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730. <br/>Setting this property to null applies the workspace retention period. For a table with the Basic or Auxiliary plan, the value is always 30. |
+|properties.totalRetentionInDays | integer | The table's total data retention including long-term retention. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, 2556, 2922, 3288, 3653, 4018, or 4383. Set this property to null if you don't want long-term retention. |
+
+**Example**
+
+This example sets the table's interactive retention to the workspace default of 30 days, and the total retention to two years, which means that the long-term retention period is 23 months.
+
+**Request**
+
+```http
+PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/CustomLog_CL?api-version=2022-10-01
+```
+
+**Request body**
+
+```http
+{
+ "properties": {
+ "retentionInDays": null,
+ "totalRetentionInDays": 730
+ }
+}
+```
+
+**Response**
+
+Status code: 200
+
+```http
+{
+ "properties": {
+ "retentionInDays": 30,
+ "totalRetentionInDays": 730,
+ "archiveRetentionInDays": 700,
+ ...
+ },
+ ...
+}
+```
+
+# [CLI](#tab/cli-1)
+
+To modify a table's retention settings, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and pass the `--retention-time` and `--total-retention-time` parameters.
+
+This example sets the table's interactive retention to 30 days, and the total retention to two years, which means that the long-term retention period is 23 months:
+
+```azurecli
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name AzureMetrics --retention-time 30 --total-retention-time 730
+```
+
+To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command with the `--retention-time` and `--total-retention-time` parameters set to `-1`.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Syslog --retention-time -1 --total-retention-time -1
+```
+
+# [PowerShell](#tab/PowerShell-1)
+
+Use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable) cmdlet to modify a table's retention settings. This example sets the table's interactive retention to 30 days, and the total retention to two years, which means that the long-term retention period is 23 months:
+
+```powershell
+Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName AzureMetrics -RetentionInDays 30 -TotalRetentionInDays 730
+```
+
+To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable) cmdlet with the `-RetentionInDays` and `-TotalRetentionInDays` parameters set to `-1`.
+
+For example:
+
+```powershell
+Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName Syslog -RetentionInDays -1 -TotalRetentionInDays -1
+```
++++
+## Get retention settings by table
+
+# [Portal](#tab/portal-2)
+
+To view a table's retention settings in the Azure portal, from the **Log Analytics workspaces** menu, select **Tables**.
+
+The **Tables** screen shows the interactive retention and total retention periods for all the tables in the workspace.
++++
+# [API](#tab/api-2)
+
+To get the retention setting of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API:
+
+```JSON
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2022-10-01
+```
+
+To get all table-level retention settings in your workspace, don't set a table name.
+
+For example:
+
+```JSON
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2022-10-01
+```
+
+# [CLI](#tab/cli-2)
+
+To get the retention setting of a particular table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name SecurityEvent
+```
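+
+To list the retention settings of all tables in the workspace at once, you can use the corresponding `list` command, if it's available in your CLI version. A sketch:
+
+```azurecli
+az monitor log-analytics workspace table list --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --query "[].{table:name, interactiveRetention:retentionInDays, totalRetention:totalRetentionInDays}" --output table
+```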
+
+# [PowerShell](#tab/PowerShell-2)
+
+To get the retention setting of a particular table, run the [Get-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/get-azoperationalinsightstable) cmdlet.
+
+For example:
+
+```powershell
+Get-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -tableName SecurityEvent
+```
++++
+## What happens to data when you delete a table in a Log Analytics workspace?
+
+A Log Analytics workspace can contain several [types of tables](../logs/manage-logs-tables.md#table-type-and-schema). What happens when you delete the table is different for each:
+
+|Table type|Data retention|Recommendations|
+|-|-|-|
+|Azure table |An Azure table holds logs from an Azure resource or data required by an Azure service or solution and can't be deleted. When you stop streaming data from the resource, service, or solution, data remains in the workspace until the end of the retention period defined for the table. |To minimize charges, set [table-level retention](#configure-table-level-retention) to four days before you stop streaming logs to the table.|
+|[Custom log table](./create-custom-table.md#create-a-custom-table) (`table_CL`)| Soft deletes the table until the end of the table-level retention or default workspace retention period. During the soft delete period, you continue to pay for data retention and can recreate the table and access the data by setting up a table with the same name and schema. Fourteen days after you delete a custom table, Azure Monitor removes the table-level retention configuration and applies the default workspace retention.|To minimize charges, set [table-level retention](#configure-table-level-retention) to four days before you delete the table.|
+|[Search results table](./search-jobs.md) (`table_SRCH`)| Deletes the table and data immediately and permanently.||
+|[Restored table](./restore.md) (`table_RST`)| Deletes the hot cache provisioned for the restore, but source table data isn't deleted.||
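+
+As a sketch of the recommendation above, the following command (with placeholder names) reduces a table's interactive and total retention to the four-day minimum before you stop streaming to it or delete it:
+
+```azurecli
+# Drop both interactive and total retention to the 4-day minimum to limit further retention charges
+az monitor log-analytics workspace table update \
+  --resource-group ContosoRG \
+  --workspace-name ContosoWorkspace \
+  --name ContainerLogV2 \
+  --retention-time 4 \
+  --total-retention-time 4
+```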
+
+## Log tables with 90-day default retention
+
+By default, the `Usage` and `AzureActivity` tables keep data for at least 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these tables. These tables are also free from data ingestion charges.
+
+Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention of each of these tables individually:
+
+- `AppAvailabilityResults`
+- `AppBrowserTimings`
+- `AppDependencies`
+- `AppExceptions`
+- `AppEvents`
+- `AppMetrics`
+- `AppPageViews`
+- `AppPerformanceCounters`
+- `AppRequests`
+- `AppSystemEvents`
+- `AppTraces`
+
+## Pricing model
+
+The charge for adding interactive retention and long-term retention is calculated based on the volume of data you retain, in GB, and the number of days for which you retain the data. Log data that has `_IsBillable == false` isn't subject to ingestion or retention charges.
+
+For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
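+
+As a rough way to see which tables drive billable volume, you can query the `Usage` table, which reports ingested volume in MB and whether it was billable. This sketch assumes the `log-analytics` Azure CLI extension is installed and uses the workspace's customer ID (a GUID) rather than its resource name:
+
+```azurecli
+az monitor log-analytics query \
+  --workspace "<workspace-customer-id>" \
+  --analytics-query "Usage | where TimeGenerated > ago(30d) and IsBillable == true | summarize BillableGB = sum(Quantity) / 1024 by DataType | sort by BillableGB desc"
+```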
+
+## Next steps
+
+Learn more about:
+
+- [Managing personal data in Azure Monitor Logs](../logs/personal-data-mgmt.md)
+- [Creating a search job to retrieve auxiliary data matching particular criteria](search-jobs.md)
+- [Restore data from the auxiliary tier for a specific time range](restore.md)
azure-monitor Log Analytics Simple Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-simple-mode.md
After you [get started in Simple mode](#get-started-in-simple-mode), you can exp
By default, Simple mode lists the latest 1,000 entries in the table from the last 24 hours.
-To change the time range and number of records displayed, use the **Time range** and **Limit** selectors. For more information about result limit, see [Configure query result limit](#configure-query-result-limit)
+To change the time range and number of records displayed, use the **Time range** and **Limit** selectors. For more information about result limit, see [Configure query result limit](#configure-query-result-limit).
:::image type="content" source="media/log-analytics-explorer/log-analytics-time-range-limit.png" alt-text="Screenshot that shows the time range and limit selectors in Log Analytics." lightbox="media/log-analytics-explorer/log-analytics-time-range-limit.png":::
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
Title: Log Analytics workspace overview description: Overview of Log Analytics workspace, which stores data for Azure Monitor Logs. Previously updated : 10/24/2023 Last updated : 07/20/2024+
+# Customer intent: As a Log Analytics administrator, I want to understand how to set up and manage my workspace, so that I can best address my business needs, including data access, cost management, and workspace health. As a Log Analytics user, I want to understand the workspace configuration options available to me, so I can best address my analysis needs.
# Log Analytics workspace overview
-A Log Analytics workspace is a unique environment for log data from Azure Monitor and other Azure services, such as Microsoft Sentinel and Microsoft Defender for Cloud. Each workspace has its own data repository and configuration but might combine data from multiple services. This article provides an overview of concepts related to Log Analytics workspaces and provides links to other documentation for more details on each.
+A Log Analytics workspace is a data store into which you can collect any type of log data from all of your Azure and non-Azure resources and applications. Workspace configuration options let you manage all of your log data in one workspace to meet the operations, analysis, and auditing needs of different personas in your organization through:
-> [!IMPORTANT]
-> You might see the term *Microsoft Sentinel workspace* used in [Microsoft Sentinel](../../sentinel/overview.md) documentation. This workspace is the same Log Analytics workspace described in this article, but it's enabled for Microsoft Sentinel. All data in the workspace is subject to Microsoft Sentinel pricing as described in the [Cost](#cost) section.
+- Azure Monitor features, such as built-in [insights experiences](../insights/insights-overview.md), [alerts](../alerts/alerts-create-log-alert-rule.md), and [automatic actions](../autoscale/autoscale-overview.md)
+- Other Azure services, such as [Microsoft Sentinel](/azure/sentinel/overview), [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), and [Logic Apps](/azure/connectors/connectors-azure-monitor-logs)
+- Microsoft tools, such as [Power BI](log-powerbi.md) and [Excel](log-excel.md)
+- Integration with custom and third-party applications
-You can use a single workspace for all your data collection. You can also create multiple workspaces based on requirements such as:
+This article provides an overview of concepts related to Log Analytics workspaces.
+
+> [!IMPORTANT]
+> [Microsoft Sentinel](../../sentinel/overview.md) documentation uses the term *Microsoft Sentinel workspace*. This workspace is the same Log Analytics workspace described in this article, but it's enabled for Microsoft Sentinel. All data in the workspace is subject to Microsoft Sentinel pricing.
-- The geographic location of the data.-- Access rights that define which users can access data.-- Configuration settings like pricing tiers and data retention.
+## Log tables
-To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Design a Log Analytics workspace configuration](./workspace-design.md).
+Each Log Analytics workspace contains multiple tables in which Azure Monitor Logs stores data you collect.
-## Data structure
+Azure Monitor Logs automatically creates tables required to store monitoring data you collect from your Azure environment. You [create custom tables](create-custom-table.md) to store data you collect from non-Azure resources and applications, based on the data model of the log data you collect and how you want to store and use the data.
-Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns. Log queries define columns of data to retrieve and provide output to different features of Azure Monitor and other services that use workspaces.
+Table management settings let you control access to specific tables, and manage the data model, retention, and cost of data in each table. For more information, see [Manage tables in a Log Analytics workspace](manage-logs-tables.md).
:::image type="content" source="media/data-platform-logs/logs-structure.png" lightbox="media/data-platform-logs/logs-structure.png" alt-text="Diagram that shows the Azure Monitor Logs structure.":::
-> [!WARNING]
-> Table names are used for billing purposes so they should not contain sensitive information.
-## Cost
+## Data retention
-There's no direct cost for creating or maintaining a workspace. You're charged for the data sent to it, which is also known as data ingestion. You're charged for how long that data is stored, which is otherwise known as data retention. These costs might vary based on the log data plan of each table, as described in [Log data plan](../logs/basic-logs-configure.md).
+A Log Analytics workspace retains data in two states - **interactive retention** and **long-term retention**.
-For information on pricing, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). For guidance on how to reduce your costs, see [Azure Monitor best practices - Cost management](../best-practices-cost.md). If you're using your Log Analytics workspace with services other than Azure Monitor, see the documentation for those services for pricing information.
+During the interactive retention period, you retrieve the data from the table through queries, and the data is available for visualizations, alerts, and other features and services, based on the table plan.
+
+Each table in your Log Analytics workspace lets you retain data for up to 12 years in low-cost, long-term retention. Retrieve the specific data you need from long-term retention into interactive retention by using a search job. This means you manage your log data in one place, without moving data to external storage, and you get the full analytics capabilities of Azure Monitor on older data when you need it.
+
+For more information, see [Manage data retention in a Log Analytics workspace](data-retention-configure.md).
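+
+As an illustration only, a search job can be created from the Azure CLI, assuming your CLI version includes the search-job commands; the names, query, and time range here are placeholders:
+
+```azurecli
+# Pull matching records from long-term retention into a new *_SRCH results table
+az monitor log-analytics workspace table search-job create \
+  --resource-group ContosoRG \
+  --workspace-name ContosoWorkspace \
+  --name OldSyslog_SRCH \
+  --search-query "Syslog | where SyslogMessage has 'error'" \
+  --start-search-time "2024-01-01T00:00:00Z" \
+  --end-search-time "2024-01-31T00:00:00Z" \
+  --limit 1000
+```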
+
+## Data access
-## Workspace transformation DCR
+Permission to access data in a Log Analytics workspace is defined by the [access control mode](manage-access.md#access-control-mode) setting on each workspace. You can give users explicit access to the workspace by using a [built-in or custom role](../roles-permissions-security.md). Or, you can allow access to data collected for Azure resources to users with access to those resources.
+
+For more information, see [Manage access to log data and workspaces in Azure Monitor](manage-access.md).
+
+## View Log Analytics workspace insights
+
+[Log Analytics Workspace Insights](log-analytics-workspace-insights-overview.md) helps you manage and optimize your Log Analytics workspaces with a comprehensive view of your workspace usage, performance, health, ingestion, queries, and change log.
++
+## Transform data you ingest into your Log Analytics workspace
[Data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define data coming into Azure Monitor can include transformations that allow you to filter and transform data before it's ingested into the workspace. Since all data sources don't yet support DCRs, each workspace can have a [workspace transformation DCR](../essentials/data-collection-transformations-workspace.md).
For information on pricing, see [Azure Monitor pricing](https://azure.microsoft.
For example, you might have [diagnostic settings](../essentials/diagnostic-settings.md) that send [resource logs](../essentials/resource-logs.md) for different Azure resources to your workspace. You can create a transformation for the table that collects the resource logs that filters this data for only records that you want. This method saves you the ingestion cost for records you don't need. You might also want to extract important data from certain columns and store it in other columns in the workspace to support simpler queries.
-## Data retention and archive
-
-Data in each table in a [Log Analytics workspace](log-analytics-workspace-overview.md) is retained for a specified period of time after which it's either removed or archived with a reduced retention fee. Set the retention time to balance your requirement for having data available with reducing your cost for data retention.
+## Cost
-To access archived data, you must first retrieve data from it in an Analytics Logs table by using one of the following methods:
+There's no direct cost for creating or maintaining a workspace. You're charged for the data you ingest into the workspace and for data retention, based on each table's [table plan](data-platform-logs.md#table-plans).
-| Method | Description |
-|:|:|
-| [Search jobs](search-jobs.md) | Retrieve data matching particular criteria. |
-| [Restore](restore.md) | Retrieve data from a particular time range. |
+For information on pricing, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). For guidance on how to reduce your costs, see [Azure Monitor best practices - Cost management](../best-practices-cost.md). If you're using your Log Analytics workspace with services other than Azure Monitor, see the documentation for those services for pricing information.
+## Design a Log Analytics workspace architecture to address specific business needs
-## Permissions
+You can use a single workspace for all your data collection. However, you can also create multiple workspaces based on specific business requirements such as regulatory or compliance requirements to store data in specific locations, split billing, and resilience.
-Permission to access data in a Log Analytics workspace is defined by the [access control mode](manage-access.md#access-control-mode), which is a setting on each workspace. You can give users explicit access to the workspace by using a [built-in or custom role](../roles-permissions-security.md). Or, you can allow access to data collected for Azure resources to users with access to those resources.
+For considerations related to creating multiple workspaces, see [Design a Log Analytics workspace configuration](./workspace-design.md).
-See [Manage access to log data and workspaces in Azure Monitor](manage-access.md) for information on the different permission options and how to configure permissions.
## Next steps
azure-monitor Logs Table Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-table-plans.md
+
+ Title: Select a table plan based on data usage in a Log Analytics workspace
+description: Use the Auxiliary, Basic, and Analytics Logs plans to reduce costs and take advantage of advanced analytics capabilities in Azure Monitor Logs.
++++ Last updated : 07/04/2024+
+# Customer intent: As a Log Analytics workspace administrator, I want to configure the plans of tables in my Log Analytics workspace so that I pay less for data I use less frequently.
++
+# Select a table plan based on data usage in a Log Analytics workspace
+
+You can use one Log Analytics workspace to store any type of log required for any purpose. For example:
+
+- High-volume, verbose data that requires **cheap long-term storage for audit and compliance**
+- App and resource data for **troubleshooting** by developers
+- Key event and performance data for scaling and alerting to ensure ongoing **operational excellence and security**
+- Aggregated long-term data trends for **advanced analytics and machine learning**
+
+Table plans let you manage data costs based on how often you use the data in a table and the type of analysis you need the data for. This article explains how to set a table's plan.
+
+For information about what each table plan offers and which use cases it's optimal for, see [Table plans](data-platform-logs.md#table-plans).
+
+## Permissions required
+
+| Action | Permissions required |
+|:---|:---|
+| View table plan | `Microsoft.OperationalInsights/workspaces/tables/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example |
+| Set table plan | `Microsoft.OperationalInsights/workspaces/write` and `microsoft.operationalinsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example |
+
+## Set the table plan
+
+You can set the table plan to Auxiliary only when you [create a custom table](../logs/create-custom-table.md#create-a-custom-table) by using the API. Built-in Azure tables don't currently support the Auxiliary plan. After you create a table with an Auxiliary plan, you can't switch the table's plan.
+
+All tables support the Analytics plan. All DCR-based custom tables and [some Azure tables](basic-logs-azure-tables.md) support the Basic plan. You can switch between the Analytics and Basic plans; the change takes effect on existing data in the table immediately.
+
+When you change a table's plan from Analytics to Basic, Azure Monitor treats any data that's older than 30 days as long-term retention data, based on the total retention period set for the table. In other words, the table's total retention period remains unchanged unless you explicitly [modify the long-term retention period](../logs/data-retention-configure.md).
+
+> [!NOTE]
+> You can switch a table's plan once a week.
+# [Portal](#tab/portal-1)
+
+Analytics is the default table plan of all tables you create in the portal. You can switch between the Analytics and Basic plans, as described in this section.
+
+To switch a table's plan in the Azure portal:
+
+1. From the **Log Analytics workspaces** menu, select **Tables**.
+
+ The **Tables** screen lists all the tables in the workspace.
+
+1. Select the context menu for the table you want to configure and select **Manage table**.
+
+ :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot that shows the Manage table button for one of the tables in a workspace.":::
+
+1. From the **Table plan** dropdown on the table configuration screen, select **Basic** or **Analytics**.
+
+ The **Table plan** dropdown is enabled only for [tables that support Basic logs](basic-logs-azure-tables.md).
+
+ :::image type="content" source="media/data-retention-configure/log-analytics-configure-table-retention-auxiliary.png" lightbox="media/data-retention-configure/log-analytics-configure-table-retention-auxiliary.png" alt-text="Screenshot that shows the data retention settings on the table configuration screen.":::
+
+1. Select **Save**.
+
+# [API](#tab/api-1)
+
+To configure a table for Basic logs or Analytics logs, call the [Tables - Update API](/rest/api/loganalytics/tables/create-or-update):
+
+```http
+PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/tables/<tableName>?api-version=2021-12-01-preview
+```
+
+> [!IMPORTANT]
+> Use the bearer token for authentication. Learn more about [using bearer tokens](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx).
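+
+One way to obtain a bearer token for these calls is with the Azure CLI, for example:
+
+```azurecli
+# Get an access token for Azure Resource Manager
+az account get-access-token --resource https://management.azure.com --query accessToken --output tsv
+```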
+
+**Request body**
+
+|Name | Type | Description |
+| | | |
+|properties.plan | string | The table plan. Possible values are `Analytics` and `Basic`.|
+
+**Example**
+
+This example configures the `ContainerLogV2` table for Basic logs.
+
+Container Insights uses `ContainerLog` by default. To switch to using `ContainerLogV2` for Container insights, [enable the ContainerLogV2 schema](../containers/container-insights-logging-v2.md) before you convert the table to Basic logs.
+
+**Sample request**
+
+```http
+PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLogV2?api-version=2021-12-01-preview
+```
+
+Use this request body to change to Basic logs:
+
+```http
+{
+ "properties": {
+ "plan": "Basic"
+ }
+}
+```
+
+Use this request body to change to Analytics Logs:
+
+```http
+{
+ "properties": {
+ "plan": "Analytics"
+ }
+}
+```
+
+**Sample response**
+
+This sample is the response for a table changed to Basic logs:
+
+Status code: 200
+
+```http
+{
+ "properties": {
+ "retentionInDays": 30,
+ "totalRetentionInDays": 30,
+ "archiveRetentionInDays": 22,
+ "plan": "Basic",
+ "lastPlanModifiedDate": "2022-01-01T14:34:04.37",
+ "schema": {...}
+ },
+ "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
+ "name": "ContainerLogV2"
+}
+```
+
+# [CLI](#tab/cli-1)
+
+To configure a table for Basic logs or Analytics logs, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and set the `--plan` parameter to `Basic` or `Analytics`.
+
+For example:
+
+- To set Basic logs:
+
+ ```azurecli
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Basic
+ ```
+
+- To set Analytics Logs:
+
+ ```azurecli
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name ContainerLogV2 --plan Analytics
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+
+To configure a table's plan, use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable) cmdlet:
+
+```powershell
+Update-AzOperationalInsightsTable -ResourceGroupName RG-NAME -WorkspaceName WORKSPACE-NAME -TableName TABLE-NAME -Plan Basic|Analytics
+```
+++
+## Related content
+
+- [Manage data retention](../logs/data-retention-configure.md).
+- [Tables that support the Basic table plan in Azure Monitor Logs](basic-logs-azure-tables.md).
+
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
Each workspace can have multiple accounts associated with it. Each account can h
| Add and remove monitoring solutions. | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/*` <br> `Microsoft.OperationsManagement/*` <br> `Microsoft.Automation/*` <br> `Microsoft.Resources/deployments/*/write`<br><br>These permissions need to be granted at resource group or subscription level. | | View data in the **Backup** and **Site Recovery** solution tiles. | Administrator/Co-administrator<br><br>Accesses resources deployed by using the classic deployment model. | | Run a search job. | `Microsoft.OperationalInsights/workspaces/tables/write` <br> `Microsoft.OperationalInsights/workspaces/searchJobs/write`|
-| Restore data from archived table. | `Microsoft.OperationalInsights/workspaces/tables/write` <br> `Microsoft.OperationalInsights/workspaces/restoreLogs/write`|
+| Restore data from long-term retention. | `Microsoft.OperationalInsights/workspaces/tables/write` <br> `Microsoft.OperationalInsights/workspaces/restoreLogs/write`|
### Built-in roles
Members of the Log Analytics Contributor role can:
- Configure the collection of logs from Azure Storage. - Configure data export rules. - [Run a search job.](search-jobs.md)-- [Restore archived logs.](restore.md)
+- [Restore data from long-term retention.](restore.md)
> [!WARNING] > You can use the permission to add a virtual machine extension to a virtual machine to gain full control over a virtual machine.
In addition to using the built-in roles for a Log Analytics workspace, you can c
- `Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read`: Required to be able to use Update Management solutions - Grant users the following permissions to their resources: `*/read`, assigned to the Reader role, or `Microsoft.Insights/logs/*/read`
-**Example 6: Restrict a user from restoring archived logs.**
+**Example 6: Restrict a user from restoring data from long-term retention.**
- Configure the workspace access control mode to *use workspace or resource permissions*. - Assign the user to the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role.-- Add the following NonAction to block users from restoring archived logs: `Microsoft.OperationalInsights/workspaces/restoreLogs/write`
+- Add the following NonAction to block users from restoring data from long-term retention: `Microsoft.OperationalInsights/workspaces/restoreLogs/write`
## Set table-level read access
-Table-level access settings let you grant specific users or groups read-only permission to data from certain tables. Users with table-level read access can read data from the specified tables in both the workspace and the resource context.
-
-> [!NOTE]
-> We recommend using the method described here, which is currently in **preview**, to define table-level access. Alternatively, you can use the [legacy method of setting table-level read access](#legacy-method-of-setting-table-level-read-access), which has some limitations related to custom log tables. During preview, the recommended method described here does not apply to Microsoft Sentinel Detection Rules, which might have access to more tables than intended. Before using either method, see [Table-level access considerations and limitations](#table-level-access-considerations-and-limitations).
-
-Granting table-level read access involves assigning a user two roles:
--- At the workspace level - a custom role that provides limited permissions to read workspace details and run a query in the workspace, but not to read data from any tables. -- At the table level - a **Reader** role, scoped to the specific table. -
-**To grant a user or group limited permissions to the Log Analytics workspace:**
-
-1. Create a [custom role](../../role-based-access-control/custom-roles.md) at the workspace level to let users read workspace details and run a query in the workspace, without providing read access to data in any tables:
-
- 1. Navigate to your workspace and select **Access control (IAM)** > **Roles**.
-
- 1. Right-click the **Reader** role and select **Clone**.
-
- :::image type="content" source="media/manage-access/access-control-clone-role.png" alt-text="Screenshot that shows the Roles tab of the Access control screen with the clone button highlighted for the Reader role." lightbox="media/manage-access/access-control-clone-role.png":::
-
- This opens the **Create a custom role** screen.
-
- 1. On the **Basics** tab of the screen:
- 1. Enter a **Custom role name** value and, optionally, provide a description.
- 1. Set **Baseline permissions** to **Start from scratch**.
-
- :::image type="content" source="media/manage-access/manage-access-create-custom-role.png" alt-text="Screenshot that shows the Basics tab of the Create a custom role screen with the Custom role name and Description fields highlighted." lightbox="media/manage-access/manage-access-create-custom-role.png":::
-
- 1. Select the **JSON** tab > **Edit**:
-
- 1. In the `"actions"` section, add these actions:
-
- ```json
- "Microsoft.OperationalInsights/workspaces/read",
- "Microsoft.OperationalInsights/workspaces/query/read"
- ```
-
- 1. In the `"not actions"` section, add:
-
- ```json
- "Microsoft.OperationalInsights/workspaces/sharedKeys/read"
- ```
-
- :::image type="content" source="media/manage-access/manage-access-create-custom-role-json.png" alt-text="Screenshot that shows the JSON tab of the Create a custom role screen with the actions section of the JSON file highlighted." lightbox="media/manage-access/manage-access-create-custom-role-json.png":::
-
- 1. Select **Save** > **Review + Create** at the bottom of the screen, and then **Create** on the next page.
-
-1. Assign your custom role to the relevant user:
- 1. Select **Access control (AIM)** > **Add** > **Add role assignment**.
-
- :::image type="content" source="media/manage-access/manage-access-add-role-assignment-button.png" alt-text="Screenshot that shows the Access control screen with the Add role assignment button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-button.png":::
-
- 1. Select the custom role you created and select **Next**.
-
- :::image type="content" source="media/manage-access/manage-access-add-role-assignment-screen.png" alt-text="Screenshot that shows the Add role assignment screen with a custom role and the Next button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-screen.png":::
--
- This opens the **Members** tab of the **Add custom role assignment** screen.
-
- 1. Click **+ Select members** to open the **Select members** screen.
-
- :::image type="content" source="media/manage-access/manage-access-add-role-assignment-select-members.png" alt-text="Screenshot that shows the Select members screen." lightbox="media/manage-access/manage-access-add-role-assignment-select-members.png":::
-
- 1. Search for and select a user and click **Select**.
- 1. Select **Review and assign**.
-
-The user can now read workspace details and run a query, but can't read data from any tables.
-
-**To grant the user read access to a specific table:**
-
-1. From the **Log Analytics workspaces** menu, select **Tables**.
-1. Select the ellipsis ( **...** ) to the right of your table and select **Access control (IAM)**.
-
- :::image type="content" source="media/manage-access/table-level-access-control.png" alt-text="Screenshot that shows the Log Analytics workspace table management screen with the table-level access control button highlighted." lightbox="media/manage-access/manage-access-create-custom-role-json.png":::
-
-1. On the **Access control (IAM)** screen, select **Add** > **Add role assignment**.
-1. Select the **Reader** role and select **Next**.
-1. Click **+ Select members** to open the **Select members** screen.
-1. Search for and select the user and click **Select**.
-1. Select **Review and assign**.
-
-The user can now read data from this specific table. Grant the user read access to other tables in the workspace, as needed.
-
-### Legacy method of setting table-level read access
-
-The legacy method of table-level also uses [Azure custom roles](../../role-based-access-control/custom-roles.md) to let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
-
-To define access to a particular table, create a [custom role](../../role-based-access-control/custom-roles.md):
-
-* Set the user permissions in the **Actions** section of the role definition.
-* Use `Microsoft.OperationalInsights/workspaces/query/*` to grant access to all tables.
-* To exclude access to specific tables when you use a wildcard in **Actions**, list the tables excluded tables in the **NotActions** section of the role definition.
-
-Here are examples of custom role actions to grant and deny access to specific tables.
-
-Grant access to the _Heartbeat_ and _AzureActivity_ tables:
-
-```
-"Actions": [
- "Microsoft.OperationalInsights/workspaces/read",
- "Microsoft.OperationalInsights/workspaces/query/read",
- "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
- "Microsoft.OperationalInsights/workspaces/query/AzureActivity/read"
- ],
-```
-
-Grant access to only the _SecurityBaseline_ table:
-
-```
-"Actions": [
- "Microsoft.OperationalInsights/workspaces/read",
- "Microsoft.OperationalInsights/workspaces/query/read",
- "Microsoft.OperationalInsights/workspaces/query/SecurityBaseline/read"
-],
-```
--
-Grant access to all tables except the _SecurityAlert_ table:
-
-```
-"Actions": [
- "Microsoft.OperationalInsights/workspaces/read",
- "Microsoft.OperationalInsights/workspaces/query/read",
- "Microsoft.OperationalInsights/workspaces/query/*/read"
-],
-"notActions": [
- "Microsoft.OperationalInsights/workspaces/query/SecurityAlert/read"
-],
-```
-
-#### Limitations of the legacy method related to custom tables
-
-Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information).
-
-Using the legacy method of table-level access, you can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions:
-
-```
-"Actions": [
- "Microsoft.OperationalInsights/workspaces/read",
- "Microsoft.OperationalInsights/workspaces/query/read",
- "Microsoft.OperationalInsights/workspaces/query/Tables.Custom/read"
-],
-```
-
-### Table-level access considerations and limitations
--- In the Log Analytics UI, users with table-level can see the list of all tables in the workspace, but can only retrieve data from tables to which they have access.-- The standard Reader or Contributor roles, which include the _\*/read_ action, override table-level access control and give users access to all log data.-- A user with table-level access but no workspace-level permissions can access log data from the API but not from the Azure portal. -- Administrators and owners of the subscription have access to all data types regardless of any other permission settings.-- Workspace owners are treated like any other user for per-table access control.-- Assign roles to security groups instead of individual users to reduce the number of assignments. This practice will also help you use existing group management tools to configure and verify access.
+See [Manage table-level read access](manage-table-access.md).
## Next steps
azure-monitor Manage Logs Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md
description: Learn how to manage table settings in a Log Analytics workspace bas
Previously updated : 05/26/2024 Last updated : 07/21/2024 # Customer intent: As a Log Analytics workspace administrator, I want to understand how table properties work and how to view and manage table properties so that I can manage the data and costs related to a Log Analytics workspace effectively. # Manage tables in a Log Analytics workspace
-A Log Analytics workspace lets you collect logs from Azure and non-Azure resources into one space for data analysis, use by other services, such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example, using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). The Log Analytics workspace consists of tables, which you can configure to manage your data model and log-related costs. This article explains the table configuration options in Azure Monitor Logs and how to set table properties based on your data analysis and cost management needs.
-
-## Permissions required
-
-You must have `microsoft.operationalinsights/workspaces/tables/write` permissions to the Log Analytics workspaces you manage, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
+A Log Analytics workspace lets you collect log data from Azure and non-Azure resources into one space for analysis, use by other services, such as [Sentinel](../../../articles/sentinel/overview.md), and to trigger alerts and actions, for example, using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). The Log Analytics workspace consists of tables, which you can configure to manage your data model, data access, and log-related costs. This article explains the table configuration options in Azure Monitor Logs and how to set table properties based on your data analysis and cost management needs.
## Table properties This diagram provides an overview of the table configuration options in Azure Monitor Logs: ### Table type and schema
Your Log Analytics workspace can contain the following types of tables:
| Azure table | Logs from Azure resources or required by Azure services and solutions. | Azure Monitor Logs creates Azure tables automatically based on Azure services you use and [diagnostic settings](../essentials/diagnostic-settings.md) you configure for specific resources. Each Azure table has a predefined schema. You can [add columns to an Azure table](../logs/create-custom-table.md#add-or-delete-a-custom-column) to store transformed log data or enrich data in the Azure table with data from another source.| | Custom table | Non-Azure resources and any other data source, such as file-based logs. | You can [define a custom table's schema](../logs/create-custom-table.md) based on how you want to store data you collect from a given data source. | | Search results | All data stored in a Log Analytics workspace. | The schema of a search results table is based on the query you define when you [run the search job](../logs/search-jobs.md). You can't edit the schema of existing search results tables. |
-| Restored logs | Archived logs. | A restored logs table has the same schema as the table from which you [restore logs](../logs/restore.md). You can't edit the schema of existing restored logs tables. |
+| Restored logs | Data stored in a specific table in a Log Analytics workspace. | A restored logs table has the same schema as the table from which you [restore logs](../logs/restore.md). You can't edit the schema of existing restored logs tables. |
+
+### Table plan
-### Log data plan
+[Configure a table's plan](../logs/logs-table-plans.md) based on how often you access the data in the table:
+- The **Analytics** plan is suited for continuous monitoring, real-time detection, and performance analytics. This plan makes log data available for interactive multi-table queries and use by features and services for 30 days to two years.
+- The **Basic** plan is suited for troubleshooting and incident response. This plan offers discounted ingestion and optimized single-table queries for 30 days.
+- The **Auxiliary** plan is suited for low-touch data, such as verbose logs, and data required for auditing and compliance. This plan offers low-cost ingestion and unoptimized single-table queries for 30 days.
-[Configure a table's log data plan](../logs/basic-logs-configure.md) based on how often you access the data in the table:
-- The **Analytics** plan makes log data available for interactive queries and use by features and services. -- The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance.
+For full details about Azure Monitor Logs table plans, see [Azure Monitor Logs: Table plans](../logs/data-platform-logs.md#table-plans).
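
For illustration, a table's plan can also be set programmatically. The following Azure PowerShell sketch switches a hypothetical DCR-based custom table to the Basic plan; the resource and table names are placeholders, and it assumes a recent Az.OperationalInsights module.

```powershell
# Switch a hypothetical custom table to the Basic plan (resource names are placeholders).
Update-AzOperationalInsightsTable `
    -ResourceGroupName "my-resource-group" `
    -WorkspaceName "my-workspace" `
    -TableName "MyAppLogs_CL" `
    -Plan "Basic"
```
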
-### Retention and archive
+### Long-term retention
-Archiving is a low-cost solution for keeping data that you no longer use regularly in your workspace for compliance or occasional investigation. [Set table-level retention](../logs/data-retention-archive.md) to override the default workspace retention and to archive data within your workspace.
+Long-term retention is a low-cost solution for keeping data that you don't use regularly in your workspace for compliance or occasional investigation. Use [table-level retention settings](../logs/data-retention-configure.md) to add or extend long-term retention.
-To access archived data, [run a search job](../logs/search-jobs.md) or [restore data for a specific time range](../logs/restore.md).
+To access data in long-term retention, [run a search job](../logs/search-jobs.md).
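
As a sketch of how table-level retention might be configured with Azure PowerShell (placeholder names; supported values depend on the table's plan):

```powershell
# Keep 90 days of interactive retention and two years of total retention on the AzureActivity table.
# The difference between the two values becomes long-term retention.
Update-AzOperationalInsightsTable `
    -ResourceGroupName "my-resource-group" `
    -WorkspaceName "my-workspace" `
    -TableName "AzureActivity" `
    -RetentionInDays 90 `
    -TotalRetentionInDays 730
```
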
### Ingestion-time transformations Reduce costs and analysis effort by using data collection rules to [filter out and transform data before ingestion](../essentials/data-collection-transformations.md) based on the schema you define for your custom table.
+> [!NOTE]
+> Tables with the [Auxiliary table plan](data-platform-logs.md) do not currently support data transformation. For more details, see [Auxiliary table plan public preview limitations](create-custom-table-auxiliary.md#public-preview-limitations).
+ ## View table properties > [!NOTE]
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{
|Name | Type | Description | | | | |
-|properties.plan | string | The table plan. Either `Analytics` or `Basic`. |
-|properties.retentionInDays | integer | The table's data retention in days. In `Basic Logs`, the value is eight days, fixed. In `Analytics Logs`, the value is between four and 730 days.|
-|properties.totalRetentionInDays | integer | The table's data retention that also includes the archive period.|
-|properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).|
+|properties.plan | string | The table plan. `Analytics`, `Basic`, or `Auxiliary`. |
+|properties.retentionInDays | integer | The table's interactive retention in days. For `Basic` and `Auxiliary`, this value is 30 days. For `Analytics`, the value is between four and 730 days.|
+|properties.totalRetentionInDays | integer | The table's total data retention, including interactive and long-term retention.|
+|properties.archiveRetentionInDays|integer|The table's long-term retention period (read-only, calculated).|
|properties.lastPlanModifiedDate|String|Last time when the plan was set for this table. Null if no change was ever done from the default settings (read-only). **Sample request**
Use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalins
Learn how to: -- [Set a table's log data plan](../logs/basic-logs-configure.md)
+- [Set a table's log data plan](../logs/logs-table-plans.md)
- [Add custom tables and columns](../logs/create-custom-table.md)-- [Set retention and archive](../logs/data-retention-archive.md)
+- [Configure data retention](../logs/data-retention-configure.md)
- [Design a workspace architecture](../logs/workspace-design.md)
azure-monitor Manage Table Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-table-access.md
+
+ Title: Manage read access to tables in a Log Analytics workspace
+description: This article explains how to manage read access to specific tables in a Log Analytics workspace.
++++ Last updated : 07/22/2024++++
+# Manage table-level read access in a Log Analytics workspace
+
+Table-level access settings let you grant specific users or groups read-only permission to data in a table. Users with table-level read access can read data from the specified table in both the workspace and the resource context.
+
+This article describes two ways to manage table-level read access.
+
+> [!NOTE]
+> We recommend using the first method described here, which is currently in **preview**. During preview, the recommended method described here does not apply to Microsoft Sentinel Detection Rules, which might have access to more tables than intended.
+Alternatively, you can use the [legacy method of setting table-level read access](#legacy-method-of-setting-table-level-read-access), which has some limitations related to custom log tables. Before using either method, see [Table-level access considerations and limitations](#table-level-access-considerations-and-limitations).
+
+## Set table-level read access (preview)
+
+Granting table-level read access involves assigning a user two roles:
+
+- At the workspace level - a custom role that provides limited permissions to read workspace details and run a query in the workspace, but not to read data from any tables.
+- At the table level - a **Reader** role, scoped to the specific table.
+
+**To grant a user or group limited permissions to the Log Analytics workspace:**
+
+1. Create a [custom role](../../role-based-access-control/custom-roles.md) at the workspace level to let users read workspace details and run a query in the workspace, without providing read access to data in any tables:
+
+ 1. Navigate to your workspace and select **Access control (IAM)** > **Roles**.
+
+ 1. Right-click the **Reader** role and select **Clone**.
+
+ :::image type="content" source="media/manage-access/access-control-clone-role.png" alt-text="Screenshot that shows the Roles tab of the Access control screen with the clone button highlighted for the Reader role." lightbox="media/manage-access/access-control-clone-role.png":::
+
+ This opens the **Create a custom role** screen.
+
+ 1. On the **Basics** tab of the screen:
+ 1. Enter a **Custom role name** value and, optionally, provide a description.
+ 1. Set **Baseline permissions** to **Start from scratch**.
+
+ :::image type="content" source="media/manage-access/manage-access-create-custom-role.png" alt-text="Screenshot that shows the Basics tab of the Create a custom role screen with the Custom role name and Description fields highlighted." lightbox="media/manage-access/manage-access-create-custom-role.png":::
+
+ 1. Select the **JSON** tab > **Edit**:
+
+ 1. In the `"actions"` section, add these actions:
+
+ ```json
+ "Microsoft.OperationalInsights/workspaces/read",
+ "Microsoft.OperationalInsights/workspaces/query/read"
+ ```
+
+ 1. In the `"notActions"` section, add:
+
+ ```json
+ "Microsoft.OperationalInsights/workspaces/sharedKeys/read"
+ ```
+
+ :::image type="content" source="media/manage-access/manage-access-create-custom-role-json.png" alt-text="Screenshot that shows the JSON tab of the Create a custom role screen with the actions section of the JSON file highlighted." lightbox="media/manage-access/manage-access-create-custom-role-json.png":::
+
+ 1. Select **Save** > **Review + Create** at the bottom of the screen, and then **Create** on the next page.
+
+1. Assign your custom role to the relevant user:
+ 1. Select **Access control (IAM)** > **Add** > **Add role assignment**.
+
+ :::image type="content" source="media/manage-access/manage-access-add-role-assignment-button.png" alt-text="Screenshot that shows the Access control screen with the Add role assignment button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-button.png":::
+
+ 1. Select the custom role you created and select **Next**.
+
+ :::image type="content" source="media/manage-access/manage-access-add-role-assignment-screen.png" alt-text="Screenshot that shows the Add role assignment screen with a custom role and the Next button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-screen.png":::
++
+ This opens the **Members** tab of the **Add custom role assignment** screen.
+
+ 1. Click **+ Select members** to open the **Select members** screen.
+
+ :::image type="content" source="media/manage-access/manage-access-add-role-assignment-select-members.png" alt-text="Screenshot that shows the Select members screen." lightbox="media/manage-access/manage-access-add-role-assignment-select-members.png":::
+
+ 1. Search for and select a user and click **Select**.
+ 1. Select **Review and assign**.
+
+The user can now read workspace details and run a query, but can't read data from any tables.
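
If you prefer to script this step, the following Azure PowerShell sketch creates a comparable limited workspace role and assigns it. The role name, user, and resource IDs are placeholders; the portal flow above remains the documented approach.

```powershell
# Clone the built-in Reader role as a starting point for the limited workspace role.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "Log Analytics limited workspace reader"   # placeholder name
$role.Description = "Read workspace details and run queries without reading table data."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/read")
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/query/read")
$role.NotActions.Clear()
$role.NotActions.Add("Microsoft.OperationalInsights/workspaces/sharedKeys/read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzRoleDefinition -Role $role

# A new role definition can take a few minutes to become available for assignment.
# Assign the new role to a user at the workspace scope.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Log Analytics limited workspace reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```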
+
+**To grant the user read access to a specific table:**
+
+1. From the **Log Analytics workspaces** menu, select **Tables**.
+1. Select the ellipsis ( **...** ) to the right of your table and select **Access control (IAM)**.
+
+ :::image type="content" source="media/manage-access/table-level-access-control.png" alt-text="Screenshot that shows the Log Analytics workspace table management screen with the table-level access control button highlighted." lightbox="media/manage-access/table-level-access-control.png":::
+
+1. On the **Access control (IAM)** screen, select **Add** > **Add role assignment**.
+1. Select the **Reader** role and select **Next**.
+1. Click **+ Select members** to open the **Select members** screen.
+1. Search for and select the user and click **Select**.
+1. Select **Review and assign**.
+
+The user can now read data from this specific table. Grant the user read access to other tables in the workspace, as needed.
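
The table-scoped assignment can also be scripted. A minimal Azure PowerShell sketch, assuming the _Heartbeat_ table and placeholder IDs:

```powershell
# Tables are child resources of the workspace, so the scope is the table's resource ID.
$tableScope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/tables/Heartbeat"

New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Reader" -Scope $tableScope
```

Assigning the role to a security group instead of an individual user keeps the number of assignments manageable as you add tables.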
+
+## Legacy method of setting table-level read access
+
+The legacy method of table-level access also uses [Azure custom roles](../../role-based-access-control/custom-roles.md) to let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](manage-access.md#access-control-mode) regardless of the user's [access mode](manage-access.md#access-mode).
+
+To define access to a particular table, create a [custom role](../../role-based-access-control/custom-roles.md):
+
+* Set the user permissions in the **Actions** section of the role definition.
+* Use `Microsoft.OperationalInsights/workspaces/query/*` to grant access to all tables.
+* To exclude access to specific tables when you use a wildcard in **Actions**, list the excluded tables in the **NotActions** section of the role definition.
+
+Here are examples of custom role actions to grant and deny access to specific tables.
+
+Grant access to the _Heartbeat_ and _AzureActivity_ tables:
+
+```
+"Actions": [
+ "Microsoft.OperationalInsights/workspaces/read",
+ "Microsoft.OperationalInsights/workspaces/query/read",
+ "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
+ "Microsoft.OperationalInsights/workspaces/query/AzureActivity/read"
+ ],
+```
+
+Grant access to only the _SecurityBaseline_ table:
+
+```
+"Actions": [
+ "Microsoft.OperationalInsights/workspaces/read",
+ "Microsoft.OperationalInsights/workspaces/query/read",
+ "Microsoft.OperationalInsights/workspaces/query/SecurityBaseline/read"
+],
+```
++
+Grant access to all tables except the _SecurityAlert_ table:
+
+```
+"Actions": [
+ "Microsoft.OperationalInsights/workspaces/read",
+ "Microsoft.OperationalInsights/workspaces/query/read",
+ "Microsoft.OperationalInsights/workspaces/query/*/read"
+],
+"notActions": [
+ "Microsoft.OperationalInsights/workspaces/query/SecurityAlert/read"
+],
+```
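
If you create these legacy roles from Azure PowerShell instead of a JSON template, the same action lists can be supplied through `New-AzRoleDefinition`. A sketch for the wildcard-plus-exclusion example above (subscription ID and role name are placeholders):

```powershell
# Build a legacy-style custom role that can query every table except SecurityAlert.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "Log Analytics table reader - no SecurityAlert"   # placeholder name
$role.Actions.Clear()
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/read")
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/query/read")
$role.Actions.Add("Microsoft.OperationalInsights/workspaces/query/*/read")
$role.NotActions.Clear()
$role.NotActions.Add("Microsoft.OperationalInsights/workspaces/query/SecurityAlert/read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzRoleDefinition -Role $role
```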
+
+### Limitations of the legacy method related to custom tables
+
+Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information).
+
+Using the legacy method of table-level access, you can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions:
+
+```
+"Actions": [
+ "Microsoft.OperationalInsights/workspaces/read",
+ "Microsoft.OperationalInsights/workspaces/query/read",
+ "Microsoft.OperationalInsights/workspaces/query/Tables.Custom/read"
+],
+```
+
+## Table-level access considerations and limitations
+
+- In the Log Analytics UI, users with table-level access can see the list of all tables in the workspace, but can only retrieve data from tables to which they have access.
+- The standard Reader or Contributor roles, which include the _\*/read_ action, override table-level access control and give users access to all log data.
+- A user with table-level access but no workspace-level permissions can access log data from the API but not from the Azure portal.
+- Administrators and owners of the subscription have access to all data types regardless of any other permission settings.
+- Workspace owners are treated like any other user for per-table access control.
+- Assign roles to security groups instead of individual users to reduce the number of assignments. This practice will also help you use existing group management tools to configure and verify access.
+
+## Next steps
+
+* Learn more about [managing access to Log Analytics workspaces](manage-access.md).
azure-monitor Migrate Splunk To Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md
The benefits of migrating to Azure Monitor include:
- Fully managed, Software as a Service (SaaS) platform with: - Automatic upgrades and scaling. - [Simple per-GB pay-as-you-go pricing](https://azure.microsoft.com/pricing/details/monitor/).
- - [Cost optimization and monitoring features](../../azure-monitor/best-practices-cost.md) and low-cost [Basic logs](../logs/basic-logs-configure.md).
+ - [Cost optimization and monitoring features](../../azure-monitor/best-practices-cost.md) and low-cost [Basic and Auxiliary table plans](../logs/data-platform-logs.md#table-plans).
- Cloud-native monitoring and observability, including: - [End-to-end, at-scale monitoring](../overview.md). - [Native monitoring of Azure resources](../essentials/platform-logs-overview.md).
The benefits of migrating to Azure Monitor include:
|Splunk offering| Product |Azure offering| |||| |Splunk Platform|<ul><li>Splunk Cloud Platform</li><li>Splunk Enterprise</li></ul>|[Azure Monitor Logs](../logs/data-platform-logs.md) is a centralized software as a service (SaaS) platform for collecting, analyzing, and acting on telemetry data generated by Azure and non-Azure resources and applications.|
-|Splunk Observability|<ul><li>Splunk Infrastructure Monitoring</li><li>Splunk Application Performance Monitoring</li><li>Splunk IT Service Intelligence</li></ul>|[Azure Monitor](../overview.md) is an end-to-end solution for collecting, analyzing, and acting on telemetry from your cloud, multicloud, and on-premises environments, built over a powerful data ingestion pipeline that's shared with Microsoft Sentinel. Azure Monitor offers enterprises a comprehensive solution for monitoring cloud, hybrid, and on-premises environments, with [network isolation](../logs/private-link-security.md), [resilience features and protection from data center failures](../logs/availability-zones.md), [reporting](../overview.md#insights), and [alerts and response](../overview.md#respond) capabilities.<br>Azure Monitor's built-in features include:<ul><li>[Azure Monitor Insights](../insights/insights-overview.md) - ready-to-use, curated monitoring experiences with pre-configured data inputs, searches, alerts, and visualizations.</li><li>[Application Insights](../app/app-insights-overview.md) - provides Application Performance Management (APM) for live web applications.</li><li>[Azure Monitor AIOps and built-in machine learning capabilities](../logs/aiops-machine-learning.md) - provide insights and help you troubleshoot issues and automate data-driven tasks, such as predicting capacity usage and autoscaling, identifying and analyzing application performance issues, and detecting anomalous behaviors in virtual machines, containers, and other resources.</li></ul> These features are free of installation fees.|
+|Splunk Observability|<ul><li>Splunk Infrastructure Monitoring</li><li>Splunk Application Performance Monitoring</li><li>Splunk IT Service Intelligence</li></ul>|[Azure Monitor](../overview.md) is an end-to-end solution for collecting, analyzing, and acting on telemetry from your cloud, multicloud, and on-premises environments, built over a powerful data ingestion pipeline that's shared with Microsoft Sentinel. Azure Monitor offers enterprises a comprehensive solution for monitoring cloud, hybrid, and on-premises environments, with [network isolation](../logs/private-link-security.md), [resilience features and protection from data center failures](../logs/availability-zones.md), [reporting](../overview.md#insights), and [alerts and response](../overview.md#respond) capabilities.<br>Azure Monitor's built-in features include:<ul><li>[Azure Monitor Insights](../insights/insights-overview.md) - ready-to-use, curated monitoring experiences with preconfigured data inputs, searches, alerts, and visualizations.</li><li>[Application Insights](../app/app-insights-overview.md) - provides Application Performance Management (APM) for live web applications.</li><li>[Azure Monitor AIOps and built-in machine learning capabilities](../logs/aiops-machine-learning.md) - provide insights and help you troubleshoot issues and automate data-driven tasks, such as predicting capacity usage and autoscaling, identifying and analyzing application performance issues, and detecting anomalous behaviors in virtual machines, containers, and other resources.</li></ul> These features are free of installation fees.|
|Splunk Security|<ul><li>Splunk Enterprise Security</li><li>Splunk Mission Control<br>Splunk SOAR</li></ul>|[Microsoft Sentinel](../../sentinel/overview.md) is a cloud-native solution that runs over the Azure Monitor platform to provide intelligent security analytics and threat intelligence across the enterprise.| ## Introduction to key concepts
The benefits of migrating to Azure Monitor include:
|Azure Monitor Logs|Similar Splunk concept|Description| |||| |[Log Analytics workspace](../logs/log-analytics-workspace-overview.md)|Namespace|A Log Analytics workspace is an environment in which you can collect log data from all Azure and non-Azure monitored resources. The data in the workspace is available for querying and analysis, Azure Monitor features, and other Azure services. Similar to a Splunk namespace, you can manage access to the data and artifacts, such as alerts and workbooks, in your Log Analytics workspace.<br/>[Design your Log Analytics workspace architecture](../logs/workspace-design.md) based on your needs - for example, split billing, regional data storage requirements, and resilience considerations.|
-|[Table management](../logs/manage-logs-tables.md)|Indexing|Azure Monitor Logs ingests log data into tables in a managed [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) database. During ingestion, the service automatically indexes and timestamps the data, which means you can store various types of data and access the data quickly using Kusto Query Language (KQL) queries.<br/>Use table properties to manage the table schema, data retention and archive, and whether to store the data for occasional auditing and troubleshooting or for ongoing analysis and use by features and services.<br/>For a comparison of Splunk and Azure Data Explorer data handling and querying concepts, see [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet). |
-|[Basic and Analytics log data plans](../logs/basic-logs-configure.md)| |Azure Monitor Logs offers two log data plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytics capabilities based on your needs.<br/>The **Analytics** plan makes log data available for interactive queries and use by features and services.<br/>The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance.|
-|[Archiving and quick access to archived data](../logs/data-retention-archive.md)|Data bucket states (hot, warm, cold, thawed), archiving, Dynamic Data Active Archive (DDAA)|The cost-effective archive option keeps your logs in your Log Analytics workspace and lets you access archived log data immediately, when you need it. Archive configuration changes are effective immediately because data isn't physically transferred to external storage. You can [restore archived data](../logs/restore.md) or run a [search job](../logs/search-jobs.md) to make a specific time range of archived data available for real-time analysis. |
+|[Table management](../logs/manage-logs-tables.md)|Indexing|Azure Monitor Logs ingests log data into tables in a managed [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) database. During ingestion, the service automatically indexes and timestamps the data, which means you can store various types of data and access the data quickly using Kusto Query Language (KQL) queries.<br/>Use table properties to manage the table schema, data retention, and whether to store the data for occasional auditing and troubleshooting or for ongoing analysis and use by features and services.<br/>For a comparison of Splunk and Azure Data Explorer data handling and querying concepts, see [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet). |
+|[Analytics, Basic, and Auxiliary table plans](../logs/data-platform-logs.md#table-plans)| |Azure Monitor Logs offers three table plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytics capabilities based on your needs.<br/>The **Analytics** plan makes log data available for interactive queries and use by features and services.<br/>The **Basic** plan lets you ingest and retain logs at a reduced cost for troubleshooting and incident response.<br/>The **Auxiliary** plan is a low-cost way to ingest and retain low-touch data, such as verbose logs, and data required for auditing and compliance.|
+|[Long-term retention](../logs/data-retention-configure.md)|Data bucket states (hot, warm, cold, thawed), archiving, Dynamic Data Active Archive (DDAA)|The cost-effective long-term retention option keeps your logs in your Log Analytics workspace and lets you access this data immediately, when you need it. Retention configuration changes are effective immediately because data isn't physically transferred to external storage. You can [restore data in long-term retention](../logs/restore.md) or run a [search job](../logs/search-jobs.md) to make a specific time range of data available for real-time analysis. |
|[Access control](../logs/manage-access.md)|Role-based user access, permissions|Define which people and resources can read, write, and perform operations on specific resources using [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md). A user with access to a resource has access to the resource's logs.<br/>Azure facilitates data security and access management with features such as [built-in roles](../../role-based-access-control/built-in-roles.md), [custom roles](../../role-based-access-control/custom-roles.md), [inheritance of role permission](../../role-based-access-control/scope-overview.md), and [audit history](/entr#set-table-level-read-access) for granular access control to specific data types. | |[Data transformations](../essentials/data-collection-transformations.md)|Transforms, field extractions|Transformations let you filter or modify incoming data before it's sent to a Log Analytics workspace. Use transformations to remove sensitive data, enrich data in your Log Analytics workspace, perform calculations, and filter out data you don't need to reduce data costs.| |[Data collection rules](../essentials/data-collection-rule-overview.md)|Data inputs, data pipeline|Define which data to collect, how to transform that data, and where to send the data.|
Your current usage in Splunk will help you decide which [pricing tier](../logs/c
## 2. Set up a Log Analytics workspace
-Your Log Analytics workspace is where you collect log data from all of your monitored resources. You can retain data in a Log Analytics workspace for up to seven years. Low-cost data archiving within the workspace lets you access archived data quickly and easily when you need it, without the overhead of managing an external data store.
+Your Log Analytics workspace is where you collect log data from all of your monitored resources. You can retain data in a Log Analytics workspace for up to seven years. Low-cost long-term retention within the workspace lets you access this data quickly and easily when you need it, without the overhead of managing an external data store.
We recommend collecting all of your log data in a single Log Analytics workspace for ease of management. If you're considering using multiple workspaces, see [Design a Log Analytics workspace architecture](../logs/workspace-design.md).
To set up a Log Analytics workspace for data collection:
1. [Pricing tier](../logs/change-pricing-tier.md). 1. [Link your Log Analytics workspace to a dedicated cluster](../logs/availability-zones.md) to take advantage of advanced capabilities, if you're eligible, based on pricing tier. 1. [Daily cap](../logs/daily-cap.md).
- 1. [Data retention](../logs/data-retention-archive.md).
+ 1. [Data retention](../logs/data-retention-configure.md).
1. [Network isolation](../logs/private-link-security.md). 1. [Access control](../logs/manage-access.md). 1. Use [table-level configuration settings](../logs/manage-logs-tables.md) to:
- 1. [Define each table's log data plan](../logs/basic-logs-configure.md).
+ 1. [Define each table's log data plan](../logs/logs-table-plans.md).
The default log data plan is Analytics, which lets you take advantage of Azure Monitor's rich monitoring and analytics capabilities.
- 1. [Set a data retention and archiving policy for specific tables](../logs/data-retention-archive.md), if you need them to be different from the workspace-level data retention and archiving policy.
+ 1. [Set a data retention and archiving policy for specific tables](../logs/data-retention-configure.md), if you need them to be different from the workspace-level data retention and archiving policy.
1. [Modify the table schema](../logs/create-custom-table.md) based on your data model. ## 3. Migrate Splunk artifacts to Azure Monitor
This table lists Splunk artifacts and links to guidance for setting up the equiv
||| |Alerts|[Alert rules](../alerts/alerts-create-new-alert-rule.md)| |Alert actions|[Action groups](../alerts/action-groups.md)|
-|Infrastructure Monitoring|[Azure Monitor Insights](../insights/insights-overview.md) are a set of ready-to-use, curated monitoring experiences with pre-configured data inputs, searches, alerts, and visualizations to get you started analyzing data quickly and effectively. |
+|Infrastructure Monitoring|[Azure Monitor Insights](../insights/insights-overview.md) are a set of ready-to-use, curated monitoring experiences with preconfigured data inputs, searches, alerts, and visualizations to get you started analyzing data quickly and effectively. |
|Dashboards|[Workbooks](../visualize/workbooks-overview.md)|
-|Lookups|Azure Monitor provides various ways to enrich data, including:<br>- [Data collection rules](../essentials/data-collection-rule-overview.md), which let you send data from multiple sources to a Log Analytics workspace, and perform calculations and transformations before ingesting the data.<br>- KQL operators, such as the [join operator](/azure/data-explorer/kusto/query/joinoperator), which combines data from different tables, and the [externaldata operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor), which returns data from external storage.<br>- Integration with services, such as [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md) or [Azure Event Hubs](../../event-hubs/event-hubs-about.md), to apply advanced machine learning and stream in additional data.|
+|Lookups|Azure Monitor provides various ways to enrich data, including:<br>- [Data collection rules](../essentials/data-collection-rule-overview.md), which let you send data from multiple sources to a Log Analytics workspace, and perform calculations and transformations before ingesting the data.<br>- KQL operators, such as the [join operator](/azure/data-explorer/kusto/query/joinoperator), which combines data from different tables, and the [externaldata operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor), which returns data from external storage.<br>- Integration with services, such as [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md) or [Azure Event Hubs](../../event-hubs/event-hubs-about.md), to apply advanced machine learning and stream in more data.|
|Namespaces|You can grant or limit permission to artifacts in Azure Monitor based on [access control](../logs/manage-access.md) you define on your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) or [Azure resource groups](../../azure-resource-manager/management/manage-resource-groups-portal.md).| |Permissions|[Access management](../logs/manage-access.md)| |Reports|Azure Monitor offers a range of options for analyzing, visualizing, and sharing data, including:<br>- [Integration with Grafana](../visualize/grafana-plugin.md)<br>- [Insights](../insights/insights-overview.md)<br>- [Workbooks](../visualize/workbooks-overview.md)<br>- [Dashboards](../visualize/tutorial-logs-dashboards.md)<br>- [Integration with Power BI](../logs/log-powerbi.md)<br>- [Integration with Excel](../logs/log-excel.md)|
azure-monitor Personal Data Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/personal-data-mgmt.md
Last updated 06/28/2022
-# Customer intent: As an Azure Monitor admin user, I want to understand how to manage personal data in logs Azure Monitor collects.
+# Customer intent: As an Azure Monitor admin user, I want to understand how to manage personal data in logs that Azure Monitor collects.
In this article, _log data_ refers to data sent to a Log Analytics workspace, wh
[!INCLUDE [gdpr-dsr-and-stp-note](~/reusable-content/ce-skilling/azure/includes/gdpr-dsr-and-stp-note.md)]
+## Permissions required
+
+| Action | Permissions required |
+|:-|:-|
+| Purge data from a Log Analytics workspace | `Microsoft.OperationalInsights/workspaces/purge/action` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example |
+ ## Strategy for personal data handling
We __strongly__ recommend you restructure your data collection policy to stop co
Use the [Log Analytics query API](/rest/api/loganalytics/dataaccess/query) or the [Application Insights query API](/rest/api/application-insights/query) for view and export data requests.
+> [!NOTE]
+> You can't use the Log Analytics query API on tables that have the [Basic and Auxiliary table plans](data-platform-logs.md#table-plans). Instead, use the [Log Analytics /search API](basic-logs-query.md#run-a-query-on-a-basic-or-auxiliary-table).
+ You need to implement the logic for converting the data to an appropriate format for delivery to your users. [Azure Functions](https://azure.microsoft.com/services/functions/) is a great place to host such logic. + ### Delete > [!WARNING]
To manage system resources, we limit purge requests to 50 requests an hour. Batc
x-ms-status-location: https://management.azure.com/subscriptions/[SubscriptionId]/resourceGroups/[ResourceGroupName]/providers/Microsoft.OperationalInsights/workspaces/[WorkspaceName]/operations/purge-[PurgeOperationId]?api-version=2015-03-20 ```
+> [!NOTE]
+> You can't purge data from tables that have the [Basic and Auxiliary table plans](data-platform-logs.md#table-plans).
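
For illustration, a purge request can be submitted with `Invoke-AzRestMethod`. The resource IDs, table, and filter values below are placeholders, and the API version shown simply matches the sample response above; check the workspace purge reference for the current version.

```powershell
# Build a purge request for Heartbeat records older than a given date (illustrative filter).
$body = @{
    table   = "Heartbeat"
    filters = @(
        @{ column = "TimeGenerated"; operator = "<"; value = "2024-01-01T00:00:00Z" }
    )
} | ConvertTo-Json -Depth 4

# Submit the purge request; the response includes an operation ID you can poll for status.
Invoke-AzRestMethod -Method POST -Payload $body -Path (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/purge?api-version=2015-03-20")
```
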
+ #### Application data * The [Components - Purge POST API](/rest/api/application-insights/components/purge) takes an object specifying parameters of data to delete and returns a reference GUID.
azure-monitor Powershell Workspace Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/powershell-workspace-configuration.md
- Title: Configure a Log Analytics workspace in Azure Monitor using PowerShell
-description: PowerShell samples show how to configure a Log Analytics workspace in Azure Monitor to collect data from various data sources.
--- Previously updated : 07/02/2023---
-#Customer intent: As a DevOps engineer or IT expert setting up a workspace, I want samples showing how to run PowerShell commands to collect data from various data sources into a workspace.
-
-# Configure a Log Analytics workspace in Azure Monitor using PowerShell
-
-The following sample script configures the workspace to collect multiple types of logs from virtual machines by using the [Log Analytics agent](../agents/log-analytics-agent.md).
-
-This script performs the following functions:
-
-1. Create a workspace.
-1. Enable collection of IIS logs from computers with the Windows agent installed.
-1. Collect Logical Disk perf counters from Linux computers (% Used Inodes; Free Megabytes; % Used Space; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec).
-1. Collect Syslog events from Linux computers.
-1. Collect Error and Warning events from the Application Event Log from Windows computers.
-1. Collect Memory Available Mbytes performance counter from Windows computers.
-1. Collect a custom log.
-
-```powershell
-$ResourceGroup = "my-resource-group"
-$WorkspaceName = "log-analytics-" + (Get-Random -Maximum 99999) # workspace names need to be unique in resource group - Get-Random helps with this for the example code
-$Location = "westeurope"
-
-# Create the resource group if needed
-try {
- Get-AzResourceGroup -Name $ResourceGroup -ErrorAction Stop
-} catch {
- New-AzResourceGroup -Name $ResourceGroup -Location $Location
-}
-
-# Create the workspace
-New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku PerGB2018 -ResourceGroupName $ResourceGroup
-
-# Enable IIS Log Collection using agent
-Enable-AzOperationalInsightsIISLogCollection -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName
-
-# Linux Perf
-New-AzOperationalInsightsLinuxPerformanceObjectDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -ObjectName "Logical Disk" -InstanceName "*" -CounterNames @("% Used Inodes", "Free Megabytes", "% Used Space", "Disk Transfers/sec", "Disk Reads/sec", "Disk Writes/sec") -IntervalSeconds 20 -Name "Example Linux Disk Performance Counters"
-Enable-AzOperationalInsightsLinuxPerformanceCollection -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName
-
-# Linux Syslog
-New-AzOperationalInsightsLinuxSyslogDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -Facility "kern" -CollectEmergency -CollectAlert -CollectCritical -CollectError -CollectWarning -Name "Example kernel syslog collection"
-Enable-AzOperationalInsightsLinuxSyslogCollection -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName
-
-# Windows Event
-New-AzOperationalInsightsWindowsEventDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -EventLogName "Application" -CollectErrors -CollectWarnings -Name "Example Application Event Log"
-
-# Windows Perf
-New-AzOperationalInsightsWindowsPerformanceCounterDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -ObjectName "Memory" -InstanceName "*" -CounterName "Available MBytes" -IntervalSeconds 20 -Name "Example Windows Performance Counter"
-
-# Custom Logs
-
-New-AzOperationalInsightsCustomLogDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -CustomLogRawJson "$CustomLog" -Name "Example Custom Log Collection"
-
-```
-
-> [!NOTE]
-> The format for the `CustomLogRawJson` parameter that defines the configuration for a custom log can be complex. Use [Get-AzOperationalInsightsDataSource](/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource) to retrieve the configuration for an existing custom log. The `Properties` property is the configuration required for the `CustomLogRawJson` parameter.
-
-In the preceding example, `regexDelimiter` was defined as `\\n` for newline. The log delimiter might also be a timestamp. The following table lists the formats that are supported.
-
-| Format | JSON RegEx format uses two `\\` for every `\` in a standard RegEx, so if testing in a RegEx app, reduce `\\` to `\` |
-| | |
-| `YYYY-MM-DD HH:MM:SS` | `((\\d{2})|(\\d{4}))-([0-1]\\d)-(([0-3]\\d)|(\\d))\\s((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9]` |
-| `M/D/YYYY HH:MM:SS AM/PM` | `(([0-1]\\d)|[0-9])/(([0-3]\\d)|(\\d))/((\\d{2})|(\\d{4}))\\s((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9]\\s(AM|PM|am|pm)` |
-| `dd/MMM/yyyy HH:MM:SS` | `(([0-2][1-9]|[3][0-1])\\/(Jan|Feb|Mar|May|Apr|Jul|Jun|Aug|Oct|Sep|Nov|Dec|jan|feb|mar|may|apr|jul|jun|aug|oct|sep|nov|dec)\\/((19|20)[0-9][0-9]))\\s((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9])` |
-| `MMM dd yyyy HH:MM:SS` | `(((?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Sept|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)).*?((?:(?:[0-2]?\\d{1})|(?:[3][01]{1})))(?![\\d]).*?((?:(?:[1]{1}\\d{1}\\d{1}\\d{1})|(?:[2]{1}\\d{3})))(?![\\d]).*?((?:(?:[0-1][0-9])|(?:[2][0-3])|(?:[0-9])):(?:[0-5][0-9])(?::[0-5][0-9])?(?:\\s?(?:am|AM|pm|PM))?))` |
-| `yyMMdd HH:mm:ss` | `([0-9]{2}([0][1-9]|[1][0-2])([0-2][0-9]|[3][0-1])\\s\\s?([0-1]?[0-9]|[2][0-3]):[0-5][0-9]:[0-5][0-9])` |
-| `ddMMyy HH:mm:ss` | `(([0-2][0-9]|[3][0-1])([0][1-9]|[1][0-2])[0-9]{2}\\s\\s?([0-1]?[0-9]|[2][0-3]):[0-5][0-9]:[0-5][0-9])` |
-| `MMM d HH:mm:ss` | `(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\\s\\s?([0]?[1-9]|[1-2][0-9]|[3][0-1])\\s([0-1]?[0-9]|[2][0-3]):([0-5][0-9]):([0-5][0-9])` |
-| `MMM d HH:mm:ss` <br> two spaces after MMM | `(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\\s\\s([0]?[1-9]|[1-2][0-9]|[3][0-1])\\s([0][0-9]|[1][0-2]):([0-5][0-9]):([0-5][0-9])` |
-| `MMM d HH:mm:ss` | `(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\\s([0]?[1-9]|[1-2][0-9]|[3][0-1])\\s([0][0-9]|[1][0-2]):([0-5][0-9]):([0-5][0-9])` |
-| `dd/MMM/yyyy:HH:mm:ss +zzzz` <br> where + is + or a - <br> where zzzz time offset | `(([0-2][1-9]|[3][0-1])\\/(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\\/((19|20)[0-9][0-9]):([0][0-9]|[1][0-2]):([0-5][0-9]):([0-5][0-9])\\s[\\+|\\-][0-9]{4})` |
-| `yyyy-MM-ddTHH:mm:ss` <br> The T is a literal letter T | `((\\d{2})|(\\d{4}))-([0-1]\\d)-(([0-3]\\d)|(\\d))T((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9]` |
-
-## Troubleshooting
-When you create a workspace that was deleted in the last 14 days and is in a [soft-delete state](../logs/delete-workspace.md#delete-a-workspace-into-a-soft-delete-state), the operation could have a different outcome depending on your workspace configuration. For example:
--- If you provide the same workspace name, resource group, subscription, and region as in the deleted workspace, your workspace will be recovered. The recovered workspace includes data, configuration, and connected agents.-- A workspace name must be unique per resource group. If you use a workspace name that already exists and is also in soft delete in your resource group, you'll get an error. The error will state "The workspace name 'workspace-name' is not unique" or "conflict." To override the soft delete, permanently delete your workspace, and create a new workspace with the same name, follow these steps to recover the workspace first and then perform a permanent delete:-
- * [Recover](../logs/delete-workspace.md#recover-a-workspace-in-a-soft-delete-state) your workspace.
- * [Permanently delete](../logs/delete-workspace.md#delete-a-workspace-permanently) your workspace.
- * Create a new workspace by using the same workspace name.
-
-## Next steps
-[Review Log Analytics PowerShell cmdlets](/powershell/module/az.operationalinsights/) for more information on using PowerShell for configuration of Log Analytics.
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
Last updated 10/01/2022
# Restore logs in Azure Monitor The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. This article describes how to restore data, query that data, and then dismiss the data when you're done.
+> [!NOTE]
+> Tables with the [Auxiliary table plan](data-platform-logs.md) do not support data restore. Use a [search job](search-jobs.md) to retrieve data that's in long-term retention from an Auxiliary table.
+ ## Permissions
-To restore data from an archived table, you need `Microsoft.OperationalInsights/workspaces/tables/write` and `Microsoft.OperationalInsights/workspaces/restoreLogs/write` permissions to the Log Analytics workspace, for example, as provided by the [Log Analytics Contributor built-in role](../logs/manage-access.md#built-in-roles).
+To restore data from long-term retention, you need `Microsoft.OperationalInsights/workspaces/tables/write` and `Microsoft.OperationalInsights/workspaces/restoreLogs/write` permissions to the Log Analytics workspace, for example, as provided by the [Log Analytics Contributor built-in role](../logs/manage-access.md#built-in-roles).
## When to restore logs
-Use the restore operation to query data in [Archived Logs](data-retention-archive.md). You can also use the restore operation to run powerful queries within a specific time range on any Analytics table when the log queries you run on the source table can't complete within the log query timeout of 10 minutes.
+Use the restore operation to query data in [long-term retention](data-retention-configure.md). You can also use the restore operation to run powerful queries within a specific time range on any Analytics table when the log queries you run on the source table can't complete within the log query timeout of 10 minutes.
> [!NOTE]
-> Restore is one method for accessing archived data. Use restore to run queries against a set of data within a particular time range. Use [Search jobs](search-jobs.md) to access data based on specific criteria.
+> Restore is one method for accessing data in long-term retention. Use restore to run queries against a set of data within a particular time range. Use [Search jobs](search-jobs.md) to access data based on specific criteria.
## What does restore do? When you restore data, you specify the source table that contains the data you want to query and the name of the new destination table to be created.
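
For illustration, a restore is created by defining a destination table whose name ends with `_RST`. The following `Invoke-AzRestMethod` sketch restores one week of the _Usage_ table; the resource IDs and times are placeholders, and the API version is an assumption - check the sample requests in this article for the current value.

```powershell
# Restore one week of the Usage table into a new table named Usage_RST.
$body = @{
    properties = @{
        restoredLogs = @{
            sourceTable      = "Usage"
            startRestoreTime = "2024-06-01T00:00:00Z"
            endRestoreTime   = "2024-06-08T00:00:00Z"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Payload $body -Path (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/tables/Usage_RST?api-version=2021-12-01-preview")
```
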
Here are some examples to illustrate data restore cost calculations:
## Next steps -- [Learn more about data retention and archiving data.](data-retention-archive.md)
+- [Learn more about data retention.](data-retention-configure.md)
-- [Learn about Search jobs, which is another method for retrieving archived data.](search-jobs.md)
+- [Learn about Search jobs, which is another method for retrieving data from long-term retention.](search-jobs.md)
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Title: Run search jobs in Azure Monitor description: Search jobs are asynchronous log queries in Azure Monitor that make results available as a table for further analytics. Previously updated : 05/30/2024 Last updated : 07/22/2024
-# Customer intent: As a data scientist or workspace administrator, I want an efficient way to search through large volumes of data in a table, including archived and basic logs.
+# Customer intent: As a data scientist or workspace administrator, I want an efficient way to search through large volumes of data in a table, including data in long-term retention.
# Run search jobs in Azure Monitor
Search jobs are asynchronous queries that fetch records into a new search table
Use a search job when the log query timeout of 10 minutes isn't sufficient to search through large volumes of data or if you're running a slow query.
-Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to:
+Search jobs also let you retrieve records from [long-term retention](data-retention-configure.md) and [tables with the Basic and Auxiliary plans](data-platform-logs.md#table-plans) into a new Analytics table where you can take advantage of Azure Monitor Logs' full analytics capabilities. In this way, running a search job can be an alternative to:
-* [Restoring data from Archived Logs](restore.md) for a specific time range.<br/>
- Use restore when you have a temporary need to run many queries on a large volume of data.
+- [Restoring data from long-term retention](restore.md) for a specific time range.
-* Querying Basic Logs directly and paying for each query.<br/>
- To determine which alternative is more cost-effective, compare the cost of querying Basic Logs with the cost of running a search job and storing the search job results.
+- Querying Basic and Auxiliary tables directly and paying for each query.<br/>
+ To determine which alternative is more cost-effective, compare the cost of querying Basic and Auxiliary tables with the cost of running a search job and storing the search job results.
## What does a search job do? A search job sends its results to a new table in the same workspace as the source data. The results table is available as soon as the search job begins, but it may take time for results to begin to appear.
-The search job results table is an [Analytics table](../logs/basic-logs-configure.md) that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this value after the table is created.
+The search job results table is an [Analytics table](../logs/logs-table-plans.md) that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-configure.md) set for the workspace, but you can modify this value after the table is created.
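
For illustration, a search job is created by defining a destination table whose name ends with `_SRCH`, with the query and time range in the request body. The sketch below uses `Invoke-AzRestMethod` with placeholder resource IDs; the API version is an assumption - see this article's sample requests for the current value.

```powershell
# Run a search job over Syslog and send the results to a new table named ErrorSearch_SRCH.
$body = @{
    properties = @{
        searchResults = @{
            query           = "Syslog | where SyslogMessage has 'error'"
            startSearchTime = "2024-06-01T00:00:00Z"
            endSearchTime   = "2024-06-08T00:00:00Z"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Payload $body -Path (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/tables/ErrorSearch_SRCH?api-version=2021-12-01-preview")
```
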
The search results table schema is based on the source table schema and the specified query. The following other columns help you track the source records:
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pr
## Next steps -- [Learn more about data retention and archiving data.](data-retention-archive.md)-- [Learn about restoring data, which is another method for retrieving archived data.](restore.md)-- [Learn about directly querying Basic Logs.](basic-logs-query.md)
+- [Learn more about data retention.](data-retention-configure.md)
+- [Learn about restoring data, which is another method for retrieving data from long-term retention.](restore.md)
+- [Learn about directly querying Basic and Auxiliary tables.](basic-logs-query.md)
azure-monitor Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/summary-rules.md
This article describes how summary rules work and how to define and view summary
## How summary rules work
-Summary rules perform batch processing directly in your Log Analytics workspace. The summary rule aggregates chunks of data, defined by bin size, based on a KQL query, and reingests the summarized results into a custom table with an [Analytics log plan](basic-logs-configure.md) in your Log Analytics workspace.
+Summary rules perform batch processing directly in your Log Analytics workspace. The summary rule aggregates chunks of data, defined by bin size, based on a KQL query, and reingests the summarized results into a custom table with an [Analytics log plan](logs-table-plans.md) in your Log Analytics workspace.
:::image type="content" source="media/summary-rules/ingestion-flow.png" alt-text="A diagram that shows how data is ingested into a Log Analytics workspace and is aggregated and reingested into the workspace by using a summary rule." lightbox="media/summary-rules/ingestion-flow.png":::
Here's the aggregated data that the summary rule sends to the destination table:
:::image type="content" source="media/summary-rules/summary-rules-aggregated-logs.png" alt-text="Screenshot that aggregated data that the summary rules sends to the destination table." lightbox="media/summary-rules/summary-rules-aggregated-logs.png":::
-Instead of logging hundreds of similar entries within an hour, the destination table shows the count of each unique entry, as defined in the KQL query. Set the [Basic data plan](basic-logs-configure.md) on the `ContainerLogsV2` table for cheap retention of the raw data, and use the summarized data in the destination table for your analysis needs.
+Instead of logging hundreds of similar entries within an hour, the destination table shows the count of each unique entry, as defined in the KQL query. Set the [Basic data plan](logs-table-plans.md) on the `ContainerLogsV2` table for cheap retention of the raw data, and use the summarized data in the destination table for your analysis needs.
## Permissions required
Instead of logging hundreds of similar entries within an hour, the destination t
## Pricing model
-There is no additional cost for Summary rules. You only pay for the query and the ingestion of results to the destination table, based on the table plan used in query:
+There's no extra cost for Summary rules. You only pay for the query and the ingestion of results to the destination table, based on the table plan of the source table on which you run the query:
| Source table plan | Query cost | Summary results ingestion cost | | | | | | Analytics | No cost | Analytics ingested GB |
-| Basic | Scanned GB | Analytics ingested GB |
+| Basic and Auxiliary | Scanned GB | Analytics ingested GB |
For example, the cost calculation for an hourly rule that returns 100 records per bin is: | Source table plan | Monthly price calculation | | | | Analytics | Ingestion price x record size x number of records x 24 hours x 30 days |
-| Basic | Scanned GB price x scanned size + Ingestion price x record size x number of records x 24 hours x 30 days |
+| Basic and Auxiliary | Scanned GB price x scanned size + Ingestion price x record size x number of records x 24 hours x 30 days |
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). - ## Create or update a summary rule Before you create a rule, experiment with the query in [Log Analytics](log-analytics-overview.md). Verify that the query doesn't reach or near the query limit. Check that the query produces the intended schema and expected results. If the query is close to the query limits, consider using a smaller `binSize` to process less data per bin. You can also modify the query to return fewer records or remove fields with higher volume.
This table describes the summary rule parameters:
| `query` | [Kusto Query Language (KQL) query](get-started-queries.md) | Defines the query to execute in the rule. You don't need to specify a time range because the `binSize` parameter determines the aggregation interval - for example, `02:00 to 03:00` if `"binSize": 60`. If you add a time filter in the query, the time range used in the query is the intersection between the filter and the bin size. | | `destinationTable` | `tablename_CL` | Specifies the name of the destination custom log table. The name value must have the suffix `_CL`. Azure Monitor creates the table in the workspace, if it doesn't already exist, based on the query you set in the rule. If the table already exists in the workspace, Azure Monitor adds any new columns introduced in the query. <br><br> If the summary results include a reserved column name - such as `TimeGenerated`, `_IsBillable`, `_ResourceId`, `TenantId`, or `Type` - Azure Monitor appends the `_Original` prefix to the original fields to preserve their original values.| | `binDelay` (optional) | Integer (minutes) | Sets a time to delay before bin execution for late arriving data, also known as [ingestion latency](data-ingestion-time.md). The delay allows for most data to arrive and for service load distribution. The default delay is from three and a half minutes to 10% of the `binSize` value. <br><br> If you know that the data you query is typically ingested with delay, set the `binDelay` parameter with the known delay value or greater. For more information, see [Configure the aggregation timing](#configure-the-aggregation-timing).<br>In some cases, Azure Monitor might begin bin execution slightly after the set bin delay to ensure service reliability and query success.|
-| `binStartTime` (optional) | Datetime in<br>`%Y-%n-%eT%H:%M %Z` format | Specifies the date and time for the initial bin execution. The value can start at rule creation datetime minus the `binSize` value, or later and in whole hours. For example, if the datetime is `2023-12-03T12:13Z` and `binSize` is 1,440, the earliest valid `binStartTime` value is `2023-12-02T13:00Z`, and the aggregation includes data logged between 02T13:00 and 03T13:00. In this scenario, the rules start aggregating a 03T13:00 plus the default or specified delay. <br><br> The `binStartTime` parameter is useful in daily summary scenarios. Suppose you're located in the UTC-8 time zone and you create a daily rule at `2023-12-03T12:13Z`. You want the rule to complete before you start your day at 8:00 (00:00 UTC). Set the `binStartTime` parameter to `2023-12-02T22:00Z`. The first aggregation includes all data logged between 02T:06:00 and 03T:06:00 local time, and the rule runs at the same time daily. For more information, see [Configure the aggregation timing](#configure-the-aggregation-timing).<br><br> When you update rules, you can either: <br> - Use the existing `binStartTime` value or remove the `binStartTime` parameter, in which case execution continues based on the initial definition.<br> - Update the rule with a new `binStartTime` value to set a new datetime value. |
+| `binStartTime` (optional) | Datetime in<br>`%Y-%n-%eT%H:%M %Z` format | Specifies the date and time for the initial bin execution. The value can start at the rule creation datetime minus the `binSize` value, or later, and must be in whole hours. For example, if the datetime is `2023-12-03T12:13Z` and `binSize` is 1,440, the earliest valid `binStartTime` value is `2023-12-02T13:00Z`, and the aggregation includes data logged between 02T13:00 and 03T13:00. In this scenario, the rule starts aggregating at 03T13:00 plus the default or specified delay. <br><br> The `binStartTime` parameter is useful in daily summary scenarios. Suppose you're in the UTC-8 time zone and you create a daily rule at `2023-12-03T12:13Z`. You want the rule to complete before you start your day at 8:00 (00:00 UTC). Set the `binStartTime` parameter to `2023-12-02T22:00Z`. The first aggregation includes all data logged between 02T06:00 and 03T06:00 local time, and the rule runs at the same time daily. For more information, see [Configure the aggregation timing](#configure-the-aggregation-timing).<br><br> When you update rules, you can either: <br> - Use the existing `binStartTime` value or remove the `binStartTime` parameter, in which case execution continues based on the initial definition.<br> - Update the rule with a new `binStartTime` value to set a new datetime value. |
| `timeSelector` (optional) | `TimeGenerated` | Defines the timestamp field that Azure Monitor uses to aggregate data. For example, if you set `"binSize": 120`, you might get entries with a `TimeGenerated` value between `02:00` and `04:00`. |
If you don't need the summary results in the destination table, delete the rule
The destination table schema is defined when you create or update a summary rule. If the query in the summary rule includes operators that allow output schema expansion based on incoming data - for example, if the query uses the `arg_max(expression, *)` function - Azure Monitor doesn't add new columns to the destination table after you create or update the summary rule, and the output data that requires these columns will be dropped. To add the new fields to the destination table, [update the summary rule](#create-or-update-a-summary-rule) or [add a column to your table manually](create-custom-table.md#add-or-delete-a-custom-column).
-### Data for removed columns remains in workspace, subject to retention period
+### Data in removed columns remains in the workspace based on the table's retention settings
-When you remove columns in the query, the columns and data remain in the destination table and are subjected to the [retention period](data-retention-archive.md) defined on the table or workspace. If the removed columns aren't needed in destination table, [Update schema and remove columns](create-custom-table.md#add-or-delete-a-custom-column) accordingly. During the retention period, if you add columns with the same name, old data that hasn't reached retention policy, shows up.
+When you remove columns from the query, the columns and their data remain in the destination table, subject to the [retention period](data-retention-configure.md) defined on the table or workspace. If you don't need the removed columns in the destination table, [delete the columns from the table schema](create-custom-table.md#add-or-delete-a-custom-column). If you then add columns with the same name, any data that's not older than the retention period appears again.
## Related content
-- Learn more about [Azure Monitor Logs data plans](basic-logs-configure.md).
+- Learn more about [Azure Monitor Logs data plans](logs-table-plans.md).
- Walk through a [tutorial on using KQL mode in Log Analytics](../logs/log-analytics-tutorial.md).
- Access the complete [reference documentation for KQL](/azure/kusto/query/).
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
Last updated 05/30/2024
# Design a Log Analytics workspace architecture
-A single [Log Analytics workspace](log-analytics-workspace-overview.md) might be sufficient for many environments that use Azure Monitor and Microsoft Sentinel. But many organizations will create multiple workspaces to optimize costs and better meet different business requirements. This article presents a set of criteria for determining whether to use a single workspace or multiple workspaces. It also discusses the configuration and placement of those workspaces to meet your requirements while optimizing your costs.
+A single [Log Analytics workspace](log-analytics-workspace-overview.md) might be sufficient for many environments that use Azure Monitor and Microsoft Sentinel. But many organizations create multiple workspaces to optimize costs and better meet different business requirements. This article presents a set of criteria for determining whether to use a single workspace or multiple workspaces. It also discusses the configuration and placement of those workspaces to meet your requirements while optimizing your costs.
> [!NOTE] > This article discusses Azure Monitor and Microsoft Sentinel because many customers need to consider both in their design. Most of the decision criteria apply to both services. If you use only one of these services, you can ignore the other in your evaluation.
Here's a video about the fundamentals of Azure Monitor Logs and best practices a
> [!VIDEO https://www.youtube.com/embed/7RBp9j0P_Ao?cc_load_policy=1&cc_lang_pref=auto]
## Design strategy
-Your design should always start with a single workspace to reduce the complexity of managing multiple workspaces and in querying data from them. There are no performance limitations from the amount of data in your workspace. Multiple services and data sources can send data to the same workspace. As you identify criteria to create more workspaces, your design should use the fewest number that will match your requirements.
+Your design should always start with a single workspace to reduce the complexity of managing multiple workspaces and querying data from them. There are no performance limitations from the amount of data in your workspace. Multiple services and data sources can send data to the same workspace. As you identify criteria to create more workspaces, your design should use the fewest number of workspaces that meets your requirements.
-Designing a workspace configuration includes evaluation of multiple criteria. But some of the criteria might be in conflict. For example, you might be able to reduce egress charges by creating a separate workspace in each Azure region. Consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria independently. Consider your requirements and priorities to determine which design will be most effective for your environment.
+Designing a workspace configuration includes evaluation of multiple criteria. But some of the criteria might be in conflict. For example, you might be able to reduce egress charges by creating a separate workspace in each Azure region. Consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria independently. Consider your requirements and priorities to determine which design is most effective for your environment.
## Design criteria
The following table presents criteria to consider when you design your workspace architecture. The sections that follow describe the criteria.
The following table presents criteria to consider when you design your workspace
| [Azure regions](#azure-regions) | Each workspace resides in a particular Azure region. You might have regulatory or compliance requirements to store data in specific locations. |
| [Data ownership](#data-ownership) | You might choose to create separate workspaces to define data ownership. For example, you might create workspaces by subsidiaries or affiliated companies. |
| [Split billing](#split-billing) | By placing workspaces in separate subscriptions, they can be billed to different parties. |
-| [Data retention and archive](#data-retention-and-archive) | You can set different retention settings for each workspace and each table in a workspace. You need a separate workspace if you require different retention settings for different resources that send data to the same tables. |
+| [Data retention](#data-retention) | You can set different retention settings for each workspace and each table in a workspace. You need a separate workspace if you require different retention settings for different resources that send data to the same tables. |
| [Commitment tiers](#commitment-tiers) | Commitment tiers allow you to reduce your ingestion cost by committing to a minimum amount of daily data in a single workspace. |
| [Legacy agent limitations](#legacy-agent-limitations) | Legacy virtual machine agents have limitations on the number of workspaces they can connect to. |
| [Data access control](#data-access-control) | Configure access to the workspace and to different tables and data from different resources. |
You might need to split billing between different parties or perform charge back
- **If you don't need to split billing or perform charge back:** Use a single workspace for all cost owners.
- **If you need to split billing or perform charge back:** Consider whether [Azure Cost Management + Billing](../cost-usage.md#azure-cost-management--billing) or a log query provides cost reporting that's granular enough for your requirements. If not, use a separate workspace for each cost owner.
-### Data retention and archive
-You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#configure-retention-and-archive-at-the-table-level). You might require different settings for different sets of data in a particular table. If so, you need to separate that data into different workspaces, each with unique retention settings.
+### Data retention
+You can configure default [data retention settings](data-retention-configure.md) for a workspace or [configure different settings for each table](data-retention-configure.md#configure-table-level-retention). You might require different settings for different sets of data in a particular table. If so, you need to separate that data into different workspaces, each with unique retention settings.
-- **If you can use the same retention and archive settings for all data in each table:** Use a single workspace for all resources.-- **If you require different retention and archive settings for different resources in the same table:** Use a separate workspace for different resources.
+- **If you can use the same retention settings for all data in each table:** Use a single workspace for all resources.
+- **If you require different retention settings for different resources in the same table:** Use a separate workspace for different resources.
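As a minimal sketch of the table-level flexibility described above (the API version, workspace name, and retention values are illustrative, not from the article), a Bicep template can set a workspace default and override it for a single table:

```bicep
// Workspace with a 30-day default interactive retention.
resource logWorkspace 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'logs-demo'
  location: resourceGroup().location
  properties: {
    retentionInDays: 30
    sku: {
      name: 'PerGB2018'
    }
  }
}

// Table-level override: longer retention for one table in the same workspace.
resource appTraces 'Microsoft.OperationalInsights/workspaces/tables@2022-10-01' = {
  parent: logWorkspace
  name: 'AppTraces'
  properties: {
    retentionInDays: 90 // interactive retention for this table only
    totalRetentionInDays: 365 // total retention, including long-term retention
  }
}
```

If two resources that send data to the same table need different retention values, that's the case where a second workspace becomes necessary, as the criteria above note.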
### Commitment tiers
[Commitment tiers](../logs/cost-logs.md#commitment-tiers) provide a discount to your workspace ingestion costs when you commit to a specific amount of daily data. You might choose to consolidate data in a single workspace to reach the level of a particular tier. This same volume of data spread across multiple workspaces wouldn't be eligible for the same tier, unless you have a dedicated cluster.
To ensure that critical data in your workspace is available in the event of a re
This option requires managing integration with other services and products separately for each workspace. Even though the data will be available in the alternate workspace in case of failure, resources that rely on the data, such as alerts and workbooks, won't know to switch over to the alternate workspace. Consider storing ARM templates for critical resources with configuration for the alternate workspace in Azure DevOps, or as disabled policies that can quickly be enabled in a failover scenario.
## Work with multiple workspaces
-Many designs will include multiple workspaces, so Azure Monitor and Microsoft Sentinel include features to assist you in analyzing this data across workspaces. For more information, see:
+Many designs include multiple workspaces, so Azure Monitor and Microsoft Sentinel include features to assist you in analyzing this data across workspaces. For more information, see:
- [Create a log query across multiple workspaces and apps in Azure Monitor](cross-workspace-query.md)
- [Extend Microsoft Sentinel across workspaces and tenants](../../sentinel/extend-sentinel-across-workspaces-tenants.md)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
|General|[Create a metric alert with dynamic thresholds](alerts/alerts-dynamic-thresholds.md)|Added possible values for alert User Response field.| |Logs|[Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](logs/tutorial-logs-ingestion-api.md)|Updated to use DCR endpoint instead of DCE.| |Logs|[Create and manage a dedicated cluster in Azure Monitor Logs](logs/logs-dedicated-clusters.md)|Added new process for configuring dedicated clusters in Azure portal.|
-|Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|The Basic Logs table plan now includes 30 days of interactive retention.|
+|Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|The Basic Logs table plan now includes 30 days of interactive retention.|
|Logs|[Aggregate data in a Log Analytics workspace by using summary rules (Preview)](logs/summary-rules.md)|Summary rules final 2| |Visualizations|[Link actions](visualize/workbooks-link-actions.md)|Added clarification that the user must have permissions to all resources referenced in a workbook as well as to the workbook itself.<p>Updated process and screenshots for custom views in workbook link actions.|
This article lists significant changes to Azure Monitor documentation.
|Essentials|[Tutorial: Edit a data collection rule (DCR)](essentials/data-collection-rule-edit.md)|Updated API version in REST API calls.| |Essentials|[Monitor and troubleshoot DCR data collection in Azure Monitor](essentials/data-collection-monitor.md)|New article documenting new DCR monitoring feature.| |Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|Added new metrics for monitoring data export from a Log Analytics workspace.|
-|Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Azure Databricks logs tables now support the basic logs data plan.|
+|Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Azure Databricks logs tables now support the basic logs data plan.|
## February 2024
This article lists significant changes to Azure Monitor documentation.
|Logs|[Logs Ingestion API in Azure Monitor](logs/logs-ingestion-api-overview.md)|Add new tables that now support ingestion time transformations.| |Logs|[Plan alerts and automated actions](alerts/alerts-plan.md)|The Getting Started section was edited to make the documentation cleaner and more efficient.| |Logs|[Enhance data and service resilience in Azure Monitor Logs with availability zones](logs/availability-zones.md)|Updated the list of supported regions for Availability Zones.|
-|Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Bare Metal Machines and Microsoft Graph tables now support Basic logs.|
+|Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Bare Metal Machines and Microsoft Graph tables now support Basic logs.|
|Virtual-Machines|[Monitor virtual machines with Azure Monitor](vm/monitor-virtual-machine.md)|Added information on using Performance Diagnostics to troubleshoot performance issues on Windows or Linux virtual machines.| ## January 2024
Containers|[Enable Container insights](containers/container-insights-onboard.md)
Essentials|[Azure Monitor managed service for Prometheus rule groups](essentials/prometheus-rule-groups.md)|Create or edit Prometheus rule group in the Azure portal (preview)| Logs|[Detect and mitigate potential issues using AIOps and machine learning in Azure Monitor](logs/aiops-machine-learning.md)|Microsoft Copilot in Azure now helps you write KQL queries to analyze data and troubleshoot issues based on prompts, such as "Are there any errors in container logs?". | Logs|[Best practices for Azure Monitor Logs](./best-practices-logs.md)|More guidance on Azure Monitor Logs features that provide enhanced resilience.|
-Logs|[Data retention and archive in Azure Monitor Logs](logs/data-retention-archive.md)|Azure Monitor Logs extended archiving of data to up to 12 years.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added Basic logs support for Network managers tables.|
+Logs|[Data retention and archive in Azure Monitor Logs](logs/data-retention-configure.md)|Azure Monitor Logs extended archiving of data to up to 12 years.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Added Basic logs support for Network managers tables.|
Virtual-Machines|[Enable VM insights in the Azure portal](vm/vminsights-enable-portal.md)|Azure portal no longer supports enabling VM insights using Log Analytics agent.| Virtual-Machines|[Azure Monitor SCOM Managed Instance](vm/scom-managed-instance-overview.md)|Azure Monitor SCOM Managed Instance is now generally available.| Visualizations|[Azure Workbooks](visualize/workbooks-overview.md)|We clarified that when you're viewing Azure workbooks, you can see all of the workbooks that are in your current view. In order to see all of your existing workbooks of any kind, you must Browse across galleries. |
General|[Plan your alerts and automated actions](alerts/alerts-plan.md)|Add aler
General|[Azure Monitor cost and usage](cost-usage.md)|Updated information about the Cost Analysis usage report which contains both the cost for your usage, and the number of units of usage. You can use this export to see the amount of benefit you're receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). | Logs|[Send log data to Azure Monitor by using the HTTP Data Collector API (deprecated)](logs/data-collector-api.md)|Added deprecation notice.| Logs|[Azure Monitor Logs overview](logs/data-platform-logs.md)|Added code samples for the Azure Monitor Ingestion client module for Go.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added new Virtual Network Manager, Dev Center, and Communication Services tables that now support Basic logs.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Added new Virtual Network Manager, Dev Center, and Communication Services tables that now support Basic logs.|
## August 2023
Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Pyth
Application-Insights|[Data Collection Basics of Azure Monitor Application Insights](app/opentelemetry-overview.md)|We've added a new article to clarify both manual and automatic instrumentation options to enable Application Insights.| Application-Insights|[Enable a framework extension for Application Insights JavaScript SDK](app/javascript-framework-extensions.md)|The "Explore your data" section has been improved.| Application-Insights|[Sampling overrides (preview) - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)|We've documented steps for troubleshooting sampling.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Additional Azure tables now support low-cost basic logs, including tables for the Bare Metal Machines, Managed Lustre, Nexus Clusters, and Nexus Storage Appliances services. |
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Additional Azure tables now support low-cost basic logs, including tables for the Bare Metal Machines, Managed Lustre, Nexus Clusters, and Nexus Storage Appliances services. |
Logs|[Query Basic Logs in Azure Monitor](logs/basic-logs-query.md)|Basic log queries are now billable.| Logs|[Restore logs in Azure Monitor](logs/restore.md)|Restored logs are now billable.| Logs|[Run search jobs in Azure Monitor](logs/search-jobs.md)|Search jobs are now billable.|
Essentials|[Azure Active Directory authorization proxy](essentials/prometheus-au
Essentials|[Integrate KEDA with your Azure Kubernetes Service cluster](essentials/integrate-keda.md)|New Article: Integrate KEDA with AKS and Prometheus| Essentials|[General Availability: Azure Monitor managed service for Prometheus](https://techcommunity.microsoft.com/t5/azure-observability-blog/general-availability-azure-monitor-managed-service-for/ba-p/3817973)|General Availability: Azure Monitor managed service for Prometheus | Insights|[Monitor and analyze runtime behavior with Code Optimizations (Preview)](insights/code-optimizations.md)|New doc for public preview release of Code Optimizations feature.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added Azure Active Directory, Communication Services, Container Apps Environments, and Data Manager for Energy to the list of tables that support Basic logs.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Added Azure Active Directory, Communication Services, Container Apps Environments, and Data Manager for Energy to the list of tables that support Basic logs.|
Logs|[Export data from a Log Analytics workspace to a storage account by using Logic Apps](logs/logs-export-logic-app.md)|Added an Azure Resource Manager template for exporting data from a Log Analytics workspace to a storage account by using Logic Apps.| Logs|[Set daily cap on Log Analytics workspace](logs/daily-cap.md)|Starting September 18, 2023, the Log Analytics Daily Cap will no longer exclude a set of data types from the daily cap, and all billable data types will be capped if the daily cap is met.|
General|[Migrate from Operations Manager to Azure Monitor](azure-monitor-operati
Logs|[Application Insights API Access with Microsoft Azure Active Directory (Azure AD) Authentication](app/app-insights-azure-ad-api.md)|New article that explains how to authenticate and access the Azure Monitor Application Insights APIs using Azure AD.| Logs|[Tutorial: Replace custom fields in Log Analytics workspace with KQL-based custom columns](logs/custom-fields-migrate.md)|Guidance for migrate legacy custom fields to KQL-based custom columns using transformations.| Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|View Log Analytics workspace health metrics, including query success metrics, directly from the Log Analytics workspace screen in the Azure portal.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Dedicated SQL Pool tables and Kubernetes services tables now support Basic logs.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Dedicated SQL Pool tables and Kubernetes services tables now support Basic logs.|
Logs|[Set daily cap on Log Analytics workspace](logs/daily-cap.md)|Updated daily cap functionality for workspace-based Application Insights.| Profiler|[View Application Insights Profiler data](profiler/profiler-data.md)|Clarified this section based on user feedback.| Snapshot-Debugger|[Debug snapshots on exceptions in .NET apps](snapshot-debugger/snapshot-collector-release-notes.md)|Removed "how to view" sections and move into its own doc.|
Logs|[Add or delete tables and columns in Azure Monitor Logs](logs/create-custom
Logs|[Enhance data and service resilience in Azure Monitor Logs with availability zones](logs/availability-zones.md)|Clarified availability zone support for data resilience and service resilience and added new supported regions.| Logs|[Monitor Log Analytics workspace health](logs/log-analytics-workspace-health.md)|New article: Explains how to monitor the service and resource health of a Log Analytics workspace.| Logs|[Feature extensions for Application Insights JavaScript SDK (Click Analytics)](app/javascript-click-analytics-plugin.md)|You can now launch Power BI and create a dataset and report connected to a Log Analytics query with one click.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added new tables to the list of tables that support Basic Logs.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Added new tables to the list of tables that support Basic Logs.|
Logs|[Manage tables in a Log Analytics workspace]()|Refreshed all Log Analytics workspace images with the new TOC on the left.| Security-Fundamentals|[Monitoring Azure App Service](../../articles/app-service/monitor-app-service.md)|Revised the Azure Monitor overview to improve usability. The article is cleaned up, streamlined, and better reflects the product architecture and the customer experience. | Snapshot-Debugger|[host.json reference for Azure Functions 2.x and later](../../articles/azure-functions/functions-host-json.md)|Removing the TSG from the Azure Monitor TOC and adding to the support TOC.|
Logs|[Set daily cap on Log Analytics workspace](logs/daily-cap.md)|Clarified spe
Logs|[Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md)|Updated and refreshed how to send custom metrics.| Logs|[Migrate from Splunk to Azure Monitor Logs](logs/migrate-splunk-to-azure-monitor-logs.md)|New article: Explains how to migrate your Splunk Observability deployment to Azure Monitor Logs for logging and log data analysis.| Logs|[Manage access to Log Analytics workspaces](logs/manage-access.md)|Added permissions required to run a search job and restore archived data.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added information about how to modify a table schema by using the API.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Added information about how to modify a table schema by using the API.|
Snapshot-Debugger|[Enable Snapshot Debugger for .NET apps in Azure App Service](snapshot-debugger/snapshot-debugger-app-service.md)|Per customer feedback, added new note that Consumption plan isn't supported.| Virtual-Machines|[Collect IIS logs with Azure Monitor Agent](agents/data-collection-iis.md)|Added sample log queries.| Virtual-Machines|[Collect text logs with Azure Monitor Agent](agents/data-collection-text-log.md)|Added sample log queries.|
Containers|[Reports in Container insights](containers/container-insights-reports
Essentials|[Best practices for data collection rule creation and management in Azure Monitor](essentials/data-collection-rule-best-practices.md)|New article.| Essentials|[Configure self-managed Grafana to use Azure Monitor managed service for Prometheus (preview) with Azure Active Directory](essentials/prometheus-self-managed-grafana-azure-active-directory.md)|New article: Configured self-managed Grafana to use Azure Monitor managed service for Prometheus (preview) with Azure Active Directory.| Logs|[Azure Monitor SCOM Managed Instance (preview)](vm/scom-managed-instance-overview.md)|New article.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Updated the list of tables that support Basic Logs.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Updated the list of tables that support Basic Logs.|
Virtual-Machines|[Tutorial: Create availability alert rule for Azure virtual machine (preview)](vm/tutorial-monitor-vm-alert-availability.md)|New article.| Virtual-Machines|[Tutorial: Enable recommended alert rules for Azure virtual machine](vm/tutorial-monitor-vm-alert-recommended.md)|New article.| Virtual-Machines|[Tutorial: Enable monitoring with VM insights for Azure virtual machine](vm/tutorial-monitor-vm-enable-insights.md)|New article.|
Logs|[Azure Monitor Metrics overview](essentials/data-platform-metrics.md)| Adde
Logs|[Azure Monitor Log Analytics API overview](logs/api/overview.md)| Added a new Azure SDK client library for Go.| Logs|[Azure Monitor Logs overview](logs/data-platform-logs.md)| Added a new Azure SDK client library for Go.| Logs|[Log queries in Azure Monitor](logs/log-query-overview.md)| Added a new Azure SDK client library for Go.|
-Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added new tables to the list of tables that support the Basic Log data plan.|
+Logs|[Set a table's log data plan to Basic or Analytics](logs/logs-table-plans.md)|Added new tables to the list of tables that support the Basic Log data plan.|
Visualizations|[Monitor your Azure services in Grafana](visualize/grafana-plugin.md)|The Grafana integration is generally available and is no longer in preview.| Visualizations|[Get started with Azure Workbooks](visualize/workbooks-getting-started.md)|Added instructions for how to share workbooks.|
Essentials|[Azure resource logs](./essentials/resource-logs.md)|Clarified which
Essentials|[Resource Manager template samples for Azure Monitor](resource-manager-samples.md?tabs=portal)|Added template deployment methods.| Essentials|[Azure Monitor service limits](service-limits.md)|Added Azure Monitor managed service for Prometheus.| Logs|[Manage access to Log Analytics workspaces](./logs/manage-access.md)|Table-level role-based access control lets you give specific users or groups read access to particular tables.|
-Logs|[Configure Basic Logs in Azure Monitor](./logs/basic-logs-configure.md)|Added information on general availability of the Basic Logs data plan, retention and archiving, search job, and the table management user experience in the Azure portal.|
+Logs|[Configure Basic Logs in Azure Monitor](./logs/logs-table-plans.md)|Added information on general availability of the Basic Logs data plan, retention and archiving, search job, and the table management user experience in the Azure portal.|
Logs|[Guided project - Analyze logs in Azure Monitor with KQL - Training](/training/modules/analyze-logs-with-kql/)|New Learn module: Learn to write KQL queries to retrieve and transform log data to answer common business and operational questions.| Logs|[Detect and analyze anomalies with KQL in Azure Monitor](logs/kql-machine-learning-azure-monitor.md)|New tutorial: Walkthrough of how to use KQL for time-series analysis and anomaly detection in Azure Monitor Log Analytics. | Virtual-machines|[Enable VM insights for a hybrid virtual machine](./vm/vminsights-enable-hybrid.md)|Updated versions of standalone installers.|
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to
| Article | Description | |||
-|[Configure data retention and archive in Azure Monitor Logs (preview)](logs/data-retention-archive.md)|Clarified how data retention and archiving work in Azure Monitor Logs to address repeated customer inquiries.|
+|[Configure data retention and archive in Azure Monitor Logs (preview)](logs/data-retention-configure.md)|Clarified how data retention and archiving work in Azure Monitor Logs to address repeated customer inquiries.|
## July 2022 ### General
azure-netapp-files Azure Netapp Files Cost Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md
Previously updated : 11/08/2021 Last updated : 07/15/2024 # Cost model for Azure NetApp Files
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Previously updated : 08/10/2023 Last updated : 05/10/2024
azure-netapp-files Azure Netapp Files Performance Metrics Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-metrics-volumes.md
The intent of SSB is to allow organizations and individuals to measure the perfo
#### Installation of SSB
-Follow the [Getting started](https://github.com/NetApp/SQL_Storage_Benchmark/blob/main/README.md#getting-started) section in the SSB README file to install for the platform of your choice.
+Follow the Getting started section in the SSB README file to install for the platform of your choice.
### FIO
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Previously updated : 02/21/2023 Last updated : 04/24/2024 #Customer intent: As an IT admin new to Azure NetApp Files, I want to quickly set up Azure NetApp Files and create a volume.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
Previously updated : 09/18/2023 Last updated : 06/17/2024 # Solution architectures using Azure NetApp Files
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
Previously updated : 07/27/2023 Last updated : 07/18/2024 # Storage hierarchy of Azure NetApp Files
azure-netapp-files Backup Configure Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-manual.md
Previously updated : 06/13/2023 Last updated : 07/03/2024 # Configure manual backups for Azure NetApp Files
azure-netapp-files Backup Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-delete.md
Previously updated : 10/27/2022 Last updated : 04/24/2024 # Delete backups of a volume
azure-netapp-files Backup Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-manage-policies.md
Previously updated : 07/31/2023 Last updated : 04/24/2024 # Manage backup policies for Azure NetApp Files
azure-netapp-files Backup Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-search.md
Previously updated : 09/27/2021 Last updated : 04/24/2024 # Search backups of Azure NetApp Files volumes
azure-netapp-files Backup Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-vault-manage.md
Previously updated : 10/27/2022 Last updated : 04/24/2024 # Manage backup vaults for Azure NetApp Files
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
Previously updated : 02/28/2023 Last updated : 05/28/2024
azure-netapp-files Cross Zone Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-requirements-considerations.md
Previously updated : 08/18/2023 Last updated : 05/28/2024 # Requirements and considerations for using cross-zone replication
azure-netapp-files Faq Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-backup.md
Previously updated : 09/10/2022 Last updated : 04/24/2024 # Azure NetApp Files backup FAQs
azure-netapp-files Faq Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-nfs.md
Previously updated : 03/15/2023 Last updated : 05/21/2024 # NFS FAQs for Azure NetApp Files
azure-netapp-files Faq Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-performance.md
Previously updated : 08/18/2022 Last updated : 04/05/2024 # Performance FAQs for Azure NetApp Files
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 05/03/2023 Last updated : 05/21/2024 # SMB FAQs for Azure NetApp Files
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
Previously updated : 11/02/2023 Last updated : 07/22/2024 # Requirements and considerations for large volumes
The following requirements and considerations apply to large volumes. For perfor
<td>50</td> <td>1,024</td> <td>3,200</td>
- <td>12</td>
+ <td>12,800</td>
+ </tr>
+ <tr>
+ <td>Ultra (128 MiB/s per TiB)</td>
+ <td>50</td>
+ <td>1,024</td>
+ <td>6,400</td>
+ <td>12,800</td>
</tr> </tbody> </table>
azure-netapp-files Performance Oracle Multiple Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-multiple-volumes.md
Previously updated : 05/04/2023 Last updated : 04/24/2024
azure-netapp-files Solutions Benefits Azure Netapp Files Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-sql-server.md
The SSB tool generates a SELECT and UPDATE driven workload issuing the said stat
The tests themselves were configured as 80% SELECT and 20% UPDATE statement, thus 90% random read. The database itself, which SSB created, was 1000 GB in size. It's comprised of 15 user tables and 9,000,000 rows per user table and 8192 bytes per row.
-The SSB benchmark is an open-source tool. It's freely available at the [SQL Storage Benchmark GitHub page](https://github.com/NetApp/SQL_Storage_Benchmark.git).
+The SSB benchmark is an open-source tool. It's freely available at the SQL Storage Benchmark GitHub page.
## In summary
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Previously updated : 02/21/2023 Last updated : 07/15/2024 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
azure-resource-manager Bicep Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-codes.md
If you need more information about a particular warning or error code, select th
| BCP069 | The function "{function}" is not supported. Use the "{@operator}" operator instead. | | BCP070 | Argument of type "{argumentType}" is not assignable to parameter of type "{parameterType}". | | BCP071 | Expected {expected}, but got {argumentCount}. |
-| [BCP072](./bicep-error-bcp072.md) | This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. |
+| <a id='BCP072' />[BCP072](./bicep-error-bcp072.md) | This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. |
| [BCP073](./bicep-error-bcp073.md) | The property &lt;property-name> is read-only. Expressions cannot be assigned to read-only properties. | | BCP074 | Indexing over arrays requires an index of type "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". | | BCP075 | Indexing over objects requires an index of type "{LanguageConstants.String}" but the provided index was of type "{wrongType}". |
azure-resource-manager Bicep Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-object.md
The output from the preceding example with the default values is:
`empty(itemToTest)`
-Determines if an array, object, or string is empty.
+Determines if an array, object, or string is empty or null.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: |
-| itemToTest |Yes |array, object, or string |The value to check if it's empty. |
+| itemToTest |Yes |array, object, or string |The value to check if it's empty or null. |
### Return value
-Returns **True** if the value is empty; otherwise, **False**.
+Returns **True** if the value is empty or null; otherwise, **False**.
### Example
-The following example checks whether an array, object, and string are empty.
+The following example checks whether an array, object, and string are empty or null.
```bicep
param testArray array = []
param testObject object = {}
param testString string = ''
+param testNullString string?
output arrayEmpty bool = empty(testArray)
output objectEmpty bool = empty(testObject)
output stringEmpty bool = empty(testString)
+output stringNull bool = empty(testNullString)
```
The output from the preceding example with the default values is:
The output from the preceding example with the default values is:
| arrayEmpty | Bool | True |
| objectEmpty | Bool | True |
| stringEmpty | Bool | True |
+| stringNull | Bool | True |
## intersection
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/data-types.md
Title: Data types in Bicep
description: Describes the data types that are available in Bicep Previously updated : 11/03/2023 Last updated : 07/16/2024 # Data types in Bicep
var mixedArray = ['abc', 'def'
'ghi'] ```
-In an array, each item is represented by the [any type](bicep-functions-any.md). You can have an array where each item is the same data type, or an array that holds different data types.
+Each array element can be of any type. You can have an array where each item is the same data type, or an array that holds different data types.
The following example shows an array of integers and an array of different types.
var mixedArray = [
] ```
-Arrays in Bicep are zero-based. In the following example, the expression `exampleArray[0]` evaluates to 1 and `exampleArray[2]` evaluates to 3. The index of the indexer may itself be another expression. The expression `exampleArray[index]` evaluates to 2. Integer indexers are only allowed on expression of array types.
+Arrays in Bicep are zero-based. In the following example, the expression `exampleArray[0]` evaluates to 1 and `exampleArray[2]` evaluates to 3. The index of the indexer can be another expression. The expression `exampleArray[index]` evaluates to 2. Integer indexers are only allowed on expression of array types.
```bicep var index = 1
When specifying integer values, don't use quotation marks.
param exampleInt int = 1 ```
-In Bicep, integers are 64-bit integers. When passed as inline parameters, the range of values may be limited by the SDK or command-line tool you use for deployment. For example, when using PowerShell to deploy a Bicep, integer types can range from -2147483648 to 2147483647. To avoid this limitation, specify large integer values in a [parameter file](parameter-files.md). Resource types apply their own limits for integer properties.
+Bicep integers are 64-bit integers. When passed as inline parameters, the range of values can be limited by the SDK or command-line tool you use for deployment. For example, when using PowerShell to deploy a Bicep file, integer types can range from -2147483648 to 2147483647. To avoid this limitation, specify large integer values in a [parameter file](parameter-files.md). Resource types apply their own limits for integer properties.
+
+Bicep supports an integer literal type, which refers to a specific value that is an exact integer. In the following example, _1_ is an integer literal type; _foo_ can only be assigned the value _1_ and no other value.
+
+```bicep
+output foo 1 = 1
+```
+
+An integer literal type can either be declared inline, as shown in the preceding example, or in a [`type` statement](./user-defined-data-types.md).
+
+```bicep
+type oneType = 1
+
+output foo oneType = 1
+output bar oneType = 2
+```
+
+In the preceding example, assigning _2_ to _bar_ results in a [BCP033](./bicep-error-bcp033.md) error - _Expected a value of type "1" but the provided value is of type "2"_.
+
+The following example shows how to use an integer literal type with a [union type](#union-types):
+
+```bicep
+output bar 1 | 2 | 3 = 3
+```
Floating point, decimal or binary formats aren't currently supported.
var test = {
} ```
-In the preceding example, quotes are used when the object property keys contain special characters. For example space, '-', or '.'. The following example shows how to use interpolation in object property keys.
+In the preceding example, quotes are used when the object property keys contain special characters - for example, a space, '-', or '.'. The following example shows how to use interpolation in object property keys.
```bicep var stringVar = 'example value'
output accessorResult string = environmentSettings['dev'].name
[!INCLUDE [JSON object ordering](../../../includes/resource-manager-object-ordering-bicep.md)]
-You will get the following error when accessing an nonexisting property of an object:
+You get the following error when accessing a nonexisting property of an object:
```error The language expression property 'foo' doesn't exist
output bar bool = contains(objectToTest, 'four') && objectToTest.four == 4
## Strings
-In Bicep, strings are marked with singled quotes, and must be declared on a single line. All Unicode characters with code points between *0* and *10FFFF* are allowed.
+In Bicep, strings are marked with single quotes and must be declared on a single line. All Unicode characters with code points between _0_ and _10FFFF_ are allowed.
```bicep param exampleString string = 'test value'
The following table lists the set of reserved characters that must be escaped by
| `\n` | line feed (LF) || | `\r` | carriage return (CR) || | `\t` | tab character ||
-| `\u{x}` | Unicode code point `x` | **x** represents a hexadecimal code point value between *0* and *10FFFF* (both inclusive). Leading zeros are allowed. Code points above *FFFF* are emitted as a surrogate pair.
+| `\u{x}` | Unicode code point `x` | **x** represents a hexadecimal code point value between _0_ and _10FFFF_ (both inclusive). Leading zeros are allowed. Code points above _FFFF_ are emitted as a surrogate pair. |
| `\$` | `$` | Only escape when followed by `{`. | ```bicep
The following table lists the set of reserved characters that must be escaped by
var myVar = 'what\'s up?' ```
+Bicep supports a string literal type, which refers to a specific string value. In the following example, _red_ is a string literal type; _redColor_ can only be assigned the value _red_ and no other value.
+
+```bicep
+output redColor 'red' = 'red'
+```
+
+A string literal type can either be declared inline, as shown in the preceding example, or in a [`type` statement](./user-defined-data-types.md).
+
+```bicep
+type redColor = 'red'
+
+output colorRed redColor = 'red'
+output colorBlue redColor = 'blue'
+```
+
+In the preceding example, assigning _blue_ to _colorBlue_ results in a [BCP033](./bicep-error-bcp033.md) error - _Expected a value of type "'red'" but the provided value is of type "'blue'"_.
+
+The following example shows how to use a string literal type with a [union type](#union-types):
+
+```bicep
+type direction = 'north' | 'south' | 'east' | 'west'
+
+output west direction = 'west'
+output northWest direction = 'northwest'
+```
+ All strings in Bicep support interpolation. To inject an expression, surround it by `${` and `}`. Expressions that are referenced can't span multiple lines. ```bicep var storageName = 'storage${uniqueString(resourceGroup().id)}' ```
-## Multi-line strings
+### Multi-line strings
In Bicep, multi-line strings are defined between three single quote characters (`'''`) followed optionally by a newline (the opening sequence), and three single quote characters (`'''` - the closing sequence). Characters that are entered between the opening and closing sequence are read verbatim, and no escaping is necessary or possible.
var myVar6 = '''interpolation
is ${blocked}''' ```
+## Union types
+
+In Bicep, a union type allows the creation of a combined type consisting of a set of sub-types. An assignment is valid if any of the individual sub-type assignments are permitted. The `|` character separates individual sub-types using an _or_ condition. For example, the syntax _'a' | 'b'_ means that a valid assignment could be either _'a'_ or _'b'_. Union types are translated into the [allowed-value](../templates/definitions.md#allowed-values) constraint in Bicep, so only literals are permitted as members. Unions can include any number of literal-typed expressions.
+
+```bicep
+type color = 'Red' | 'Blue' | 'White'
+type trueOrFalse = 'true' | 'false'
+type permittedIntegers = 1 | 2 | 3
+type oneOfSeveralObjects = {foo: 'bar'} | {fizz: 'buzz'} | {snap: 'crackle'}
+type mixedTypeArray = ('fizz' | 42 | {an: 'object'} | null)[]
+```
+
+Any type expression can be used as a sub-type in a union type declaration (between `|` characters). For example, the following are all valid:
+
+```bicep
+type foo = 1 | 2
+type bar = foo | 3
+type baz = bar | (4 | 5) | 6
+```
+
+### Custom-tagged union data type
+
+Bicep supports a custom tagged union data type, which is used to represent a value that can be one of several different types. To declare a custom tagged union data type, use the `@discriminator()` decorator. [Bicep CLI version 0.21.X or higher](./install.md) is required to use this decorator. The syntax is:
+
+```bicep
+@discriminator('<property-name>')
+```
+
+The discriminator decorator takes a single parameter, which represents a shared property name among all union members. This property name must be a required string literal on all members and is case-sensitive. The values of the discriminated property on the union members must be unique in a case-insensitive manner.
+
+```bicep
+type FooConfig = {
+ type: 'foo'
+ value: int
+}
+
+type BarConfig = {
+ type: 'bar'
+ value: bool
+}
+
+@discriminator('type')
+param serviceConfig FooConfig | BarConfig | { type: 'baz', *: string } = { type: 'bar', value: true }
+```
+
+The parameter value is validated based on the discriminated property value. For instance, in the preceding example, if the _serviceConfig_ parameter is of type _foo_, it's validated using the _FooConfig_ type. Similarly, if the parameter is of type _bar_, it's validated using the _BarConfig_ type. This pattern applies to other types as well.
+
+There are some limitations with union types:
+
+* Union types must be reducible to a single Azure Resource Manager (ARM) type. The following definition is invalid:
+
+ ```bicep
+ type foo = 'a' | 1
+ ```
+
+* Only literals are permitted as members.
+* All literals must be of the same primitive data type (e.g., all strings or all integers).
+
+The union type syntax can be used in [user-defined data types](./user-defined-data-types.md).
+ ## Secure strings and objects Secure string uses the same format as string, and secure object uses the same format as object. With Bicep, you add the `@secure()` [decorator](./parameters.md#decorators) to a string or object.
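For example, here's a minimal sketch of the decorator described above (the parameter names are illustrative):

```bicep
// Secure values aren't saved to the deployment history and aren't logged.
@secure()
param adminPassword string

@secure()
param apiSettings object
```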
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
Resource Manager resolves parameter values before starting the deployment operat
Each parameter must be set to one of the [data types](data-types.md).
-You're limited to 256 parameters in a Bicep file. For more information, see [Template limits](../templates/best-practices.md#template-limits).
+Bicep allows a maximum of 256 parameters. For more information, see [Template limits](../templates/best-practices.md#template-limits).
For parameter best practices, see [Parameters](./best-practices.md#parameters).
param storageAccountConfig {
} ```
-For more information, see [User-defined data types](./user-defined-data-types.md#user-defined-data-type-syntax).
+For more information, see [User-defined data types](./user-defined-data-types.md#syntax).
## Default value
The following table describes the available decorators and how to use them.
| Decorator | Apply to | Argument | Description |
| --- | --- | --- | --- |
-| [allowed](#allowed-values) | all | array | Allowed values for the parameter. Use this decorator to make sure the user provides correct values. |
+| [allowed](#allowed-values) | all | array | Use this decorator to make sure the user provides correct values. This decorator is only permitted on `param` statements. To declare that a property must be one of a set of predefined values in a [`type`](./user-defined-data-types.md) or [`output`](./outputs.md) statement, use [union type syntax](./data-types.md#union-types). Union type syntax can also be used in `param` statements.|
| [description](#description) | all | string | Text that explains how to use the parameter. The description is displayed to users through the portal. |
| [maxLength](#length-constraints) | array, string | int | The maximum length for string and array parameters. The value is inclusive. |
| [maxValue](#integer-constraints) | int | int | The maximum value for the integer parameter. This value is inclusive. |
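To illustrate the distinction called out in the `allowed` row above, here's a minimal sketch (the names are illustrative): the decorator constrains a `param` statement, while a union type expresses the same set of values in a `type` or `output` statement.

```bicep
// @allowed is only permitted on param statements.
@allowed([
  'dev'
  'test'
  'prod'
])
param environmentName string = 'dev'

// In type or output statements, express the same constraint as a union type.
type environmentType = 'dev' | 'test' | 'prod'
output defaultEnvironment environmentType = 'dev'
```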
When you hover your cursor over **storageAccountName** in VS Code, you see the f
:::image type="content" source="./media/parameters/vscode-bicep-extension-description-decorator-markdown.png" alt-text="Use Markdown-formatted text in VSCode":::
-Make sure the text follows proper Markdown formatting; otherwise, it may not display correctly when rendered
+Make sure the text follows proper Markdown formatting; otherwise, it may not display correctly when rendered.
### Metadata
You might use this decorator to track information about the parameter that doesn
param settings object ```
-When you provide a `@metadata()` decorator with a property that conflicts with another decorator, that decorator always takes precedence over anything in the `@metadata()` decorator. So, the conflicting property within the @metadata() value is redundant and will be replaced. For more information, see [No conflicting metadata](./linter-rule-no-conflicting-metadata.md).
+When you provide a `@metadata()` decorator with a property that conflicts with another decorator, that decorator always takes precedence over anything in the `@metadata()` decorator. So, the conflicting property within the `@metadata()` value is redundant and will be replaced. For more information, see [No conflicting metadata](./linter-rule-no-conflicting-metadata.md).
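As a small sketch of the precedence behavior described above (the parameter is illustrative), the `@description()` decorator wins over the conflicting `description` property inside `@metadata()`:

```bicep
// The decorator value takes precedence; the conflicting description inside @metadata() is redundant and gets replaced.
@description('Number of virtual machine instances to deploy.')
@metadata({
  description: 'This conflicting value is ignored in favor of the decorator.'
  owner: 'platform-team'
})
param vmCount int = 2
```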
## Use parameter
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
Learn how to use user-defined data types in Bicep. For system-defined data types
[Bicep CLI version 0.12.X or higher](./install.md) is required to use this feature.
-## User-defined data type syntax
+## Syntax
You can use the `type` statement to define user-defined data types. In addition, you can also use type expressions in some places to define custom types.
You can use the `type` statement to define user-defined data types. In addition,
type <user-defined-data-type-name> = <type-expression> ```
-> [!NOTE]
-> The [`@allowed` decorator](./parameters.md#decorators) is only permitted on [`param` statements](./parameters.md). To declare that a property must be one of a set of predefined values in a `type` or [`output`](./outputs.md) statement, use union type syntax. Union type syntax may also be used in [`param` statements](./parameters.md).
+The [`@allowed`](./parameters.md#decorators) decorator is only permitted on [`param` statements](./parameters.md). To declare that a property must be one of a set of predefined values in a `type` statement, use [union type syntax](./data-types.md#union-types).
The valid type expressions include:
The valid type expressions include:
} ```
- The following sample shows how to use the union type syntax to list a set of predefined values:
+ The following sample shows how to use the [union type syntax](./data-types.md#union-types) to list a set of predefined values:
```bicep
+ type directions = 'east' | 'south' | 'west' | 'north'
+ type obj = { level: 'bronze' | 'silver' | 'gold' } ```
+ **Recursion** Object types may use direct or indirect recursion so long as at least one leg of the path to the recursion point is optional. For example, the `myObjectType` definition in the following example is valid because the directly recursive `recursiveProp` property is optional:
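A minimal sketch of such a definition (property names assumed) marks the recursive property as optional with the `?` modifier:

```bicep
// Valid: the directly recursive property is optional, so the recursion can terminate.
type myObjectType = {
  stringProp: string
  recursiveProp: myObjectType?
}
```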
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
} ```
-## Declare tagged union type
-
-To declare a custom tagged union data type within a Bicep file, you can place a discriminator decorator above a user-defined type declaration. [Bicep CLI version 0.21.X or higher](./install.md) is required to use this decorator. The syntax is:
-
-```bicep
-@discriminator('<propertyName>')
-```
-
-The discriminator decorator takes a single parameter, which represents a shared property name among all union members. This property name must be a required string literal on all members and is case-sensitive. The values of the discriminated property on the union members must be unique in a case-insensitive manner.
+## Tagged union data type
-The following example shows how to declare a tagged union type:
+To declare a custom tagged union data type within a Bicep file, you can place a discriminator decorator above a user-defined type declaration. [Bicep CLI version 0.21.X or higher](./install.md) is required to use this decorator. The following example shows how to declare a tagged union data type:
```bicep type FooConfig = {
param serviceConfig ServiceConfig = { type: 'bar', value: true }
output config object = serviceConfig ```
-The parameter value is validated based on the discriminated property value. In the preceding example, if the *serviceConfig* parameter value is of type *foo*, it undergoes validation using the *FooConfig*type. Likewise, if the parameter value is of type *bar*, validation is performed using the *BarConfig* type, and this pattern continues for other types as well.
+For more information, see [Custom tagged union data type](./data-types.md#custom-tagged-union-data-type).
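The snippet above is truncated; a minimal, self-contained sketch of a tagged union declaration and a parameter that uses it (type and property names assumed) might look like this:

```bicep
type FooConfig = {
  type: 'foo'
  value: int
}

type BarConfig = {
  type: 'bar'
  value: bool
}

// 'type' is the shared discriminator property across all union members.
@discriminator('type')
type ServiceConfig = FooConfig | BarConfig

// Validated against BarConfig because the discriminator property 'type' is 'bar'.
param serviceConfig ServiceConfig = { type: 'bar', value: true }

output config object = serviceConfig
```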
## Import types between Bicep files
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
You can retrieve information about tags by downloading the usage file available
For REST API operations, see [Azure Billing REST API Reference](/rest/api/billing/).
+## Unique tags pagination
+
+When calling the [Unique Tags API](/rest/api/resources/tags/list), there is a limit to the size of each API response page that is returned. A tag that has a large set of unique values requires the API to fetch the next page to retrieve the remaining set of values. When this happens, the tag key is shown again to indicate that the values are still under this key.
+
+This can cause some tools, like the Azure portal, to show the tag key twice.
+ ## Limitations The following limitations apply to tags:
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
In your private cloud, you can:
- Collect logs on each of your VMs. - [Download and install the MMA agent](../azure-monitor/agents/log-analytics-agent.md#installation-options) on Linux and Windows VMs. - Enable the [Azure diagnostics extension](../azure-monitor/agents/diagnostics-extension-overview.md).-- [Create and run new queries](../azure-monitor/logs/data-platform-logs.md#log-queries).
+- [Create and run new queries](../azure-monitor/logs/data-platform-logs.md#kusto-query-language-kql-and-log-analytics).
- Run the same queries you usually run on your VMs. Monitoring patterns inside the Azure VMware Solution are similar to Azure VMs within the IaaS platform. For more information and how-tos, see [Monitoring Azure VMs with Azure Monitor](../azure-monitor/vm/monitor-vm-azure.md).
backup Backup Azure Backup Sharepoint Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sharepoint-mabs.md
Title: Back up a SharePoint farm to Azure with MABS description: Use Azure Backup Server to back up and restore your SharePoint data. This article provides the information to configure your SharePoint farm so that desired data can be stored in Azure. You can restore protected SharePoint data from disk or from Azure. Previously updated : 11/29/2022 Last updated : 07/22/2024 -+
This article describes how to back up SharePoint farm using Microsoft Azure Back
Microsoft Azure Backup Server (MABS) enables you to back up a SharePoint farm to Microsoft Azure, which gives an experience similar to back up of other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points, and gives you retention policy options for various backup points. It also provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
-In this article, you'll learn about:
-
-> [!div class="checklist"]
-> - SharePoint supported scenarios
-> - Prerequisites
-> - Configure the backup
-> - Monitor the operations
-> - Restore a SharePoint item from disk using MABS
-> - Restore a SharePoint database from Azure using MABS
-> - Switch the front-end Web server
-> - Remove a database from a SharePoint farm
- >[!Note] >The backup process for SharePoint to Azure using MABS is similar to back up of SharePoint to Data Protection Manager (DPM) locally. Particular considerations for Azure are noted in this article.
For information on the supported SharePoint versions and the MABS versions requi
* MABS that protects a SharePoint farm doesn't protect search indexes or application service databases. You need to configure the protection of these databases separately. * MABS doesn't provide backup of SharePoint SQL Server databases that are hosted on scale-out file server (SOFS) shares.
+### Limitations
+
+* You can't protect SharePoint databases as a SQL Server data source. You can recover individual databases from a farm backup.
+
+* Protecting application store items isn't supported with SharePoint 2013.
+
+* MABS doesn't support protecting remote FILESTREAM. The FILESTREAM should be part of the database.
+ ## Prerequisites Before you continue, ensure that you've met all the [prerequisites for using Microsoft Azure Backup](backup-azure-dpm-introduction.md#prerequisites-and-limitations) to protect workloads. The tasks for prerequisites also include: create a backup vault, download vault credentials, install Azure Backup Agent, and register the Azure Backup Server with the vault.
Additional prerequisites:
* Remember that MABS runs as **Local System**, and it needs *sysadmin* privileges on that account for the SQL server to back up SQL Server databases. On the SQL Server you want to back up, set *NT AUTHORITY\SYSTEM* to **sysadmin**.
-* For every 10 million items in the farm, there must be at least 2 GB of space on the volume where the MABS folder is located. This space is required for catalog generation. To enable you to use MABS to perform a specific recovery of items (site collections, sites, lists, document libraries, folders, individual documents, and list items), catalog generation creates a list of the URLs contained within each content database. You can view the list of URLs in the recoverable item pane in the Recovery task area of the MABS Administrator Console.
+* For every 10 million items in the farm, there must be at least 2 GB of space on the volume where the MABS folder is located. This space is required for catalog generation. To enable you to use MABS to perform a specific recovery of items (site collections, sites, lists, document libraries, folders, individual documents, and list items), catalog generation creates a list of the URLs contained within each content database. You can view the list of URLs in the recoverable item blade in the Recovery task area of the MABS Administrator Console.
* In the SharePoint farm, if you've SQL Server databases that are configured with SQL Server aliases, install the SQL Server client components on the front-end Web server that MABS will protect.
-### Limitations
-
-* You can't protect SharePoint databases as a SQL Server data source. You can recover individual databases from a farm backup.
-
-* Protecting application store items isn't supported with SharePoint 2013.
-
-* MABS doesn't support protecting remote FILESTREAM. The FILESTREAM should be part of the database.
- ## Configure the backup To back up the SharePoint farm, configure protection for SharePoint by using *ConfigureSharePoint.exe* and then create a protection group in MABS.
Follow these steps:
When you expand the computer running SharePoint, MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the computer that's running SharePoint and on any remote instance of SQL Server. Then, ensure that the MABS agent is installed both on the computer running SharePoint and on the remote instance of SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
-1. On **Select data protection method**, specify how you want to handle short and long\-term backup. Short\-term back up is always to disk first, with the option of backing up from the disk to the Azure cloud with Azure Backup \(for short or long\-term\).
+1. On **Select data protection method**, specify how you want to handle short and long\-term backup. Short\-term backup is always to disk first, with the option of backing up from the disk to the Azure cloud with Azure Backup \(for short or long\-term\).
1. On **Select short\-term goals**, specify how you want to back up to short\-term storage on disk. In **Retention range** you specify how long you want to keep the data on disk. In **Synchronization frequency**, you specify how often you want to run an incremental backup to disk.
Follow these steps:
1. On the **Review disk allocation** page, review the storage pool disk space allocated for the protection group.
- **Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume, based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage MABS suggests you add to the volume, to continue with backups smoothly in the future.
+ **Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume, based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** blade. Underprovisioned space is the amount of storage MABS suggests you add to the volume, to continue with backups smoothly in the future.
1. On **Choose replica creation method**, select how you want to handle the initial full data replication.
When a database is removed from a SharePoint farm, MABS will skip the backup of
### MABS Alert - Farm Configuration Changed
-This is a warning alert that is generated in Microsoft Azure Backup Server (MABS) when automatic protection of a SharePoint database fails. See the alert **Details** pane for more information about the cause of this alert.
+This is a warning alert that is generated in Microsoft Azure Backup Server (MABS) when automatic protection of a SharePoint database fails. See the alert **Details** blade for more information about the cause of this alert.
To resolve this alert, follow these steps: 1. Verify with the SharePoint administrator if the database has actually been removed from the farm. If the database has been removed from the farm, then it must be removed from active protection in MABS. 1. To remove the database from active protection: 1. In **MABS Administrator Console**, click **Protection** on the navigation bar.
- 1. In the **Display** pane, right-click the protection group for the SharePoint farm, and then click **Stop Protection of member**.
+ 1. In the **Display** blade, right-click the protection group for the SharePoint farm, and then click **Stop Protection of member**.
1. In the **Stop Protection** dialog box, click **Retain Protected Data**. 1. Select **Stop Protection**. You can add the SharePoint farm back for protection by using the **Modify Protection Group** wizard. During re-protection, select the SharePoint front-end server and click **Refresh** to update the SharePoint database cache, then select the SharePoint farm and proceed.
-## Next steps
+## Next step
- [Back up Exchange server](backup-azure-exchange-mabs.md) - [Back up SQL Server](backup-azure-sql-mabs.md)
backup Backup Sql Server Vm From Vm Pane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-vm-from-vm-pane.md
Title: Back up a SQL Server VM from the VM pane description: In this article, learn how to back up SQL Server databases on Azure virtual machines from the VM pane.- Previously updated : 08/11/2022+ Last updated : 07/22/2024 +
-# Back up a SQL Server from the VM pane
+# Back up a SQL Server from the VM blade
This article explains how to back up SQL Server running in Azure VMs with the [Azure Backup](backup-overview.md) service. You can back up SQL Server VMs using two methods: - Single SQL Server Azure VM: The instructions in this article describe how to back up a SQL Server VM directly from the VM view. - Multiple SQL Server Azure VMs: You can set up a Recovery Services vault and configure backup for multiple VMs. Follow the instructions in [this article](backup-sql-server-database-azure-vms.md) for that scenario.
+## Prerequisites
+
+Before you start the SQL Server backup operation, see the [backup prerequisites](backup-sql-server-database-azure-vms.md#prerequisites).
+ ## Before you start
-1. Verify your environment with the [support matrix](sql-support-matrix.md).
-2. Get an [overview](backup-azure-sql-database.md) of Azure Backup for SQL Server VM.
-3. Verify that the VM has [network connectivity](backup-sql-server-database-azure-vms.md#establish-network-connectivity).
+- Verify your environment with the [support matrix](sql-support-matrix.md).
+- Get an [overview](backup-azure-sql-database.md) of Azure Backup for SQL Server VM.
+- Verify that the VM has [network connectivity](backup-sql-server-database-azure-vms.md#establish-network-connectivity).
>[!Note] >See the [SQL backup support matrix](sql-support-matrix.md) to know more about the supported configurations and scenarios. ## Configure backup on the SQL Server
-You can enable backup on your SQL Server VM from the **Backup** pane in the VM. This method does two things:
+You can enable backup on your SQL Server VM from the **Backup** blade in the VM. This method does two things:
- Registers the SQL VM with the Azure Backup service to give it access. - Autoprotects all the SQL Server instances running inside the VM. This means that the backup policy is applied to all the existing databases, as well as the databases that will be added to these instances in the future. 1. Select the banner on the top of the page to open the SQL Server backup view.
- ![SQL Server backup view](./media/backup-sql-server-vm-from-vm-pane/sql-server-backup-view.png)
+ ![Screenshot shows the SQL Server backup view.](./media/backup-sql-server-vm-from-vm-pane/sql-server-backup-view.png)
>[!NOTE] >Don't see the banner? The banner is only displayed for those SQL Server VMs that are created using Azure Marketplace images. It's additionally displayed for the VMs that are protected with Azure VM Backup. For other images, you can configure backup as explained [here](backup-sql-server-database-azure-vms.md).
You can enable backup on your SQL Server VM from the **Backup** pane in the VM.
3. Choose a **Backup Policy**. You can choose from the default policy, or any other existing policies that you created in the vault. If you want to create a new policy, you can refer to [this article](backup-sql-server-database-azure-vms.md#create-a-backup-policy) for a step-by-step guide.
- ![Choose a backup policy](./media/backup-sql-server-vm-from-vm-pane/backup-policy.png)
+ ![Screenshot shows how to choose a backup policy.](./media/backup-sql-server-vm-from-vm-pane/backup-policy.png)
4. Select **Enable Backup**. The operation may take a few minutes to complete.
- ![Select enable backup](./media/backup-sql-server-vm-from-vm-pane/enable-backup.png)
+ ![Screenshot shows how to select enable backup.](./media/backup-sql-server-vm-from-vm-pane/enable-backup.png)
5. Once the operation is completed, you'll see the **vault name** in the banner.
- ![Vault name appears in banner](./media/backup-sql-server-vm-from-vm-pane/vault-name.png)
+ ![Screenshot shows the Vault name in banner.](./media/backup-sql-server-vm-from-vm-pane/vault-name.png)
6. Select the banner to go the vault view, where you can see all the registered VMs and their protection status.
- ![Vault view with registered VMs](./media/backup-sql-server-vm-from-vm-pane/vault-view.png)
+ ![Screenshot shows the Vault view with registered VMs.](./media/backup-sql-server-vm-from-vm-pane/vault-view.png)
7. For non-marketplace images, the registration may be successful, but **configure backup** may not be triggered until the Azure Backup extension is given permission on the SQL Server. In such cases, the **Backup Readiness** column reads **Not Ready**. You need to [assign the appropriate permissions](backup-azure-sql-database.md#set-vm-permissions) manually for non-marketplace images so configure backup can get triggered.
- ![Backup readiness isn't ready](./media/backup-sql-server-vm-from-vm-pane/backup-readiness-not-ready.png)
+ ![Screenshot shows that the backup readiness isn't ready.](./media/backup-sql-server-vm-from-vm-pane/backup-readiness-not-ready.png)
8. For further operations or monitoring that you need to do on the backed-up SQL Server VM, go to the corresponding Recovery Services vault. Go to **Backup Items** to see all the databases backed up in this vault, and trigger operations such as on-demand backup and restore. Similarly, go to **Backup Jobs** to [monitor](manage-monitor-sql-database-backup.md) jobs corresponding to operations such as configure protection, backup, and restore.
- ![See backed-up databases in Backup Items](./media/backup-sql-server-vm-from-vm-pane/backup-items.png)
+ ![Screenshot shows how to view backed-up databases in Backup Items.](./media/backup-sql-server-vm-from-vm-pane/backup-items.png)
>[!NOTE] >The backup isn't automatically configured on any of the new SQL Server instances that may be added later to the protected VM. To configure backup on the newly added instances, you need to go the vault that the VM is registered to and follow the steps listed [here](backup-sql-server-database-azure-vms.md).
-## Next steps
+## Next step
Learn how to:
backup Configure Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/configure-reports.md
Set up one or more Log Analytics workspaces to store your Backup reporting data.
To set up a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
-By default, the data in a Log Analytics workspace is retained for 30 days. To see data for a longer time horizon, change the retention period of the Log Analytics workspace. To change the retention period, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+By default, the data in a Log Analytics workspace is retained for 30 days. To see data for a longer time horizon, change the retention period of the Log Analytics workspace. To change the retention period, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md).
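For readers who manage the workspace as code, a hedged Bicep sketch of setting a longer retention period (the workspace name, location, SKU, and API version are assumptions) might look like this:

```bicep
resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'backup-reports-workspace'
  location: 'eastus'
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: 90 // Default is 30 days; raise it to keep report data longer.
  }
}
```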
### 2. Configure diagnostics settings for your vaults
business-continuity-center Tutorial View Protectable Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-view-protectable-resources.md
Title: Tutorial - View protectable resources description: In this tutorial, learn how to view your resources that are currently not protected by any solution using Azure Business Continuity center. Previously updated : 03/29/2024 Last updated : 07/22/2024 - ignite-2023
This tutorial shows you how to view your resources that are currently not protec
Before you start this tutorial: -- Review supported regions for ABC Center.
+- Review [supported regions for ABC Center](business-continuity-center-support-matrix.md#supported-regions).
- Ensure you have the required resource permissions to view them in the ABC center. ## View protectable resources
communication-services Phone Number Management For Belgium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-belgium.md
Use the below tables to find all the relevant information on number availability
| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | : |
-| Toll-Free | - | - | General Availability | General Availability\* |
+| Toll-Free | - | - | - | General Availability\* |
| Local | - | - | General Availability | General Availability\* | \* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md
Email message content is ephemerally stored for processing in the resource's ```
## Azure Monitor and Log Analytics
-Azure Communication Services feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data, [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/data-retention-archive.md).
+Azure Communication Services feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data, [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/data-retention-configure.md).
## Additional resources
communication-services Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-diagnostics.md
analyze new call data.
Since Call Diagnostics is an application layer on top of data for your Azure Communications Service Resource, you can query these call data and
-[build workbook reports on top of your data.](../../../azure-monitor/logs/data-platform-logs.md#what-can-you-do-with-azure-monitor-logs)
+[build workbook reports on top of your data.](../../../azure-monitor/logs/data-platform-logs.md#built-in-insights-and-custom-dashboards-workbooks-and-reports)
You can access Call Diagnostics from any Azure Communication Services Resource in your Azure portal. When you open your Azure Communications
container-apps Connect Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md
Previously updated : 11/02/2021 Last updated : 07/12/2024 # Connect applications in Azure Container Apps
-Azure Container Apps exposes each container app through a domain name if [ingress](ingress-overview.md) is enabled. Ingress endpoints can be exposed either publicly to the world and to other container apps in the same environment, or ingress can be limited to only other container apps in the same [environment](environment.md).
+Azure Container Apps exposes each container app through a domain name if [ingress](ingress-overview.md) is enabled. You can expose ingress endpoints either publicly to the world or to other container apps in the same environment. Alternatively, you can limit ingress to only other container apps in the same [environment](environment.md).
-You can call other container apps in the same environment from your application code using one of the following methods:
+Application code can call other container apps in the same environment using one of the following methods:
- default fully qualified domain name (FQDN) - a custom domain name
The following diagram shows how these values are used to compose a container app
## Dapr location
-Developing microservices often requires you to implement patterns common to distributed architecture. Dapr allows you to secure microservices with mutual TLS (client certificates), trigger retries when errors occur, and take advantage of distributed tracing when Azure Application Insights is enabled.
+Developing microservices often requires you to implement patterns common to distributed architecture. Dapr allows you to secure microservices with mutual Transport Layer Security (TLS) (client certificates), trigger retries when errors occur, and take advantage of distributed tracing when Azure Application Insights is enabled.
A microservice that uses Dapr is available through the following URL pattern:
container-apps Environment Custom Dns Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment-custom-dns-suffix.md
Previously updated : 10/13/2022 Last updated : 07/18/2024 # Custom environment DNS Suffix in Azure Container Apps
-By default, an Azure Container Apps environment provides a DNS suffix in the format `<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`. Each container app in the environment generates a domain name based on this DNS suffix. You can configure a custom DNS suffix for your environment.
+An Azure Container Apps environment provides a default DNS suffix in the format `<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`. Each container app in the environment generates a domain name based on this DNS suffix. You can configure a custom DNS suffix for your environment.
> [!NOTE] > > To configure a custom domain for individual container apps, see [Custom domain names and certificates in Azure Container Apps](custom-domains-certificates.md). >
-> If you configure a custom DNS suffix for your environment, traffic to FQDNs that use this suffix will resolve to the environment. FQDNs that use this suffix outside the environment will be unreachable from the environment.
+> If you configure a custom DNS suffix for your environment, traffic to FQDNs (Fully Qualified Domain Names) that use this suffix will resolve to the environment. FQDNs that use this suffix outside of the environment are unreachable.
## Add a custom DNS suffix and certificate
By default, an Azure Container Apps environment provides a DNS suffix in the for
1. In **DNS suffix**, enter the custom DNS suffix for the environment.
- For example, if you enter `example.com`, the container app domain names will be in the format `<APP_NAME>.example.com`.
+ For example, if you enter `example.com`, the container app domain names are in the format `<APP_NAME>.example.com`.
1. In a new browser window, go to your domain provider's website and add the DNS records shown in the *Domain validation* section to your domain.
By default, an Azure Container Apps environment provides a DNS suffix in the for
| A | `*.<DNS_SUFFIX>` | Environment inbound IP address | Wildcard record configured to the IP address of the environment. | | TXT | `asuid.<DNS_SUFFIX>` | Validation token | TXT record with the value of the validation token (not required for Container Apps environment with internal load balancer). |
-1. Back in the *Custom DNS suffix* window, in **Certificate file**, browse and select a certificate for the TLS binding.
+1. Back in the *Custom DNS suffix* window, in **Certificate file**, browse and select a certificate for the TLS binding.
> [!IMPORTANT] > You must use an existing wildcard certificate that's valid for the custom DNS suffix you provided.
By default, an Azure Container Apps environment provides a DNS suffix in the for
1. Select **Save**.
-Once the save operation is complete, the environment is updated with the custom DNS suffix and TLS certificate.
+Once the save operation is complete, the environment is updated with the custom DNS suffix and TLS certificate.
## Next steps
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
Last updated 4/23/2023
# Sign container images with Notation and Azure Key Vault using a self-signed certificate
-Signing container images is a process that ensures their authenticity and integrity. This is achieved by adding a digital signature to the container image, which can be validated during deployment. The signature helps to verify that the image is from a trusted publisher and has not been modified. [Notation](https://github.com/notaryproject/notation) is an open source supply chain tool developed by the [Notary Project](https://notaryproject.dev/), which supports signing and verifying container images and other artifacts. The Azure Key Vault (AKV) is used to store certificates with signing keys that can be used by Notation with the Notation AKV plugin (azure-kv) to sign and verify container images and other artifacts. The Azure Container Registry (ACR) allows you to attach signatures to container images and other artifacts as well as view those signatures.
+Signing container images is a process that ensures their authenticity and integrity. This is achieved by adding a digital signature to the container image, which can be validated during deployment. The signature helps to verify that the image is from a trusted publisher and has not been modified. [Notation](https://github.com/notaryproject/notation) is an open source supply chain security tool developed by the [Notary Project community](https://notaryproject.dev/) and backed by Microsoft, which supports signing and verifying container images and other artifacts. The Azure Key Vault (AKV) is used to store certificates with signing keys that can be used by Notation with the Notation AKV plugin (azure-kv) to sign and verify container images and other artifacts. The Azure Container Registry (ACR) allows you to attach signatures to container images and other artifacts as well as view those signatures.
In this tutorial:
In this tutorial:
cp ./notation /usr/local/bin ```
-2. Install the Notation Azure Key Vault plugin `azure-kv` v1.1.0 on a Linux amd64 environment.
+2. Install the Notation Azure Key Vault plugin `azure-kv` v1.2.0 on a Linux amd64 environment.
> [!NOTE] > The URL and SHA256 checksum for the Notation Azure Key Vault plugin can be found on the plugin's [release page](https://github.com/Azure/notation-azure-kv/releases). ```bash
- notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.1.0/notation-azure-kv_1.1.0_linux_amd64.tar.gz --sha256sum 2fc959bf850275246b044203609202329d015005574fabbf3e6393345e49b884
+ notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.2.0/notation-azure-kv_1.2.0_linux_amd64.tar.gz --sha256sum 06bb5198af31ce11b08c4557ae4c2cbfb09878dfa6b637b7407ebc2d57b87b34
```
-3. List the available plugins and confirm that the `azure-kv` plugin with version `1.1.0` is included in the list.
+3. List the available plugins and confirm that the `azure-kv` plugin with version `1.2.0` is included in the list.
```bash notation plugin ls
To verify the container image, add the root certificate that signs the leaf cert
## Next steps
-See [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli) and [Ratify on Azure](https://ratify.dev/docs/1.0/quickstarts/ratify-on-azure/) to get started into verifying and auditing signed images before deploying them on AKS.
+Notation also provides CI/CD solutions on Azure Pipeline and GitHub Actions Workflow:
+
+- [Sign and verify a container image with Notation in Azure Pipeline](/azure/security/container-secure-supply-chain/articles/notation-ado-task-sign)
+- [Sign and verify a container image with Notation in GitHub Actions Workflow](https://github.com/marketplace/actions/notation-actions)
+
+To validate signed image deployment in AKS or Kubernetes:
+
+- [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli)
+- [Use Ratify to validate and audit image deployment in any Kubernetes cluster](https://ratify.dev/)
[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
container-registry Container Registry Tutorial Sign Trusted Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-trusted-ca.md
Signing and verifying container images with a certificate issued by a trusted Ce
Here are some essential components that help you to sign and verify container images with a certificate issued by a trusted CA:
-* The [Notation](https://github.com/notaryproject/notation) is an open-source supply chain tool developed by [Notary Project](https://notaryproject.dev/), which supports signing and verifying container images and other artifacts.
+* [Notation](https://github.com/notaryproject/notation) is an open-source supply chain security tool developed by the [Notary Project community](https://notaryproject.dev/) and backed by Microsoft, which supports signing and verifying container images and other artifacts.
* The Azure Key Vault (AKV), a cloud-based service for managing cryptographic keys, secrets, and certificates will help you ensure to securely store and manage a certificate with a signing key. * The [Notation AKV plugin azure-kv](https://github.com/Azure/notation-azure-kv), the extension of Notation uses the keys stored in Azure Key Vault for signing and verifying the digital signatures of container images and artifacts. * The Azure Container Registry (ACR) allows you to attach these signatures to the signed image and helps you to store and manage these container images.
In this article:
cp ./notation /usr/local/bin ```
-2. Install the Notation Azure Key Vault plugin `azure-kv` v1.1.0 on a Linux amd64 environment.
+2. Install the Notation Azure Key Vault plugin `azure-kv` v1.2.0 on a Linux amd64 environment.
> [!NOTE] > The URL and SHA256 checksum for the Notation Azure Key Vault plugin can be found on the plugin's [release page](https://github.com/Azure/notation-azure-kv/releases). ```bash
- notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.1.0/notation-azure-kv_1.1.0_linux_amd64.tar.gz --sha256sum 2fc959bf850275246b044203609202329d015005574fabbf3e6393345e49b884
+ notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.2.0/notation-azure-kv_1.2.0_linux_amd64.tar.gz --sha256sum 06bb5198af31ce11b08c4557ae4c2cbfb09878dfa6b637b7407ebc2d57b87b34
```
-3. List the available plugins and confirm that the `azure-kv` plugin with version `1.1.0` is included in the list.
-
+3. List the available plugins and confirm that the `azure-kv` plugin with version `1.2.0` is included in the list.
+ ```bash notation plugin ls ```
To learn more about assigning policy to a principal, see [Assign Access Policy](
## Next steps
-See [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli) and [Ratify on Azure](https://ratify.dev/docs/1.0/quickstarts/ratify-on-azure/) to get started into verifying and auditing signed images before deploying them on AKS.
+Notation also provides CI/CD solutions on Azure Pipeline and GitHub Actions Workflow:
+
+- [Sign and verify a container image with Notation in Azure Pipeline](/azure/security/container-secure-supply-chain/articles/notation-ado-task-sign)
+- [Sign and verify a container image with Notation in GitHub Actions Workflow](https://github.com/marketplace/actions/notation-actions)
+
+To validate signed image deployment in AKS or Kubernetes:
+
+- [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli)
+- [Use Ratify to validate and audit image deployment in any Kubernetes cluster](https://ratify.dev/)
[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
Previously updated : 12/27/2023 Last updated : 7/19/2024
Item cache is used for point reads (key/value look ups based on the Item ID and
- New writes, updates, and deletes are automatically populated in the item cache of the node that the request is routed through
- Items from point read requests where the item isn't already in the cache (cache miss) of the node the request is routed through are added to the item cache
+- Read requests for multiple items, such as ReadMany, populate the query cache as a set instead of the item cache as individual items
- Requests that are part of a [transactional batch](./nosql/transactional-batch.md) or in [bulk mode](./nosql/how-to-migrate-from-bulk-executor-library.md#enable-bulk-support) don't populate the item cache ### Item cache invalidation and eviction
The query cache is used to cache queries. The query cache transforms a query int
- If the cache doesn't have a result for that query (cache miss) on the node it was routed through, the query is sent to the backend. After the query is run, the cache will store the results for that query - Queries with the same shape but different parameters or request options that affect the results (ex. max item count) are stored as their own key/value pair
+- Read requests for multiple items, such as ReadMany, populate the query cache. ReadMany results are stored as a set, and requests with different inputs will be stored as their own key/value pair
### Query cache eviction
cosmos-db How To Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-upgrade-cluster.md
+
+ Title: Upgrade a cluster
+
+description: Steps to upgrade Azure Cosmos DB for MongoDB vCore cluster from a lower version to latest version.
+++++++ Last updated : 07/22/2024++
+# Upgrade a cluster in Azure Cosmos DB for MongoDB vCore
++
+Azure Cosmos DB for MongoDB vCore provides customers with a convenient self-service option to upgrade to the latest MongoDB version. This feature ensures a seamless upgrade path with just a click, allowing businesses to continue their operations without interruption.
++
+## Prerequisites
+
+- An existing Azure Cosmos DB for MongoDB vCore cluster.
+ - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free).
+ - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md).
++
+## Upgrade a cluster
+
+Here are the detailed steps to upgrade a cluster to the latest version:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Go to the **Overview** blade of your Azure Cosmos DB for MongoDB vCore cluster and click the **Upgrade** button as illustrated below.
+
+ :::image type="content" source="media/how-to-scale-cluster/upgrade-overview-page.png" alt-text="Screenshot of the overview page.":::
+
+ > [!NOTE]
+ > The upgrade button will stay disabled if you're already using the latest version.
+
+3. A new window will appear on the right, allowing you to choose the MongoDB version you wish to upgrade to. Select the appropriate version and submit the upgrade request.
+
+ :::image type="content" source="media/how-to-scale-cluster/upgrade-side-window.png" alt-text="Screenshot of server upgrade page.":::
+
+## Next steps
+
+Next, learn about point-in-time restore (PITR) for Azure Cosmos DB for MongoDB vCore.
+
+> [!div class="nextstepaction"]
+> [Restore cluster](how-to-restore-cluster.md)
defender-for-cloud Quickstart Onboard Gitlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gitlab.md
# Quickstart: Connect your GitLab Environment to Microsoft Defender for Cloud
-In this quickstart, you connect your GitLab groups on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience to autodiscover your GitLab resources.
+In this quickstart, you connect your GitLab groups on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience to automatically discover your GitLab resources.
By connecting your GitLab groups to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your GitLab resources. These features include:
To connect your GitLab Group to Defender for Cloud by using a native connector:
1. Enter a name, subscription, resource group, and region. The subscription is the location where Microsoft Defender for Cloud creates and stores the GitLab connection.-
-1. Select **Next: select plans**. Configure the Defender CSPM plan status for your GitLab connector. Learn more about [Defender CSPM](concept-cloud-security-posture-management.md) and see [Support and prerequisites](devops-support.md) for premium DevOps security features.
-
- :::image type="content" source="media/quickstart-onboard-ado/select-plans.png" alt-text="Screenshot that shows plan selection for DevOps connectors." lightbox="media/quickstart-onboard-ado/select-plans.png":::
-
+
1. Select **Next: Configure access**. 1. Select **Authorize**.
To connect your GitLab Group to Defender for Cloud by using a native connector:
- Select **all existing groups** to autodiscover all subgroups and projects in groups you're currently an Owner in. - Select **all existing and future groups** to autodiscover all subgroups and projects in all current and future groups you're an Owner in.
-Since GitLab projects are onboarded at no additional cost, autodiscover is applied across the group to ensure Defender for Cloud can comprehensively assess the security posture and respond to security threats across your entire DevOps ecosystem. Groups can later be manually added and removed through **Microsoft Defender for Cloud** > **Environment settings**.
+Since GitLab projects are onboarded at no additional cost, autodiscovery is applied across the group to ensure Defender for Cloud can comprehensively assess the security posture and respond to security threats across your entire DevOps ecosystem. Groups can later be manually added and removed through **Microsoft Defender for Cloud** > **Environment settings**.
1. Select **Next: Review and generate**.
defender-for-cloud Recommendations Reference Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-ai.md
This recommendation replaces the old recommendation *Cognitive Services accounts
**Description**: By restricting network access, you can ensure that only allowed networks can access the service. This can be achieved by configuring network rules so that only applications from allowed networks can access the Azure AI service resource.
-This recommendation replaces the old recommendation *Cognitive Services accounts should restrict network access*. It was formerly in category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
+This recommendation replaces the old recommendation *Cognitive Services accounts should restrict network access*. It was formerly in category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
+ **Severity**: Medium
+### [(Enable if required) Azure AI Services resources should encrypt data at rest with a customer-managed key (CMK)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/18bf29b3-a844-e170-2826-4e95d0ba4dc9/showSecurityCenterCommandBar~/false)
+
+**Description**: Using customer-managed keys to encrypt data at rest provides more control over the key lifecycle, including rotation and management. This is particularly relevant for organizations with related compliance requirements.
+
+This is not assessed by default and should only be applied when required by compliance or restrictive policy requirements. If not enabled, the data will be encrypted using platform-managed keys. To implement this, update the 'Effect' parameter in the Security Policy for the applicable scope. (Related policy: [Azure AI Services resources should encrypt data at rest with a customer-managed key (CMK)](/azure/ai-services/openai/how-to/use-your-data-securely))
+
+This recommendation replaces the old recommendation *Cognitive services accounts should enable data encryption using customer keys*. It was formerly in category Data recommendations, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
+
+**Severity**: Low
+ ### Resource logs in Azure Machine Learning Workspaces should be enabled (Preview) **Description & related policy**: Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised.
defender-for-cloud Recommendations Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-data.md
Manage encryption at rest of your Azure Machine Learning workspace data with cus
**Severity**: Medium
-### [(Enable if required) Cognitive Services accounts should enable data encryption with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/18bf29b3-a844-e170-2826-4e95d0ba4dc9)
-
-**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements.
-To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md).
-Customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/cosmosdb-cmk>.
-(Related policy: [Cognitive Services accounts should enable data encryption with a customer-managed key?(CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f67121cc7-ff39-4ab8-b7e3-95b84dab487d))
-
-**Severity**: Low
- ### [(Enable if required) MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6b51b7f7-cbed-75bf-8a02-43384bf47562) **Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements.
defender-for-cloud Release Notes Recommendations Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-recommendations-alerts.md
This article summarizes what's new in security recommendations and alerts in Mic
> `https://aka.ms/mdc/rss-recommendations-alerts` - Review a complete list of multicloud security recommendations and alerts:
+ - [AI recommendations](/azure/defender-for-cloud/recommendations-reference-ai)
+
- [Compute recommendations](recommendations-reference-compute.md)
+
- [Container recommendations](recommendations-reference-container.md) - [Data recommendations](recommendations-reference-data.md) - [DevOps recommendations](recommendations-reference-devops.md)
New and updated recommendations and alerts are added to the table in date order.
| **Date** | **Type** | **State** | **Name** | | -- | | | |
+|July 22|Recommendation|Update|[(Enable if required) Azure AI Services resources should encrypt data at rest with a customer-managed key (CMK)](/azure/defender-for-cloud/recommendations-reference-ai)|
| June 28 | Recommendation | GA | [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](recommendations-reference-devops.md#preview-azure-devops-repositories-should-require-minimum-two-reviewer-approval-for-code-pushes) | | June 28 | Recommendation | GA | [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](recommendations-reference-devops.md#preview-azure-devops-repositories-should-not-allow-requestors-to-approve-their-own-pull-requests) |
-| June 28 | Recommendation | GA | [GitHub organizations should not make action secrets accessible to all repositories](recommendations-reference-devops.md#github-organizations-should-not-make-action-secrets-accessible-to-all repositories) |
+| June 28 | Recommendation | GA | [GitHub organizations should not make action secrets accessible to all repositories](recommendations-reference-devops.md#github-organizations-should-not-make-action-secrets-accessible-to-all-repositories) |
| June 27 | Alert | Deprecation | `Security incident detected suspicious source IP activity`<br><br/> Severity: Medium/High | | June 27 | Alert | Deprecation | `Security incident detected on multiple resources`<br><br/> Severity: Medium/High | | June 27 | Alert | Deprecation | `Security incident detected compromised machine`<br><br/> Severity: Medium/High |
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
This article summarizes what's new in Microsoft Defender for Cloud. It includes
| Date | Category | Update | | - | | |
+| July 22 | Preview | [Security assessments for GitHub no longer requires additional licensing](#preview-security-assessments-for-github-no-longer-requires-additional-licensing) |
+| July 18 | Upcoming update | [Updated timelines toward MMA deprecation in Defender for Servers Plan 2](#updated-timelines-toward-mma-deprecation-in-defender-for-servers-plan-2) |
| July 18 | Upcoming update | [Deprecation of MMA-related features as part of agent retirement](#deprecation-of-mma-related-features-as-part-of-agent-retirement) | | July 15 | Preview | [Binary Drift Public Preview in Defender for Containers](#binary-drift-public-preview-now-available-in-defender-for-containers) | | July 14 | GA | [Automated remediation scripts for AWS and GCP are now GA](#automated-remediation-scripts-for-aws-and-gcp-are-now-ga) |
This article summarizes what's new in Microsoft Defender for Cloud. It includes
| July 9 | Upcoming update | [Inventory experience improvement](#inventory-experience-improvement) | | July 8 | Upcoming update | [Container mapping tool to run by default in GitHub](#container-mapping-tool-to-run-by-default-in-github) |
+### Preview: Security assessments for GitHub no longer requires additional licensing
+
+July 22, 2024
+
+GitHub users in Defender for Cloud no longer need a GitHub Advanced Security license to view security findings. This applies to security assessments for code weaknesses, Infrastructure-as-Code (IaC) misconfigurations, and vulnerabilities in container images that are detected during the build phase.
+
+Customers with GitHub Advanced Security will continue to receive additional security assessments in Defender for Cloud for exposed credentials, vulnerabilities in open source dependencies, and CodeQL findings.
+
+To learn more about DevOps security in Defender for Cloud, see the [DevOps Security Overview](defender-for-devops-introduction.md). To learn how to onboard your GitHub environment to Defender for Cloud, follow the [GitHub onboarding guide](quickstart-onboard-github.md). To learn how to configure the Microsoft Security DevOps GitHub Action, see our [GitHub Action](github-action.md) documentation.
+
+### Updated timelines toward MMA deprecation in Defender for Servers Plan 2
+
+July 18, 2024
+
+**Estimated date for change**: August 2024
++
+With the [upcoming deprecation of Log Analytics agent in August](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), all security value for server protection in Defender for Cloud will rely on integration with Microsoft Defender for Endpoint (MDE) as a single agent and on agentless capabilities provided by the cloud platform and agentless machine scanning.
+
+The following capabilities have updated timelines and plans; as a result, support for them over MMA will be extended for Defender for Cloud customers to the end of November 2024:
+
+- **File Integrity Monitoring (FIM):** Public preview release for FIM new version over MDE is planned for __August 2024__. The GA version of FIM powered by Log Analytics agent will continue to be supported for existing customers until the end of __November 2024__.
+
+- **Security Baseline:** as an alternative to the version based on MMA, the current preview version based on Guest Configuration will be released to general availability in __September 2024.__ OS Security Baselines powered by Log Analytics agent will continue to be supported for existing customers until the end of **November 2024.**
+
+For more information, see [Prepare for retirement of the Log Analytics agent](prepare-deprecation-log-analytics-mma-agent.md).
+ ### Deprecation of MMA-related features as part of agent retirement July 18, 2024
defender-for-cloud Secrets Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secrets-scanning.md
Defender for Cloud provides secrets scanning for virtual machines, and for cloud
- **Cloud deployments**: Agentless secrets scanning across multicloud infrastructure-as-code deployment resources. - **Azure DevOps**: [Scanning to discover exposed secrets in Azure DevOps](defender-for-devops-introduction.md).
+## Prerequisites
+
+Required roles and permissions:
+
+ - Security Reader
+
+ - Security Admin
+
+ - Reader
+
+ - Contributor
+
+ - Owner
+
## Deploying secrets scanning Secrets scanning is provided as a feature in Defender for Cloud plans:
dns Dns Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-custom-domain.md
Previously updated : 12/15/2022 Last updated : 07/22/2024 # Use Azure DNS to provide custom domain settings for an Azure service
-Azure DNS provides naming resolution for any of your Azure resources that support custom domains or that have a fully qualified domain name (FQDN). For example, you have an Azure web app you want your users to access using `contoso.com` or `www.contoso.com` as the FQDN. This article walks you through configuring your Azure service with Azure DNS for using custom domains.
+Azure DNS provides name resolution for any of your Azure resources that support custom domains, or that have a fully qualified domain name (FQDN). For example, you might have an Azure web app you want your users to access using `contoso.com` or `www.contoso.com` as the FQDN. This article walks you through configuring Azure DNS to access your Azure service with custom domains.
+
+You can configure a vanity or custom domain for Azure Function Apps, Public IP addresses, App Service (Web Apps), Blob storage, and Azure CDN.
## Prerequisites To use Azure DNS for your custom domain, you must first delegate your domain to Azure DNS. See [Delegate a domain to Azure DNS](./dns-delegate-domain-azure-dns.md) for instructions on how to configure your name servers for delegation. Once your domain is delegated to your Azure DNS zone, you can configure the DNS records needed.
-You can configure a vanity or custom domain for Azure Function Apps, Public IP addresses, App Service (Web Apps), Blob storage, and Azure CDN.
- ## Azure Function App
-To configure a custom domain for Azure function apps, a CNAME record is created and configured on the function app itself.
+To configure a custom domain for Azure function apps, a [CNAME record](dns-zones-records.md#cname-records) is created and configured on the function app itself. A CNAME record maps a domain name to another domain or subdomain. In this case, you create a CNAME in your public domain and provision the CNAME alias to be the FQDN of your custom domain.
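If the zone is hosted in Azure DNS, the same CNAME record can also be created from the command line. A minimal sketch, assuming placeholder resource group, zone, and function app names:

```azurecli-interactive
# Hedged sketch: map www.contoso.com to the function app's default hostname.
# Resource group, zone, and app names are placeholders.
az network dns record-set cname set-record \
  --resource-group MyResourceGroup \
  --zone-name contoso.com \
  --record-set-name www \
  --cname contoso-function-app.azurewebsites.net
```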
1. Navigate to **Function App** and select your function app. Select **Custom domains** under *Settings*. Note the **current url** under *assigned custom domains*, this address is used as the alias for the DNS record created.
To configure a custom domain for Azure function apps, a CNAME record is created
## Public IP address
-To configure a custom domain for services that use a public IP address resource such as Application Gateway, Load Balancer, Cloud Service, Resource Manager VMs, and, Classic VMs, an A record is used.
+To configure a custom domain for services that use a public IP address resource such as Application Gateway, Load Balancer, Cloud Service, Resource Manager VMs, and Classic VMs, an A record is used. An A record (address record) maps a domain name to an IP address. In this case, you create a new A record in your public domain and configure it to have an IP address corresponding to the public IP address of your Azure service.
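A minimal command-line sketch of the same idea for a zone hosted in Azure DNS, using placeholder names and an example IP address (the portal steps that follow show where to find the actual public IP):

```azurecli-interactive
# Hedged sketch: point the zone apex at the service's public IP address.
az network dns record-set a add-record \
  --resource-group MyResourceGroup \
  --zone-name contoso.com \
  --record-set-name "@" \
  --ipv4-address 203.0.113.10
```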
1. Navigate to the Public IP resource and select **Configuration**. Note the IP address shown.
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Azure Machine Configuration, and more. Previously updated : 07/16/2024 Last updated : 07/22/2024
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Azure Machine Configuration, and more. Previously updated : 07/16/2024 Last updated : 07/22/2024
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Title: Benefits of migrating to Azure HDInsight 4.0.
description: Learn the benefits of migrating to Azure HDInsight 4.0. Previously updated : 10/16/2023 Last updated : 07/22/2024 # Significant version changes in HDInsight 4.0 and advantages
HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an overview of w
**Hive** - Advanced features - LLAP workload management
- - LLAP Support JDBC, Druid and Kafka connectors
+ - LLAP Support JDBC, Druid, and Kafka connectors
- Better SQL features – Constraints and default values - Surrogate Keys - Information schema.
Set synchronization of partitions to occur every 10 minutes expressed in seconds
> [!WARNING]
-> With the `management.task` running every 10 minutes, there will be pressure on the SQL server DTU.
->
+> With the `management.task` running every 10 minutes, there will be pressure on the SQL server DTU. This feature also adds cost to storage access because the partition management threads run at regular intervals even when the cluster is idle.
+ You can verify the output from Microsoft Azure portal. :::image type="content" source="./media/hdinsight-migrate-to-40/hive-verify-output.png" alt-text="Screenshot showing compute utilization graph."::: Hive drops the metadata and corresponding data in any partition created after the retention period. You express the retention time using a numeral and the following character or characters.
-Hive drops the metadata and corresponding data in any partition created after the retention period. You express the retention time using a numeral and the following character(s).
+Hive drops the metadata and corresponding data in any partition created after the retention period. You express the retention time using a numeral and the following characters.
``` ms (milliseconds)
More information, see [Hive - Materialized Views - Microsoft Tech Community](htt
Use the built-in `SURROGATE_KEY` user-defined function (UDF) to automatically generate numerical Ids for rows as you enter data into a table. The generated surrogate keys can replace wide, multiple composite keys.
-Hive supports the surrogate keys on ACID tables only. The table you want to join using surrogate keys can't have column types that need casting. These data types must be primitives, such as INT or `STRING`.
+Hive supports surrogate keys on ACID tables only. The table you want to join using surrogate keys can't have column types that need to be cast. These data types must be primitives, such as INT or `STRING`.
Joins using the generated keys are faster than joins using strings. Using generated keys doesn't force data into a single node by a row number. You can generate keys as abstractions of natural keys. Surrogate keys have an advantage over UUIDs, which are slower and probabilistic.
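A minimal HiveQL sketch of this pattern, using placeholder table and column names (not taken from the article), might look like the following:

```sql
-- Hedged sketch: SURROGATE_KEY() as a DEFAULT constraint on an ACID (transactional) table.
CREATE TABLE customers (
  id BIGINT DEFAULT SURROGATE_KEY(),
  name STRING,
  city STRING
)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

-- Rows inserted without an explicit id receive a generated surrogate key.
INSERT INTO customers (name, city) VALUES ('Alice', 'Seattle'), ('Bob', 'Redmond');
```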
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/about-keys-secrets-certificates.md
Where:
|-|-| | `vault-name` or `hsm-name` | The name for a key vault or a Managed HSM pool in the Microsoft Azure Key Vault service.<br /><br />Vault names and Managed HSM pool names are selected by the user and are globally unique.<br /><br />Vault name and Managed HSM pool name must be a 3-24 character string, containing only 0-9, a-z, A-Z, and not consecutive -.| | `object-type` | The type of the object, "keys", "secrets", or "certificates".|
-| `object-name` | An `object-name` is a user provided name for and must be unique within a key vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -.|
+| `object-name` | An `object-name` is a user-provided name that must be unique within a key vault. The name must be a 1-127 character string, containing only 0-9, a-z, A-Z, and -.|
| `object-version `| An `object-version` is a system-generated, 32 character string identifier that is optionally used to address a unique version of an object. | ## DNS suffixes for object identifiers
key-vault Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/key-management.md
az keyvault key restore --id https://ContosoMHSM.managedhsm.azure.net/deletedKey
Use `az keyvault key import` command to import a key (only RSA and EC) from a file. The certificate file must have private key and must use PEM encoding (as defined in RFCs [1421](https://tools.ietf.org/html/rfc1421), [1422](https://tools.ietf.org/html/rfc1422), [1423](https://tools.ietf.org/html/rfc1423), [1424](https://tools.ietf.org/html/rfc1424)). ```azurecli-interactive
-az keyvault key import --hsm-name ContosoHSM --name myrsakey --pem-file mycert.key --password 'mypassword'
+az keyvault key import --hsm-name ContosoHSM --name myrsakey --pem-file mycert.key --pem-password 'mypassword'
## OR # Note the key name (myaeskey) in the URI
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
Add secret to key vault with validity period for 60 days, storage account resour
# [Azure CLI](#tab/azure-cli) ```azurecli
-$tomorrowDate = (get-date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
+tomorrowDate=$(date -u -d "+1 day" +"%Y-%m-%dT%H:%M:%SZ")
az keyvault secret set --name storageKey --vault-name vaultrotation-kv --value <key1Value> --tags "CredentialId=key1" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate ``` # [Azure PowerShell](#tab/azurepowershell)
Add secret to key vault with validity period for 60 days, storage account resour
# [Azure CLI](#tab/azure-cli) ```azurecli
-$tomorrowDate = (Get-Date).AddDays(+1).ToString('yyyy-MM-ddTHH:mm:ssZ')
+tomorrowDate=$(date -u -d "+1 day" +"%Y-%m-%dT%H:%M:%SZ")
az keyvault secret set --name storageKey2 --vault-name vaultrotation-kv --value <key2Value> --tags "CredentialId=key2" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate ``` # [Azure PowerShell](#tab/azurepowershell)
kubernetes-fleet Concepts Resource Propagation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-resource-propagation.md
Multiple placement types are available for controlling the number of clusters to
You can use a `PickAll` placement policy to deploy a workload across all member clusters in the fleet (optionally matching a set of criteria).
-The following example shows how to deploy a `test-deployment` namespace and all of its objects across all clusters labeled with `environment: production`:
+The following example shows how to deploy a `prod-deployment` namespace and all of its objects across all clusters labeled with `environment: production`:
```yaml apiVersion: placement.kubernetes-fleet.io/v1beta1
spec:
version: v1 ```
-This simple policy takes the `test-deployment` namespace and all resources contained within it and deploys it to all member clusters in the fleet with the given `environment` label. If all clusters are desired, you can remove the `affinity` term entirely.
+This simple policy takes the `prod-deployment` namespace and all resources contained within it and deploys it to all member clusters in the fleet with the given `environment` label. If all clusters are desired, you can remove the `affinity` term entirely.
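Because the YAML excerpt above is abbreviated, here's a hedged sketch of what a complete `PickAll` placement of the `prod-deployment` namespace could look like; verify the exact field layout against the fleet placement API reference, and treat the placement name as a placeholder:

```yaml
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: crp-prod          # placeholder name
spec:
  resourceSelectors:
    - group: ""
      kind: Namespace
      version: v1
      name: prod-deployment
  policy:
    placementType: PickAll
    affinity:
      clusterAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  environment: production
```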
### `PickFixed` placement policy
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
| Each Series in Own Group (1:1) | All Series in Single Group (N:1) |
| -- | -- |
| Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, TCNForecaster |
-More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-many-models-in-pipeline/automl-forecasting-demand-many-models-in-pipeline.ipynb).
+More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecast_pipeline/aml-demand-forecast-mm-pipeline/aml-demand-forecast-mm-pipeline.ipynb).
## Next steps
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
az ml job create --file automl-mm-forecasting-pipeline.yml -w <Workspace> -g <Re
After the job finishes, the evaluation metrics can be downloaded locally using the same procedure as in the [single training run pipeline](#orchestrating-training-inference-and-evaluation-with-components-and-pipelines).
-Also see the [demand forecasting with many models notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-many-models-in-pipeline/automl-forecasting-demand-many-models-in-pipeline.ipynb) for a more detailed example.
+Also see the [demand forecasting with many models notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecast_pipeline/aml-demand-forecast-mm-pipeline/aml-demand-forecast-mm-pipeline.ipynb) for a more detailed example.
> [!NOTE] > The many models training and inference components conditionally partition your data according to the `partition_column_names` setting so that each partition is in its own file. This process can be very slow or fail when data is very large. In this case, we recommend partitioning your data manually before running many models training or inference.
az ml job create --file automl-hts-forecasting-pipeline.yml -w <Workspace> -g <R
After the job finishes, the evaluation metrics can be downloaded locally using the same procedure as in the [single training run pipeline](#orchestrating-training-inference-and-evaluation-with-components-and-pipelines).
-Also see the [demand forecasting with hierarchical time series notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-hierarchical-timeseries-in-pipeline/automl-forecasting-demand-hts.ipynb) for a more detailed example.
+Also see the [demand forecasting with hierarchical time series notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecast_pipeline/aml-demand-forecast-hts-pipeline/aml-demand-forecast-hts.ipynb) for a more detailed example.
> [!NOTE] > The HTS training and inference components conditionally partition your data according to the `hierarchy_column_names` setting so that each partition is in its own file. This process can be very slow or fail when data is very large. In this case, we recommend partitioning your data manually before running HTS training or inference.
Also see the [demand forecasting with hierarchical time series notebook](https:/
See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs) for detailed code examples of advanced forecasting configuration including:
-* [Demand forecasting pipeline examples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components)
+* [Demand forecasting pipeline examples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/1k_demand_forecast_pipeline)
* [Deep learning models](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb) * [Holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb) * [Manual configuration for lags and rolling window aggregation features](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb)
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/overview.md
The following table shows an index of tools in prompt flow.
| [Content Safety (Text)](./content-safety-text-tool.md) | Uses Azure Content Safety to detect harmful content. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use AzureOpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [OpenAI GPT-4V](./openai-gpt-4v-tool.md) | Use OpenAI GPT-4V to leverage vision ability. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Index Lookup](./index-lookup-tool.md)* | Search an Azure Machine Learning Vector Index for relevant results using one or more text queries. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Faiss Index Lookup](./faiss-index-lookup-tool.md)* | Searches a vector-based query from the Faiss index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector DB Lookup](./vector-db-lookup-tool.md)* | Searches a vector-based query from existing vector database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector Index Lookup](./vector-index-lookup-tool.md)* | Searches text or a vector-based query from Azure Machine Learning vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Index Lookup](./index-lookup-tool.md)*<sup>1</sup> | Search an Azure Machine Learning Vector Index for relevant results using one or more text queries. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
| [Azure AI Language tools](https://microsoft.github.io/promptflow/integrations/tools/azure-ai-language-tool.html)* | This collection of tools is a wrapper for various Azure AI Language APIs, which can help effectively understand and analyze documents and conversations. The capabilities currently supported include: Abstractive Summarization, Extractive Summarization, Conversation Summarization, Entity Recognition, Key Phrase Extraction, Language Detection, PII Entity Recognition, Conversational PII, Sentiment Analysis, Conversational Language Understanding, Translator. You can learn how to use them by the [Sample flows](https://github.com/microsoft/promptflow/tree/e4542f6ff5d223d9800a3687a7cfd62531a9607c/examples/flows/integrations/azure-ai-language). | Custom | [promptflow-azure-ai-language](https://pypi.org/project/promptflow-azure-ai-language/) |
+<sup>1</sup> The Index Lookup tool replaces the three deprecated legacy index tools: Vector Index Lookup, Vector DB Lookup, and Faiss Index Lookup. If you have a flow that contains one of those tools, follow the [migration steps](./index-lookup-tool.md#how-to-migrate-from-legacy-tools-to-the-index-lookup-tool) to upgrade your flow.
+ _*The asterisk marks indicate custom tools created by the community to extend prompt flow's capabilities for specific use cases. They aren't officially maintained or endorsed by the prompt flow team. When you encounter questions or issues for these tools, prioritize using the support contact if it's provided in the description._ To discover more custom tools developed by the open-source community, see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html).
machine-learning How To Deploy Advanced Entry Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md
See the following articles for more entry script examples for specific machine l
* [TensorFlow](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/tensorflow) * [Keras](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb) * [AutoML](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features)
-* [ONNX](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/)
## Related content
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-kubernetes-service.md
When deploying to AKS, you deploy to an AKS cluster that's *connected to your wo
> [!IMPORTANT] > We recommend that you debug locally before deploying to the web service. For more information, see [Troubleshooting with a local model deployment](how-to-troubleshoot-deployment-local.md).
->
-> You can also refer to [Deploy to local notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local) on GitHub.
[!INCLUDE [endpoints-option](../includes/machine-learning-endpoints-preview-note.md)]
programmable-connectivity Azure Programmable Connectivity Create Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/programmable-connectivity/azure-programmable-connectivity-create-gateway.md
Previously updated : 02/08/2024 Last updated : 07/22/2024
In this quickstart, you learn how to create an Azure Programmable Connectivity (APC) gateway and subscribe to API plans in the Azure portal.
+> [!NOTE]
+> Deleting and modifying existing APC gateways is not supported during the preview. Please open a support ticket in the Azure Portal if you need to delete an APC Gateway.
+>
+ ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
There are no downtime requirements for any of the migration options.
* When you're migrating an API Management instance that's deployed in an external or internal virtual network to availability zones, you must specify a new public IP address resource. In an internal virtual network, the public IP address is used only for management operations, not for API requests. [Learn more about IP addresses of API Management](../api-management/api-management-howto-ip-addresses.md).
-* Migrating to availability zones or changing the configuration of availability zones triggers a public [IP address change](../api-management/api-management-howto-ip-addresses.md#changes-to-the-ip-addresses).
+* Migrating to availability zones or changing the configuration of availability zones triggers a public and private [IP address change](../api-management/api-management-howto-ip-addresses.md#changes-to-the-ip-addresses).
* When you're enabling availability zones in a region, you configure API Management scale [units](../api-management/upgrade-and-scale.md) that you can distribute evenly across the zones. For example, if you configure two zones, you can configure two units, four units, or another multiple of two units.
reliability Migrate Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-managed-instance.md
This guide describes how to migrate SQL Managed Instances that use Business Crit
1. Confirm that your instance is located in a supported region. To see the list of supported regions, see [Premium and Business Critical service tier zone redundant availability](/azure/azure-sql/database/high-availability-sla?view=azuresql&preserve-view=true&tabs=azure-powershell#premium-and-business-critical-service-tier-zone-redundant-availability):
-
-1. Your instances must be running on standard-series (Gen5) hardware.
- ## Downtime requirements All scaling operations in Azure SQL are online operations and require minimal to no downtime. For more details on Azure SQL dynamic scaling, see [Dynamically scale database resources with minimal downtime](/azure/azure-sql/database/scale-resources?view=azuresql&preserve-view=true).
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
The current requirements/limitations for enabling availability zones are:
- East US 2 - France Central - Germany West Central
+ - Israel Central
+ - Italy North
- Japan East - Korea Central
+ - Mexico Central
- North Europe - Norway East - Poland Central
The current requirements/limitations for enabling availability zones are:
- South Africa North - South Central US - Southeast Asia
+ - Spain Central
- Sweden Central - Switzerland North - UAE North
route-server Troubleshoot Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/troubleshoot-route-server.md
When you deploy a Route Server to a virtual network, we need to update the contr
### Why does my on-premises network connected to Azure VPN gateway not receive the default route advertised by the Route Server?
-Although Azure VPN gateway can receive the default route from its BGP peers including the Route Server, it [doesn't advertise the default route](../vpn-gateway/vpn-gateway-vpn-faq.md#what-address-prefixes-will-azure-vpn-gateways-advertise-to-me) to other peers.
+Although Azure VPN gateway can receive the default route from its BGP peers including the Route Server, it [doesn't advertise the default route](../vpn-gateway/vpn-gateway-vpn-faq.md#what-address-prefixes-do-azure-vpn-gateways-advertise-to-me) to other peers.
### Why does my NVA not receive routes from the Route Server even though the BGP peering is up? The ASN that the Route Server uses is 65515. Make sure you configure a different ASN for your NVA so that an *eBGP* session can be established between your NVA and Route Server so route propagation can happen automatically. Make sure you enable "multi-hop" in your BGP configuration because your NVA and the Route Server are in different subnets in the virtual network.
-### The BGP peering between my NVA and Route Server is up. I can see routes exchanged correctly between them. Why arenΓÇÖt the NVA routes in the effective routing table of my VM?
+### The BGP peering between my NVA and Route Server is up. I can see routes exchanged correctly between them. Why aren't the NVA routes in the effective routing table of my VM?
* If your VM is in the same virtual network as your NVA and Route Server:
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
Next, upload the SAP software files to the storage account:
1. Download all packages that aren't labeled as `download: false` from the main BOM URL. Choose the packages based on your SAP version. You can use the URL mentioned in the BOM to download each package. Make sure to download the exact package versions listed in each BOM. 1. For S/4HANA 1909 SPS 03:-
- 1. [S41909SPS03_v0011ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
-
+
1. [HANA_2_00_059_v0004ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/archives/HANA_2_00_059_v0004ms/HANA_2_00_059_v0004ms.yaml) 1. For S/4HANA 2020 SPS 03: -
- 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
-
+
1. [HANA_2_00_064_v0001ms.yaml](https://github.com/Azure/SAP-automation-samples/blob/main/SAP/archives/HANA_2_00_064_v0001ms/HANA_2_00_064_v0001ms.yaml) 1. Repeat the previous step for the main and dependent BOM files.
sap About Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/about-azure-monitor-sap-solutions.md
Azure Monitor for SAP solutions uses the [Azure Monitor](../../azure-monitor/ove
- Create [custom visualizations](../../azure-monitor/visualize/workbooks-overview.md) by editing the default that Azure Monitor for SAP solutions provides. - Write [custom queries](../../azure-monitor/logs/log-analytics-tutorial.md). - Create [custom alerts](../../azure-monitor/alerts/alerts-log.md) by using Log Analytics workspaces.-- Take advantage of the [flexible retention period](../../azure-monitor/logs/data-retention-archive.md) in Azure Monitor Logs and Log Analytics.
+- Take advantage of the [flexible retention period](../../azure-monitor/logs/data-retention-configure.md) in Azure Monitor Logs and Log Analytics.
- Connect monitoring data with your ticketing system. ## What data is collected?
sap Universal Print Sap Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/universal-print-sap-frontend.md
# SAP front-end printing with Universal Print
-Printing from your SAP landscape is a requirement for many customers. Depending on your business, printing needs can come in different areas and SAP applications. Examples can be data list printing, mass- or label printing. Such production and batch print scenarios are often solved with specialized hardware, drivers and printing solutions. This article addresses options to use [Universal Print](/universal-print/fundamentals/universal-print-whatis) for SAP front-end printing of the SAP users. For backend printing, see [our blog post](https://community.sap.com/t5/technology-blogs-by-members/it-has-never-been-easier-to-print-from-sap-with-microsoft-universal-print/ba-p/13672206) and [GitHub repos](https://github.com/Azure/universal-print-for-sap-starter-pack).
+Printing from your SAP landscape is a requirement for many customers. Depending on your business, printing needs can come in different areas and SAP applications. Examples can be data list printing, mass- or label printing. Such production and batch print scenarios are often solved with specialized hardware, drivers and printing solutions. This article addresses options to use [Universal Print](/universal-print/discover-universal-print) for SAP front-end printing of the SAP users. For backend printing, see [our blog post](https://community.sap.com/t5/technology-blogs-by-members/it-has-never-been-easier-to-print-from-sap-with-microsoft-universal-print/ba-p/13672206) and [GitHub repos](https://github.com/Azure/universal-print-for-sap-starter-pack).
Universal Print is a cloud-based print solution that enables organizations to manage printers and printer drivers in a centralized manner. It removes the need for dedicated print servers and is available for use by company employees and applications. While Universal Print runs entirely on Microsoft Azure, for use with SAP systems there's no such requirement. Your SAP landscape can run on Azure, be located on-premises or operate in any other cloud environment. You can use SAP systems deployed by SAP RISE. Similarly, SAP cloud services, which are browser based, can be used with Universal Print in most front-end printing scenarios.
Universal Print is a cloud-based print solution that enables organizations to ma
- Add Universal Print printer to your Windows client - Able to print on Universal Print printer from OS
-See the [Universal Print documentation](/universal-print/fundamentals/universal-print-getting-started#step-4-add-a-universal-print-printer-to-a-windows-device.md) for details on these prerequisites. As a result, one or more Universal Print printers are visible in your deviceΓÇÖs printer list. For SAP front-end printing, it's not necessary to make it your default printer.
+See the [Universal Print documentation](/universal-print/set-up-universal-print#step-2-check-prerequisities) for details on these prerequisites. As a result, one or more Universal Print printers are visible in your device's printer list. For SAP front-end printing, it's not necessary to make it your default printer.
[![Example showing Universal Print printers in Windows 11 settings dialog.](./media/universtal-print-sap/frontend-os-printer.png)](./media/universtal-print-sap/frontend-os-printer.png#lightbox)
SAP defines front-end printing with several [constraints](https://help.sap.com/d
- [Deploy the SAP backend printing Starter Pack](https://github.com/Azure/universal-print-for-sap-starter-pack) - [Learn more from our SAP with Universal Print blog post](https://community.sap.com/t5/technology-blogs-by-members/it-has-never-been-easier-to-print-from-sap-with-microsoft-universal-print/ba-p/13672206)
+- [Learn about running classic SAP printing solutions highly-available on Azure](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/how-to-deploy-sap-print-server-highly-available-architecture-on/ba-p/3901761)
Check out the documentation:
search Cognitive Search Debug Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-debug-session.md
- ignite-2023 Previously updated : 09/29/2023 Last updated : 07/21/2024 # Debug Sessions in Azure AI Search
-Debug Sessions is a visual editor that works with an existing skillset in the Azure portal, exposing the structure and content of a single enriched document, as it's produced by an indexer and skillset for the duration of the session. Because you're working with a live document, the session is interactive - you can identify errors, modify and invoke skill execution, and validate the results in real time. If your changes resolve the problem, you can commit them to a published skillset to apply the fixes globally.
+Debug Sessions is a visual editor that works with an existing skillset in the Azure portal, exposing the structure and content of a single enriched document as it's produced by an indexer and skillset for the duration of the session. Because you're working with a live document, the session is interactive - you can identify errors, modify and invoke skill execution, and validate the results in real time. If your changes resolve the problem, you can commit them to a published skillset to apply the fixes globally.
-## How a debug session works
-
-When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document used to test the skillset. All session state is saved to a new blob container created by the Azure AI Search service in an Azure Storage account that you provide. The name of the generated container has a prefix of "ms-az-cognitive-search-debugsession". The prefix is required because it mitigates the chance of accidentally exporting session data to another container in your account.
-
-A cached copy of the enriched document and skillset is loaded into the visual editor so that you can inspect the content and metadata of the enriched document, with the ability to check each document node and edit any aspect of the skillset definition. Any changes made within the session are cached. Those changes will not affect the published skillset unless you commit them. Committing changes will overwrite the production skillset.
+This article explains how the editor is organized. Tabs and sections of the editor unpack different layers of the skillset so that you can examine skillset structure, flow, and the content it generates at run time.
-If the enrichment pipeline does not have any errors, a debug session can be used to incrementally enrich a document, test and validate each change before committing the changes.
+## How a debug session works
-## Managing the Debug Session state
+When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document used to test the skillset. All session state is saved to a new blob container created by the Azure AI Search service in an Azure Storage account that you provide. The name of the generated container has a prefix of `ms-az-cognitive-search-debugsession`. The prefix is required because it mitigates the chance of accidentally exporting session data to another container in your account.
-You can rerun a debug session using the **Start** button, or cancel an in-progress session using the **Cancel** button.
+A cached copy of the enriched document and skillset is loaded into the visual editor so that you can inspect the content and metadata of the enriched document, with the ability to check each document node and edit any aspect of the skillset definition. Any changes made within the session are cached. Those changes won't affect the published skillset unless you commit them. Committing changes will overwrite the production skillset.
+If the enrichment pipeline doesn't have any errors, a debug session can be used to incrementally enrich a document, test and validate each change before committing the changes.
## AI Enrichments tab > Skill Graph
The visual editor is organized into tabs and panes. This section introduces the
The **Skill Graph** provides a visual hierarchy of the skillset and its order of execution from top to bottom. Skills that are dependent upon the output of other skills are positioned lower in the graph. Skills at the same level in the hierarchy can execute in parallel. Color coded labels of skills in the graph indicate the types of skills that are being executed in the skillset (TEXT or VISION).
-Selecting a skill in the graph will display the details of that instance of the skill in the right pane, including its definition, errors or warnings, and execution history. The **Skill Graph** is where you will select which skill to debug or enhance. The details pane to the right is where you edit and explore.
+The **Skill Graph** is where you select which skill to debug or enhance. The details pane to the right is where you edit and explore.
:::image type="content" source="media/cognitive-search-debug/skills-graph.png" alt-text="Screenshot of Skills Graph tab." border="true"::: ### Skill details pane
-When you select an object in the **Skill Graph**, the adjacent pane provides interactive work areas in a tabbed layout. An illustration of the details pane can be found in the previous screenshot.
-
-Skill details include the following areas:
+Skill details are presented in a tabbed layout and include the following areas:
-+ **Skill Settings** shows a formatted version of the skill definition.
-+ **Skill JSON Editor** shows the raw JSON document of the definition.
-+ **Executions** shows the data corresponding to each time a skill was executed.
-+ **Errors and warnings** shows the messages generated upon session start or refresh.
++ **Skill Settings**: a formatted version of the skill definition.
++ **Skill JSON Editor**: the raw JSON document of the definition.
++ **Executions**: the data corresponding to each time a skill was executed.
++ **Errors and warnings**: the messages generated upon session start or refresh.

On Executions or Skill Settings, select the **`</>`** symbol to open the [**Expression Evaluator**](#expression-evaluator) used for viewing and editing the expressions of the skill's inputs and outputs.
Nested input controls in Skill Settings can be used to build complex shapes for
### Executions pane
-A skill can execute multiple times in a skillset for a single document. For example, the OCR skill will execute once for each image extracted from a single document. The Executions pane displays the skill's execution history providing a deeper look into each invocation of the skill.
+A skill can execute multiple times in a skillset for a single document. For example, the OCR skill executes once for each image extracted from a single document. The Executions pane displays the skill's execution history providing a deeper look into each invocation of the skill.
The execution history enables tracking a specific enrichment back to the skill that generated it. Clicking on a skill input navigates to the skill that generated that input, providing a stack-trace like feature. This allows identification of the root cause of a problem that might manifest in a downstream skill.
-When you debug an error with a custom skill, there is the option to generate a request for a skill invocation in the execution history.
+When you debug an error with a custom skill, there's the option to generate a request for a skill invocation in the execution history.
## AI Enrichments tab > Enriched Data Structure
The **Enriched Data Structure** pane shows the document's enrichments through th
## Expression Evaluator
-**Expression Evaluator** gives a quick peek into the value of any path. It allows for editing the path and testing the results before updating any of the inputs or context for a skill or projection.
+**Expression Evaluator** shows the executable elements of the skill. It allows for editing the path and testing the results before updating any of the inputs or context for a skill or projection.
-You can open the window from any node or element that shows the **`</>`** symbol, including parts of a dependency graph or nodes in an enrichment tree.
+You can open the evaluator from any node or element that shows the **`</>`** symbol, including parts of a dependency graph or nodes in an enrichment tree.
Expression Evaluator gives you full interactive access for testing skill context, inputs, and checking outputs.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
- ignite-2023 Previously updated : 01/10/2024 Last updated : 07/22/2024 # Debug an Azure AI Search skillset in Azure portal
-Start a portal-based debug session to identify and resolve errors, validate changes, and push changes to a published skillset in your Azure AI Search service.
+Start a portal-based debug session to identify and resolve errors, validate changes, and push changes to an existing skillset in your Azure AI Search service.
-A debug session is a cached indexer and skillset execution, scoped to a single document, that you can use to edit and test your changes interactively. When you're finished debugging, you can save your changes to the skillset.
+A debug session is a cached indexer and skillset execution, scoped to a single document, that you can use to edit and test skillset changes interactively. When you're finished debugging, you can save your changes to the skillset.
For background on how a debug session works, see [Debug sessions in Azure AI Search](cognitive-search-debug-session.md). To practice a debug workflow with a sample document, see [Tutorial: Debug sessions](cognitive-search-tutorial-debug-sessions.md).
For background on how a debug session works, see [Debug sessions in Azure AI Sea
## Limitations
-Debug sessions work with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources. The following list notes the exceptions:
+Debug sessions work with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources, with the following exceptions:
+ Azure Cosmos DB for MongoDB is currently not supported.
The portal doesn't support customer-managed key encryption (CMK), which means th
1. Sign in to the [Azure portal](https://portal.azure.com) and find your search service.
-1. In the left navigation page, select **Debug sessions**.
+1. In the left menu, select **Search management** > **Debug sessions**.
1. In the action bar at the top, select **Add debug session**.
The portal doesn't support customer-managed key encryption (CMK), which means th
1. In **Debug session name**, provide a name that will help you remember which skillset, indexer, and data source the debug session is about.
-1. In **Storage connection**, find a general-purpose storage account for caching the debug session. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create. A helpful container name might be "cognitive-search-debug-sessions".
+1. In **Storage connection**, find a general-purpose storage account for caching the debug session. You're prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create. An obvious container name might be "debug-sessions".
1. In **Managed identity authentication**, choose **None** if the connection to Azure Storage doesn't use a managed identity. Otherwise, choose the managed identity to which you've granted **Storage Blob Data Contributor** permissions.
The portal doesn't support customer-managed key encryption (CMK), which means th
:::image type="content" source="media/cognitive-search-debug/debug-session-new.png" alt-text="Screenshot of a debug session page." border="true":::
-The debug session begins by executing the indexer and skillset on the selected document. The document's content and metadata created will be visible and available in the session.
+The debug session begins by executing the indexer and skillset on the selected document. The document's content and metadata are visible and available in the session.
A debug session can be canceled while it's executing using the **Cancel** button. If you hit the **Cancel** button you should be able to analyze partial results.
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
- ignite-2023 Previously updated : 09/25/2023 Last updated : 07/22/2024 # Configure BM25 relevance scoring
search Resource Demo Sites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-demo-sites.md
- ignite-2023 Previously updated : 09/18/2023 Last updated : 07/22/2024 # Demos - Azure AI Search
-Demos are hosted apps that showcase search and AI enrichment functionality in Azure AI Search. Several of these demos include source code on GitHub so that you can see how they were made.
+Demos are hosted apps that showcase search and AI enrichment functionality in Azure AI Search. Demos sometimes include source code on GitHub so that you can see how they were made.
-Microsoft built and hosts the following demos.
+The Azure AI Search team currently builds and hosts the following demos.
| Demo name | Description | Source code | |--| |-| | [Chat with your data](https://entgptsearch.azurewebsites.net/) | An Azure web app that uses ChatGPT in Azure OpenAI with fictitious health plan data in a search index. | [https://github.com/Azure-Samples/azure-search-openai-demo/](https://github.com/Azure-Samples/azure-search-openai-demo/) |
-| [JFK files demo](https://jfk-demo-2019.azurewebsites.net/#/) | An ASP.NET web app built on a public data set, transformed with custom and predefined skills to extract searchable content from scanned document (JPEG) files. [Learn more...](https://www.microsoft.com/ai/ai-lab-jfk-files) | [https://github.com/Microsoft/AzureSearch_JFK_Files](https://github.com/Microsoft/AzureSearch_JFK_Files) |
| [Semantic ranking for retail](https://brave-meadow-0f59c9b1e.1.azurestaticapps.net/) | Web app for a fictitious online retailer, "Terra" | Not available |
search Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-javascript.md
Code samples from the Azure AI Search team demonstrate features and workflows. M
| Samples | Article | |||
-| [quickstart](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/quickstart/v11) | Source code for the JavaScript portion of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md). Covers the basic workflow for creating, loading, and querying a search index using sample data. |
+| [quickstart](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/quickstart) | Source code for the JavaScript portion of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md). Covers the basic workflow for creating, loading, and querying a search index using sample data. |
| [search-website](https://github.com/Azure-Samples/azure-search-javascript-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-javascript-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| > [!TIP]
search Search Dotnet Mgmt Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-mgmt-sdk-migration.md
- devx-track-dotnet - ignite-2023 Previously updated : 09/15/2023 Last updated : 07/22/2024 # Upgrade versions of the Azure Search .NET Management SDK
The following table lists the client libraries used to provision a search servic
| Namespace | Version| Status | Change log | |--|--|--||
-| [Azure.ResourceManager.Search](/dotnet/api/overview/azure/resourcemanager.search-readme?view=azure-dotnet&preserve-view=true) | [Package versions](https://www.nuget.org/packages/Azure.ResourceManager.Search/1.0.0) | **Current** | [Release notes](https://github.com/Azure/azure-sdk-for-net/blob/Azure.ResourceManager.Search_1.2.0-beta.1/sdk/search/Azure.ResourceManager.Search/CHANGELOG.md) |
+| [Azure.ResourceManager.Search](/dotnet/api/overview/azure/resourcemanager.search-readme?view=azure-dotnet&preserve-view=true) | [Package versions](https://www.nuget.org/packages/Azure.ResourceManager.Search) | **Current** | [Change log](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.ResourceManager.Search/CHANGELOG.md) |
| [Microsoft.Azure.Management.Search](/dotnet/api/overview/azure/search/management/management-cognitivesearch(deprecated)?view=azure-dotnet&preserve-view=true) | [Package versions](https://www.nuget.org/packages/Microsoft.Azure.Management.Search#versions-body-tab) | **Deprecated** | [Release notes](https://www.nuget.org/packages/Microsoft.Azure.Management.Search#release-body-tab) | ## Checklist for upgrade
-1. Review the [client library change list](https://github.com/Azure/azure-sdk-for-net/blob/Azure.ResourceManager.Search_1.0.0/sdk/search/Azure.ResourceManager.Search/CHANGELOG.md) for insight into the scope of changes.
+1. Review the [change log](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.ResourceManager.Search/CHANGELOG.md) for updates to the library.
1. In your application code, delete the reference to `Microsoft.Azure.Management.Search` and its dependencies.
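As a hedged sketch, the package swap for that step can be done from the project directory with the .NET CLI (assuming you also add the replacement `Azure.ResourceManager.Search` package at the same time):

```dotnetcli
dotnet remove package Microsoft.Azure.Management.Search
dotnet add package Azure.ResourceManager.Search
```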
search Search Faceted Navigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-faceted-navigation.md
In application code, the pattern is to use facet query parameters to return the
### Facet and filter combination
-The following code snippet from the `JobsSearch.cs` file in the NYCJobs demo adds the selected Business Title to the filter if you select a value from the Business Title facet.
+The following code snippet from the `JobsSearch.cs` file in the [NYCJobs demo](/samples/azure-samples/search-dotnet-asp-net-mvc-jobs/search-dotnet-asp-net-mvc-jobs/) adds the selected Business Title to the filter if you select a value from the Business Title facet.
```cs if (businessTitleFacet != "")
search Search Get Started Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rag.md
+
+ Title: Quickstart RAG
+
+description: In this quickstart, learn how to use grounding data from Azure AI Search with a chat model on Azure OpenAI.
++++ Last updated : 07/22/2024++
+# Quickstart: Generative search (RAG) with grounding data from Azure AI Search
+
+This quickstart shows you how to send queries to a Large Language Model (LLM) for a conversational search experience over your indexed content on Azure AI Search. You use the Azure portal to set up the resources, and then run Python code to call the APIs.
+
+## Prerequisites
+
+- An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+- [Azure AI Search](search-create-service-portal.md), Basic tier or higher so that you can [enable semantic ranking](semantic-how-to-enable-disable.md). Region must be the same one used for Azure OpenAI.
+
+- [Azure OpenAI](https://aka.ms/oai/access) resource with a deployment of `gpt-35-turbo`, `gpt-4`, or equivalent model, in the same region as Azure AI Search.
+
+- [Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) and the [Jupyter package](https://pypi.org/project/jupyter/). For more information, see [Python in Visual Studio Code](https://code.visualstudio.com/docs/languages/python).
+
+## Download file
+
+[Download a Jupyter notebook](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-RAG) from GitHub to send the requests in this quickstart. For more information, see [Downloading files from GitHub](https://docs.github.com/get-started/start-your-journey/downloading-files-from-github).
+
+You can also start a new file on your local system and create requests manually by using the instructions in this article.
+
+## Configure access
+
+Requests to the search endpoint must be authenticated and authorized. You can use API keys or roles for this task. Keys are easier to start with, but roles are more secure. This quickstart assumes roles.
+
+1. Configure Azure OpenAI to use a system-assigned managed identity:
+
+ 1. In the Azure portal, find your Azure OpenAI resource.
+
+ 1. On the left menu, select **Resource management** > **Identity**.
+
+ 1. On the System assigned tab, set status to **On**.
+
+1. Configure Azure AI Search for role-based access and assign roles:
+
+ 1. In the Azure portal, find your Azure AI Search service.
+
+ 1. On the left menu, select **Settings** > **Keys**, and then select either **Role-based access control** or **Both**.
+
+ 1. On the left menu, select **Access control (IAM)**.
+
+ 1. Add the following role assignments for the Azure OpenAI managed identity: **Search Index Data Reader**, **Search Service Contributor**.
+
+1. Assign yourself to the **Cognitive Services OpenAI User** role on Azure OpenAI. This is the only role you need for query workloads.
+
+It can take several minutes for permissions to take effect.
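If you prefer to script the role assignments described above, a hedged Azure CLI sketch follows; the object ID, resource names, and scopes are placeholders to replace with your own values:

```azurecli
# Hedged sketch: grant the search roles to the Azure OpenAI system-assigned identity.
searchId=$(az search service show --name <search-service-name> --resource-group <resource-group> --query id --output tsv)

az role assignment create --assignee <openai-identity-object-id> --role "Search Index Data Reader" --scope $searchId
az role assignment create --assignee <openai-identity-object-id> --role "Search Service Contributor" --scope $searchId

# Grant yourself the query role on the Azure OpenAI resource.
az role assignment create --assignee <your-user-principal-name> --role "Cognitive Services OpenAI User" --scope <azure-openai-resource-id>
```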
+
+## Create an index
+
+We recommend the hotels-sample-index, which can be created in minutes and runs on any search service tier. This index is created using built-in sample data.
+
+1. In the Azure portal, find your search service.
+
+1. On the **Overview** home page, select [**Import data**](search-get-started-portal.md) to start the wizard.
+
+1. On the **Connect to your data** page, select **Samples** from the dropdown list.
+
+1. Choose the **hotels-sample**.
+
+1. Select **Next** through the remaining pages, accepting the default values.
+
+1. Once the index is created, select **Search management** > **Indexes** from the left menu to open the index.
+
+1. Select **Edit JSON**.
+
+1. Search for "semantic" to find the section in the index for a semantic configuration. Replace the "semantic" line with the following semantic configuration. This example specifies a `"defaultConfiguration"`, which is important to the running of this quickstart.
+
+ ```json
+ "semantic": {
+ "defaultConfiguration": "semantic-config",
+ "configurations": [
+ {
+ "name": "semantic-config",
+ "prioritizedFields": {
+ "titleField": {
+ "fieldName": "HotelName"
+ },
+ "prioritizedContentFields": [
+ {
+ "fieldName": "Description"
+ }
+ ],
+ "prioritizedKeywordsFields": [
+ {
+ "fieldName": "Category"
+ },
+ {
+ "fieldName": "Tags"
+ }
+ ]
+ }
+ }
+ ]
+ },
+ ```
+
+1. **Save** your changes.
+
+1. Run the following query to test your index: `hotels near the ocean with beach access and good views`.
+
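+If you prefer to script this step, you can apply the same semantic configuration programmatically. The following is a minimal sketch, not part of the portal steps above: it assumes the `azure-search-documents` package that's installed later in this article and an identity with permission to update the index definition (for example, **Search Service Contributor**).
+
+```python
+# A sketch: apply the same semantic configuration with the azure-search-documents SDK.
+# Assumes hotels-sample-index already exists and DefaultAzureCredential can authenticate.
+from azure.identity import DefaultAzureCredential
+from azure.search.documents.indexes import SearchIndexClient
+from azure.search.documents.indexes.models import (
+    SemanticConfiguration,
+    SemanticField,
+    SemanticPrioritizedFields,
+    SemanticSearch,
+)
+
+endpoint = "PUT YOUR SEARCH SERVICE ENDPOINT HERE"
+index_client = SearchIndexClient(endpoint=endpoint, credential=DefaultAzureCredential())
+
+index = index_client.get_index("hotels-sample-index")
+index.semantic_search = SemanticSearch(
+    default_configuration_name="semantic-config",
+    configurations=[
+        SemanticConfiguration(
+            name="semantic-config",
+            prioritized_fields=SemanticPrioritizedFields(
+                title_field=SemanticField(field_name="HotelName"),
+                content_fields=[SemanticField(field_name="Description")],
+                keywords_fields=[
+                    SemanticField(field_name="Category"),
+                    SemanticField(field_name="Tags"),
+                ],
+            ),
+        )
+    ],
+)
+index_client.create_or_update_index(index)
+```
+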
+## Get service endpoints
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+
+1. On the **Overview** home page, copy the URL. An example endpoint might look like `https://example.search.windows.net`.
+
+1. [Find your Azure OpenAI service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.CognitiveServices%2Faccounts).
+
+1. On the **Overview** home page, select the link to view the endpoints. Copy the URL. An example endpoint might look like `https://example.openai.azure.com/`.
+
+## Set up the query and chat thread
+
+This section uses Visual Studio Code and Python to call the chat APIs on Azure OpenAI.
+
+1. Install the following Python packages.
+
+ ```python
+ ! pip install azure-search-documents==11.6.0b4 --quiet
+ ! pip install azure-identity==1.16.0 --quiet
+ ! pip install openai --quiet
+ ```
+
+1. Set the following variables, substituting placeholders with the endpoints you collected in the previous step.
+
+ ```python
+ AZURE_SEARCH_SERVICE: str = "PUT YOUR SEARCH SERVICE ENDPOINT HERE"
+ AZURE_OPENAI_ACCOUNT: str = "PUT YOUR AZURE OPENAI ENDPOINT HERE"
+ AZURE_DEPLOYMENT_MODEL: str = "gpt-35-turbo"
+ ```
+
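+    If you prefer not to hardcode endpoints in the notebook, you can read them from environment variables instead. This optional sketch assumes variables named `AZURE_SEARCH_SERVICE` and `AZURE_OPENAI_ACCOUNT` were set before Jupyter started; the names are only suggestions.
+
+    ```python
+    # Optional: override the placeholders with environment variables, if they're set.
+    import os
+
+    AZURE_SEARCH_SERVICE = os.environ.get("AZURE_SEARCH_SERVICE", AZURE_SEARCH_SERVICE)
+    AZURE_OPENAI_ACCOUNT = os.environ.get("AZURE_OPENAI_ACCOUNT", AZURE_OPENAI_ACCOUNT)
+    ```
+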
+1. Specify query parameters. The query is a keyword search that uses semantic ranking. The search engine can return up to 50 matches (`k`), but only the top five sources (`sources_to_include`) are passed to the model. If you can't enable semantic ranking on your search service, set `use_semantic_reranker` to `False`.
+
+ ```python
+ # Set query parameters for grounding the conversation on your search index
+ k=50
+ search_type="text"
+ use_semantic_reranker=True
+ sources_to_include=5
+ ```
+
+1. Set up the clients, a search function, a prompt, and a chat class. The search function retrieves selected fields from the search index.
+
+ ```python
+ # Set up the query for generating responses
+ from azure.core.credentials_async import AsyncTokenCredential
+ from azure.identity.aio import get_bearer_token_provider
+ from azure.search.documents.aio import SearchClient
+ from azure.search.documents.models import VectorizableTextQuery, HybridSearch
+ from openai import AsyncAzureOpenAI
+ from enum import Enum
+ from typing import List, Optional
+
+ def create_openai_client(credential: AsyncTokenCredential) -> AsyncAzureOpenAI:
+ token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
+ return AsyncAzureOpenAI(
+ api_version="2024-04-01-preview",
+ azure_endpoint=AZURE_OPENAI_ACCOUNT,
+ azure_ad_token_provider=token_provider
+ )
+
+ def create_search_client(credential: AsyncTokenCredential) -> SearchClient:
+ return SearchClient(
+ endpoint=AZURE_SEARCH_SERVICE,
+ index_name="hotels-sample-index",
+ credential=credential
+ )
+
+ # This quickstart is only using text at the moment
+ class SearchType(Enum):
+ TEXT = "text"
+ VECTOR = "vector"
+ HYBRID = "hybrid"
+
+ # This function retrieves the selected fields from the search index
+ async def get_sources(search_client: SearchClient, query: str, search_type: SearchType, use_semantic_reranker: bool = True, sources_to_include: int = 5, k: int = 50) -> List[str]:
+        # This quickstart implements only the text search path
+        assert search_type == SearchType.TEXT, "Only text search is supported in this quickstart"
+ response = await search_client.search(
+ search_text=query,
+ query_type="semantic" if use_semantic_reranker else "simple",
+ top=sources_to_include,
+ select="Description,HotelName,Tags"
+ )
+
+ return [ document async for document in response ]
+
+ # This prompt provides instructions to the model
+ GROUNDED_PROMPT="""
+ You are a friendly assistant that recommends hotels based on activities and amenities.
+ Answer the query using only the sources provided below in a friendly and concise bulleted manner.
+ Answer ONLY with the facts listed in the list of sources below.
+ If there isn't enough information below, say you don't know.
+ Do not generate answers that don't use the sources below.
+ Query: {query}
+ Sources:\n{sources}
+ """
+
+ # This class instantiates the chat
+ class ChatThread:
+ def __init__(self):
+ self.messages = []
+ self.search_results = []
+
+ def append_message(self, role: str, message: str):
+ self.messages.append({
+ "role": role,
+ "content": message
+ })
+
+ async def append_grounded_message(self, search_client: SearchClient, query: str, search_type: SearchType, use_semantic_reranker: bool = True, sources_to_include: int = 5, k: int = 50):
+ sources = await get_sources(search_client, query, search_type, use_semantic_reranker, sources_to_include, k)
+ sources_formatted = "\n".join([f'{document["HotelName"]}:{document["Description"]}:{document["Tags"]}' for document in sources])
+ self.append_message(role="user", message=GROUNDED_PROMPT.format(query=query, sources=sources_formatted))
+ self.search_results.append(
+ {
+ "message_index": len(self.messages) - 1,
+ "query": query,
+ "sources": sources
+ }
+ )
+
+ async def get_openai_response(self, openai_client: AsyncAzureOpenAI, model: str):
+ response = await openai_client.chat.completions.create(
+ messages=self.messages,
+ model=model
+ )
+ self.append_message(role="assistant", message=response.choices[0].message.content)
+
+ def get_last_message(self) -> Optional[object]:
+ return self.messages[-1] if len(self.messages) > 0 else None
+
+ def get_last_message_sources(self) -> Optional[List[object]]:
+ return self.search_results[-1]["sources"] if len(self.search_results) > 0 else None
+ ```
+
+1. Invoke the chat and call the search function, passing in a query string to search for.
+
+ ```python
+ import azure.identity.aio
+
+ chat_thread = ChatThread()
+ chat_deployment = AZURE_DEPLOYMENT_MODEL
+
+ async with azure.identity.aio.DefaultAzureCredential() as credential, create_search_client(credential) as search_client, create_openai_client(credential) as openai_client:
+ await chat_thread.append_grounded_message(
+ search_client=search_client,
+ query="Can you recommend a few hotels near the ocean with beach access and good views",
+ search_type=SearchType(search_type),
+ use_semantic_reranker=use_semantic_reranker,
+ sources_to_include=sources_to_include,
+ k=k)
+ await chat_thread.get_openai_response(openai_client=openai_client, model=chat_deployment)
+
+ print(chat_thread.get_last_message()["content"])
+ ```
+
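+To keep the conversation going, a follow-up turn can reuse the same `chat_thread`, so the earlier exchange stays in the message history. This is a minimal sketch; the follow-up question is only an example.
+
+```python
+# A follow-up turn that reuses the same chat thread and then inspects the grounding sources.
+async with azure.identity.aio.DefaultAzureCredential() as credential, create_search_client(credential) as search_client, create_openai_client(credential) as openai_client:
+    await chat_thread.append_grounded_message(
+        search_client=search_client,
+        query="Which of those hotels have pools or good views?",
+        search_type=SearchType(search_type),
+        use_semantic_reranker=use_semantic_reranker,
+        sources_to_include=sources_to_include,
+        k=k)
+    await chat_thread.get_openai_response(openai_client=openai_client, model=chat_deployment)
+
+print(chat_thread.get_last_message()["content"])
+
+# The hotels retrieved as sources for the latest grounded message.
+for document in chat_thread.get_last_message_sources():
+    print(document["HotelName"])
+```
+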
+## Clean up
+
+When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
+
+You can find and manage resources in the portal by using the **All resources** or **Resource groups** link in the leftmost pane.
+
+## Next steps
+
+As a next step, we recommend that you review the demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
search Semantic How To Enable Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-enable-disable.md
Follow these steps to enable [semantic ranker](semantic-search-overview.md) at t
1. Navigate to your search service. On the **Overview** page, make sure the service is a billable tier, Basic or higher.
-1. On the left-nav pane, select **Semantic ranking**.
+1. On the left-nav pane, select **Settings** > **Semantic ranking**.
1. Select either the **Free plan** or the **Standard plan**. You can switch between the free plan and the standard plan at any time.
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Previously updated : 09/13/2023 Last updated : 07/22/2024 - devx-track-js - devx-track-azurecli
search Tutorial Javascript Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-deploy-static-web-app.md
Previously updated : 04/25/2024 Last updated : 07/22/2024 - devx-track-js - ignite-2023
search Tutorial Javascript Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-overview.md
Previously updated : 09/13/2023 Last updated : 07/22/2024 - devx-track-js - ignite-2023
search Tutorial Javascript Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-search-query-integration.md
Previously updated : 09/13/2023 Last updated : 07/22/2024 - devx-track-js - ignite-2023
security Ransomware Features Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-features-resources.md
Title: Azure features & resources that help you protect, detect, and respond
+ Title: Azure features & resources that help you protect, detect, and respond to ransomware attacks
description: Azure features & resources that help you protect, detect, and respond
Last updated 01/10/2022
-# Azure features & resources that help you protect, detect, and respond
+# Azure features & resources that help you protect, detect, and respond to ransomware attacks
Microsoft has invested in Azure native security capabilities that organizations can leverage to defeat ransomware attack techniques found in both high-volume, everyday attacks, and sophisticated targeted attacks.
This alert is an example of a detected Petya ransomware alert:
One important way that organizations can help protect against losses in a ransomware attack is to have a backup of business-critical information in case other defenses fail. Since ransomware attackers have invested heavily into neutralizing backup applications and operating system features like volume shadow copy, it is critical to have backups that are inaccessible to a malicious attacker. With a flexible business continuity and disaster recovery solution, industry-leading data protection and security tools, Azure cloud offers secure services to protect your data: -- **Azure Backup**: Azure Backup service provides simple, secure, and cost-effective solution to back up your Azure VM. Currently, Azure Backup supports backing up of all the disks (OS and Data disks) in a VM using backup solution for Azure Virtual machine.
+- **Azure Backup**: Azure Backup service provides simple, secure, and cost-effective solution to back up your Azure VM. Currently, Azure Backup supports backing up of all the disks (OS and Data disks) in a VM using backup solution for Azure virtual machine.
- **Azure Disaster Recovery**: With disaster recovery from on-prem to the cloud, or from one cloud to another, you can avoid downtime and keep your applications up and running. - **Built-in Security and Management in Azure**: To be successful in the Cloud era, enterprises must have visibility/metrics and controls on every component to pinpoint issues efficiently, optimize and scale effectively, while having the assurance the security, compliance and policies are in place to ensure the velocity.
sentinel Logic Apps Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/logic-apps-playbooks.md
Within the Microsoft Sentinel connector, use triggers, actions, and dynamic fiel
|Component |Description | |||
-|**Trigger** | A trigger is the connector component that starts a workflow, in this case, a playbook. A Microsoft Sentinel trigger defines the schema that the playbook expects to receive when triggered. <br><br>The Microsoft Sentinel connector supports the following types of triggers: <br><br>- [Alert trigger](/connectors/azuresentinel/#triggers): The playbook receives an alert as input.<br> - [Entity trigger (Preview)](/connectors/azuresentinel/#triggers): The playbook receives an entity as input.<br> - [Incident trigger](/connectors/azuresentinel/#triggers): The playbook receives an incident as input, along with all the included alerts and entities. |
+|**Trigger** | A trigger is the connector component that starts a workflow, in this case, a playbook. A Microsoft Sentinel trigger defines the schema that the playbook expects to receive when triggered. <br><br>The Microsoft Sentinel connector supports the following types of triggers: <br><br>- [Alert trigger](/connectors/azuresentinel/#triggers): The playbook receives an alert as input.<br> - [Entity trigger](/connectors/azuresentinel/#triggers): The playbook receives an entity as input.<br> - [Incident trigger](/connectors/azuresentinel/#triggers): The playbook receives an incident as input, along with all the included alerts and entities. |
|**Actions** | Actions are all the steps that happen after the trigger. Actions can be arranged sequentially, in parallel, or in a matrix of complex conditions. | |**Dynamic fields** | Dynamic fields are temporary fields that can be used in the actions that follow your trigger. Dynamic fields are determined by the output schema of triggers and actions, and are populated by their actual output. |
sentinel Basic Logs Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/basic-logs-use-cases.md
The primary log sources used for detection often contain the metadata and contex
Event log data in Basic Logs can't be used as the primary log source for security incidents and alerts. But Basic Log event data is useful to correlate and draw conclusions when you investigate an incident or perform threat hunting.
-This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables. Before configuring tables as Basic Logs, [compare log data plans](../azure-monitor/logs/basic-logs-configure.md).
+This topic highlights log sources to consider configuring for Basic Logs when they're stored in Log Analytics tables. Before configuring tables as Basic Logs, [compare log data plans](../azure-monitor/logs/logs-table-plans.md).
## Storage access logs for cloud providers
A new and growing source of log data is Internet of Things (IoT) connected devic
## Next steps -- [Set a table's log data plan in Azure Monitor Logs](../azure-monitor/logs/basic-logs-configure.md)
+- [Set a table's log data plan in Azure Monitor Logs](../azure-monitor/logs/logs-table-plans.md)
- [Start an investigation by searching for events in large datasets (preview)](investigate-large-datasets.md)
sentinel Billing Reduce Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-reduce-costs.md
When hunting or investigating threats in Microsoft Sentinel, you might need to a
## Turn on basic logs data ingestion for data that's high-volume low security value (preview)
-Unlike analytics logs, [basic logs](../azure-monitor/logs/basic-logs-configure.md) are typically verbose. They contain a mix of high volume and low security value data that isn't frequently used or accessed on demand for ad-hoc querying, investigations, and search. Enable basic log data ingestion at a significantly reduced cost for eligible data tables. For more information, see [Microsoft Sentinel Pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
+Unlike analytics logs, [basic logs](../azure-monitor/logs/logs-table-plans.md) are typically verbose. They contain a mix of high volume and low security value data that isn't frequently used or accessed on demand for ad-hoc querying, investigations, and search. Enable basic log data ingestion at a significantly reduced cost for eligible data tables. For more information, see [Microsoft Sentinel Pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
## Optimize Log Analytics costs with dedicated clusters
Microsoft Sentinel data retention is free for the first 90 days. To adjust the d
Microsoft Sentinel security data might lose some of its value after a few months. Security operations center (SOC) users might not need to access older data as frequently as newer data, but still might need to access the data for sporadic investigations or audit purposes.
-To help you reduce Microsoft Sentinel data retention costs, Azure Monitor now offers archived logs. Archived logs store log data for long periods of time, up to seven years, at a reduced cost with limitations on its usage. Archived logs are in public preview. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+To help you reduce Microsoft Sentinel data retention costs, Azure Monitor now offers archived logs. Archived logs store log data for long periods of time, up to seven years, at a reduced cost with limitations on its usage. Archived logs are in public preview. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md).
Alternatively, you can use Azure Data Explorer for long-term data retention at lower cost. Azure Data Explorer provides the right balance of cost and usability for aged data that no longer needs Microsoft Sentinel security intelligence.
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
Basic logs have a reduced price and are charged at a flat rate per GB. They have
- Eight-day retention - No support for scheduled alerts
-Basic logs are best suited for use in playbook automation, ad-hoc querying, investigations, and search. For more information, see [Configure Basic Logs in Azure Monitor](../azure-monitor/logs/basic-logs-configure.md).
+Basic logs are best suited for use in playbook automation, ad-hoc querying, investigations, and search. For more information, see [Configure Basic Logs in Azure Monitor](../azure-monitor/logs/logs-table-plans.md).
### Simplified pricing tiers
Any other services you use might have associated costs.
After you enable Microsoft Sentinel on a Log Analytics workspace, consider these configuration options: - Retain all data ingested into the workspace at no charge for the first 90 days. Retention beyond 90 days is charged per the standard [Log Analytics retention prices](https://azure.microsoft.com/pricing/details/monitor/).-- Specify different retention settings for individual data types. Learn about [retention by data type](../azure-monitor/logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level). -- Enable long-term retention for your data and have access to historical logs by enabling archived logs. Data archive is a low-cost retention layer for archival storage. It's charged based on the volume of data stored and scanned. Learn how to [configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md). Archived logs are in public preview.
+- Specify different retention settings for individual data types. Learn about [retention by data type](../azure-monitor/logs/data-retention-configure.md#configure-table-level-retention).
+- Enable long-term retention for your data and have access to historical logs by enabling archived logs. Data archive is a low-cost retention layer for archival storage. It's charged based on the volume of data stored and scanned. Learn how to [configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md). Archived logs are in public preview.
The 90 day retention doesn't apply to basic logs. If you want to extend data retention for basic logs beyond eight days, store that data in archived logs for up to seven years.
sentinel Configure Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-retention-archive.md
In the previous deployment step, you enabled the User and Entity Behavior Analyt
Retention policies define when to remove or archive data in a Log Analytics workspace. Archiving lets you keep older, less used data in your workspace at a reduced cost. To set up data retention, use one or both of these methods, depending on your use case: -- [Configure data retention and archive for one or more tables](../azure-monitor/logs/data-retention-archive.md) (one table at a time)
+- [Configure data retention and archive for one or more tables](../azure-monitor/logs/data-retention-configure.md) (one table at a time)
- [Configure data retention and archive for multiple tables](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/Archive-Log-Tool) at once ## Next steps
sentinel Configure Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-retention.md
No resources were created but you might want to restore the data retention setti
## Next steps > [!div class="nextstepaction"]
-> [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md?tabs=portal-1%2cportal-2)
+> [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md?tabs=portal-1%2cportal-2)
sentinel Connect Azure Functions Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-functions-template.md
This article describes how to configure Microsoft Sentinel for using Azure Funct
> [!NOTE] > - Once ingested in to Microsoft Sentinel, data is stored in the geographic location of the workspace in which you're running Microsoft Sentinel. >
-> For long-term retention, you may also want to store data in archive log types such as *Basic logs*. For more information, see [Data retention and archive in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+> For long-term retention, you may also want to store data in archive log types such as *Basic logs*. For more information, see [Data retention and archive in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md).
> > - Using Azure Functions to ingest data into Microsoft Sentinel may result in additional data ingestion costs. For more information, see the [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/) page.
sentinel Connect Google Cloud Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-google-cloud-platform.md
For more information about workload identity federation in Google Cloud Platform
1. **Grant access** to the principal that represents the workload identity pool and provider that you created in the previous step. - Use the following format for the principal name: ```http
- principal://iam.googleapis.com/projects/{PROJECT_NUMBER}/locations/global/workloadIdentityPools/{WORKLOAD_IDENTITY_POOL_ID}/subject/{WORKLOAD_IDENTITY_PROVIDER_ID}
+ principalSet://iam.googleapis.com/projects/{PROJECT_NUMBER}/locations/global/workloadIdentityPools/{WORKLOAD_IDENTITY_POOL_ID}/*
```-
+
- Assign the **Workload Identity User** role and save the configuration. For more information about granting access in Google Cloud Platform, see [Manage access to projects, folders, and organizations](https://cloud.google.com/iam/docs/granting-changing-revoking-access) in the Google Cloud documentation.
Follow the instructions in the Google Cloud documentation to [**configure Pub/Su
## Next steps In this article, you learned how to ingest GCP data into Microsoft Sentinel using the GCP Pub/Sub connectors. To learn more about Microsoft Sentinel, see the following articles:
- - Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md). - [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Enable Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-monitoring.md
This article instructs you how to turn on these features.
To implement the health and audit feature using API (Bicep/ARM/REST), review the [Diagnostic Settings operations](/rest/api/monitor/diagnostic-settings).
-To configure the retention time for your audit and health events, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+To configure the retention time for your audit and health events, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md).
> [!IMPORTANT] >
sentinel Investigate Large Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-large-datasets.md
The following image shows example search criteria for a search job.
Use search to find events in any of the following log types: - [Analytics logs](../azure-monitor/logs/data-platform-logs.md)-- [Basic logs](../azure-monitor/logs/basic-logs-configure.md)
+- [Basic logs](../azure-monitor/logs/logs-table-plans.md)
-You can also search analytics or basic log data stored in [archived logs](../azure-monitor/logs/data-retention-archive.md).
+You can also search analytics or basic log data stored in [archived logs](../azure-monitor/logs/data-retention-configure.md).
### Limitations of a search job
sentinel Migration Export Ingest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-export-ingest.md
To ingest your historical data into Microsoft Sentinel Basic Logs (option 2 in t
1. [Create an App registration to authenticate against the API](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-azure-ad-application). 1. [Create a custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-new-table-in-log-analytics-workspace) to store the data, and provide a data sample. In this step, you can also define a transformation before the data is ingested. 1. [Collect information from the data collection rule](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#collect-information-from-the-dcr) and assign permissions to the rule.
-1. [Change the table from Analytics to Basic Logs](../azure-monitor/logs/basic-logs-configure.md).
+1. [Change the table from Analytics to Basic Logs](../azure-monitor/logs/logs-table-plans.md).
1. Run the [Custom Log Ingestion script](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/CustomLogsIngestion-DCE-DCR). The script asks for the following details: - Path to the log files to ingest - Microsoft Entra tenant ID
sentinel Migration Ingestion Target Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-target-platform.md
This article compares target platforms in terms of performance, cost, usability
> [!NOTE] > The considerations in this table only apply to historical log migration, and don't apply in other scenarios, such as long-term retention.
-| |[Basic Logs/Archive](../azure-monitor/logs/basic-logs-configure.md) |[Azure Data Explorer (ADX)](/azure/data-explorer/data-explorer-overview) |[Azure Blob Storage](../storage/blobs/storage-blobs-overview.md) |[ADX + Azure Blob Storage](../azure-monitor/logs/azure-data-explorer-query-storage.md) |
+| |[Basic Logs/Archive](../azure-monitor/logs/logs-table-plans.md) |[Azure Data Explorer (ADX)](/azure/data-explorer/data-explorer-overview) |[Azure Blob Storage](../storage/blobs/storage-blobs-overview.md) |[ADX + Azure Blob Storage](../azure-monitor/logs/azure-data-explorer-query-storage.md) |
|||||| |**Capabilities**: |• Apply most of the existing Azure Monitor Logs experiences at a lower cost.<br>• Basic Logs are retained for eight days, and are then automatically transferred to the archive (according to the original retention period).<br>• Use [search jobs](../azure-monitor/logs/search-jobs.md) to search across petabytes of data and find specific events.<br>• For deep investigations on a specific time range, [restore data from the archive](../azure-monitor/logs/restore.md). The data is then available in the hot cache for further analytics. |• Both ADX and Microsoft Sentinel use the Kusto Query Language (KQL), allowing you to query, aggregate, or correlate data in both platforms. For example, you can run a KQL query from Microsoft Sentinel to [join data stored in ADX with data stored in Log Analytics](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).<br>• With ADX, you have substantial control over the cluster size and configuration. For example, you can create a larger cluster to achieve higher ingestion throughput, or create a smaller cluster to control your costs. |• Blob storage is optimized for storing massive amounts of unstructured data.<br>• Offers competitive costs.<br>• Suitable for a scenario where your organization doesn't prioritize accessibility or performance, such as when the organization must align with compliance or audit requirements. |• Data is stored in a blob storage, which is low in costs.<br>• You use ADX to query the data in KQL, allowing you to easily access the data. [Learn how to query Azure Monitor data with ADX](../azure-monitor/logs/azure-data-explorer-query-storage.md) | |**Usability**: |**Great**<br><br>The archive and search options are simple to use and accessible from the Microsoft Sentinel portal. However, the data isn't immediately available for queries. You need to perform a search to retrieve the data, which might take some time, depending on the amount of data being scanned and returned. |**Good**<br><br>Fairly easy to use in the context of Microsoft Sentinel. For example, you can use an Azure workbook to visualize data spread across both Microsoft Sentinel and ADX. You can also query ADX data from the Microsoft Sentinel portal using the [ADX proxy](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md). |**Poor**<br><br>With historical data migrations, you might have to deal with millions of files, and exploring the data becomes a challenge. |**Fair**<br><br>While using the `externaldata` operator is very challenging with large numbers of blobs to reference, using external ADX tables eliminates this issue. The external table definition understands the blob storage folder structure, and allows you to transparently query the data contained in many different blobs and folders. |
sentinel Migration Ingestion Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-tool.md
This article describes a set of different tools used to transfer your historical
## Azure Monitor Basic Logs/Archive
-Before you ingest data to Azure Monitor Basic Logs or Archive, for lower ingestion prices, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/basic-logs-configure.md). Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
+Before you ingest data to Azure Monitor Basic Logs or Archive, for lower ingestion prices, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/logs-table-plans.md). Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
### Azure Monitor custom log ingestion tool
sentinel Migration Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-track.md
The **ArchiveRetention** value is calculated by subtracting the **TotalRetention
If you prefer to make changes in the UI, select **Update Retention in UI** to open the relevant page.
-Learn about [data lifecycle management](../azure-monitor/logs/data-retention-archive.md).
+Learn about [data lifecycle management](../azure-monitor/logs/data-retention-configure.md).
## Enable migration tips and instructions
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/quickstart-onboard.md
To onboard to Microsoft Sentinel by using the API, see the latest supported vers
- **Log Analytics workspace**. Learn how to [create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md).
- You may have a default of [30 days retention](../azure-monitor/logs/cost-logs.md#legacy-pricing-tiers) in the Log Analytics workspace used for Microsoft Sentinel. To make sure that you can use all Microsoft Sentinel functionality and features, raise the retention to 90 days. [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+ You may have a default of [30 days retention](../azure-monitor/logs/cost-logs.md#legacy-pricing-tiers) in the Log Analytics workspace used for Microsoft Sentinel. To make sure that you can use all Microsoft Sentinel functionality and features, raise the retention to 90 days. [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md).
- **Permissions**:
sentinel Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/search-jobs.md
To learn more, see the following articles.
- [Hunt with bookmarks](bookmarks.md) - [Restore archived logs](restore.md)-- [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-archive.md)
+- [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-configure.md)
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
Want more in-depth information? View the ["Improving the breadth and coverage of
If you prefer another long-term retention solution, see [Export from Microsoft Sentinel / Log Analytics workspace to Azure Storage and Event Hubs](/cli/azure/monitor/log-analytics/workspace/data-export) or [Move logs to long-term storage by using Azure Logic Apps](../azure-monitor/logs/logs-export-logic-app.md). The advantage of using Logic Apps is that it can export historical data.
-Finally, you can set fine-grained retention periods by using [table-level retention settings](https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-log-analytics-data-retention-by-type-in-real-life/ba-p/1416287). For more information, see [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-archive.md).
+Finally, you can set fine-grained retention periods by using [table-level retention settings](https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-log-analytics-data-retention-by-type-in-real-life/ba-p/1416287). For more information, see [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-configure.md).
#### Log security
service-bus-messaging Service Bus Messages Payloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messages-payloads.md
When you use the legacy SBMP protocol, those objects are then serialized with th
[!INCLUDE [service-bus-amqp-support-retirement](../../includes/service-bus-amqp-support-retirement.md)]
-While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem, and HTTP clients will have trouble decoding such payloads.
+While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem, and HTTP clients will have trouble decoding such payloads.
-The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control.
+The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control.
-If the payload of a message can't be deserialized, then it's recommended to [dead-letter the message](./service-bus-dead-letter-queues.md?source=recommendations#application-level-dead-lettering).
+When handling object deserialization from the message payload, developers should take into consideration that messages may arrive from multiple sources using different serialization methods. This can also happen when evolving a single application, where old versions may continue to run alongside newer versions. In these cases, it is recommended to have additional deserialization methods to try if the first attempt at deserialization fails. One library that supports this is [NServiceBus](https://docs.particular.net/nservicebus/serialization/#specifying-additional-deserializers). If all deserialization methods fail, then it's recommended to [dead-letter the message](./service-bus-dead-letter-queues.md?source=recommendations#application-level-dead-lettering).
## Next steps
To learn more about Service Bus messaging, see the following topics:
* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md) * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
+* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
service-connector Tutorial Java Jboss Connect Managed Identity Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-jboss-connect-managed-identity-mysql-database.md
curl https://${WEBAPP_URL}/checklist/1
Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](../app-service/configure-language-java.md?pivots=platform-linux)
+> [Java in App Service Linux dev guide](../app-service/configure-language-java-security.md?pivots=platform-linux)
site-recovery Monitor Site Recovery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-site-recovery-reference.md
To understand the fields of each Site Recovery table in Log Analytics, review th
> [!TIP] > Expand this table for better readability.
-| Category | Category Display Name | Log Table | [Supports basic log plan](../azure-monitor/logs/basic-logs-configure.md#compare-the-basic-and-analytics-log-data-plans) | [Supports ingestion-time transformation](../azure-monitor/essentials/data-collection-transformations.md) | Example queries | Costs to export |
+| Category | Category Display Name | Log Table | [Supports basic log plan](../azure-monitor/logs/data-platform-logs.md#table-plans) | [Supports ingestion-time transformation](../azure-monitor/essentials/data-collection-transformations.md) | Example queries | Costs to export |
| | | | | | | | | *ASRReplicatedItems* | Azure Site Recovery Replicated Item Details | [ASRReplicatedItems](/azure/azure-monitor/reference/tables/asrreplicateditems) <br> This table contains details of Azure Site Recovery replicated items, such as associated vault, policy, replication health, failover readiness. etc. Data is pushed once a day to this table for all replicated items, to provide the latest information for each item. | No | No | [Queries](/azure/azure-monitor/reference/queries/asrreplicateditems) | Yes | | *AzureSiteRecoveryJobs* | Azure Site Recovery Jobs | [ASRJobs](/azure/azure-monitor/reference/tables/asrjobs) <br> This table contains records of Azure Site Recovery jobs such as failover, test failover, reprotection etc., with key details for monitoring and diagnostics, such as the replicated item information, duration, status, description, and so on. Whenever an Azure Site Recovery job is completed (that is, succeeded or failed), a corresponding record for the job is sent to this table. You can view history of Azure Site Recovery jobs by querying this table over a larger time range, provided your workspace has the required retention configured. | No | No | [Queries](/azure/azure-monitor/reference/queries/asrjobs) | No |
site-recovery Report Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/report-site-recovery.md
To start using Azure Site Recovery reports, follow these steps:
Set up one or more Log Analytics workspaces to store your Backup reporting data. The location and subscription of this Log Analytics workspace, can be different from where your vaults are located or subscribed.
-To set up a Log Analytics workspace, [follow these steps](../azure-monitor/logs/quick-create-workspace.md). The data in a Log Analytics workspace is kept for 30 days by default. If you want to see data for a longer time span, change the retention period of the Log Analytics workspace. To change the retention period, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+To set up a Log Analytics workspace, [follow these steps](../azure-monitor/logs/quick-create-workspace.md). The data in a Log Analytics workspace is kept for 30 days by default. If you want to see data for a longer time span, change the retention period of the Log Analytics workspace. To change the retention period, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-configure.md).
### Configure diagnostics settings for your vaults
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Previously updated : 06/24/2024 Last updated : 07/22/2024
To transfer files to or from Azure Blob Storage via SFTP clients, see the follow
| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) aren't allowed from other protocols (NFS, Blob REST, Data Lake Storage Gen2 REST) on blobs that are created by using SFTP. Full overwrites are allowed.| | Rename Operations | Rename operations where the target file name already exists is a protocol violation. Attempting such an operation returns an error. See [Removing and Renaming Files](https://datatracker.ietf.org/doc/html/draft-ietf-secsh-filexfer-02#section-6.5) for more information.| | Cross Container Operations | Traversing between containers or performing operations on multiple containers from the same connection are unsupported.
+| Undelete | There is no way to restore a soft-deleted blob with SFTP. The `Undelete` REST API must be used.|
## Authentication and authorization
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- Only SSH version 2 is supported.
+- Avoid blob or directory names that end with a dot (.), a forward slash (/), a backslash (\), or a sequence or combination of the two. No path segments should end with a dot (.). For more information, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
+ ## Blob Storage features When you enable SFTP support, some Blob Storage features will be fully supported, but some features might be supported only at the preview level or not yet supported at all.
storage Elastic San Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-metrics.md
The following metrics are currently available for your Elastic SAN resource. You
|**Ingress**|The amount of ingress data. This number includes ingress to the resource from external clients as well as ingress within Azure. | |**Egress**|The amount of egress data. This number includes egress from the resource to external clients as well as egress within Azure. |
-All metrics are shown at the elastic SAN level.
+By default, all metrics are shown at the SAN level. To view these metrics at either the volume group or volume level, select a filter on your selected metric to view your data on a specific volume group or volume.
## Diagnostic logging
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview.md
This article highlights Microsoft partner companies that deliver a network attac
| Partner | Description | Website/product link | | - | -- | -- |
-| ![Nasuni.](./media/nasuni-logo.png) |**Nasuni**<br>Nasuni is a file storage platform that replaces enterprise NAS and file servers including the associated infrastructure for BCDR and disk tiering. Virtual edge appliances keep files quickly accessible and synchronized with the cloud. The management console lets you manage multiple storage sites from one location including the ability to provision, monitor, control, and report on your file infrastructure. Continuous versioning to the cloud brings file restore times down to minutes.<br><br>Nasuni cloud file storage built on Azure eliminates traditional NAS and file servers across any number of locations and replaces it with a cloud solution. Nasuni cloud file storage provides infinite file storage, backups, disaster recovery, and multi-site file sharing. Nasuni is a software-as-a-service used for data-center-to-the-cloud initiatives, multi-location file synching, sharing and collaboration, and as a cloud storage companion for VDI environments.|[Partner page](https://www.nasuni.com/partner/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nasunicorporation.nasuni)|
-| ![Panzura.](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. By delivering one authoritative data source for all users, Panzura allows enterprises to use Azure as a globally available data center, with all the functionality and speed of a single-site NAS, including automatic file locking, immediate global data consistency, and local file operation performance. |[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)|
+| ![Nasuni.](./media/nasuni-logo.png) |**Nasuni**<br>Nasuni is a file storage platform that replaces enterprise NAS and file servers including the associated infrastructure for Business Continuity and Disaster Recovery and disk tiering. Virtual edge appliances keep files quickly accessible and synchronized with the cloud. The management console lets you manage multiple storage sites from one location including the ability to provision, monitor, control, and report on your file infrastructure. Continuous versioning to the cloud brings file restore times down to minutes.<br><br>Nasuni cloud file storage built on Azure eliminates traditional NAS and file servers across any number of locations and replaces it with a cloud solution. Nasuni cloud file storage provides infinite file storage, backups, disaster recovery, and multi-site file sharing. Nasuni is a software-as-a-service used for data-center-to-the-cloud initiatives, multi-location file synching, sharing and collaboration, and as a cloud storage companion for Virtual Desktop environments.|[Partner page](https://www.nasuni.com/partner/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nasunicorporation.nasuni)|
| ![Pure Storage.](./media/pure-logo.png) |**Pure Storage**<br>Pure delivers a modern data experience that empowers organizations to run their operations as a true, automated, storage as-a-service model seamlessly across multiple clouds.|[Partner page](https://www.purestorage.com/company/technology-partners/microsoft.html)<br>[Solution Video](https://azure.microsoft.com/resources/videos/pure-storage-overview)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.pure_storage_cloud_block_store_deployment?tab=Overview)|
-| ![Qumulo.](./media/qumulo-logo.png)|**Qumulo**<br>Qumulo is a fast, scalable, and simple to use file system that makes it easy to store, manage, and run applications that use file data at scale on Microsoft Azure. Qumulo on Azure offers multiple petabytes (PB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB), and Linux (NFS) are both natively supported. Patented software architecture delivers a low per-terabyte (TB) cost Media & Entertainment, Genomics, Technology, Natural Resources, and Finance companies all run their most demanding workloads on Qumulo in the cloud. With a Net Promoter Score of 89, customers use Qumulo for its scale, performance and ease of use capabilities like real-time visual insights into how storage is used and award winning Slack based support. Sign up for a free POC today through [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp) or [Qumulo.com](https://qumulo.com/). | [Partner page](https://qumulo.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp)<br>[Datasheet](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWUtF0)|
-| ![Scality.](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premises, hybrid, and multicloud environments. ScalityΓÇÖs integration with Azure Blob Storage enable enterprises to manage and secure their data between on-premises environments and Azure, and meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)|
+| ![Qumulo.](./media/qumulo-logo.png)|**Qumulo**<br>Qumulo is a fast, scalable, and simple to use file system that makes it easy to store, manage, and run applications that use file data at scale on Microsoft Azure. Qumulo on Azure offers multiple petabytes (PB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported. Patented software architecture delivers a low per-terabyte (TB) cost Media & Entertainment, Genomics, Technology, Natural Resources, and Finance companies all run their most demanding workloads on Qumulo in the cloud. With a Net Promoter Score of 89, customers use Qumulo for its scale, performance, and ease of use capabilities like real-time visual insights into how storage is used and award winning Slack based support. Sign up for a free Proof of Concept today through [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp) or [Qumulo.com](https://qumulo.com/). | [Partner page](https://qumulo.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp)<br>[Datasheet](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWUtF0)|
+| ![Weka company logo](./media/weka-logo.jpg) |**Weka**<br>The WEKA Data Platform provides a fast, scalable file storage system for AI and HPC workloads in Microsoft Azure. WEKA provides a transformational software-defined approach to data that accelerates storage performance, reduces cloud storage costs, and simplifies data operations across on-premises and cloud environments. For generative AI and enterprise AI applications, customers use WEKA to accelerate large language model tuning and training times from months to hours. In the life sciences industry, major pharmaceutical companies use WEKA to accelerate drug discovery times from weeks to hours. Content production studios rely on WEKA to build their studio in the cloud approach, enabling artists with a low frame loss, zero lag experience. Organizations across many other industries like government and defense, computer aided engineering, electronic design and automation, and financial services all use WEKA to accelerate performance intensive applications and reduce time to market. |[Partner page](https://www.weka.io/data-platform/solutions/cloud/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/weka1652213882079.weka_data_platform)<br>[Datasheet](https://www.weka.io/resources/datasheet/weka-on-azure-datasheet/)<br>[Performance Benchmark](https://www.weka.io/lp/performance-benchmark-weka-on-azure/)<br>[TCO Study](https://www.weka.io/lp/economic-benefits-of-weka-in-the-cloud/)|
| ![Silk company logo.](./media/silk-logo.jpg) |**Silk**<br>The Silk Platform quickly moves mission-critical data to Azure and keeps it operating at performance standards on par with even the fastest on-premises environments. Silk works to ensure a seamless, efficient, and smooth migration process, followed by unparalleled performance speeds for all data and applications in the Azure cloud. The platform makes cloud environments run up to 10x faster and the entire application stack is more resilient to any infrastructure hiccups or malfunctions. |[Partner page](https://silk.us/solutions/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/silk.silk_cloud_data_platform?tab=overview)|
+| ![Scality.](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premises, hybrid, and multicloud environments. Scality's integration with Azure Blob Storage enables enterprises to manage and secure their data between on-premises environments and Azure, and meet the demand of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)|
| ![Tiger Technology company logo.](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure, data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a nonproprietary, software-only data, and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tiger-technology.tiger_bridge_saas_soft_only)|
-| ![XenData company logo.](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They're integrated with many complementary software products used in the Media and Entertainment industry and support various workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)|
+| ![XenData company logo.](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They are integrated with many complementary software products used in the Media and Entertainment industry and support a variety of workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)|
+| ![Panzura.](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. Panzura delivers one authoritative data source for all users. Panzura also allows enterprises to use Azure as a globally available data center and offers all the functionality and speed of a single-site NAS, including automatic file locking, immediate global data consistency, and local file operation performance. |[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)|
Are you a storage partner but your solution isn't listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
## Next steps
synapse-analytics Implementation Success Perform Monitoring Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-monitoring-review.md
Using your solution requirements and other data collected during the [assessment
You can use [Azure Monitor](../../azure-monitor/overview.md) to provide base-level infrastructure metrics, alerts, and logs for most Azure services. Azure diagnostic logs are emitted by a resource to provide rich, frequent data about the operation of that resource. Azure Synapse can write diagnostic logs in Azure Monitor.
-For more information, see [Use Azure Monitor with your Azure Synapse Analytics workspace](../monitoring/how-to-monitor-using-azure-monitor.md).
+For more information, see [Use Azure Monitor with your Azure Synapse Analytics workspace](../monitor-synapse-analytics.md).
## Monitor dedicated SQL pools
-You can monitor a dedicated SQL pool by using Azure Monitor, altering, dynamic management views (DMVs), and Log Analytics.
+You can monitor a dedicated SQL pool by using Azure Monitor, alerting, dynamic management views (DMVs), and Log Analytics.
-- **Alerts:** You can set up alerts that send you an email or call a webhook when a certain metric reaches a predefined threshold. For example, you can receive an alert email when the database size grows too large. For more information, see [Create alerts for Azure SQL Database and Azure Synapse Analytics using the Azure portal](/azure/azure-sql/database/alerts-insights-configure-portal).
+- **Alerts:** You can set up alerts that send you an email or call a webhook when a certain metric reaches a predefined threshold. For example, you can receive an alert email when the database size grows too large. For more information, see [Alerts](../monitor-synapse-analytics.md#alerts).
- **DMVs:** You can use [DMVs](../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor workloads to help investigate query executions in SQL pools. A short example query follows this list.
- **Log Analytics:** [Log Analytics](../../azure-monitor/logs/log-analytics-tutorial.md) is a tool in the Azure portal that you can use to edit and run log queries from data collected by Azure Monitor. For more information, see [Monitor workload - Azure portal](../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md).
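The following is an illustrative sketch, not part of the documented change set: a typical query against the `sys.dm_pdw_exec_requests` DMV that lists the longest-running requests that haven't finished yet on a dedicated SQL pool.

```sql
-- Illustrative sketch only: list the 10 longest-running requests that are not
-- yet completed, using one of the most commonly used monitoring DMVs.
SELECT TOP 10
    request_id,
    [status],
    submit_time,
    total_elapsed_time,
    command
FROM sys.dm_pdw_exec_requests
WHERE [status] NOT IN ('Completed', 'Failed', 'Cancelled')
ORDER BY total_elapsed_time DESC;
```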
synapse-analytics Implementation Success Perform Operational Readiness Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-operational-readiness-review.md
Set and document expectations for monitoring readiness with your business. These
Consider using [Azure Monitor](../../azure-monitor/overview.md) to collect, analyze, and act on telemetry data from your Azure and on-premises environments. Azure Monitor helps you maximize performance and availability of your applications by proactively identifying problems in seconds.
-List all the important metrics to monitor for each service in your solution along with their acceptable thresholds. For example, the following list includes important metrics to monitor for a dedicated SQL pool:
-
-- `DWULimit`
-- `DWUUsed`
-- `AdaptiveCacheHitPercent`
-- `AdaptiveCacheUsedPercent`
-- `LocalTempDBUsedPercent`
-- `ActiveQueries`
-- `QueuedQueries`
+List all the important metrics to monitor for each service in your solution along with their acceptable thresholds. For example, you can [view metrics](../monitor-synapse-analytics-reference.md#supported-metrics-for-microsoftsynapseworkspacessqlpools) to monitor for a dedicated SQL pool.
Consider using [Azure Service Health](https://azure.microsoft.com/features/service-health/) to notify you about Azure service incidents and planned maintenance. That way, you can take action to mitigate downtime. You can set up customizable cloud alerts and use a personalized dashboard to analyze health issues, monitor the impact to your cloud resources, get guidance and support, and share details and updates.
Lastly, ensure proper notifications are set up to notify appropriate people when
Define and document *recovery time objective (RTO)* and *recovery point objective (RPO)* for your solution. RTO is how soon the service will be available to users, and RPO is how much data loss would occur in the event of a failover.
-Each of the Azure services publishes a set of guidelines and metrics on the expected high availability (HA) of the service. Ensure these HA metrics align with your business expectations. when they don't align, customizations may be necessary to meet your HA requirements. For example, Azure Synapse dedicated SQL pool supports an eight-hour RPO with automatic restore points. If that RPO isn't sufficient, you can set up user-defined restore points with an appropriate frequency to meet your RPO needs. For more information, see [Backup and restore in Azure Synapse dedicated SQL pool](../sql-data-warehouse/backup-and-restore.md).
+Each of the Azure services publishes a set of guidelines and metrics on the expected high availability (HA) of the service. Ensure these HA metrics align with your business expectations. When they don't align, customizations may be necessary to meet your HA requirements. For example, Azure Synapse dedicated SQL pool supports an eight-hour RPO with automatic restore points. If that RPO isn't sufficient, you can set up user-defined restore points with an appropriate frequency to meet your RPO needs. For more information, see [Backup and restore in Azure Synapse dedicated SQL pool](../sql-data-warehouse/backup-and-restore.md).
### Disaster recovery
synapse-analytics Monitor Articles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitor-articles.md
+
+ Title: Learn about monitoring
+description: Learn how to monitor Azure Synapse Analytics using Azure Monitor.
Last updated : 03/25/2024++++++
+# Learn about monitoring
+
+To learn about using Azure Monitor with Azure Synapse Analytics, see [Monitor Synapse Analytics](monitor-synapse-analytics.md). For general details on monitoring Azure resources, see [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+For a reference of the Azure Monitor metrics, logs, and other important values created for Synapse Analytics, see [Synapse Analytics monitoring data reference](monitor-synapse-analytics-reference.md).
+
+For a comparison of Log Analytics, Query Store, DMVs, and Azure Data Explorer analytics, see [Historical query storage and analysis in Azure Synapse Analytics](sql/query-history-storage-analysis.md).
+
+For information about monitoring in Synapse Studio, see [Tutorial: Monitor your Synapse Workspace](get-started-monitor.md).
+
+For monitoring pipeline runs, see [Monitor pipeline runs in Synapse Studio](monitoring/how-to-monitor-pipeline-runs.md).
+
+For monitoring Apache Spark applications, see [Monitor Apache Spark applications in Synapse Studio](monitoring/apache-spark-applications.md).
+
+For monitoring SQL pools, see [Use Synapse Studio to monitor your SQL pools](monitoring/how-to-monitor-sql-pools.md).
+
+For monitoring SQL requests, see [Monitor SQL requests in Synapse Studio](monitoring/how-to-monitor-sql-requests.md).
+
+For information about how to use Dynamic Management Views (DMVs) to programmatically monitor Synapse SQL via T-SQL, see [DMVs](sql/query-history-storage-analysis.md#dmvs) and [Monitor your Azure Synapse Analytics dedicated SQL pool workload using DMVs](sql-data-warehouse/sql-data-warehouse-manage-monitor.md).
synapse-analytics Monitor Synapse Analytics Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitor-synapse-analytics-reference.md
See [Monitor Azure Synapse Analytics](monitor-synapse-analytics.md) for details
### Supported metrics for Microsoft.Synapse/workspaces
The following table lists the metrics available for the Microsoft.Synapse/workspaces resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)]
+
+### Azure Synapse Link metrics
+
+Azure Synapse Link emits the following metrics to Azure Monitor:
+
+| **Metric** | **Aggregation types** | **Description** |
+||||
+| Link connection events | Sum | Number of Synapse Link connection events, including start, stop, and failure |
+| Link latency in seconds | Max, Min, Avg | Synapse Link data processing latency in seconds |
+| Link processed data volume (bytes) | Sum | Data volume in bytes processed by Synapse Link |
+| Link processed rows | Sum | Row counts processed by Synapse Link |
+| Link table events | Sum | Number of Synapse Link table events, including snapshot, removal, and failure |
### Supported metrics for Microsoft.Synapse/workspaces/bigDataPools
The following table lists the metrics available for the Microsoft.Synapse/workspaces/bigDataPools resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)]
### Supported metrics for Microsoft.Synapse/workspaces/kustoPools
The following table lists the metrics available for the Microsoft.Synapse/workspaces/kustoPools resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)]
### Supported metrics for Microsoft.Synapse/workspaces/scopePools
The following table lists the metrics available for the Microsoft.Synapse/workspaces/scopePools resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)]
### Supported metrics for Microsoft.Synapse/workspaces/sqlPools
The following table lists the metrics available for the Microsoft.Synapse/workspaces/sqlPools resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)]
#### Details
Use the `Result` dimension of the `IntegrationActivityRunsEnded`, `IntegrationPi
[!INCLUDE [horz-monitor-ref-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-resource-logs.md)]
### Supported resource logs for Microsoft.Synapse/workspaces
> [!NOTE]
> The event **SynapseBuiltinSqlPoolRequestsEnded** is emitted only for queries that read data from storage. It's not emitted for queries that process only metadata.
### Supported resource logs for Microsoft.Synapse/workspaces/bigDataPools
### Supported resource logs for Microsoft.Synapse/workspaces/kustoPools
### Supported resource logs for Microsoft.Synapse/workspaces/scopePools
### Supported resource logs for Microsoft.Synapse/workspaces/sqlPools
### Dynamic Management Views (DMVs)
For more information on these logs, see the following information:
- [sys.dm_pdw_waits](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-waits-transact-sql?view=azure-sqldw-latest&preserve-view=true)
- [sys.dm_pdw_sql_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-sql-requests-transact-sql?view=azure-sqldw-latest&preserve-view=true)
+To view the list of DMVs that apply to Synapse SQL, see [System views supported in Synapse SQL](./sql/reference-tsql-system-views.md#dedicated-sql-pool-dynamic-management-views-dmvs).
+ [!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)]
### Synapse Workspaces
Microsoft.Synapse/workspaces
- [SynapseDXTableUsageStatistics](/azure/azure-monitor/reference/tables/SynapseDXTableUsageStatistics#columns)
- [SynapseDXTableDetails](/azure/azure-monitor/reference/tables/SynapseDXTableDetails#columns)
+### Available Apache Spark configurations
+
+| Configuration name | Default value | Description |
+| | - | -- |
+| spark.synapse.logAnalytics.enabled | false | To enable the Log Analytics sink for the Spark applications, true. Otherwise, false. |
+| spark.synapse.logAnalytics.workspaceId | - | The destination Log Analytics workspace ID. |
+| spark.synapse.logAnalytics.secret | - | The destination Log Analytics workspace secret. |
+| spark.synapse.logAnalytics.keyVault.linkedServiceName | - | The Key Vault linked service name for the Log Analytics workspace ID and key. |
+| spark.synapse.logAnalytics.keyVault.name | - | The Key Vault name for the Log Analytics ID and key. |
+| spark.synapse.logAnalytics.keyVault.key.workspaceId | SparkLogAnalyticsWorkspaceId | The Key Vault secret name for the Log Analytics workspace ID. |
+| spark.synapse.logAnalytics.keyVault.key.secret | SparkLogAnalyticsSecret | The Key Vault secret name for the Log Analytics workspace |
+| spark.synapse.logAnalytics.uriSuffix | ods.opinsights.azure.com | The destination Log Analytics workspace [URI suffix](../azure-monitor/logs/data-collector-api.md#request-uri). If your workspace isn't in Azure global, you need to update the URI suffix according to the respective cloud. |
+| spark.synapse.logAnalytics.filter.eventName.match | - | Optional. The comma-separated spark event names, you can specify which events to collect. For example: `SparkListenerJobStart,SparkListenerJobEnd` |
+| spark.synapse.logAnalytics.filter.loggerName.match | - | Optional. The comma-separated log4j logger names, you can specify which logs to collect. For example: `org.apache.spark.SparkContext,org.example.Logger` |
+| spark.synapse.logAnalytics.filter.metricName.match | - | Optional. The comma-separated spark metric name suffixes, you can specify which metrics to collect. For example: `jvm.heap.used`|
+
+> [!NOTE]
+> - For Microsoft Azure operated by 21Vianet, the `spark.synapse.logAnalytics.uriSuffix` parameter should be `ods.opinsights.azure.cn`.
+> - For Azure Government, the `spark.synapse.logAnalytics.uriSuffix` parameter should be `ods.opinsights.azure.us`.
+> - For any cloud except Azure, the `spark.synapse.logAnalytics.keyVault.name` parameter should be the fully qualified domain name (FQDN) of the Key Vault. For example, `AZURE_KEY_VAULT_NAME.vault.usgovcloudapi.net` for AzureUSGovernment.
+ [!INCLUDE [horz-monitor-ref-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-activity-log.md)] - [Microsoft.Sql resource provider operations](/azure/role-based-access-control/permissions/databases#microsoftsql)
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
spark.synapse.logAnalytics.keyVault.key.secret <AZURE_KEY_VAULT_SECRET_KEY_NAME>
spark.synapse.logAnalytics.keyVault.linkedServiceName <LINKED_SERVICE_NAME> ```
-#### Available Apache Spark configuration
-
-| Configuration name | Default value | Description |
-| | - | -- |
-| spark.synapse.logAnalytics.enabled | false | To enable the Log Analytics sink for the Spark applications, true. Otherwise, false. |
-| spark.synapse.logAnalytics.workspaceId | - | The destination Log Analytics workspace ID. |
-| spark.synapse.logAnalytics.secret | - | The destination Log Analytics workspace secret. |
-| spark.synapse.logAnalytics.keyVault.linkedServiceName | - | The Key Vault linked service name for the Log Analytics workspace ID and key. |
-| spark.synapse.logAnalytics.keyVault.name | - | The Key Vault name for the Log Analytics ID and key. |
-| spark.synapse.logAnalytics.keyVault.key.workspaceId | SparkLogAnalyticsWorkspaceId | The Key Vault secret name for the Log Analytics workspace ID. |
-| spark.synapse.logAnalytics.keyVault.key.secret | SparkLogAnalyticsSecret | The Key Vault secret name for the Log Analytics workspace |
-| spark.synapse.logAnalytics.uriSuffix | ods.opinsights.azure.com | The destination Log Analytics workspace [URI suffix][uri_suffix]. If your workspace isn't in Azure global, you need to update the URI suffix according to the respective cloud. |
-| spark.synapse.logAnalytics.filter.eventName.match | - | Optional. The comma-separated spark event names, you can specify which events to collect. For example: `SparkListenerJobStart,SparkListenerJobEnd` |
-| spark.synapse.logAnalytics.filter.loggerName.match | - | Optional. The comma-separated log4j logger names, you can specify which logs to collect. For example: `org.apache.spark.SparkContext,org.example.Logger` |
-| spark.synapse.logAnalytics.filter.metricName.match | - | Optional. The comma-separated spark metric name suffixes, you can specify which metrics to collect. For example: `jvm.heap.used`|
-
-> [!NOTE]
-> - For Microsoft Azure operated by 21Vianet, the `spark.synapse.logAnalytics.uriSuffix` parameter should be `ods.opinsights.azure.cn`.
-> - For Azure Government, the `spark.synapse.logAnalytics.uriSuffix` parameter should be `ods.opinsights.azure.us`.
-> - For any cloud except Azure, the `spark.synapse.logAnalytics.keyVault.name` parameter should be the fully qualified domain name (FQDN) of the Key Vault. For example, `AZURE_KEY_VAULT_NAME.vault.usgovcloudapi.net` for AzureUSGovernment.
-
-[uri_suffix]: ../../azure-monitor/logs/data-collector-api.md#request-uri
-
+For a list of Apache Spark configurations, see [Available Apache Spark configurations](../monitor-synapse-analytics-reference.md#available-apache-spark-configurations)
### Step 3: Upload your Apache Spark configuration to an Apache Spark pool
synapse-analytics Monitor Sql Pool Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/monitor-sql-pool-synapse-analytics.md
+
+ Title: Monitor dedicated SQL pool in Azure Synapse Analytics
+description: Start here to learn how to monitor dedicated SQL pool in Azure Synapse Analytics.
Last updated : 03/25/2024+++++++
+# Monitor dedicated SQL pool in Azure Synapse Analytics
++
+## Synapse Analytics monitoring options
+
+You can collect and analyze metrics and logs for Azure Synapse Analytics built-in and serverless SQL pools, dedicated SQL pools, Azure Spark pools, and Data Explorer pools (preview). You can monitor current and historical activities for SQL, Apache Spark, pipelines and triggers, and integration runtimes.
+
+There are several ways to monitor activities in your Synapse Analytics workspace.
+
+### Synapse Studio
+
+Open Synapse Studio and navigate to the **Monitor** hub to see a history of all the activities in the workspace and which ones are active.
+
+- Under **Integration**, you can monitor pipelines, triggers, and integration runtimes.
+- Under **Activities**, you can monitor Spark and SQL activities.
+
+For more information about monitoring in Synapse Studio, see [Monitor your Synapse Workspace](../get-started-monitor.md).
+
+- For monitoring pipeline runs, see [Monitor pipeline runs in Synapse Studio](../monitoring/how-to-monitor-pipeline-runs.md).
+- For monitoring Apache Spark applications, see [Monitor Apache Spark applications in Synapse Studio](../monitoring/apache-spark-applications.md).
+- For monitoring SQL pools, see [Use Synapse Studio to monitor your SQL pools](../monitoring/how-to-monitor-sql-pools.md).
+- For monitoring SQL requests, see [Monitor SQL requests in Synapse Studio](../monitoring/how-to-monitor-sql-requests.md).
+
+### DMVs and Query Store
+
+To programmatically monitor Synapse SQL via T-SQL, Synapse Analytics provides a set of Dynamic Management Views (DMVs). These views are useful to troubleshoot and identify performance bottlenecks with your workload. For more information, see [DMVs](../sql/query-history-storage-analysis.md#dmvs) and [Monitor your Azure Synapse Analytics dedicated SQL pool workload using DMVs](sql-data-warehouse-manage-monitor.md). For the list of DMVs that apply to Synapse SQL, see [Dedicated SQL pool Dynamic Management Views (DMVs)](../sql/reference-tsql-system-views.md#dedicated-sql-pool-dynamic-management-views-dmvs).
+
+Query Store is a set of internal stores and DMVs that provide insight on query plan choice and performance. Query Store simplifies performance troubleshooting by helping find performance differences caused by query plan changes. For more information about enabling and using Query Store on Synapse Analytics databases, see [Query Store](../sql/query-history-storage-analysis.md#query-store).
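As a minimal sketch (not from the source article), assuming a dedicated SQL pool database with the placeholder name `YourDedicatedSqlPool`, Query Store can be enabled with `ALTER DATABASE` and then queried through its catalog views:

```sql
-- Illustrative sketch; YourDedicatedSqlPool is a placeholder database name.
ALTER DATABASE [YourDedicatedSqlPool] SET QUERY_STORE = ON;

-- After Query Store has captured some activity, list the queries with the
-- highest average duration.
SELECT TOP 10
    q.query_id,
    t.query_sql_text,
    rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS t ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;
```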
+
+### Azure portal
+
+You can monitor Synapse Analytics workspaces and pools directly from their Azure portal pages. On the left sidebar menu, you can access the Azure **Activity log**, or select **Alerts**, **Metrics**, **Diagnostic settings**, **Logs**, or **Advisor recommendations** from the **Monitoring** section. This article provides more details about these options.
++
+The resource types for Synapse Analytics include:
+
+- Microsoft.Synapse/workspaces
+- Microsoft.Synapse/workspaces/bigDataPools
+- Microsoft.Synapse/workspaces/kustoPools
+- Microsoft.Synapse/workspaces/scopePools
+- Microsoft.Synapse/workspaces/sqlPools
+
+For more information about the resource types for Azure Synapse Analytics, see [Azure Synapse Analytics monitoring data reference](../monitor-synapse-analytics-reference.md).
++
+Synapse Analytics supports storing monitoring data in Azure Storage or Azure Data Lake Storage Gen 2.
++
+For lists of available platform metrics for Synapse Analytics, see [Synapse Analytics monitoring data reference](../monitor-synapse-analytics-reference.md#metrics).
+
+In addition to Log Analytics, Synapse Analytics Apache Spark pools support Prometheus server metrics and Grafana dashboards. For more information, see [Monitor Apache Spark Applications metrics with Prometheus and Grafana](../spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md) and [Collect Apache Spark applications metrics using Prometheus APIs](../spark/connect-monitor-azure-synapse-spark-application-level-metrics.md).
++
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for Synapse Analytics, see [Synapse Analytics monitoring data reference](../monitor-synapse-analytics-reference.md#resource-logs).
+++
+In addition to the basic tools, Synapse Analytics supports Query Store, DMVs, or Azure Data Explorer to analyze query history and performance. For a comparison of these analytics methods, see [Historical query storage and analysis in Azure Synapse Analytics](../sql/query-history-storage-analysis.md).
+++
+### Sample queries
+
+**Activity Log query for failed operations**: Lists all reports of failed operations over the past hour.
+
+```kusto
+AzureActivity
+| where TimeGenerated > ago(1h)
+| where ActivityStatus == "Failed"
+```
+
+**Synapse Link table fail events**: Displays failed Synapse Link table events.
+
+```kusto
+SynapseLinkEvent
+| where OperationName == "TableFail"
+| limit 100
+```
++
+### Synapse Analytics alert rules
+
+The following table lists some suggested alerts for Synapse Analytics. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Synapse Analytics monitoring data reference](../monitor-synapse-analytics-reference.md).
+
+| Alert type | Condition | Description |
+|:|:|:|
+| Metric| TempDB 75% | Maximum local tempdb used percentage greater than or equal to 75% of threshold value |
+| Metric| Data Warehouse Unit (DWU) Usage near 100% | Average DWU used percentage greater than 95% for 1 hour |
+| Log Analytics | SynapseSqlPoolRequestSteps | ShuffleMoveOperation over 10 million rows |
+
+For more details about creating these and other recommended alert rules, see [Create alerts for your Synapse Dedicated SQL Pool](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/create-alerts-for-your-synapse-dedicated-sql-pool/ba-p/3773256).
++
+Synapse Analytics dedicated SQL pool provides Azure Advisor recommendations to ensure your data warehouse workload is consistently optimized for performance. For more information, see [Azure Advisor recommendations for dedicated SQL pool in Azure Synapse Analytics](sql-data-warehouse-concept-recommendations.md).
+
+## Related content
+
+- For information about monitoring in Synapse Studio, see [Monitor your Synapse Workspace](../get-started-monitor.md).
+- For a comparison of Log Analytics, Query Store, DMVs, and Azure Data Explorer analytics, see [Historical query storage and analysis in Azure Synapse Analytics](../sql/query-history-storage-analysis.md).
+- For information about Prometheus metrics and Grafana dashboards for Synapse Analytics Apache Spark pools, see [Monitor Apache Spark Applications metrics with Prometheus and Grafana](../spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md).
+- For a reference of the Azure Monitor metrics, logs, and other important values created for Synapse Analytics, see [Synapse Analytics monitoring data reference](../monitor-synapse-analytics-reference.md).
+- For general details on monitoring Azure resources with Azure Monitor, see [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
synapse-analytics Sql Data Warehouse Monitor Workload Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md
For more information on workspaces, see [Create a Log Analytics workspace](../..
## Turn on Resource logs
-Configure diagnostic settings to emit logs from your SQL pool. Logs consist of telemetry views equivalent to the most commonly used performance troubleshooting DMVs. Currently the following views are supported:
+Configure diagnostic settings to emit logs from your SQL pool. Logs consist of telemetry views equivalent to the most commonly used performance troubleshooting DMVs.
-- [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
-- [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
-- [sys.dm_pdw_dms_workers](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-dms-workers-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
-- [sys.dm_pdw_waits](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-waits-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
-- [sys.dm_pdw_sql_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-sql-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
+For a list of views that are currently supported, see [Dynamic Management Views](../monitor-synapse-analytics-reference.md#dynamic-management-views-dmvs).
:::image type="content" source="./media/sql-data-warehouse-monitor-workload-portal/enable_diagnostic_logs.png" alt-text="Screenshot of the page to create a diagnostic setting in the Azure portal.":::
synapse-analytics Sql Data Warehouse Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md
Title: What is dedicated SQL pool (formerly SQL DW)?
description: Dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics is the enterprise data warehousing functionality in Azure Synapse Analytics. Previously updated : 02/21/2023 Last updated : 07/19/2024
Azure Synapse Analytics is an analytics service that brings together enterprise data warehousing and Big Data analytics. Dedicated SQL pool (formerly SQL DW) refers to the enterprise data warehousing features that are available in Azure Synapse Analytics. --
-![Dedicated SQL pool (formerly SQL DW) in relation to Azure Synapse](./media/sql-data-warehouse-overview-what-is/dedicated-sql-pool.png)
-- Dedicated SQL pool (formerly SQL DW) represents a collection of analytic resources that are provisioned when using Synapse SQL. The size of a dedicated SQL pool (formerly SQL DW) is determined by Data Warehousing Units (DWU).
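As a hedged illustration not taken from the article, the DWU setting can be changed in T-SQL by modifying the service objective; the database name and target service objective below are placeholders:

```sql
-- Illustrative sketch only; mySampleDataWarehouse and DW300c are placeholders.
-- Run while connected to the master database of the logical server that hosts
-- the dedicated SQL pool (formerly SQL DW).
ALTER DATABASE mySampleDataWarehouse
MODIFY (SERVICE_OBJECTIVE = 'DW300c');
```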
Once your dedicated SQL pool is created, you can import big data with simple [Po
Data warehousing is a key component of a cloud-based, end-to-end big data solution.
-![Data warehouse solution](./media/sql-data-warehouse-overview-what-is/data-warehouse-solution.png)
In a cloud data solution, data is ingested into big data stores from a variety of sources. Once in a big data store, Hadoop, Spark, and machine learning algorithms prepare and train the data. When the data is ready for complex analysis, dedicated SQL pool uses PolyBase to query the big data stores. PolyBase uses standard T-SQL queries to bring the data into dedicated SQL pool (formerly SQL DW) tables.
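A minimal sketch of that pattern, assuming an existing external data source (`MyAzureStorage`) and file format (`ParquetFileFormat`); all object names here are hypothetical and not from the article:

```sql
-- Illustrative sketch; MyAzureStorage and ParquetFileFormat are assumed to be
-- an existing external data source and file format, and all names are placeholders.
CREATE EXTERNAL TABLE ext.WebClicks
(
    ClickDate DATE,
    UserId    INT,
    Url       NVARCHAR(1000)
)
WITH
(
    LOCATION = '/clicks/',
    DATA_SOURCE = MyAzureStorage,
    FILE_FORMAT = ParquetFileFormat
);

-- Materialize the external data as a hash-distributed internal table with CTAS.
CREATE TABLE dbo.WebClicks
WITH (DISTRIBUTION = HASH(UserId), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM ext.WebClicks;
```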
Dedicated SQL pool (formerly SQL DW) stores data in relational tables with colum
The analysis results can go to worldwide reporting databases or applications. Business analysts can then gain insights to make well-informed business decisions.
-## Next steps
+## Related content
- Explore [Azure Synapse architecture](massively-parallel-processing-mpp-architecture.md) - Quickly [create a dedicated SQL pool](../quickstart-create-sql-pool-studio.md)
Or look at some of these other Azure Synapse resources:
- Search [Blogs](https://azure.microsoft.com/blog/tag/azure-sql-data-warehouse/) - Submit a [Feature requests](https://feedback.azure.com/d365community/forum/9b9ba8e4-0825-ec11-b6e6-000d3a4f07b8) - [Create a support ticket](sql-data-warehouse-get-started-create-support-ticket.md)-- Search [Microsoft Q&A question page](/answers/topics/azure-synapse-analytics.html)-- Search [Stack Overflow forum](https://stackoverflow.com/questions/tagged/azure-sqldw)
+- [Microsoft Q&A question page](/answers/topics/azure-synapse-analytics.html)
+- [Stack Overflow forum](https://stackoverflow.com/questions/tagged/azure-sqldw)
synapse-analytics Sql Data Warehouse Tables Distribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md
Title: Distributed tables design guidance description: Recommendations for designing hash-distributed and round-robin distributed tables using dedicated SQL pool.--- Previously updated : 03/20/2023 - Last updated : 07/19/2024++++
+ - azure-synapse
# Guidance for designing distributed tables using dedicated SQL pool in Azure Synapse Analytics
This article contains recommendations for designing hash-distributed and round-robin distributed tables in dedicated SQL pools.
-This article assumes you are familiar with data distribution and data movement concepts in dedicated SQL pool. For more information, see [Azure Synapse Analytics architecture](massively-parallel-processing-mpp-architecture.md).
+This article assumes you are familiar with data distribution and data movement concepts in dedicated SQL pool. For more information, see [Azure Synapse Analytics architecture](massively-parallel-processing-mpp-architecture.md).
## What is a distributed table?
-A distributed table appears as a single table, but the rows are actually stored across 60 distributions. The rows are distributed with a hash or round-robin algorithm.
+A distributed table appears as a single table, but the rows are actually stored across 60 distributions. The rows are distributed with a hash or round-robin algorithm.
-**Hash-distribution** improves query performance on large fact tables, and is the focus of this article. **Round-robin distribution** is useful for improving loading speed. These design choices have a significant impact on improving query and loading performance.
+**Hash-distribution** improves query performance on large fact tables, and is the focus of this article. **Round-robin distribution** is useful for improving loading speed. These design choices have a significant effect on improving query and loading performance.
Another table storage option is to replicate a small table across all the Compute nodes. For more information, see [Design guidance for replicated tables](design-guidance-for-replicated-tables.md). To quickly choose among the three options, see Distributed tables in the [tables overview](sql-data-warehouse-tables-overview.md).
-As part of table design, understand as much as possible about your data and how the data is queried.  For example, consider these questions:
+As part of table design, understand as much as possible about your data and how the data is queried. For example, consider these questions:
- How large is the table?
- How often is the table refreshed?
As part of table design, understand as much as possible about your data and how
A hash-distributed table distributes table rows across the Compute nodes by using a deterministic hash function to assign each row to one [distribution](massively-parallel-processing-mpp-architecture.md#distributions). Since identical values always hash to the same distribution, SQL Analytics has built-in knowledge of the row locations. In dedicated SQL pool this knowledge is used to minimize data movement during queries, which improves query performance.
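For illustration only, a minimal sketch of declaring a hash-distributed table (abbreviated column list, using the same sample table name that appears later in the article):

```sql
-- Minimal illustrative sketch of a hash-distributed table; the column list is abbreviated.
CREATE TABLE [dbo].[FactInternetSales]
(
    [ProductKey]    INT   NOT NULL,
    [OrderDateKey]  INT   NOT NULL,
    [CustomerKey]   INT   NOT NULL,
    [SalesAmount]   MONEY NOT NULL
)
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH([ProductKey])
);
```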
Consider using a hash-distributed table when:
A round-robin distributed table distributes table rows evenly across all distributions. The assignment of rows to distributions is random. Unlike hash-distributed tables, rows with equal values are not guaranteed to be assigned to the same distribution.
-As a result, the system sometimes needs to invoke a data movement operation to better organize your data before it can resolve a query. This extra step can slow down your queries. For example, joining a round-robin table usually requires reshuffling the rows, which is a performance hit.
+As a result, the system sometimes needs to invoke a data movement operation to better organize your data before it can resolve a query. This extra step can slow down your queries. For example, joining a round-robin table usually requires reshuffling the rows, which is a performance hit.
Consider using the round-robin distribution for your table in the following scenarios:
WITH
); ```
-Hash distribution can be applied on multiple columns for a more even distribution of the base table. Multi-column distribution will allow you to choose up to eight columns for distribution. This not only reduces the data skew over time but also improves query performance. For example:
+Hash distribution can be applied on multiple columns for a more even distribution of the base table. Multi-column distribution allows you to choose up to eight columns for distribution. This not only reduces the data skew over time but also improves query performance. For example:
```sql CREATE TABLE [dbo].[FactInternetSales]
WITH
> `ALTER DATABASE SCOPED CONFIGURATION SET DW_COMPATIBILITY_LEVEL = 50;` > For more information on setting the database compatibility level, see [ALTER DATABASE SCOPED CONFIGURATION](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql). For more information on multi-column distributions, see [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql), [CREATE TABLE](/sql/t-sql/statements/create-table-azure-sql-data-warehouse), or [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql).
-Data stored in the distribution column(s) can be updated. Updates to data in distribution column(s) could result in data shuffle operation.
+Data stored in the distribution columns can be updated. Updates to data in distribution columns could result in a data shuffle operation.
-Choosing distribution column(s) is an important design decision since the values in the hash column(s) determine how the rows are distributed. The best choice depends on several factors, and usually involves tradeoffs. Once a distribution column or column set is chosen, you cannot change it. If you didn't choose the best column(s) the first time, you can use [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create the table with the desired distribution hash key.
+Choosing distribution columns is an important design decision since the values in the hash columns determine how the rows are distributed. The best choice depends on several factors, and usually involves tradeoffs. Once a distribution column or column set is chosen, you cannot change it. If you didn't choose the best columns the first time, you can use [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create the table with the desired distribution hash key.
### Choose a distribution column with data that distributes evenly
For best performance, all of the distributions should have approximately the sam
To balance the parallel processing, select a distribution column or set of columns that: 
-- **Has many unique values.** The distribution column(s) can have duplicate values. All rows with the same value are assigned to the same distribution. Since there are 60 distributions, some distributions can have > 1 unique values while others may end with zero values. 
-- **Does not have NULLs, or has only a few NULLs.** For an extreme example, if all values in the distribution column(s) are NULL, all the rows are assigned to the same distribution. As a result, query processing is skewed to one distribution, and does not benefit from parallel processing.
-- **Is not a date column**. All data for the same date lands in the same distribution, or will cluster records by date. If several users are all filtering on the same date (such as today's date), then only 1 of the 60 distributions do all the processing work.
+- **Has many unique values.** One or more distribution columns can have duplicate values. All rows with the same value are assigned to the same distribution. Since there are 60 distributions, some distributions can have more than one unique value while others can end with zero values.
+- **Does not have NULLs, or has only a few NULLs.** For an extreme example, if all values in the distribution columns are NULL, all the rows are assigned to the same distribution. As a result, query processing is skewed to one distribution, and does not benefit from parallel processing.
+- **Is not a date column**. All data for the same date lands in the same distribution, or will cluster records by date. If several users are all filtering on the same date (such as today's date), then only one of the 60 distributions does all the processing work. (A query sketch for evaluating a candidate column follows this list.)
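The following is a hedged sketch, not from the article, of how those criteria might be checked for a candidate column; the table and column names are placeholders:

```sql
-- Illustrative sketch; dbo.FactInternetSales and ProductKey are placeholders.
-- Check cardinality and NULL counts for a candidate distribution column.
SELECT
    COUNT_BIG(*)                                        AS total_rows,
    COUNT_BIG(DISTINCT ProductKey)                      AS distinct_values,
    SUM(CASE WHEN ProductKey IS NULL THEN 1 ELSE 0 END) AS null_values
FROM dbo.FactInternetSales;
```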
### Choose a distribution column that minimizes data movement
To get the correct query result queries might move data from one Compute node to
To minimize data movement, select a distribution column or set of columns that: 
-- Is used in `JOIN`, `GROUP BY`, `DISTINCT`, `OVER`, and `HAVING` clauses. When two large fact tables have frequent joins, query performance improves when you distribute both tables on one of the join columns. When a table is not used in joins, consider distributing the table on a column or column set that is frequently in the `GROUP BY` clause.
+- Is used in `JOIN`, `GROUP BY`, `DISTINCT`, `OVER`, and `HAVING` clauses. When two large fact tables have frequent joins, query performance improves when you distribute both tables on one of the join columns. When a table is not used in joins, consider distributing the table on a column or column set that is frequently in the `GROUP BY` clause.
- Is *not* used in `WHERE` clauses. When a query's `WHERE` clause and the table's distribution columns are on the same column, the query could encounter high data skew, leading to processing load falling on only a few distributions. This impacts query performance; ideally, many distributions share the processing load.
-- Is *not* a date column. `WHERE` clauses often filter by date. When this happens, all the processing could run on only a few distributions affecting query performance. Ideally, many distributions share the processing load.
+- Is *not* a date column. `WHERE` clauses often filter by date. When this happens, all the processing could run on only a few distributions affecting query performance. Ideally, many distributions share the processing load.
Once you design a hash-distributed table, the next step is to load data into the table. For loading guidance, see [Loading overview](design-elt-data-loading.md).
## How to tell if your distribution is a good choice
-After data is loaded into a hash-distributed table, check to see how evenly the rows are distributed across the 60 distributions. The rows per distribution can vary up to 10% without a noticeable impact on performance. Consider the following topics to evaluate your distribution column(s).
+After data is loaded into a hash-distributed table, check to see how evenly the rows are distributed across the 60 distributions. The rows per distribution can vary up to 10% without a noticeable impact on performance.
+
+Consider the following ways to evaluate your distribution columns.
### Determine if the table has data skew
DBCC PDW_SHOWSPACEUSED('dbo.FactInternetSales');
To identify which tables have more than 10% data skew:
-1. Create the view `dbo.vTableSizes` that is shown in the [Tables overview](sql-data-warehouse-tables-overview.md#table-size-queries) article.
-2. Run the following query:
+1. Create the view `dbo.vTableSizes` that is shown in the [Tables overview](sql-data-warehouse-tables-overview.md#table-size-queries) article.
+1. Run the following query:
```sql select *
order by two_part_name, row_count;
### Check query plans for data movement
-A good distribution column set enables joins and aggregations to have minimal data movement. This affects the way joins should be written. To get minimal data movement for a join on two hash-distributed tables, one of the join columns needs to be in distribution column or column(s). When two hash-distributed tables join on a distribution column of the same data type, the join does not require data movement. Joins can use additional columns without incurring data movement.
+A good distribution column set enables joins and aggregations to have minimal data movement. This affects the way joins should be written. To get minimal data movement for a join on two hash-distributed tables, one of the join columns needs to be in distribution column or columns. When two hash-distributed tables join on a distribution column of the same data type, the join does not require data movement. Joins can use additional columns without incurring data movement.
To avoid data movement during a join: 
- The tables involved in the join must be hash distributed on **one** of the columns participating in the join. 
- The data types of the join columns must match between both tables. 
- The columns must be joined with an equals operator.
-- The join type may not be a `CROSS JOIN`.
+- The join type cannot be a `CROSS JOIN`.
-To see if queries are experiencing data movement, you can look at the query plan.
+To see if queries are experiencing data movement, you can look at the query plan.
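As an illustrative sketch (the query itself is hypothetical, reusing the article's sample table), prefixing a statement with `EXPLAIN` on a dedicated SQL pool returns the distributed plan as XML, where shuffle or broadcast move operations indicate data movement:

```sql
-- Illustrative sketch; look for SHUFFLE_MOVE or BROADCAST_MOVE operations in
-- the returned plan XML, which indicate data movement between distributions.
EXPLAIN
SELECT s.[CustomerKey], SUM(s.[SalesAmount]) AS TotalSales
FROM [dbo].[FactInternetSales] AS s
GROUP BY s.[CustomerKey];
```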
## Resolve a distribution column problem
-It is not necessary to resolve all cases of data skew. Distributing data is a matter of finding the right balance between minimizing data skew and data movement. It is not always possible to minimize both data skew and data movement. Sometimes the benefit of having the minimal data movement might outweigh the impact of having data skew.
+It is not necessary to resolve all cases of data skew. Distributing data is a matter of finding the right balance between minimizing data skew and data movement. It is not always possible to minimize both data skew and data movement. Sometimes the benefit of having the minimal data movement might outweigh the effect of having data skew.
-To decide if you should resolve data skew in a table, you should understand as much as possible about the data volumes and queries in your workload. You can use the steps in the [Query monitoring](sql-data-warehouse-manage-monitor.md) article to monitor the impact of skew on query performance. Specifically, look for how long it takes large queries to complete on individual distributions.
+To decide if you should resolve data skew in a table, you should understand as much as possible about the data volumes and queries in your workload. You can use the steps in the [Query monitoring](sql-data-warehouse-manage-monitor.md) article to monitor the effect of skew on query performance. Specifically, look for how long it takes large queries to complete on individual distributions.
-Since you cannot change the distribution column(s) on an existing table, the typical way to resolve data skew is to re-create the table with a different distribution column(s).
+Since you cannot change the distribution columns on an existing table, the typical way to resolve data skew is to re-create the table with different distribution columns.
<a id="re-create-the-table-with-a-new-distribution-column"></a>
+
### Re-create the table with a new distribution column set
-This example uses [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create a table with a different hash distribution column or column(s).
+This example uses [CREATE TABLE AS SELECT](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to re-create a table with different hash distribution columns.
-First use `CREATE TABLE AS SELECT` (CTAS) the new table with the new key. Then re-create the statistics and finally, swap the tables by re-naming them.
+First use `CREATE TABLE AS SELECT` (CTAS) the new table with the new key. Then re-create the statistics and finally, swap the tables by renaming them.
```sql CREATE TABLE [dbo].[FactInternetSales_CustomerKey]
RENAME OBJECT [dbo].[FactInternetSales] TO [FactInternetSales_ProductKey];
RENAME OBJECT [dbo].[FactInternetSales_CustomerKey] TO [FactInternetSales]; ```
-## Next steps
-
+## Related content
To create a distributed table, use one of these statements: - [CREATE TABLE (dedicated SQL pool)](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true)
synapse-analytics Sql Data Warehouse Workload Management Portal Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management-portal-monitor.md
# Azure Synapse Analytics – Workload Management Portal Monitoring
This article explains how to monitor [workload group](sql-data-warehouse-workload-isolation.md#workload-groups) resource utilization and query activity.
-For details on how to configure the Azure Metrics Explorer see the [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) article. See the [Resource utilization](sql-data-warehouse-concept-resource-utilization-query-activity.md#resource-utilization) section in Azure Synapse Analytics Monitoring documentation for details on how to monitor system resource consumption.
-There are two different categories of workload group metrics provided for monitoring workload management: resource allocation and query activity. These metrics can be split and filtered by workload group. The metrics can be split and filtered based on if they are system defined (resource class workload groups) or user-defined (created by user with [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) syntax).
+For details on how to configure the Azure Metrics Explorer see the [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) article. See the [Resource utilization](sql-data-warehouse-concept-resource-utilization-query-activity.md#resource-utilization) section in Azure Synapse Analytics Monitoring documentation for details on how to monitor system resource consumption.
+There are two different categories of workload group metrics provided for monitoring workload management: resource allocation and query activity. These metrics can be split and filtered by workload group. The metrics can be split and filtered based on if they're system defined (resource class workload groups) or user-defined (created by user with [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) syntax).
## Workload management metric definitions
-|Metric Name |Description |Aggregation Type |
-|-|-|--|
-|Effective cap resource percent | *Effective cap resource percent* is a hard limit on the percentage of resources accessible by the workload group, taking into account *Effective min resource percentage* allocated for other workload groups. The *Effective cap resource percent* metric is configured using the `CAP_PERCENTAGE_RESOURCE` parameter in the [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) syntax. The effective value is described here.<br><br>For example if a workload group `DataLoads` is created with `CAP_PERCENTAGE_RESOURCE` = 100 and another workload group is created with an Effective min resource percentage of 25%, the *Effective cap resource percent* for the `DataLoads` workload group is 75%.<br><br>The *Effective cap resource percent* determines the upper bound of concurrency (and thus potential throughput) a workload group can achieve. If additional throughput is needed beyond what is currently reported by the *Effective cap resource percent* metric, either increase the `CAP_PERCENTAGE_RESOURCE`, decrease the `MIN_PERCENTAGE_RESOURCE` of other workload groups or scale up the instance to add more resources. Decreasing the `REQUEST_MIN_RESOURCE_GRANT_PERCENT` can increase concurrency, but may not increase overall throughput.| Min, Avg, Max |
-|Effective min resource percent |*Effective min resource percent* is the minimum percentage of resources reserved and isolated for the workload group taking into account the service level minimum. The Effective min resource percent metric is configured using the `MIN_PERCENTAGE_RESOURCE` parameter in the [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) syntax. The effective value is described [here](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json?view=azure-sqldw-latest&preserve-view=true#effective-values).<br><br>Use the Sum aggregation type when this metric is unfiltered and unsplit to monitor the total workload isolation configured on the system.<br><br>The *Effective min resource percent* determines the lower bound of guaranteed concurrency (and thus guaranteed throughput) a workload group can achieve. If additional guaranteed resources are needed beyond what is currently reported by the *Effective min resource percent* metric, increase the `MIN_PERCENTAGE_RESOURCE` parameter configured for the workload group. Decreasing the `REQUEST_MIN_RESOURCE_GRANT_PERCENT` can increase concurrency, but may not increase overall throughput. |Min, Avg, Max|
-|Workload group active queries |This metric reports the active queries within the workload group. Using this metric unfiltered and unsplit displays all active queries running on the system.|Sum |
-|Workload group allocation by cap resource percent |This metric displays the percentage allocation of resources relative to the *Effective cap resource percent* per workload group. This metric provides the effective utilization of the workload group.<br><br>Consider a workload group `DataLoads` with an *Effective cap resource percent* of 75% and a `REQUEST_MIN_RESOURCE_GRANT_PERCENT` configured at 25%. The *Workload group allocation by cap resource percent* value filtered to `DataLoads` would be 33% (25% / 75%) if a single query were running in this workload group.<br><br>Use this metric to identify a workload group's utilization. A value close to 100% indicates all resources available to the workload group are being used. Additionally, the *Workload group queued queries metric* for the same workload group showing a value greater than zero would indicate the workload group would utilize additional resources if allocated. Conversely, if this metric is consistently low and the *Workload group active queries* is low the workload group is not being utilized. This situation is especially problematic if *Effective cap resource percent* is greater than zero as that would indicate [underutilized workload isolation](#underutilized-workload-isolation).|Min, Avg, Max |
-|Workload group allocation by system percent | This metric displays the percentage allocation of resources relative to the entire system.<br><br>Consider a workload group `DataLoads` with a `REQUEST_MIN_RESOURCE_GRANT_PERCENT` configured at 25%. *Workload group allocation by system percent* value filtered to `DataLoads` would be 25% (25% / 100%) if a single query were running in this workload group.|Min, Avg, Max |
-|Workload group query timeouts |Queries for the workload group that have timed out. Query timeouts are reported by this metric only after the query has started executing (the metric doesn't include wait time due to locking or resource waits).<br><br>Query timeout is configured using the `QUERY_EXECUTION_TIMEOUT_SEC` parameter in the [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) syntax. Increasing the value could reduce the number of query timeouts.<br><br>Consider increasing the `REQUEST_MIN_RESOURCE_GRANT_PERCENT` parameter for the workload group to reduce the number of timeouts and allocate more resources per query. Note that increasing `REQUEST_MIN_RESOURCE_GRANT_PERCENT` reduces concurrency for the workload group. |Sum |
-|Workload group queued queries | Queries for the workload group that are currently queued waiting to start execution. Queries can be queued because they're waiting for resources or locks.<br><br>Queries could be waiting for numerous reasons. If the system is overloaded and the concurrency demand is greater than what is available, queries will queue.<br><br>Consider adding more resources to the workload group by increasing the `CAP_PERCENTAGE_RESOURCE` parameter in the [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement. If `CAP_PERCENTAGE_RESOURCE` is greater than the *Effective cap resource percent* metric, the configured workload isolation for other workload groups is impacting the resources allocated to this workload group. Consider lowering `MIN_PERCENTAGE_RESOURCE` of other workload groups or scaling up the instance to add more resources. |Sum |
+For a description of workload management metrics, see the *SQL dedicated pool - Workload management* entries in [Supported metrics for Microsoft.Synapse/workspaces/sqlPools](../monitor-synapse-analytics-reference.md#supported-metrics-for-microsoftsynapseworkspacessqlpools).
## Monitoring scenarios and actions
Below are a series of chart configurations to highlight workload management metr
### Underutilized workload isolation
-Consider the following workload group and classifier configuration where a workload group named `wgPriority` is created and *TheCEO* `membername` is mapped to it using the `wcCEOPriority` workload classifier. The `wgPriority` workload group has 25% workload isolation configured for it (`MIN_PERCENTAGE_RESOURCE` = 25). Each query submitted by *TheCEO* is given 5% of system resources (`REQUEST_MIN_RESOURCE_GRANT_PERCENT` = 5).
+Consider the following workload group and classifier configuration where a workload group named `wgPriority` is created and *TheCEO* `membername` is mapped to it using the `wcCEOPriority` workload classifier. The `wgPriority` workload group has 25% workload isolation configured for it (`MIN_PERCENTAGE_RESOURCE` = 25). Each query submitted by *TheCEO* is given 5% of system resources (`REQUEST_MIN_RESOURCE_GRANT_PERCENT` = 5).
```sql CREATE WORKLOAD GROUP wgPriority
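The statements are truncated above; a minimal sketch of the configuration described in the preceding paragraph might look like the following. The `MIN_PERCENTAGE_RESOURCE`, `REQUEST_MIN_RESOURCE_GRANT_PERCENT`, classifier name, and `membername` values come from that paragraph; the `CAP_PERCENTAGE_RESOURCE` value is an assumption, since the text doesn't state it.

```sql
-- Hedged sketch: reserve 25% of system resources for wgPriority and grant
-- each of TheCEO's queries 5% of system resources.
CREATE WORKLOAD GROUP wgPriority
WITH (
    MIN_PERCENTAGE_RESOURCE = 25,              -- 25% workload isolation (from the text)
    CAP_PERCENTAGE_RESOURCE = 50,              -- assumed value; not stated in the text
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 5     -- 5% of system resources per query (from the text)
);

-- Map TheCEO to the workload group through a classifier.
CREATE WORKLOAD CLASSIFIER wcCEOPriority
WITH (
    WORKLOAD_GROUP = 'wgPriority',
    MEMBERNAME = 'TheCEO'
);
```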
Metric 1: *Effective min resource percent* (Avg aggregation, `blue line`)<br>
Metric 2: *Workload group allocation by system percent* (Avg aggregation, `purple line`)<br> Filter: [Workload Group] = `wgPriority`<br> ![Screenshot shows a chart with the two metrics and filter.](./media/sql-data-warehouse-workload-management-portal-monitor/underutilized-wg.png)
-The chart shows that with 25% workload isolation, only 10% is being used on average. In this case, the `MIN_PERCENTAGE_RESOURCE` parameter could be lowered to a value between 10 and 15, allowing other workloads on the system to consume the resources.
+The chart shows that with 25% workload isolation, only 10% is being used on average. In this case, the `MIN_PERCENTAGE_RESOURCE` parameter could be lowered to a value between 10 and 15, allowing other workloads on the system to consume the resources.
### Workload group bottleneck
-Consider the following workload group and classifier configuration where a workload group named `wgDataAnalyst` is created and the *DataAnalyst* `membername` is mapped to it using the `wcDataAnalyst` workload classifier. The `wgDataAnalyst` workload group has 6% workload isolation configured for it (`MIN_PERCENTAGE_RESOURCE` = 6) and a resource limit of 9% (`CAP_PERCENTAGE_RESOURCE` = 9). Each query submitted by the *DataAnalyst* is given 3% of system resources (`REQUEST_MIN_RESOURCE_GRANT_PERCENT` = 3).
+Consider the following workload group and classifier configuration where a workload group named `wgDataAnalyst` is created and the *DataAnalyst* `membername` is mapped to it using the `wcDataAnalyst` workload classifier. The `wgDataAnalyst` workload group has 6% workload isolation configured for it (`MIN_PERCENTAGE_RESOURCE` = 6) and a resource limit of 9% (`CAP_PERCENTAGE_RESOURCE` = 9). Each query submitted by the *DataAnalyst* is given 3% of system resources (`REQUEST_MIN_RESOURCE_GRANT_PERCENT` = 3).
```sql CREATE WORKLOAD GROUP wgDataAnalyst
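The statements are truncated above; a minimal sketch using the values from the preceding paragraph (6% isolation, 9% cap, 3% per request) might look like this.

```sql
-- Hedged sketch: isolate 6% of resources for wgDataAnalyst, cap it at 9%,
-- and grant each DataAnalyst query 3% of system resources.
CREATE WORKLOAD GROUP wgDataAnalyst
WITH (
    MIN_PERCENTAGE_RESOURCE = 6,
    CAP_PERCENTAGE_RESOURCE = 9,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 3
);

-- Map the DataAnalyst login to the workload group through a classifier.
CREATE WORKLOAD CLASSIFIER wcDataAnalyst
WITH (
    WORKLOAD_GROUP = 'wgDataAnalyst',
    MEMBERNAME = 'DataAnalyst'
);
```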
Metric 2: *Workload group allocation by cap resource percent* (Avg aggregation,
Metric 3: *Workload group queued queries* (Sum aggregation, `turquoise line`)<br> Filter: [Workload Group] = `wgDataAnalyst`<br> ![Screenshot shows a chart with the three metrics and filter.](./media/sql-data-warehouse-workload-management-portal-monitor/bottle-necked-wg.png)
-The chart shows that with a 9% cap on resources, the workload group is 90%+ utilized (from the *Workload group allocation by cap resource percent metric*). There is a steady queuing of queries as shown from the *Workload group queued queries metric*. In this case, increasing the `CAP_PERCENTAGE_RESOURCE` to a value higher than 9% will allow more queries to execute concurrently. Increasing the `CAP_PERCENTAGE_RESOURCE` assumes that there are enough resources available and not isolated by other workload groups. Verify the cap increased by checking the *Effective cap resource percent metric*. If more throughput is desired, also consider increasing the `REQUEST_MIN_RESOURCE_GRANT_PERCENT` to a value greater than 3. Increasing the `REQUEST_MIN_RESOURCE_GRANT_PERCENT` could allow queries to run faster.
+The chart shows that with a 9% cap on resources, the workload group is 90%+ utilized (from the *Workload group allocation by cap resource percent metric*). There's a steady queuing of queries as shown from the *Workload group queued queries metric*. In this case, increasing the `CAP_PERCENTAGE_RESOURCE` to a value higher than 9% allows more queries to execute concurrently. Increasing the `CAP_PERCENTAGE_RESOURCE` assumes that there are enough resources available and not isolated by other workload groups. Verify the cap increased by checking the *Effective cap resource percent metric*. If more throughput is desired, also consider increasing the `REQUEST_MIN_RESOURCE_GRANT_PERCENT` to a value greater than *3*. Increasing the `REQUEST_MIN_RESOURCE_GRANT_PERCENT` could allow queries to run faster.
## Next steps
synapse-analytics Query Specific Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-specific-files.md
Your first step is to **create a database** with a datasource that references st
This function returns the file name that the row originates from.
-The following sample reads the NYC Yellow Taxi data files for the last three months of 2017 and returns the number of rides per file. The OPENROWSET part of the query specifies which files will be read.
+The following sample reads the NYC Yellow Taxi data files for September 2017 and returns the number of rides per file. The OPENROWSET part of the query specifies which files will be read.
```sql SELECT
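The query is truncated above; a minimal sketch of a serverless SQL pool query that counts rides per source file with `filename()` might look like the following. The file path and data source name are placeholders, not values from the article.

```sql
-- Hedged sketch: count rides per file; the path and data source are hypothetical.
SELECT
    r.filename()  AS [file_name],
    COUNT_BIG(*)  AS rides_per_file
FROM OPENROWSET(
        BULK 'csv/taxi/yellow_tripdata_2017-09*.csv',  -- placeholder path
        DATA_SOURCE = 'MyStorageDataSource',           -- placeholder external data source
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        FIRSTROW = 2
    ) AS r
GROUP BY r.filename();
```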
synapse-analytics How To Monitor Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-database.md
In this section, we'll deep dive into setting up metrics, alerts, and logs in Az
### Metrics
-The most important type of Monitor data is the metric, which is also called the performance counter. Metrics are emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these metrics for monitoring and troubleshooting.
+The most important type of Monitor data is the metric, which is also called the performance counter. Metrics are emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these metrics for monitoring and troubleshooting.
-Azure Synapse Link emits the following metrics to Azure Monitor:
-
-| **Metric** | **Aggregation types** | **Description** |
-||||
-| Link connection events | Sum | Number of Synapse Link connection events, including start, stop, and failure |
-| Link latency in seconds | Max, Min, Avg | Synapse Link data processing latency in seconds |
-| Link processed data volume (bytes) | Sum | Data volume in bytes processed by Synapse Link |
-| Link processed rows | Sum | Row counts processed by Synapse Link |
-| Link table events | Sum | Number of Synapse Link table events, including snapshot, removal, and failure |
+For a list of metrics that Azure Synapse Link emits to Azure Monitor, see [Azure Synapse Link metrics](../monitor-synapse-analytics-reference.md#azure-synapse-link-metrics).
-Now let's step through how we can see these metrics in the Azure portal.
+Now let's step through how we can see these metrics in the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com).
Now let's step through how we can see these metrics in the Azure portal.
### Alerts
-Azure Monitor provides built-in functionality for setting up alerts to monitor all your Azure resources efficiently. Alerts allow you to monitor your telemetry and capture signals that indicate that something is happening on the specified resource. Once the signals are captured, an alert rule is defined to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, and notifications are sent through the appropriate channels.
-
-In this section, we're going to walk through how you can set up alerts for your Azure Synapse Link connection through Azure Synapse Analytics. Let's say, for example, that you're running your link connection and realize that you want to monitor the latency of your link connection. The workload requirements for this scenario require that your Engineering team be alerted when any link connection has a maximum latency over 900 seconds (or 15 minutes). Let's walk through how we would set up an alert for this example:
+In this section, we're going to walk through how you can set up [alerts](../monitor-synapse-analytics.md#alerts) for your Azure Synapse Link connection through Azure Synapse Analytics. Let's say, for example, that you're running your link connection and realize that you want to monitor the latency of your link connection. The workload requirements for this scenario require that your Engineering team be alerted when any link connection has a maximum latency over 900 seconds (or 15 minutes). Let's walk through how we would set up an alert for this example:
1. Sign in to the [Azure portal](https://portal.azure.com).
synapse-analytics How To Monitor Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-server-2022.md
In this section, we'll deep dive into setting up metrics, alerts, and logs in Az
### Metrics
-The most important type of Monitor data is the metric, which is also called the performance counter. Metrics are emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these metrics for monitoring and troubleshooting.
+The most important type of Monitor data is the metric, which is also called the performance counter. Metrics are emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these metrics for monitoring and troubleshooting.
-Azure Synapse Link emits the following metrics to Azure Monitor:
-
-| **Metric** | **Aggregation types** | **Description** |
-||||
-| Link connection events | Sum | Number of Synapse Link connection events, including start, stop, and failure |
-| Link latency in seconds | Max, Min, Avg | Synapse Link data processing latency in seconds |
-| Link processed data volume (bytes) | Sum | Data volume in bytes processed by Synapse Link |
-| Link processed rows | Sum | Row counts processed by Synapse Link |
-| Link table events | Sum | Number of Synapse Link table events, including snapshot, removal, and failure |
+For a list of metrics that Azure Synapse Link emits to Azure Monitor, see [Azure Synapse Link metrics](../monitor-synapse-analytics-reference.md#azure-synapse-link-metrics).
Now let's step through how we can see these metrics in the Azure portal.
Now let's step through how we can see these metrics in the Azure portal.
### Alerts
-Azure Monitor provides built-in functionality for setting up alerts to monitor all your Azure resources efficiently. Alerts allow you to monitor your telemetry and capture signals that indicate that something is happening on the specified resource. Once the signals are captured, an alert rule is defined to see if the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, and notifications are sent through the appropriate channels.
-
-In this section, we're going to walk through how you can set up alerts for your Azure Synapse Link connection through Azure Synapse Analytics. Let's say, for example, that you're running your link connection and realize that you want to monitor the latency of your link connection. The workload requirements for this scenario require that your Engineering team be alerted when any link connection has a maximum latency over 900 seconds (or 15 minutes). Let's walk through how we would set up an alert for this example:
+In this section, we're going to walk through how you can set up [alerts](../monitor-synapse-analytics.md#alerts) for your Azure Synapse Link connection through Azure Synapse Analytics. Let's say, for example, that you're running your link connection and realize that you want to monitor the latency of your link connection. The workload requirements for this scenario require that your Engineering team be alerted when any link connection has a maximum latency over 900 seconds (or 15 minutes). Let's walk through how we would set up an alert for this example:
1. Sign in to the [Azure portal](https://portal.azure.com).
update-manager Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/deploy-updates.md
To install one-time updates on a single VM:
:::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot that shows update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png"::: - Select **Include KB ID/package** to include in the updates. You can add multiple KB IDs and package names. When you add KB ID/package name, the next row appears. The package can have both name and version. . For example, use `3103696` or `3134815`. For Windows, you can refer to the [MSRC webpage](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base release. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, use `kernel*`, `glibc`, or `libc=1.0.1`. Based on the options specified, Update Manager shows a preview of OS updates under the **Selected Updates** section.
- - To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend selecting this option because updates that aren't displayed here might be installed, as newer updates might be available. You can excludedd multiple KB IDs and package names.
+ - To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend selecting this option because updates that aren't displayed here might be installed, as newer updates might be available. You can exclude multiple KB IDs and package names.
- To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date**. Select the date and select **Add** > **Next**. :::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot that shows the patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png":::
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set-flex.md
This content applies to the flexible orchestration mode. For uniform orchestrati
**Step 1: Add to the Virtual Machine Scale Set** - For sample code, see [Associate a virtual machine scale set with uniform orchestration to a Capacity Reservation group](capacity-reservation-associate-virtual-machine-scale-set.md).
-**Step 2: Add to the Virtual Machines deployed** - You must add the Capacity Reservation group to the Virtual Machines deployed using the Scale Set. Follow the same process used to associate a VM. For sample code, see [Associate a virtual machine to a Capacity Reservation group](capacity-reservation-associate-vm.md).
+**Step 2: Add to the Virtual Machines deployed** - You must add the Capacity Reservation group to the Virtual Machines deployed using the Scale Set depending on the upgrade mode. Follow the same process used to associate a VM. For sample code, see [Associate a virtual machine to a Capacity Reservation group](capacity-reservation-associate-vm.md).
## Next steps
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
The Windows image alias names and their details outputted by this command are:
```output Architecture Offer Publisher Sku Urn Alias Version -- - - - --
-x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest Win2022Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition:latest Win2022AzureEdition latest
x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest Win2022AzureEditionCore latest x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest Win2019Datacenter latest x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest Win2016Datacenter latest
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
You can disable encryption using Azure PowerShell, the Azure CLI, or with a Reso
2. Select the subscription, resource group, location, VM, volume type, legal terms, and agreement. 3. Click **Purchase** to disable disk encryption on a running Linux VM.
+> [!WARNING]
+> Once decryption has begun, it is advisable not to interfere with the process.
+ ### Remove the encryption extension If you want to decrypt your disks and remove the encryption extension, you must disable encryption **before** removing the extension; see [disable encryption](#disable-encryption).
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
The Windows image alias names and their details are:
```output Alias Architecture Offer Publisher Sku Urn Version -- -- - - -
-Win2022Datacenter x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest latest
+Win2022AzureEdition x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition:latest latest
Win2022AzureEditionCore x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest latest Win10 x64 Windows MicrosoftVisualStudio Windows-10-N-x64 MicrosoftVisualStudio:Windows:Windows-10-N-x64:latest latest Win2019Datacenter x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest latest
virtual-network How To Dhcp Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-dhcp-azure.md
Last updated 02/28/2024
Learn how to deploy a highly available DHCP server in Azure on a virtual machine. This server is used as a target for an on-premises DHCP relay agent to provide dynamic IP address allocation to on-premises clients. Broadcast packets directly from clients to a DHCP Server don't work in an Azure Virtual Network by design.
+> [!NOTE]
+> Traffic from an on-premises client directly to the DHCP server in Azure (source port UDP/68, destination port UDP/67) is still not supported, because this traffic is intercepted and handled differently. This results in timeout messages at the time of DHCP RENEW at T1, when the client attempts to reach the DHCP server in Azure directly. The DHCP RENEW succeeds when the renewal attempt is made at T2 via the DHCP relay agent. For more details on the T1 and T2 DHCP RENEW timers, see [RFC 2131](https://www.ietf.org/rfc/rfc2131.txt).
+ ## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
virtual-network Troubleshoot Vm Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/troubleshoot-vm-connectivity.md
This article helps administrators diagnose and resolve connectivity problems tha
To resolve these problems, follow the steps in the following section.
+> [!NOTE]
+> You can use the following:
+> * `netstat -an` to list the ports that the VM is listening on
+> * The `Test-NetConnection` cmdlet in PowerShell to display diagnostic information for a connection, such as a ping test and a TCP test
+>
## Resolution ### Azure VM cannot connect to another Azure VM in same virtual network
virtual-network Virtual Network Troubleshoot Connectivity Problem Between Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-connectivity-problem-between-vms.md
One Azure VM can't connect to another Azure VM.
8. [Try to connect to a VM network share](#step-8-try-to-connect-to-a-vm-network-share) 9. [Check Inter-VNet connectivity](#step-9-check-inter-vnet-connectivity)
+> [!NOTE]
+> You can also use the `Test-NetConnection` cmdlet in PowerShell to display diagnostic information for a connection.
+>
## Troubleshooting steps Follow these steps to troubleshoot the problem. After you complete each step, check whether the problem is resolved.
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
description: Answers to the most frequently asked questions about Microsoft Azur
Previously updated : 06/26/2020 Last updated : 07/22/2024
Unicast is supported in virtual networks. Multicast, broadcast, IP-in-IP encapsu
Azure virtual networks provide DHCP service and DNS to Azure Virtual Machines. However, you can also deploy a DHCP Server in an Azure VM to serve the on-prem clients via a DHCP Relay Agent.
-DHCP Server in Azure was previously marked as unsupported since the traffic to port UDP/67 was rate limited in Azure. However, recent platform updates have removed the rate limitation, enabling this capability.
+DHCP Server in Azure was previously considered not feasible because traffic to port UDP/67 was rate limited in Azure. However, recent platform updates have removed the rate limitation, enabling this capability.
> [!NOTE]
-> The on-premises client to DHCP Server (source port UDP/68, destination port UDP/67) is still not supported in Azure, since this traffic is intercepted and handled differently. So, this will result in some timeout messages at the time of DHCP RENEW at T1 when the client directly attempts to reach the DHCP Server in Azure, but this should succeed when the DHCP RENEW attempt is made at T2 via DHCP Relay Agent. For more details on the T1 and T2 DHCP RENEW timers, see [RFC 2131](https://www.ietf.org/rfc/rfc2131.txt).
+> Traffic from an on-premises client directly to the DHCP server in Azure (source port UDP/68, destination port UDP/67) is still not supported, because this traffic is intercepted and handled differently. This results in timeout messages at the time of DHCP RENEW at T1, when the client attempts to reach the DHCP server in Azure directly. The DHCP RENEW succeeds when the renewal attempt is made at T2 via the DHCP relay agent. For more details on the T1 and T2 DHCP RENEW timers, see [RFC 2131](https://www.ietf.org/rfc/rfc2131.txt).
### Can I ping a default gateway in a virtual network?
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Title: Azure VPN Gateway FAQ
-description: Learn about frequently asked questions for VPN Gateway cross-premises connections, hybrid configuration connections, and virtual network gateways. This FAQ contains comprehensive information about point-to-site, site-to-site, and VNet-to-VNet configuration settings.
+description: Get answers to frequently asked questions about VPN Gateway connections and configuration settings.
# VPN Gateway FAQ
+This article answers frequently asked questions about Azure VPN Gateway cross-premises connections, hybrid configuration connections, and virtual network (VNet) gateways. It contains comprehensive information about point-to-site (P2S), site-to-site (S2S), and VNet-to-VNet configuration settings, including the Internet Protocol Security (IPsec) and Internet Key Exchange (IKE) protocols.
+ ## <a name="connecting"></a>Connecting to virtual networks ### Can I connect virtual networks in different Azure regions?
-Yes. There's no region constraint. One virtual network can connect to another virtual network in the same region, or in a different Azure region.
+Yes. There's no region constraint. One virtual network can connect to another virtual network in the same Azure region or in a different region.
### Can I connect virtual networks in different subscriptions?
Yes.
### Can I specify private DNS servers in my VNet when configuring a VPN gateway?
-If you specified a DNS server or servers when you created your virtual network, VPN Gateway uses the DNS servers that you specified. If you specify a DNS server, verify that your DNS server can resolve the domain names needed for Azure.
+If you specify a Domain Name System (DNS) server or servers when you create your virtual network, the virtual private network (VPN) gateway uses those DNS servers. Verify that your specified DNS servers can resolve the domain names needed for Azure.
### Can I connect to multiple sites from a single virtual network?
-You can connect to multiple sites by using Windows PowerShell and the Azure REST APIs. See the [Multi-site and VNet-to-VNet Connectivity](#V2VMulti) FAQ section.
+You can connect to multiple sites by using Windows PowerShell and the Azure REST APIs. See the [Multi-site and VNet-to-VNet connectivity](#V2VMulti) FAQ section.
### Is there an additional cost for setting up a VPN gateway as active-active?
-No. However, costs for any additional public IPs will be charged accordingly. See [IP Address Pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
+No. However, costs for any additional public IPs are charged accordingly. See [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/).
### What are my cross-premises connection options?
-The following cross-premises virtual network gateway connections are supported:
+Azure VPN Gateway supports the following cross-premises gateway connections:
+
+* **Site-to-site**: VPN connection over IPsec (IKEv1 and IKEv2). This type of connection requires a VPN device or Windows Server Routing and Remote Access. For more information, see [Create a site-to-site VPN connection in the Azure portal](./tutorial-site-to-site-portal.md).
+* **Point-to-site**: VPN connection over Secure Socket Tunneling Protocol (SSTP) or IKEv2. This connection doesn't require a VPN device. For more information, see [Configure server settings for point-to-site VPN Gateway certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
+* **VNet-to-VNet**: This type of connection is the same as a site-to-site configuration. VNet-to-VNet is a VPN connection over IPsec (IKEv1 and IKEv2). It doesn't require a VPN device. For more information, see [Configure a VNet-to-VNet VPN gateway connection](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
+* **Azure ExpressRoute**: ExpressRoute is a private connection to Azure from your wide area network (WAN), not a VPN connection over the public internet. For more information, see the [ExpressRoute technical overview](../expressroute/expressroute-introduction.md) and the [ExpressRoute FAQ](../expressroute/expressroute-faqs.md).
+
+For more information about VPN gateway connections, see [What is Azure VPN Gateway?](vpn-gateway-about-vpngateways.md).
+
+### What is the difference between site-to-site and point-to-site connections?
+
+* *Site-to-site* (IPsec/IKE VPN tunnel) configurations are between your on-premises location and Azure. You can connect from any of your computers located on your premises to any virtual machine (VM) or role instance within your virtual network, depending on how you choose to configure routing and permissions. It's a great option for an always-available cross-premises connection and is well suited for hybrid configurations.
-* **Site-to-site:** VPN connection over IPsec (IKE v1 and IKE v2). This type of connection requires a VPN device or RRAS. For more information, see [Site-to-site](./tutorial-site-to-site-portal.md).
-* **Point-to-site:** VPN connection over SSTP (Secure Socket Tunneling Protocol) or IKE v2. This connection doesn't require a VPN device. For more information, see [Point-to-site](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
-* **VNet-to-VNet:** This type of connection is the same as a site-to-site configuration. VNet to VNet is a VPN connection over IPsec (IKE v1 and IKE v2). It doesn't require a VPN device. For more information, see [VNet-to-VNet](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
-* **ExpressRoute:** ExpressRoute is a private connection to Azure from your WAN, not a VPN connection over the public Internet. For more information, see the [ExpressRoute Technical Overview](../expressroute/expressroute-introduction.md) and the [ExpressRoute FAQ](../expressroute/expressroute-faqs.md).
+ This type of connection relies on an IPsec VPN appliance (hardware device or soft appliance). The appliance must be deployed at the edge of your network. To create this type of connection, you must have an externally facing IPv4 address.
-For more information about VPN Gateway connections, see [About VPN Gateway](vpn-gateway-about-vpngateways.md).
+* *Point-to-site* (VPN over SSTP) configurations let you connect from a single computer from anywhere to anything located in your virtual network. It uses the Windows built-in VPN client.
-### What is the difference between a site-to-site connection and point-to-site?
+ As part of the point-to-site configuration, you install a certificate and a VPN client configuration package. The package contains the settings that allow your computer to connect to any virtual machine or role instance within the virtual network.
+
+ This configuration is useful when you want to connect to a virtual network but aren't located on-premises. It's also a good option when you don't have access to VPN hardware or an externally facing IPv4 address, both of which are required for a site-to-site connection.
-**Site-to-site** (IPsec/IKE VPN tunnel) configurations are between your on-premises location and Azure. This means that you can connect from any of your computers located on your premises to any virtual machine or role instance within your virtual network, depending on how you choose to configure routing and permissions. It's a great option for an always-available cross-premises connection and is well suited for hybrid configurations. This type of connection relies on an IPsec VPN appliance (hardware device or soft appliance), which must be deployed at the edge of your network. To create this type of connection, you must have an externally facing IPv4 address.
+You can configure your virtual network to use both site-to-site and point-to-site concurrently, as long as you create your site-to-site connection by using a route-based VPN type for your gateway. Route-based VPN types are called *dynamic gateways* in the classic deployment model.
-**Point-to-site** (VPN over SSTP) configurations let you connect from a single computer from anywhere to anything located in your virtual network. It uses the Windows in-box VPN client. As part of the point-to-site configuration, you install a certificate and a VPN client configuration package, which contains the settings that allow your computer to connect to any virtual machine or role instance within the virtual network. It's great when you want to connect to a virtual network, but aren't located on-premises. It's also a good option when you don't have access to VPN hardware or an externally facing IPv4 address, both of which are required for a site-to-site connection.
+### Does a misconfiguration of custom DNS break the normal operation of a VPN gateway?
-You can configure your virtual network to use both site-to-site and point-to-site concurrently, as long as you create your site-to-site connection using a route-based VPN type for your gateway. Route-based VPN types are called dynamic gateways in the classic deployment model.
+For normal functioning, the VPN gateway must establish a secure connection with the Azure control plane, facilitated through public IP addresses. This connection relies on resolving communication endpoints via public URLs. By default, Azure VNets use the built-in Azure DNS service (168.63.129.16) to resolve these public URLs. This default behavior helps ensure seamless communication between the VPN gateway and the Azure control plane.
-### Does a misconfiguration of custom DNS break the normal operation of Azure VPN Gateway?
+When you're implementing a custom DNS within a VNet, it's crucial to configure a DNS forwarder that points to Azure DNS (168.63.129.16). This configuration helps maintain uninterrupted communication between the VPN gateway and the control plane. Failure to set up a DNS forwarder to Azure DNS can prevent Microsoft from performing operations and maintenance on the VPN gateway, which poses a security risk.
-For normal functioning, the Azure VPN Gateway must establish a secure, mandatory connection with the Azure control plane, facilitated through Public IPs. This connection relies on resolving communication endpoints via public URLs. By default, Azure Virtual Networks (VNets) utilize the built-in Azure DNS (168.63.129.16) to resolve these public URLs, ensuring seamless communication between the Azure VPN Gateway and the Azure control plane.
+To help ensure proper functionality and a healthy state for your VPN gateway, consider one of the following DNS configurations in the VNet:
-In implementation of a custom DNS within the VNet, it's crucial to configure a DNS forwarder that points to the Azure native DNS (168.63.129.16), to maintain uninterrupted communication between the VPN Gateway and control plane. Failure to set up a DNS forwarder to the native Azure DNS can prevent Microsoft from performing operations and maintenance on the Azure VPN Gateway, posing a security risk.
+* Revert to the Azure DNS default by removing the custom DNS within the VNet settings (recommended configuration).
+* Add a DNS forwarder that points to Azure DNS (168.63.129.16) to your custom DNS configuration. Depending on the specific rules and nature of your custom DNS, this setup might not resolve the issue as expected.
-To proper functionalities and healthy state to your VPN Gateway, consider one of the following configurations DNS configurations in VNet:
-1. Revert to the default native Azure DNS by removing the custom DNS within the VNet settings (recommended configuration).
-2. Add in your custom DNS configuration a DNS forwarder pointing to the native Azure DNS (IP address: 168.63.129.16). Considering the specific rules and nature of your custom DNS, this setup might not resolve and fix the issue as expected.
+### Can two VPN clients connected in point-to-site to the same VPN gateway communicate?
-### Can two VPN clients connected in Point-to-Site to the same VPN Gateway communicate?
+No. VPN clients connected in point-to-site to the same VPN gateway can't communicate with each other.
-Communication between VPN clients connected in Point-to-Site to the same VPN Gateway is not supported. When two VPN clients are connected to the same Point-to-Site (P2S) VPN Gateway instance, the VPN Gateway instance can automatically route traffic between them by determining the IP address each client is assigned from the address pool. However, if the VPN clients are connected to different VPN Gateway instances, routing between the VPN clients is not possible because each VPN Gateway instance is unaware of the IP address assigned to the client by the other instance.
+When two VPN clients are connected to the same point-to-site VPN gateway, the gateway can automatically route traffic between them by determining the IP address that each client is assigned from the address pool. However, if the VPN clients are connected to different VPN gateways, routing between the VPN clients isn't possible because each VPN gateway is unaware of the IP address that the other gateway assigned to the client.
-### Could point-to-site VPN connections be affected by a potential vulnerability known as "tunnel vision"?
+### Could a potential vulnerability known as "tunnel vision" affect point-to-site VPN connections?
-Microsoft is aware of reports discussing network technique that bypasses VPN encapsulation. This is an industry-wide issue impacting any operating system that implements a DHCP client according to its RFC specification and has support for DHCP option 121 routes, including Windows.
-As the research notes, mitigations include running the VPN inside of a VM that obtains a lease from a virtualized DHCP server to prevent the local networks DHCP server from installing routes altogether.
-More information about vulnerability can be found at [NVD - CVE-2024-3661 (nist.gov)](https://nvd.nist.gov/vuln/detail/CVE-2024-3661).
+Microsoft is aware of reports about a network technique that bypasses VPN encapsulation. This is an industry-wide issue. It affects any operating system that implements a Dynamic Host Configuration Protocol (DHCP) client according to its RFC specification and has support for DHCP option 121 routes, including Windows.
+
+As the research notes, mitigations include running the VPN inside a VM that obtains a lease from a virtualized DHCP server to prevent the local network's DHCP server from installing routes altogether. You can find more information about this vulnerability in the [NIST National Vulnerability Database](https://nvd.nist.gov/vuln/detail/CVE-2024-3661).
## <a name="privacy"></a>Privacy
No.
### Is a VPN gateway a virtual network gateway?
-A VPN gateway is a type of virtual network gateway. A VPN gateway sends encrypted traffic between your virtual network and your on-premises location across a public connection. You can also use a VPN gateway to send traffic between virtual networks. When you create a VPN gateway, you use the -GatewayType value 'Vpn'. For more information, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
+A VPN gateway is a type of virtual network gateway. A VPN gateway sends encrypted traffic between your virtual network and your on-premises location across a public connection. You can also use a VPN gateway to send traffic between virtual networks. When you create a VPN gateway, you use the `-GatewayType` value `Vpn`. For more information, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
### Why can't I specify policy-based and route-based VPN types?
-As of Oct 1, 2023, you can't create a policy-based VPN gateway through Azure portal. All new VPN gateways will automatically be created as route-based. If you already have a policy-based gateway, you don't need to upgrade your gateway to route-based. You can use Powershell/CLI to create the policy-based gateways.
+As of October 1, 2023, you can't create a policy-based VPN gateway through the Azure portal. All new VPN gateways are automatically created as route-based. If you already have a policy-based gateway, you don't need to upgrade your gateway to route-based. You can use Azure PowerShell or the Azure CLI to create the policy-based gateways.
-Previously, the older gateway SKUs didn't support IKEv1 for route-based gateways. Now, most of the current gateway SKUs support both IKEv1 and IKEv2.
+Previously, the older gateway product tiers (SKUs) didn't support IKEv1 for route-based gateways. Now, most of the current gateway SKUs support both IKEv1 and IKEv2.
[!INCLUDE [Route-based and policy-based table](../../includes/vpn-gateway-vpn-type-table.md)] ### Can I update my policy-based VPN gateway to route-based?
-No. A gateway type can't be changed from policy-based to route-based, or from route-based to policy-based. To change a gateway type, the gateway must be deleted and recreated. This process takes about 60 minutes. When you create the new gateway, you can't retain the IP address of the original gateway.
+No. A gateway type can't be changed from policy-based to route-based, or from route-based to policy-based. To change a gateway type, you must delete and re-create the gateway by taking the following steps. This process takes about 60 minutes. When you create the new gateway, you can't retain the IP address of the original gateway.
1. Delete any connections associated with the gateway.
-1. Delete the gateway using one of the following articles:
+1. Delete the gateway by using one of the following articles:
* [Azure portal](vpn-gateway-delete-vnet-gateway-portal.md) * [Azure PowerShell](vpn-gateway-delete-vnet-gateway-powershell.md) * [Azure PowerShell - classic](vpn-gateway-delete-vnet-gateway-classic-powershell.md)
-1. Create a new gateway using the gateway type that you want, and then complete the VPN setup. For steps, see the [Site-to-site tutorial](./tutorial-site-to-site-portal.md#VNetGateway).
+1. Create a new gateway by using the gateway type that you want, and then complete the VPN setup. For steps, see the [site-to-site tutorial](./tutorial-site-to-site-portal.md#VNetGateway).
### Can I specify my own policy-based traffic selectors?
-Yes, traffic selectors can be defined via the *trafficSelectorPolicies* attribute on a connection via the [New-AzIpsecTrafficSelectorPolicy](/powershell/module/az.network/new-azipsectrafficselectorpolicy) PowerShell command. For the specified traffic selector to take effect, ensure the [Use Policy Based Traffic Selectors](vpn-gateway-connect-multiple-policybased-rm-ps.md#enablepolicybased) option is enabled.
+Yes, you can define traffic selectors by using the `trafficSelectorPolicies` attribute on a connection via the [New-AzIpsecTrafficSelectorPolicy](/powershell/module/az.network/new-azipsectrafficselectorpolicy) Azure PowerShell command. For the specified traffic selector to take effect, be sure to [enable policy-based traffic selectors](vpn-gateway-connect-multiple-policybased-rm-ps.md#enablepolicybased).
+
+The custom-configured traffic selectors are proposed only when a VPN gateway initiates the connection. A VPN gateway accepts any traffic selectors proposed by a remote gateway (on-premises VPN device). This behavior is consistent among all connection modes (`Default`, `InitiatorOnly`, and `ResponderOnly`).
+
+### Do I need a gateway subnet?
+
+Yes. The gateway subnet contains the IP addresses that the virtual network gateway services use. You need to create a gateway subnet for your virtual network in order to configure a virtual network gateway.
-The custom configured traffic selectors are proposed only when an Azure VPN gateway initiates the connection. A VPN gateway accepts any traffic selectors proposed by a remote gateway (on-premises VPN device). This behavior is consistent between all connection modes (Default, InitiatorOnly, and ResponderOnly).
+All gateway subnets must be named `GatewaySubnet` to work properly. Don't name your gateway subnet something else. And don't deploy VMs or anything else to the gateway subnet.
-### Do I need a GatewaySubnet?
+When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway service.
-Yes. The gateway subnet contains the IP addresses that the virtual network gateway services use. You need to create a gateway subnet for your virtual network in order to configure a virtual network gateway. All gateway subnets must be named 'GatewaySubnet' to work properly. Don't name your gateway subnet something else. And don't deploy VMs or anything else to the gateway subnet.
+Some configurations require more IP addresses to be allocated to the gateway services than do others. Make sure that your gateway subnet contains enough IP addresses to accommodate future growth and possible new connection configurations.
-When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway service. Some configurations require more IP addresses to be allocated to the gateway services than do others. You want to make sure your gateway subnet contains enough IP addresses to accommodate future growth and possible additional new connection configurations. So, while you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26, /25 etc.). Look at the requirements for the configuration that you want to create and verify that the gateway subnet you have will meet those requirements.
+Although you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26, /25, and so on). Verify that your existing gateway subnet meets the requirements for the configuration that you want to create.
-### Can I deploy Virtual Machines or role instances to my gateway subnet?
+### Can I deploy virtual machines or role instances to my gateway subnet?
No. ### Can I get my VPN gateway IP address before I create it?
-Azure Standard SKU public IP resources must use a static allocation method. Therefore, you'll have the public IP address for your VPN gateway as soon as you create the Standard SKU public IP resource you intend to use for it.
+Azure Standard SKU public IP resources must use a static allocation method. You'll have the public IP address for your VPN gateway as soon as you create the Standard SKU public IP resource that you intend to use for it.
### Can I request a static public IP address for my VPN gateway?
-Standard SKU public IP address resources use a static allocation method. Going forward, you must use a Standard SKU public IP address when you create a new VPN gateway. This applies to all gateway SKUs except the Basic SKU. The Basic gateway SKU currently supports only Basic SKU public IP addresses. We'll soon be adding support for Standard SKU public IP addresses for Basic gateway SKUs.
+Standard SKU public IP address resources use a static allocation method. Going forward, you must use a Standard SKU public IP address when you create a new VPN gateway. This requirement applies to all gateway SKUs except the Basic SKU. The Basic SKU currently supports only Basic SKU public IP addresses. We're working on adding support for Standard SKU public IP addresses for the Basic SKU.
-For non-zone-redundant and non-zonal gateways that were previously created (gateway SKUs that do *not* have *AZ* in the name), dynamic IP address assignment is supported, but is being phased out. When you use a dynamic IP address, the IP address doesn't change after it has been assigned to your VPN gateway. The only time the VPN gateway IP address changes is when the gateway is deleted and then re-created. The VPN gateway public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
+For non-zone-redundant and non-zonal gateways that were previously created (gateway SKUs that don't have *AZ* in the name), dynamic IP address assignment is supported but is being phased out. When you use a dynamic IP address, the IP address doesn't change after it's assigned to your VPN gateway. The only time that the VPN gateway IP address changes is when the gateway is deleted and then re-created. The public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
-### How does Public IP address Basic SKU retirement affect my VPN gateways?
+### How does the retirement of Basic SKU public IP addresses affect my VPN gateways?
-We're taking action to ensure the continued operation of deployed VPN gateways that utilize Basic SKU public IP addresses. If you already have VPN gateways with Basic SKU public IP addresses, there's no need for you to take any action.
+We're taking action to ensure the continued operation of deployed VPN gateways that use Basic SKU public IP addresses. If you already have VPN gateways with Basic SKU public IP addresses, there's no need for you to take any action.
-However, it's important to note that Basic SKU public IP addresses are being phased out. Going forward, when creating a new VPN gateway, you must use the **Standard SKU** public IP address. Further details on the retirement of Basic SKU public IP addresses can be found [here](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired).
+However, Basic SKU public IP addresses are being phased out. Going forward, when you create a VPN gateway, you must use the Standard SKU public IP address. You can find details on the retirement of Basic SKU public IP addresses in the [Azure Updates announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired).
-### How does my VPN tunnel get authenticated?
+### How is my VPN tunnel authenticated?
-Azure VPN uses PSK (Pre-Shared Key) authentication. We generate a pre-shared key (PSK) when we create the VPN tunnel. You can change the autogenerated PSK to your own with the Set Pre-Shared Key PowerShell cmdlet or REST API.
+Azure VPN Gateway uses preshared key (PSK) authentication. We generate a PSK when we create the VPN tunnel. You can change the automatically generated PSK to your own by using the Set Pre-Shared Key REST API or PowerShell cmdlet.
-### Can I use the Set Pre-Shared Key API to configure my policy-based (static routing) gateway VPN?
+### Can I use the Set Pre-Shared Key REST API to configure my policy-based (static routing) gateway VPN?
-Yes, the Set Pre-Shared Key API and PowerShell cmdlet can be used to configure both Azure policy-based (static) VPNs and route-based (dynamic) routing VPNs.
+Yes. You can use the Set Pre-Shared Key REST API and PowerShell cmdlet to configure both Azure policy-based (static) VPNs and route-based (dynamic) routing VPNs.
### Can I use other authentication options?
-We're limited to using preshared keys (PSK) for authentication.
+You're limited to using preshared keys for authentication.
### How do I specify which traffic goes through the VPN gateway?
-#### Resource Manager deployment model
+For the Azure Resource Manager deployment model:
-* PowerShell: use "AddressPrefix" to specify traffic for the local network gateway.
-* Azure portal: navigate to the Local network gateway > Configuration > Address space.
+* Azure PowerShell: Use `AddressPrefix` to specify traffic for the local network gateway.
+* Azure portal: Go to *local network gateway* > **Configuration** > **Address space**.
-#### Classic deployment model
+For the classic deployment model:
-* Azure portal: navigate to the classic virtual network > VPN connections > Site-to-site VPN connections > Local site name > Local site > Client address space.
+* Azure portal: Go to the classic virtual network, and then go to **VPN connections** > **Site-to-site VPN connections** > *local site name* > *local site* > **Client address space**.
### Can I use NAT-T on my VPN connections?
-Yes, NAT traversal (NAT-T) is supported. Azure VPN Gateway will NOT perform any NAT-like functionality on the inner packets to/from the IPsec tunnels. In this configuration, ensure the on-premises device initiates the IPSec tunnel.
+Yes, network address translation traversal (NAT-T) is supported. Azure VPN Gateway does *not* perform any NAT-like functionality on the inner packets to or from the IPsec tunnels. In this configuration, ensure that the on-premises device initiates the IPsec tunnel.
### Can I set up my own VPN server in Azure and use it to connect to my on-premises network?
-Yes, you can deploy your own VPN gateways or servers in Azure either from the Azure Marketplace or creating your own VPN routers. You must configure user-defined routes in your virtual network to ensure traffic is routed properly between your on-premises networks and your virtual network subnets.
+Yes. You can deploy your own VPN gateways or servers in Azure from Azure Marketplace or by creating your own VPN routers. You must configure user-defined routes in your virtual network to ensure that traffic is routed properly between your on-premises networks and your virtual network subnets.
### <a name="gatewayports"></a>Why are certain ports opened on my virtual network gateway?
-They're required for Azure infrastructure communication. They're protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, won't be able to cause any effect on those endpoints.
+They're required for Azure infrastructure communication. Azure certificates help protect them by locking them down. Without proper certificates, external entities, including the customers of those gateways, can't cause any effect on those endpoints.
-A virtual network gateway is fundamentally a multi-homed device with one NIC tapping into the customer private network, and one NIC facing the public network. Azure infrastructure entities can't tap into customer private networks for compliance reasons, so they need to utilize public endpoints for infrastructure communication. The public endpoints are periodically scanned by Azure security audit.
+A virtual network gateway is fundamentally a multihomed device. One network adapter taps into the customer private network, and one network adapter faces the public network. Azure infrastructure entities can't tap into customer private networks for compliance reasons, so they need to use public endpoints for infrastructure communication. An Azure security audit periodically scans the public endpoints.
-### <a name="vpn-basic"></a>Can I create a VPN gateway using the Basic gateway SKU in the portal?
+### <a name="vpn-basic"></a>Can I create a VPN gateway by using the Basic SKU in the portal?
-No. The Basic SKU isn't available in the portal. You can create a Basic SKU VPN gateway using Azure CLI or PowerShell.
+No. The Basic SKU isn't available in the portal. You can create a Basic SKU VPN gateway by using the Azure CLI or Azure PowerShell.
### Where can I find information about gateway types, requirements, and throughput? See the following articles:+ * [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md) * [About gateway SKUs](about-gateway-skus.md)
-## <a name="sku-deprecate"></a>SKU deprecation for legacy SKUs
+## <a name="sku-deprecate"></a>Deprecation of older SKUs
+
+The Standard and High Performance SKUs will be deprecated on September 30, 2025. You can view the announcement on the [Azure Updates site](https://go.microsoft.com/fwlink/?linkid=2255127). The product team will make a migration path available for these SKUs by November 30, 2024. For more information, see the [VPN Gateway legacy SKUs](vpn-gateway-about-skus-legacy.md#sku-deprecation) article.
-The Standard and High Performance SKUs will be deprecated on September 30, 2025. You can view the announcement [here](https://go.microsoft.com/fwlink/?linkid=2255127). The product team will make a migration path available for these SKUs by November 30, 2024. For more information, see the [VPN Gateway legacy SKUs](vpn-gateway-about-skus-legacy.md#sku-deprecation) article. **At this time, there's no action that you need to take.**
+*At this time, there's no action that you need to take.*
[!INCLUDE [legacy SKU deprecation](../../includes/vpn-gateway-deprecate-sku-faq.md)]
The Standard and High Performance SKUs will be deprecated on September 30, 2025.
### What should I consider when selecting a VPN device?
-We've validated a set of standard site-to-site VPN devices in partnership with device vendors. A list of known compatible VPN devices, their corresponding configuration instructions or samples, and device specs can be found in the [About VPN devices](vpn-gateway-about-vpn-devices.md) article. All devices in the device families listed as known compatible should work with Virtual Network. To help configure your VPN device, refer to the device configuration sample or link that corresponds to appropriate device family.
+We've validated a set of standard site-to-site VPN devices in partnership with device vendors. You can find a list of known compatible VPN devices, their corresponding configuration instructions or samples, and device specifications in the [About VPN devices](vpn-gateway-about-vpn-devices.md) article.
+
+All devices in the device families listed as known compatible should work with virtual networks. To help configure your VPN device, refer to the device configuration sample or link that corresponds to the appropriate device family.
### Where can I find VPN device configuration settings?
### How do I edit VPN device configuration samples?
-For information about editing device configuration samples, see [Editing samples](vpn-gateway-about-vpn-devices.md#editing).
+See [Editing device configuration samples](vpn-gateway-about-vpn-devices.md#editing).
### Where do I find IPsec and IKE parameters?
-For IPsec/IKE parameters, see [Parameters](vpn-gateway-about-vpn-devices.md#ipsec).
+See [Default IPsec/IKE parameters](vpn-gateway-about-vpn-devices.md#ipsec).
### Why does my policy-based VPN tunnel go down when traffic is idle?
-This is expected behavior for policy-based (also known as static routing) VPN gateways. When the traffic over the tunnel is idle for more than 5 minutes, the tunnel is torn down. When traffic starts flowing in either direction, the tunnel is reestablished immediately.
+This behavior is expected for policy-based (also known as *static routing*) VPN gateways. When the traffic over the tunnel is idle for more than five minutes, the tunnel is torn down. When traffic starts flowing in either direction, the tunnel is reestablished immediately.
### Can I use software VPNs to connect to Azure?
-We support Windows Server 2012 Routing and Remote Access (RRAS) servers for site-to-site cross-premises configuration.
+We support Windows Server 2012 Routing and Remote Access servers for site-to-site cross-premises configuration.
-Other software VPN solutions should work with our gateway as long as they conform to industry standard IPsec implementations. Contact the vendor of the software for configuration and support instructions.
+Other software VPN solutions should work with the gateway, as long as they conform to industry-standard IPsec implementations. For configuration and support instructions, contact the vendor of the software.
### Can I connect to a VPN gateway via point-to-site when located at a site that has an active site-to-site connection?
-Yes, but the Public IP address(es) of the point-to-site client must be different than the Public IP address(es) used by the site-to-site VPN device, or else the point-to-site connection won't work. Point-to-site connections with IKEv2 can't be initiated from the same Public IP address(es) where a site-to-site VPN connection is configured on the same Azure VPN gateway.
+Yes, but the public IP addresses of the point-to-site client must be different from the public IP addresses that the site-to-site VPN device uses, or else the point-to-site connection won't work. Point-to-site connections with IKEv2 can't be initiated from the same public IP addresses where a site-to-site VPN connection is configured on the same VPN gateway.
-## <a name="P2S"></a>Point-to-site FAQ
+## <a name="P2S"></a>Point-to-site connections
[!INCLUDE [P2S FAQ All](../../includes/vpn-gateway-faq-p2s-all-include.md)]
-## <a name="P2S-cert"></a>Point-to-site - certificate authentication
+## <a name="P2S-cert"></a>Point-to-site connections with certificate authentication
[!INCLUDE [P2S Azure cert](../../includes/vpn-gateway-faq-p2s-azurecert-include.md)]
-## <a name="P2SRADIUS"></a>Point-to-site - RADIUS authentication
+## <a name="P2SRADIUS"></a>Point-to-site connections with RADIUS authentication
### Is RADIUS authentication supported on all Azure VPN Gateway SKUs?
RADIUS authentication is supported for all SKUs except the Basic SKU.
-For legacy SKUs, RADIUS authentication is supported on Standard and High Performance SKUs.
+For earlier SKUs, RADIUS authentication is supported on Standard and High Performance SKUs.
### Is RADIUS authentication supported for the classic deployment model?
-No. RADIUS authentication isn't supported for the classic deployment model.
+No.
### What is the timeout period for RADIUS requests sent to the RADIUS server?
-RADIUS requests are set to timeout after 30 seconds. User defined timeout values aren't supported today.
+RADIUS requests are set to time out after 30 seconds. User-defined timeout values aren't currently supported.
-### Are 3rd-party RADIUS servers supported?
+### Are third-party RADIUS servers supported?
-Yes, 3rd-party RADIUS servers are supported.
+Yes.
-### What are the connectivity requirements to ensure that the Azure gateway is able to reach an on-premises RADIUS server?
+### What are the connectivity requirements to ensure that the Azure gateway can reach an on-premises RADIUS server?
-A site-to-site VPN connection to the on-premises site, with the proper routes configured, is required.
+You need a site-to-site VPN connection to the on-premises site, with the proper routes configured.
-### Can traffic to an on-premises RADIUS server (from the Azure VPN gateway) be routed over an ExpressRoute connection?
+### Can traffic to an on-premises RADIUS server (from the VPN gateway) be routed over an ExpressRoute connection?
-No. It can only be routed over a site-to-site connection.
+No. It can be routed only over a site-to-site connection.
-### Is there a change in the number of SSTP connections supported with RADIUS authentication? What is the maximum number of SSTP and IKEv2 connections supported?
+### Is there a change in the number of supported SSTP connections with RADIUS authentication? What is the maximum number of supported SSTP and IKEv2 connections?
-There's no change in the maximum number of SSTP connections supported on a gateway with RADIUS authentication. It remains 128 for SSTP, but depends on the gateway SKU for IKEv2. For more information on the number of connections supported, see [About gateway SKUs](about-gateway-skus.md).
+There's no change in the maximum number of supported SSTP connections on a gateway with RADIUS authentication. It remains 128 for SSTP, but it depends on the gateway SKU for IKEv2. For more information on the number of supported connections, see [About gateway SKUs](about-gateway-skus.md).
-### What is the difference between doing certificate authentication using a RADIUS server vs. using Azure native certificate authentication (by uploading a trusted certificate to Azure)?
+### What is the difference between certificate authentication through a RADIUS server and Azure native certificate authentication through the upload of a trusted certificate?
-In RADIUS certificate authentication, the authentication request is forwarded to a RADIUS server that handles the actual certificate validation. This option is useful if you want to integrate with a certificate authentication infrastructure that you already have through RADIUS.
+In RADIUS certificate authentication, the authentication request is forwarded to a RADIUS server that handles the certificate validation. This option is useful if you want to integrate with a certificate authentication infrastructure that you already have through RADIUS.
-When using Azure for certificate authentication, the Azure VPN gateway performs the validation of the certificate. You need to upload your certificate public key to the gateway. You can also specify list of revoked certificates that shouldnΓÇÖt be allowed to connect.
+When you use Azure for certificate authentication, the VPN gateway performs the validation of the certificate. You need to upload your certificate public key to the gateway. You can also specify a list of revoked certificates that shouldn't be allowed to connect.
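For illustration, uploading a root certificate's public key and revoking a specific client certificate with Azure PowerShell might look like the following sketch. The gateway name, file path, and thumbprint are placeholder assumptions.

```azurepowershell
# A minimal sketch; gateway name, file path, and thumbprint are placeholders.
# The .cer file is assumed to be a Base-64 export of the root certificate's public key.
$lines = Get-Content "C:\certs\P2SRootCert.cer"
$certBase64 = ($lines | Where-Object { $_ -notmatch "CERTIFICATE" }) -join ""

# Upload the trusted root certificate's public key data to the gateway.
Add-AzVpnClientRootCertificate -VpnClientRootCertificateName "P2SRootCert" `
    -VirtualNetworkGatewayName "vpngw" -ResourceGroupName "rg-network" `
    -PublicCertData $certBase64

# Optionally block a specific client certificate by its thumbprint.
Add-AzVpnClientRevokedCertificate -VpnClientRevokedCertificateName "RevokedClient1" `
    -VirtualNetworkGatewayName "vpngw" -ResourceGroupName "rg-network" `
    -Thumbprint "<client-certificate-thumbprint>"
```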
-### Does RADIUS authentication support Network Policy Server (NPS) integration for multifactor authorization (MFA)?
+### Does RADIUS authentication support Network Policy Server integration for multifactor authentication?
-If your MFA is text based (SMS, mobile app verification code etc.) and requires the user to enter a code or text in the VPN client UI, the authentication won't succeed and isn't a supported scenario. See [Integrate Azure VPN gateway RADIUS authentication with NPS server for multifactor authentication](vpn-gateway-radius-mfa-nsp.md).
+If your multifactor authentication is text based (for example, SMS or a mobile app verification code) and requires the user to enter a code or text in the VPN client UI, the authentication won't succeed and isn't a supported scenario. See [Integrate Azure VPN gateway RADIUS authentication with NPS server for multifactor authentication](vpn-gateway-radius-mfa-nsp.md).
-### Does RADIUS authentication work with both IKEv2, and SSTP VPN?
+### Does RADIUS authentication work with both IKEv2 and SSTP VPN?
-Yes, RADIUS authentication is supported for both IKEv2, and SSTP VPN.
+Yes, RADIUS authentication is supported for both IKEv2 and SSTP VPN.
### Does RADIUS authentication work with the OpenVPN client?
RADIUS authentication is supported for the OpenVPN protocol.
[!INCLUDE [vpn-gateway-vnet-vnet-faq-include](../../includes/vpn-gateway-faq-vnet-vnet-include.md)]
-### How do I enable routing between my site-to-site VPN connection and my ExpressRoute?
+### How do I enable routing between my site-to-site VPN connection and ExpressRoute?
+
+If you want to enable routing between your branch connected to ExpressRoute and your branch connected to a site-to-site VPN, you need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
-If you want to enable routing between your branch connected to ExpressRoute and your branch connected to a site-to-site VPN connection, you'll need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
+### Can I use a VPN gateway to transit traffic between my on-premises sites or to another virtual network?
-### Can I use Azure VPN gateway to transit traffic between my on-premises sites or to another virtual network?
+* **Resource Manager deployment model**
-**Resource Manager deployment model**<br>
-Yes. See the [BGP](#bgp) section for more information.
+ Yes. See the [BGP and routing](#bgp) section for more information.
-**Classic deployment model**<br>
-Transit traffic via Azure VPN gateway is possible using the classic deployment model, but relies on statically defined address spaces in the network configuration file. BGP isn't yet supported with Azure Virtual Networks and VPN gateways using the classic deployment model. Without BGP, manually defining transit address spaces is very error prone, and not recommended.
+* **Classic deployment model**
+
+ Transiting traffic via a VPN gateway is possible when you use the classic deployment model, but it relies on statically defined address spaces in the network configuration file. Border Gateway Protocol (BGP) isn't currently supported with Azure virtual networks and VPN gateways via the classic deployment model. Without BGP, manually defining transit address spaces is error prone and not recommended.
### Does Azure generate the same IPsec/IKE preshared key for all my VPN connections for the same virtual network?
-No, Azure by default generates different preshared keys for different VPN connections. However, you can use the `Set VPN Gateway Key` REST API or PowerShell cmdlet to set the key value you prefer. The key MUST only contain printable ASCII characters except space, hyphen (-) or tilde (~).
+No. By default, Azure generates different preshared keys for different VPN connections. However, you can use the Set VPN Gateway Key REST API or PowerShell cmdlet to set the key value that you prefer. The key must contain only printable ASCII characters, except space, hyphen (-), or tilde (~).
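For example, a minimal Azure PowerShell sketch (the connection name, resource group, and key value are placeholders):

```azurepowershell
# A minimal sketch; the connection name, resource group, and key value are placeholders.
# The key must contain only printable ASCII characters, excluding space, hyphen, and tilde.
Set-AzVirtualNetworkGatewayConnectionSharedKey -Name "VNet1-to-Site1" `
    -ResourceGroupName "rg-network" -Value "YourCustomPreSharedKey123"
```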
### Do I get more bandwidth with more site-to-site VPNs than for a single virtual network?
-No, all VPN tunnels, including point-to-site VPNs, share the same Azure VPN gateway and the available bandwidth.
+No. All VPN tunnels, including point-to-site VPNs, share the same VPN gateway and the available bandwidth.
-### Can I configure multiple tunnels between my virtual network and my on-premises site using multi-site VPN?
+### Can I configure multiple tunnels between my virtual network and my on-premises site by using multi-site VPN?
Yes, but you must configure BGP on both tunnels to the same location.
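As a sketch, the Azure PowerShell configuration might look like the following, where two on-premises VPN devices at the same site are represented by two local network gateways that share an ASN, and both connections enable BGP. Names, IP addresses, the ASN, and keys are placeholder assumptions.

```azurepowershell
# A minimal sketch; names, IP addresses, ASN, and keys are placeholders.
$gw = Get-AzVirtualNetworkGateway -Name "vpngw" -ResourceGroupName "rg-network"

# One local network gateway per on-premises VPN device, both using the same ASN.
$lng1 = New-AzLocalNetworkGateway -Name "site1-device1" -ResourceGroupName "rg-network" `
    -Location "eastus" -GatewayIpAddress "203.0.113.10" -AddressPrefix "10.200.0.1/32" `
    -Asn 65010 -BgpPeeringAddress "10.200.0.1"
$lng2 = New-AzLocalNetworkGateway -Name "site1-device2" -ResourceGroupName "rg-network" `
    -Location "eastus" -GatewayIpAddress "203.0.113.11" -AddressPrefix "10.200.0.2/32" `
    -Asn 65010 -BgpPeeringAddress "10.200.0.2"

# Create both connections with BGP enabled.
New-AzVirtualNetworkGatewayConnection -Name "to-site1-device1" -ResourceGroupName "rg-network" `
    -Location "eastus" -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng1 `
    -ConnectionType IPsec -SharedKey "PlaceholderKey1" -EnableBgp $true
New-AzVirtualNetworkGatewayConnection -Name "to-site1-device2" -ResourceGroupName "rg-network" `
    -Location "eastus" -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng2 `
    -ConnectionType IPsec -SharedKey "PlaceholderKey2" -EnableBgp $true
```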
-### Does Azure VPN Gateway honor AS Path prepending to influence routing decisions between multiple connections to my on-premises sites?
+### Does Azure VPN Gateway honor AS path prepending to influence routing decisions between multiple connections to my on-premises sites?
-Yes, Azure VPN gateway honors AS Path prepending to help make routing decisions when BGP is enabled. A shorter AS Path is preferred in BGP path selection.
+Yes, Azure VPN Gateway honors autonomous system (AS) path prepending to help make routing decisions when BGP is enabled. A shorter AS path is preferred in BGP path selection.
### Can I use the RoutingWeight property when creating a new VPN VirtualNetworkGateway connection?
-No, such setting is reserved for ExpressRoute gateway connections. If you want to influence routing decisions between multiple connections, you need to use AS Path prepending.
+No. Such a setting is reserved for ExpressRoute gateway connections. If you want to influence routing decisions between multiple connections, you need to use AS path prepending.
### Can I use point-to-site VPNs with my virtual network with multiple VPN tunnels?
-Yes, point-to-site (P2S) VPNs can be used with the VPN gateways connecting to multiple on-premises sites and other virtual networks.
+Yes. You can use point-to-site VPNs with the VPN gateways connecting to multiple on-premises sites and other virtual networks.
### Can I connect a virtual network with IPsec VPNs to my ExpressRoute circuit?
-Yes, this is supported. For more information, see [Configure ExpressRoute and site-to-site VPN connections that coexist](../expressroute/expressroute-howto-coexist-classic.md).
+Yes, this is supported. For more information, see [Configure ExpressRoute and site-to-site coexisting connections](../expressroute/expressroute-howto-coexist-classic.md).
## <a name="ipsecike"></a>IPsec/IKE policy
Yes. See [Configure forced tunneling](vpn-gateway-about-forced-tunneling.md).
### If my virtual machine is in a virtual network and I have a cross-premises connection, how should I connect to the VM?
-You have a few options. If you have RDP enabled for your VM, you can connect to your virtual machine by using the private IP address. In that case, you would specify the private IP address and the port that you want to connect to (typically 3389). You'll need to configure the port on your virtual machine for the traffic.
+If you have RDP enabled for your VM, you can connect to your virtual machine by using the private IP address. In that case, you specify the private IP address and the port that you want to connect to (typically 3389). You need to configure the port on your virtual machine for the traffic.
-You can also connect to your virtual machine by private IP address from another virtual machine that's located on the same virtual network. You can't RDP to your virtual machine by using the private IP address if you're connecting from a location outside of your virtual network. For example, if you have a point-to-site virtual network configured and you don't establish a connection from your computer, you can't connect to the virtual machine by private IP address.
+You can also connect to your virtual machine by private IP address from another virtual machine that's located on the same virtual network. You can't RDP to your virtual machine by using the private IP address if you're connecting from a location outside your virtual network. For example, if you have a point-to-site virtual network configured and you don't establish a connection from your computer, you can't connect to the virtual machine by private IP address.
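For example, to connect by private IP address from a client that's connected over the point-to-site or site-to-site tunnel, you might start the RDP session like this (the IP address is a placeholder):

```azurepowershell
# A minimal sketch; 10.1.1.4 is a placeholder for the VM's private IP address.
# Run this from a client that's connected over the point-to-site or site-to-site tunnel.
mstsc /v:10.1.1.4:3389
```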
### If my virtual machine is in a virtual network with cross-premises connectivity, does all the traffic from my VM go through that connection?
-No. Only the traffic that has a destination IP that is contained in the virtual network Local Network IP address ranges that you specified goes through the virtual network gateway. Traffic has a destination IP located within the virtual network stays within the virtual network. Other traffic is sent through the load balancer to the public networks, or if forced tunneling is used, sent through the Azure VPN gateway.
+No. Only the traffic that has a destination IP that's contained in the virtual network's local network IP address ranges that you specified goes through the virtual network gateway.
+
+Traffic that has a destination IP located within the virtual network stays within the virtual network. Other traffic is sent through the load balancer to the public networks. Or if you use forced tunneling, the traffic is sent through the VPN gateway.
### How do I troubleshoot an RDP connection to a VM?
### How do I find out more about customer-controlled gateway maintenance?
-For more information, see the [VPN Gateway customer-controlled gateway maintenance](customer-controlled-gateway-maintenance.md) article.
+For more information, see the [Configure customer-controlled gateway maintenance for VPN Gateway](customer-controlled-gateway-maintenance.md) article.
-## Next steps
+## Related content
-* For more information about VPN Gateway, see [About VPN Gateway](vpn-gateway-about-vpngateways.md).
+* For more information about VPN Gateway, see [What is Azure VPN Gateway?](vpn-gateway-about-vpngateways.md).
* For more information about VPN Gateway configuration settings, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).

**"OpenVPN" is a trademark of OpenVPN Inc.**