Updates from: 12/04/2023 02:07:56
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
Previously updated : 04/19/2022 Last updated : 11/14/2023
The following IDs are used for claims transformations error messages:
| `DateTimeGreaterThan` |[AssertDateTimeIsGreaterThan](date-transformations.md#assertdatetimeisgreaterthan) | Claim value comparison failed: The provided left operand is greater than the right operand.|
| `UserMessageIfClaimsTransformationStringsAreNotEqual` |[AssertStringClaimsAreEqual](string-transformations.md#assertstringclaimsareequal) | Claim value comparison failed using StringComparison "OrdinalIgnoreCase".|
-### Claims transformations example
+### Claims transformations example 1:
+This example shows localized messages for local account signup.
```xml 
<LocalizedResources Id="api.localaccountsignup.en">
The following IDs are used for claims transformations error messages:
</LocalizedResources> 
```
+### Claims transformations example 2:
+This example shows localized messages for local account password reset.
+
+```xml
+<LocalizedResources Id="api.localaccountpasswordreset.en">
+ <LocalizedStrings>
+ <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfClaimsTransformationBooleanValueIsNotEqual">You cannot use the old password</LocalizedString>
+ </LocalizedStrings>
+</LocalizedResources>
+```
+ 
## Next steps 
See the following articles for localization examples:
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Title: Reduce service costs using Azure Advisor description: Use Azure Advisor to optimize the cost of your Azure deployments. Previously updated : 10/29/2021 Last updated : 11/08/2023
Azure Advisor helps you optimize and reduce your overall Azure spend by identify
1. On the **Advisor** dashboard, select the **Cost** tab.
-## Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances
+## Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances
-Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines or virtual machine scale sets.
+Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines or virtual machine scale sets.
Advisor uses machine-learning algorithms to identify low utilization and to identify the ideal recommendation to ensure optimal usage of virtual machines and virtual machine scale sets. The recommended actions are shut down or resize, specific to the resource being evaluated. 
### Shutdown recommendations
-Advisor identifies resources that haven't been used at all over the last 7 days and makes a recommendation to shut them down.
+Advisor identifies resources that weren't used at all over the last seven days and makes a recommendation to shut them down.
-- Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** isn't considered since we've found that **CPU** and **Outbound Network utilization** are sufficient.
-- The last 7 days of utilization data are analyzed. Note that you can change your lookback period in the configurations. The available lookback periods are 7, 14, 21, 30, 60, and 90 days. After changing the lookback period, please be aware that it may take up to 48 hours for the recommendations to be updated.
-- Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics across instances.
-- A shutdown recommendation is created if:
- - P95th of the maximum value of CPU utilization summed across all cores is less than 3%.
- - P100 of average CPU in last 3 days (sum over all cores) <= 2%
- - Outbound Network utilization is less than 2% over a seven-day period.
+* Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** isn't considered since we found that **CPU** and **Outbound Network utilization** are sufficient.
+
+* The last seven days of utilization data are analyzed. You can change your lookback period in the configurations. The available lookback periods are 7, 14, 21, 30, 60, and 90 days. After you change the lookback period, it might take up to 48 hours for the recommendations to be updated.
+
+* Metrics are sampled every 30 seconds, aggregated to 1 minute, and then further aggregated to 30 minutes (we take the max of the average values while aggregating to 30 minutes). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics across instances.
+
+* A shutdown recommendation is created if (see the sketch after this list):
+ * P95 of the maximum value of CPU utilization summed across all cores is less than 3%
+ * P100 of average CPU in last 3 days (sum over all cores) <= 2%
+ * Outbound Network utilization is less than 2% over a seven-day period
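To make the aggregation and thresholds above concrete, here's a minimal Python sketch (not Advisor's actual implementation; the function names and input shapes are hypothetical) of how the shutdown check could be evaluated against seven days of 30-second CPU samples and the seven-day outbound network utilization:

```python
# Illustrative sketch only, not Advisor's implementation. CPU samples are the
# per-30-second CPU utilization summed across all cores (percent).
import statistics


def aggregate_to_30_min(samples_30s):
    """Average 30-second samples into 1-minute values, then keep the max of
    the 1-minute averages within each 30-minute window."""
    minutes = [statistics.mean(samples_30s[i:i + 2]) for i in range(0, len(samples_30s), 2)]
    return [max(minutes[i:i + 30]) for i in range(0, len(minutes), 30)]


def percentile(values, p):
    """Nearest-rank percentile, good enough for a sketch."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1)))]


def is_shutdown_candidate(cpu_samples_7d, avg_cpu_last_3d, outbound_network_7d):
    cpu_30_min = aggregate_to_30_min(cpu_samples_7d)
    return (
        percentile(cpu_30_min, 95) < 3      # P95 of max CPU across all cores < 3%
        and max(avg_cpu_last_3d) <= 2       # P100 of 3-day average CPU <= 2%
        and outbound_network_7d < 2         # outbound network < 2% over 7 days
    )
```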
### Resize SKU recommendations 
Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates). On virtual machine scale sets, Advisor recommends resizing when it's possible to fit the current load on a more appropriate cheaper SKU, or a lower number of instances of the same SKU. 
-- Recommendation criteria include **CPU**, **Memory** and **Outbound Network utilization**. 
-- The last 7 days of utilization data are analyzed. Note that you can change your lookback period in the configurations. The available lookback periods are 7, 14, 21, 30, 60, and 90 days. After changing the lookback period, please be aware that it may take up to 48 hours for the recommendations to be updated.
-- Metrics are sampled every 30 seconds, aggregated to 1 minute and then further aggregated to 30 minutes (taking the max of average values while aggregating to 30 minutes). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics for instance count recommendations, and aggregated using the max of the metrics for SKU change recommendations. 
-- An appropriate SKU (for virtual machines) or instance count (for virtual machine scale set resources) is determined based on the following criteria:
- - Performance of the workloads on the new SKU shouldn't be impacted.
- - Target for user-facing workloads:
- - P95 of CPU and Outbound Network utilization at 40% or lower on the recommended SKU
- - P100 of Memory utilization at 60% or lower on the recommended SKU
- - Target for non user-facing workloads:
- - P95 of the CPU and Outbound Network utilization at 80% or lower on the new SKU
- - P100 of Memory utilization at 80% or lower on the new SKU
- - The new SKU, if applicable, has the same Accelerated Networking and Premium Storage capabilities
- - The new SKU, if applicable, is supported in the current region of the Virtual Machine with the recommendation
- - The new SKU, if applicable, is less expensive
- - Instance count recommendations also take into account if the virtual machine scale set is being managed by Service Fabric or AKS. For service fabric managed resources, recommendations take into account reliability and durability tiers.
-- Advisor determines if a workload is user-facing by analyzing its CPU utilization characteristics. The approach is based on findings by Microsoft Research. You can find more details here: [Prediction-Based Power Oversubscription in Cloud Platforms - Microsoft Research](https://www.microsoft.com/research/publication/prediction-based-power-oversubscription-in-cloud-platforms/).
-- Based on the best fit and the cheapest costs with no performance impacts, Advisor not only recommends smaller SKUs in the same family (for example D3v2 to D2v2), but also SKUs in a newer version (for example D3v2 to D2v3), or a different family (for example D3v2 to E3v2). 
-- For virtual machine scale set resources, Advisor prioritizes instance count recommendations over SKU change recommendations because instance count changes are easily actionable, resulting in faster savings.
+* Recommendation criteria include **CPU**, **Memory** and **Outbound Network utilization**.
+
+* The last 7 days of utilization data are analyzed. You can change your lookback period in the configurations. The available lookback periods are 7, 14, 21, 30, 60, and 90 days. After you change the lookback period, it might take up to 48 hours for the recommendations to be updated.
+
+* Metrics are sampled every 30 seconds, aggregated to 1 minute and then further aggregated to 30 minutes (taking the max of average values while aggregating to 30 minutes). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics for instance count recommendations, and aggregated using the max of the metrics for SKU change recommendations.
+
+* An appropriate SKU (for virtual machines) or instance count (for virtual machine scale set resources) is determined based on the following criteria (see the sketch after this list):
+ * Performance of the workloads on the new SKU won't be impacted.
+ * Target for user-facing workloads:
+ * P95 of CPU and Outbound Network utilization at 40% or lower on the recommended SKU
+ * P100 of Memory utilization at 60% or lower on the recommended SKU
+ * Target for non user-facing workloads:
+ * P95 of the CPU and Outbound Network utilization at 80% or lower on the new SKU
+ * P100 of Memory utilization at 80% or lower on the new SKU
+ * The new SKU, if applicable, has the same Accelerated Networking and Premium Storage capabilities
+ * The new SKU, if applicable, is supported in the current region of the Virtual Machine with the recommendation
+ * The new SKU, if applicable, is less expensive
+  * Instance count recommendations also take into account whether the virtual machine scale set is being managed by Service Fabric or AKS. For Service Fabric managed resources, recommendations take into account reliability and durability tiers.
+* Advisor determines if a workload is user-facing by analyzing its CPU utilization characteristics. The approach is based on findings by Microsoft Research. You can find more details here: [Prediction-Based Power Oversubscription in Cloud Platforms - Microsoft Research](https://www.microsoft.com/research/publication/prediction-based-power-oversubscription-in-cloud-platforms/).
+
+* Based on the best fit and the cheapest costs with no performance impacts, Advisor not only recommends smaller SKUs in the same family (for example D3v2 to D2v2), but also SKUs in a newer version (for example D3v2 to D2v3), or a different family (for example D3v2 to E3v2).
+
+* For virtual machine scale set resources, Advisor prioritizes instance count recommendations over SKU change recommendations because instance count changes are easily actionable, resulting in faster savings.
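As a rough illustration of the target-utilization criteria listed above (a sketch, not Advisor's code; the names and example numbers are hypothetical), a candidate SKU check might look like this:

```python
# Illustrative sketch only: checks the projected P95 CPU, P95 outbound network,
# and P100 memory utilization (all percentages) on a candidate SKU against the
# user-facing or non-user-facing targets described above.
def meets_resize_targets(p95_cpu, p95_network, p100_memory, user_facing):
    cpu_network_limit = 40 if user_facing else 80   # P95 target for CPU and network
    memory_limit = 60 if user_facing else 80        # P100 target for memory
    return (
        p95_cpu <= cpu_network_limit
        and p95_network <= cpu_network_limit
        and p100_memory <= memory_limit
    )


# Example: a user-facing workload projected at 35% CPU, 10% network, and 55%
# memory on the candidate SKU passes; the capability, region, and price checks
# from the list above still apply.
print(meets_resize_targets(35, 10, 55, user_facing=True))  # True
```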
### Burstable recommendations
We evaluate if workloads are eligible to run on specialized SKUs called **Bursta
A burstable SKU recommendation is made if: 
-- The average **CPU utilization** is less than a burstable SKUs' baseline performance
- - If the P95 of CPU is less than two times the burstable SKUs' baseline performance
- - If the current SKU doesn't have accelerated networking enabled, since burstable SKUs don't support accelerated networking yet
- - If we determine that the Burstable SKU credits are sufficient to support the average CPU utilization over 7 days. Note that you can change your lookback period in the configurations.
+* The average **CPU utilization** is less than a burstable SKU's baseline performance
+  * If the P95 of CPU is less than two times the burstable SKU's baseline performance
+  * If the current SKU doesn't have accelerated networking enabled, since burstable SKUs don't support accelerated networking yet
+  * If we determine that the burstable SKU credits are sufficient to support the average CPU utilization over 7 days. You can change your lookback period in the configurations.
+The resulting recommendation suggests that a user resize their current virtual machine or virtual machine scale set to a burstable SKU with the same number of cores. This suggestion is made so a user can take advantage of the lower cost, and because workloads with low average utilization but occasional high spikes are best served by the B-series SKU.
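A minimal sketch of the burstable eligibility criteria listed above (hypothetical names; baseline performance is the sustained CPU percentage of the candidate B-series SKU, and the credit check is simplified to a boolean):

```python
# Illustrative sketch only, not Advisor's implementation.
def is_burstable_candidate(avg_cpu, p95_cpu, accelerated_networking, baseline_perf, credits_cover_avg_cpu):
    return (
        avg_cpu < baseline_perf             # average CPU below the SKU's baseline
        and p95_cpu < 2 * baseline_perf     # P95 CPU below twice the baseline
        and not accelerated_networking      # burstable SKUs don't support it yet
        and credits_cover_avg_cpu           # credits sustain the average CPU over the lookback period
    )
```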
-
+ Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU/instance count information.
-To be more selective about the actioning on underutilized virtual machines or virtual machine scale sets, you can adjust the CPU utilization rule on a per-subscription basis.
+To be more selective about the actioning on underutilized virtual machines or virtual machine scale sets, you can adjust the CPU utilization rule by subscription.
+
+In some cases, recommendations can't be adopted or might not be applicable. Some common scenarios include the following (there might be other cases):
+
+* Virtual machine or virtual machine scale set has been provisioned to accommodate upcoming traffic
+
+* Virtual machine or virtual machine scale set uses other resources not considered by the resize algorithm, such as metrics other than CPU, Memory and Network
+
+* Specific testing being done on the current SKU, even if not utilized efficiently
-In some cases recommendations can't be adopted or might not be applicable, such as some of these common scenarios (there may be other cases):
-- Virtual machine or virtual machine scale set has been provisioned to accommodate upcoming traffic
-- Virtual machine or virtual machine scale set uses other resources not considered by the resize algo, such as metrics other than CPU, Memory and Network
-- Specific testing being done on the current SKU, even if not utilized efficiently
-- Need to keep virtual machine or virtual machine scale set SKUs homogeneous 
-- Virtual machine or virtual machine scale set being utilized for disaster recovery purposes
+* Need to keep virtual machine or virtual machine scale set SKUs homogeneous
-In such cases, simply use the Dismiss/Postpone options associated with the recommendation.
+* Virtual machine or virtual machine scale set being utilized for disaster recovery purposes
+
+In such cases, simply use the Dismiss/Postpone options associated with the recommendation.
### Limitations
-- The savings associated with the recommendations are based on retail rates and don't take into account any temporary or long-term discounts that might apply to your account. As a result, the listed savings might be higher than actually possible. 
-- The recommendations don't take into account the presence of Reserved Instances (RI) / Savings plan purchases. As a result, the listed savings might be higher than actually possible. In some cases, for example in the case of cross-series recommendations, depending on the types of SKUs that reserved instances have been purchased for, the costs might increase when the optimization recommendations are followed. We caution you to consider your RI/Savings plan purchases when you act on the right-size recommendations. 
+
+* The savings associated with the recommendations are based on retail rates and don't take into account any temporary or long-term discounts that might apply to your account. As a result, the listed savings might be higher than actually possible.
+
+* The recommendations don't take into account the presence of Reserved Instances (RI) / Savings plan purchases. As a result, the listed savings might be higher than actually possible. In some cases, for example in the case of cross-series recommendations, depending on the types of SKUs that reserved instances have been purchased for, the costs might increase when the optimization recommendations are followed. We caution you to consider your RI/Savings plan purchases when you act on the right-size recommendations.
We're constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
+## Configure VM/VMSS recommendations
+
+You can adjust Advisor virtual machine (VM) and Virtual Machine Scale Sets recommendations. Specifically, you can set up a filter for each subscription to show only recommendations for machines with a certain level of CPU utilization. This setting filters recommendations but doesn't change how they're generated.
+
+> [!NOTE]
+> If you don't have the required permissions, the option is disabled in the user interface. For information on permissions, see [Permissions in Azure Advisor](permissions.md).
+
+To adjust Advisor VM/Virtual Machine Scale Sets right sizing rules, follow these steps:
+
+1. From any Azure Advisor page, click **Configuration** in the left navigation pane. The Advisor Configuration page opens with the **Resources** tab selected by default.
+
+1. Select the **VM/Virtual Machine Scale Sets right sizing** tab.
+
+1. Select the subscriptions you'd like to set up an average CPU utilization filter for, and then click **Edit**.
+
+1. Select the desired average CPU utilization value and click **Apply**. It can take up to 24 hours for the new settings to be reflected in recommendations.
+
+ :::image type="content" source="media/advisor-get-started/advisor-configure-rules.png" alt-text="Screenshot of Azure Advisor configuration option for VM/Virtual Machine Scale Sets sizing rules." lightbox="media/advisor-get-started/advisor-configure-rules.png":::
+ 
## Next steps 
To learn more about Advisor recommendations, see:
+ 
* [Advisor cost recommendations (full list)](advisor-reference-cost-recommendations.md) 
* [Introduction to Advisor](advisor-overview.md) 
* [Advisor score](azure-advisor-score.md)
advisor Advisor Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-get-started.md
Previously updated : 09/16/2023 Last updated : 12/1/2023
From any Azure Advisor page, click **Configuration** in the left navigation pane
* **Resources**: Uncheck any subscriptions you don't want to receive Advisor recommendations for, click **Apply**. The page refreshes.
-* **VM/VMSS right sizing**: You can adjust the average CPU utilization rule and the look back period on a per-subscription basis. Doing virtual machine (VM) right sizing requires specialized knowledge.
+* **VM/VMSS right sizing**: You can adjust Advisor virtual machine (VM) and virtual machine scale sets (VMSS) recommendations. Specifically, you can set up a filter for each subscription to show only recommendations for machines with a certain level of CPU utilization. This setting filters recommendations but doesn't change how they're generated.
- 1. Select the subscriptions you'd like to adjust the average CPU utilization rule for, and then click **Edit**. Not all subscriptions can be edited for VM/VMSS right sizing and certain privileges are required; for more information on permissions, see [Permissions in Azure Advisor](permissions.md).
+ 1. Select the subscriptions you'd like to set up an average CPU utilization filter for, and then click **Edit**. Not all subscriptions can be edited for VM/VMSS right sizing and certain privileges are required; for more information on permissions, see [Permissions in Azure Advisor](permissions.md).
1. Select the desired average CPU utilization value and click **Apply**. It can take up to 24 hours for the new settings to be reflected in recommendations.
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Reserved CPU is dependent on node type and cluster configuration, which may caus
Memory utilized by AKS includes the sum of two values. 
> [!IMPORTANT]
-> AKS 1.28 includes certain changes to memory reservations. These changes are detailed in the following section.
+> AKS 1.29 previews in January 2024 and includes certain changes to memory reservations. These changes are detailed in the following section.
-**AKS 1.28 and later**
+**AKS 1.29 and later**
1. **`kubelet` daemon** has the *memory.available<100Mi* eviction rule by default. This ensures that a node always has at least 100Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine. 
2. **A rate of memory reservations** set according to the lesser value of: *20MB * Max Pods supported on the Node + 50MB* or *25% of the total system memory resources* (see the sketch below).
Memory utilized by AKS includes the sum of two values.
For more information, see [Configure maximum pods per node in an AKS cluster](./azure-cni-overview.md#maximum-pods-per-node).
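The reservation rule in item 2 above reduces to a simple formula. A minimal sketch follows (values in MB; the function name and the 110 max-pods figure in the example are only illustrative):

```python
# AKS 1.29+ memory reservation: the lesser of (20 MB * max pods + 50 MB) and
# 25% of total system memory. Illustrative sketch; names are hypothetical.
def aks_memory_reservation_mb(max_pods, total_memory_mb):
    return min(20 * max_pods + 50, 0.25 * total_memory_mb)


# Example: a node with 8 GB of memory and 110 max pods
print(aks_memory_reservation_mb(110, 8 * 1024))  # min(2250, 2048.0) = 2048.0
```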
-**AKS versions prior to 1.28**
+**AKS versions prior to 1.29**
1. **`kubelet` daemon** is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, `kubelet` daemon has the *memory.available<750Mi* eviction rule, ensuring a node must always have at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` will trigger to terminate one of the running pods and free up memory on the host machine.
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Title: Provide an access identity to the Azure Key Vault provider for Secrets Store CSI Driver for Azure Kubernetes Service (AKS) secrets
-description: Learn how to integrate the Azure Key Vault provider for Secrets Store CSI Driver with your Azure key vault.
+ Title: Access Azure Key Vault with the CSI Driver Identity Provider
+description: Learn how to integrate the Azure Key Vault Provider for Secrets Store CSI Driver with your Azure credentials and user identities.
Previously updated : 10/19/2023 Last updated : 12/01/2023
-# Provide an identity to access the Azure Key Vault provider for Secrets Store CSI Driver in Azure Kubernetes Service (AKS)
+# Connect your Azure identity provider to the Azure Key Vault Secrets Store CSI Driver in Azure Kubernetes Service (AKS)
-The Secrets Store CSI Driver on Azure Kubernetes Service (AKS) provides various methods of identity-based access to your Azure Key Vault. This article outlines these methods and how to use them to access your key vault and its contents from your AKS cluster.
+The Secrets Store Container Storage Interface (CSI) Driver on Azure Kubernetes Service (AKS) provides various methods of identity-based access to your Azure Key Vault. This article outlines these methods and best practices for when to use role-based access control (RBAC) or OpenID Connect (OIDC) security models to access your key vault and AKS cluster.
You can use one of the following access methods: 
- [Microsoft Entra Workload ID](#access-with-a-microsoft-entra-workload-id) 
- [User-assigned managed identity](#access-with-a-user-assigned-managed-identity)
-## Prerequisites
+## Prerequisites for CSI Driver
-- Before you begin, make sure you followed the steps in [Use the Azure Key Vault provider for Secrets Store CSI Driver in an Azure Kubernetes Service (AKS) cluster][csi-secrets-store-driver] to create an AKS cluster with Azure Key Vault provider for Secrets Store CSI Driver support.
+- Before you begin, make sure you finish the steps in [Use the Azure Key Vault provider for Secrets Store CSI Driver in an Azure Kubernetes Service (AKS) cluster][csi-secrets-store-driver] to enable the Azure Key Vault Secrets Store CSI Driver in your AKS cluster.
<a name='access-with-an-azure-ad-workload-identity'></a> 
## Access with a Microsoft Entra Workload ID
-A [Microsoft Entra Workload ID][workload-identity] is an identity that an application running on a pod uses that authenticates itself against other Azure services that support it, such as Storage or SQL. It integrates with the native Kubernetes capabilities to federate with external identity providers. In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID then uses OpenID Connect (OIDC) to discover public signing keys and verify the authenticity of the service account token before exchanging it for a Microsoft Entra token. Your workload can exchange a service account token projected to its volume for a Microsoft Entra token using the Azure Identity client library using the Azure SDK or the Microsoft Authentication Library (MSAL).
+A [Microsoft Entra Workload ID][workload-identity] is an identity that an application running on a pod uses to authenticate itself against other Azure services that support it, such as Storage or SQL. The Secrets Store CSI Driver integrates with native Kubernetes capabilities to federate with external identity providers.
+
+In this security model, the AKS cluster acts as the token issuer. Microsoft Entra ID then uses OIDC to discover public signing keys and verify the authenticity of the service account token before exchanging it for a Microsoft Entra token. For your workload to exchange a service account token projected to its volume for a Microsoft Entra token, you need the Azure Identity client library in the Azure SDK or the Microsoft Authentication Library (MSAL).
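As a rough sketch of what that exchange looks like from application code (assuming the Python Azure SDK; the vault URL and secret name are placeholders, and the environment variables are the ones projected into the pod by the workload identity webhook), the client library handles the token exchange for you:

```python
# Minimal sketch: the Azure Identity client library exchanges the projected
# service account token for a Microsoft Entra token, which is then used to
# read a secret from Key Vault. Placeholders: vault URL and secret name.
import os

from azure.identity import WorkloadIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = WorkloadIdentityCredential(
    tenant_id=os.environ["AZURE_TENANT_ID"],
    client_id=os.environ["AZURE_CLIENT_ID"],
    token_file_path=os.environ["AZURE_FEDERATED_TOKEN_FILE"],
)

client = SecretClient(vault_url="https://<your-key-vault>.vault.azure.net", credential=credential)
secret = client.get_secret("ExampleSecret")  # hypothetical secret name
print(f"Retrieved secret '{secret.name}'")
```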
> [!NOTE] 
> 
> - This authentication method replaces Microsoft Entra pod-managed identity (preview). The open source Microsoft Entra pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022.
-> - Microsoft Entra Workload ID is supported on both Windows and Linux clusters.
+> - Microsoft Entra Workload ID supports both Windows and Linux clusters.
### Configure workload identity
A [Microsoft Entra Workload ID][workload-identity] is an identity that an applic
echo $AKS_OIDC_ISSUER ```
-5. Establish a federated identity credential between the Microsoft Entra application and the service account issuer and subject. Get the object ID of the Microsoft Entra application using the following commands. Make sure to update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
+5. Establish a federated identity credential between the Microsoft Entra application, service account issuer, and subject. Get the object ID of the Microsoft Entra application using the following commands. Make sure to update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
```bash export SERVICE_ACCOUNT_NAME="workload-identity-sa" # sample name; can be changed
A [Microsoft Entra Workload ID][workload-identity] is an identity that an applic
``` > [!NOTE]
- > If you use `objectAlias` instead of `objectName`, make sure to update the YAML script.
+ > If you use `objectAlias` instead of `objectName`, update the YAML script to account for it.
8. Deploy a sample pod using the `kubectl apply` command and the following YAML script.
A [Microsoft Entra Workload ID][workload-identity] is an identity that an applic
EOF ```
-## Access with a user-assigned managed identity
+<a name='access-with-a-user-assigned-managed-identity'></a>
+
+## Access with managed identity
+
+A [Microsoft Entra Managed ID][managed-identity] is an identity that an administrator uses to authenticate themselves against other Azure services. The managed identity uses RBAC to federate with external identity providers.
+
+In this security model, you can grant access to your cluster's resources to team members or tenants sharing a managed role. The role is checked for scope to access the key vault and other credentials. When you [enabled the Azure Key Vault provider for Secrets Store CSI Driver on your AKS Cluster](./csi-secrets-store-driver.md#create-an-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support), a user-assigned managed identity was created.
+
+### Configure managed identity
-1. Access your key vault using the [`az aks show`][az-aks-show] command and the user-assigned managed identity created by the add-on when you [enabled the Azure Key Vault provider for Secrets Store CSI Driver on your AKS Cluster](./csi-secrets-store-driver.md#create-an-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support).
+1. Access your key vault using the [`az aks show`][az-aks-show] command and the user-assigned managed identity created by the add-on.
```azurecli-interactive az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
A [Microsoft Entra Workload ID][workload-identity] is an identity that an applic
az vm identity assign -g <resource-group> -n <agent-pool-vm> --identities <identity-resource-id> ```
-2. Create a role assignment that grants the identity permission to access the key vault secrets, access keys, and certificates using the [`az role assignment create`][az-role-assignment-create] command.
+2. Create a role assignment that grants the identity permission to access the key vault secrets, access keys, and certificates using the [`az role assignment create`][az-role-assignment-create] command.
```azurecli-interactive export IDENTITY_CLIENT_ID="$(az identity show -g <resource-group> --name <identity-name> --query 'clientId' -o tsv)"
A [Microsoft Entra Workload ID][workload-identity] is an identity that an applic
kubectl apply -f pod.yaml ```
-## Validate the secrets
+## Validate Key Vault secrets
-After the pod starts, the mounted content at the volume path that you specified in your deployment YAML is available. Use the following commands to validate your secrets and print a test secret.
+After the pod starts, the mounted content at the volume path specified in your deployment YAML is available. Use the following commands to validate your secrets and print a test secret.
1. Show secrets held in the secrets store using the following command.
After the pod starts, the mounted content at the volume path that you specified
## Obtain certificates and keys
-The Azure Key Vault design makes sharp distinctions between keys, secrets, and certificates. The certificate features of the Key Vault service were designed to make use of key and secret capabilities. When you create a key vault certificate, it creates an addressable key and secret with the same name. The key allows key operations, and the secret allows the retrieval of the certificate value as a secret.
+The Azure Key Vault design makes sharp distinctions between keys, secrets, and certificates. The certificate features of the Key Vault service are designed to make use of key and secret capabilities. When you create a key vault certificate, it creates an addressable key and secret with the same name. This key allows authentication operations, and the secret allows the retrieval of the certificate value as a secret.
A key vault certificate also contains public x509 certificate metadata. The key vault stores both the public and private components of your certificate in a secret. You can obtain each individual component by specifying the `objectType` in `SecretProviderClass`. The following table shows which objects map to the various resources associated with your certificate:
A key vault certificate also contains public x509 certificate metadata. The key
|`cert`|The certificate, in PEM format.|No| 
|`secret`|The private key and certificate, in PEM format.|Yes|
-## Disable the Azure Key Vault provider for Secrets Store CSI Driver on an existing AKS cluster
+## Disable the add-on on existing clusters
> [!NOTE] > Before you disable the add-on, ensure that *no* `SecretProviderClass` is in use. Trying to disable the add-on while a `SecretProviderClass` exists results in an error.
In this article, you learned how to create and provide an identity to access you
[az-aks-show]: /cli/azure/aks#az-aks-show [az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create [workload-identity]: ./workload-identity-overview.md
+[managed-identity]: /entra/identity/managed-identities-azure-resources/overview
[az-account-set]: /cli/azure/account#az-account-set [az-identity-create]: /cli/azure/identity#az-identity-create [az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create
aks Deploy Confidential Containers Default Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-confidential-containers-default-policy.md
For this preview release, we recommend for test and evaluation purposes to eithe
```
-1. Prepare the RSA Encryption/Decryption key by [downloading][download-setup-key-script] the Bash script for the workload from GitHub. Save the file as `setup-key.sh`.
+1. Prepare the RSA Encryption/Decryption key by [downloading](https://github.com/microsoft/confidential-container-demos/blob/main/kafka/setup-key.sh) the Bash script for the workload from GitHub. Save the file as `setup-key.sh`.
1. Set the `MAA_ENDPOINT` environmental variable to match the value for the `SkrClientMAAEndpoint` from the `consumer.yaml` manifest file by running the following command.
api-management Sql Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sql-data-source-policy.md
The following example resolves a GraphQL query by making a single-result T-SQL r
    </sql-statement> 
    <parameters> 
    <parameter name="@familyId">
- {context.GraphQL.Arguments.["id"]}
+ @(context.GraphQL.Arguments["id"])
    </parameter> 
    </parameters> 
</request>
The query parameter is accessed using the `context.GraphQL.Arguments` context va
    </sql-statement> 
    <parameters> 
    <parameter name="@familyId">
- {context.GraphQL.Arguments.["id"]}
+ @(context.GraphQL.Arguments["id"])
    </parameter> 
    </parameters> 
</request>
The following example resolves a GraphQL mutation using a T-SQL INSERT statement
    </sql-statement> 
    <parameters> 
    <parameter name="@familyId">
- {context.GraphQL.Arguments.["id"]}
+ @(context.GraphQL.Arguments["id"])
    </parameter> 
    <parameter name="@familyName">
- {context.GraphQL.Arguments.["name"]}
+ @(context.GraphQL.Arguments["name"])
    </parameter> 
    </parameters> 
</request>
azure-cache-for-redis Cache Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-failover.md
Previously updated : 09/29/2023 Last updated : 11/28/2023
The Azure Cache for Redis service regularly updates your cache with the latest p
1. The replica node connects to the primary node and synchronizes data. 
1. When the data sync is complete, the patching process repeats for the remaining nodes.
-Because patching is a planned failover, the replica node quickly promotes itself to become a primary. Then, the node begins servicing requests and new connections. Basic caches don't have a replica node and are unavailable until the update is complete. Each shard of a clustered cache is patched separately and won't close connections to another shard.
+Because patching is a planned failover, the replica node quickly promotes itself to become a primary. Then, the node begins servicing requests and new connections. Basic caches don't have a replica node and are unavailable until the update is complete. Each shard of a clustered cache is patched separately and doesn't close connections to another shard.
> [!IMPORTANT] 
> Nodes are patched one at a time to prevent data loss. Basic caches will have data loss. Clustered caches are patched one shard at a time.
Many client libraries can throw different types of errors when connections break
The number and type of exceptions depends on where the request is in the code path when the cache closes its connections. For instance, an operation that sends a request but hasn't received a response when the failover occurs might get a time-out exception. New requests on the closed connection object receive connection exceptions until the reconnection happens successfully.
-Most client libraries attempt to reconnect to the cache if they're configured to do so. However, unforeseen bugs can occasionally place the library objects into an unrecoverable state. If errors persist for longer than a pre-configured amount of time, the connection object should be recreated. In Microsoft.NET and other object-oriented languages, recreating the connection without restarting the application can be accomplished by using a [ForceReconnect pattern](cache-best-practices-connection.md#using-forcereconnect-with-stackexchangeredis).
+Most client libraries attempt to reconnect to the cache if they're configured to do so. However, unforeseen bugs can occasionally place the library objects into an unrecoverable state. If errors persist for longer than a preconfigured amount of time, the connection object should be recreated. In Microsoft.NET and other object-oriented languages, recreating the connection without restarting the application can be accomplished by using a [ForceReconnect pattern](cache-best-practices-connection.md#using-forcereconnect-with-stackexchangeredis).
### Can I be notified in advance of planned maintenance?
-Azure Cache for Redis publishes runtime maintenance notifications on a publish/subscribe (pub/sub) channel called `AzureRedisEvents`. Many popular Redis client libraries support subscribing to pub/sub channels. Receiving notifications from the `AzureRedisEvents` channel is usually a simple addition to your client application. For more information about maintenance events, please see [AzureRedisEvents](https://github.com/Azure/AzureCacheForRedis/blob/main/AzureRedisEvents.md).
+Azure Cache for Redis publishes runtime maintenance notifications on a publish/subscribe (pub/sub) channel called `AzureRedisEvents`. Many popular Redis client libraries support subscribing to pub/sub channels. Receiving notifications from the `AzureRedisEvents` channel is usually a simple addition to your client application. For more information about maintenance events, see [AzureRedisEvents](https://github.com/Azure/AzureCacheForRedis/blob/main/AzureRedisEvents.md).
> [!NOTE]
-> The `AzureRedisEvents` channel isn't a mechanism that can notify you days or hours in advance. The channel can notify clients of any upcoming planned server maintenance events that might affect server availability.
+> The `AzureRedisEvents` channel isn't a mechanism that can notify you days or hours in advance. The channel can notify clients of any upcoming planned server maintenance events that might affect server availability. `AzureRedisEvents` is only available for Basic, Standard, and Premium tiers.
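As an example of what that "simple addition" can look like with redis-py (a sketch only; the cache host name and access key are placeholders, and the linked AzureRedisEvents page documents the actual message format):

```python
# Minimal sketch: subscribe to the AzureRedisEvents channel so the application
# hears about upcoming planned maintenance. Placeholders: cache name and key.
import redis

cache = redis.Redis(
    host="<cache-name>.redis.cache.windows.net",
    port=6380,
    ssl=True,
    password="<access-key>",
)

pubsub = cache.pubsub()
pubsub.subscribe("AzureRedisEvents")

for message in pubsub.listen():
    if message["type"] == "message":
        # A real client would parse the event and, for example, pause writes
        # or proactively refresh its connections before maintenance starts.
        print("AzureRedisEvents:", message["data"])
```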
### Client network-configuration changes
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Previously updated : 06/06/2023 Last updated : 12/02/2023 
# Monitor Azure Cache for Redis
Each tab contains status tiles and charts. These tiles and charts are a starting
By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, you can use a [storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) and specify a **Retention (days)** policy that meets your requirements.
-Configure a storage account to use with to store your metrics. The storage account must be in the same region as the caches. Once you've created a storage account, configure a storage account for your cache metrics:
+Configure a storage account to use to store your metrics. The storage account must be in the same region as the caches. Once you create a storage account, configure the storage account for your cache metrics:
1. In the **Azure Cache for Redis** page, under the **Monitoring** heading, select **Diagnostics settings**.
Configure a storage account to use with to store your metrics. The storage accou
1. Name the settings.
-1. Check **Archive to a storage account**. YouΓÇÖll be charged normal data rates for storage and transactions when you send diagnostics to a storage account.
+1. Check **Archive to a storage account**. You're charged normal data rates for storage and transactions when you send diagnostics to a storage account.
1. Select **Configure** to choose the storage account in which to store the cache metrics.
When you're seeing the aggregation type:
- **Max** shows the maximum value of a data point in the time granularity, 
- **Min** shows the minimum value of a data point in the time granularity, 
- **Average** shows the average value of all data points in the time granularity.
-- **Sum** shows the sum of all data points in the time granularity and may be misleading depending on the specific metric.
+- **Sum** shows the sum of all data points in the time granularity and might be misleading depending on the specific metric.
Under normal conditions, **Average** and **Max** are similar because only one node emits these metrics (the primary node). In a scenario where the number of connected clients changes rapidly, **Max**, **Average**, and **Min** would show different values and is also expected behavior.
The types **Count** and **Sum** can be misleading for certain metrics (connec
> Even when the cache is idle with no connected active client applications, you might see some cache activity, such as connected clients, memory usage, and operations being performed. The activity is normal in the operation of cache. >
-For non-clustered caches, we recommend using the metrics without the suffix `Instance Based`. For example, to check server load for your cache instance, use the metric `Server Load`.
+For nonclustered caches, we recommend using the metrics without the suffix `Instance Based`. For example, to check server load for your cache instance, use the metric _Server Load_.
-In contrast, for clustered caches, we recommend using the metrics with the suffix `Instance Based`. Then, add a split or filter on `ShardId`. For example, to check the server load of shard 1, use the metric "Server Load (Instance Based)", then apply filter `ShardId = 1`.
+In contrast, for clustered caches, we recommend using the metrics with the suffix `Instance Based`. Then, add a split or filter on `ShardId`. For example, to check the server load of shard 1, use the metric **Server Load (Instance Based)**, then apply filter **ShardId = 1**.
## List of metrics 
- 99th Percentile Latency (preview)
- - Depicts the worst-case (99th percentile) latency of server-side commands. Measured by issuing `PING` commands from the load balancer to the Redis server and tracking the time to respond.
- - Useful for tracking the health of your Redis instance. Latency will increase if the cache is under heavy load or if there are long running commands that delay the execution of the `PING` command.
+ - Depicts the worst-case (99th percentile) latency of server-side commands. Measured by issuing `PING` commands from the load balancer to the Redis server and tracking the time to respond.
+ - Useful for tracking the health of your Redis instance. Latency increases if the cache is under heavy load or if there are long running commands that delay the execution of the `PING` command.
  - This metric is only available in Standard and Premium tier caches 
- Cache Latency (preview) 
  - The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. 
- Cache Misses
- - The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`.
+ - The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there might be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`.
- Cache Miss Rate 
  - The percent of unsuccessful key lookups during the specified reporting interval. This metric isn't available in Enterprise or Enterprise Flash tier caches. 
- Cache Read
In contrast, for clustered caches, we recommend using the metrics with the suffi
- Cache Write 
  - The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. 
- Connected Clients
- - The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections.
+ - The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there might still be a few instances of connected clients because of internal processes and connections.
- Connected Clients Using Microsoft Entra Token (preview)
- - The number of client connections to the cache authenticated using Microsoft Entra token during the specified reporting interval.
+ - The number of client connections to the cache authenticated using Microsoft Entra token during the specified reporting interval.
- Connections Created Per Second 
  - The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches. 
- Connections Closed Per Second
In contrast, for clustered caches, we recommend using the metrics with the suffi
- CPU 
  - The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy due to low priority background security processes running on the node, so we recommend monitoring Server Load metric to track load on a Redis server. 
- Errors
- - Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows:
+ - Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could add more in the future. The error types represented now are as follows:
    - **Failover** – when a cache fails over (subordinate promotes to primary) 
    - **Dataloss** – when there's data loss on the cache 
    - **UnresponsiveClients** – when the clients aren't reading data from the server fast enough, and specifically, when the number of bytes in the Redis server output buffer for a client goes over 1,000,000 bytes
In contrast, for clustered caches, we recommend using the metrics with the suffi
    - **Import** – when there's an issue related to Import RDB 
    - **Export** – when there's an issue related to Export RDB 
    - **AADAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token
- - **AADTokenExpired** (preview) - when a Microsoft Entra access token used for authentication is not renewed and it expires.
+ - **AADTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires.
- Evicted Keys 
  - The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. 
  - This number maps to `evicted_keys` from the Redis INFO command.
In contrast, for clustered caches, we recommend using the metrics with the suffi
  - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value. 
  - This metric is only available in the Premium tier for caches with geo-replication enabled. 
- Geo Replication Data Sync Offset
- - Depicts the approximate amount of data, in bytes, that has yet to be synchronized to geo-secondary cache.
+ - Depicts the approximate amount of data in bytes that has yet to be synchronized to geo-secondary cache.
  - This metric is only emitted _from the geo-primary_ cache instance. On the geo-secondary instance, this metric has no value. 
  - This metric is only available in the Premium tier for caches with geo-replication enabled. 
- Geo Replication Full Sync Event Finished 
  - Depicts the completion of full synchronization between geo-replicated caches. When you see lots of writes on geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
- - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
- - This metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
- - This metric is only available in the Premium tier for caches with geo-replication enabled.
+ - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
+ - This metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
- Geo Replication Full Sync Event Started 
  - Depicts the start of full synchronization between geo-replicated caches. When there are many writes in geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
In contrast, for clustered caches, we recommend using the metrics with the suffi
    - 0 disconnected/unhealthy 
    - 1 – healthy 
  - The metric is available in the Enterprise, Enterprise Flash tiers, and Premium tier caches with geo-replication enabled.
- - In caches on the Premium tier, this metric is only emitted *from the geo-secondary* cache instance. On the geo-primary instance, this metric has no value.
- - This metric may indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning.
- - A value of 0 doesn't mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.
+ - In caches on the Premium tier, this metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
+ - This metric might indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning.
+ - A value of 0 doesn't mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.
  - If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). 
- Gets
In contrast, for clustered caches, we recommend using the metrics with the suffi
- Operations per Second 
  - The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command. 
- Server Load
- - The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. If you're seeing high Redis Server Load, then you see timeout exceptions in the client. In this case, you should consider scaling up or partitioning your data into multiple caches.
+ - The percentage of CPU cycles in which the Redis server is busy processing and _not waiting idle_ for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. You can expect a large latency effect. If you're seeing a high Redis Server Load, such as 100, because you're sending many expensive commands to the server, then you might see timeout exceptions in the client. In this case, you should consider scaling up, scaling out to a Premium cluster, or partitioning your data into multiple caches. When _Server Load_ is only moderately high, such as 50 to 80 percent, then average latency usually remains low, and timeout exceptions could have other causes than high server latency.
+ - The _Server Load_ metric is sensitive to other processes on the machine using the existing CPU cycles that reduce the Redis server's idle time. For example, on the _C1_ tier, background tasks such as virus scanning cause _Server Load_ to spike higher for no obvious reason. We recommended that you pay attention to other metrics such as operations, latency, and CPU, in addition to _Server Load_.
> [!CAUTION]
-> The Server Load metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes Server Load is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime.
+> The _Server Load_ metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes _Server Load_ is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime.
- Sets 
  - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`. 
- Total Keys 
  - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys return the maximum number of keys of the shard that had the maximum number of keys during the reporting interval. 
- Total Operations
- - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub there will be no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there will be `Total Operations` metrics that reflect the cache usage for pub/sub operations.
+ - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub, there are no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there are `Total Operations` metrics that reflect the cache usage for pub/sub operations.
- Used Memory - The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation.
- - On the Enterprise and Enterprise Flash tier, the Used Memory value includes the memory in both the primary and replica nodes. This can make the metric appear twice as large as expected.
+ - On the Enterprise and Enterprise Flash tier, the Used Memory value includes the memory in both the primary and replica nodes. This can make the metric appear twice as large as expected.
- Used Memory Percentage - The percent of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. This value doesn't include fragmentation. - Used Memory RSS
For more information about configuring and using Alerts, see [Overview of Alerts
## Organize with workbooks
-Once you've defined a metric, you can send it to a workbook. Workbooks provide a way to organize your metrics into groups that provide the information in coherent way. Azure Cache for Redis provides two workbooks by default in the **Azure Cache for Redis Insights** section:
+Once you define a metric, you can send it to a workbook. Workbooks provide a way to organize your metrics into groups that present the information in a coherent way. Azure Cache for Redis provides two workbooks by default in the **Azure Cache for Redis Insights** section:
:::image type="content" source="media/cache-how-to-monitor/cache-monitoring-workbook.png" alt-text="Screenshot showing the workbooks selected in the Resource menu."::: For information on creating a metric, see [Create your own metrics](#create-your-own-metrics).
-The two workbooks provided are:
+The two workbooks provided are:
+ - **Azure Cache For Redis Resource Overview** combines many of the most commonly used metrics so that the health and performance of the cache instance can be viewed at a glance. :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-resource-overview.png" alt-text="Screenshot of graphs showing a resource overview for the cache."::: -- **Geo-Replication Dashboard** pulls geo-replication health and status metrics from both the geo-primary and geo-secondary cache instances to give a complete picture of geo-replcation health. Using this dashboard is recommended, as some geo-replication metrics are only emitted from either the geo-primary or geo-secondary.
+- **Geo-Replication Dashboard** pulls geo-replication health and status metrics from both the geo-primary and geo-secondary cache instances to give a complete picture of geo-replication health. Using this dashboard is recommended, as some geo-replication metrics are only emitted from either the geo-primary or geo-secondary.
:::image type="content" source="media/cache-how-to-monitor/cache-monitoring-geo-dashboard.png" alt-text="Screenshot showing the geo-replication dashboard with a geo-primary and geo-secondary cache set.":::
-## Next steps
+## Related content
- [Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md) - [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
azure-cache-for-redis Cache Troubleshoot Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-timeouts.md
Previously updated : 09/29/2023 Last updated : 12/02/2023+ # Troubleshoot Azure Cache for Redis latency and timeouts
This section discusses troubleshooting for latency and timeout issues that occur
- [Server-side troubleshooting](#server-side-troubleshooting) - [Server maintenance](#server-maintenance) - [High server load](#high-server-load)
+ - [Spikes in server load](#spikes-in-server-load)
- [High memory usage](#high-memory-usage) - [Long running commands](#long-running-commands) - [Network bandwidth limitation](#network-bandwidth-limitation)
This section discusses troubleshooting for latency and timeout issues that occur
## Client-side troubleshooting
+The following sections describe how to troubleshoot client-side issues.
+ ### Traffic burst and thread pool configuration Bursts of traffic combined with poor `ThreadPool` settings can result in delays in processing data already sent by the Redis server but not yet consumed on the client side. Check the metric "Errors" (Type: UnresponsiveClients) to validate if your client hosts can keep up with a sudden spike in traffic.
-Monitor how your `ThreadPool` statistics change over time using [an example `ThreadPoolLogger`](https://github.com/JonCole/SampleCode/blob/master/ThreadPoolMonitor/ThreadPoolLogger.cs). You can use `TimeoutException` messages from StackExchange.Redis like below to further investigate:
+Monitor how your `ThreadPool` statistics change over time using [an example `ThreadPoolLogger`](https://github.com/JonCole/SampleCode/blob/master/ThreadPoolMonitor/ThreadPoolLogger.cs). You can use `TimeoutException` messages from StackExchange.Redis to further investigate:
```output System.TimeoutException: Timeout performing EVAL, inst: 8, mgr: Inactive, queue: 0, qu: 0, qs: 0, qc: 0, wr: 0, wq: 0, in: 64221, ar: 0,
Monitor how your `ThreadPool` statistics change over time using [an example `Thr
In the preceding exception, there are several issues that are interesting: - Notice that in the `IOCP` section and the `WORKER` section you have a `Busy` value that is greater than the `Min` value. This difference means your `ThreadPool` settings need adjusting.-- You can also see `in: 64221`. This value indicates that 64,221 bytes have been received at the client's kernel socket layer but haven't been read by the application. This difference typically means that your application (for example, StackExchange.Redis) isn't reading data from the network as quickly as the server is sending it to you.
+- You can also see `in: 64221`. This value indicates that 64,221 bytes were received at the client's kernel socket layer but weren't read by the application. This difference typically means that your application (for example, StackExchange.Redis) isn't reading data from the network as quickly as the server is sending it to you.
You can [configure your `ThreadPool` Settings](cache-management-faq.yml#important-details-about-threadpool-growth) to make sure that your thread pool scales up quickly under burst scenarios.
For information about using multiple keys and smaller values, see [Consider more
You can use the `redis-cli --bigkeys` command to check for large keys in your cache. For more information, see [redis-cli, the Redis command line interface--Redis](https://redis.io/topics/rediscli). - Increase the size of your VM to get higher bandwidth capabilities
- - More bandwidth on your client or server VM may reduce data transfer times for larger responses.
- - Compare your current network usage on both machines to the limits of your current VM size. More bandwidth on only the server or only on the client may not be enough.
+ - More bandwidth on your client or server VM might reduce data transfer times for larger responses.
+ - Compare your current network usage on both machines to the limits of your current VM size. More bandwidth on only the server or only on the client might not be enough.
- Increase the number of connection objects your application uses. - Use a round-robin approach to make requests over different connection objects ### High CPU on client hosts
-High client CPU usage indicates the system can't keep up with the work it's been asked to do. Even though the cache sent the response quickly, the client may fail to process the response in a timely fashion. Our recommendation is to keep client CPU below 80%. Check the metric "Errors" (Type: `UnresponsiveClients`) to determine if your client hosts can process responses from Redis server in time.
+High client CPU usage indicates the system can't keep up with the work assigned to it. Even though the cache sent the response quickly, the client might fail to process the response in a timely fashion. Our recommendation is to keep client CPU below 80%. Check the metric "Errors" (Type: `UnresponsiveClients`) to determine if your client hosts can process responses from the Redis server in time.
-Monitor the client's system-wide CPU usage using metrics available in the Azure portal or through performance counters on the machine. Be careful not to monitor *process* CPU because a single process can have low CPU usage but the system-wide CPU can be high. Watch for spikes in CPU usage that correspond with timeouts. High CPU may also cause high `in: XXX` values in `TimeoutException` error messages as described in the [[Traffic burst](#traffic-burst-and-thread-pool-configuration)] section.
+Monitor the client's system-wide CPU usage using metrics available in the Azure portal or through performance counters on the machine. Be careful not to monitor process CPU because a single process can have low CPU usage but the system-wide CPU can be high. Watch for spikes in CPU usage that correspond with timeouts. High CPU might also cause high `in: XXX` values in `TimeoutException` error messages as described in the [Traffic burst](#traffic-burst-and-thread-pool-configuration) section.
> [!NOTE] > StackExchange.Redis 1.1.603 and later includes the `local-cpu` metric in `TimeoutException` error messages. Ensure you are using the latest version of the [StackExchange.Redis NuGet package](https://www.nuget.org/packages/StackExchange.Redis/). Bugs are regularly fixed in the code to make it more robust to timeouts. Having the latest version is important.
To mitigate a client's high CPU usage:
### Network bandwidth limitation on client hosts
-Depending on the architecture of client machines, they may have limitations on how much network bandwidth they have available. If the client exceeds the available bandwidth by overloading network capacity, then data isn't processed on the client side as quickly as the server is sending it. This situation can lead to timeouts.
+Depending on the architecture of client machines, they might have limitations on how much network bandwidth they have available. If the client exceeds the available bandwidth by overloading network capacity, then data isn't processed on the client side as quickly as the server is sending it. This situation can lead to timeouts.
-Monitor how your Bandwidth usage change over time using [an example `BandwidthLogger`](https://github.com/JonCole/SampleCode/blob/master/BandWidthMonitor/BandwidthLogger.cs). This code may not run successfully in some environments with restricted permissions (like Azure web sites).
+Monitor how your bandwidth usage changes over time using [an example `BandwidthLogger`](https://github.com/JonCole/SampleCode/blob/master/BandWidthMonitor/BandwidthLogger.cs). This code might not run successfully in some environments with restricted permissions (like Azure websites).
To mitigate, reduce network bandwidth consumption or increase the client VM size to one with more network capacity. For more information, see [Large request or response size](cache-best-practices-development.md#large-request-or-response-size).
Because of optimistic TCP settings in Linux, client applications hosted on Linux
### RedisSessionStateProvider retry timeout
-If you're using `RedisSessionStateProvider`, ensure you have set the retry timeout correctly. The `retryTimeoutInMilliseconds` value should be higher than the `operationTimeoutInMilliseconds` value. Otherwise, no retries occur. In the following example, `retryTimeoutInMilliseconds` is set to 3000. For more information, see [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md) and [How to use the configuration parameters of Session State Provider and Output Cache Provider](https://github.com/Azure/aspnet-redis-providers/wiki/Configuration).
+If you're using `RedisSessionStateProvider`, ensure you set the retry timeout correctly. The `retryTimeoutInMilliseconds` value should be higher than the `operationTimeoutInMilliseconds` value. Otherwise, no retries occur. In the following example, `retryTimeoutInMilliseconds` is set to 3000. For more information, see [ASP.NET Session State Provider for Azure Cache for Redis](cache-aspnet-session-state-provider.md) and [How to use the configuration parameters of Session State Provider and Output Cache Provider](https://github.com/Azure/aspnet-redis-providers/wiki/Configuration).
```xml <add
If you're using `RedisSessionStateProvider`, ensure you have set the retry timeo
## Server-side troubleshooting
+The following sections describe how to troubleshoot server-side issues.
+ ### Server maintenance
-Planned or unplanned maintenance can cause disruptions with client connections. The number and type of exceptions depends on the location of the request in the code path, and when the cache closes its connections. For instance, an operation that sends a request but hasn't received a response when the failover occurs might get a time-out exception. New requests on the closed connection object receive connection exceptions until the reconnection happens successfully.
+Planned or unplanned maintenance can cause disruptions with client connections. The number and type of exceptions depends on the location of the request in the code path, and when the cache closes its connections. For instance, an operation that sends a request but doesn't receive a response when the failover occurs might get a time-out exception. New requests on the closed connection object receive connection exceptions until the reconnection happens successfully.
For more information, check these other sections:
For more information, check these other sections:
- [Connection resilience](cache-best-practices-connection.md#connection-resilience) - `AzureRedisEvents` [notifications](cache-failover.md#can-i-be-notified-in-advance-of-planned-maintenance)
-To check whether your Azure Cache for Redis had a failover during when timeouts occurred, check the metric **Errors**. On the Resource menu of the Azure portal, select **Metrics**. Then create a new chart measuring the `Errors` metric, split by `ErrorType`. Once you have created this chart, you see a count for **Failover**.
+To check whether your Azure Cache for Redis had a failover while timeouts occurred, check the metric **Errors**. On the Resource menu of the Azure portal, select **Metrics**. Then create a new chart measuring the `Errors` metric, split by `ErrorType`. Once you create this chart, you see a count for **Failover**.
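As a rough sketch, if you also stream the cache's platform metrics to a Log Analytics workspace through diagnostic settings, you can approximate the same check with a query against the `AzureMetrics` table. The metric name casing is an assumption, and dimensions such as `ErrorType` generally aren't preserved in `AzureMetrics`, so treat this as illustrative rather than a replacement for the metrics chart:

```Kusto
// Illustrative only: chart cache error counts over time from AzureMetrics
// (assumes platform metrics are streamed to the workspace via diagnostic settings).
AzureMetrics
| where ResourceProvider == "MICROSOFT.CACHE" and MetricName =~ "errors"
| summarize ErrorCount = sum(Total) by bin(TimeGenerated, 5m), Resource
| order by TimeGenerated asc
```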
For more information on failovers, see [Failover and patching for Azure Cache for Redis](cache-failover.md).
High server load means the Redis server is unable to keep up with the requests,
There are several changes you can make to mitigate high server load: -- Investigate what is causing high server load such as [long-running commands](#long-running-commands), noted below because of high memory pressure.
+- Investigate what's causing high server load, such as [long-running commands](#long-running-commands) (covered later in this article) caused by high memory pressure.
- [Scale](cache-how-to-scale.md) out to more shards to distribute load across multiple Redis processes or scale up to a larger cache size with more CPU cores. For more information, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+- If your production workload on a _C1_ cache is negatively affected by extra latency from virus scanning, you can reduce the effect by moving to a higher tier offering with multiple CPU cores, such as _C2_.
+
+#### Spikes in server load
+
+On _C0_ and _C1_ caches, you might see short spikes in server load a couple of times a day that aren't caused by an increase in requests. These spikes occur while virus scanning runs on the VMs, and you see higher latency for requests during that time. Caches on the _C0_ and _C1_ tiers have only a single core to multitask, dividing the work between virus scanning and serving Redis requests.
### High memory usage
Using the [SLOWLOG GET](https://redis.io/commands/slowlog-get) command, you can
Customers can use a console to run these Redis commands to investigate long running and expensive commands. - [SLOWLOG](https://redis.io/commands/slowlog) is used to read and reset the Redis slow queries log. It can be used to investigate long running commands on client side.
-The Redis Slow Log is a system to log queries that exceeded a specified execution time. The execution time does not include I/O operations like talking with the client, sending the reply, and so forth, but just the time needed to actually execute the command. Using the SLOWLOG command, Customers can measure/log expensive commands being executed against their Redis server.
+The Redis Slow Log is a system to log queries that exceeded a specified execution time. The execution time doesn't include I/O operations like talking with the client, sending the reply, and so forth, but just the time needed to actually execute the command. Customers can measure/log expensive commands being executed against their Redis server using the `SLOWLOG` command.
- [MONITOR](https://redis.io/commands/monitor) is a debugging command that streams back every command processed by the Redis server. It can help in understanding what is happening to the database. This command is demanding and can negatively affect performance.-- [INFO](https://redis.io/commands/info) - command returns information and statistics about the server in a format that is simple to parse by computers and easy to read by humans. In this case, the CPU section could be useful to investigate the CPU usage. A **server_load** of 100 (maximum value) signifies that the Redis server has been busy all the time (has not been idle) processing the requests.
+- [INFO](https://redis.io/commands/info) returns information and statistics about the server in a format that is simple to parse by computers and easy to read by humans. In this case, the CPU section could be useful to investigate the CPU usage. A server load of 100 (maximum value) signifies that the Redis server was busy all the time and was never idle when processing the requests.
Output sample:
event_no_wait_count:1
### Network bandwidth limitation
-Different cache sizes have different network bandwidth capacities. If the server exceeds the available bandwidth, then data won't be sent to the client as quickly. Client requests could time out because the server can't push data to the client fast enough.
+Different cache sizes have different network bandwidth capacities. If the server exceeds the available bandwidth, then data isn't sent to the client as quickly. Client requests could time out because the server can't push data to the client fast enough.
The "Cache Read" and "Cache Write" metrics can be used to see how much server-side bandwidth is being used. You can [view these metrics](cache-how-to-monitor.md#view-cache-metrics) in the portal. [Create alerts](cache-how-to-monitor.md#create-alerts) on metrics like cache read or cache write to be notified early about potential impacts.
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule."::: To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries.
+ > [!NOTE]
+ > The plugins `bag_unpack()`, `pivot()`, and `narrow()` can't be used in the query definition.
1. (Optional) If you're querying an ADX or ARG cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example: ```KQL
azure-monitor Alerts Payload Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-payload-samples.md
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to
"/subscriptions/11111111-1111-1111-1111-111111111111" ], "originAlertId":"12345678-1234-1234-1234-1234567890ab",
- "firedDateTime":"2021-11-17T05:34:48.0623172",
+ "firedDateTime":"2021-11-17T05:34:48.0623172Z",
"description":"Alert rule description", "essentialsVersion":"1.0", "alertContextVersion":"1.0"
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md
# Smart Detection - Failure Anomalies [Application Insights](../app/app-insights-overview.md) automatically alerts you in near real time if your web app experiences an abnormal rise in the rate of failed requests. It detects an unusual rise in the rate of HTTP requests or dependency calls that are reported as failed. For requests, failed requests usually have response codes of 400 or higher. To help you triage and diagnose the problem, an analysis of the characteristics of the failures and related application data is provided in the alert details. There are also links to the Application Insights portal for further diagnosis. The feature needs no set-up nor configuration, as it uses machine learning algorithms to predict the normal failure rate.
-This feature works for any web app, hosted in the cloud or on your own servers, that generate application request or dependency data. For example, if you have a worker role that calls [TrackRequest()](../app/api-custom-events-metrics.md#trackrequest) or [TrackDependency()](../app/api-custom-events-metrics.md#trackdependency).
+This feature works for any web app, hosted in the cloud or on your own servers, that generates application request or dependency data. For example, if you have a worker role that calls [TrackRequest()](../app/api-custom-events-metrics.md#trackrequest) or [TrackDependency()](../app/api-custom-events-metrics.md#trackdependency).
After setting up [Application Insights for your project](../app/app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of Failure Anomalies takes 24 hours to learn the normal behavior of your app, before it's switched on and can send alerts.
The alert details tell you:
Ordinary [metric alerts](./alerts-log.md) tell you there might be a problem. But Smart Detection starts the diagnostic work for you, performing much of the analysis you would otherwise have to do yourself. You get the results neatly packaged, helping you to get quickly to the root of the problem. ## How it works
-Smart Detection monitors the data received from your app, and in particular the failure rates. This rule counts the number of requests for which the `Successful request` property is false, and the number of dependency calls for which the `Successful call` property is false. For requests, by default, `Successful request == (resultCode < 400)` (unless you have written custom code to [filter](../app/api-filtering-sampling.md#filtering) or generate your own [TrackRequest](../app/api-custom-events-metrics.md#trackrequest) calls).
+Smart Detection monitors the data received from your app, and in particular the failure rates. This rule counts the number of requests for which the `Successful request` property is false, and the number of dependency calls for which the `Successful call` property is false. For requests, by default, `Successful request == (resultCode < 400)` (unless you write custom code to [filter](../app/api-filtering-sampling.md#filtering) or generate your own [TrackRequest](../app/api-custom-events-metrics.md#trackrequest) calls).
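As a rough sketch of the same counting logic (not Smart Detection's internal implementation), the following query uses the classic Application Insights `requests` and `dependencies` tables to compute failed versus total calls over a recent window:

```Kusto
// Rough sketch: failed vs. total requests and dependency calls over the last 20 minutes.
union
    (requests     | where timestamp > ago(20m) | summarize failed = countif(success == false), total = count() | extend source = "requests"),
    (dependencies | where timestamp > ago(20m) | summarize failed = countif(success == false), total = count() | extend source = "dependencies")
| extend failurePercent = 100.0 * failed / total
```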
Your app's performance has a typical pattern of behavior. Some requests or dependency calls are more prone to failure than others, and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies.
-As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If an abnormal rise in failure rate is observed by comparison with previous performance, an analysis is triggered.
+As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If the detector discovers an abnormal rise in the failure rate compared with previous performance, it triggers a more in-depth analysis.
When an analysis is triggered, the service performs a cluster analysis on the failed requests to try to identify a pattern of values that characterizes the failures.
-In the previous example, the analysis has discovered that most failures are about a specific result code, request name, Server URL host, and role instance.
+In the example shown earlier, the analysis discovered that most failures are about a specific result code, request name, Server URL host, and role instance.
-When your service is instrumented with these calls, the analyzer looks for an exception and a dependency failure that are associated with requests in the cluster it has identified, together with an example of any trace logs associated with those requests.
-
-The resulting analysis is sent to you as alert, unless you have configured it not to.
-
-Like the [alerts you set manually](./alerts-log.md), you can inspect the state of the fired alert, which can be resolved if the issue is fixed. Configure the alert rules in the Alerts page of your Application Insights resource. But unlike other alerts, you don't need to set up or configure Smart Detection. If you want, you can disable it or change its target email addresses.
+When you instrument your service with these calls, the analyzer looks for an exception and a dependency failure that are associated with requests in the identified cluster. It also looks for an example of any trace logs associated with those requests. The alert you receive includes this additional information, which can provide context for the detection and hint at the root cause of the detected problem.
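As an illustrative sketch (again using the classic Application Insights schema), you can reproduce a similar grouping yourself to see which combinations of dimensions dominate the recent failures:

```Kusto
// Rough sketch: group recent failed requests by the dimensions the analysis typically surfaces.
requests
| where timestamp > ago(1h) and success == false
| summarize failures = count() by resultCode, name, cloud_RoleInstance
| top 10 by failures desc
```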
### Alert logic details-
-Our proprietary machine learning algorithm triggers the alerts, so we can't share the exact implementation details. With that said, we understand that you sometimes need to know more about how the underlying logic works. The primary factors that are evaluated to determine if an alert should be triggered are:
+Failure Anomalies detection relies on a proprietary machine learning algorithm, so the reasons for an alert firing or not firing aren't always deterministic. With that said, the primary factors that the algorithm uses are:
* Analysis of the failure percentage of requests/dependencies in a rolling time window of 20 minutes.
-* A comparison of the failure percentage of the last 20 minutes to the rate in the last 40 minutes and the past seven days, and looking for significant deviations that exceed X-times that standard deviation.
-* Using an adaptive limit for the minimum failure percentage, which varies based on the appΓÇÖs volume of requests/dependencies.
-* There's logic that can automatically resolve the fired alert monitor condition, if the issue is no longer detected for 8-24 hours.
- Note: in the current design. a notification or action will not be sent when a Smart Detection alert is resolved. You can check if a Smart Detection alert was resolved in the Azure portal.
+* A comparison of the failure percentage in the last 20 minutes to the rate in the last 40 minutes and the past seven days. The algorithm looks for significant deviations that exceed X times the standard deviation (an illustrative query sketch follows this list).
+* The algorithm uses an adaptive limit for the minimum failure percentage, which varies based on the app's volume of requests/dependencies.
+* The algorithm includes logic that can automatically resolve the fired alert, if the issue is no longer detected for 8-24 hours.
+ Note: in the current design, a notification or action isn't sent when a Smart Detection alert is resolved. You can check if a Smart Detection alert was resolved in the Azure portal.
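The following query is purely illustrative and is not the proprietary algorithm; it only sketches, under the classic Application Insights schema and an arbitrary threshold, the kind of rolling-window comparison the factors above describe:

```Kusto
// Purely illustrative: compare the last 20 minutes of failure percentage with a 7-day baseline.
let binned = requests
    | where timestamp > ago(7d)
    | summarize failurePercent = 100.0 * countif(success == false) / count() by bin(timestamp, 20m);
let baseline = binned
    | where timestamp < ago(20m)
    | summarize avgPercent = avg(failurePercent), stdevPercent = stdev(failurePercent);
binned
| where timestamp >= ago(20m)
| extend key = 1
| join kind=inner (baseline | extend key = 1) on key
// The factor of 3 is an arbitrary stand-in for the "X times the standard deviation" described above.
| extend isAnomalous = failurePercent > avgPercent + 3 * stdevPercent
| project timestamp, failurePercent, avgPercent, stdevPercent, isAnomalous
```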
-## Configure alerts
+## Managing Failure Anomalies alert rules
-You can disable Smart Detection alert rule from the portal or using Azure Resource Manager ([see template example](./proactive-arm-config.md)).
+### Alert rule creation
+A Failure Anomalies alert rule is created automatically when your Application Insights resource is created. The rule is automatically configured to analyze the telemetry on that resource.
+You can create the rule again using the Azure [REST API](https://learn.microsoft.com/rest/api/monitor/smart-detector-alert-rules?view=rest-monitor-2019-06-01&preserve-view=true) or a [Resource Manager template](proactive-arm-config.md#failure-anomalies-alert-rule). Creating the rule again can be useful if the automatic creation failed for some reason, or if you deleted the rule.
-This alert rule is created with an associated [Action Group](./action-groups.md) named "Application Insights Smart Detection" that contains email and webhook actions, and can be extended to trigger more actions when the alert fires.
-
-> [!NOTE]
-> Email notifications sent from this alert rule are now sent by default to users associated with the subscription's Monitoring Reader and Monitoring Contributor roles. More information on this is available [here](./proactive-email-notification.md).
-> Notifications sent from this alert rule follow the [common alert schema](./alerts-common-schema.md).
->
-
-Open the Alerts page. Failure Anomalies alert rules are included along with any alerts that you have set manually, and you can see whether it's currently in the alert state.
+### Alert rule configuration
+To configure a Failure Anomalies alert rule in the portal, open the Alerts page and select Alert Rules. Failure Anomalies alert rules are included along with any alerts that you set manually.
:::image type="content" source="./media/proactive-failure-diagnostics/021.png" alt-text="On the Application Insights resource page, click Alerts tile, then Manage alert rules." lightbox="./media/proactive-failure-diagnostics/021.png":::
-Click the alert to configure it.
+Click the alert rule to configure it.
:::image type="content" source="./media/proactive-failure-diagnostics/032.png" alt-text="Rule configuration screen." lightbox="./media/proactive-failure-diagnostics/032.png":::
+You can disable Smart Detection alert rule from the portal or using an [Azure Resource Manager template](proactive-arm-config.md#failure-anomalies-alert-rule).
+
+This alert rule is created with an associated [Action Group](./action-groups.md) named "Application Insights Smart Detection." By default, this action group contains Email Azure Resource Manager Role actions and sends notifications to users who have the Monitoring Contributor or Monitoring Reader Azure Resource Manager roles in your subscription. You can remove, change, or add the action groups that the rule triggers, as for any other Azure alert rule. Notifications sent from this alert rule follow the [common alert schema](./alerts-common-schema.md).
++ ## Delete alerts
-You can disable or delete a Failure Anomalies alert rule.
+You can delete a Failure Anomalies alert rule.
You can do so manually on the Alert rules page or with the following Azure CLI command:
Notice that if you delete an Application Insights resource, the associated Failu
## Triage and diagnose an alert
+An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there's some problem with your app or its environment.
+To investigate further, click on 'View full details in Application Insights.' The links in this page take you straight to a [search page](../app/diagnostic-search.md) filtered to the relevant requests, exception, dependency, or traces.
To investigate further, click on 'View full details in Application Insights.' The links on this page take you straight to a [search page](../app/transaction-search-and-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exceptions, dependencies, or traces. You can also open the [Azure portal](https://portal.azure.com), navigate to the Application Insights resource for your app, and open the Failures page.
-Clicking on 'Diagnose failures' helps you get more details and resolve the issue.
+Clicking on 'Diagnose failures' can help you get more details and resolve the issue.
:::image type="content" source="./media/proactive-failure-diagnostics/051.png" alt-text="Diagnostic search." lightbox="./media/proactive-failure-diagnostics/051.png#lightbox":::
-From the percentage of requests and number of users affected, you can decide how urgent the issue is. In the previous example, the failure rate of 78.5% compares with a normal rate of 2.2%, indicates that something bad is going on. On the other hand, only 46 users were affected. If it was your app, you'd be able to assess how serious that is.
+From the percentage of requests and number of users affected, you can decide how urgent the issue is. In the example shown earlier, the failure rate of 78.5%, compared with a normal rate of 2.2%, indicates that something bad is going on. On the other hand, only 46 users were affected. This information can help you to assess how serious the problem is.
-In many cases, you will be able to diagnose the problem quickly from the request name, exception, dependency failure, and trace data provided.
+In many cases, you can diagnose the problem quickly from the request name, exception, dependency failure, and trace data provided.
In this example, there was an exception from SQL Database due to request limit being reached.
Click **Alerts** in the Application Insights resource page to get to the most re
:::image type="content" source="./media/proactive-failure-diagnostics/070.png" alt-text="Alerts summary." lightbox="./media/proactive-failure-diagnostics/070.png":::
-## What's the difference ...
-Smart Detection of Failure Anomalies complements other similar but distinct features of Application Insights.
-
-* [metric alerts](./alerts-log.md) are set by you and can monitor a wide range of metrics such as CPU occupancy, request rates, page load times, and so on. You can use them to warn you, for example, if you need to add more resources. By contrast, Smart Detection of Failure Anomalies covers a small range of critical metrics (currently only failed request rate), designed to notify you in near real-time manner once your web app's failed request rate increases compared to web app's normal behavior. Unlike metric alerts, Smart Detection automatically sets and updates thresholds in response changes in the behavior. Smart Detection also starts the diagnostic work for you, saving you time in resolving issues.
-
-* [Smart Detection of performance anomalies](smart-detection-performance.md) also uses machine intelligence to discover unusual patterns in your metrics, and no configuration by you is required. But unlike Smart Detection of Failure Anomalies, the purpose of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for Failure Anomalies is performed continuously on incoming application data, and you will be notified within minutes if server failure rates are greater than expected.
- ## If you receive a Smart Detection alert
-*Why have I received this alert?*
+*Why did I receive this alert?*
* We detected an abnormal rise in failed requests rate compared to the normal baseline of the preceding period. After analysis of the failures and associated application data, we think that there's a problem that you should look into.
Smart Detection of Failure Anomalies complements other similar but distinct feat
*I lost the email. Where can I find the notifications in the portal?*
-* In the Activity logs. In Azure, open the Application Insights resource for your app, then select Activity logs.
+* You can find Failure Anomalies alerts in the Azure portal, in your Application Insights alerts page.
-*Some of the alerts are about known issues and I do not want to receive them.*
+*Some of the alerts are about known issues and I don't want to receive them.*
* You can use [alert action rules](./alerts-processing-rules.md) suppression feature.
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Let's assume the input log message body is `User account with userId 123456xx fa
"body": { "toAttributes": { "rules": [
- "userId (?<redactedUserId>[\\da-zA-Z]+)"
+ "userId (?<redactedUserId>[0-9a-zA-Z]+)"
] } }
azure-monitor App Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-expression.md
- Title: app() expression in Azure Monitor log queries | Microsoft Docs
-description: The app expression is used in an Azure Monitor log query to retrieve data from a specific Application Insights app in the same resource group, another resource group, or another subscription.
--- Previously updated : 04/20/2023---
-# app() expression in Azure Monitor query
-
-The `app` expression is used in an Azure Monitor query to retrieve data from a specific Application Insights app in the same resource group, another resource group, or another subscription. This is useful to include application data in an Azure Monitor log query and to query data across multiple applications in an Application Insights query.
-
-> [!IMPORTANT]
-> The app() expression is not used if you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md) since log data is stored in a Log Analytics workspace. Use the workspace() expression to write a query that includes application in multiple workspaces. For multiple applications in the same workspace, you don't need a cross workspace query.
-
-## Syntax
-
-`app(`*Identifier*`)`
--
-## Arguments
--- *Identifier*: Identifies the app using one of the formats in the table below.-
-| Identifier | Description | Example
-|:|:|:|
-| ID | GUID of the app | app("00000000-0000-0000-0000-000000000000") |
-| Azure Resource ID | Identifier for the Azure resource |app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp") |
--
-## Notes
-
-* You must have read access to the application.
-* Identifying an application by its ID or Azure Resource ID is strongly recommended since unique, removes ambiguity, and more performant.
-* Use the related expression [workspace](../logs/workspace-expression.md) to query across Log Analytics workspaces.
-
-## Examples
-
-```Kusto
-app("00000000-0000-0000-0000-000000000000").requests | count
-```
-```Kusto
-app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
-```
-```Kusto
-union
-(workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
-(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myColumnInstance")
-| count
-```
-```Kusto
-union
-(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests)
-| where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
-```
-
-## Next steps
--- See the [workspace expression](../logs/workspace-expression.md) to refer to a Log Analytics workspace.-- Read about how [Azure Monitor data](./log-query-overview.md) is stored.-- Access full documentation for the [Kusto query language](/azure/kusto/query/).
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Service | Table | |:|:|
-| Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates) |
+| Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates)<br>[AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/AADServicePrincipalSignInLogs) |
| API Management | [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/ApiManagementGatewayLogs)<br>[ApiManagementWebSocketConnectionLogs](/azure/azure-monitor/reference/tables/ApiManagementWebSocketConnectionLogs) | | Application Gateways | [AGWAccessLogs](/azure/azure-monitor/reference/tables/AGWAccessLogs)<br>[AGWPerformanceLogs](/azure/azure-monitor/reference/tables/AGWPerformanceLogs)<br>[AGWFirewallLogs](/azure/azure-monitor/reference/tables/AGWFirewallLogs) | | Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
Title: Query across resources with Azure Monitor | Microsoft Docs
-description: This article describes how you can query against resources from multiple workspaces and an Application Insights app in your subscription.
-
+ Title: Query across resources with Azure Monitor
+description: Query and correlated data from multiple Log Analytics workspaces, applications, or resources using the `workspace()`, `app()`, and `resource()` Kusto Query Language (KQL) expressions.
+ Last updated 05/30/2023
+# Customer intent: As a data analyst, I want to write KQL queries that correlate data from multiple Log Analytics workspaces, applications, or resources, to enable my analysis.
-# Create a log query across multiple workspaces and apps in Azure Monitor
+# Query data across Log Analytics workspaces, applications, and resources in Azure Monitor
-Azure Monitor Logs support querying across multiple Log Analytics workspaces and Application Insights apps in the same resource group, another resource group, or another subscription. This capability provides you with a systemwide view of your data.
+There are two ways to query data from multiple workspaces, applications, and resources:
-If you manage subscriptions in other Microsoft Entra tenants through [Azure Lighthouse](../../lighthouse/overview.md), you can include [Log Analytics workspaces created in those customer tenants](../../lighthouse/how-to/monitor-at-scale.md) in your queries.
+* Explicitly by specifying the workspace, app, or resource information using the [workspace()](#query-across-log-analytics-workspaces-using-workspace), [app()](#query-across-classic-application-insights-applications-using-app), or [resource()](#correlate-data-between-resources-using-resource) expressions, as described in this article.
+* Implicitly by using [resource-context queries](manage-access.md#access-mode). When you query in the context of a specific resource, resource group, or a subscription, the query retrieves relevant data from all workspaces that contain data for these resources. Resource-context queries don't retrieve data from classic Application Insights resources.
-There are two methods to query data that's stored in multiple workspaces and apps:
+This article explains how to use the `workspace()`, `app()`, and `resource()` expressions to query data from multiple Log Analytics workspaces, applications, and resources.
-* Explicitly by specifying the workspace and app information. This technique is used in this article.
-* Implicitly by using [resource-context queries](manage-access.md#access-mode). When you query in the context of a specific resource, resource group, or a subscription, the relevant data will be fetched from all workspaces that contain data for these resources. Application Insights data that's stored in apps won't be fetched.
+If you manage subscriptions in other Microsoft Entra tenants through [Azure Lighthouse](../../lighthouse/overview.md), you can include [Log Analytics workspaces created in those customer tenants](../../lighthouse/how-to/monitor-at-scale.md) in your queries.
> [!IMPORTANT]
-> If you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the `workspace()` expression to write a query that includes applications in multiple workspaces. For multiple applications in the same workspace, you don't need a cross-workspace query.
+> If you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the `workspace()` expression to query data from applications in multiple workspaces. You don't need a cross-workspace query to query data from multiple applications in the same workspace.
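For instance, as a minimal sketch (assuming two workspace-based Application Insights resources whose telemetry lands in different Log Analytics workspaces, with placeholder workspace IDs), you can union their `AppRequests` tables:

```Kusto
// Minimal sketch: request telemetry from workspace-based Application Insights resources
// stored in two different Log Analytics workspaces.
union
    workspace("00000000-0000-0000-0000-000000000001").AppRequests,
    workspace("00000000-0000-0000-0000-000000000002").AppRequests
| where TimeGenerated > ago(1h)
| summarize requestCount = count() by AppRoleName, _ResourceId
```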
## Permissions required - You must have `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example. - To save a query, you must have `microsoft.operationalinsights/querypacks/queries/action` permissions to the query pack where you want to save the query, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example.
-## Cross-resource query limits
+## Limitations
-* The number of Application Insights components and Log Analytics workspaces that you can include in a single query is limited to 100.
+* Cross-resource and cross-service queries don't support parameterized functions and functions whose definition includes other cross-workspace or cross-service expressions, including `adx()`, `arg()`, `resource()`, `workspace()`, and `app()`.
+* You can include up to 100 Log Analytics workspaces or classic Application Insights resources in a single query.
* Querying across a large number of resources can substantially slow down the query. * Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md).
-* References to a cross resource, such as another workspace, should be explicit and can't be parameterized. See [Gather identifiers for Log Analytics workspaces](?tabs=workspace-identifier#gather-identifiers-for-log-analytics-workspaces-and-application-insights-resources) for examples.
-
-## Gather identifiers for Log Analytics workspaces and Application Insights resources
-
-To reference another workspace in your query, use the [workspace](../logs/workspace-expression.md) identifier. For an app from Application Insights, use the [app](./app-expression.md) identifier.
-
-### [Workspace identifier](#tab/workspace-identifier)
-
-You can identify a workspace using one of these IDs:
-
-* **Workspace ID**: A workspace ID is the unique, immutable, identifier assigned to each workspace represented as a globally unique identifier (GUID).
-
- `workspace("00000000-0000-0000-0000-000000000000").Update | count`
-
-* **Azure Resource ID**: This ID is the Azure-defined unique identity of the workspace. For workspaces, the format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/workspaces/workspaceName*.
-
- For example:
-
- ```
- workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail-it").Update | count
- ```
-
-### [App identifier](#tab/app-identifier)
-The following examples return a summarized count of requests made against an app named *fabrikamapp* in Application Insights.
-
-You can identify an app using one of these IDs:
-
-* **ID**: This ID is the app GUID of the application.
-
- `app("00000000-0000-0000-0000-000000000000").requests | count`
-
-* **Azure Resource ID**: This ID is the Azure-defined unique identity of the app. The format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/components/componentName*.
-
- For example:
-
- ```
- app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
- ```
--
+* References to a cross resource, such as another workspace, should be explicit and can't be parameterized.
-## Query across Log Analytics workspaces and from Application Insights
+## Query across workspaces, applications, and resources using functions
-Follow the instructions in this section to query without using a function or by using a function.
+This section explains how to query workspaces, applications, and resources, both with and without using a function.
### Query without using a function You can query multiple resources from any of your resource instances. These resources can be workspaces and apps combined. Example for a query across three workspaces:
-```
+```kusto
union Update, workspace("00000000-0000-0000-0000-000000000001").Update,
applicationsScoping
>[!NOTE] > This method can't be used with log alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use a function for resource scoping in log alerts, you must edit the alert rule in the portal or with an Azure Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log alert query.
+## Query across Log Analytics workspaces using workspace()
+
+Use the `workspace()` expression to retrieve data from a specific workspace in the same resource group, another resource group, or another subscription. You can use this expression to include log data in an Application Insights query and to query data across multiple workspaces in a log query.
+
+### Syntax
+
+`workspace(`*Identifier*`)`
+
+### Arguments
+
+*Identifier*: Identifies the workspace by using one of the formats in the following table.
+
+| Identifier | Description | Example
+|:|:|:|
+| ID | GUID of the workspace | workspace("00000000-0000-0000-0000-000000000000") |
+| Azure Resource ID | Identifier for the Azure resource | workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail") |
+
+### Examples
+
+```Kusto
+workspace("00000000-0000-0000-0000-000000000000").Update | count
+```
+```Kusto
+workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail").Event | count
+```
+```Kusto
+union
+( workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
+(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myRoleInstance")
+| count
+```
+```Kusto
+union
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests) | where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
+```
+
+## Query across classic Application Insights applications using app()
+
+Use the `app` expression to retrieve data from a specific classic Application Insights resource in the same resource group, another resource group, or another subscription. If you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the `workspace()` expression to query data from applications in multiple workspaces. You don't need a cross-workspace query to query data from multiple applications in the same workspace.
+
+### Syntax
+
+`app(`*Identifier*`)`
++
+### Arguments
+
+*Identifier*: Identifies the app using one of the formats in the table below.
+
+| Identifier | Description | Example
+|:|:|:|
+| ID | GUID of the app | app("00000000-0000-0000-0000-000000000000") |
+| Azure Resource ID | Identifier for the Azure resource |app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp") |
+
+### Examples
+
+```Kusto
+app("00000000-0000-0000-0000-000000000000").requests | count
+```
+```Kusto
+app("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count
+```
+```Kusto
+union
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
+(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myColumnInstance")
+| count
+```
+```Kusto
+union
+(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests)
+| where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
+```
+
+## Correlate data between resources using resource()
+
+The `resource` expression is used in an Azure Monitor query [scoped to a resource](scope.md#query-scope) to retrieve data from other resources.
++
+### Syntax
+
+`resource(`*Identifier*`)`
+
+### Arguments
+
+*Identifier*: Identifies the resource, resource group, or subscription from which to correlate data.
+
+| Identifier | Description | Example
+|:|:|:|
+| Resource | Includes data for the resource. | resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.compute/virtualmachines/myvm") |
+| Resource Group or Subscription | Includes data for the resource and all resources that it contains. | resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup") |
++
+### Examples
+
+```Kusto
+union (Heartbeat),(resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.compute/virtualmachines/myvm").Heartbeat) | summarize count() by _ResourceId, TenantId
+```
+```Kusto
+union (Heartbeat),(resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup").Heartbeat) | summarize count() by _ResourceId, TenantId
+```
+ ## Next steps See [Analyze log data in Azure Monitor](./log-query-overview.md) for an overview of log queries and how Azure Monitor log data is structured.
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
# Azure Monitor customer-managed key
-Data in Azure Monitor is encrypted with Microsoft-managed keys. You can use your own encryption key to protect the data and saved queries in your workspaces. Customer-managed keys in Azure Monitor give you greater flexibility to manage access controls to logs. Once configure, new data for linked workspaces is encrypted with your key stored in [Azure Key Vault](../../key-vault/general/overview.md), or [Azure Key Vault Managed "HSM"](../../key-vault/managed-hsm/overview.md).
+Data in Azure Monitor is encrypted with Microsoft-managed keys. You can use your own encryption key to protect the data and saved queries in your workspaces. Customer-managed keys in Azure Monitor give you greater flexibility to manage access controls to logs. Once configured, new data ingested to linked workspaces gets encrypted with your key stored in [Azure Key Vault](../../key-vault/general/overview.md) or [Azure Key Vault Managed "HSM"](../../key-vault/managed-hsm/overview.md).
-We recommend you review [Limitations and constraints](#limitationsandconstraints) below before configuration.
+Review [limitations and constraints](#limitationsandconstraints) before configuration.
## Customer-managed key overview [Encryption at Rest](../../security/fundamentals/encryption-atrest.md) is a common privacy and security requirement in organizations. You can let Azure completely manage encryption at rest, while you have various options to closely manage encryption and encryption keys.
-Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). You also have the option to encrypt data with your own key in [Azure Key Vault](../../key-vault/general/overview.md), with control over key lifecycle and ability to revoke access to your data at any time. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) operates.
+Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). You can encrypt data using your own key in [Azure Key Vault](../../key-vault/general/overview.md), for control over the key lifecycle and the ability to revoke access to your data. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) operates.
-Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. Data to dedicated clusters is encrypted twice, once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different encryption algorithms and two different keys. [double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox) control.
+Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md), providing a higher protection level and control. Data is encrypted twice: once at the service level using Microsoft-managed keys or customer-managed keys, and once at the infrastructure level, using two different encryption algorithms and two different keys. [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. Dedicated clusters also let you protect data with [Lockbox](#customer-lockbox).
Data ingested in the last 14 days or recently used in queries is kept in hot-cache (SSD-backed) for query efficiency. SSD data is encrypted with Microsoft keys regardless of customer-managed key configuration, but your control over SSD access adheres to [key revocation](#key-revocation).
-Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires commitment Tier starting at 500 GB per day and can have values of 500, 1000, 2000 or 5000 GB per day.
+The Log Analytics dedicated clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires a commitment tier starting at 500 GB per day, with supported values of 500, 1000, 2000, or 5000 GB per day.
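As a hedged sketch (not taken from the linked article), a dedicated cluster with a 500 GB per day commitment tier could be created with the Azure CLI as follows; the names and location are placeholders, and the `--sku-capacity` parameter is assumed to accept the tier values listed above.

```azurecli
# Sketch: create a dedicated cluster with a 500 GB/day commitment tier (names and location are placeholders)
az monitor log-analytics cluster create --resource-group "resource-group-name" --name "cluster-name" --location "eastus2" --sku-capacity 500
```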
## How Customer-managed key works in Azure Monitor
-Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-managed key on multiple workspaces, a Log Analytics Cluster resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster's storage uses the managed identity that\'s associated with the Cluster resource to authenticate to your Azure Key Vault via Microsoft Entra ID.
+Azure Monitor uses managed identity to grant access to your Azure Key Vault. The identity of the Log Analytics cluster is supported at the cluster level. To allow Customer-managed key on multiple workspaces, a Log Analytics Cluster resource performs as an intermediate identity connection between your Key Vault and your Log Analytics workspaces. The cluster's storage uses the managed identity associated with the cluster to authenticate to your Azure Key Vault via Microsoft Entra ID.
Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): System-assigned and User-assigned, though only a single identity can be defined in a cluster, depending on your scenario. - System-assigned managed identity is simpler and is generated automatically at cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations. - User-assigned managed identity lets you configure Customer-managed key at cluster creation, if you grant it permissions in your Key Vault before cluster creation.
-You can apply Customer-managed key configuration to a new cluster, or existing cluster that has linked workspaces with data ingested to them. New data ingested to linked workspaces gets encrypted with your key, and older data ingested before the configuration, remains encrypted with Microsoft key. Your queries aren't affected by Customer-managed key configuration and query is performed across old and new data seamlessly. You can unlink workspaces from your cluster at any time, and new data ingested after the unlink gets encrypted with Microsoft key, and query is performed across old and new data seamlessly.
+You can apply Customer-managed key configuration to a new cluster, or to an existing cluster that is linked to workspaces and ingesting data. New data ingested to linked workspaces gets encrypted with your key, and older data ingested before the configuration remains encrypted with the Microsoft key. Your queries aren't affected by Customer-managed key configuration and are performed across old and new data seamlessly. You can unlink workspaces from the cluster at any time. New data ingested after the unlink gets encrypted with the Microsoft key, and queries are performed across old and new data seamlessly.
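For illustration, the following Azure CLI commands are a hedged sketch of linking and later unlinking a workspace; the names and the cluster resource ID are placeholders, and the `linked-service` command group is assumed to be available in your CLI version.

```azurecli
# Sketch: link a workspace to the dedicated cluster (names and IDs are placeholders)
az monitor log-analytics workspace linked-service create --resource-group "resource-group-name" --workspace-name "workspace-name" --name cluster --write-access-resource-id "/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name"

# Sketch: unlink the workspace; data ingested after the unlink is encrypted with Microsoft-managed keys
az monitor log-analytics workspace linked-service delete --resource-group "resource-group-name" --workspace-name "workspace-name" --name cluster
```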
> [!IMPORTANT] > Customer-managed key capability is regional. Your Azure Key Vault, cluster and linked workspaces must be in the same region, but they can be in different subscriptions.
The following rules apply:
- The "AEK" is used to derive "DEKs, which are the keys that are used to encrypt each block of data written to disk. - When you configure a key in your Key Vault, and updated the key details in the cluster, the cluster storage performs requests to 'wrap' and 'unwrap' "AEK" for encryption and decryption. - Your "KEK" never leaves your Key Vault, and in the case of Managed "HSM", it never leaves the hardware.-- Azure Storage uses managed identity that's associated with the *Cluster* resource for authentication. It accesses Azure Key Vault via Microsoft Entra ID.
+- Azure Storage uses the managed identity associated with the *Cluster* resource for authentication. It accesses Azure Key Vault via Microsoft Entra ID.
### Customer-Managed key provisioning steps
Customer-managed key configuration isn't supported in Azure portal currently and
A [portfolio of Azure Key Management products](../../key-vault/managed-hsm/mhsm-control-data.md#portfolio-of-azure-key-management-products) lists the vaults and managed HSMs that can be used.
-Create or use an existing Azure Key Vault in the region that the cluster is planed, and generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable, to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault, both **Soft delete** and **Purge protection** should be enabled.
+Create or use an existing Azure Key Vault in the region where the cluster is planned. In your Key Vault, generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable, to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault; both **Soft delete** and **Purge protection** should be enabled.
<!-- convertborder later --> :::image type="content" source="media/customer-managed-keys/soft-purge-protection.png" lightbox="media/customer-managed-keys/soft-purge-protection.png" alt-text="Screenshot of soft delete and purge protection settings." border="false":::
N/A
# [Azure CLI](#tab/azure-cli)
-```azurecli
-az account set ΓÇösubscription "cluster-subscription-id"
+When entering "''" value for ```key-version```, the cluster always uses the last key version in Key Vault and there is no need to update cluster post key rotation.
-az monitor log-analytics cluster update ΓÇöno-wait ΓÇöname "cluster-name" ΓÇöresource-group "resource-group-name" ΓÇökey-name "key-name" ΓÇökey-vault-uri "key-uri" ΓÇökey-version "key-version"
+```azurecli
+az account set --subscription cluster-subscription-id
-# Wait for job completion when `ΓÇöno-wait` was used
-$clusterResourceId = az monitor log-analytics cluster list ΓÇöresource-group "resource-group-name" ΓÇöquery "[?contains(name, "cluster-name")].[id]" ΓÇöoutput tsv
-az resource wait ΓÇöcreated ΓÇöids $clusterResourceId ΓÇöinclude-response-body true
+az monitor log-analytics cluster update --no-wait --name "cluster-name" --resource-group "resource-group-name" --key-name "key-name" --key-vault-uri "key-uri" --key-version "key-version"
+$clusterResourceId = az monitor log-analytics cluster list --resource-group "resource-group-name" --query "[?contains(name, 'cluster-name')].[id]" --output tsv
+az resource wait --created --ids $clusterResourceId --include-response-body true
``` # [PowerShell](#tab/powershell)
+When entering "''" value for ```KeyVersion```, the cluster always uses the last key version in Key Vault and there is no need to update cluster post key rotation.
+ ```powershell Select-AzSubscription "cluster-subscription-id"
Follow the procedure illustrated in [Dedicated Clusters article](./logs-dedicate
> - The recommended way to revoke access to your data is by disabling your key, or deleting Access Policy in your Key Vault. > - Setting the cluster's `identity` `type` to `None` also revokes access to your data, but this approach isn't recommended since you can't revert it without contacting support.
-The cluster storage will always respect changes in key permissions within an hour or sooner and storage will become unavailable. New data ingested to linked workspaces is dropped and non-recoverable. Data is inaccessible on these workspaces and queries fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and will be purged when retention is reached. Data ingested in the last 14 days and data recently used in queries is also kept in hot-cache (SSD-backed) for query efficiency. The data on SSD gets deleted on key revocation operation and becomes inaccessible. The cluster storage attempts reach your Key Vault to unwrap encryption periodically, and when key is enabled, unwrap succeeds, SSD data is reloaded from storage, data ingestion and query are resumed within 30 minutes.
+The cluster storage always respects changes in key permissions within an hour or sooner, and storage becomes unavailable. New data ingested to linked workspaces is dropped and non-recoverable. Data is inaccessible on these workspaces and queries fail. Previously ingested data remains in storage as long as your cluster and your workspaces aren't deleted. Inaccessible data is governed by the data-retention policy and purged when retention is reached. Data ingested in the last 14 days and data recently used in queries is also kept in hot-cache (SSD-backed) for query efficiency. The data on SSD gets deleted on the key revocation operation and becomes inaccessible. The cluster storage attempts to reach Key Vault for wrap and unwrap periodically; once the key is enabled, unwrap succeeds, SSD data is reloaded from storage, and data ingestion and query are resumed within 30 minutes.
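The following Azure CLI commands are a minimal sketch of the recommended revocation path in the note above, disabling and later re-enabling the key; the vault and key names are placeholders.

```azurecli
# Sketch: revoke access to data by disabling the key (vault and key names are placeholders)
az keyvault key set-attributes --vault-name "key-vault-name" --name "key-name" --enabled false

# Sketch: re-enable the key; ingestion and query resume after the storage unwraps the "AEK" again
az keyvault key set-attributes --vault-name "key-vault-name" --name "key-name" --enabled true
```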
## Key rotation Key rotation has two modes: -- AutorotationΓÇöupdate your cluster with ```"keyVaultProperties"``` but omit ```"keyVersion"``` property, or set it to ```""```. Storage will automatically use the latest versions.-- Explicit key version updateΓÇöupdate your cluster with key version in ```"keyVersion"``` property. Rotation of keys requires an explicit ```"keyVaultProperties"``` update in cluster, see [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you generate new key version in Key Vault but don't update it in the cluster, the cluster storage will keep using your previous key. If you disable or delete the old key before updating a new one in the cluster, you will get into [key revocation](#key-revocation) state.
+- Autorotation: update your cluster with ```"keyVaultProperties"``` but omit the ```"keyVersion"``` property, or set it to ```""```. Storage automatically uses the latest key version (see the sketch after this list).
+- Explicit key version update: update your cluster with the key version in the ```"keyVersion"``` property. Rotation of keys requires an explicit ```"keyVaultProperties"``` update in the cluster; see [Update cluster with Key identifier details](#update-cluster-with-key-identifier-details). If you generate a new key version in Key Vault but don't update it in the cluster, the cluster storage keeps using your previous key. If you disable or delete the old key before updating the new one in the cluster, you get into a [key revocation](#key-revocation) state.
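As a sketch of the autorotation mode referenced in the first list item, the cluster update command shown earlier can be reused with an empty `--key-version` value; the resource names are placeholders.

```azurecli
# Sketch: configure autorotation by passing an empty key version (names are placeholders)
az monitor log-analytics cluster update --name "cluster-name" --resource-group "resource-group-name" --key-name "key-name" --key-vault-uri "key-uri" --key-version ""
```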
All your data remains accessible after the key rotation operation. Data is always encrypted with the Account Encryption Key ("AEK"), which is encrypted with your new Key Encryption Key ("KEK") version in Key Vault.
When linking your Storage Account for saved queries, the service stores saved-qu
* Linking a Storage Account for queries removes existing saved queries from your workspace. Copy saved queries that you need before this configuration. You can view your saved queries using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch). * Query 'history' and 'pin to dashboard' aren't supported when linking Storage Account for queries. * You can link a single Storage Account to a workspace, which can be used for both saved queries and log alerts queries.
-* Fired log alerts will not contain search results or alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts.
+* Fired log alerts won't contain search results or alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts.
**Configure BYOS for saved queries**
N/A
# [Azure CLI](#tab/azure-cli) ```azurecli
-az account set ΓÇösubscription "storage-account-subscription-id"
+az account set --subscription "storage-account-subscription-id"
$storageAccountId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage name>'
-az account set ΓÇösubscription "workspace-subscription-id"
+az account set --subscription "workspace-subscription-id"
-az monitor log-analytics workspace linked-storage create ΓÇötype Query ΓÇöresource-group "resource-group-name" ΓÇöworkspace-name "workspace-name" ΓÇöstorage-accounts $storageAccountId
+az monitor log-analytics workspace linked-storage create --type Query --resource-group "resource-group-name" --workspace-name "workspace-name" --storage-accounts $storageAccountId
``` # [PowerShell](#tab/powershell)
N/A
# [Azure CLI](#tab/azure-cli) ```azurecli
-az account set ΓÇösubscription "storage-account-subscription-id"
+az account set --subscription "storage-account-subscription-id"
$storageAccountId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage name>'
-az account set ΓÇösubscription "workspace-subscription-id"
+az account set --subscription "workspace-subscription-id"
-az monitor log-analytics workspace linked-storage create ΓÇötype ALerts ΓÇöresource-group "resource-group-name" ΓÇöworkspace-name "workspace-name" ΓÇöstorage-accounts $storageAccountId
+az monitor log-analytics workspace linked-storage create --type Alerts --resource-group "resource-group-name" --workspace-name "workspace-name" --storage-accounts $storageAccountId
``` # [PowerShell](#tab/powershell)
After the configuration, any new alert query will be saved in your storage.
Lockbox gives you the control to approve or reject a Microsoft engineer's request to access your data during a support request.
-In Azure Monitor, you have this control on data in workspaces linked to your dedicated cluster. The Lockbox control applies to data stored in a dedicated cluster where itΓÇÖs kept isolated in the cluster storage under your Lockbox protected subscription.
+In Azure Monitor, Lockbox is provided on dedicated clusters, where your permission to access data is granted at the subscription level.
Learn more about [Customer Lockbox for Microsoft Azure](../../security/fundamentals/customer-lockbox-overview.md)
Customer-Managed key is provided on dedicated cluster and these operations are r
- [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) is configured automatically for clusters created from October 2020 in supported regions. You can verify if your cluster is configured for double encryption by sending a GET request on the cluster and observing that the `isDoubleEncryptionEnabled` value is `true` for clusters with Double encryption enabled (a sketch of this request follows this list). - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- - Double encryption settings cannot be changed after the cluster has been created.
+ - Double encryption settings can't be changed after the cluster is created.
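As a hedged sketch of the GET request mentioned in the first item of this list, using `az rest`; the subscription ID, resource group, cluster name, and API version are placeholder assumptions.

```azurecli
# Sketch: inspect the cluster and check isDoubleEncryptionEnabled (IDs and api-version are assumptions)
az rest --method get --url "https://management.azure.com/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name?api-version=2021-06-01" --query "properties.isDoubleEncryptionEnabled"
```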
Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster.
Deleting a linked workspace is permitted while linked to cluster. If you decide
- Behavior per Key Vault availability: - Normal operation: storage caches "AEK" for short periods of time and goes back to Key Vault to unwrap periodically.
- - Key Vault connection errorsΓÇöstorage handles transient errors (timeouts, connection failures, "DNS" issues), by allowing keys to stay in cache for the duration of the availability issue, and it overcomes blips and availability issues. The query and ingestion capabilities continue without interruption.
+ - Key Vault connection errors: storage handles transient errors (timeouts, connection failures, "DNS" issues) by allowing keys to stay in cache for the duration of the availability issue, which overcomes blips and availability issues. The query and ingestion capabilities continue without interruption.
 - Key Vault access rate: the frequency at which the cluster storage accesses Key Vault for wrap and unwrap is between 6 and 60 seconds. -- If you update your cluster while it's at provisioning state, or updating state, the update will fail.
+- If you update your cluster while it's in a provisioning or updating state, the update fails.
-- If you get conflictΓÇöerror when creating a cluster, you may have deleted your cluster in the last 14 days and itΓÇÖs in a soft-delete state. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
+- If you get a conflict error when creating a cluster, a cluster with the same name may have been deleted in the last 14 days and its name kept reserved. The deleted cluster name becomes available 14 days after deletion.
-- Workspace link to cluster will fail if it is linked to another cluster.
+- Linking a workspace to a cluster fails if the workspace is already linked to another cluster.
-- If you create a cluster and specify the KeyVaultProperties immediately, the operation may fail since the Access Policy can't be defined until system identity is assigned to the cluster.
+- If you create a cluster and specify the KeyVaultProperties immediately, the operation may fail until an identity is assigned to the cluster and granted permissions in Key Vault.
-- If you update existing cluster with KeyVaultProperties and 'Get' key Access Policy is missing in Key Vault, the operation will fail.
+- If you update an existing cluster with KeyVaultProperties and the 'Get' key Access Policy is missing in Key Vault, the operation fails.
- If you fail to deploy your cluster, verify that your Azure Key Vault, cluster, and linked workspaces are in the same region. They can be in different subscriptions. -- If you update your key version in Key Vault and don't update the new key identifier details in the cluster, the cluster will keep using your previous key and your data will become inaccessible. Update new key identifier details in the cluster to resume data ingestion and ability to query data.
+- If you rotate your key in Key Vault and don't update the new key identifier details in the cluster, the cluster keeps using the previous key and your data becomes inaccessible. Update the new key identifier details in the cluster to resume data ingestion and query. You can set the key version to "''" to have storage always use the latest key version automatically.
-- Some operations are long and can take a while to complete ΓÇö these are cluster create, cluster key update and cluster delete. You can check the operation status by sending GET request to cluster or workspace and observe the response. For example, unlinked workspace won't have the *clusterResourceId* under *features*.
+- Some operations are long running and can take a while to complete, including cluster create, cluster key update, and cluster delete. You can check the operation status by sending a GET request to the cluster or workspace and observing the response. For example, an unlinked workspace won't have the *clusterResourceId* under *features*.
- Error messages
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
To help you determine an appropriate daily cap for your workspace, see [Azure M
## Workspaces with Microsoft Defender for Cloud > [!IMPORTANT]
-> Starting September 18, 2023, the Log Analytics Daily Cap all billable data types will
-> be capped if the daily cap is met, and there is no special behavior for any data types when [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers-select-plan.md) is enabled on your workspace.
+> Starting September 18, 2023, Azure Monitor caps all billable data types
+> when the daily cap is met. There is no special behavior for any data types when [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers-select-plan.md) is enabled on your workspace.
> This change improves your ability to fully contain costs from higher-than-expected data ingestion.
-> If you have a Daily Cap set on your workspace which has Microsoft Defender for Servers,
+> If you have a daily cap set on a workspace that has Microsoft Defender for Servers enabled,
> be sure that the cap is high enough to accommodate this change.
-> Also, be sure to set an alert (see below) so that you are notified as soon as your Daily Cap is met.
+> Also, be sure to set an alert (see below) so that you are notified as soon as your daily cap is met.
Until September 18, 2023, if a workspace enabled the [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers-select-plan.md) solution after June 19, 2017, some security-related data types are collected for Microsoft Defender for Cloud or Microsoft Sentinel despite any daily cap configured. The following data types will be subject to this special exception from the daily cap: WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus, Update, UpdateSummary, CommonSecurityLog, and Syslog
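To adjust or verify the cap called out in the note above, the following Azure CLI command is a hedged sketch; the names and value are placeholders, and the `--quota` parameter is assumed to set the workspace daily cap in GB.

```azurecli
# Sketch: set the workspace daily cap to 50 GB/day (names are placeholders; --quota is assumed to set the daily cap in GB)
az monitor log-analytics workspace update --resource-group "resource-group-name" --workspace-name "workspace-name" --quota 50
```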
azure-monitor Log Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md
Although Azure Monitor uses the same KQL as Azure Data Explorer, there are some
### Other operators in Azure Monitor The following operators support specific Azure Monitor features and aren't available outside of Azure Monitor:
-* [app()](../logs/app-expression.md)
-* [resource()](./resource-expression.md)
-* [workspace()](../logs/workspace-expression.md)
+* [workspace()](../logs/cross-workspace-query.md#query-across-log-analytics-workspaces-using-workspace)
+* [app()](../logs/cross-workspace-query.md#query-across-classic-application-insights-applications-using-app)
+* [resource()](../logs/cross-workspace-query.md#correlate-data-between-resources-using-resource)
+ ## Next steps - Walk through a [tutorial on writing queries](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor).
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
A query that spans more than five workspaces is considered a query that consumes
> [!IMPORTANT] > - In some multi-workspace scenarios, the CPU and data measurements won't be accurate and will represent the measurement of only a few of the workspaces.
-> - Cross workspace queries having an explicit identifier: workspace ID, or workspace Azure Resource ID, consume less resources and are more performant. See [Gather identifiers for Log Analytics workspaces](./cross-workspace-query.md?tabs=workspace-identifier#gather-identifiers-for-log-analytics-workspaces-and-application-insights-resources)
+> - Cross-workspace queries that use an explicit identifier (workspace ID or workspace Azure Resource ID) consume fewer resources and are more performant.
## Parallelism Azure Monitor Logs uses large clusters of Azure Data Explorer to run queries. These clusters vary in scale and potentially get up to dozens of compute nodes. The system automatically scales the clusters according to workspace placement logic and capacity.
azure-monitor Resource Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-expression.md
- Title: resource() expression in Azure Monitor log query | Microsoft Docs
-description: The resource expression is used in a resource-centric Azure Monitor log query to retrieve data from multiple resources.
--- Previously updated : 08/06/2022---
-# resource() expression in Azure Monitor log query
-
-The `resource` expression is used in a Azure Monitor query [scoped to a resource](scope.md#query-scope) to retrieve data from other resources.
--
-## Syntax
-
-`resource(`*Identifier*`)`
-
-## Arguments
--- *Identifier*: Resource ID of a resource.-
-| Identifier | Description | Example
-|:|:|:|
-| Resource | Includes data for the resource. | resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcesgroups/myresourcegroup/providers/microsoft.compute/virtualmachines/myvm") |
-| Resource Group or Subscription | Includes data for the resource and all resources that it contains. | resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcesgroups/myresourcegroup) |
--
-## Notes
-
-* You must have read access to the resource.
--
-## Examples
-
-```Kusto
-union (Heartbeat),(resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcesgroups/myresourcegroup/providers/microsoft.compute/virtualmachines/myvm").Heartbeat) | summarize count() by _ResourceId, TenantId
-```
-```Kusto
-union (Heartbeat),(resource("/subscriptions/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcesgroups/myresourcegroup).Heartbeat) | summarize count() by _ResourceId, TenantId
-```
--
-## Next steps
--- See [Log query scope and time range in Azure Monitor Log Analytics](scope.md) for details on a query scope.-- Access full documentation for the [Kusto query language](/azure/kusto/query/).
azure-monitor Workspace Expression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-expression.md
- Title: workspace() expression in Azure Monitor log query | Microsoft Docs
-description: The workspace expression is used in an Azure Monitor log query to retrieve data from a specific workspace in the same resource group, another resource group, or another subscription.
--- Previously updated : 04/20/2023---
-# Using the workspace() expression in Azure Monitor log query
-
-Use the `workspace` expression in an Azure Monitor query to retrieve data from a specific workspace in the same resource group, another resource group, or another subscription. You can use this expression to include log data in an Application Insights query and to query data across multiple workspaces in a log query.
--
-## Syntax
-
-`workspace(`*Identifier*`)`
-
-### Arguments
-
-The `workspace` expression takes the following arguments.
-
-#### Identifier
-
-Identifies the workspace by using one of the formats in the following table.
-
-| Identifier | Description | Example
-|:|:|:|
-| ID | GUID of the workspace | workspace("00000000-0000-0000-0000-000000000000") |
-| Azure Resource ID | Identifier for the Azure resource | workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail") |
--
-> [!NOTE]
-> We strongly recommend identifying a workspace by its unique ID or Azure Resource ID because they remove ambiguity and are more performant.
-
-## Examples
-
-```Kusto
-workspace("00000000-0000-0000-0000-000000000000").Update | count
-```
-```Kusto
-workspace("/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Contoso/providers/Microsoft.OperationalInsights/workspaces/contosoretail").Event | count
-```
-```Kusto
-union
-( workspace("00000000-0000-0000-0000-000000000000").Heartbeat | where Computer == "myComputer"),
-(app("00000000-0000-0000-0000-000000000000").requests | where cloud_RoleInstance == "myRoleInstance")
-| count
-```
-```Kusto
-union
-(workspace("00000000-0000-0000-0000-000000000000").Heartbeat), (app("00000000-0000-0000-0000-000000000000").requests) | where TimeGenerated between(todatetime("2023-03-08 15:00:00") .. todatetime("2023-04-08 15:05:00"))
-```
-
-## Next steps
--- See the [app expression](./app-expression.md), which allows you to query across Application Insights applications.-- Read about how [Azure Monitor data](./log-query-overview.md) is stored.-- Access full documentation for the [Kusto query language](/azure/kusto/query/).
container-apps Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/logging.md
You can view the [log streams](log-streaming.md) in near real-time in the Azure
## Container console Logs
-Container Apps captures the `stdout` and `stderr` output streams from your application containers and displays them as console logs. When you implement logging in your application, you can troubleshoot problems and monitor the health of your app.
+Console logs originate from the `stderr` and `stdout` messages from the containers in your container app and Dapr sidecars. When you implement logging in your application, you can troubleshoot problems and monitor the health of your app.
++
+> [!TIP]
+> Instrumenting your code with well-defined log messages can help you to understand how your code is performing and to debug issues. To learn more about best practices, refer to [Design for operations](/azure/architecture/guide/design-principles/design-for-operations).
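For example, the console logs can be streamed with the Azure CLI; this is a hedged sketch that assumes the `containerapp` CLI extension is installed and uses placeholder names.

```azurecli
# Sketch: stream console logs from a container app in near real-time (names are placeholders)
az containerapp logs show --name "my-container-app" --resource-group "my-resource-group" --type console --follow

# --type system streams the system logs described in the next section instead
```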
## System logs
Container Apps generates system logs to inform you of the status of service leve
- Deactivating Old revisions - Error provisioning revision
+System logs emit the following messages:
+
+| Source | Type | Message |
+||||
+| Dapr | Info | Successfully created dapr component \<component-name\> with scope \<dapr-component-scope\> |
+| Dapr | Info | Successfully updated dapr component \<component-name\> with scope \<component-type\> |
+| Dapr | Error | Error creating dapr component \<component-name\> |
+| Volume Mounts | Info | Successfully mounted volume \<volume-name\> for revision \<revision-scope\> |
+| Volume Mounts | Error | Error mounting volume \<volume-name\> |
+| Domain Binding | Info | Successfully bound Domain \<domain\> to the container app \<container app name\> |
+| Authentication | Info | Auth enabled on app. Creating authentication config |
+| Authentication | Info | Auth config created successfully |
+| Traffic weight | Info | Setting a traffic weight of \<percentage\>% for revision \<revision-name\> |
+| Revision Provisioning | Info | Creating a new revision: \<revision-name\> |
+| Revision Provisioning | Info | Successfully provisioned revision \<name\> |
+| Revision Provisioning | Info | Deactivating Old revisions since 'ActiveRevisionsMode=Single' |
+| Revision Provisioning | Error | Error provisioning revision \<revision-name>. ErrorCode: \<[ErrImagePull]\|[Timeout]\|[ContainerCrashing]\> |
+ ## Next steps > [!div class="nextstepaction"]
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
EASM collects data for publicly exposed assets ("outside-in"). Defender for
- Learn about [cloud security explorer and attack paths](concept-attack-path.md) in Defender for Cloud. - Learn about [Defender EASM](../external-attack-surface-management/index.md).-- Learn how [deploy Defender for EASM](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md).
+- Learn how to [deploy Defender for EASM](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md).
defender-for-cloud Defender For Apis Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-deploy.md
Previously updated : 11/02/2023 Last updated : 12/03/2023 # Protect your APIs with Defender for APIs
defender-for-cloud Defender For Apis Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-introduction.md
Microsoft Defender for APIs is a plan provided by [Microsoft Defender for Cloud]
Defender for APIs helps you to gain visibility into business-critical APIs. You can investigate and improve your API security posture, prioritize vulnerability fixes, and quickly detect active real-time threats.
-> [!IMPORTANT]
-> Defender for APIs is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Defender for APIs currently provides security for APIs published in Azure API Management. Defender for APIs can be onboarded in the Defender for Cloud portal, or within the API Management instance in the Azure portal. ## What can I do with Defender for APIs?
defender-for-cloud Defender For Apis Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-manage.md
Previously updated : 11/02/2023 Last updated : 12/03/2023 # Manage your Defender for APIs deployment This article describes how to manage your [Microsoft Defender for APIs](defender-for-apis-introduction.md) plan deployment in Microsoft Defender for Cloud. Management tasks include offboarding APIs from Defender for APIs.
-Defender for APIs is currently in preview.
- ## Offboard an API 1. In the Defender for Cloud portal, select **Workload protections**.
defender-for-cloud Defender For Apis Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-prepare.md
Previously updated : 03/23/2023 Last updated : 12/03/2023 # Support and prerequisites for Defender for APIs deployment
-Review the requirements on this page before setting up [Microsoft Defender for APIs](defender-for-apis-introduction.md). Defender for APIs is currently in preview.
+Review the requirements on this page before setting up [Microsoft Defender for APIs](defender-for-apis-introduction.md).
## Cloud and region support
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Last updated 11/02/2023
# What is Microsoft Defender for Cloud?
-Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP) with a set of security measures and practices designed to protect cloud-based applications from various cyber threats and vulnerabilities. Defender for Cloud combines the capabilities of:
+Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP) that is made up of security measures and practices that are designed to protect cloud-based applications from various cyber threats and vulnerabilities. Defender for Cloud combines the capabilities of:
- A development security operations (DevSecOps) solution that unifies security management at the code level across multicloud and multiple-pipeline environments - A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches
Microsoft Defender for Cloud is a cloud-native application protection platform (
> [!NOTE] > For Defender for Cloud pricing information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-When you [enable Defender for Cloud on your](connect-azure-subscription.md), you'll automatically gain access to Microsoft 365 Defender.
+When you [enable Defender for Cloud](connect-azure-subscription.md), you automatically gain access to Microsoft 365 Defender.
-The Microsoft 365 Defender portal provides richer context to investigations that span cloud resources, devices, and identities. In addition, security teams are able to get the complete picture of an attack, including suspicious and malicious events that happen in their cloud environment, through the immediate correlation of all alerts and incidents, including cloud alerts and incidents.
+The Microsoft 365 Defender portal helps security teams investigate attacks across cloud resources, devices, and identities. Microsoft 365 Defender provides an overview of attacks, including suspicious and malicious events that occur in cloud environments. Microsoft 365 Defender accomplishes this goal by correlating all alerts and incidents, including cloud alerts and incidents.
You can learn more about the [integration between Microsoft Defender for Cloud and Microsoft 365 Defender](concept-integration-365.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
| November 22 | [Enable permissions management with Defender for Cloud (Preview)](#enable-permissions-management-with-defender-for-cloud-preview) | | November 22 | [Defender for Cloud integration with ServiceNow](#defender-for-cloud-integration-with-servicenow) | | November 20| [General Availability of the autoprovisioning process for SQL Servers on machines plan](#general-availability-of-the-autoprovisioning-process-for-sql-servers-on-machines-plan)|
+| November 15 | [General availability of Defender for APIs](#general-availability-of-defender-for-apis) |
| November 15 | [Defender for Cloud is now integrated with Microsoft 365 Defender (Preview)](#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview) | | November 15 | [General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#general-availability-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) | | November 15 | [Change to Container Vulnerability Assessments recommendation names](#change-to-container-vulnerability-assessments-recommendation-names) |
Agentless secret scanning enhances the security cloud based Virtual Machines (VM
We're announcing the General Availability (GA) of agentless secret scanning, which is included in both the [Defender for Servers P2](tutorial-enable-servers-plan.md) and the [Defender CSPM](tutorial-enable-cspm-plan.md) plans.
-Agentless secret scanning utilizes cloud APIs to capture snapshots of your disks, conducting out-of-band analysis that ensures that there is no effect on your VM's performance. Agentless secret scanning broadens the coverage offered by Defender for Cloud over cloud assets across Azure, AWS, and GCP environments to enhance your cloud security.
+Agentless secret scanning utilizes cloud APIs to capture snapshots of your disks, conducting out-of-band analysis that ensures that there's no effect on your VM's performance. Agentless secret scanning broadens the coverage offered by Defender for Cloud over cloud assets across Azure, AWS, and GCP environments to enhance your cloud security.
-With this release, Defender for Cloud's detection capabilities now support additional database types, data store signed URLs, access tokens, and more.
+With this release, Defender for Cloud's detection capabilities now support other database types, data store signed URLs, access tokens, and more.
Learn how to [manage secrets with agentless secret scanning](secret-scanning.md).
In preparation for the Microsoft Monitoring Agent (MMA) deprecation in August 20
Customers using the MMA autoprovisioning process are requested to [migrate to the new Azure Monitoring Agent for SQL server on machines autoprovisioning process](/azure/defender-for-cloud/defender-for-sql-autoprovisioning). The migration process is seamless and provides continuous protection for all machines.
+### General availability of Defender for APIs
+
+November 15, 2023
+
+We're announcing the General Availability (GA) of Microsoft Defender for APIs. Defender for APIs is designed to protect organizations against API security threats.
+
+Defender for APIs allows organizations to protect their APIs and data from malicious actors. Organizations can investigate and improve their API security posture, prioritize vulnerability fixes, and quickly detect and respond to active real-time threats. Organizations can also integrate security alerts directly into their Security Incident and Event Management (SIEM) platform, for example Microsoft Sentinel, to investigate and triage issues.
+
+You can learn how to [Protect your APIs with Defender for APIs](defender-for-apis-deploy.md). You can also learn more in [About Microsoft Defender for APIs](defender-for-apis-introduction.md).
+
+You can also read [this blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-announces-general-availability-of-defender-for-apis/ba-p/3981488) to learn more about the GA announcement.
+ ### Defender for Cloud is now integrated with Microsoft 365 Defender (Preview) November 15, 2023
Learn [how to identify and remediate attack paths](how-to-manage-attack-path.md)
November 15, 2023
-The attack path's Azure Resource Graph (ARG) table scheme is updated. The `attackPathType` property is removed and other properties are added. Read more about the [updated Azure Resource Graph table scheme]().
+The attack path's Azure Resource Graph (ARG) table schema is updated. The `attackPathType` property is removed and other properties are added.
### General Availability release of GCP support in Defender CSPM
defender-for-cloud Support Matrix Cloud Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md
Title: Support across Azure clouds description: Review Defender for Cloud features and plans supported across different clouds Previously updated : 05/01/2023 Last updated : 12/03/2023 # Defender for Cloud support for Azure commercial/other clouds
In the support table, **NA** indicates that the feature isn't available.
[DevOps security posture](concept-devops-environment-posture-management-overview.md) | Preview | NA | NA **DEFENDER FOR CLOUD PLANS** | | | [Defender CSPM](concept-cloud-security-posture-management.md)| GA | NA | NA
-[Defender for APIs](defender-for-apis-introduction.md). [Review support preview regions](defender-for-apis-prepare.md#cloud-and-region-support). | Preview | NA | NA
+[Defender for APIs](defender-for-apis-introduction.md). | GA | NA | NA
[Defender for App Service](defender-for-app-service-introduction.md) | GA | NA | NA [Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | GA | NA | NA [Defender for Azure SQL database servers](defender-for-sql-introduction.md) | GA | GA | GA<br/><br/>A subset of alerts/vulnerability assessments is available.<br/>Behavioral threat protection isn't available.
defender-for-cloud Tutorial Enable Storage Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-storage-plan.md
Last updated 08/21/2023
# Deploy Microsoft Defender for Storage
-Microsoft Defender for Storage is an Azure-native solution offering an advanced layer of intelligence for threat detection and mitigation in storage accounts, powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684), Microsoft Defender Antimalware technologies, and Sensitive Data Discovery. With protection for Azure Blob Storage, Azure Files, and Azure Data Lake Storage services, it provides a comprehensive alert suite, near real-time Malware Scanning (add-on), and sensitive data threat detection (no extra cost), allowing quick detection, triage, and response to potential security threats with contextual information. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption.
+Microsoft Defender for Storage is an Azure-native solution offering an advanced layer of intelligence for threat detection and mitigation in storage accounts, powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684), Microsoft Defender Antimalware technologies, and Sensitive Data Discovery. With protection for Azure Blob Storage, Azure Files, and Azure Data Lake Storage services, it provides a comprehensive alert suite, near real-time malware scanning (add-on), and sensitive data threat detection (no extra cost), allowing quick detection, triage, and response to potential security threats with contextual information. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption.
With Microsoft Defender for Storage, organizations can customize their protection and enforce consistent security policies by enabling it on subscriptions and storage accounts with granular control and flexibility.
With Microsoft Defender for Storage, organizations can customize their protectio
| Aspect | Details | ||| |Release state: | General Availability (GA) |
-| Feature availability: |- Activity monitoring (security alerts) ΓÇô General Availability (GA)<br>- Malware Scanning ΓÇô General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) ΓÇô Preview<br><br>Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud) to learn more. |
-|Required roles and permissions: | For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions. |
+| Feature availability: |- Activity monitoring (security alerts) ΓÇô General Availability (GA)<br>- Malware scanning ΓÇô General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) ΓÇô Preview<br><br>Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud) to learn more. |
+|Required roles and permissions: | For malware scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions. |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds*<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the classic plan)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts |
-*Azure DNS Zone is not supported for Malware Scanning and sensitive data threat detection.
+*Azure DNS Zone is not supported for malware scanning and sensitive data threat detection.
-## Prerequisites for Malware scanning
-To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md).
+## Prerequisites for malware scanning
+To enable and configure malware scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md).
## Set up and configure Microsoft Defender for Storage To enable and configure Microsoft Defender for Storage and ensure maximum protection and cost optimization, the following configuration options are available: - Enable/disable Microsoft Defender for Storage at the subscription and storage account levels.-- Enable/disable the Malware Scanning or sensitive data threat detection configurable features.-- Set a monthly cap ("capping") on the Malware Scanning per storage account per month to control costs (default value is 5,000GB).
+- Enable/disable the malware scanning or sensitive data threat detection configurable features.
+- Set a monthly cap ("capping") on the malware scanning per storage account per month to control costs (default value is 5,000 GB).
- Configure methods to set up response to malware scanning results. - Configure methods for saving malware scanning results logging.
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
You must provide following information to execute a full backup:
- HSM name or URL - Storage account name - Storage account blob storage container-- Storage container SAS token with permissions `crdw`
+- Storage container SAS token with permissions `crdw` (if storage account is not behind a private endpoint)
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
+### Prerequisites if the storage account is behind a private endpoint (preview)
+
+1. Ensure you have the latest CLI version installed.
+2. Create a user assigned managed identity.
+3. Create a storage account (or use an existing storage account).
+4. Enable Trusted service bypass on the storage account in the "Networking" tab, under "Exceptions."
+
+5. Provide 'Storage Blob Data Contributor' role access to the user-assigned managed identity created in step 2. Do this by going to the "Access Control" tab on the portal -> Add Role Assignment. Then select "Managed identity" and select the managed identity created in step 2 -> Review + Assign. (A CLI sketch of steps 2, 3, and 5 follows this list.)
+6. Create the Managed HSM and associate the managed identity with the command below.
+ ```azurecli-interactive
+ az keyvault create --hsm-name mhsmdemo2 -g mhsmrgname -l mhsmlocation --retention-days 7 --administrators "initialadmin" --mi-user-assigned "/subscriptions/subid/resourcegroups/mhsmrgname/providers/Microsoft.ManagedIdentity/userAssignedIdentities/userassignedidentitynamefromstep2"
+ ```
+7. If you have an existing Managed HSM, associate the managed identity by updating the Managed HSM with the command below.
+ ```azurecli-interactive
+ az keyvault update-hsm --hsm-name mhsmdemo2 -g mhsmrgname --mi-user-assigned "/subscriptions/subid/resourcegroups/mhsmrgname/providers/Microsoft.ManagedIdentity/userAssignedIdentities/userassignedidentitynamefromstep2"
+ ```
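The following is a hedged CLI sketch of steps 2, 3, and 5 above; the resource group, identity, and storage account names mirror the placeholders used elsewhere in this article and are assumptions.

```azurecli-interactive
# Step 2 (sketch): create a user-assigned managed identity
az identity create --resource-group mhsmrgname --name userassignedidentityname

# Step 3 (sketch): create a storage account to hold the backups
az storage account create --resource-group mhsmrgname --name mhsmdemobackup

# Step 5 (sketch): grant the identity 'Storage Blob Data Contributor' on the storage account
principalId=$(az identity show --resource-group mhsmrgname --name userassignedidentityname --query principalId --output tsv)
az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --role "Storage Blob Data Contributor" --scope "/subscriptions/subid/resourceGroups/mhsmrgname/providers/Microsoft.Storage/storageAccounts/mhsmdemobackup"
```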
+ ## Full backup Backup is a long running operation but will immediately return a Job ID. You can check the status of backup process using this Job ID. The backup process creates a folder inside the designated container with a following naming pattern **`mhsm-{HSM_NAME}-{YYYY}{MM}{DD}{HH}{mm}{SS}`**, where HSM_NAME is the name of managed HSM being backed up and YYYY, MM, DD, HH, MM, mm, SS are the year, month, date, hour, minutes, and seconds of date/time in UTC when the backup command was received. While the backup is in progress, the HSM might not operate at full throughput as some HSM partitions will be busy performing the backup operation.
-> [!IMPORTANT]
-> Public internet access must **not** be blocked from the storage accounts being used to backup or restore resources.
-
+### Backup HSM when storage account is behind a private endpoint (preview)
+```azurecli-interactive
+az keyvault backup start --use-managed-identity true --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer
+ ```
+### Backup HSM when storage account is not behind a private endpoint
```azurecli-interactive # time for 500 minutes later for SAS token expiry
sas=$(az storage container generate-sas -n mhsmdemobackupcontainer --account-nam
# Backup HSM az keyvault backup start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --storage-container-SAS-token $sas --subscription 361da5d4-a47a-4c79-afdd-d66f684f4070+ ``` ## Full restore
You must provide the following information to execute a full restore:
- HSM name or URL - Storage account name - Storage account blob container-- Storage container SAS token with permissions `rl`
+- Storage container SAS token with permissions `rl` (if storage account is not behind a private endpoint)
- Storage container folder name where the source backup is stored Restore is a long running operation but will immediately return a Job ID. You can check the status of the restore process using this Job ID. When the restore process is in progress, the HSM enters a restore mode and all data plane command (except check restore status) are disabled.
+### Restore HSM when storage account is behind a private endpoint (preview)
+```azurecli-interactive
+az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --backup-folder mhsm-backup-foldername --use-managed-identity true
+ ```
+### Restore HSM when storage account is not behind a private endpoint
+ ```azurecli-interactive
-#### time for 500 minutes later for SAS token expiry
+# time for 500 minutes later for SAS token expiry
end=$(date -u -d "500 minutes" '+%Y-%m-%dT%H:%MZ')
skey=$(az storage account keys list --query '[0].value' -o tsv --account-name mh
# Generate a container sas token
sas=$(az storage container generate-sas -n mhsmdemobackupcontainer --account-name mhsmdemobackup --permissions rl --expiry $end --account-key $skey -o tsv --subscription a1ba9aaa-b7f6-4a33-b038-6e64553a6c7b)
-```
-## Restore HSM
+# Restore HSM
-```
az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --storage-container-SAS-token $sas --backup-folder mhsm-mhsmdemo-2020083120161860
```
az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemoba
Selective key restore allows you to restore one individual key with all its key versions from a previous backup to an HSM.
+### Selective key restore when the storage account is behind a private endpoint (preview)
+```
+az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --backup-folder mhsm-backup-foldername --use-managed-identity true --key-name rsa-key2
+ ```
+
+### Selective key restore when the storage account isn't behind a private endpoint
```
az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --storage-container-SAS-token $sas --backup-folder mhsm-mhsmdemo-2020083120161860 --key-name rsa-key2
```
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
# Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
-# Register and authorize spacecraft
+# Create and authorize a spacecraft resource
To contact a satellite, it must be registered and authorized as a spacecraft resource with Azure Orbital Ground Station.
To contact a satellite, it must be registered and authorized as a spacecraft res
- [KSAT Lite](https://azuremarketplace.microsoft.com/marketplace/apps/kongsbergsatelliteservicesas1657024593438.ksatlite?exp=ubp8&tab=Overview) - [Viasat RTE](https://azuremarketplace.microsoft.com/marketplace/apps/viasatinc1628707641775.viasat-real-time-earth?tab=overview)
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://aka.ms/orbital/portal).
-
## Create a spacecraft resource

Create a [spacecraft resource](spacecraft-object.md) as a representation of your satellite in Azure.
-1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. In the **Spacecraft** page, select Create.
-3. In **Create spacecraft resource**, enter or select this information in the Basics tab:
+### Azure portal method
+1. Sign in to the [Azure portal](https://aka.ms/orbital/portal).
+2. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+3. In the **Spacecraft** page, click **Create**.
+4. In **Create spacecraft resource**, enter or select this information in the Basics tab:
| **Field** | **Value** |
| | |
Create a [spacecraft resource](spacecraft-object.md) as a representation of your
| TLE line 2 | Enter TLE line 2 |

> [!NOTE]
- > TLE stands for Two-Line Element.
- > Be sure to update this TLE value before you schedule a contact. A TLE that is more than two weeks old might result in an unsuccessful downlink.
+ > TLE stands for [Two-Line Element](spacecraft-object.md#ephemeris).
+ > Be sure to update this TLE value before you schedule a contact. A TLE that's more than two weeks old might result in an unsuccessful downlink.
> [!NOTE]
> Spacecraft resources can be created in any Azure region with a Microsoft ground station and can schedule contacts on any ground station. Current eligible Azure regions are West US 2, Sweden Central, Southeast Asia, Brazil South, and South Africa North.

:::image type="content" source="media/orbital-eos-register-bird.png" alt-text="Register Spacecraft Resource Page" lightbox="media/orbital-eos-register-bird.png":::
-4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-5. In the **Links** page, enter or select this information:
+5. Click the **Links** tab, or click the **Next: Links** button at the bottom of the page.
+6. In the **Links** page, enter or select this information:
| **Field** | **Value** |
| | |
Create a [spacecraft resource](spacecraft-object.md) as a representation of your
:::image type="content" source="media/orbital-eos-register-links.png" alt-text="Spacecraft Links Resource Page" lightbox="media/orbital-eos-register-links.png":::
-6. Select the **Review + create** tab, or select the **Review + create** button.
-7. Select **Create**
+7. Click the **Review + create** tab, or click the **Review + create** button.
+8. Click **Create**.
+
+### API method
+
+Use the Spacecrafts REST Operation Group to [create a spacecraft resource](/rest/api/orbital/azureorbitalgroundstation/spacecrafts/create-or-update/) in the Azure Orbital Ground Station API.
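For orientation only, the following `az rest` sketch shows roughly what such a call could look like. The api-version, region, and body property names are placeholders/assumptions; validate them against the linked Spacecrafts REST reference before use.

```azurecli-interactive
# Hypothetical sketch: create or update a spacecraft resource through the REST API
# (api-version and property names are assumptions; check the Spacecrafts REST reference)
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Orbital/spacecrafts/<spacecraft-name>?api-version=<api-version>" \
  --body '{
    "location": "<azure-region>",
    "properties": {
      "noradId": "<norad-id>",
      "titleLine": "<satellite-name>",
      "tleLine1": "<TLE line 1>",
      "tleLine2": "<TLE line 2>",
      "links": []
    }
  }'
```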
## Request authorization of the new spacecraft resource
-Submit a spacecraft authorization request in order to schedule [contacts](concepts-contact.md) with your new spacecraft resource at applicable ground station sites.
+Submit a spacecraft authorization request in order to schedule [contacts](concepts-contact.md) with your new spacecraft resource at applicable ground station sites.
> [!NOTE]
> A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required to submit a spacecraft authorization request.
Submit a spacecraft authorization request in order to schedule [contacts](concep
> > **Public spacecraft**: licensing is not required for authorization. The Azure Orbital Ground Station service supports several public satellites including Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra.
-1. Navigate to the newly created spacecraft resource's overview page.
-2. Select **New support request** in the Support + troubleshooting section of the left-hand blade.
-3. In the **New support request** page, enter or select this information in the Basics tab:
+1. Sign in to the [Azure portal](https://aka.ms/orbital/portal).
+2. Navigate to the newly created spacecraft resource's overview page.
+3. Click **New support request** in the Support + troubleshooting section of the left-hand blade.
+4. In the **New support request** page, enter or select the following information in the Basics tab:
| **Field** | **Value** |
| | |
Submit a spacecraft authorization request in order to schedule [contacts](concep
| Problem type | Select **Spacecraft Management and Setup** |
| Problem subtype | Select **Spacecraft Registration** |
-4. Select the Details tab at the top of the page
-5. In the Details tab, enter this information in the Problem details section:
+4. Click the **Details** tab at the top of the page.
+5. In the **Details** tab, enter the following information in the **Problem details** section:
| **Field** | **Value** |
| | |
Submit a spacecraft authorization request in order to schedule [contacts](concep
| File upload | Upload any **pertinent licensing material**, if applicable. |

6. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
-7. Select the **Review + create** tab, or select the **Review + create** button.
-8. Select **Create**.
+7. Click the **Review + create** tab, or click the **Review + create** button.
+8. Click **Create**.
-After the spacecraft authorization request is submitted, the Azure Orbital Ground Station team will review the request and authorize the spacecraft resource at relevant ground stations according to the licenses. Authorization requests for public satellites will be quickly approved.
+After the spacecraft authorization request is submitted, the Azure Orbital Ground Station team reviews the request and authorizes the spacecraft resource at relevant ground stations according to the licenses. Authorization requests for public satellites will be quickly approved.
## Confirm spacecraft is authorized
-1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. In the **Spacecraft** page, select the newly registered spacecraft.
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. In the **Spacecraft** page, click the newly registered spacecraft.
3. In the new spacecraft's overview page, check that the **Authorization status** shows **Allowed**.

## Next steps
orbital Schedule Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/schedule-contact.md
# Schedule a contact
-Schedule a contact with your satellite for data retrieval and delivery on Azure Orbital Ground Station. At the scheduled time, the selected ground station will contact the satellite and start data retrieval/delivery using the designated contact profile.
+Schedule a contact with your satellite for data retrieval and delivery on Azure Orbital Ground Station. At the scheduled time, the selected ground station will contact the spacecraft and start data retrieval/delivery using the designated contact profile. Learn more about [contact resources](concepts-contact.md).
+
+Contacts are created on a per-pass and per-site basis. If you already know the pass timings for your spacecraft and desired ground station, you can directly proceed to schedule the pass with these times. The service will succeed in creating the contact resource if the window is available and fail if the window is unavailable.
+
+If you don't know your spacecraft's pass timings or which ground station sites are available, you can use the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/) to query for available contact opportunities and use the results to schedule your passes accordingly.
+
+| Method | List available contacts | Schedule contacts | Notes |
+|-|-|-|-|
+|Portal| Yes | Yes | Custom pass timings aren't supported. You must use the results from the query. |
+|API | Yes | Yes | Custom pass timings are supported. |
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Contributor permissions at the subscription level.
- A registered and authorized spacecraft. Learn more on how to [register a spacecraft](register-spacecraft.md).
- A contact profile. Learn more on how to [configure a contact profile](contact-profile.md).
-
-## Sign in to Azure
-
-Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
+- A registered and authorized spacecraft resource. Learn more on how to [register a spacecraft](register-spacecraft.md).
+- A contact profile with links in accordance with the spacecraft resource above. Learn more on how to [configure a contact profile](contact-profile.md).
-## Select an available contact
+## Azure portal method
-1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. In the **Spacecraft** page, select the spacecraft for the contact.
-3. Select **Schedule contact** on the top bar of the spacecraft's overview.
+1. Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
+2. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+3. In the **Spacecraft** page, select the spacecraft for the contact.
+4. Select **Schedule contact** on the top bar of the spacecraft's overview.
:::image type="content" source="media/orbital-eos-schedule.png" alt-text="Schedule a contact at spacecraft resource page" lightbox="media/orbital-eos-schedule.png":::
-4. In the **Schedule contact** page, specify this information from the top of the page:
+5. In the **Schedule contact** page, specify this information from the top of the page:
| **Field** | **Value** |
| | |
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
:::image type="content" source="media/orbital-eos-schedule-search.png" alt-text="Search for available contact schedules page" lightbox="media/orbital-eos-schedule-search.png":::
-5. Select **Search** to view available contact times.
-6. Select one or more contact windows and select **Schedule**.
+6. Select **Search** to view available contact times.
+7. Select one or more contact windows and select **Schedule**.
:::image type="content" source="media/orbital-eos-select-schedule.png" alt-text="Select an available contact schedule page" lightbox="media/orbital-eos-select-schedule.png":::
-7. View scheduled contacts by selecting the spacecraft page, and navigating to **Contacts**.
+8. View scheduled contacts by selecting the spacecraft page, and navigating to **Contacts**.
:::image type="content" source="media/orbital-eos-view-scheduled-contacts.png" alt-text="View scheduled contacts page" lightbox="media/orbital-eos-view-scheduled-contacts.png":::
-## Cancel a contact
+## API method
-To cancel a scheduled contact, you must delete the contact resource. Learn more at [contact resource](concepts-contact.md).
+Use the Contacts REST Operation Group to [create a contact](/rest/api/orbital/azureorbitalgroundstation/contacts/create/) with the Azure Orbital Ground Station API.
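As a rough, hedged sketch of that call (it assumes the contact is created as a child of the spacecraft resource, and the api-version and body property names are placeholders to validate against the linked Contacts REST reference):

```azurecli-interactive
# Hypothetical sketch: create (schedule) a contact through the REST API
# (resource path, api-version, and property names are assumptions; check the Contacts REST reference)
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Orbital/spacecrafts/<spacecraft-name>/contacts/<contact-name>?api-version=<api-version>" \
  --body '{
    "properties": {
      "groundStationName": "<ground-station>",
      "reservationStartTime": "<pass-start-time-utc>",
      "reservationEndTime": "<pass-end-time-utc>",
      "contactProfile": { "id": "<contact-profile-resource-id>" }
    }
  }'
```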
## Next steps
orbital Spacecraft Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/spacecraft-object.md
Learn about how you can represent your spacecraft details in Azure Orbital Ground Station.
-## Spacecraft details
+## Spacecraft parameters
The spacecraft resource captures three types of information:
+- **Ephemeris** - The latest spacecraft TLE to predict the position and velocity of the satellite.
+- **Links** - RF details on center frequency, bandwidth, direction, and polarization for each link.
+- **Authorizations** - Regulatory authorizations are held on a per-link, per-site basis.
-- **Links** - RF details on center frequency, direction, and bandwidth for each link.
-- **Ephemeris** - The latest spacecraft TLE.
-- **Licensing** - Authorizations are held on a per-link, per-site basis.
+### Ephemeris
+
+The spacecraft ephemeris is captured in Azure Orbital Ground Station using a Two-Line Element, or TLE.
+
+A TLE is associated with the spacecraft to determine contact opportunities at the time of scheduling. The TLE is also used to determine the path the antenna must follow during the contact as the spacecraft passes over the ground station during contact execution.
+
+As TLEs are prone to expiration, the user must keep the TLE up-to-date using the [TLE update](update-tle.md) procedure. A TLE that is more than two weeks old might result in an unsuccessful contact.
### Links
Make sure to capture each link that you wish to use with Azure Orbital Ground St
Dual polarization schemes are represented by two individual links with their respective LHCP and RHCP polarizations.
-### Ephemeris
-
-The spacecraft ephemeris is captured in Azure Orbital Ground Station using the Two-Line Element, or TLE.
-
-A TLE is associated with the spacecraft to determine contact opportunities at the time of scheduling. The TLE is also used to determine the path the antenna must follow during the contact as the spacecraft passes over the ground station during contact execution.
-
-As TLEs are prone to expiration, the user must keep the TLE up-to-date using the [TLE update](update-tle.md) procedure. A TLE that is more than two weeks old might result in an unsuccessful contact.
-
-### Licensing
+### Authorizations
In order to uphold regulatory requirements across the world, the spacecraft resource contains authorizations for specific links and sites that permit usage of the Azure Orbital Ground Station sites. The platform will deny scheduling or execution of contacts if the spacecraft resource links aren't authorized. The platform will also deny contact if a profile contains links that aren't included in the spacecraft resource authorized links.
-For more information, see the [spacecraft authorization and ground station licensing](register-spacecraft.md) documentation.
-
-## Create spacecraft resource
-
-For more information on how to create a spacecraft resource, see the details listed in the [register a spacecraft](register-spacecraft.md) article.
+Learn how to [initiate ground station licensing](initiate-licensing.md) and [authorize a spacecraft resource](register-spacecraft.md).
-## Managing spacecraft resources
+## Create a spacecraft resource
-Spacecraft resources can be created and deleted via the Portal and Azure Orbital Ground Station APIs. Once the resource is created, modification to the resource is dependent on the authorization status.
+Learn how to [create and authorize a spacecraft resource](register-spacecraft.md) in the Azure portal or Azure Orbital Ground Station API.
-When the spacecraft is unauthorized, then the spacecraft resource can be modified. The API is the best way to make changes to the spacecraft resource as the Portal only allows TLE updates.
+## Modify or delete spacecraft resources
-Once the spacecraft is authorized, TLE updates are the only modifications possible. Other fields, such as links, become immutable. The TLE updates are possible via the Portal and Orbital API.
+Spacecraft resources can be modified and deleted via the Azure portal or the Azure Orbital Ground Station API. Once the spacecraft resource is created, modification to the resource is dependent on the authorization status:
+- When the spacecraft is **unauthorized**, the spacecraft resource can be modified. The [API](/rest/api/orbital/azureorbitalgroundstation/spacecrafts/create-or-update/) is the recommended way to update the spacecraft resource as the [Azure portal](https://aka.ms/orbital/portal) only allows for TLE updates.
+- After the spacecraft is **authorized**, [TLE updates](update-tle.md) are the only modifications possible. Other fields, such as links, become immutable.
-## Delete spacecraft resource
+To delete the spacecraft resource, you must first delete all scheduled contacts associated with that spacecraft resource. See [contact resource](concepts-contact.md) for more information.
-You can delete the spacecraft resource via the Azure portal or the Azure Orbital Ground Station API. You must first delete all scheduled contacts associated with that spacecraft resource. See [contact resource](concepts-contact.md) for more information.
+To delete a spacecraft via the [Azure portal](https://aka.ms/orbital/portal), navigate to the spacecraft resource. Click 'Overview' on the left panel, then click 'Delete.' Alternatively, use the Spacecrafts REST Operation Group to [delete a spacecraft](/rest/api/orbital/azureorbitalgroundstation/spacecrafts/delete/) with the Azure Orbital Ground Station API.
## Next steps

-- [Register a spacecraft](register-spacecraft.md)
+- [Create and authorize a spacecraft resource](register-spacecraft.md)
postgresql Concepts Scaling Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md
Last updated 12/01/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL = Flexible Server supports both vertical and horizontal scaling options.
+Azure Database for PostgreSQL Flexible Server supports both **vertical** and **horizontal** scaling options.
-You scale vertically by adding more resources to the Flexible server instance, such as increasing the instance-assigned number of CPUs and memory. Network throughput of your instance depends on the values you choose for CPU and memory. Once a Flexible server instance is created, you can independently change the CPU (vCores), the amount of storage, and the backup retention period. The number of vCores can be scaled up or down. The storage size however can only be increased. In addition, uou can scale the backup retention period up or down from 7 to 35 days. The resources can be scaled using multiple tools for instance [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md).
+You can scale **vertically** by adding more resources to the Flexible server instance, such as increasing the number of vCPUs and memory assigned to the instance. Network throughput of your instance depends on the values you choose for CPU and memory. Once a Flexible server instance is created, you can independently change the CPU (vCores), the amount of storage, and the backup retention period. The number of vCores can be scaled up or down. The storage size, however, can only be increased. In addition, you can scale the backup retention period up or down from 7 to 35 days. The resources can be scaled using multiple tools, for instance, the [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md).
> [!NOTE]
> After you increase the storage size, you can't go back to a smaller storage size.
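For example, a vertical scale operation from the Azure CLI could look like the following sketch; the server name, resource group, and target values are placeholders:

```azurecli-interactive
# Scale the compute tier/SKU, storage, and backup retention of an existing flexible server
az postgres flexible-server update \
  --resource-group <resource-group> \
  --name <server-name> \
  --tier GeneralPurpose \
  --sku-name Standard_D4s_v3 \
  --storage-size 256 \
  --backup-retention 14
```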
-You scale horizontally by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate flexible server instance without affecting the performance and availability of the primary instance.
+You scale **horizontally** by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate flexible server instances without affecting the performance and availability of the primary instance.
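A horizontal scale-out with a read replica can be sketched like this (names are placeholders):

```azurecli-interactive
# Create a read replica of an existing flexible server
az postgres flexible-server replica create \
  --resource-group <resource-group> \
  --replica-name <replica-server-name> \
  --source-server <primary-server-name>
```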
-When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. During this time the system switches over to the new server type, no new connections can be established, and all uncommitted transactions will be rolled back. The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restarts typically takes a minute or less but it can be higher and can take several minutes, depending on transactional activity at the time of the restart. Scaling the storage does not require a server restart in most cases. Similarly, backup retention period changes is an online operation. To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server.
+When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. While the system switches over to the new server type, no new connections can be established, and all uncommitted transactions are rolled back. The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restarts typically take a minute or less, but they can take several minutes depending on transactional activity at the time of the restart. Scaling the storage doesn't require a server restart in most cases. Similarly, backup retention period changes are an online operation. To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server.
## Near-zero downtime scaling
-Near-zero Downtime Scaling is a feature designed to minimize downtime when modifying storage and compute tiers. If you modify the number of vCores or change the compute tier, the server undergoes a restart to apply the new configuration. During this transition to the new server, no new connections can be established. Typically, this process with regular scaling could take anywhere between 2 to 10 minutes. However, with the new 'Near-zero downtime' Scaling feature this duration has been reduced to less than 30 seconds. This significant reduction in downtime during scaling resources, that greatly improves the overall availability of your database instance.
+Near-zero Downtime Scaling is a feature designed to minimize downtime when modifying storage and compute tiers. If you modify the number of vCores or change the compute tier, the server undergoes a restart to apply the new configuration. During this transition to the new server, no new connections can be established. Typically, this process with regular scaling could take anywhere between 2 to 10 minutes. With the new near-zero downtime scaling feature, this duration is reduced to less than 30 seconds. This significant reduction in downtime when scaling resources greatly improves the overall availability of your database instance.
### How it works
-When updating your Flexible server in scaling scenarios, we create a new copy of your server (VM) with the updated configuration, synchronize it with your current one, briefly switch to the new copy with a 30-second interruption, and retire the old server, all at no extra cost to you. This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers, and the experience remains consistent for both (HA) and non-HA servers. This feature is enabled in all Azure regions and there is **no customer action required** to use this capability.
+When updating your Flexible server in scaling scenarios, we create a new copy of your server (VM) with the updated configuration, synchronize it with your current one, briefly switch to the new copy with a 30-second interruption, and retire the old server, all at no extra cost to you. This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers, and the experience remains consistent for both HA and non-HA servers. This feature is enabled in all Azure regions and there's **no customer action required** to use this capability.
> [!NOTE] > Near-zero downtime scaling process is the _default_ operation. However, in cases where the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near-zero downtime scaling.
-#### Pre-requisites
-- In order for near-zero downtime scaling to work, you should enable all inbound/outbound connections between the IPs in the delegated subnet. If these are not enabled near zero downtime scaling process will not work and scaling will occur through the standard scaling workflow.
+#### Prerequisites
+- In order for near-zero downtime scaling to work, you should enable all inbound/outbound connections between the IPs in the delegated subnet. If these aren't enabled, the near-zero downtime scaling process won't work and scaling occurs through the standard scaling workflow.
#### Limitations

-- Near-zero Downtime Scaling will not work if there are regional capacity constraints or quota limits on customer subscriptions.
+- Near-zero Downtime Scaling won't work if there are regional capacity constraints or quota limits on customer subscriptions.
- Near-zero Downtime Scaling doesn't work for replica servers, but it supports the primary server. Replica servers automatically go through the regular scaling process.
-- Near-zero Downtime Scaling will not work if a VNET injected Server with delegated subnet does not have sufficient usable IP addresses. If you have a standalone server, one additional IP address is necessary, and for a HA-enabled server, two extra IP addresses are required.
+- Near-zero Downtime Scaling won't work if a virtual network injected server with a delegated subnet doesn't have sufficient usable IP addresses. If you have a standalone server, one extra IP address is necessary, and for an HA-enabled server, two extra IP addresses are required.
## Related content
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
You can use Azure Repos to store your configuration files. Azure Pipelines provi
To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Microsoft Entra ID. To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either sign in or create a new account.
-To configure Azure DevOps for SAP Deployment Automation Framework, see [Configure Azure DevOps for SAP Deployment Automation Framework](configure-devops.md).
+## Create the SAP Deployment Automation Framework environment with Azure DevOps
+
+You can use the following script to do a basic installation of Azure DevOps Services for SAP Deployment Automation Framework.
+
+Open PowerShell ISE and copy the following script and update the parameters to match your environment.
+
+```powershell
+ $Env:SDAF_ADO_ORGANIZATION = "https://dev.azure.com/ORGANIZATIONNAME"
+ $Env:SDAF_ADO_PROJECT = "SAP Deployment Automation Framework"
+ $Env:SDAF_CONTROL_PLANE_CODE = "MGMT"
+ $Env:SDAF_WORKLOAD_ZONE_CODE = "DEV"
+ $Env:SDAF_ControlPlaneSubscriptionID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ $Env:SDAF_WorkloadZoneSubscriptionID = "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
+ $Env:ARM_TENANT_ID="zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"
+
+ $UniqueIdentifier = Read-Host "Please provide an identifier that makes the service principal names unique, for instance a project code"
+
+ $confirmation = Read-Host "Do you want to create a new Application registration (needed for the Web Application) y/n?"
+ if ($confirmation -eq 'y') {
+ $Env:SDAF_APP_NAME = $UniqueIdentifier + " SDAF Control Plane"
+ }
+
+ else {
+ $Env:SDAF_APP_NAME = Read-Host "Please provide the Application registration name"
+ }
+
+ $confirmation = Read-Host "Do you want to create a new Service Principal for the Control plane y/n?"
+ if ($confirmation -eq 'y') {
+ $Env:SDAF_MGMT_SPN_NAME = $UniqueIdentifier + " SDAF " + $Env:SDAF_CONTROL_PLANE_CODE + " SPN"
+ }
+ else {
+ $Env:SDAF_MGMT_SPN_NAME = Read-Host "Please provide the Control Plane Service Principal Name"
+ }
+
+ $confirmation = Read-Host "Do you want to create a new Service Principal for the Workload zone y/n?"
+ if ($confirmation -eq 'y') {
+ $Env:SDAF_WorkloadZone_SPN_NAME = $UniqueIdentifier + " SDAF " + $Env:SDAF_WORKLOAD_ZONE_CODE + " SPN"
+ }
+ else {
+ $Env:SDAF_WorkloadZone_SPN_NAME = Read-Host "Please provide the Workload Zone Service Principal Name"
+ }
+
+ if ( $PSVersionTable.Platform -eq "Unix") {
+    if ( Test-Path "SDAF") {
+      # The SDAF directory already exists; reuse it so Set-Location below receives a valid path
+      $sdaf_path = "SDAF"
+    }
+ else {
+ $sdaf_path = New-Item -Path "SDAF" -Type Directory
+ }
+ }
+ else {
+ $sdaf_path = Join-Path -Path $Env:HOMEDRIVE -ChildPath "SDAF"
+ if ( Test-Path $sdaf_path) {
+ }
+ else {
+ New-Item -Path $sdaf_path -Type Directory
+ }
+ }
+
+ Set-Location -Path $sdaf_path
+
+ if ( Test-Path "New-SDAFDevopsProject.ps1") {
+ remove-item .\New-SDAFDevopsProject.ps1
+ }
+
+ Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/New-SDAFDevopsProject.ps1 -OutFile .\New-SDAFDevopsProject.ps1 ; .\New-SDAFDevopsProject.ps1
+
+```
+
+Run the script and follow the instructions. The script opens browser windows for authentication and for performing tasks in the Azure DevOps project.
+
+You can choose to either run the code directly from GitHub or you can import a copy of the code into your Azure DevOps project.
+
+To confirm that the project was created, go to the Azure DevOps portal and select the project. Ensure that the repo was populated and that the pipelines were created.
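If you prefer to verify from the command line, a quick sketch with the Azure DevOps CLI extension (assuming the organization URL and project name used in the script above) is:

```azurecli-interactive
# Requires the azure-devops CLI extension; shows the project if it was created successfully
az devops project show --project "SAP Deployment Automation Framework" --organization "https://dev.azure.com/ORGANIZATIONNAME"
```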
+
+> [!IMPORTANT]
+> Run the following steps on your local workstation. Also ensure that you have the latest Azure CLI installed by running the `az upgrade` command.
++
+For more information on how to configure Azure DevOps for SAP Deployment Automation Framework, see [Configure Azure DevOps for SAP Deployment Automation Framework](configure-devops.md).
## Create the SAP Deployment Automation Framework environment without Azure DevOps
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Data connectors are available as part of the following offerings:
- [Cybersixgill Actionable Alerts (using Azure Functions)](data-connectors/cybersixgill-actionable-alerts-using-azure-functions.md)
+## Cyborg Security, Inc.
+
+- [Cyborg Security HUNTER Hunt Packages](data-connectors/cyborg-security-hunter-hunt-packages.md)
+
## Cynerio

- [Cynerio Security Events](data-connectors/cynerio-security-events.md)
Data connectors are available as part of the following offerings:
- [Darktrace Connector for Microsoft Sentinel REST API](data-connectors/darktrace-connector-for-microsoft-sentinel-rest-api.md)
+## Dataminr, Inc.
+
+- [Dataminr Pulse Alerts Data Connector (using Azure Functions)](data-connectors/dataminr-pulse-alerts-data-connector-using-azure-functions.md)
+
## Darktrace plc

- [AI Analyst Darktrace](data-connectors/ai-analyst-darktrace.md)

## Defend Limited
+- [Atlassian Beacon Alerts](data-connectors/atlassian-beacon-alerts.md)
- [Cortex XDR - Incidents](data-connectors/cortex-xdr-incidents.md)

## Delinea Inc.
Data connectors are available as part of the following offerings:
- [Fortinet](data-connectors/fortinet.md)
+## Gigamon, Inc
+
+- [Gigamon AMX Data Connector](data-connectors/gigamon-amx-data-connector.md)
+
## GitLab

- [GitLab](data-connectors/gitlab.md)
Data connectors are available as part of the following offerings:
- [Google Cloud Platform IAM (using Azure Functions)](data-connectors/google-cloud-platform-iam-using-azure-functions.md)
- [Google Workspace (G Suite) (using Azure Functions)](data-connectors/google-workspace-g-suite-using-azure-functions.md)
+## Greynoise Intelligence, Inc.
+
+- [GreyNoise Threat Intelligence (using Azure Functions)](data-connectors/greynoise-threat-intelligence-using-azure-functions-using-azure-functions.md)
+
## H.O.L.M. Security Sweden AB

- [Holm Security Asset Data (using Azure Functions)](data-connectors/holm-security-asset-data-using-azure-functions.md)
Data connectors are available as part of the following offerings:
- [[Recommended] Forcepoint CASB via AMA](data-connectors/recommended-forcepoint-casb-via-ama.md)
- [[Recommended] Forcepoint CSG via AMA](data-connectors/recommended-forcepoint-csg-via-ama.md)
- [[Recommended] Forcepoint NGFW via AMA](data-connectors/recommended-forcepoint-ngfw-via-ama.md)
+- [Barracuda CloudGen Firewall](data-connectors/barracuda-cloudgen-firewall.md)
- [Exchange Security Insights Online Collector (using Azure Functions)](data-connectors/exchange-security-insights-online-collector-using-azure-functions.md)
- [Forcepoint DLP](data-connectors/forcepoint-dlp.md)
- [MISP2Sentinel](data-connectors/misp2sentinel.md)
+## Mimecast North America
+
+- [Mimecast Audit & Authentication (using Azure Functions)](data-connectors/mimecast-audit-authentication-using-azure-functions.md)
+- [Mimecast Intelligence for Microsoft - Microsoft Sentinel (using Azure Functions)](data-connectors/mimecast-intelligence-for-microsoft-microsoft-sentinel-using-azure-functions.md)
+- [Mimecast Secure Email Gateway (using Azure Functions)](data-connectors/mimecast-secure-email-gateway-using-azure-functions.md)
+- [Mimecast Targeted Threat Protection (using Azure Functions)](data-connectors/mimecast-targeted-threat-protection-using-azure-functions.md)
+
## MongoDB

- [MongoDB Audit](data-connectors/mongodb-audit.md)
Data connectors are available as part of the following offerings:
- [NXLog AIX Audit](data-connectors/nxlog-aix-audit.md)
- [NXLog BSM macOS](data-connectors/nxlog-bsm-macos.md)
- [NXLog DNS Logs](data-connectors/nxlog-dns-logs.md)
+- [NXLog FIM](data-connectors/nxlog-fim.md)
- [NXLog LinuxAudit](data-connectors/nxlog-linuxaudit.md)

## Okta
sentinel Atlassian Beacon Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/atlassian-beacon-alerts.md
+
+ Title: "Atlassian Beacon Alerts connector for Microsoft Sentinel"
+description: "Learn how to install the connector Atlassian Beacon Alerts to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Atlassian Beacon Alerts connector for Microsoft Sentinel
+
+Atlassian Beacon is a cloud product that is built for Intelligent threat detection across the Atlassian platforms (Jira, Confluence, and Atlassian Admin). This can help users detect, investigate and respond to risky user activity for the Atlassian suite of products. The solution is a custom data connector from DEFEND Ltd. that is used to visualize the alerts ingested from Atlassian Beacon to Microsoft Sentinel via a Logic App.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | atlassian_beacon_alerts_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [DEFEND Ltd.](https://www.defend.co.nz/) |
+
+## Query samples
+
+**Atlassian Beacon Alerts**
+ ```kusto
+atlassian_beacon_alerts_CL
+ | sort by TimeGenerated desc
+ ```
+++
+## Vendor installation instructions
+
+Step 1: Microsoft Sentinel
+
+1. Navigate to the newly installed Logic App 'Atlassian Beacon Integration'
+
+1. Navigate to 'Logic app designer'
+
+1. Expand the 'When a HTTP request is received'
+
+1. Copy the 'HTTP POST URL'
+
+Step 2: Atlassian Beacon
+
+1. Login to Atlassian Beacon using an admin account
+
+1. Navigate to 'SIEM forwarding' under SETTINGS
+
+1. Paste the copied URL from Logic App in the text box
+
+1. Click the 'Save' button
+
+Step 3: Testing and Validation
+
+1. Login to Atlassian Beacon using an admin account
+
+1. Navigate to 'SIEM forwarding' under SETTINGS
+
+1. Click the 'Test' button right next to the newly configured webhook
+
+1. Navigate to Microsoft Sentinel
+
+1. Navigate to the newly installed Logic App
+
+1. Check for the Logic App Run under 'Runs history'
+
+1. Check for logs under the table name 'atlassian_beacon_alerts_CL' in 'Logs'
+
+1. If the analytic rule has been enabled, the above Test alert should have created an incident in Microsoft Sentinel
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/defendlimited1682894612656.microsoft-sentinel-solution-atlassian-beacon?tab=Overview) in the Azure Marketplace.
sentinel Barracuda Cloudgen Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/barracuda-cloudgen-firewall.md
+
+ Title: "Barracuda CloudGen Firewall connector for Microsoft Sentinel"
+description: "Learn how to install the connector Barracuda CloudGen Firewall to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Barracuda CloudGen Firewall connector for Microsoft Sentinel
+
+The Barracuda CloudGen Firewall (CGFW) connector allows you to easily connect your Barracuda CGFW logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Syslog (Barracuda)<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |
+
+## Query samples
+
+**All logs**
+ ```kusto
+CGFWFirewallActivity
+
+ | sort by TimeGenerated
+ ```
+
+**Top 10 Active Users (Last 24 Hours)**
+ ```kusto
+CGFWFirewallActivity
+
+ | extend User = coalesce(User, "Unauthenticated")
+
+ | summarize count() by User
+
+ | take 10
+ ```
+
+**Top 10 Applications (Last 24 Hours)**
+ ```kusto
+CGFWFirewallActivity
+
+ | where isnotempty(Application)
+
+ | summarize count() by Application
+
+ | take 10
+ ```
+++
+## Prerequisites
+
+To integrate with Barracuda CloudGen Firewall make sure you have:
+
+- **Barracuda CloudGen Firewall**: must be configured to export logs via Syslog
++
+## Vendor installation instructions
++
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CGFWFirewallActivity and load the function code or click [here](https://aka.ms/sentinel-barracudacloudfirewall-parser). The function usually takes 10-15 minutes to activate after solution installation/update.
+
+1. Install and onboard the agent for Linux
+
+ Typically, you should install the agent on a different computer from the one on which the logs are generated.
+
+ Syslog logs are collected only from **Linux** agents.
++
+2. Configure the logs to be collected
+
+Configure the facilities you want to collect and their severities.
+
+1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
+2. Select **Apply below configuration to my machines** and select the facilities and severities.
+3. Click **Save**.
+
+Configure and connect the Barracuda CloudGen Firewall
+
+[Follow instructions](https://aka.ms/sentinel-barracudacloudfirewall-connector) to configure syslog streaming. Use the IP address or hostname for the Linux machine with the Microsoft Sentinel agent installed for the Destination IP address.
++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-barracudacloudgenfirewall?tab=Overview) in the Azure Marketplace.
sentinel Cyberpion Security Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyberpion-security-logs.md
- Title: "Cyberpion Security Logs connector for Microsoft Sentinel"
-description: "Learn how to install the connector Cyberpion Security Logs to connect your data source to Microsoft Sentinel."
-- Previously updated : 02/23/2023----
-# Cyberpion Security Logs connector for Microsoft Sentinel
-
-The Cyberpion Security Logs data connector, ingests logs from the Cyberpion system directly into Sentinel. The connector allows users to visualize their data, create alerts and incidents and improve security investigations.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | CyberpionActionItems_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Cyberpion](https://www.cyberpion.com/contact/) |
-
-## Query samples
-
-**Fetch latest Action Items that are currently open**
- ```kusto
-let lookbackTime = 14d;
-let maxTimeGeneratedBucket = toscalar(
- CyberpionActionItems_CL
-
- | where TimeGenerated > ago(lookbackTime)
-
- | summarize max(bin(TimeGenerated, 1h))
- );
-CyberpionActionItems_CL
-
- | where TimeGenerated > ago(lookbackTime) and is_open_b == true
-
- | where bin(TimeGenerated, 1h) == maxTimeGeneratedBucket
-
- ```
---
-## Prerequisites
-
-To integrate with Cyberpion Security Logs make sure you have:
--- **Cyberpion Subscription**: a subscription and account is required for cyberpion logs. [One can be acquired here.](https://azuremarketplace.microsoft.com/en/marketplace/apps/cyberpion1597832716616.cyberpion)--
-## Vendor installation instructions
--
-Follow the [instructions](https://www.cyberpion.com/resource-center/integrations/azure-sentinel/) to integrate Cyberpion Security Alerts into Sentinel.
-----
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberpion1597832716616.cyberpion_mss?tab=Overview) in the Azure Marketplace.
sentinel Cyborg Security Hunter Hunt Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyborg-security-hunter-hunt-packages.md
+
+ Title: "Cyborg Security HUNTER Hunt Packages connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cyborg Security HUNTER Hunt Packages to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Cyborg Security HUNTER Hunt Packages connector for Microsoft Sentinel
+
+Cyborg Security is a leading provider of advanced threat hunting solutions, with a mission to empower organizations with cutting-edge technology and collaborative tools to proactively detect and respond to cyber threats. Cyborg Security's flagship offering, the HUNTER Platform, combines powerful analytics, curated threat hunting content, and comprehensive hunt management capabilities to create a dynamic ecosystem for effective threat hunting operations.
+
+Follow the steps to gain access to Cyborg Security's Community and setup the 'Open in Tool' capabilities in the HUNTER Platform.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | SecurityEvents<br/> |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
+| **Supported by** | [Cyborg Security](https://hunter.cyborgsecurity.io/customer-support) |
+
+## Query samples
+
+**All Alerts**
+ ```kusto
+SecurityEvent
+ ```
+++
+## Vendor installation instructions
+++
+ ResourceGroupName & WorkspaceName
+
+ {0}
+
+ WorkspaceID
+
+ {0}
+
+1. Sign up for Cyborg Security's HUNTER Community Account
+
+ Cyborg Security offers Community Member access to a subset of the Emerging Threat Collections and hunt packages.
+
+ Create a Free Community Account to get access to Cyborg Security's Hunt Packages: [Sign Up Now!](https://www.cyborgsecurity.com/user-account-creation/)
+
+2. Configure the Open in Tool Feature
+++
+1. Navigate to the [Environment](https://hunter.cyborgsecurity.io/environment) section of the HUNTER Platform.
+2. Fill in the **Root URI** of your environment in the section labeled **Microsoft Sentinel**. Replace the `<bolded items>` with the IDs and Names of your Subscription, Resource Groups and Workspaces.
+
+ `https[]()://portal.azure.com#@**AzureTenantID**/blade/Microsoft_OperationsManagementSuite_Workspace/Logs.ReactView/resourceId/%2Fsubscriptions%2F**AzureSubscriptionID**%2Fresourcegroups%2F**ResourceGroupName**%2Fproviders%2Fmicrosoft.operationalinsights%2Fworkspaces%2F<**WorkspaceName**>/`
+3. Click **Save**.
+
+3. Execute a HUNTER hunt package in Microsoft Sentinel
+++
+Identify a Cyborg Security HUNTER hunt package to deploy and use the **Open In Tool** button to quickly open Microsoft Sentinel and stage the hunting content.
+
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyborgsecurityinc1689265652101.azure-sentinel-solution-cyborgsecurity-hunter?tab=Overview) in the Azure Marketplace.
sentinel Dataminr Pulse Alerts Data Connector Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/dataminr-pulse-alerts-data-connector-using-azure-functions.md
+
+ Title: "Dataminr Pulse Alerts Data Connector (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Dataminr Pulse Alerts Data Connector (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Dataminr Pulse Alerts Data Connector (using Azure Functions) connector for Microsoft Sentinel
+
+Dataminr Pulse Alerts Data Connector brings our AI-powered real-time intelligence into Microsoft Sentinel for faster threat detection and response.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Azure function app code** | https://aka.ms/sentinel-DataminrPulseAlerts-functionapp |
+| **Log Analytics table(s)** | DataminrPulse_Alerts_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Dataminr Support](https://www.dataminr.com/dataminr-support#support) |
+
+## Query samples
+
+**Dataminr Pulse Alerts Data for all alertTypes**
+ ```kusto
+DataminrPulse_Alerts_CL
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Dataminr Pulse Alerts Data Connector (using Azure Functions) make sure you have:
+
+- **Azure Subscription**: Azure Subscription with owner role is required to register an application in Microsoft Entra ID and assign role of contributor to app in resource group.
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Required Dataminr Credentials/permissions**:
+
+a. Users must have a valid Dataminr Pulse API **client ID** and **secret** to use this data connector.
+
+ b. One or more Dataminr Pulse Watchlists must be configured in the Dataminr Pulse website.
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to the DataminrPulse in which logs are pushed via Dataminr RTAP and it will ingest logs into Microsoft Sentinel. Furthermore, the connector will fetch the ingested data from the custom logs table and create Threat Intelligence Indicators into Microsoft Sentinel Threat Intelligence. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+
+**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++
+**STEP 1- Credentials for the Dataminr Pulse Client ID and Client Secret**
+
+ * Obtain Dataminr Pulse user ID/password and API client ID/secret from your Dataminr Customer Success Manager (CSM).
++
+**STEP 2- Configure Watchlists in Dataminr Pulse portal.**
+
+ Follow the steps in this section to configure watchlists in portal:
+
+ 1. **Login** to the Dataminr Pulse [website](https://app.dataminr.com).
+
+ 2. Click on the settings gear icon, and select **Manage Lists**.
+
+ 3. Select the type of Watchlist you want to create (Cyber, Topic, Company, etc.) and click the **New List** button.
+
+ 4. Provide a **name** for your new Watchlist, and select a highlight color for it, or keep the default color.
+
+ 5. When you are done configuring the Watchlist, click **Save** to save it.
++
+**STEP 3 - App Registration steps for the Application in Microsoft Entra ID**
+
+ This integration requires an App registration in the Azure portal. Follow the steps in this section to create a new application in Microsoft Entra ID:
+ 1. Sign in to the [Azure portal](https://portal.azure.com/).
+ 2. Search for and select **Microsoft Entra ID**.
+ 3. Under **Manage**, select **App registrations > New registration**.
+ 4. Enter a display **Name** for your application.
+ 5. Select **Register** to complete the initial app registration.
+ 6. When registration finishes, the Azure portal displays the app registration's Overview pane. You see the **Application (client) ID** and **Tenant ID**. The client ID and tenant ID are required as configuration parameters for the execution of the DataminrPulse Data Connector.
+
+**Reference link:** [https://learn.microsoft.com/azure/active-directory/develop/quickstart-register-app](/azure/active-directory/develop/quickstart-register-app)
++
+**STEP 4 - Add a client secret for application in Microsoft Entra ID**
+
+ Sometimes called an application password, a client secret is a string value required for the execution of DataminrPulse Data Connector. Follow the steps in this section to create a new Client Secret:
+ 1. In the Azure portal, in **App registrations**, select your application.
+ 2. Select **Certificates & secrets > Client secrets > New client secret**.
+ 3. Add a description for your client secret.
+ 4. Select an expiration for the secret or specify a custom lifetime. Limit is 24 months.
+ 5. Select **Add**.
+ 6. *Record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page.* The secret value is required as configuration parameter for the execution of DataminrPulse Data Connector.
+
+**Reference link:** [https://learn.microsoft.com/azure/active-directory/develop/quickstart-register-app#add-a-client-secret](/azure/active-directory/develop/quickstart-register-app#add-a-client-secret)
++
+**STEP 5 - Assign role of Contributor to application in Microsoft Entra ID**
+
+ Follow the steps in this section to assign the role:
+ 1. In the Azure portal, Go to **Resource Group** and select your resource group.
+ 2. Go to **Access control (IAM)** from left panel.
+ 3. Click on **Add**, and then select **Add role assignment**.
+ 4. Select **Contributor** as role and click on next.
+ 5. In **Assign access to**, select `User, group, or service principal`.
+ 6. Click on **add members** and type **your app name** that you have created and select it.
+ 7. Now click on **Review + assign** and then again click on **Review + assign**.
+
+**Reference link:** [https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal](/azure/role-based-access-control/role-assignments-portal)
++
+**STEP 6 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+**IMPORTANT:** Before deploying the Dataminr Pulse Microsoft Sentinel data connector, have the Workspace ID and Workspace Primary Key (can be copied from the following) readily available.
+++
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the DataminrPulse connector.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-DataminrPulseAlerts-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the below information :
+
+ - Function Name
+ - Workspace ID
+ - Workspace Key
+ - AlertsTableName
+ - BaseURL
+ - ClientId
+ - ClientSecret
+ - AzureClientId
+ - AzureClientSecret
+ - AzureTenantId
+ - AzureResourceGroupName
+ - AzureWorkspaceName
+ - AzureSubscriptionId
+ - Schedule
+ - LogLevel
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+Option 2 - Manual Deployment of Azure Functions
+
+Use the following step-by-step instructions to deploy the Dataminr Pulse Microsoft Sentinel data connector manually with Azure Functions (Deployment via Visual Studio Code).
+
+1) Deploy a Function App
+
+ > [!NOTE]
+ > You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+
+1. Download the [Azure Function App](https://aka.ms/sentinel-DataminrPulseAlerts-functionapp) file. Extract archive to your local development computer.
+2. Start VS Code. Choose File in the main menu and select Open Folder.
+3. Select the top level folder from extracted files.
+4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
+If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
+If you're already signed in, go to the next step.
+5. Provide the following information at the prompts:
+
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. DmPulseXXXXX).
+
+ e. **Select a runtime:** Choose Python 3.8 or above.
+
+ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+
+6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
+7. Go to Azure Portal for the Function App configuration.
+
+Configure the Function App
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **+ New application setting**.
+3. Add each of the following application settings individually, with their respective values (case-sensitive):
+
+ - Function Name
+ - Workspace ID
+ - Workspace Key
+ - AlertsTableName
+ - BaseURL
+ - ClientId
+ - ClientSecret
+ - AzureClientId
+ - AzureClientSecret
+ - AzureTenantId
+ - AzureResourceGroupName
+ - AzureWorkspaceName
+ - AzureSubscriptionId
+ - Schedule
+ - LogLevel
+ - logAnalyticsUri (optional)
+1. Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+1. Once all application settings have been entered, click **Save**.
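+
+If you prefer to script this step, the same application settings can also be added with the Azure CLI. The following sketch is illustrative only: the function app and resource group names are placeholders, only a few of the settings listed above are shown, and every setting name must match exactly what the function code expects.
+
+```azurecli
+# Add (or update) application settings on the Function App; repeat --settings entries
+# for every setting in the list above, using the exact names shown there.
+az functionapp config appsettings set \
+    --name <function-app-name> \
+    --resource-group <resource-group-name> \
+    --settings "AlertsTableName=<table-name>" "BaseURL=<dataminr-base-url>" "LogLevel=<log-level>"
+```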
++
+**STEP 7 - Post Deployment steps**
+
+Get the Function app endpoint
+
+1. Go to the Azure Function App **Overview** page and click on **"Functions"** in the left blade.
+2. Click on the function called **"DataminrPulseAlertsHttpStarter"**.
+3. Go to **"GetFunctionurl"** and copy the function url.
+4. Replace **{functionname}** with **"DataminrPulseAlertsSentinelOrchestrator"** in copied function url.
+
+To add integration settings in Dataminr RTAP using the function URL
+
+1. In the Azure portal, go to **Function App**, select `<your_function_app>`, and on its **Overview** page click on **"Functions"** in the left blade.
+2. Click on the function called **"DataminrPulseAlertsHttpStarter"**.
+3. Go to **"Code + Test"** and click **"Test/Run"**.
+4. Provide the necessary details as mentioned below:
+
+ ```rest
+ HTTP Method : "POST"
+ Key : default (Function key)
+ Query : Name=functionName ,Value=DataminrPulseAlertsSentinelOrchestrator
+ Request Body (case-sensitive) :
+ {
+ 'integration-settings': 'ADD',
+ 'url': <URL part from copied Function-url>,
+ 'token': <value of code parameter from copied Function-url>
+ }
+ ```
+
+1. After providing all required details, click **Run**.
+1. You will receive an integration setting ID in the HTTP response with a status code of 200.
+1. Save **Integration ID** for future reference.
++
+*This completes the integration settings for Dataminr RTAP. When Dataminr RTAP sends alert data, the Function App is triggered and the alert data from Dataminr Pulse appears in the Log Analytics workspace table called "DataminrPulse_Alerts_CL".*
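+
+To confirm that alerts are arriving, you can also query the table from the command line. This is a sketch that assumes the `log-analytics` Azure CLI extension is installed and that you use the Workspace ID (customer ID) of your Log Analytics workspace.
+
+```azurecli
+# Return the 10 most recent Dataminr Pulse alerts ingested into the workspace
+az monitor log-analytics query \
+    --workspace <workspace-customer-id> \
+    --analytics-query "DataminrPulse_Alerts_CL | sort by TimeGenerated desc | take 10"
+```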
+++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dataminrinc1648845584891.dataminr_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Gigamon Amx Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/gigamon-amx-data-connector.md
+
+ Title: "Gigamon AMX Data connector for Microsoft Sentinel"
+description: "Learn how to install the connector Gigamon AMX Data to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Gigamon AMX Data connector for Microsoft Sentinel
+
+Use this data connector to integrate with Gigamon Application Metadata Exporter (AMX) and get data sent directly to Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Gigamon_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Gigamon](https://www.gigamon.com/) |
+
+## Query samples
+
+**List all artifacts**
+ ```kusto
+Gigamon_CL
+ ```
+++
+## Vendor installation instructions
+
+Gigamon Data Connector
+
+1. The Application Metadata Exporter (AMX) application converts the output from Application Metadata Intelligence (AMI) in CEF format into JSON format and sends it to cloud tools and Kafka.
+2. The AMX application can be deployed only on a V Series Node and can be connected to Application Metadata Intelligence running on a physical node or a virtual machine.
+3. The AMX application and the AMI are managed by GigaVUE-FM. This application is supported on VMware ESXi, VMware NSX-T, AWS, and Azure.
+
+++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/gigamon-inc.microsoft-sentinel-solution-gigamon?tab=Overview) in the Azure Marketplace.
sentinel Greynoise Threat Intelligence Using Azure Functions Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/greynoise-threat-intelligence-using-azure-functions-using-azure-functions.md
+
+ Title: "GreyNoise Threat Intelligence (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector GreyNoise Threat Intelligence (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# GreyNoise Threat Intelligence (using Azure Functions) connector for Microsoft Sentinel
+
+This Data Connector installs an Azure Function app to download GreyNoise indicators once per day and inserts them into the ThreatIntelligenceIndicator table in Microsoft Sentinel.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | ThreatIntelligenceIndicator<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [GreyNoise](https://www.greynoise.io/contact/general) |
+
+## Query samples
+
+**All Threat Intelligence APIs Indicators**
+ ```kusto
+ThreatIntelligenceIndicator
+ | where SourceSystem == 'GreyNoise'
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with GreyNoise Threat Intelligence (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **GreyNoise API Key**: Retrieve your GreyNoise API Key [here](https://viz.greynoise.io/account/api-key).
++
+## Vendor installation instructions
+
+You can connect GreyNoise Threat Intelligence to Microsoft Sentinel by following the steps below:
+
+The following steps create an Azure AAD application, retrieve a GreyNoise API key, and save the values in an Azure Function App configuration.
+
+1. Retrieve API Key from GreyNoise Portal.
+
+ Generate an API key from the GreyNoise Portal: https://docs.greynoise.io/docs/using-the-greynoise-api
+
+2. In your Azure AD tenant, create an Azure Active Directory (AAD) application and acquire its Tenant ID and Client ID (hold off generating a Client Secret until Step 5). Also get the Log Analytics Workspace ID associated with your Microsoft Sentinel instance.
+
+ Follow the instructions here to create your Azure AAD app and save your Client ID and Tenant ID: [Connect your threat intelligence platform to Microsoft Sentinel with the upload indicators API](/azure/sentinel/connect-threat-intelligence-upload-api#instructions)
+ NOTE: Wait until step 5 to generate your client secret.
++
+3. Assign the AAD application the Microsoft Sentinel Contributor Role.
+
+ Follow the instructions here to add the Microsoft Sentinel Contributor Role: [Connect your threat intelligence platform to Microsoft Sentinel with the upload indicators API](/azure/sentinel/connect-threat-intelligence-upload-api#assign-a-role-to-the-application)
+
+4. Specify the AAD permissions to enable MS Graph API access to the upload-indicators API.
+
+ Follow this section here to add **'ThreatIndicators.ReadWrite.OwnedBy'** permission to the AAD App: [Connect your threat intelligence platform to Microsoft Sentinel](/azure/sentinel/connect-threat-intelligence-tip#specify-the-permissions-required-by-the-application).
+ Back in your AAD App, ensure you grant admin consent for the permissions you just added.
+ Finally, in the 'Tokens and APIs' section, generate a client secret and save it. You will need it in Step 6.
+
+5. Deploy the Threat Intelligence (Preview) solution, which includes the Threat Intelligence Upload Indicators API (Preview)
+
+ See the Microsoft Sentinel Content Hub for this solution and install it in this Microsoft Sentinel instance.
+
+6. Deploy the Azure Function
+
+ Click the Deploy to Azure button.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-GreyNoise-azuredeploy)
+
+ Fill in the appropriate values for each parameter. **Be aware** that the only valid values for the **GREYNOISE_CLASSIFICATIONS** parameter are **malicious** and/or **unknown**, which must be comma-separated. Do not include **_benign_**, because doing so brings in millions of IPs that are known good and will likely cause many unwanted alerts.
+
+7. Send indicators to Sentinel
+
+ The function app installed in Step 6 queries the GreyNoise GNQL API once per day, and submits each indicator found in STIX 2.1 format to the [Microsoft Upload Threat Intelligence Indicators API](/azure/sentinel/upload-indicators-api).
+
+ Each indicator expires in ~24 hours from creation unless it's found on the next day's query, in which case the TI Indicator's **Valid Until** time is extended for another 24 hours, which keeps it active in Microsoft Sentinel.
+
+ For more information on the GreyNoise API and the GreyNoise Query Language (GNQL) [click here](https://developer.greynoise.io/docs/using-the-greynoise-api).
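+
+To verify the expiry behavior described in step 7 above, you can query the indicators from the command line. This is a sketch, not part of the official instructions: it assumes the `log-analytics` Azure CLI extension is installed and uses standard `ThreatIntelligenceIndicator` columns.
+
+```azurecli
+# Count GreyNoise indicators that are still valid (not yet expired)
+az monitor log-analytics query \
+    --workspace <workspace-customer-id> \
+    --analytics-query "ThreatIntelligenceIndicator | where SourceSystem == 'GreyNoise' | where ExpirationDateTime > now() | summarize ActiveIndicators = count()"
+```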
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/greynoiseintelligenceinc1681236078693.microsoft-sentinel-byol-greynoise?tab=Overview) in the Azure Marketplace.
sentinel Mimecast Audit Authentication Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mimecast-audit-authentication-using-azure-functions.md
+
+ Title: "Mimecast Audit & Authentication (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Mimecast Audit & Authentication (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Mimecast Audit & Authentication (using Azure Functions) connector for Microsoft Sentinel
+
+The data connector for [Mimecast Audit & Authentication](https://community.mimecast.com/s/article/Azure-Sentinel) provides customers with visibility into security events related to audit and authentication activity within Microsoft Sentinel. The data connector provides pre-created dashboards that allow analysts to view insight into user activity, aid in incident correlation, and reduce investigation response times, coupled with custom alert capabilities.
+The Mimecast products included within the connector are:
+Audit & Authentication
+
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | MimecastAudit_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Mimecast](https://community.mimecast.com/s/contactsupport) |
+
+## Query samples
+
+**MimecastAudit_CL**
+ ```kusto
+MimecastAudit_CL
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Mimecast Audit & Authentication (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Mimecast API credentials**: You need to have the following pieces of information to configure the integration:
+- mimecastEmail: Email address of a dedicated Mimecast admin user
+- mimecastPassword: Password for the dedicated Mimecast admin user
+- mimecastAppId: API Application Id of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAppKey: API Application Key of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAccessKey: Access Key for the dedicated Mimecast admin user
+- mimecastSecretKey: Secret Key for the dedicated Mimecast admin user
+- mimecastBaseURL: Mimecast Regional API Base URL
+
+The Mimecast Application Id, Application Key, along with the Access Key and Secret keys for the dedicated Mimecast admin user are obtainable via the Mimecast Administration Console: Administration | Services | API and Platform Integrations.
+
+The Mimecast API Base URL for each region is documented here: https://integrations.mimecast.com/documentation/api-overview/global-base-urls/
+- **Resource group**: You need to have a resource group created with a subscription you are going to use.
+- **Functions app**: You need to have an Azure App registered for this connector to use
+- Application Id
+- Tenant Id
+- Client Id
+- Client Secret
++
+## Vendor installation instructions
++
+> [!NOTE]
+> This connector uses Azure Functions to connect to a Mimecast API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
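+
+For example, if you choose the Key Vault option, a secret can be created ahead of time with the Azure CLI. This is a sketch only: the vault name is a placeholder, and `mimecastAccessKey` is just one example of a value you might store this way.
+
+```azurecli
+# Store one of the sensitive Mimecast values as a Key Vault secret (repeat for each value)
+az keyvault secret set \
+    --vault-name <key-vault-name> \
+    --name mimecastAccessKey \
+    --value "<mimecast-access-key>"
+```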
+
+Configuration:
+
+**STEP 1 - Configuration steps for the Mimecast API**
+
+Go to ***Azure portal > App registrations > [your_app] > Certificates & secrets > New client secret*** and create a new secret (save the Value somewhere safe right away because you will not be able to preview it later)
++
+**STEP 2 - Deploy Mimecast API Connector**
+
+**IMPORTANT:** Before deploying the Mimecast API connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Mimecast API authorization key(s) or Token, readily available.
+++
+Deploy the Mimecast Audit & Authentication Data Connector:
++
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-MimecastAudit-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the following fields:
+ - appName: Unique string that will be used as id for the app in Azure platform
+ - objectId: Azure portal > Azure Active Directory > more info > Profile --> Object ID
+ - appInsightsLocation(default): westeurope
+ - mimecastEmail: Email address of dedicated user for this integration
+ - mimecastPassword: Password for dedicated user
+ - mimecastAppId: Application Id from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAppKey: Application Key from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAccessKey: Access Key for the dedicated Mimecast user
+ - mimecastSecretKey: Secret Key for dedicated Mimecast user
+ - activeDirectoryAppId: Azure portal > App registrations > [your_app] > Application ID
+ - mimecastBaseURL: Regional Mimecast API Base URL
+ - activeDirectoryAppSecret: Azure portal > App registrations > [your_app] > Certificates & secrets > [your_app_secret]
+
+ Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+6. Go to ***Azure portal > Resource groups > [your_resource_group] > [appName](type: Storage account) > Storage Explorer > BLOB CONTAINERS > Audit checkpoints > Upload*** and create an empty file named checkpoint.txt on your machine, then select it for upload (this ensures that the date_range for SIEM logs is stored in a consistent state).
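+
+If you'd rather not use Storage Explorer in the portal, the empty checkpoint file can also be uploaded with the Azure CLI. This is a sketch only: the storage account and container names are placeholders and must match the resources created by the deployment (the container is the one shown as **Audit checkpoints** in Storage Explorer).
+
+```azurecli
+# Create an empty local file (Bash shown) and upload it to the checkpoints container
+touch checkpoint.txt
+az storage blob upload \
+    --account-name <appName-storage-account> \
+    --container-name <audit-checkpoints-container> \
+    --name checkpoint.txt \
+    --file checkpoint.txt \
+    --auth-mode login
+```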
++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mimecastnorthamerica1584469118674.azure-sentinel-solution-mimecastaudit?tab=Overview) in the Azure Marketplace.
sentinel Mimecast Intelligence For Microsoft Microsoft Sentinel Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mimecast-intelligence-for-microsoft-microsoft-sentinel-using-azure-functions.md
+
+ Title: "Mimecast Intelligence for Microsoft - Microsoft Sentinel (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Mimecast Intelligence for Microsoft - Microsoft Sentinel (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Mimecast Intelligence for Microsoft - Microsoft Sentinel (using Azure Functions) connector for Microsoft Sentinel
+
+The data connector for Mimecast Intelligence for Microsoft provides regional threat intelligence curated from Mimecast's email inspection technologies with pre-created dashboards to allow analysts to view insight into email-based threats, aid in incident correlation and reduce investigation response times.
+Mimecast products and features required:
+- Mimecast Secure Email Gateway
+- Mimecast Threat Intelligence
++
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Event(ThreatIntelligenceIndicator)<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Mimecast](https://community.mimecast.com/s/contactsupport) |
+
+## Query samples
+
+**ThreatIntelligenceIndicator**
+ ```kusto
+ThreatIntelligenceIndicator
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Mimecast Intelligence for Microsoft - Microsoft Sentinel (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Mimecast API credentials**: You need to have the following pieces of information to configure the integration:
+- mimecastEmail: Email address of a dedicated Mimecast admin user
+- mimecastPassword: Password for the dedicated Mimecast admin user
+- mimecastAppId: API Application Id of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAppKey: API Application Key of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAccessKey: Access Key for the dedicated Mimecast admin user
+- mimecastSecretKey: Secret Key for the dedicated Mimecast admin user
+- mimecastBaseURL: Mimecast Regional API Base URL
+
+The Mimecast Application Id, Application Key, along with the Access Key and Secret keys for the dedicated Mimecast admin user are obtainable via the Mimecast Administration Console: Administration | Services | API and Platform Integrations.
+
+The Mimecast API Base URL for each region is documented here: https://integrations.mimecast.com/documentation/api-overview/global-base-urls/
+- **Resource group**: You need to have a resource group created with a subscription you are going to use.
+- **Functions app**: You need to have an Azure App registered for this connector to use
+- Application Id
+- Tenant Id
+- Client Id
+- Client Secret
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to a Mimecast API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+
+Configuration:
+
+**STEP 1 - Configuration steps for the Mimecast API**
+
+Go to ***Azure portal > App registrations > [your_app] > Certificates & secrets > New client secret*** and create a new secret (save the Value somewhere safe right away because you will not be able to preview it later)
++
+**STEP 2 - Deploy Mimecast API Connector**
+
+**IMPORTANT:** Before deploying the Mimecast API connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Mimecast API authorization key(s) or Token, readily available.
+++
+Enable Mimecast Intelligence for Microsoft - Microsoft Sentinel Connector:
++
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-MimecastTIRegional-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the following fields:
+ - appName: Unique string that will be used as id for the app in Azure platform
+ - objectId: Azure portal > Azure Active Directory > more info > Profile --> Object ID
+ - appInsightsLocation(default): westeurope
+ - mimecastEmail: Email address of dedicated user for this integration
+ - mimecastPassword: Password for dedicated user
+ - mimecastAppId: Application Id from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAppKey: Application Key from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAccessKey: Access Key for the dedicated Mimecast user
+ - mimecastSecretKey: Secret Key for dedicated Mimecast user
+ - mimecastBaseURL: Regional Mimecast API Base URL
+ - activeDirectoryAppId: Azure portal > App registrations > [your_app] > Application ID
+ - activeDirectoryAppSecret: Azure portal > App registrations > [your_app] > Certificates & secrets > [your_app_secret]
+
+ Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+6. Go to ***Azure portal > Resource groups > [your_resource_group] > [appName](type: Storage account) > Storage Explorer > BLOB CONTAINERS > TIR checkpoints > Upload*** and create an empty file named checkpoint.txt on your machine, then select it for upload (this ensures that the date_range for TIR logs is stored in a consistent state).
++
+Additional configuration:
+
+Connect to a **Threat Intelligence Platforms** data connector. Follow the instructions on the connector page and then click the **Connect** button.
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mimecastnorthamerica1584469118674.azure-sentinel-solution-mimecasttiregional?tab=Overview) in the Azure Marketplace.
sentinel Mimecast Secure Email Gateway Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mimecast-secure-email-gateway-using-azure-functions.md
+
+ Title: "Mimecast Secure Email Gateway (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Mimecast Secure Email Gateway (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Mimecast Secure Email Gateway (using Azure Functions) connector for Microsoft Sentinel
+
+The data connector for [Mimecast Secure Email Gateway](https://community.mimecast.com/s/article/Azure-Sentinel) allows easy log collection from the Secure Email Gateway to surface email insight and user activity within Microsoft Sentinel. The data connector provides pre-created dashboards that allow analysts to view insight into email-based threats, aid in incident correlation, and reduce investigation response times, coupled with custom alert capabilities. Mimecast products and features required:
+- Mimecast Secure Email Gateway
+- Mimecast Data Leak Prevention
+
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | MimecastSIEM_CL<br/> MimecastDLP_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Mimecast](https://community.mimecast.com/s/contactsupport) |
+
+## Query samples
+
+**MimecastSIEM_CL**
+ ```kusto
+MimecastSIEM_CL
+
+ | sort by TimeGenerated desc
+ ```
+
+**MimecastDLP_CL**
+ ```kusto
+MimecastDLP_CL
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Mimecast Secure Email Gateway (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **Mimecast API credentials**: You need to have the following pieces of information to configure the integration:
+- mimecastEmail: Email address of a dedicated Mimecast admin user
+- mimecastPassword: Password for the dedicated Mimecast admin user
+- mimecastAppId: API Application Id of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAppKey: API Application Key of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAccessKey: Access Key for the dedicated Mimecast admin user
+- mimecastSecretKey: Secret Key for the dedicated Mimecast admin user
+- mimecastBaseURL: Mimecast Regional API Base URL
+
+The Mimecast Application Id, Application Key, along with the Access Key and Secret keys for the dedicated Mimecast admin user are obtainable via the Mimecast Administration Console: Administration | Services | API and Platform Integrations.
+
+The Mimecast API Base URL for each region is documented here: https://integrations.mimecast.com/documentation/api-overview/global-base-urls/
+- **Resource group**: You need to have a resource group created with a subscription you are going to use.
+- **Functions app**: You need to have an Azure App registered for this connector to use
+1. Application Id
+2. Tenant Id
+3. Client Id
+4. Client Secret
++
+## Vendor installation instructions
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to a Mimecast API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+
+Configuration:
+
+**STEP 1 - Configuration steps for the Mimecast API**
+
+Go to ***Azure portal > App registrations > [your_app] > Certificates & secrets > New client secret*** and create a new secret (save the Value somewhere safe right away because you will not be able to preview it later)
++
+**STEP 2 - Deploy Mimecast API Connector**
+
+**IMPORTANT:** Before deploying the Mimecast API connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Mimecast API authorization key(s) or Token, readily available.
+++
+Deploy the Mimecast Secure Email Gateway Data Connector:
++
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-MimecastSEG-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the following fields:
+ - appName: Unique string that will be used as id for the app in Azure platform
+ - objectId: Azure portal > Azure Active Directory > more info > Profile --> Object ID
+ - appInsightsLocation(default): westeurope
+ - mimecastEmail: Email address of dedicated user for this integration
+ - mimecastPassword: Password for dedicated user
+ - mimecastAppId: Application Id from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAppKey: Application Key from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAccessKey: Access Key for the dedicated Mimecast user
+ - mimecastSecretKey: Secret Key for dedicated Mimecast user
+ - mimecastBaseURL: Regional Mimecast API Base URL
+ - activeDirectoryAppId: Azure portal > App registrations > [your_app] > Application ID
+ - activeDirectoryAppSecret: Azure portal > App registrations > [your_app] > Certificates & secrets > [your_app_secret]
+
+ Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+6. Go to ***Azure portal > Resource groups > [your_resource_group] > [appName](type: Storage account) > Storage Explorer > BLOB CONTAINERS > SIEM checkpoints > Upload*** and create empty files named checkpoint.txt and dlp-checkpoint.txt on your machine, then select them for upload (this ensures that the date_range for SIEM logs is stored in a consistent state).
++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mimecastnorthamerica1584469118674.azure-sentinel-solution-mimecastseg?tab=Overview) in the Azure Marketplace.
sentinel Mimecast Targeted Threat Protection Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mimecast-targeted-threat-protection-using-azure-functions.md
+
+ Title: "Mimecast Targeted Threat Protection (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Mimecast Targeted Threat Protection (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# Mimecast Targeted Threat Protection (using Azure Functions) connector for Microsoft Sentinel
+
+The data connector for [Mimecast Targeted Threat Protection](https://community.mimecast.com/s/article/Azure-Sentinel) provides customers with visibility into security events related to the Targeted Threat Protection inspection technologies within Microsoft Sentinel. The data connector provides pre-created dashboards that allow analysts to view insight into email-based threats, aid in incident correlation, and reduce investigation response times, coupled with custom alert capabilities.
+The Mimecast products included within the connector are:
+- URL Protect
+- Impersonation Protect
+- Attachment Protect
++
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | MimecastTTPUrl_CL<br/> MimecastTTPAttachment_CL<br/> MimecastTTPImpersonation_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Mimecast](https://community.mimecast.com/s/contactsupport) |
+
+## Query samples
+
+**MimecastTTPUrl_CL**
+ ```kusto
+MimecastTTPUrl_CL
+
+ | sort by TimeGenerated desc
+ ```
+
+**MimecastTTPAttachment_CL**
+ ```kusto
+MimecastTTPAttachment_CL
+
+ | sort by TimeGenerated desc
+ ```
+
+**MimecastTTPImpersonation_CL**
+ ```kusto
+MimecastTTPImpersonation_CL
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with Mimecast Targeted Threat Protection (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **REST API Credentials/permissions**: You need to have the following pieces of information to configure the integration:
+- mimecastEmail: Email address of a dedicated Mimecast admin user
+- mimecastPassword: Password for the dedicated Mimecast admin user
+- mimecastAppId: API Application Id of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAppKey: API Application Key of the Mimecast Microsoft Sentinel app registered with Mimecast
+- mimecastAccessKey: Access Key for the dedicated Mimecast admin user
+- mimecastSecretKey: Secret Key for the dedicated Mimecast admin user
+- mimecastBaseURL: Mimecast Regional API Base URL
+
+> The Mimecast Application Id, Application Key, along with the Access Key and Secret keys for the dedicated Mimecast admin user are obtainable via the Mimecast Administration Console: Administration | Services | API and Platform Integrations.
+
+> The Mimecast API Base URL for each region is documented here: https://integrations.mimecast.com/documentation/api-overview/global-base-urls/
++
+## Vendor installation instructions
+
+Resource group
+
+You need to have a resource group created with a subscription you are going to use.
+
+Functions app
+
+You need to have an Azure App registered for this connector to use
+1. Application Id
+2. Tenant Id
+3. Client Id
+4. Client Secret
++
+> [!NOTE]
+ > This connector uses Azure Functions to connect to a Mimecast API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
++
+**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
+
+Configuration:
+
+**STEP 1 - Configuration steps for the Mimecast API**
+
+Go to ***Azure portal > App registrations > [your_app] > Certificates & secrets > New client secret*** and create a new secret (save the Value somewhere safe right away because you will not be able to preview it later)
++
+**STEP 2 - Deploy Mimecast API Connector**
+
+**IMPORTANT:** Before deploying the Mimecast API connector, have the Workspace ID and Workspace Primary Key (can be copied from the following), as well as the Mimecast API authorization key(s) or Token, readily available.
+++
+Deploy the Mimecast Targeted Threat Protection Data Connector:
++
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-MimecastTTP-azuredeploy)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the following fields:
+ - appName: Unique string that will be used as id for the app in Azure platform
+ - objectId: Azure portal > Azure Active Directory > more info > Profile --> Object ID
+ - appInsightsLocation(default): westeurope
+ - mimecastEmail: Email address of dedicated user for this integration
+ - mimecastPassword: Password for dedicated user
+ - mimecastAppId: Application Id from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAppKey: Application Key from the Microsoft Sentinel app registered with Mimecast
+ - mimecastAccessKey: Access Key for the dedicated Mimecast user
+ - mimecastSecretKey: Secret Key for dedicated Mimecast user
+ - mimecastBaseURL: Regional Mimecast API Base URL
+ - activeDirectoryAppId: Azure portal > App registrations > [your_app] > Application ID
+ - activeDirectoryAppSecret: Azure portal > App registrations > [your_app] > Certificates & secrets > [your_app_secret]
+
+ Note: If using Azure Key Vault secrets for any of the values above, use the `@Microsoft.KeyVault(SecretUri={Security Identifier})` schema in place of the string values. Refer to [Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+6. Go to ***Azure portal > Resource groups > [your_resource_group] > [appName](type: Storage account) > Storage Explorer > BLOB CONTAINERS > TTP checkpoints > Upload*** and create empty files named attachment-checkpoint.txt, impersonation-checkpoint.txt, and url-checkpoint.txt on your machine, then select them for upload (this ensures that the date_range for TTP logs is stored in a consistent state).
++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mimecastnorthamerica1584469118674.azure-sentinel-solution-mimecastttp?tab=Overview) in the Azure Marketplace.
sentinel Nxlog Fim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-fim.md
+
+ Title: "NXLog FIM connector for Microsoft Sentinel"
+description: "Learn how to install the connector NXLog FIM to connect your data source to Microsoft Sentinel."
++ Last updated : 11/29/2023++++
+# NXLog FIM connector for Microsoft Sentinel
+
+The [NXLog FIM](https://docs.nxlog.co/refman/current/im/fim.html) module allows for the scanning of files and directories, reporting detected additions, changes, renames and deletions on the designated paths through calculated checksums during successive scans. This REST API connector can efficiently export the configured FIM events to Microsoft Sentinel in real time.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | NXLogFIM_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
+
+## Query samples
+
+**Find all DELETE events**
+ ```kusto
+NXLogFIM_CL
+
+ | where EventType_s == 'DELETE'
+
+ | project-away
+ SourceSystem,
+ Type
+
+ | sort by EventTime_t
+ ```
+
+**Bar Chart for Events per type, per host**
+ ```kusto
+NXLogFIM_CL
+
+ | summarize EventCount = count() by Hostname_s, EventType_s
+
+ | where strlen(EventType_s) > 1
+
+ | project Hostname = Hostname_s, EventType = EventType_s, EventCount
+
+ | order by EventCount desc
+
+ | render barchart
+ ```
+
+**Pie Chart for visualization of events per host**
+ ```kusto
+NXLogFIM_CL
+
+ | summarize EventCount = count() by Hostname_s, EventType_s
+
+ | sort by EventCount
+
+ | render piechart
+ ```
+
+**General Summary of Events per Host**
+ ```kusto
+NXLogFIM_CL
+
+ | summarize count() by Hostname_s, EventType_s
+ ```
+++
+## Vendor installation instructions
++
+Follow the step-by-step instructions in the [Microsoft Sentinel](https://docs.nxlog.co/userguide/integrate/microsoft-azure-sentinel.html) integration chapter of the *NXLog User Guide* to configure this connector.
+++++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nxlogltd1589381969261.nxlog_fim?tab=Overview) in the Azure Marketplace.
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway.md
az monitor autoscale rule create \
For more information on the available metrics, see the [User metrics options](./concept-metrics.md#user-metrics-options) section of [Metrics for Azure Spring Apps](./concept-metrics.md).
+## Configure the response cache
+
+Response cache configuration provides a way to define an HTTP response cache that you can apply globally or at the route level.
+
+### Enable the response cache globally
+
+After you enable the response cache globally, the response cache is automatically enabled for all applicable routes.
+
+#### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to enable the response cache globally:
+
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane.
+1. On the **Spring Cloud Gateway** page, select **Configuration**.
+1. In the **Response Cache** section, select **Enable response cache** and then set **Scope** to **Instance**.
+1. Set **Size** and **Time to live** for the response cache.
+1. Select **Save**.
+
+Use the following steps to disable the response cache:
+
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane.
+1. On the **Spring Cloud Gateway** page, select **Configuration**.
+1. In the **Response Cache** section, clear **Enable response cache**.
+1. Select **Save**.
+
+#### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to enable the response cache globally:
+
+```azurecli
+az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --enable-response-cache \
+ --response-cache-scope Instance \
+ --response-cache-size {Examples are 1GB, 100MB, 100KB} \
+ --response-cache-ttl {Examples are 1h, 30m, 50s}
+```
+
+Use the following command to disable the response cache:
+
+```azurecli
+az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --enable-response-cache false
+```
+++
+### Enable the response cache at the route level
+
+To enable the response cache for any route, use the `LocalResponseCache` filter. The following example shows you how to use the `LocalResponseCache` filter in the routing rule configuration:
+
+```json
+{
+ "filters": [
+ "<other-app-level-filter-of-route>",
+ ],
+ "routes": [
+ {
+ "predicates": [
+ "Path=/api/**",
+ "Method=GET"
+ ],
+ "filters": [
+ "<other-filter-of-route>",
+ "LocalResponseCache=3m, 1MB"
+ ],
+ }
+ ]
+}
+```
+
+For more information, see the [LocalResponseCache](./how-to-configure-enterprise-spring-cloud-gateway-filters.md#localresponsecache) section of [How to use VMware Spring Cloud Gateway route filters with the Azure Spring Apps Enterprise plan](./how-to-configure-enterprise-spring-cloud-gateway-filters.md) and [LocalResponseCache](https://aka.ms/vmware/scg/filters/localresponsecache) in the VMware documentation.
+
+Instead of configuring `size` and `timeToLive` for each `LocalResponseCache` filter individually, you can set these parameters at the Spring Cloud Gateway level. This option enables you to use the `LocalResponseCache` filter without specifying these values initially, while retaining the flexibility to override them later.
+
+#### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to enable the response cache at the route level and set `size` and `timeToLive`:
+
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane.
+1. On the **Spring Cloud Gateway** page, select **Configuration**.
+1. In the **Response Cache** section, select **Enable response cache** and then set **Scope** to **Route**.
+1. Set **Size** and **Time to live** for the response cache.
+1. Select **Save**.
+
+Use the following steps to disable the response cache at the route level, which clears `size` and `timeToLive`:
+
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** on the navigation pane.
+1. On the **Spring Cloud Gateway** page, select **Configuration**.
+1. In the **Response Cache** section, clear **Enable response cache**.
+1. Select **Save**.
+
+#### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to enable the response cache at the route level and set `size` and `timeToLive`:
+
+```azurecli
+az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --enable-response-cache \
+ --response-cache-scope Route \
+ --response-cache-size {Examples are 1GB, 100MB, 100KB} \
+ --response-cache-ttl {Examples are 1h, 30m, 50s}
+```
+
+Use the following command to disable the response cache at the route level, which clears `size` and `timeToLive`:
+
+```azurecli
+az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --enable-response-cache false
+```
+++
+The following example shows you how to use the `LocalResponseCache` filter when `size` and `timeToLive` are set at the Spring Cloud Gateway level:
+
+```json
+{
+ "filters": [
+ "<other-app-level-filter-of-route>",
+ ],
+ "routes": [
+ {
+ "predicates": [
+ "Path=/api/path1/**",
+ "Method=GET"
+ ],
+ "filters": [
+ "<other-filter-of-route>",
+ "LocalResponseCache"
+ ],
+ },
+ {
+ "predicates": [
+ "Path=/api/path2/**",
+ "Method=GET"
+ ],
+ "filters": [
+ "<other-filter-of-route>",
+ "LocalResponseCache=3m, 1MB"
+ ],
+ }
+ ]
+}
+```
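+
+A routing rule file like the one above is typically applied to the gateway as a route config. The following Azure CLI sketch is illustrative only: it assumes the JSON is saved as `sample-routes.json`, and the route config name `my-routes` and the app name are placeholders you adapt to your environment.
+
+```azurecli
+# Apply the routing rules defined in sample-routes.json to an existing route config
+az spring gateway route-config update \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --name my-routes \
+    --app-name <app-name> \
+    --routes-file sample-routes.json
+```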
+ ## Configure environment variables The Azure Spring Apps service manages and tunes VMware Spring Cloud Gateway. Except for the use cases that configure application performance monitoring (APM) and the log level, you don't normally need to configure VMware Spring Cloud Gateway with environment variables.
spring-apps How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-api-portal.md
This article shows you how to use API portal for VMware Tanzu with the Azure Spr
- An already provisioned Azure Spring Apps Enterprise plan instance with API portal enabled. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md). - [Spring Cloud Gateway for Tanzu](./how-to-use-enterprise-spring-cloud-gateway.md) is enabled during provisioning and the corresponding API metadata is configured.
-## Configure API portal
-
-The following sections describe configuration in API portal.
-
-### Configure single sign-on (SSO)
+## Configure single sign-on (SSO)
API portal supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) that supports the OpenID Connect Discovery protocol.
API portal supports authentication and authorization using single sign-on (SSO)
| Property | Required? | Description | | - | - | - |
-| issuerUri | Yes | The URI that the app asserts as its Issuer Identifier. For example, if the issuer-uri provided is "https://example.com", then an OpenID Provider Configuration Request will be made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response. |
+| issuerUri | Yes | The URI that the app asserts as its Issuer Identifier. For example, if the issuer-uri provided is "https://example.com", then an OpenID Provider Configuration Request is made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response. |
| clientId | Yes | The OpenID Connect client ID provided by your IdP | | clientSecret | Yes | The OpenID Connect client secret provided by your IdP | | scope | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider |
To set up SSO with Microsoft Entra ID, see [How to set up single sign-on with Mi
> [!NOTE] > If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and re-add the correct configuration.
-### Configure the instance count
+## Configure the instance count
+
+### [Azure portal](#tab/Portal)
+
+Use the following steps to configure the instance count using API portal:
+
+1. Navigate to your service instance and select **API portal**.
+1. Select **Scale out**.
+1. Configure **Instance count** and then select **Save**.
+
+### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to configure the instance count using API portal. Be sure to replace the placeholder with your actual value.
+
+```azurecli
+az spring api-portal update \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-service-instance-name> \
+    --instance-count <number>
+```
-Configuration of the instance count for API portal is supported, unless you're using SSO. If you're using the SSO feature, only one instance count is supported.
+ ## Assign a public endpoint for API portal
-To access API portal, use the following steps to assign a public endpoint:
+### [Azure portal](#tab/Portal)
+
+Use the following steps to assign a public endpoint to API portal:
1. Select **API portal**. 1. Select **Overview** to view the running state and resources allocated to API portal.
-1. Select **Yes** next to *Assign endpoint* to assign a public endpoint. A URL will be generated within a few minutes.
+1. Select **Yes** next to *Assign endpoint* to assign a public endpoint. A URL is generated within a few minutes.
1. Save the URL for use later.
-You can also use the Azure CLI to assign a public endpoint with the following command:
+### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to assign a public endpoint to API portal:
```azurecli az spring api-portal update --assign-endpoint ``` ++
+## Configure the API try-out feature
+
+API portal enables you to view APIs centrally and try them out using the API try-out feature. API try-out is enabled by default and this configuration helps you turn it off across the whole API portal instance. For more information, see the [Try out APIs in API portal](#try-out-apis-in-api-portal) section.
+
+### [Azure portal](#tab/Portal)
+
+Use the following steps to enable or disable API try-out:
+
+1. Navigate to your service instance and select **API portal**.
+1. Select **Configuration**.
+1. Select or clear **Enable API try-out** and then select **Save**.
+
+### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to enable API try-out:
+
+```azurecli
+az spring api-portal update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --enable-api-try-out
+```
+
+Use the following command to disable API try-out:
+
+```azurecli
+az spring api-portal update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --enable-api-try-out false
+```
+++ ## Configure API routing with OpenAPI Spec on Spring Cloud Gateway for Tanzu This section describes how to view and try out APIs with schema definitions in API portal. Use the following steps to configure API routing with an OpenAPI spec URL on Spring Cloud Gateway for Tanzu.
-1. Create an app in Azure Spring Apps that the gateway will route traffic to.
+1. Create an app in Azure Spring Apps that the gateway routes traffic to.
1. Generate the OpenAPI definition and get the URI to access it. The following two URI options are accepted:
This section describes how to view and try out APIs with schema definitions in A
> [!NOTE] > It takes several minutes to sync between Spring Cloud Gateway for Tanzu and API portal.
-Select the `endpoint URL` to go to API portal. You'll see all the routes configured in Spring Cloud Gateway for Tanzu.
+Select the `endpoint URL` to go to API portal. You see all the routes configured in Spring Cloud Gateway for Tanzu.
## Try out APIs in API portal Use the following steps to try out APIs: 1. Select the API you would like to try.
-1. Select **EXECUTE**, and the response will be shown.
+1. Select **EXECUTE**, and the response appears.
:::image type="content" source="media/how-to-use-enterprise-api-portal/api-portal-tryout.png" alt-text="Screenshot of API portal.":::
storage Storage Files Configure P2s Vpn Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md
Title: Configure a point-to-site (P2S) VPN on Windows for use with Azure Files
-description: How to configure a point-to-site (P2S) VPN on Windows for use with Azure Files
+description: How to configure a point-to-site (P2S) VPN on Windows for use with SMB Azure file shares
Previously updated : 11/21/2023 Last updated : 12/01/2023
The article details the steps to configure a point-to-site VPN on Windows (Windo
- An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage accounts, which are management constructs that represent a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources. Learn more about how to deploy Azure file shares and storage accounts in [Create an Azure file share](storage-how-to-create-file-share.md). -- A virtual network with a private endpoint for the storage account that contains the Azure file share you want to mount on-premises. To learn how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell).
+- A [virtual network](../../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) with a private endpoint for the storage account that contains the Azure file share you want to mount on-premises. To learn how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell).
-- A [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) must be created on the virtual network, and you'll need to know the name of the gateway subnet.
+- You must create a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) on the virtual network. To create a gateway subnet, sign into the Azure portal, navigate to the virtual network, select **Settings > Subnets**, and then select **+ Gateway subnet**. When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The number of IP addresses needed depends on the VPN gateway configuration that you want to create. It's best to specify /27 or larger (/26, /25 etc.) to allow enough IP addresses for future changes, such as adding an ExpressRoute gateway.
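+
+If you script your environment with the Azure CLI instead of the portal, the following sketch shows one way to create the gateway subnet. The address prefix is only an example; choose a range that fits your virtual network's address space.
+
+```azurecli
+# The subnet must be named exactly GatewaySubnet; /27 or larger is recommended
+az network vnet subnet create \
+    --resource-group <resource-group-name> \
+    --vnet-name <vnet-name> \
+    --name GatewaySubnet \
+    --address-prefixes 10.0.255.0/27
+```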
## Collect environment information
-Before setting up the point-to-site VPN, you need to collect some information about your environment. Replace `<resource-group>`, `<vnet-name>`, `<subnet-name>`, and `<storage-account-name>` with the appropriate values for your environment.
+Before setting up the point-to-site VPN, you need to collect some information about your environment.
+
+# [Portal](#tab/azure-portal)
+
+In order to set up a point-to-site VPN using the Azure portal, you'll need to know your resource group name, virtual network name, gateway subnet name, and storage account name.
+
+# [PowerShell](#tab/azure-powershell)
+
+Run this script to collect the necessary information. Replace `<resource-group>`, `<vnet-name>`, `<subnet-name>`, and `<storage-account-name>` with the appropriate values for your environment.
```PowerShell $resourceGroupName = "<resource-group-name>"
$privateEndpoint = Get-AzPrivateEndpoint | `
} | ` Select-Object -First 1 ```+ ## Create root certificate for VPN authentication
-In order for VPN connections from your on-premises Windows machines to be authenticated to access your virtual network, you must create two certificates: a root certificate, which will be provided to the virtual machine gateway, and a client certificate, which will be signed with the root certificate. The following PowerShell creates the root certificate; you'll create the client certificate after deploying the Azure virtual network gateway.
+In order for VPN connections from your on-premises Windows machines to be authenticated to access your virtual network, you must create two certificates:
+
+1. A root certificate, which will be provided to the virtual network gateway
+2. A client certificate, which will be signed with the root certificate
+
+You can either use a root certificate that was generated with an enterprise solution, or you can generate a self-signed certificate. If you're using an enterprise solution, acquire the .cer file for the root certificate from your IT organization.
+
+If you aren't using an enterprise certificate solution, create a self-signed root certificate using this PowerShell script. You'll create the client certificate after deploying the virtual network gateway. If possible, leave your PowerShell session open so you don't need to redefine variables when you create the client certificate later in this article.
+
+> [!IMPORTANT]
+> Run this PowerShell script as administrator from an on-premises machine running Windows 10/Windows Server 2016 or later. Don't run the script from a Cloud Shell or VM in Azure.
```PowerShell $rootcertname = "CN=P2SRootCert"
foreach($line in $rawRootCertificate) {
## Deploy virtual network gateway
-The Azure virtual network gateway is the service that your on-premises Windows machines will connect to. Before deploying the virtual network gateway, you must create a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) on the virtual network.
+The Azure virtual network gateway is the service that your on-premises Windows machines will connect to. If you haven't already, you must create a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) on the virtual network before deploying the virtual network gateway.
Deploying a virtual network gateway requires two basic components: 1. A public IP address that will identify the gateway to your clients wherever they are in the world
-2. The root certificate you created earlier, which will be used to authenticate your clients
+2. The root certificate you created in the previous step, which will be used to authenticate your clients
-Remember to replace `<desired-vpn-name-here>`, `<desired-region-here>`, and `<gateway-subnet-name-here>` in the following script with the proper values for these variables.
+You can use the Azure portal or Azure PowerShell to deploy the virtual network gateway. Deployment can take up to 45 minutes to complete.
-> [!NOTE]
-> Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this PowerShell script will block the deployment from being completed. This is expected.
+# [Portal](#tab/azure-portal)
+
+To deploy a virtual network gateway using the Azure portal, follow these instructions.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In **Search resources, services, and docs**, type *virtual network gateways*. Locate Virtual network gateways in the Marketplace search results and select it.
+
+1. Select **+ Create** to create a new virtual network gateway.
+
+1. On the **Basics** tab, fill in the values for **Project details** and **Instance details**.
+
+ :::image type="content" source="media/storage-files-configure-p2s-vpn-windows/create-virtual-network-gateway.png" alt-text="Screenshot showing how to create a virtual network gateway using the Azure portal." lightbox="media/storage-files-configure-p2s-vpn-windows/create-virtual-network-gateway.png":::
+
+ * **Subscription**: Select the subscription you want to use from the dropdown.
+ * **Resource Group**: This setting is autofilled when you select your virtual network on this page.
+ * **Name**: Name your gateway. Naming your gateway isn't the same as naming a gateway subnet. It's the name of the gateway object you're creating.
+ * **Region**: Select the region in which you want to create this resource. The region for the gateway must be the same as the virtual network.
+ * **Gateway type**: Select **VPN**. VPN gateways use the virtual network gateway type **VPN**.
+ * **SKU**: Select the gateway SKU that supports the features you want to use from the dropdown. See [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsku). Don't use the Basic SKU because it doesn't support IKEv2 authentication.
+ * **Generation**: Select the generation you want to use. We recommend using a Generation2 SKU. For more information, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#gwsku).
+ * **Virtual network**: From the dropdown, select the virtual network to which you want to add this gateway. If you can't see the virtual network for which you want to create a gateway, make sure you selected the correct subscription and region.
+ * **Subnet**: This field should be grayed out and list the name of the gateway subnet you created, along with its IP address range. If you instead see a **Gateway subnet address range** field with a text box, then you haven't yet configured a gateway subnet (see [Prerequisites](#prerequisites).)
+
+1. Specify the values for the **Public IP address** that gets associated to the virtual network gateway. The public IP address is assigned to this object when the virtual network gateway is created. The only time the primary public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades.
+
+ :::image type="content" source="media/storage-files-configure-p2s-vpn-windows/create-public-ip-address.png" alt-text="Screenshot showing how to specify the public IP address for a virtual network gateway using the Azure portal." lightbox="media/storage-files-configure-p2s-vpn-windows/create-public-ip-address.png":::
+
+ * **Public IP address**: Leave **Create new** selected.
+ * **Public IP address name**: In the text box, type a name for your public IP address instance.
+ * **Public IP address SKU**: Setting is autoselected.
+ * **Assignment**: The assignment is typically autoselected and can be either Dynamic or Static.
+ * **Enable active-active mode**: Select **Disabled**. Only enable this setting if you're creating an active-active gateway configuration.
+ * **Configure BGP**: Select **Disabled**, unless your configuration specifically requires this setting. If you do require this setting, the default ASN is 65515, although this value can be changed.
+
+1. Select **Review + create** to run validation. Once validation passes, select **Create** to deploy the virtual network gateway. Deployment can take up to 45 minutes to complete.
+
+1. When deployment is complete, select **Go to resource**.
+
+1. In the left pane, select **Settings > Point-to-site configuration** and then select **Configure now**. You should see the Point-to-site configuration page.
+
+ :::image type="content" source="media/storage-files-configure-p2s-vpn-windows/point-to-site-configuration.png" alt-text="Screenshot showing how to configure a point-to-site VPN using the Azure portal." lightbox="media/storage-files-configure-p2s-vpn-windows/point-to-site-configuration.png":::
+
+ * **Address pool**: Add the private IP address range that you want to use. VPN clients dynamically receive an IP address from the range that you specify. The minimum subnet mask is 29 bit for active/passive and 28 bit for active/active configuration.
+ * **Tunnel type**: Specify the tunnel type you want to use. Computers connecting via the native Windows VPN client will try IKEv2 first. If that doesn't connect, they fall back to SSTP (if you select both IKEv2 and SSTP from the dropdown). If you select the OpenVPN tunnel type, you can connect using an OpenVPN Client or the Azure VPN Client.
+ * **Authentication type**: Specify the authentication type you want to use (in this case, choose Azure certificate).
+ * **Root certificate name**: The file name of the root certificate (.cer file).
+ * **Public certificate data**: Open the root certificate with Notepad and copy/paste the public certificate data into this text field. If you used the PowerShell script in this article to generate a self-signed root certificate, it will be located in `C:\vpn-temp`. Be sure to only paste the text that's between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----. Don't include any additional spaces or characters.
+
+ > [!NOTE]
+ > If you don't see tunnel type or authentication type, your gateway is using the Basic SKU. The Basic SKU doesn't support IKEv2 authentication. If you want to use IKEv2, you need to delete and recreate the gateway using a different gateway SKU.
+
+1. Select **Save** at the top of the page to save all of the configuration settings and upload the root certificate public key information to Azure.
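+
+If you'd rather not copy the certificate data from Notepad, the following is a minimal sketch that prints the Base64-encoded public certificate data so you can paste it into the **Public certificate data** field. It assumes the self-signed root certificate subject `CN=P2SRootCert` used earlier in this article.
+
+```powershell
+# Sketch: print the Base64 public certificate data for the root certificate.
+# Assumes the subject CN=P2SRootCert from the earlier script; adjust if yours differs.
+$rootCert = Get-ChildItem -Path "Cert:\CurrentUser\My" |
+    Where-Object { $_.Subject -eq "CN=P2SRootCert" } |
+    Select-Object -First 1
+[System.Convert]::ToBase64String($rootCert.RawData)
+```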
+
+# [PowerShell](#tab/azure-powershell)
+
+Replace `<desired-vpn-name>`, `<desired-region>`, and `<gateway-subnet-name>` in the following script with the proper values for these variables.
+
+While this resource is being deployed, this PowerShell script blocks until the deployment is complete. This is expected.
```PowerShell
-$vpnName = "<desired-vpn-name-here>"
+$vpnName = "<desired-vpn-name>"
$publicIpAddressName = "$vpnName-PublicIP"
-$region = "<desired-region-here>"
-$gatewaySubnet = "<gateway-subnet-name-here>"
+$region = "<desired-region>"
+$gatewaySubnet = "<gateway-subnet-name>"
$publicIPAddress = New-AzPublicIpAddress ` -ResourceGroupName $resourceGroupName `
$vpn = New-AzVirtualNetworkGateway `
-ResourceGroupName $resourceGroupName ` -Name $vpnName ` -Location $region `
- -GatewaySku VpnGw1 `
+ -GatewaySku VpnGw2 `
-GatewayType Vpn ` -VpnType RouteBased ` -IpConfigurations $gatewayIpConfig `
$vpn = New-AzVirtualNetworkGateway `
-VpnClientProtocol IkeV2 ` -VpnClientRootCertificates $azRootCertificate ```+ ## Create client certificate
-The following script creates the client certificate with the URI of the virtual network gateway. This certificate is signed with the root certificate you created earlier.
+Each client computer that you connect to a virtual network with a point-to-site connection must have a client certificate installed. You generate the client certificate from the root certificate and install it on each client computer. If you don't install a valid client certificate, authentication will fail when the client tries to connect. You can either create a client certificate from a root certificate that was generated with an enterprise solution, or you can create a client certificate from a self-signed root certificate.
+
+### Create client certificate using an enterprise solution
+
+If you're using an enterprise certificate solution, generate a client certificate with the common name value format *name@yourdomain.com*. Use this format instead of the *domain name\username* format. Make sure the client certificate is based on a user certificate template that has *Client Authentication* listed as the first item in the user list. Check the certificate by double-clicking it and viewing **Enhanced Key Usage** in the **Details** tab.
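+
+As an alternative to checking the certificate in the UI, here's a hedged PowerShell sketch that lists the enhanced key usages for an installed certificate. The `<THUMBPRINT>` placeholder is illustrative; Client Authentication corresponds to OID 1.3.6.1.5.5.7.3.2.
+
+```powershell
+# Sketch: list the enhanced key usages for a client certificate in the current user store.
+# Replace <THUMBPRINT> with your client certificate's thumbprint (illustrative placeholder).
+$clientCert = Get-ChildItem -Path "Cert:\CurrentUser\My\<THUMBPRINT>"
+# Client Authentication appears as OID 1.3.6.1.5.5.7.3.2.
+$clientCert.EnhancedKeyUsageList
+```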
+
+### Create client certificate from a self-signed root certificate
+
+If you're not using an enterprise certificate solution, you can use PowerShell to create a client certificate with the URI of the virtual network gateway. This certificate will be signed with the root certificate you created earlier. When you generate a client certificate from a self-signed root certificate, it's automatically installed on the computer that you used to generate it.
+
+If you want to install a client certificate on another client computer, export the certificate as a .pfx file, along with the entire certificate chain. Doing so will create a .pfx file that contains the root certificate information required for the client to authenticate. To export the self-signed root certificate as a .pfx, select the root certificate and use the same steps as described in [Export the client certificate](../../vpn-gateway/vpn-gateway-certificates-point-to-site.md#clientexport).
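+
+For reference, a hedged sketch of that export using PowerShell; the thumbprint, output path, and password are illustrative placeholders.
+
+```powershell
+# Sketch: export the client certificate plus its chain (including the root) as a .pfx file.
+# <CLIENT-CERT-THUMBPRINT>, the output path, and the password are illustrative placeholders.
+$clientCert = Get-ChildItem -Path "Cert:\CurrentUser\My\<CLIENT-CERT-THUMBPRINT>"
+$pfxPassword = ConvertTo-SecureString -String "<password>" -Force -AsPlainText
+Export-PfxCertificate -Cert $clientCert -FilePath "C:\vpn-temp\P2SClientCert.pfx" -Password $pfxPassword -ChainOption BuildChain
+```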
+
+#### Identify the self-signed root certificate
+
+If you're using the same PowerShell session that you used to create your self-signed root certificate, you can skip ahead to [Generate a client certificate](#generate-a-client-certificate).
+
+If not, use the following steps to identify the self-signed root certificate that's installed on your computer.
+
+1. Get a list of the certificates that are installed on your computer.
+
+ ```powershell
+ Get-ChildItem -Path "Cert:\CurrentUser\My"
+ ```
+
+1. Locate the subject name from the returned list, then copy the thumbprint that's located next to it to a text file. In the following example, there are two certificates. The CN name is the name of the self-signed root certificate from which you want to generate a child certificate. In this case, it's called *P2SRootCert*.
+
+ ```
+    Thumbprint                                Subject
+    ----------                                -------
+    AED812AD883826FF76B4D1D5A77B3C08EFA79F3F  CN=P2SChildCert4
+    7181AA8C1B4D34EEDB2F3D3BEC5839F3FE52D655  CN=P2SRootCert
+ ```
+
+1. Declare a variable for the root certificate using the thumbprint from the previous step. Replace `<THUMBPRINT>` with the thumbprint of the root certificate from which you want to generate a client certificate.
+
+ ```powershell
+ $rootcert = Get-ChildItem -Path "Cert:\CurrentUser\My\<THUMBPRINT>"
+ ```
+
+ For example, using the thumbprint for *P2SRootCert* in the previous step, the command looks like this:
+
+ ```powershell
+ $rootcert = Get-ChildItem -Path "Cert:\CurrentUser\My\7181AA8C1B4D34EEDB2F3D3BEC5839F3FE52D655"
+ ```
+
+#### Generate a client certificate
+
+Use the `New-AzVpnClientConfiguration` PowerShell cmdlet to generate a client certificate. If you're not using the same PowerShell session that you used to create your self-signed root certificate, you'll need to [identify the self-signed root certificate](#identify-the-self-signed-root-certificate) as described in the previous section. Before running the script, replace `<resource-group-name>` with your resource group name and `<vpn-gateway-name>` with the name of the virtual network gateway you just deployed.
+
+> [!IMPORTANT]
+> Run this PowerShell script as administrator from the on-premises Windows machine that you want to connect to the Azure file share. The computer must be running Windows 10/Windows Server 2016 or later. Don't run the script from a Cloud Shell in Azure. Make sure you sign in to your Azure account before running the script (`Connect-AzAccount`).
```PowerShell $clientcertpassword = "1234"
+$resourceGroupName = "<resource-group-name>"
+$vpnName = "<vpn-gateway-name>"
+$vpnTemp = "C:\vpn-temp\"
+$certLocation = "Cert:\CurrentUser\My"
$vpnClientConfiguration = New-AzVpnClientConfiguration ` -ResourceGroupName $resourceGroupName `
Export-PfxCertificate `
## Configure the VPN client
-The Azure virtual network gateway will create a downloadable package with configuration files required to initialize the VPN connection on your on-premises Windows machine. You'll configure the VPN connection using the [Always On VPN](/windows-server/remote/remote-access/vpn/always-on-vpn/) feature introduced in Windows 10/Windows Server 2016. This package also contains executables that will configure the legacy Windows VPN client, if desired. This guide uses Always On VPN rather than the legacy Windows VPN client because the Always On VPN client allows you to connect/disconnect from the Azure VPN without having administrator permissions to the machine.
+The Azure virtual network gateway will create a downloadable package with configuration files required to initialize the VPN connection on your on-premises Windows machine. The configuration package contains settings that are specific to the VPN gateway that you created. If you make changes to the gateway, such as changing a tunnel type, certificate, or authentication type, you'll need to generate another VPN client profile configuration package and install it on each client. Otherwise, your VPN clients may not be able to connect.
+
+You'll configure the VPN connection using the [Always On VPN](/windows-server/remote/remote-access/vpn/always-on-vpn/) feature introduced in Windows 10/Windows Server 2016. This package also contains executables that will configure the legacy Windows VPN client, if desired. This guide uses Always On VPN rather than the legacy Windows VPN client because the Always On VPN client allows you to connect/disconnect from the Azure VPN without having administrator permissions to the machine.
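+
+If you change the gateway later and need to regenerate the package without rerunning the full script, a minimal sketch follows. The resource group name, gateway name, and output path are placeholders, and the `VpnProfileSASUrl` property is assumed from the `New-AzVpnClientConfiguration` output.
+
+```powershell
+# Sketch: regenerate and download the VPN client profile package after a gateway change.
+# <resource-group-name>, <vpn-gateway-name>, and the output path are illustrative placeholders.
+$vpnClientProfile = New-AzVpnClientConfiguration `
+    -ResourceGroupName "<resource-group-name>" `
+    -Name "<vpn-gateway-name>" `
+    -AuthenticationMethod "EapTls"
+Invoke-WebRequest -Uri $vpnClientProfile.VpnProfileSASUrl -OutFile "C:\vpn-temp\vpnclientconfiguration.zip"
+```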
+
+# [Portal](#tab/azure-portal)
+
+## Install the client certificate
+
+To install the client certificate required for authentication against the virtual network gateway, follow these steps on the client computer.
++
+## Install the VPN client
+
+This section helps you configure the native VPN client that's part of your Windows operating system to connect to your virtual network (IKEv2 and SSTP). This configuration doesn't require additional client software.
+
+### View configuration files
-The following script will install the client certificate required for authentication against the virtual network gateway, and then download and install the VPN package. Remember to replace `<computer1>` and `<computer2>` with the desired computers. You can run this script on as many machines as you desire by adding more PowerShell sessions to the `$sessions` array. Your user account must be an administrator on each of these machines. If one of these machines is the local machine you're running the script from, you must run the script from an elevated PowerShell session.
+On the client computer, navigate to `C:\vpn-temp` and open the **vpnclientconfiguration** folder to view the following subfolders:
+
+* **WindowsAmd64** and **WindowsX86**, which contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just AMD.
+* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
+
+### Configure VPN client profile
+
+You can use the same VPN client configuration package on each Windows client computer, as long as the version matches the architecture for the client.
+
+>[!NOTE]
+>You must have Administrator rights on the Windows client computer from which you want to connect in order to run the installer package.
+
+1. Select the VPN client configuration files that correspond to the architecture of the Windows computer. For a 64-bit processor architecture, choose the `VpnClientSetupAmd64` installer package. For a 32-bit processor architecture, choose the `VpnClientSetupX86` installer package.
+
+1. Double-click the package to install it. If you see a SmartScreen popup, select **More info**, then **Run anyway**.
+
+1. Connect to your VPN. Go to **VPN Settings** and locate the VPN connection that you created. It's the same name as your virtual network. Select **Connect**. A pop-up message might appear. Select **Continue** to use elevated privileges.
+
+1. On the **Connection status** page, select **Connect** to start the connection. If you see a **Select Certificate** screen, verify that the client certificate showing is the one that you want to use to connect. If it isn't, use the drop-down arrow to select the correct certificate, and then select **OK**.
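+
+To optionally confirm from PowerShell that the connection profile was created and check its status, a minimal sketch (the connection is typically named after your virtual network):
+
+```powershell
+# Sketch: list VPN connection profiles for the current user and show their status.
+Get-VpnConnection | Select-Object Name, ServerAddress, TunnelType, ConnectionStatus
+```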
+
+# [PowerShell](#tab/azure-powershell)
+
+The following PowerShell script will install the client certificate required for authentication against the virtual network gateway, and then download and install the VPN package. Remember to replace `<computer1>` and `<computer2>` with the desired computers. You can run this script on as many machines as you desire by adding more PowerShell sessions to the `$sessions` array. Your user account must be an administrator on each of these machines. If one of these machines is the local machine you're running the script from, you must run the script from an elevated PowerShell session.
```PowerShell $sessions = [System.Management.Automation.Runspaces.PSSession[]]@()
foreach ($session in $sessions) {
Remove-Item -Path $vpnTemp -Recurse ```+ ## Mount Azure file share
-Now that you've set up your point-to-Site VPN, you can use it to mount the Azure file share to an on-premises machine. The following example will mount the share, list the root directory of the share to prove the share is actually mounted, and then unmount the share.
+Now that you've set up your point-to-site VPN, you can use it to mount the Azure file share to an on-premises machine.
+
+# [Portal](#tab/azure-portal)
+
+To mount the file share using your storage account key, open a Windows command prompt and run the following command. Replace `<YourStorageAccountName>`, `<FileShareName>`, and `<YourStorageAccountKey>` with your own values. If Z: is already in use, replace it with an available drive letter. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**.
+
+```
+net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:localhost\<YourStorageAccountName> <YourStorageAccountKey>
+```
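+
+If the mount fails, you can optionally verify that SMB (port 445) is reachable through the VPN tunnel before troubleshooting further. A minimal sketch, with the storage account name as a placeholder:
+
+```powershell
+# Sketch: check that port 445 is reachable over the VPN tunnel.
+# Replace <YourStorageAccountName> with your storage account name.
+Test-NetConnection -ComputerName "<YourStorageAccountName>.file.core.windows.net" -Port 445
+```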
+
+# [PowerShell](#tab/azure-powershell)
+
+The following PowerShell script will mount the share, list the root directory of the share to prove the share is actually mounted, and then unmount the share.
> [!NOTE] > It isn't possible to mount the share persistently over PowerShell remoting. To mount persistently, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
Invoke-Command `
Remove-PSDrive -Name Z } ```+ ## Rotate VPN Root Certificate
Add-AzVpnClientRootCertificate `
## See also
+- [Configure server settings for P2S VPN Gateway connections](../../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)
- [Networking considerations for direct Azure file share access](storage-files-networking-overview.md) - [Configure a point-to-site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md) - [Configure a site-to-site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md)
virtual-network-manager Create Virtual Network Manager Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-terraform.md
Last updated 6/7/2023 content_well_notification: - AI-contribution
+zone_pivot_groups: azure-virtual-network-manager-quickstart-options
+ # Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Terraform Get started with Azure Virtual Network Manager by using Terraform to provision connectivity for all your virtual networks.
-In this quickstart, you deploy three virtual networks and use Azure Virtual Network Manager to create a mesh network topology. Then, you verify that the connectivity configuration was applied.
+In this quickstart, you deploy three virtual networks and use Azure Virtual Network Manager to create a mesh network topology. Then, you verify that the connectivity configuration was applied. You can choose from a deployment with a Subscription scope or a management group scope. Learn more about [network manager scopes](concept-network-manager-scope.md).
[!INCLUDE [virtual-network-manager-preview](../../includes/virtual-network-manager-preview.md)]
In this article, you learn how to:
- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure) - To modify dynamic network groups, you must be [granted access via Azure RBAC role](concept-network-groups.md#network-groups-and-azure-policy) assignment only. Classic Admin/legacy authorization is not supported + ## Implement the Terraform code
+This code sample will implement Azure Virtual Network Manager at the subscription scope.
+ > [!NOTE] > The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-manager-create-mesh). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-manager-create-mesh/TestRecord.md). >
In this article, you learn how to:
1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
-1. Create a file named `providers.tf` and insert the following code:
+2. Create a file named `providers.tf` and insert the following code:
[!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-mesh/providers.tf)]
-1. Create a file named `main.tf` and insert the following code:
+3. Create a file named `main.tf` and insert the following code:
[!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-mesh/main.tf)]
-1. Create a file named `variables.tf` and insert the following code:
+4. Create a file named `variables.tf` and insert the following code:
[!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-mesh/variables.tf)]
-1. Create a file named `outputs.tf` and insert the following code:
+5. Create a file named `outputs.tf` and insert the following code:
[!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-mesh/outputs.tf)] +++
+## Implement the Terraform code
+
+This code sample will implement Azure Virtual Network Manager at the management group scope.
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-manager-create-management-group-scope). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/blob/master/quickstart/101-virtual-network-manager-create-management-group-scope/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-management-group-scope/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-management-group-scope/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-management-group-scope/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-virtual-network-manager-create-management-group-scope/outputs.tf)]
++ ## Initialize Terraform [!INCLUDE [terraform-init.md](~/azure-dev-docs-pr/articles/terraform/includes/terraform-init.md)]
In this article, you learn how to:
--resource-group $resource_group_name \ --vnet-name <virtual_network_name> ```
-
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Run [Get-AzResourceGroup](/powershell/module/az.resources/Get-AzResourceGroup) to display the resource group.
+
+ ```azurepowershell
+ Get-AzResourceGroup -Name $resource_group_name
+ ```
+
+1. For each virtual network name printed in the previous step, run [Get-AzNetworkManagerEffectiveConnectivityConfiguration](/powershell/module/az.network/get-aznetworkmanagereffectiveconnectivityconfiguration) to print the effective (applied) configurations. Replace the `<virtual_network_name>` placeholder with the vnet name.
+
+    ```azurepowershell
+    Get-AzNetworkManagerEffectiveConnectivityConfiguration `
+        -VirtualNetworkName "<virtual_network_name>" `
+        -VirtualNetworkResourceGroupName $resource_group_name
+    ```
## Clean up resources
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing.md
Consider the following when configuring Virtual WAN routing:
* You may specify multiple next hop IP addresses on a single Virtual Network connection. However, Virtual Network Connection doesn't support ΓÇÿmultiple/uniqueΓÇÖ next hop IP to the ΓÇÿsameΓÇÖ network virtual appliance in a SPOKE Virtual Network 'if' one of the routes with next hop IP is indicated to be public IP address or 0.0.0.0/0 (internet) * All information pertaining to 0.0.0.0/0 route is confined to a local hub's route table. This route doesn't propagate across hubs. * You can only use Virtual WAN to program routes in a spoke if the prefix is shorter (less specific) than the virtual network prefix. For example, in the diagram above the spoke VNET1 has the prefix 10.1.0.0/16: in this case, Virtual WAN wouldn't be able to inject a route that matches the virtual network prefix (10.1.0.0/16) or any of the subnets (10.1.0.0/24, 10.1.1.0/24). In other words, Virtual WAN can't attract traffic between two subnets that are in the same virtual network.
+* While it's true that two hubs on the same virtual WAN announce routes to each other (as long as propagation is enabled to the same labels), this only applies to dynamic routing. Once you define a static route, this is no longer the case.
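+
+For context, a hedged sketch of what defining such a static route looks like with Azure PowerShell; the route table, destination prefix, and next-hop resource ID are illustrative and not tied to the diagram above.
+
+```powershell
+# Sketch: define a static route in a virtual hub's default route table.
+# The destination prefix, next-hop resource ID, resource group, and hub name are placeholders.
+$staticRoute = New-AzVHubRoute `
+    -Name "private-traffic" `
+    -Destination @("10.10.0.0/16") `
+    -DestinationType "CIDR" `
+    -NextHop "<next-hop-resource-id>" `
+    -NextHopType "ResourceId"
+Update-AzVHubRouteTable `
+    -ResourceGroupName "<resource-group-name>" `
+    -VirtualHubName "<hub-name>" `
+    -Name "defaultRouteTable" `
+    -Route @($staticRoute)
+```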
## Next steps