Updates from: 01/29/2024 02:08:19
Service Microsoft Docs article Related commit history on GitHub Change details
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
AKS follows a strict supportability versioning window. With properly selected au
You can specify cluster auto-upgrade specifics using the following guidance. The upgrades occur based on your specified cadence, and enabling them is recommended so that your cluster remains on supported Kubernetes versions.
-AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default. Stopped nodepools will be upgraded during an auto-upgrade operation. The upgrade will apply to nodes when the node pool is started. To minimize disruptions, set up [maintenance windows][planned-maintenance].
+AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an n-3 version (where n is the latest supported AKS GA minor version) is about to drop to n-4, AKS automatically upgrades the cluster to n-2 to remain in an AKS support [policy][supported-kubernetes-versions]. Automatically upgrading a platform supported cluster to a supported version is enabled by default. Stopped node pools will be upgraded during an auto-upgrade operation. The upgrade will apply to nodes when the node pool is started. To minimize disruptions, set up [maintenance windows][planned-maintenance].
## Cluster auto-upgrade limitations If you're using cluster auto-upgrade, you can no longer upgrade the control plane first, and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. You can't upgrade the control plane only. Running the `az aks upgrade --control-plane-only` command raises the following error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
-If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] is disabled by default.
+If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.
## Use cluster auto-upgrade
The following upgrade channels are available:
| `patch`| automatically upgrades the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster runs version *1.17.7*, and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.17.9*.| | `stable`| automatically upgrades the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.18.6*.| | `rapid`| automatically upgrades the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster's Kubernetes version is an *N-2* minor version, where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster first upgrades to *1.18.6*, then upgrades to *1.19.1*.|
-| `node-image`| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default. Node image upgrades will work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.|
+| `node-image`| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default. Node image upgrades work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.|
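For example, a minimal sketch of setting one of these channels on an existing cluster uses the [`az aks update`][az-aks-update] command with the `--auto-upgrade-channel` parameter; the cluster and resource group names below are placeholders:

```azurecli-interactive
# Set the cluster auto-upgrade channel to "stable" (placeholder resource names).
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
```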
> [!NOTE] >
Use the following best practices to help maximize your success when using auto-u
* Follow [PDB best practices][pdb-best-practices]. * For upgrade troubleshooting information, see the [AKS troubleshooting documentation][aks-troubleshoot-docs].
+For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
+ <!-- INTERNAL LINKS --> [supported-kubernetes-versions]: ./supported-kubernetes-versions.md [upgrade-aks-cluster]: ./upgrade-cluster.md
Use the following best practices to help maximize your success when using auto-u
[az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-update]: /cli/azure/aks#az_aks_update [aks-troubleshoot-docs]: /support/azure/azure-kubernetes/welcome-azure-kubernetes
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
<!-- EXTERNAL LINKS --> [pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
aks Auto Upgrade Node Os Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-os-image.md
The following upgrade channels are available. You're allowed to choose one of th
|Channel|Description|OS-specific behavior| ||| | `None`| Your nodes don't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A|
-| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially. The OS's infrastructure patches them at some point.|Ubuntu and Azure Linux (CPU node pools) apply security patches through unattended upgrade/dnf-automatic roughly once per day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`.|
+| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially. The OS's infrastructure patches them at some point.|Ubuntu and Azure Linux (CPU node pools) apply security patches through unattended upgrade/dnf-automatic roughly once per day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`. You'll need to manage the reboot process by using a tool like [kured][kured].|
| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There might be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs. `SecurityPatch` works on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.| | `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. Node image upgrades support patch versions that are deprecated, so long as the minor Kubernetes version is still supported.|
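Similarly, a sketch of selecting the node OS upgrade channel on an existing cluster (placeholder resource names) might look like this:

```azurecli-interactive
# Switch the node OS upgrade channel to "NodeImage" (placeholder resource names).
az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel NodeImage
```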
The default cadence means there's no planned maintenance window applied.
| `SecurityPatch`|AKS-tested, fully managed, and applied with safe deployment practices. For more information, refer to [Increased security and resiliency of Canonical workloads on Azure][Blog].|Weekly.| | `NodeImage`|AKS|Weekly.|
+> [!NOTE]
+> While Windows security updates are released on a monthly basis, using the `Unmanaged` channel will not automatically apply these updates to Windows nodes. If you choose the `Unmanaged` channel, you need to manage the reboot process by using a tool like [kured][kured] in order to properly apply security patches.
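As a hedged sketch, installing kured with its Helm chart might look like the following; the chart repository matches the [kured][kured] project, while the release name and namespace are assumptions:

```bash
# Add the kubereboot chart repository and install kured into kube-system (release name is arbitrary).
helm repo add kubereboot https://kubereboot.github.io/charts
helm repo update
helm install kured kubereboot/kured --namespace kube-system
```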
+ ## SecurityPatch channel requirements To use the `SecurityPatch` channel, your cluster must support these requirements. - Must be using API version `11-02-preview` or later -- If using Azure CLI, the `aks-preview` CLI extension version `0.5.127` or later must be installed
+- If using Azure CLI, the `aks-preview` CLI extension version `0.5.166` or later must be installed
- The `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription
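A sketch of registering the preview feature flag with the Azure CLI follows; it assumes you have permission to register features on the subscription:

```azurecli-interactive
# Install (or update) the aks-preview extension and register the preview feature flag.
az extension add --name aks-preview
az feature register --namespace "Microsoft.ContainerService" --name "NodeOsUpgradeChannelPreview"

# Wait until the state shows "Registered", then refresh the resource provider registration.
az feature show --namespace "Microsoft.ContainerService" --name "NodeOsUpgradeChannelPreview" --query properties.state
az provider register --namespace Microsoft.ContainerService
```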
On the `Unmanaged` channel, AKS has no control over how and when the security up
kubectl get nodes --show-labels ```
-Among the labels in the output, you'll see a line similar to the following:
+Among the returned labels, you should see a line similar to the following output:
```output kubernetes.azure.com/node-image-version=AKSUbuntu-2204gen2containerd-202311.07.0 ```
-Here, the base node image version is `AKSUbuntu-2204gen2containerd`. If applicable, the security patch version typically follows. In the above example it is `202311.07.0`.
+Here, the base node image version is `AKSUbuntu-2204gen2containerd`. If applicable, the security patch version typically follows. In the above example, it's `202311.07.0`.
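If you prefer the Azure CLI over node labels, the same information can be read from the node pool's `nodeImageVersion` property; the resource names in this sketch are placeholders:

```azurecli-interactive
# Query the current node image version of a node pool (placeholder resource names).
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --query nodeImageVersion --output tsv
```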
+
+The same details can also be looked up in the Azure portal under the node label view:
+
-The same details also be looked up in the Azure portal under the node label view as illustrated below.
+## Next steps
+For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
<!-- LINKS -->
The same details also be looked up in the Azure portal under the node label view
[monitor-aks]: ./monitor-aks-reference.md [aks-eventgrid]: ./quickstart-event-grid.md [aks-upgrade]: ./upgrade-cluster.md
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
<!-- LINKS - external --> [Blog]: https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/increased-security-and-resiliency-of-canonical-workloads-on/ba-p/3970623
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
Last updated 03/28/2023
# Upgrade Azure Kubernetes Service (AKS) node images
-Azure Kubernetes Service (AKS) regularly provides new node images, so it's beneficial to upgrade your node images frequently to use the latest AKS features. Linux node images are updated weekly, and Windows node images are updated monthly. Image upgrade announcements are included in the [AKS release notes](https://github.com/Azure/AKS/releases), and it can take up to a week for these updates to be rolled out across all regions. Node image upgrades can also be performed automatically and scheduled using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image].
+Azure Kubernetes Service (AKS) regularly provides new node images, so it's beneficial to upgrade your node images frequently to use the latest AKS features. Linux node images are updated weekly, and Windows node images are updated monthly. Image upgrade announcements are included in the [AKS release notes](https://github.com/Azure/AKS/releases), and it can take up to a week for these updates to be rolled out across all regions. Node image upgrades can also be performed automatically and scheduled using planned maintenance. For more information, see [Automatically upgrade node images][auto-upgrade-node-image].
This article shows you how to upgrade AKS cluster node images and how to update node pool images without upgrading the Kubernetes version. For information on upgrading the Kubernetes version for your cluster, see [Upgrade an AKS cluster][upgrade-cluster].
az aks nodepool get-upgrades \
--resource-group myResourceGroup ```
-The output will show the `latestNodeImageVersion`, like in the following example:
+The output shows the `latestNodeImageVersion`, like in the following example:
```output {
az aks upgrade \
You can check the status of the node images using the `kubectl get nodes` command. >[!NOTE]
-> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
+> This command may differ slightly depending on the shell you use. For more information on Windows and PowerShell environments, see the [Kubernetes JSONPath documentation][kubernetes-json-path].
```bash kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
az aks nodepool upgrade \
You can check the status of the node images with the `kubectl get nodes` command. >[!NOTE]
-> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
+> This command may differ slightly depending on the shell you use. For more information on Windows and PowerShell environments, see the [Kubernetes JSONPath documentation][kubernetes-json-path].
```bash kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
az aks nodepool show \
## Upgrade node images with node surge
-To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value. By default, AKS uses one additional node to configure upgrades.
+To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value. By default, AKS uses one extra node to configure upgrades.
If you'd like to increase the speed of upgrades, use the [`az aks nodepool update`][az-aks-nodepool-update] command with the `--max-surge` flag to configure the number of nodes used for upgrades. To learn more about the trade-offs of various `--max-surge` settings, see [Customize node surge upgrade][max-surge].
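For example, a sketch of raising the surge value on an existing node pool (placeholder resource names) might be:

```azurecli-interactive
# Allow up to a third of the node pool to be surged during upgrades (placeholder resource names).
az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name mynodepool --max-surge 33%
```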
az aks nodepool show \
- Learn how to upgrade the Kubernetes version with [Upgrade an AKS cluster][upgrade-cluster]. - [Automatically apply cluster and node pool upgrades with GitHub Actions][github-schedule]. - Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools].
+- For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
<!-- LINKS - external --> [kubernetes-json-path]: https://kubernetes.io/docs/reference/kubectl/jsonpath/
az aks nodepool show \
[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update [az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade [az-aks-show]: /cli/azure/aks#az_aks_show
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
+
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
If updates were applied that require a node reboot, a file is written to */var/r
## Monitor and review reboot process
-When one of the replicas in the DaemonSet has detected that a node reboot is required, a lock is placed on the node through the Kubernetes API. This lock prevents more pods from being scheduled on the node. The lock also indicates that only one node should be rebooted at a time. With the node cordoned off, running pods are drained from the node, and the node is rebooted.
+When one of the replicas in the DaemonSet detects that a node reboot is required, a lock is placed on the node through the Kubernetes API. This lock prevents more pods from being scheduled on the node. The lock also indicates that only one node should be rebooted at a time. With the node cordoned off, running pods are drained from the node, and the node is rebooted.
You can monitor the status of the nodes using the [kubectl get nodes][kubectl-get-nodes] command. The following example output shows a node with a status of *SchedulingDisabled* as the node prepares for the reboot process:
This article detailed how to use `kured` to reboot Linux nodes automatically as
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade].
+For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
+ <!-- LINKS - external --> [kured]: https://github.com/kubereboot/kured [kured-install]: https://github.com/kubereboot/charts/tree/main/charts/kured
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS]
[aks-upgrade]: upgrade-cluster.md [nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool [node-image-upgrade]: node-image-upgrade.md
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
For more information about AKS upgrades, see the following articles and resource
* [AKS release notes](https://github.com/Azure/AKS/releases) * [Upgrade an AKS cluster][cluster-upgrades-article]
+For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
+ <!-- LINKS - external --> [github]: https://github.com [profile-repository]: https://docs.github.com/en/free-pro-team@latest/github/setting-up-and-managing-your-github-profile/about-your-profile
For more information about AKS upgrades, see the following articles and resource
[azure-built-in-roles]: ../role-based-access-control/built-in-roles.md [azure-rbac-scope-levels]: ../role-based-access-control/scope-overview.md#scope-format [az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az-ad-sp-create-for-rbac
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
description: Learn how to use Planned Maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS). Previously updated : 01/17/2023 Last updated : 01/26/2024
Create a `default.json` file with the following contents:
} ```
-The above JSON file specifies maintenance windows every Tuesday at 1:00am - 3:00am and every Wednesday at 1:00am - 2:00am and at 6:00am - 7:00am in the `UTC` timezone. There's also an exception from *2021-05-26T03:00:00Z* to *2021-05-30T12:00:00Z* where maintenance isn't allowed even if it overlaps with a maintenance window.
+The above JSON file specifies maintenance windows every Tuesday at 1:00am - 3:00am and every Wednesday at 1:00am - 2:00am and at 6:00am - 7:00am in the `UTC` timezone. There's also an exception from `2021-05-26T03:00:00Z` to `2021-05-30T12:00:00Z` where maintenance isn't allowed even if it overlaps with a maintenance window.
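As a sketch, a `default.json` matching that description might look like the following, assuming the `timeInWeek`/`notAllowedTime` schema in which each hour slot marks the start of a one-hour window:

```json
{
  "timeInWeek": [
    {
      "day": "Tuesday",
      "hourSlots": [1, 2]
    },
    {
      "day": "Wednesday",
      "hourSlots": [1, 6]
    }
  ],
  "notAllowedTime": [
    {
      "start": "2021-05-26T03:00:00Z",
      "end": "2021-05-30T12:00:00Z"
    }
  ]
}
```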
Create an `autoUpgradeWindow.json` file with the following contents:
Create an `autoUpgradeWindow.json` file with the following contents:
} ```
-The above JSON file specifies maintenance windows every three months on the first of the month between 9:00 AM - 1:00 PM in the `UTC-08` timezone. There's also an exception from *2023-12-23* to *2024-01-05* where maintenance isn't allowed even if it overlaps with a maintenance window.
+The above JSON file specifies maintenance windows every three months on the first of the month between 9:00 AM - 1:00 PM in the `UTC-08` timezone. There's also an exception from `2023-12-23` to `2024-01-05` where maintenance isn't allowed even if it overlaps with a maintenance window.
The following command adds the maintenance windows from `default.json` and `autoUpgradeWindow.json`:
To delete a certain maintenance configuration window in your AKS Cluster, use th
```azurecli-interactive az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCluster --name autoUpgradeSchedule ```+ ## Frequently Asked Questions
-* How can I check the existing maintenance configurations in my cluster?
+- How can I check the existing maintenance configurations in my cluster?
Use the `az aks maintenanceconfiguration show` command.
-* Can reactive, unplanned maintenance happen during the `notAllowedTime` or `notAllowedDates` periods too?
+- Can reactive, unplanned maintenance happen during the `notAllowedTime` or `notAllowedDates` periods too?
Yes, AKS reserves the right to break these windows for unplanned, reactive maintenance operations that are urgent or critical.
-* How can you tell if a maintenance event occurred?
+- How can you tell if a maintenance event occurred?
 For releases, check your cluster's region and look up release information in [weekly releases][release-tracker] to validate whether it matches your maintenance schedule. To view the status of your auto upgrades, look up [activity logs][monitor-aks] on your cluster. You can also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid].
-* Can you use more than one maintenance configuration at the same time?
-
+- Can you use more than one maintenance configuration at the same time?
+ Yes, you can run all three configurations, that is, `default`, `aksManagedAutoUpgradeSchedule`, and `aksManagedNodeOSUpgradeSchedule`, simultaneously. If the windows overlap, AKS decides the running order.
-* I configured a maintenance window, but upgrade didn't happen - why?
+- I configured a maintenance window, but upgrade didn't happen - why?
- AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 24 hours between the creation/update of the maintenance configuration, and when it's scheduled to start.
+ AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 24 hours between the creation or update of a maintenance configuration and the scheduled start time.
- Also, please ensure your cluster is started when the planned maintenance window is starting. If the cluster is stopped, then its control plane is deallocated and no operations can be performed.
+ Also, ensure your cluster is started when the planned maintenance window is starting. If the cluster is stopped, then its control plane is deallocated and no operations can be performed.
-* AKS auto-upgrade didn't upgrade all my agent pools - or one of the pools was upgraded outside of the maintenance window?
+- AKS auto-upgrade didn't upgrade all my agent pools - or one of the pools was upgraded outside of the maintenance window?
- If an agent pool fails to upgrade (eg. because of Pod Disruption Budgets preventing it to upgrade) or is in a Failed state, then it might be upgraded later outside of the maintenance window. This scenario is called "catch-up upgrade" and avoids letting Agent pools with a different version than the AKS control plane.
+ If an agent pool fails to upgrade (for example, because of Pod Disruption Budgets preventing it from upgrading) or is in a Failed state, then it might be upgraded later outside of the maintenance window. This scenario is called "catch-up upgrade" and avoids leaving agent pools on a different version than the AKS control plane.
-* Are there any best practices for the maintenance configurations?
-
- We recommend setting the [Node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you're using `NodeImage` channel since a new node image gets shipped every week and daily if you opt in for `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay on top of the kubernetes N-2 [support policy][aks-support-policy].
+- Are there any best practices for the maintenance configurations?
- 
+ We recommend setting the [Node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you're using the `NodeImage` channel, since a new node image ships every week, and to a daily cadence if you opt in to the `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay within the Kubernetes N-2 [support policy][aks-support-policy]. For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
## Next steps
az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
[monitor-aks]: monitor-aks-reference.md [aks-eventgrid]:quickstart-event-grid.md [aks-support-policy]:support-policies.md
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
aks Upgrade Aks Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-aks-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster
description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates. Previously updated : 10/19/2023 Last updated : 01/26/2024 # Upgrade an Azure Kubernetes Service (AKS) cluster
If your Azure CLI is updated and you receive the following example output, it me
ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info. ```
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows that no upgrades are available.
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. AKS does not support upgrading a cluster to a newer Kubernetes version when `az aks get-upgrades` shows that no upgrades are available.
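To see which versions, if any, are offered before planning an upgrade, a quick check along these lines (placeholder resource names) can help:

```azurecli-interactive
# List the Kubernetes versions available for this cluster (placeholder resource names).
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
```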
### [Azure PowerShell](#tab/azure-powershell)
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. AKS does not support upgrading a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
### [Azure portal](#tab/azure-portal)
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when no upgrades are available.
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. AKS does not support upgrading a cluster to a newer Kubernetes version when no upgrades are available.
During the cluster upgrade process, AKS performs the following operations:
* For long running pods, you can configure the node drain timeout, which allows for custom wait time on the eviction of pods and graceful termination per node. If not specified, the default is 30 minutes. * When the old node is fully drained, it's reimaged to receive the new version and becomes the buffer node for the following node to be upgraded. * Optionally, you can set a duration of time to wait between draining a node and proceeding to reimage it and move on to the next node. A short interval allows you to complete other tasks, such as checking application health from a Grafana dashboard during the upgrade process. We recommend a short timeframe for the upgrade process, as close to 0 minutes as reasonably possible. Otherwise, a higher node soak time (preview) affects how long before you discover an issue. The minimum soak time value is 0 minutes, with a maximum of 30 minutes. If not specified, the default value is 0 minutes.
-* This process repeats until all nodes in the cluster have been upgraded.
+* This process repeats until all nodes in the cluster are upgraded.
* At the end of the process, the last buffer node is deleted, maintaining the existing agent node count and zone balance. [!INCLUDE [alias minor version callout](./includes/aliasminorversion/alias-minor-version-upgrade.md)]
AKS accepts both integer values and a percentage value for max surge. An integer
#### Set node drain timeout value
-At times, you may have a long running workload on a certain pod and it cannot be rescheduled to another node during runtime, for example, a memory intensive stateful workload that must finish running. In these cases, you can configure a node drain timeout that AKS will respect in the upgrade workflow. If no node drain timeout value is specified, the default is 30 minutes. If the drain time out value elapses and pods have not yet finished running , then the upgrade operation is stopped. Any subsequent PUT operation shall resume the stopped upgrade.
-
+At times, you may have a long running workload on a certain pod that can't be rescheduled to another node during runtime, for example, a memory intensive stateful workload that must finish running. In these cases, you can configure a node drain timeout that AKS respects in the upgrade workflow. If no node drain timeout value is specified, the default is 30 minutes. If the drain timeout value elapses and pods are still running, the upgrade operation is stopped. Any subsequent PUT operation resumes the stopped upgrade.
* Set node drain timeout for new or existing node pools using the [`az aks nodepool add`][az-aks-nodepool-add] or [`az aks nodepool update`][az-aks-nodepool-update] command.
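For instance, a sketch of setting a 100-minute drain timeout on an existing node pool (placeholder resource names; the value is in minutes) might be:

```azurecli-interactive
# Wait up to 100 minutes for pod eviction and graceful termination per node (placeholder resource names).
az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name mynodepool --drain-timeout 100
```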
To allow for a duration of time to wait between draining a node and proceeding t
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-> [!NOTE]
+> [!NOTE]
> To use node soak duration (preview), you must have the aks-preview Azure CLI extension version 0.5.173 or later installed. * Enable the aks-preview Azure CLI.
To allow for a duration of time to wait between draining a node and proceeding t
az aks nodepool upgrade -n MyNodePool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 33% --node-soak-duration 20 ``` - ## View upgrade events * View upgrade events using the `kubectl get events` command.
To allow for a duration of time to wait between draining a node and proceeding t
## Next steps
-To learn how to configure automatic upgrades, see [Configure automatic upgrades for an AKS cluster][configure-automatic-aks-upgrades].
+To learn how to configure automatic upgrades, see [Configure automatic upgrades for an AKS cluster][configure-automatic-aks-upgrades].
+
+For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
<!-- LINKS - internal --> [azure-cli-install]: /cli/azure/install-azure-cli
To learn how to configure automatic upgrades, see [Configure automatic upgrades
[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade [configure-automatic-aks-upgrades]: ./upgrade-cluster.md#configure-automatic-upgrades [release-tracker]: release-tracker.md
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
<!-- LINKS - external --> [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade options for Azure Kubernetes Service (AKS) clusters
description: Learn the different ways to upgrade an Azure Kubernetes Service (AKS) cluster. Previously updated : 10/19/2023 Last updated : 01/26/2024 # Upgrade options for Azure Kubernetes Service (AKS) clusters
Persistent volume claims (PVCs) backed by Azure locally redundant storage (LRS)
The combination of [Planned Maintenance Window][planned-maintenance], [Max Surge](./upgrade-aks-cluster.md#customize-node-surge-upgrade), [Pod Disruption Budget][pdb-spec], [node drain timeout][drain-timeout], and [node soak time][soak-time] (preview) can significantly increase the likelihood of node upgrades completing successfully by the end of the maintenance window while also minimizing disruptions.
-* [Planned Maintenance Window][planned-maintenance] enables service teams to schedule auto-upgrade during a pre-defined window, typically a low-traffic period, to minimize workload impact. We recommend a window duration of at least *four hours*.
+* [Planned Maintenance Window][planned-maintenance] enables service teams to schedule auto-upgrade during a predefined window, typically a low-traffic period, to minimize workload impact. We recommend a window duration of at least *four hours*.
* [Max Surge](./upgrade-aks-cluster.md#customize-node-surge-upgrade) on the node pool allows requesting extra quota during the upgrade process and limits the number of nodes selected for upgrade simultaneously. A higher max surge results in a faster upgrade process. We don't recommend setting it at 100%, as it upgrades all nodes simultaneously, which can cause disruptions to running applications. We recommend a max surge quota of *33%* for production node pools. * [Pod Disruption Budget][pdb-spec] is set for service applications and limits the number of pods that can be down during voluntary disruptions, such as AKS-controlled node upgrades. It can be configured as `minAvailable` replicas, indicating the minimum number of application pods that need to be active, or `maxUnavailable` replicas, indicating the maximum number of application pods that can be terminated, ensuring high availability for the application. Refer to the guidance provided for configuring [Pod Disruption Budgets (PDBs)][pdb-concepts]. PDB values should be validated to determine the settings that work best for your specific service. * [Node drain timeout][drain-timeout] on the node pool allows you to configure the wait duration for eviction of pods and graceful termination per node during an upgrade. This option is useful when dealing with long running workloads. When the node drain timeout is specified (in minutes), AKS respects waiting on pod disruption budgets. If not specified, the default timeout is 30 minutes.
The combination of [Planned Maintenance Window][planned-maintenance], [Max Surge
> [!NOTE] > To use node soak duration (preview), you must have the aks-preview Azure CLI extension version 0.5.173 or later installed. - ## Next steps
-This article listed different upgrade options for AKS clusters. To learn more about deploying and managing AKS clusters, see the following tutorial:
-
-> [!div class="nextstepaction"]
-> [AKS tutorials][aks-tutorial-prepare-app]
+This article listed different upgrade options for AKS clusters. For a detailed discussion of upgrade best practices and other considerations, see [AKS patch and upgrade guidance][upgrade-operators-guide].
<!-- LINKS - external --> [pdb-spec]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
This article listed different upgrade options for AKS clusters. To learn more ab
[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool [planned-maintenance]: planned-maintenance.md [specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool
+[upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
description: Learn about the various upgradeable components of an Azure Kubernet
Previously updated : 11/21/2023 Last updated : 01/26/2024 # Upgrading Azure Kubernetes Service clusters and node pools
-An Azure Kubernetes Service (AKS) cluster will periodically need to be updated to ensure security and compatibility with the latest features. There are two components of an AKS cluster that are necessary to maintain:
+An Azure Kubernetes Service (AKS) cluster needs to be periodically updated to ensure security and compatibility with the latest features. There are two components of an AKS cluster that are necessary to maintain:
- *Cluster Kubernetes version*: Part of the AKS cluster lifecycle involves performing upgrades to the latest Kubernetes version. It's important that you upgrade to apply the latest security releases and to get access to the latest Kubernetes features, as well as to stay within the [AKS support window][supported-k8s-versions]. - *Node image version*: AKS regularly provides new node images with the latest OS and runtime updates. It's beneficial to upgrade your nodes' images regularly to ensure support for the latest AKS features and to apply essential security patches and hot fixes.
The following table summarizes the details of updating each component:
|Component name|Frequency of upgrade|Planned Maintenance supported|Supported operation methods|Documentation link| |--|--|--|--|--|
-|Cluster Kubernetes version (minor) upgrade|Roughly every three months|Yes| Automatic, Manual|[Upgrade an AKS cluster][upgrade-cluster]|
+|Cluster Kubernetes version (minor) upgrade|Roughly every three months|Yes|Automatic, Manual|[Upgrade an AKS cluster][upgrade-cluster]|
|Cluster Kubernetes version upgrade to supported patch version|Approximately weekly. To determine the latest applicable version in your region, see the [AKS release tracker][release-tracker]|Yes|Automatic, Manual|[Upgrade an AKS cluster][upgrade-cluster]| |Node image version upgrade|**Linux**: weekly<br>**Windows**: monthly|Yes|Automatic, Manual|[AKS node image upgrade][node-image-upgrade]| |Security patches and hot fixes for node images|As-necessary|||[AKS node security patches][node-security-patches]|
Automatic upgrades can be performed through [auto upgrade channels][auto-upgrade
## Planned maintenance
- [Planned maintenance][planned-maintenance] allows you to schedule weekly maintenance windows that will update your control plane as well as your kube-system pods, helping to minimize workload impact.
+ [Planned maintenance][planned-maintenance] allows you to schedule weekly maintenance windows that will update your control plane and your kube-system pods, helping to minimize workload impact.
## Troubleshooting
To find details and solutions to specific issues, view the following troubleshoo
## Next steps
-For more information what cluster operations may trigger specific upgrade events, see the [AKS operator's guide on patching][operator-guide-patching].
+For more information about the cluster operations that may trigger specific upgrade events, upgrade best practices, and other considerations, see the [AKS operator's guide on patching][operator-guide-patching].
<!-- LINKS --> [auto-upgrade]: ./auto-upgrade-cluster.md
For more information what cluster operations may trigger specific upgrade events
[release-tracker]: ./release-tracker.md [node-image-upgrade]: ./node-image-upgrade.md [gh-actions-upgrade]: ./node-upgrade-github-actions.md
-[operator-guide-patching]: /azure/architecture/operator-guides/aks/aks-upgrade-practices#considerations
+[operator-guide-patching]: /azure/architecture/operator-guides/aks/aks-upgrade-practices
[supported-k8s-versions]: ./supported-kubernetes-versions.md#kubernetes-version-support-policy [ts-nsg]: /troubleshoot/azure/azure-kubernetes/upgrade-fails-because-of-nsg-rules [ts-pod-drain]: /troubleshoot/azure/azure-kubernetes/error-code-poddrainfailure
api-center Import Api Management Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/import-api-management-apis.md
+
+ Title: Import APIs from Azure API Management - Azure API Center
+description: Add APIs to your Azure API center inventory from your API Management instance.
+++ Last updated : 01/25/2024++
+# Customer intent: As an API program manager, I want to add APIs that are managed in my Azure API Management instance to my API center.
++
+# Import APIs to your API center from Azure API Management
+
+This article shows how to import (add) APIs from an Azure API Management instance to your [API center](overview.md) using the Azure CLI. Adding APIs from API Management to your API inventory helps make them discoverable and accessible to developers, API program managers, and other stakeholders in your organization.
+
+When you add an API from an API Management instance to your API center:
+
+* The API's [versions](key-concepts.md#api-version), [definitions](key-concepts.md#api-definition), and [deployment](key-concepts.md#deployment) information are copied to your API center.
+* The API receives a system-generated API name in your API center. It retains its display name (title) from API Management.
+* The **Lifecycle stage** of the API is set to *Design*.
+* Azure API Management is added as an [environment](key-concepts.md#environment).
+
+After adding an API from API Management, you can add metadata and documentation in your API center to help stakeholders discover, understand, and consume the API.
++
+## Prerequisites
+
+* An API center in your Azure subscription. If you haven't created one, see [Quickstart: Create your API center](set-up-api-center.md).
+
+* One or more instances of Azure API Management, in the same or a different subscription in your directory. If you haven't created one, see [Create an Azure API Management instance](../api-management/get-started-create-service-instance.md).
+
+* One or more APIs managed in your API Management instance that you want to add to your API center.
+
+* For Azure CLI:
+ [!INCLUDE [include](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+
+ > [!NOTE]
+ > `az apic` commands require the `apic-extension` Azure CLI extension. If you haven't used `az apic` commands, the extension is installed dynamically when you run your first `az apic` command. Learn more about [Azure CLI extensions](/cli/azure/azure-cli-extensions-overview).
+
+ > [!NOTE]
+ > Azure CLI command examples in this article can run in PowerShell or a bash shell. Where needed because of different variable syntax, separate command examples are provided for the two shells.
++
+## Add a managed identity in your API center
+
+For this scenario, your API center uses a [managed identity](/entra/identity/managed-identities-azure-resources/overview) to access APIs in your API Management instance. You can use either a system-assigned or user-assigned managed identity. If you haven't added a managed identity in your API center, you can add it in the Azure portal or by using the Azure CLI.
+
+### Add a system-assigned identity
+
+#### [Portal](#tab/portal)
+
+1. In the [portal](https://azure.microsoft.com), navigate to your API center.
+1. In the left menu, select **Managed identities**.
+1. Select **System assigned**, and set the status to **On**.
+1. Select **Save**.
+
+#### [Azure CLI](#tab/cli)
+
+Set the system-assigned identity in your API center using the following [az apic service update](/cli/azure/apic/service#az-apic-service-update) command. Substitute the names of your API center and resource group:
+
+```azurecli
+az apic service update --name <api-center-name> --resource-group <resource-group-name> --identity '{"type": "SystemAssigned"}'
+```
++
+### Add a user-assigned identity
+
+To add a user-assigned identity, you need to create a user-assigned identity resource, and then add it to your API center.
+
+#### [Portal](#tab/portal)
+
+1. Create a user-assigned identity according to [these instructions](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities#create-a-user-assigned-managed-identity).
+1. In the [portal](https://azure.microsoft.com), navigate to your API center.
+1. In the left menu, select **Managed identities**.
+1. Select **User assigned** > **+ Add**.
+1. Search for the identity you created earlier, select it, and select **Add**.
+
+#### [Azure CLI](#tab/cli)
+
+1. Create a user-assigned identity.
+
+ ```azurecli
+ az identity create --resource-group <resource-group-name> --name <identity-name>
+ ```
+
+ In the command output, note the value of the identity's `id` property. The `id` property should look something like this:
+
+ ```json
+ {
+ [...]
+ "id": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
+ [...]
+ }
+ ```
+
+1. Create a JSON file with the following content, substituting the value of the `id` property from the previous step.
+
+ ```json
+ {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<identity-id>": {}
+ }
+ }
+ ```
+
+1. Add the user-assigned identity to your API center using the following [az apic service update](/cli/azure/apic/service#az-apic-service-update) command. Substitute the names of your API center and resource group, and pass the JSON file as the value of the `--identity` parameter. Here, the JSON file is named `identity.json`.
+
+ ```azurecli
+ az apic service update --name <api-center-name> --resource-group <resource-group-name> --identity "@identity.json"
+ ```
++
+## Assign the managed identity the API Management Service Reader role
+
+To allow import of APIs, assign your API center's managed identity the **API Management Service Reader** role in your API Management instance. You can use the [portal](../role-based-access-control/role-assignments-portal-managed-identity.md) or the Azure CLI.
+
+#### [Portal](#tab/portal)
+
+1. In the [portal](https://azure.microsoft.com), navigate to your API Management instance.
+1. In the left menu, select **Access control (IAM)**.
+1. Select **+ Add role assignment**.
+1. On the **Add role assignment** page, set the values as follows:
+ 1. On the **Role** tab - Select **API Management Service Reader**.
+ 1. On the **Members** tab, in **Assign access to** - Select **Managed identity** > **+ Select members**.
+ 1. On the **Select managed identities** page - Select the system-assigned or user-assigned managed identity of your API center that you added in the previous section. Click **Select**.
+ 1. Select **Review + assign**.
+
+#### [Azure CLI](#tab/cli)
+
+1. Get the principal ID of the identity. If you're configuring a system-assigned identity, use the [az apic service show](/cli/azure/apic/service#az-apic-service-show) command. For a user-assigned identity, use [az identity show](/cli/azure/identity#az-identity-show).
+
+ **System-assigned identity**
+ ```azurecli
+ #! /bin/bash
+ apicObjID=$(az apic service show --name <api-center-name> \
+ --resource-group <resource-group-name> \
+ --query "identity.principalId" --output tsv)
+ ```
+
+ ```azurecli
+ # PowerShell syntax
+ $apicObjID=$(az apic service show --name <api-center-name> `
+ --resource-group <resource-group-name> `
+ --query "identity.principalId" --output tsv)
+ ```
+
+ **User-assigned identity**
+ ```azurecli
+ #! /bin/bash
+ apicObjID=$(az identity show --name <identity-name> --resource-group <resource-group-name> --query "principalId" --output tsv)
+ ```
+
+ ```azurecli
+ # PowerShell syntax
+ $apicObjID=$(az identity show --name <identity-name> --resource-group <resource-group-name> --query "principalId" --output tsv)
+ ```
+1. Get the resource ID of your API Management instance using the [az apim show](/cli/azure/apim#az-apim-show) command.
+
+ ```azurecli
+ #! /bin/bash
+ apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query "id" --output tsv)
+ ```
+
+ ```azurecli
+ # PowerShell syntax
+ $apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query "id" --output tsv)
+ ```
+
+1. Assign the managed identity the **API Management Service Reader** role in your API Management instance using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command.
+
+ ```azurecli
+ #! /bin/bash
+ scope="${apimID:1}"
+
+ az role assignment create \
+ --role "API Management Service Reader Role" \
+ --assignee-object-id $apicObjID \
+ --assignee-principal-type ServicePrincipal \
+ --scope $scope
+ ```
+
+ ```azurecli
+    # PowerShell syntax
+ $scope=$apimID.substring(1)
+
+ az role assignment create `
+ --role "API Management Service Reader Role" `
+ --assignee-object-id $apicObjID `
+ --assignee-principal-type ServicePrincipal `
+ --scope $scope
+    ```
++
+## Import APIs from your API Management instance
+
+Use the [az apic service import-from-apim](/cli/azure/apic/service#az-apic-service-import-from-apim) command to import one or more APIs from your API Management instance to your API center.
+
+> [!NOTE]
+> * This command depends on a managed identity configured in your API center that has read permissions to the API Management instance. If you haven't added or configured a managed identity, see [Add a managed identity in your API center](#add-a-managed-identity-in-your-api-center) earlier in this article.
+>
+> * If your API center has multiple managed identities, the command searches first for a system-assigned identity. If none is found, it picks the first user-assigned identity in the list.
+
+### Import all APIs from an API Management instance
+
+Use a wildcard (`*`) to specify all APIs from the API Management instance.
+
+1. Get the resource ID of your API Management instance using the [az apim show](/cli/azure/apim#az-apim-show) command.
+
+ ```azurecli
+ #! /bin/bash
+ apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
+ ```
+
+ ```azurecli
+ # PowerShell syntax
+ $apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
+ ```
+
+1. Use the `az apic service import-from-apim` command to import the APIs. Substitute the names of your API center and resource group, and use `*` to specify all APIs from the API Management instance.
+
+ ```azurecli
+
+ #! /bin/bash
+ apiIDs="$apimID/apis/*"
+
+ az apic service import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> --source-resource-ids $apiIDs
+ ```
+
+ ```azurecli
+ # PowerShell syntax
+ $apiIDs=$apimID + "/apis/*"
+
+ az apic service import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> --source-resource-ids $apiIDs
+ ```
+
+ > [!NOTE]
+ > If your API Management instance has a large number of APIs, import to your API center might take some time.
+
+### Import a specific API from an API Management instance
+
+Specify an API to import using its name from the API Management instance.
+
+1. Get the resource ID of your API Management instance using the [az apim show](/cli/azure/apim#az-apim-show) command.
+
+ ```azurecli
+ #! /bin/bash
+ apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
+ ```
+
+ ```azurecli
+ # PowerShell syntax
+ $apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
+ ```
+
+1. Use the `az apic service import-from-apim` command to import the API. Substitute the names of your API center and resource group, and specify an API name from the API Management instance.
+
+ ```azurecli
+ #! /bin/bash
+ apiIDs="$apimID/apis/<api-name>"
+
+ az apic service import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> --source-resource-ids $apiIDs
+ ```
+
+ ```azurecli
+ # PowerShell syntax
+ $apiIDs=$apimID + "/apis/<api-name>"
+
+ az apic service import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> --source-resource-ids $apiIDs
+ ```
+
+ > [!NOTE]
+ > Specify `<api-name>` using the API resource name in the API Management instance, not the display name. Example: `petstore-api` instead of `Petstore API`.
+
+After importing APIs from API Management, you can view and manage the imported APIs in your API center.
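To confirm what was imported, a listing sketch such as the following can help; the command follows the preview `apic-extension`, and parameter names may differ in later versions:

```azurecli
# List the APIs registered in the API center (placeholder resource names).
az apic api list --resource-group <resource-group-name> --service-name <api-center-name> --output table
```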
+
+## Related content
+
+* [Azure CLI reference for API Center](/cli/azure/apic)
+* [Manage API inventory with Azure CLI commands](manage-apis-azure-cli.md)
+* [Assign Azure roles to a managed identity](../role-based-access-control/role-assignments-portal-managed-identity.md)
+* [Azure API Management documentation](../api-management/index.yml)
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
To delete individual API versions and definitions, use [az apic api version dele
## Related content
-See the [Azure CLI reference for API Center](/cli/azure/apic) for a complete command list, including commands to manage [environments](/cli/azure/apic/environment), [deployments](/cli/azure/apic/api/deployment), [metadata schemas](/cli/azure/apic/metadata-schema), and [API Center services](/cli/azure/apic/service).
+* See the [Azure CLI reference for API Center](/cli/azure/apic) for a complete command list, including commands to manage [environments](/cli/azure/apic/environment), [deployments](/cli/azure/apic/api/deployment), [metadata schemas](/cli/azure/apic/metadata-schema), and [API Center services](/cli/azure/apic/service).
+* [Import APIs to your API center from API Management](import-api-management-apis.md)
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
The appliance VM hosts a management Kubernetes cluster with a control plane that
Control plane IP requirements:
- - Open communication with the management machine.
+- Open communication with the management machine.
- Static IP address assigned; the IP address should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network. - If using DHCP, the control plane IP should be a single reserved IP that is outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability. - If using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing Arc resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
Three configuration files are created when the `createconfig` command completes
By default, these files are generated in the current CLI directory when `createconfig` completes. These files should be saved in a secure location on the management machine, because they're required for maintaining the appliance VM. Because the configuration files reference each other, all three files must be stored in the same location. If the files are moved from their original location at deployment, open the files to check that the reference paths to the configuration files are accurate.
-By default, these files are generated in the current CLI directory when `createconfig` completes. These files should be saved in a secure location on the management machine, because they're required for maintaining the appliance VM. Because the configuration files reference each other, all three files must be stored in the same location. If the files are moved from their original location at deployment, open the files to check that the reference paths to the configuration files are accurate.
- ### Kubeconfig The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-privilege Kubernetes configuration file that is used to maintain the appliance VM. By default, it's generated in the current CLI directory when the `deploy` command completes. The kubeconfig should be saved in a secure location to the management machine, because it's required for maintaining the appliance VM.
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
# Upgrade Arc resource bridge
-This article describes how Arc resource bridge is upgraded, and the two ways upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades. For more information, see the [Private cloud providers](#private-cloud-providers) section.
+This article describes how Arc resource bridge is upgraded and the two ways an upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades.
+
+## Private cloud providers
+Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
+
+For **Arc-enabled VMware vSphere**, manual upgrade is available, but appliances on version 1.0.15 and higher automatically receive cloud-managed upgrade as the default experience. Appliances that are earlier than version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This deploys a new Arc resource bridge using the latest version and reconnects pre-existing Azure resources.
+
+For **Azure Arc VM management (preview) on Azure Stack HCI**, to use appliance version 1.0.15 or higher, you must be on Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all HCI, Arc resource bridge, and extension components as a "validated recipe" package. Attempting to upgrade Arc resource bridge independent of other HCI environment components by using the `az arcappliance upgrade` command may cause problems in your environment that could result in a disaster recovery scenario. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). Customers on Azure Stack HCI, version 22H2 will receive limited support.
+
+For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances on versions earlier than 1.0.14 need to use the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
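+
+For providers that support manual upgrade, the upgrade command is run from the management machine with `az arcappliance upgrade`. The following is a minimal sketch for VMware; the configuration file name and path are illustrative and should point to the appliance configuration files generated at deployment:
+
+```azurecli
+# Run from the management machine that holds the appliance configuration files.
+az arcappliance upgrade vmware --config-file ./contoso-appliance.yaml
+```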
## Prerequisites
-In order to upgrade Arc resource bridge, the appliance VM must be online, its status is "Running" and the [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be valid.
+Before upgrading an Arc resource bridge, the following prerequisites must be met:
+
+- The appliance VM must be online, with a status of "Running" (see the status check after this list), and the [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be valid.
+
+- There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images. For VMware, a new template is created.
-There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images. For VMware, a new template is created.
+- The outbound connection from the appliance VM IPs (`k8snodeippoolstart/end`, VM IP 1/2) to `msk8s.sb.tlu.dl.delivery.mp.microsoft.com`, port 443, must be enabled. Be sure that the full list of [required endpoints for Arc resource bridge](network-requirements.md) is also enabled.
-The outbound connection from the Appliance VM IPs (`k8snodeippoolstart/end`, VM IP 1/2) to `msk8s.sb.tlu.dl.delivery.mp.microsoft.com`, port 443 must be enabled. Be sure the full list of [required endpoints for Arc resource bridge](network-requirements.md) are also enabled.
+- If you are performing a manual upgrade, the upgrade command should be run from the management machine that was used to initially deploy the Arc resource bridge and that still contains the [appliance configuration files](system-requirements.md#configuration-files), or from a machine that meets the [management machine requirements](system-requirements.md#management-machine-requirements) and also contains the appliance configuration files.
-If you are performing a manual upgrade, the upgrade command should be run from the management machine used to initially deploy the Arc resource bridge and still contains the [appliance configuration files](system-requirements.md#configuration-files) or one that meets the [management machine requirements](system-requirements.md#management-machine-requirements) and also contains the appliance configuration files.
-Arc resource bridges configured with DHCP can't be upgraded and aren't supported in a production environment. Instead, a new Arc resource bridge should be deployed using [static IP configuration](system-requirements.md#static-ip-configuration).
+- Arc resource bridges configured with DHCP can't be upgraded and aren't supported in a production environment. Instead, a new Arc resource bridge should be deployed using [static IP configuration](system-requirements.md#static-ip-configuration).
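+
+You can confirm the appliance VM status from the management machine before you begin. A minimal sketch, with placeholder resource group and resource bridge names:
+
+```azurecli
+# Verify the Arc resource bridge reports a "Running" status before upgrading.
+az arcappliance show --resource-group myResourceGroup --name myResourceBridge
+```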
## Overview
To upgrade a resource bridge on System Center Virtual Machine Manager (SCVMM), r
To upgrade a resource bridge on Azure Stack HCI, please transition to 23H2 and use the built-in upgrade management tool. More info available [here](/azure-stack/hci/update/whats-the-lifecycle-manager-23h2).
-## Private cloud providers
-
-Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
-
-For Arc-enabled VMware vSphere, manual upgrade is available, but appliances on version 1.0.15 and higher automatically receive cloud-managed upgrade as the default experience. Appliances that are earlier than version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This deploys a new Arc resource bridge using the latest version and reconnects pre-existing Azure resources.
-
-For Azure Arc VM management (preview) on Azure Stack HCI, to use appliance version 1.0.15 or higher, you must be on Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all HCI, Arc resource bridge, and extension components as a "validated recipe" package. Attempting to upgrade Arc resource bridge independent of other HCI environment components by using the `az arcappliance upgrade` command may cause problems in your environment that could result in a disaster recovery scenario. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). Customers on Azure Stack HCI, version 22H2 will receive limited support.
-
-For Arc-enabled System Center Virtual Machine Manager (SCVMM), the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances below version 1.0.14 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
- ## Version releases The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there's a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, see the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is designed to work with all Azure Functions programming langu
::: zone-end ::: zone pivot="python"
+> [!IMPORTANT]
+> This article uses tabs to support multiple versions of the Python programming model. The v2 model is generally available and is designed to provide a more code-centric way for authoring functions through decorators. For more details about how the v2 model works, refer to the [Azure Functions Python developer guide](../functions-reference-python.md).
::: zone-end Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md).
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
The input binding allows you to read blob storage data as input to an Azure Func
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
-Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
-
-# [v2](#tab/python-v2)
-The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
-
-# [v1](#tab/python-v1)
-The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
---
-This article supports both programming models.
- ## Example
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
The output binding allows you to modify and delete blob storage data in an Azure
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
-Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
-
-# [v2](#tab/python-v2)
-The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
-
-# [v1](#tab/python-v1)
-The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
---
-This article supports both programming models.
- ## Example
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
The Blob storage trigger starts a function when a new or updated blob is detecte
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
-Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
-
-# [v2](#tab/python-v2)
-The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
-
-# [v1](#tab/python-v1)
-The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
---
-This article supports both programming models.
- ::: zone-end ## Example
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for Windows and Linux machines, in Azure and non-Azure environments, including on-premises and third-party clouds. The agent introduces a simplified, flexible method of configuring data collection using [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). This article provides guidance on how to implement a successful migration from the Log Analytics agent to Azure Monitor Agent.
-> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). After this date, Microsoft will no longer provide any support for the Log Analytics agent. If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](#migrate-additional-services-and-features), start planning your migration to Azure Monitor Agent by using the information in this article. If you are using the Log Analytics Agent for SCOM you will need to [migrate to the SCOM Agent](../vm/scom-managed-instance-overview.md)
+If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](#migrate-additional-services-and-features), start planning your migration to Azure Monitor Agent by using the information in this article. If you are using the Log Analytics Agent for SCOM, you will need to [migrate to the SCOM Agent](../vm/scom-managed-instance-overview.md).
+
+The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). You can expect the following when you use the MMA or OMS agent after this date.
+- **Data upload**: You can still upload data. At some point, when major customers have finished migrating and data volumes significantly drop, uploads will be suspended. You can expect this to take at least 6 to 9 months. You will not receive a breaking change notification of the suspension.
+- **Install or reinstall**: You can still install and reinstall the legacy agents, but you will not be able to get support for install or reinstall issues.
+- **Customer support**: You can expect support for MMA/OMS for security issues.
## Benefits
azure-monitor Resource Manager Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-action-groups.md
Title: Resource Manager template samples for action groups
description: Sample Azure Resource Manager templates to deploy Azure Monitor action groups. Previously updated : 04/27/2022- Last updated : 01/28/2024+ # Resource Manager template samples for action groups in Azure Monitor
azure-monitor Test Action Group Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/test-action-group-errors.md
Title: Test Notification Troubleshooting Guide description: Detailed description of error codes and actions to take when troubleshooting the test action group feature. Previously updated : 11/15/2022 Last updated : 01/28/2024
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 09/12/2023 Last updated : 01/31/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, vb
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Monitoring of your Java web applications running on [Azure App Services](../../app-service/index.yml) doesn't require any modifications to the code. This article walks you through enabling Azure Monitor Application Insights monitoring and provides preliminary guidance for automating the process for large-scale deployments.
+> [!NOTE]
+> With Spring Boot Native Image applications, use the [Azure Monitor OpenTelemetry Distro / Application Insights in Spring Boot native image Java application](https://aka.ms/AzMonSpringNative) project instead of the Application Insights Java agent solution described below.
+ ## Enable Application Insights The recommended way to enable application monitoring for Java applications running on Azure App Services is through Azure portal.
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Title: ApplicationInsights.config reference - Azure | Microsoft Docs description: Enable or disable data collection modules and add performance counters and other parameters. Previously updated : 09/12/2023 Last updated : 01/31/2024 ms.devlang: csharp
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
documentationcenter: .net ibiza Previously updated : 09/25/2023 Last updated : 01/31/2024 # Application Insights telemetry data model
azure-monitor Eventcounters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md
Title: Event counters in Application Insights | Microsoft Docs description: Monitor system and custom .NET/.NET Core EventCounters in Application Insights. Previously updated : 07/21/2023 Last updated : 01/31/2024
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
For more information, see [Use Application Insights Java In-Process Agent in Azu
## Containers
+> [!NOTE]
+> With Spring Boot Native Image applications, use the [Azure Monitor OpenTelemetry Distro / Application Insights in Spring Boot native image Java application](https://aka.ms/AzMonSpringNative) project instead of the Application Insights Java agent.
+ ### Docker entry point If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.19.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Title: Monitor Node.js services with Application Insights | Microsoft Docs description: Monitor performance and diagnose problems in Node.js services with Application Insights. Previously updated : 12/15/2023 Last updated : 01/31/2024 ms.devlang: javascript
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Title: Performance counters in Application Insights | Microsoft Docs description: Monitor system and custom .NET performance counters in Application Insights. Previously updated : 01/06/2023 Last updated : 01/31/2024 ms.devlang: csharp
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
Title: Log-based and pre-aggregated metrics in Application Insights | Microsoft Docs description: This article explains when to use log-based versus pre-aggregated metrics in Application Insights. Previously updated : 04/05/2023 Last updated : 01/31/2024
azure-monitor Standard Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/standard-metrics.md
Title: Azure Application Insights standard metrics | Microsoft Docs
description: This article lists Azure Application Insights metrics with supported aggregations and dimensions. Previously updated : 04/05/2023 Last updated : 01/31/2024
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
For example, usage from Log Analytics can be found by first filtering on the **M
Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
+> [!NOTE]
+> See [Azure Monitor billing meter names](cost-meters.md) for a reference of the billing meter names used by Azure Monitor in Azure Cost Management + Billing.
+ ## View data allocation benefits
-Since the usage export has both the number of units of usage and their cost, you can use this export to see the amount of benefits you are receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). In the usage export, to see the benefits, filter the *Instance ID* column to your workspace. (To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".) Then filter on the Meter to either of the following two meters:
+There are several ways to view the benefits a workspace receives from offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+
+### View benefits in a usage export
+
+Since a usage export has both the number of units of usage and their cost, you can use this export to see the amount of benefits you are receiving. In the usage export, to see the benefits, filter the *Instance ID* column to your workspace. (To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".) Then filter on the Meter to either of the following two meters:
- **Standard Data Included per Node**: this meter is under the service "Insight and Analytics" and tracks the benefits received when a workspace is either in the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled. Each of these provides a 500 MB/server/day data allowance.
- **Free Benefit - M365 Defender Data Ingestion**: this meter, under the service "Azure Monitor", tracks the benefit from the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
-> [!NOTE]
-> See [Azure Monitor billing meter names](cost-meters.md) for a reference of the billing meter names used by Azure Monitor in Azure Cost Management + Billing.
+### View benefits in Usage and estimated costs
You can also see these data benefits in the Log Analytics Usage and estimated costs page. If the workspace is receiving these benefits, there will be a sentence below the cost estimate table that gives the data volume of the benefits used over the last 31 days.
+### Query benefits from the Operation table
+
+The [Operation](/azure/azure-monitor/reference/tables/operation) table contains daily events that give the amount of benefit used from the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). The `Detail` column for these events is of the format `Benefit amount used: 1.234 GB`, and the type of benefit is in the `OperationKey` column. Here is a query that charts the benefits used in the last 31 days:
+
+```kusto
+Operation
+| where TimeGenerated >= ago(31d)
+| where Detail startswith "Benefit amount used"
+| parse Detail with "Benefit amount used: " BenefitUsedGB " GB"
+| extend BenefitUsedGB = toreal(BenefitUsedGB)
+| parse OperationKey with "Benefit type used: " BenefitType
+| project BillingDay=TimeGenerated, BenefitType, BenefitUsedGB
+| sort by BillingDay asc, BenefitType asc
+| render columnchart
+```
+
+(This functionality of reporting the benefits used in the `Operation` table came online in January 2024.)
+ ## Usage and estimated costs You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
Previously updated : 10/23/2023 Last updated : 01/28/2024 # Customer intent: As a Log Analytics workspace administrator, I want to manage table schemas and be able create a table with a custom schema to store logs from an Azure or non-Azure data source.
Use the [Tables - Update PATCH API](/rest/api/loganalytics/tables/update) to cre
## Delete a table
-You can delete any table in your Log Analytics workspace that's not an [Azure table](../logs/manage-logs-tables.md#table-type-and-schema).
-
-> [!NOTE]
-> - Deleting a restored table doesn't delete the data in the source table.
-> - Azure tables that are part of a solution can be removed from workspace when [deleting the solution](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-delete). The data remains in workspace for the duration of the retention policy defined for the tables. If the [solution is re-created](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-create) in the workspace, these tables become visible again.
-> - In both cases, data retention and archive charges will continue to apply to the data associated with these tables.
+There are several types of tables in Log Analytics, and the delete experience is different for each:
+- [Azure table](../logs/manage-logs-tables.md#table-type-and-schema) -- Can't be deleted. Tables that are part of a solution are removed from the workspace when [deleting the solution](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-delete), but the data remains in the workspace for the duration of the retention policy defined for the tables or, if none is defined, for the duration of the workspace retention policy. If the [solution is re-created](/cli/azure/monitor/log-analytics/solution#az-monitor-log-analytics-solution-create) in the workspace, these tables and the previously ingested data become visible again. To avoid charges, set the [retention policy for tables in solutions](/rest/api/loganalytics/tables/update) to the minimum (4 days) before deleting the solution.
+- [Restored table](./restore.md) (table_RST) -- Deletes the hot cache provisioned for the restore, but source table data isn't deleted.
+- [Search results table](./search-jobs.md) (table_SRCH) -- Deletes the table and data immediately and permanently.
+- [Custom log table](./create-custom-table.md#create-a-custom-table) (table_CL) -- Deletes the table definition immediately, but the data remains in the workspace for the duration of the retention policy defined for the table or the workspace. The table-level retention policy is removed after 14 days, after which workspace retention governs. If a custom log table is created with the same name and schema, the table and the previously ingested data become visible again. To avoid charges and remove the data, set the [retention policy for the table](/rest/api/loganalytics/tables/update) to the minimum (4 days) before deleting the table, as shown in the sketch after this list.
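+
+As a rough sketch, the retention change and table deletion described above can be done with the Azure CLI; the resource group, workspace, and table names are placeholders:
+
+```azurecli
+# Reduce the custom table's retention to the 4-day minimum so remaining data ages out quickly.
+az monitor log-analytics workspace table update --resource-group myResourceGroup --workspace-name myWorkspace --name MyTable_CL --retention-time 4
+
+# Delete the custom log table definition.
+az monitor log-analytics workspace table delete --resource-group myResourceGroup --workspace-name myWorkspace --name MyTable_CL
+```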
# [Portal](#tab/azure-portal-2)
azure-resource-manager Manage Resource Groups Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-python.md
description: Use Python to manage your resource groups through Azure Resource Ma
Previously updated : 02/27/2023 Last updated : 01/27/2024 content_well_notification: - AI-contribution
Learn how to use Python with [Azure Resource Manager](overview.md) to manage you
## Prerequisites
-* Python 3.7 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/)
+* Python 3.8 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/)
* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}` * azure-identity
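As a sketch, the packages can be installed into the active virtual environment with `pip`. The package list below is illustrative; `azure-mgmt-resource` is assumed here as the resource management library, and you should install every package named in this article:

```bash
# Install the Azure identity library and the (assumed) resource management library
# into the active Python virtual environment.
pip install azure-identity azure-mgmt-resource
```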
azure-resource-manager Tag Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-python.md
Title: Tag resources, resource groups, and subscriptions with Python description: Shows how to use Python to apply tags to Azure resources. Previously updated : 04/19/2023 Last updated : 01/27/2024 content_well_notification: - AI-contribution
This article describes how to use Python to tag resources, resource groups, and
## Prerequisites
-* Python 3.7 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/)
+* Python 3.8 or later installed. To install the latest, see [Python.org](https://www.python.org/downloads/)
* The following Azure library packages for Python installed in your virtual environment. To install any of the packages, use `pip install {package-name}` * azure-identity
azure-resource-manager Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tls-support.md
Title: TLS version supported by Azure Resource Manager
description: Describes the deprecation of TLS versions prior to 1.2 in Azure Resource Manager Previously updated : 10/05/2023 Last updated : 01/27/2024 # Migrating to TLS 1.2 for Azure Resource Manager
Azure Resource Manager is the deployment and management service for Azure. You u
We recommend the following steps as you prepare to migrate your clients to TLS 1.2: * Update your operating system to the latest version.
-* Update your development libraries and frameworks to their latest versions.
-
- For example, Python 3.6 and 3.7 support TLS 1.2.
+* Update your development libraries and frameworks to their latest versions. For example, Python 3.8 supports TLS 1.2.
* Fix hardcoded instances of security protocols older than TLS 1.2 (a quick verification check follows this list).
* Notify your customers and partners of your product or service's migration to TLS 1.2.
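As part of that preparation, one quick verification (not a remediation step) is to force a TLS 1.2 handshake against the Azure Resource Manager endpoint with `openssl`:

```bash
# Force a TLS 1.2 handshake with Azure Resource Manager.
# A successful handshake confirms the local TLS stack can negotiate TLS 1.2.
openssl s_client -connect management.azure.com:443 -tls1_2 </dev/null
```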
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
The following are known limitations in Chaos Studio.
- **Lockbox** At present, we don't have integration with Customer Lockbox. - **Java SDK** At present, we don't have a dedicated Java SDK. If this is something you would use, reach out to us with your feature request. - **Built-in roles** - Chaos Studio doesn't currently have its own built-in roles. Permissions can be attained to run a chaos experiment by either assigning an [Azure built-in role](chaos-studio-fault-providers.md) or a created custom role to the experiment's identity.-- **Agent Service Tags** Currently we don't have service tags available for our Agent-based faults.
+- **Agent Service Tags** Currently we don't have service tags available for our Agent-based faults.
+- **Chaos Studio Private Accesses (CSPA)** - For the CSPA resource type, there is a **strict 1:1 mapping of Chaos Target:CSPA Resource (abstraction for private endpoint).** We only allow **5 CSPA resources to be created per Subscription** to maintain the expected experience for all of our customers.
## Known issues - When selecting target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
chaos-studio Chaos Studio Private Link Agent Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-link-agent-service.md
az feature register --namespace Microsoft.Resources --name "EUAPParticipation" -
- The entire end-to-end for this flow requires some use of the CLI. The current end-to-end experience cannot be done from the Azure portal currently.
+- The **Chaos Studio Private Accesses (CSPA)** resource type has a **strict 1:1 mapping of Chaos Target:CSPA Resource (abstraction for private endpoint).** We only allow **5 CSPA resources to be created per Subscription** to maintain the expected experience for all of our customers.
+ ## Step 1: Make sure you allowlist Microsoft.Network/AllowPrivateEndpoints in your subscription The first step is to ensure that your desired subscription allows the Networking Resource Provider to operate.
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-dotnet-get-started.md
To connect to Azure Cosmos DB with the MongoDB native driver, create an instance
### [Azure CLI](#tab/azure-cli) ### [PowerShell](#tab/azure-powershell) ### [Portal](#tab/azure-portal)
Skip this step and use the information for the portal in the next step.
### [Azure CLI](#tab/azure-cli) ### [PowerShell](#tab/azure-powershell) ### [Portal](#tab/azure-portal) > [!TIP] > For this guide, we recommend using the resource group name ``msdocs-cosmos``. ## Configure environment variables ## Create MongoClient with connection string
Define a new instance of the ``MongoClient`` class using the constructor and the
## Use the MongoDB client classes with Azure Cosmos DB for API for MongoDB Each type of resource is represented by one or more associated C# classes. Here's a list of the most common classes:
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
Refer to the [Troubleshooting guide](error-codes-solutions.md) for connection is
### [Azure CLI](#tab/azure-cli) ### [PowerShell](#tab/azure-powershell) ### [Portal](#tab/azure-portal)
Skip this step and use the information for the portal in the next step.
### [Azure CLI](#tab/azure-cli) ### [PowerShell](#tab/azure-powershell) ### [Portal](#tab/azure-portal) > [!TIP] > For this guide, we recommend using the resource group name ``msdocs-cosmos``. ## Configure environment variables ## Create MongoClient with connection string
client.close()
## Use MongoDB client classes with Azure Cosmos DB for API for MongoDB Each type of resource is represented by one or more associated JavaScript classes. Here's a list of the most common classes:
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-get-started.md
In the commands below, we show *msdocs-cosmos* as the resource group name. Chang
### [Azure CLI](#tab/azure-cli) ### [PowerShell](#tab/azure-powershell) ### [Portal](#tab/azure-portal)
Skip this step and use the information for the portal in the next step.
### [Azure CLI](#tab/azure-cli) ### [PowerShell](#tab/azure-powershell) ### [Portal](#tab/azure-portal) ## Configure environment variables ## Create MongoClient with connection string
client.close()
## Use MongoDB client classes with Azure Cosmos DB for API for MongoDB Each type of resource is represented by one or more associated Python classes. Here's a list of the most common classes:
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
This quickstart will create a single Azure Cosmos DB account using the API for M
#### [Azure CLI](#tab/azure-cli) #### [PowerShell](#tab/azure-powershell) #### [Portal](#tab/azure-portal)
This quickstart will create a single Azure Cosmos DB account using the API for M
#### [Azure CLI](#tab/azure-cli) #### [PowerShell](#tab/azure-powershell) #### [Portal](#tab/azure-portal)
dotnet add package MongoDb.Driver
### Configure environment variables ## Object model
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
This quickstart will create a single Azure Cosmos DB account using the API for M
#### [Azure CLI](#tab/azure-cli) #### [PowerShell](#tab/azure-powershell) #### [Portal](#tab/azure-portal)
This quickstart will create a single Azure Cosmos DB account using the API for M
#### [Azure CLI](#tab/azure-cli) #### [PowerShell](#tab/azure-powershell) #### [Portal](#tab/azure-portal)
npm install mongodb dotenv
### Configure environment variables ## Object model
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
This quickstart will create a single Azure Cosmos DB account using the API for M
#### [Azure CLI](#tab/azure-cli) #### [PowerShell](#tab/azure-powershell) #### [Portal](#tab/azure-portal)
This quickstart will create a single Azure Cosmos DB account using the API for M
#### [Azure CLI](#tab/azure-cli) #### [PowerShell](#tab/azure-powershell) #### [Portal](#tab/azure-portal)
This quickstart will create a single Azure Cosmos DB account using the API for M
### Configure environment variables ## Object model
defender-for-cloud Concept Aws Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-aws-connector.md
To protect your AWS-based resources, you must [connect your AWS account](quickst
- [**Cloud Security Posture Management (CSPM)**](overview-page.md) assesses your AWS resources according to AWS-specific security recommendations and reflects your security posture in your secure score. The [asset inventory](asset-inventory.md) gives you one place to see all of your protected AWS resources. The [regulatory compliance dashboard](regulatory-compliance-dashboard.md) shows your compliance with built-in standards specific to AWS, including AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices. - [**Microsoft Defender for Servers**](defender-for-servers-introduction.md) brings threat detection and advanced defenses to [supported Windows and Linux EC2 instances](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multicloud).
-
+ - [**Microsoft Defender for Containers**](defender-for-containers-introduction.md) brings threat detection and advanced defenses to [supported Amazon EKS clusters](supported-machines-endpoint-solutions-clouds-containers.md). - [**Microsoft Defender for SQL**](defender-for-sql-introduction.md) brings threat detection and advanced defenses to your SQL Servers running on AWS EC2, AWS RDS Custom for SQL Server.
-The retired **Classic cloud connector** - Requires you to configure your AWS account to create a user that Defender for Cloud can use to connect to your AWS environment. The classic connector is only available to customers who have previously connected AWS accounts with it.
-
-> [!NOTE]
-> If you are connecting an AWS account that was previously connected with the classic connector, you must [remove them](how-to-use-the-classic-connector.md#remove-classic-aws-connectors) first. Using an AWS account that is connected by both the classic and native connectors can produce duplicate recommendations.
- ## AWS authentication process Federated authentication is used between Microsoft Defender for Cloud and AWS. All of the resources related to the authentication are created as a part of the CloudFormation template deployment, including: -- An identity provider (OpenID connect)
+- An identity provider (OpenID connect)
- Identity and Access Management (IAM) roles with a federated principal (connected to the identity providers). The architecture of the authentication process across clouds is as follows: :::image type="content" source="media/quickstart-onboard-aws/architecture-authentication-across-clouds.png" alt-text="Diagram showing architecture of authentication process across clouds." lightbox="media/quickstart-onboard-aws/architecture-authentication-across-clouds.png":::
-1. Microsoft Defender for Cloud CSPM service acquires a Microsoft Entra token with a validity life time of 1 hour that is signed by the Microsoft Entra ID using the RS256 algorithm.
+1. The Microsoft Defender for Cloud CSPM service acquires a Microsoft Entra token with a validity lifetime of 1 hour that is signed by Microsoft Entra ID using the RS256 algorithm.
1. The Microsoft Entra token is exchanged with AWS short living credentials and Defender for Cloud's CSPM service assumes the CSPM IAM role (assumed with web identity). 1. Since the principal of the role is a federated identity as defined in a trust relationship policy, the AWS identity provider validates the Microsoft Entra token against the Microsoft Entra ID through a process that includes: - audience validation
- - token digital signature validation
+ - token digital signature validation
- certificate thumbprint
- 1. The Microsoft Defender for Cloud CSPM role is assumed only after the validation conditions defined at the trust relationship have been met. The conditions defined for the role level are used for validation within AWS and allows only the Microsoft Defender for Cloud CSPM application (validated audience) access to the specific role (and not any other Microsoft token).
+1. The Microsoft Defender for Cloud CSPM role is assumed only after the validation conditions defined in the trust relationship have been met. The conditions defined for the role level are used for validation within AWS and allow only the Microsoft Defender for Cloud CSPM application (validated audience) access to the specific role (and not any other Microsoft token).
1. After the Microsoft Entra token is validated by the AWS identity provider, the AWS STS exchanges the token with AWS short-living credentials which the CSPM service uses to scan the AWS account.
Each plan has its own requirements for the native connector.
- An active AWS account, with EC2 instances running SQL server or RDS Custom for SQL Server. - Azure Arc for servers installed on your EC2 instances/RDS Custom for SQL Server.
- - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
+ - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
+
+ Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled; these AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, install it using either of the following relevant instructions from Amazon:
- Auto provisioning managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled. If you already have the SSM agent preinstalled, the AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you need to install it using either of the following relevant instructions from Amazon:
-
- - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
+ - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
+
+ > [!NOTE]
+ > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
- > [!NOTE]
- > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
-
- Other extensions should be enabled on the Arc-connected machines:
- - Microsoft Defender for Endpoint
- - VA solution (TVM/Qualys)
- - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
+ - Microsoft Defender for Endpoint
+ - VA solution (TVM/Qualys)
+ - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
+
+ Make sure the selected LA workspace has a security solution installed. The LA agent and AMA are currently configured at the subscription level. All of your AWS accounts and GCP projects under the same subscription inherit the subscription settings for the LA agent and AMA.
- Make sure the selected LA workspace has security solution installed. The LA agent and AMA are currently configured in the subscription level. All of your AWS accounts and GCP projects under the same subscription inherit the subscription settings for the LA agent and AMA.
-
- Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
+ Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
### Defender for Servers plan
-
+ - Microsoft Defender for Servers enabled on your subscription. Learn how to [enable plans](enable-all-plans.md).
-
+ - An active AWS account, with EC2 instances.
-
-- Azure Arc for servers installed on your EC2 instances.
- - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
-
- Auto provisioning managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled. If that is the case, their AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you need to install it using either of the following relevant instructions from Amazon:
-
- - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
-
- - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
-
- > [!NOTE]
- > To enable the Azure Arc auto-provisioning, you'll need an **Owner** permission on the relevant Azure subscription.
-
- - If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
-
+
+- Azure Arc for servers installed on your EC2 instances.
+ - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
+
+ Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled. If that is the case, their AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you need to install it using either of the following relevant instructions from Amazon:
+
+ - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
+
+ - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
+
+ > [!NOTE]
+ > To enable the Azure Arc auto-provisioning, you'll need an **Owner** permission on the relevant Azure subscription.
+
+ - If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
+ - Other extensions should be enabled on the Arc-connected machines:
- - Microsoft Defender for Endpoint
- - VA solution (TVM/Qualys)
- - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
+ - Microsoft Defender for Endpoint
+ - VA solution (TVM/Qualys)
+ - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
- Make sure the selected LA workspace has security solution installed. The LA agent and AMA are currently configured in the subscription level. All of your AWS accounts and GCP projects under the same subscription inherit the subscription settings for the LA agent and AMA.
+ Make sure the selected LA workspace has a security solution installed. The LA agent and AMA are currently configured at the subscription level. All of your AWS accounts and GCP projects under the same subscription inherit the subscription settings for the LA agent and AMA.
- Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
+ Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
- > [!NOTE]
- > Defender for Servers assigns tags to your AWS resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
- **AccountId**, **Cloud**, **InstanceId**, **MDFCSecurityConnector**
+ > [!NOTE]
+ > Defender for Servers assigns tags to your AWS resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
+ **AccountId**, **Cloud**, **InstanceId**, **MDFCSecurityConnector**
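+
+As a quick check that these tags are present, you can list the tags on an instance with the AWS CLI; the instance ID below is a placeholder:
+
+```bash
+# List the tags currently assigned to a specific EC2 instance.
+aws ec2 describe-tags --filters "Name=resource-id,Values=i-0123456789abcdef0"
+```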
## Learn more
You can check out the following blogs:
Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. - [Protect all of your resources with Defender for Cloud](enable-all-plans.md)-
defender-for-cloud Episode Forty Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-four.md
+
+ Title: Agentless malware detection | Defender for Cloud in the field
+description: Learn about agentless malware detection in Defender for Cloud
+ Last updated : 01/28/2024++
+# Agentless malware detection
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Gal Fenigshtein joins Yuri Diogenes to talk about the new agentless malware detection for virtual machines in Microsoft Defender for Cloud. Gal explains the use case scenario for this feature, how it works, and the prerequisites to enable it. Gal also demonstrates the user experience when new malware is detected, using the security alert dashboard in Defender for Cloud.
+
+> [!VIDEO https://aka.ms/docs/player?id=44105e9d-8165-44c9-9763-16c1bd0736d4]
+
+- [01:28](/shows/mdc-in-the-field/agentless-malware#time=01m28s) - Understanding the value of agentless malware detection
+- [03:51](/shows/mdc-in-the-field/agentless-malware#time=03m51s) - Which Microsoft Defender for Cloud plan enables this feature?
+- [04:38](/shows/mdc-in-the-field/agentless-malware#time=04m38s) - How does agentless malware detection work?
+- [07:58](/shows/mdc-in-the-field/agentless-malware#time=07m58s) - Scan frequency
+- [08:17](/shows/mdc-in-the-field/agentless-malware#time=08m17s) - Demonstration
+
+## Recommended resources
+
+- Learn more about [agentless machine scanning](concept-agentless-data-collection.md).
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY).
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS).
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Forty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-three.md
Title: Unified insights from Microsoft Entra permissions management | Defender for Cloud in the field description: Learn about unified insights from Microsoft Entra permissions management Previously updated : 01/18/2024 Last updated : 01/24/2024 # Unified insights from Microsoft Entra permissions management
Last updated 01/18/2024
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Agentless malware detection](episode-forty-four.md)
defender-for-cloud How To Use The Classic Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-use-the-classic-connector.md
- Title: Manage classic cloud connectors-
-description: Learn how to manage AWS and GCP classic connectors and remove them from your subscription.
- Previously updated : 06/29/2023--
-# Manage classic cloud connectors (retired)
-
-The retired *classic cloud connector* requires configuration in your Google Cloud Platform (GCP) project or Amazon Web Services (AWS) account to create a user that Microsoft Defender for Cloud can use to connect to your GCP project or AWS environment. The classic connector is available only to customers who previously used it to connect GCP projects or AWS environments.
-
-To connect a [GCP project](quickstart-onboard-gcp.md) or an [AWS account](quickstart-onboard-aws.md), you should use the native connector available in Defender for Cloud.
-
-## Connect your AWS account by using the classic connector
-
-### Prerequisites
-
-To complete the procedures for connecting an AWS account, you need:
--- A Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free one](https://azure.microsoft.com/pricing/free-trial/).--- [Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) enabled on your Azure subscription.--- Access to an AWS account.--- **Owner** permission on the relevant Azure subscription. A **Contributor** can also connect an AWS account if an **Owner** provides the service principal details.-
-### Set up AWS Security Hub
-
-To view security recommendations for multiple regions, repeat the following steps for each relevant region.
-
-If you're using an AWS management account, repeat the following steps to configure the management account and all connected member accounts across all relevant regions.
-
-1. Enable [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/gs-console.html).
-1. Enable [AWS Security Hub](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html).
-1. Verify that data is flowing to Security Hub. When you first enable Security Hub, the data might take several hours to become available.
-
-### Set up authentication for Defender for Cloud in AWS
-
-There are two ways to allow Defender for Cloud to authenticate to AWS:
--- [Create an identity and access management (IAM) role for Defender for Cloud](#create-an-iam-role-for-defender-for-cloud): The more secure and recommended method.-- [Create an AWS user for Defender for Cloud](#create-an-aws-user-for-defender-for-cloud): A less secure option if you don't have IAM enabled.-
-#### Create an IAM role for Defender for Cloud
-
-1. From your Amazon Web Services console, under **Security, Identity & Compliance**, select **IAM**.
-
- :::image type="content" source="./media/quickstart-onboard-aws/aws-identity-and-compliance.png" alt-text="Screenshot of the AWS services." lightbox="./media/quickstart-onboard-aws/aws-identity-and-compliance.png":::
-
-1. Select **Roles** > **Create role**.
-
-1. Select **Another AWS account**.
-
-1. Enter the following details:
-
- - For **Account ID**, enter the Microsoft account ID **158177204117**, as shown on the AWS connector page in Defender for Cloud.
- - Select **Require External ID**.
- - For **External ID**, enter the subscription ID, as shown on the AWS connector page in Defender for Cloud.
-
-1. Select **Next**.
-
-1. In the **Attach permission policies** section, select the following [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html):
-
- - `SecurityAudit` (`arn:aws:iam::aws:policy/SecurityAudit`)
- - `AmazonSSMAutomationRole` (`arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole`)
- - `AWSSecurityHubReadOnlyAccess` (`arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess`)
-
-1. Optionally, add tags. Adding tags to the user doesn't affect the connection.
-
-1. Select **Next**.
-
-1. In the **Roles** list, choose the role that you created.
-
-1. Save the Amazon Resource Name (ARN) for later.
-
-#### Create an AWS user for Defender for Cloud
-
-1. Open the **Users** tab and select **Add user**.
-
-1. In the **Details** step, enter a username for Defender for Cloud. Select **Programmatic access** for the AWS access type.
-
-1. Select **Next: Permissions**.
-
-1. Select **Attach existing policies directly** and apply the following policies:
- - `SecurityAudit`
- - `AmazonSSMAutomationRole`
- - `AWSSecurityHubReadOnlyAccess`
-
-1. Select **Next: Tags**. Optionally, add tags. Adding tags to the user doesn't affect the connection.
-
-1. Select **Review**.
-
-1. Save the automatically generated **Access key ID** and **Secret access key** CSV files for later.
-
-1. Review the summary, and then select **Create user**.
-
-### Configure the SSM Agent
-
-AWS Systems Manager (SSM) is required for automating tasks across your AWS resources. If your EC2 instances don't have the SSM Agent, follow the relevant instructions from Amazon:
--- [Installing and Configuring SSM Agent on Windows Instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-win.html)--- [Installing and Configuring SSM Agent on Amazon EC2 Linux Instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html)-
-### Complete the Azure Arc prerequisites
-
-1. Make sure the appropriate [Azure resource providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) are registered:
- - `Microsoft.HybridCompute`
- - `Microsoft.GuestConfiguration`
-
-1. As an **Owner** on the subscription that you want to use for onboarding, create a service principal for Azure Arc onboarding, as described in [Create a service principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale).
-
-### Connect AWS to Defender for Cloud
-
-1. From the Defender for Cloud menu, open **Environment settings**. Then select the option to switch back to the classic connectors experience.
-
- :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png":::
-
-1. Select **Add AWS account**.
-
- :::image type="content" source="./media/quickstart-onboard-aws/add-aws-account.png" alt-text="Screenshot that shows the button for adding an AWS account on the pane for multicloud connectors in Defender for Cloud." lightbox="./media/quickstart-onboard-aws/add-aws-account.png":::
-
-1. Configure the options on the **AWS authentication** tab:
-
- 1. For **Display name**, enter a name for the connector.
-
- 1. For **Subscription**, confirm that the value is correct. It's the subscription that includes the connector and AWS Security Hub recommendations.
-
- 1. Depending on the authentication option that you chose when you [set up authentication for Defender for Cloud in AWS](#set-up-authentication-for-defender-for-cloud-in-aws), take one of the following actions:
- - For **Authentication method**, select **Assume Role**. Then, for **AWS role ARN**, paste the ARN that you got when you [created an IAM role for Defender for Cloud](#create-an-iam-role-for-defender-for-cloud).
-
- :::image type="content" source="./media/quickstart-onboard-aws/paste-arn-in-portal.png" alt-text="Screenshot that shows the location for pasting the ARN file in the AWS connection wizard in the Azure portal." lightbox="./media/quickstart-onboard-aws/paste-arn-in-portal.png":::
-
- - For **Authentication method**, select **Credentials**. Then, in the relevant boxes, paste the access key and secret key from the CSV files that you saved when you [created an AWS user for Defender for Cloud](#create-an-aws-user-for-defender-for-cloud).
-
-1. Select **Next**.
-
-1. Configure the options on the **Azure Arc Configuration** tab.
-
- Defender for Cloud discovers the EC2 instances in the connected AWS account and uses SSM to onboard them to Azure Arc. For the list of supported operating systems, see [What operating systems for my EC2 instances are supported?](faq-general.yml) in the common questions.
-
- 1. For **Resource Group** and **Azure Region**, select the resource group and region that the discovered AWS EC2s will be onboarded to in the selected subscription.
-
- 1. Enter the **Service Principal ID** and **Service Principal Client Secret** values for Azure Arc, as described in [Create a service principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale).
-
- 1. If the machine is connecting to the internet via proxy server, specify the proxy server IP address, or the name and port number that the machine uses to communicate with the proxy server. Enter the value in the format `http://<proxyURL>:<proxyport>`.
-
- 1. Select **Review + create**.
-
-1. Review the summary information.
-
- The **Tags** section lists all Azure tags that are automatically created for each onboarded EC2 instance. Each tag has its own relevant details, so you can easily recognize it in Azure. Learn more about Azure tags in [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md).
-
-### Confirm the connection
-
-After you successfully create the connector and properly configure AWS Security Hub:
--- Defender for Cloud scans the environment for AWS EC2 instances and onboards them to Azure Arc. You can then install the Log Analytics agent and get threat protection and security recommendations.--- The Defender for Cloud service scans for new AWS EC2 instances every 6 hours and onboards them according to the configuration.--- The AWS CIS standard appears in the regulatory compliance dashboard in Defender for Cloud.--- If a Security Hub policy is enabled, recommendations appear in the Defender for Cloud portal and the regulatory compliance dashboard 5 to 10 minutes after onboarding finishes.--
-## Remove classic AWS connectors
-
-To remove any connectors that you created by using the classic connectors experience:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Go to **Defender for Cloud** > **Environment settings**.
-
-1. Select the option to switch back to the classic connectors experience.
-
- :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows switching back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png":::
-
-1. For each connector, select the ellipsis (**…**) button at the end of the row, and then select **Delete**.
-
-1. On AWS, delete the ARN role or the credentials created for the integration.
-
-## Connect your GCP project by using the classic connector
-
-Create a connector for every organization that you want to monitor from Defender for Cloud.
-
-When you're connecting GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines:
--- You can connect your GCP projects to Defender for Cloud at the *organization* level.-- You can connect multiple organizations to one Azure subscription.-- You can connect multiple organizations to multiple Azure subscriptions.-- When you connect an organization, all projects within that organization are added to Defender for Cloud.-
-### Prerequisites
-
-To complete the procedures for connecting a GCP project, you need:
--- A Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free one](https://azure.microsoft.com/pricing/free-trial/).--- [Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) enabled on your Azure subscription.--- Access to a GCP project.--- The **Owner** or **Contributor** role on the relevant Azure subscription.-
-You can learn more about Defender for Cloud pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-
-### Set up GCP Security Command Center with Security Health Analytics
-
-For all the GCP projects in your organization, you must:
-
-1. Set up GCP Security Command Center by using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/quickstart-scc-setup).
-
-1. Enable Security Health Analytics by using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/how-to-use-security-health-analytics).
-
-1. Verify that data is flowing to Security Command Center.
-
-The instructions for connecting your GCP environment for security configuration follow Google's recommendations for consuming security configuration recommendations. The integration uses Google Security Command Center and consumes extra resources that might affect your billing.
-
-When you first enable Security Health Analytics, the data might take several hours to become available.
-
-### Enable the GCP Security Command Center API
-
-1. Go to Google's Cloud Console API Library.
-
-1. Select each project in the organization that you want to connect to Microsoft Defender for Cloud.
-
-1. Find and select **Security Command Center API**.
-
-1. On the API's page, select **ENABLE**.
-
-[Learn more about the Security Command Center API](https://cloud.google.com/security-command-center/docs/reference/rest/).
-
-### Create a dedicated service account for the security configuration integration
-
-1. On the GCP console, select a project from the organization in which you're creating the required service account.
-
- > [!NOTE]
- > When you add this service account at the organization level, it will be used to access the data that Security Command Center gathers from all of the other enabled projects in the organization.
-
-1. In the **IAM & admin** section of the left menu, select **Service accounts**.
-
-1. Select **CREATE SERVICE ACCOUNT**.
-
-1. Enter an account name, and then select **Create**.
-
-1. Specify **Role** as **Defender for Cloud Admin Viewer**, and then select **Continue**.
-
-1. The **Grant users access to this service account** section is optional. Select **Done**.
-
-1. Copy the **Email value** information for the created service account, and save it for later use.
-
-1. In the **IAM & admin** section of the left menu, select **IAM**, and then:
-
- 1. Switch to the organization level.
-
- 1. Select **ADD**.
-
- 1. In the **New members** box, paste the **Email value** information that you copied earlier.
-
- 1. Specify the role as **Security Center Admin Viewer**, and then select **Save**.
-
- :::image type="content" source="./media/quickstart-onboard-gcp/iam-settings-gcp-permissions-admin-viewer.png" alt-text="Screenshot that shows how to set the relevant GCP permissions." lightbox="./media/quickstart-onboard-gcp/iam-settings-gcp-permissions-admin-viewer.png":::
-
-### Create a private key for the dedicated service account
-
-1. Switch to the project level.
-
-1. In the **IAM & admin** section of the left menu, select **Service accounts**.
-
-1. Open the dedicated service account, and then select **Edit**.
-
-1. In the **Keys** section, select **ADD KEY** > **Create new key**.
-
-1. On the **Create private key** pane, select **JSON**, and then select **CREATE**.
-
-1. Save this JSON file for later use.
-
-### Connect GCP to Defender for Cloud
-
-1. From the Defender for Cloud menu, open **Environment settings**. Then select the option to switch back to the classic connectors experience.
-
- :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png" :::
-
-1. Select **Add GCP project**.
-
-1. On the onboarding page:
-
- 1. Validate the chosen subscription.
-
- 1. In the **Display name** box, enter a display name for the connector.
-
- 1. In the **Organization ID** box, enter your organization's ID. If you don't know it, see the Google guide [Creating and managing organizations](https://cloud.google.com/resource-manager/docs/creating-managing-organization).
-
- 1. In the **Private key** box, browse to the JSON file that you downloaded when you [created a private key for the dedicated service account](#create-a-private-key-for-the-dedicated-service-account).
-
-1. Select **Next**.
-
-### Confirm the connection
-
-After you successfully create the connector and properly configure GCP Security Command Center:
--- The GCP CIS standard appears in the regulatory compliance dashboard in Defender for Cloud.--- Security recommendations for your GCP resources appear in the Defender for Cloud portal and the regulatory compliance dashboard 5 to 10 minutes after onboarding finishes.-
- :::image type="content" source="./media/quickstart-onboard-gcp/gcp-resources-in-recommendations.png" alt-text="Screenshot that shows the GCP resources and recommendations on the recommendations pane in Defender for Cloud." lightbox="./media/quickstart-onboard-gcp/gcp-resources-in-recommendations.png" :::
-
-## Remove classic GCP connectors
-
-To remove any connectors that you created by using the classic connectors experience:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Go to **Defender for Cloud** > **Environment settings**.
-
-1. Select the option to switch back to the classic connectors experience.
-
- :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png":::
-
-1. For each connector, select the ellipsis (**...**) button at the end of the row, and then select **Delete**.
-
-## Next steps
--- [Protect all of your resources with Defender for Cloud](enable-all-plans.md)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Last updated 01/03/2024
Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Amazon Web Services (AWS), but you need to set up the connection between them and Defender for Cloud.
-If you're connecting an AWS account that you previously connected by using the classic connector, you must [remove it](how-to-use-the-classic-connector.md#remove-classic-aws-connectors) first. Using an AWS account connected by both the classic and native connectors can produce duplicate recommendations.
- The following screenshot shows AWS accounts displayed in the Defender for Cloud [overview dashboard](overview-page.md). :::image type="content" source="./media/quickstart-onboard-aws/aws-account-in-overview.png" alt-text="Screenshot that shows four AWS projects listed on the overview dashboard in Defender for Cloud." lightbox="./media/quickstart-onboard-aws/aws-account-in-overview.png":::
Connecting your AWS account is part of the multicloud experience available in Mi
- Set up your [on-premises machines](quickstart-onboard-machines.md) and [GCP projects](quickstart-onboard-gcp.md). - Get answers to [common questions](faq-general.yml) about onboarding your AWS account. - [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshoot-connectors).-
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Last updated 01/16/2024
Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Google Cloud Platform (GCP), but you need to set up the connection between them and Defender for Cloud.
-If you're connecting a GCP project that you previously connected by using the classic connector, you must [remove it](how-to-use-the-classic-connector.md#remove-classic-gcp-connectors) first. Using a GCP project connected by both the classic and native connectors can produce duplicate recommendations.
- This screenshot shows GCP accounts displayed in the Defender for Cloud [overview dashboard](overview-page.md). :::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot that shows GCP projects listed on the overview dashboard in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
description: Understand the data history feature for Azure Digital Twins. Previously updated : 06/29/2023 Last updated : 01/26/2024
For more of an introduction to data history, including a quick demo, watch the f
> [!VIDEO https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15]
-## Resources and data flow
+Messages emitted by data history are metered under the [Message pricing dimension](https://azure.microsoft.com/pricing/details/digital-twins/#pricing).
+
+## Prerequisites: Resources and permissions
Data history requires the following resources: * Azure Digital Twins instance, with a [system-assigned managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) enabled.
When the digital twin graph is updated, the information passes through the event
When working with data history, it's recommended to use the [2023-01-31](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/resource-manager/Microsoft.DigitalTwins/stable/2023-01-31) version or later of the APIs. With the [2022-05-31](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/stable/2022-05-31) version, only twin properties (not twin lifecycle or relationship lifecycle events) can be historized. With earlier versions, data history is not available.
-### History from multiple Azure Digital Twins instances
+### Required permissions
-If you'd like, you can have multiple Azure Digital Twins instances historize updates to the same Azure Data Explorer cluster.
+In order to set up a data history connection, your Azure Digital Twins instance must have the following permissions to access the Event Hubs and Azure Data Explorer resources. These roles enable Azure Digital Twins to configure the event hub and Azure Data Explorer database on your behalf (for example, creating a table in the database). These permissions can optionally be removed after data history is set up.
+* Event Hubs resource: **Azure Event Hubs Data Owner**
+* Azure Data Explorer cluster: **Contributor** (scoped to either the entire cluster or specific database)
+* Azure Data Explorer database principal assignment: **Admin** (scoped to the database being used)
-Each Azure Digital Twins instance will have its own data history connection targeting the same Azure Data Explorer cluster. Within the cluster, instances can send their twin data to either...
-* **a separate set of tables** in the Azure Data Explorer cluster.
-* **the same set of tables** in the Azure Data Explorer cluster. To do this, specify the same Azure Data Explorer table names while [creating the data history connections](how-to-create-data-history-connection.md#set-up-data-history-connection). In the [data history table schemas](#data-types-and-schemas), the `ServiceId` column in each table will contain the URL of the source Azure Digital Twins instance, so you can use this field to resolve which Azure Digital Twins instance emitted each record in shared tables.
+Later, your Azure Digital Twins instance must have the following permission on the Event Hubs resource while data history is in use: **Azure Event Hubs Data Sender** (alternatively, you can keep the **Azure Event Hubs Data Owner** role from the data history setup).
-## Creating a data history connection
+These permissions can be assigned using the Azure CLI or Azure portal.
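For illustration, here's a minimal Azure CLI sketch of granting the listed roles to the instance's system-assigned managed identity. The instance name and resource ID scopes are placeholders, and the Azure Data Explorer database **Admin** principal assignment is configured separately (for example, from the database's **Permissions** pane).

```bash
# Sketch only: assign the setup roles to the Azure Digital Twins managed identity.
# Assumes the azure-iot CLI extension and a system-assigned identity on the instance.
# All values in angle brackets are placeholders.
ADT_PRINCIPAL_ID=$(az dt show --dt-name <your-adt-instance> --query "identity.principalId" -o tsv)

# Event Hubs namespace or event hub scope
az role assignment create \
  --assignee-object-id "$ADT_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Event Hubs Data Owner" \
  --scope "<event-hubs-resource-id>"

# Azure Data Explorer cluster (or database) scope
az role assignment create \
  --assignee-object-id "$ADT_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Contributor" \
  --scope "<adx-cluster-or-database-resource-id>"
```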
+
+If you'd like to restrict network access to the resources involved in data history (your Azure Digital Twins instance, event hub, or Azure Data Explorer cluster), you should set those restrictions *after* setting up the data history connection. For more information about this process, see [Restrict network access to data history resources](how-to-create-data-history-connection.md#restrict-network-access-to-data-history-resources).
-Once all the [resources](#resources-and-data-flow) and [permissions](#required-permissions) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command set is [az dt data-history](/cli/azure/dt/data-history).
+## Create and manage data history connection
+
+This section contains information for creating, updating, and deleting a data history connection.
+
+### Create a data history connection
+
+Once all the [resources](#prerequisites-resources-and-permissions) and [permissions](#required-permissions) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command set is [az dt data-history](/cli/azure/dt/data-history).
The command will always create a table for historized twin property events, which can use the default name or a custom name that you provide. Twin property deletions can optionally be included in this table. You can also provide table names for relationship lifecycle events and twin lifecycle events, and the command will create tables with those names to historize those event types. For step-by-step instructions on how to set up a data history connection, see [Create a data history connection](how-to-create-data-history-connection.md).
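As a rough sketch of the CLI route (not a substitute for the step-by-step article), creating a connection looks roughly like the following. All names are placeholders, and the optional parameters for custom table names depend on your `azure-iot` extension version, so check the linked command reference.

```bash
# Sketch: route twin updates from an Azure Digital Twins instance through an
# event hub into an Azure Data Explorer database. Placeholder names throughout.
az dt data-history connection create adx \
  --dt-name <your-adt-instance> \
  --cn <connection-name> \
  --adx-cluster-name <adx-cluster> \
  --adx-database-name <adx-database> \
  --eventhub <event-hub> \
  --eventhub-namespace <event-hub-namespace>
```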
-## Updating a properties-only data history connection
+#### History from multiple Azure Digital Twins instances
+
+If you'd like, you can have multiple Azure Digital Twins instances historize updates to the same Azure Data Explorer cluster.
+
+Each Azure Digital Twins instance will have its own data history connection targeting the same Azure Data Explorer cluster. Within the cluster, instances can send their twin data to either...
+* **a separate set of tables** in the Azure Data Explorer cluster.
+* **the same set of tables** in the Azure Data Explorer cluster. To do this, specify the same Azure Data Explorer table names while [creating the data history connections](how-to-create-data-history-connection.md#set-up-data-history-connection). In the [data history table schemas](#data-types-and-schemas), the `ServiceId` column in each table will contain the URL of the source Azure Digital Twins instance, so you can use this field to resolve which Azure Digital Twins instance emitted each record in shared tables.
+
+### Update a properties-only data history connection
Prior to February 2023, the data history feature only historized twin property updates. If you have a properties-only data history connection from that time, you can update it to historize all graph updates to Azure Data Explorer (including twin properties, twin lifecycle events, and relationship lifecycle events).
This will require creating new tables in your Azure Data Explorer cluster for th
**If you want to continue using your existing table for twin property updates:** Use the instructions in [Create a data history connection](how-to-create-data-history-connection.md) to create a new data history connection with the new capabilities. The data history connection name can be the same as the original one, or a different name. Use the parameter options to provide new names for the two new event type tables, and to pass in the original table name for the twin property updates table. The new connection will override the old one, and continue to use the original table for future historized twin property updates.
-**If you want to use all new tables:** First, [delete your original data history connection](#deleting-a-data-history-connection). Then, use the instructions in [Create a data history connection](how-to-create-data-history-connection.md) to create a new data history connection with the new capabilities. The data history connection name can be the same as the original one, or a different name. Use the parameter options to provide new names for all three event type tables.
+**If you want to use all new tables:** First, [delete your original data history connection](#delete-a-data-history-connection). Then, use the instructions in [Create a data history connection](how-to-create-data-history-connection.md) to create a new data history connection with the new capabilities. The data history connection name can be the same as the original one, or a different name. Use the parameter options to provide new names for all three event type tables.
-### Required permissions
+### Delete a data history connection
-In order to set up a data history connection, your Azure Digital Twins instance must have the following permissions to access the Event Hubs and Azure Data Explorer resources. These roles enable Azure Digital Twins to configure the event hub and Azure Data Explorer database on your behalf (for example, creating a table in the database). These permissions can optionally be removed after data history is set up.
-* Event Hubs resource: **Azure Event Hubs Data Owner**
-* Azure Data Explorer cluster: **Contributor** (scoped to either the entire cluster or specific database)
-* Azure Data Explorer database principal assignment: **Admin** (scoped to the database being used)
+You can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md) to delete a data history connection. The CLI command is [az dt data-history connection delete](/cli/azure/dt/data-history/connection#az-dt-data-history-connection-delete).
-Later, your Azure Digital Twins instance must have the following permission on the Event Hubs resource while data history is being used: **Azure Event Hubs Data Sender** (you can also opt instead to keep **Azure Event Hubs Data Owner** from data history setup).
+Deleting a connection also gives the option to clean up resources associated with the data history connection (for the CLI command, the optional parameter to add is `--clean true`). If you use this option, the command will delete the resources within Azure Data Explorer that are used to link your cluster to your event hub, including data connections for the database and the ingestion mappings associated with your table. The "clean up resources" option will **not** delete the actual event hub and Azure Data Explorer cluster used for the data history connection.
-These permissions can be assigned using the Azure CLI or Azure portal.
+The cleanup is a best-effort attempt, and requires the account running the command to have delete permission for these resources.
-If you'd like to restrict network access to the resources involved in data history (your Azure Digital Twins instance, event hub, or Azure Data Explorer cluster), you should set those restrictions *after* setting up the data history connection. For more information about this process, see [Restrict network access to data history resources](how-to-create-data-history-connection.md#restrict-network-access-to-data-history-resources).
+>[!NOTE]
+> If you have multiple data history connections that share the same event hub or Azure Data Explorer cluster, using the "clean up resources" option while deleting one of these connections may disrupt your other data history connections that rely on these resources.
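For reference, deleting a connection with the cleanup behavior described above looks like this sketch (instance and connection names are placeholders):

```bash
# Sketch: delete a data history connection and attempt to clean up the linking
# resources in Azure Data Explorer. The event hub and cluster are not deleted.
az dt data-history connection delete \
  --dt-name <your-adt-instance> \
  --cn <connection-name> \
  --clean true
```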
## Data types and schemas
Below is an example table of relationship lifecycle updates stored to Azure Data
| PasteurizationMachine_A01_feeds_Relationship0 | feeds | Create | 2022-12-15 07:16:12.7120 | dairyadtinstance.api.wcus.digitaltwins.azure.net | PasteurizationMachine_A01 | SaltMachine_C0 | | PasteurizationMachine_A02_feeds_Relationship0 | feeds | Create | 2022-12-15 07:16:12.7160 | dairyadtinstance.api.wcus.digitaltwins.azure.net | PasteurizationMachine_A02 | SaltMachine_C0 | | PasteurizationMachine_A03_feeds_Relationship0 | feeds | Create | 2022-12-15 07:16:12.7250 | dairyadtinstance.api.wcus.digitaltwins.azure.net | PasteurizationMachine_A03 | SaltMachine_C1 |
-| OsloFactory_contains_Relationship0 | contains | Delete | 2022-12-15 07:16:13.1780 | dairyadtinstance.api.wcus.digitaltwins.azure.net | OsloFactory | SaltMachine_C0 |
-
-## Deleting a data history connection
-
-You can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md) to delete a data history connection. The CLI command is [az dt data-history connection delete](/cli/azure/dt/data-history/connection#az-dt-data-history-connection-delete).
-
-Deleting a connection also gives the option to clean up resources associated with the data history connection (for the CLI command, the optional parameter to add is `--clean true`). If you use this option, the command will delete the resources within Azure Data Explorer that are used to link your cluster to your event hub, including data connections for the database and the ingestion mappings associated with your table. The "clean up resources" option will **not** delete the actual event hub and Azure Data Explorer cluster used for the data history connection.
-
-The cleanup is a best-effort attempt, and requires the account running the command to have delete permission for these resources.
-
->[!NOTE]
-> If you have multiple data history connections that share the same event hub or Azure Data Explorer cluster, using the "clean up resources" option while deleting one of these connections may disrupt your other data history connections that rely on these resources.
-
-## Pricing
-
-Messages emitted by data history are metered under the [Message pricing dimension](https://azure.microsoft.com/pricing/details/digital-twins/#pricing).
+| OsloFactory_contains_Relationship0 | contains | Delete | 2022-12-15 07:16:13.1780 | dairyadtinstance.api.wcus.digitaltwins.azure.net | OsloFactory | SaltMachine_C0 |
## End-to-end ingestion latency
To enable streaming ingestion for your Azure Digital Twins data history table, t
Ensure that `<table_name>` is replaced with the name of the table that was set up for you. It may take 5-10 minutes for the policy to take effect.
+## Visualize historized properties
+
+[Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), a developer tool for visualizing and interacting with Azure Digital Twins data, offers a **Data history explorer** feature for viewing historized properties over time in a chart or a table. This feature is also available in [3D Scenes Studio](concepts-3d-scenes-studio.md), an immersive 3D environment for giving Azure Digital Twins the visual context of 3D assets.
++
+For more detailed information about using the data history explorer, see [Validate and explore historized properties](how-to-use-azure-digital-twins-explorer.md#validate-and-explore-historized-properties).
+
+>[!NOTE]
+> If you encounter issues selecting a property in the visual data history explorer experience, this might mean there's an error in some model in your instance. For example, having non-unique enum values in the attributes of a model will break this visualization feature. If this happens, [review your model definitions](how-to-use-azure-digital-twins-explorer.md#view-model-definition) and make sure all properties are valid.
+ ## Next steps Once twin data has been historized to Azure Data Explorer, you can use the Azure Digital Twins query plugin for Azure Data Explorer to run queries across the data. Read more about the plugin here: [Querying with the Azure Data Explorer plugin](concepts-data-explorer-plugin.md).
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
Choose a **Display name** for the rule.
Next, choose whether the rule is dependent on a **Single property** or a **Custom (advanced)** property expression. For a **Single property**, you'll get a dropdown list of numeric properties on the primary twin. For **Custom (advanced)**, you'll get a text box where you can write a custom JavaScript expression using one or more properties. The result of your expression must match the result type that you specify in the **Type** field. For more information about writing custom expressions, see [Use custom (advanced) expressions](#use-custom-advanced-expressions). + Once you've defined your property expression, select **Add condition** to define the conditional visual effects. :::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-visual-rules-2.png" alt-text="Screenshot of the New visual rule options in 3D Scenes Studio. The described fields are highlighted." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-visual-rules-2.png":::
Here are the types of widget that you can create:
:::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-widgets-gauge.png" alt-text="Screenshot of creating a new gauge-type widget in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-widgets-gauge.png":::
-* **Link**: For including externally-referenced content via a linked URL
+ [!INCLUDE [digital-twins-visual-property-error-note.md](../../includes/digital-twins-visual-property-error-note.md)]
+
+* **Link**: For including externally referenced content via a linked URL
Enter a **Label** and destination **URL**.
Here are the types of widget that you can create:
* **Value**: For directly displaying twin property values
- Enter a **Display name** and select a **Property expression** that you want to display. This can be a **Single property** of the primary twin, or a **Custom (advanced)** property expression. Custom expressions should be JavaScript expressions using one or more properties of the twin, and you'll select which outcome type the expression will produce. For more information about writing custom expressions, see [Use custom (advanced) expressions](#use-custom-advanced-expressions).
+ Enter a **Display name** and select a **Property expression** that you want to display. This can be a **Single property** of the primary twin, or a **Custom (advanced)** property expression. Custom expressions should be JavaScript expressions using one or more properties of the twin, and you'll select which outcome type the expression will produce. If your custom property expression outputs a string, you can also use JavaScript's template literal syntax to include a dynamic expression in the string output. Format the dynamic expression with this syntax: `${<calculation-expression>}`. Then, wrap the whole string output with backticks (`` ` ``). For more information about writing custom expressions, see [Use custom (advanced) expressions](#use-custom-advanced-expressions).
:::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-widgets-value.png" alt-text="Screenshot of creating a new value-type widget in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-widgets-value.png":::
- If your custom property expression outputs a string, you can also use JavaScript's template literal syntax to include a dynamic expression in the string output. Format the dynamic expression with this syntax: `${<calculation-expression>}`. Then, wrap the whole string output with backticks (`` ` ``).
+ [!INCLUDE [digital-twins-visual-property-error-note.md](../../includes/digital-twins-visual-property-error-note.md)]
Below is an example of a value widget that checks if the `InFlow` value of the primary twin exceeds 99. If so, it outputs a string with an expression containing the twin's `$dtId`. Otherwise, there will be no expression in the output, so no backticks are required.
Here are the types of widget that you can create:
:::image type="content" source="media/how-to-use-3d-scenes-studio/new-behavior-widgets-data-history.png" alt-text="Screenshot of creating a new data history widget in 3D Scenes Studio." lightbox="media/how-to-use-3d-scenes-studio/new-behavior-widgets-data-history.png":::
+ [!INCLUDE [digital-twins-visual-property-error-note.md](../../includes/digital-twins-visual-property-error-note.md)]
+ ### Use custom (advanced) expressions While defining [visual rules](#visual-rules) and [widgets](#widgets) in your behaviors, you may want to use custom expressions to define a property condition.
When the recipient pastes this URL into their browser, the specified scene will
Try out 3D Scenes Studio with a sample scenario in [Get started with 3D Scenes Studio](quickstart-3d-scenes-studio.md).
-Or, visualize your Azure Digital Twins graph differently using [Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+Or, visualize your Azure Digital Twins graph differently using [Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
digital-twins How To Use Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md
If your Azure Digital Twins instance has [data history](concepts-data-history.md
:::image type="content" source="media/how-to-use-azure-digital-twins-explorer/data-history-explorer-property.png" alt-text="Screenshot of the Data history explorer with the property details highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/data-history-explorer-property.png":::
+ [!INCLUDE [digital-twins-visual-property-error-note.md](../../includes/digital-twins-visual-property-error-note.md)]
+ 1. Choose a **Label** for the time series and select **Update**. This will load the chart view of the historized values for the chosen property. You can use the tabs above the chart to toggle between the [chart view](#view-history-in-chart) and [table view](#view-history-in-table).
hdinsight Apache Hadoop Deep Dive Advanced Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics.md
There are three scalable machine learning libraries that bring algorithmic model
### Apache Spark and Deep learning
-[Deep learning](https://www.microsoft.com/research/group/dltc/) is a branch of machine learning that uses *deep neural networks* (DNNs), inspired by the biological processes of the human brain. Many researchers see deep learning as a promising approach for artificial intelligence. Some examples of deep learning are spoken language translators, image recognition systems, and machine reasoning. To help advance its own work in deep learning, Microsoft has developed the free, easy-to-use, open-source [Microsoft Cognitive Toolkit](https://www.microsoft.com/en-us/cognitive-toolkit/). The toolkit is being used extensively by a wide variety of Microsoft products, by companies worldwide with a need to deploy deep learning at scale, and by students interested in the latest algorithms and techniques.
+[Deep learning](https://www.microsoft.com/research/group/dltc/) is a branch of machine learning that uses *deep neural networks* (DNNs), inspired by the biological processes of the human brain. Many researchers see deep learning as a promising approach to artificial intelligence. Some examples of deep learning are spoken language translators, image recognition systems, and machine reasoning. To help advance its own work in deep learning, Microsoft has developed the free, easy-to-use, open-source [Microsoft Cognitive Toolkit](/cognitive-toolkit/). The toolkit is being used extensively by a wide variety of Microsoft products, by companies worldwide with a need to deploy deep learning at scale, and by students interested in the latest algorithms and techniques.
## Scenario - Score Images to Identify Patterns in Urban Development
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
The Azure API for FHIR supports the following query parameters. All of these par
| \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results. | | \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container. | | \_till | No | Allows you to only export resources that have been modified till the time provided. This parameter is applicable to only System-Level export. In this case, if historical versions have not been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
-|\includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with '_typeFilter' query parameter. Include value as 'history' to export history/ non latest versioned resources. Include value as 'deleted' to export soft deleted resources. |
+|\includeAssociatedData | No | Allows you to export history and soft-deleted resources. This filter doesn't work with the '_typeFilter' query parameter. Set the value to '_history' to export historical (non-latest) resource versions, or to '_deleted' to export soft-deleted resources. |
> [!NOTE] > Only storage accounts in the same subscription as that for Azure API for FHIR are allowed to be registered as the destination for $export operations.
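As a hedged illustration of how these parameters fit into a request, a system-level `$export` call that also exports historical versions might look like the following. The service URL, access token, and container name are placeholders, and the parameter spelling should match the table above.

```bash
# Sketch: system-level $export that writes to a named container and includes
# historical (non-latest) resource versions. Placeholders: URL, container, $TOKEN.
curl -X GET "https://<fhir-service-url>/\$export?_container=<container>&includeAssociatedData=_history" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/fhir+json" \
  -H "Prefer: respond-async"
```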
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Content-Type: `application/json`
} ```
+## Performance considerations with conditional operations
+Conditional interactions can be complex and performance-intensive. To reduce the latency of queries involving conditional interactions, you can use the **x-conditionalquery-processing-logic** request header. Setting this header to **parallel** allows the queries used by conditional interactions to run concurrently.
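For example, a conditional update that opts into parallel processing might be sent as in the following sketch; the service URL, token, search parameter, and `patient.json` payload are placeholders.

```bash
# Sketch: conditional update (PUT against a search query rather than a resource ID),
# asking the service to run the conditional query in parallel.
curl -X PUT "https://<fhir-service-url>/Patient?identifier=http://example.org/mrn|12345" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/fhir+json" \
  -H "x-conditionalquery-processing-logic: parallel" \
  -d @patient.json
```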
+ ## Next steps In this article, you learned about some of the REST capabilities of Azure API for FHIR. Next, you can learn more about the key aspects to searching resources in FHIR.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
The FHIR service supports the following query parameters for filtering exported
| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further limit the results. | | `_container` | No | Specifies the name of the container in the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container with an autogenerated name. | | `_till` | No | Allows you to export resources that have been modified till the specified time. This parameter is applicable only with System-Level export. In this case, if historical versions have not been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
-|`includeAssociatedData` | No | Allows you to export history and soft deleted resources. This filter doesn't work with '_typeFilter' query parameter. Include value as 'history' to export history/ non latest versioned resources. Include value as 'deleted' to export soft deleted resources. |
+|`includeAssociatedData` | No | Allows you to export history and soft-deleted resources. This filter doesn't work with the '_typeFilter' query parameter. Set the value to '_history' to export historical (non-latest) resource versions, or to '_deleted' to export soft-deleted resources. |
> [!NOTE] > Only storage accounts in the same subscription as the FHIR service are allowed to be registered as the destination for `$export` operations.
machine-learning Feature Set Materialization Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-set-materialization-concepts.md
Last updated 12/06/2023-+ # Feature set materialization concepts
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
Last updated 11/03/2022-+ # Author scoring scripts for batch deployments
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
Last updated 10/10/2022 -+ # Customize outputs in batch deployments
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
Last updated 10/10/2022 -+ # Image processing with batch model deployments
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
Last updated 10/10/2022 -+ # Deploy MLflow models in batch deployments
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
Last updated 10/10/2022 -+ # Deploy language models in batch endpoints
machine-learning How To Use Batch Model Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-deployments.md
Last updated 11/04/2022-+ #Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
machine-learning How To Use Batch Model Openai Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-openai-embeddings.md
Last updated 11/04/2023-+ # Run OpenAI models in batch endpoints to compute embeddings
machine-learning How To Use Batch Pipeline Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-deployments.md
- devplatv2 - event-tier1-build-2023 - ignite-2023
+ - update-code
# How to deploy pipelines with batch endpoints
search Search Get Started Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-retrieval-augmented-generation.md
In this quickstart:
+ [Azure Storage](/azure/storage/common/storage-account-create)
-+ [Azure AI Search](search-create-app-portal.md), in any region, on a billable tier (Basic and above), preferably with [semantic ranking enabled](semantic-how-to-enable-disable.md)
++ [Azure AI Search](search-create-app-portal.md), in any region, on a billable tier (Basic and higher), preferably with [semantic ranking enabled](semantic-how-to-enable-disable.md) + Contributor permissions in the Azure subscription for creating resources ++ Download the sample famous-speeches-pdf PDFs in [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/famous-speeches-pdf).+
+ For this quickstart, we recommend starting with smaller files so that you can conserve [vector storage](vector-search-index-size.md) and [Azure OpenAI quota](/azure/ai-services/openai/quotas-limits) for other work.
+ ## Set up model deployments 1. Start [Azure OpenAI Studio](https://oai.azure.com/portal).
In this quickstart:
+ [text-embedding-ada-002](/azure/ai-services/openai/concepts/models#embeddings) + [gpt-35-turbo](/azure/ai-services/openai/concepts/models#gpt-35)
- Deploy more chat models if you want to test them with your data. Note that Text-Davinci-002 isn't supported.
+ Deploy more *chat* models if you want to compare them using your data. *Fine-tuning* models like Text-Davinci-002 aren't supported for this scenario.
If you create new deployments, the default configurations are suited for this tutorial. It's helpful to name each deployment after the model. For example, "text-embedding-ada-002" as the deployment name of the text-embedding-ada-002 model. ## Generate a vector store for the playground
-1. Download the sample famous-speeches-pdf PDFs in [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/famous-speeches-pdf).
- 1. Sign in to the [Azure OpenAI Studio](https://oai.azure.com/portal). 1. On the **Chat** page under **Playground**, select **Add your data (preview)**.
In this quickstart:
1. Select **Next**.
-1. In Data Management, choose **Hybrid + semantic** if [semantic ranking is enabled](semantic-how-to-enable-disable.md) on your search service. If semantic ranking is disabled, choose **Hybrid (vector + keyword)**. Hybrid is a better choice because vector (similarity) search and keyword search execute the same query input in parallel, which can produce a more relevant response.
+1. In Data Management, choose **Hybrid + semantic** if [semantic ranking is enabled](semantic-how-to-enable-disable.md) on your search service. If semantic ranking is disabled, choose **Hybrid (vector + keyword)**. [Hybrid](hybrid-search-overview.md) is a better choice because vector (similarity) search and keyword search execute the same query input in parallel, which can produce a more relevant response.
:::image type="content" source="media/search-get-started-rag/azure-openai-data-manage.png" lightbox="media/search-get-started-rag/azure-openai-data-manage.png" alt-text="Screenshot of the data management options.":::
In this quickstart:
## Chat with your data
-1. Review advanced settings that determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are returned from the query to the vector store.
+1. Review advanced settings that determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are provided to the model to generate its response.
- Strictness determines whether the model supplements the query with its own information. A level of 5 is no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of interpreting the intent of the query.
+ Strictness determines whether the model supplements the query with its own information. A level of 5 means no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of inferring the intent of the query.
Lower levels of strictness produce more verbose answers, but might also include information that isn't in your index.
In this quickstart:
1. Start with these settings:
- + Check the **Limit responses to your data content** option.
- + Strictness set to 3.
+ + Verify the **Limit responses to your data content** option is selected.
+ + Strictness set to 3 or 4.
+ Retrieved documents set to 20. Given chunk sizes of 1024 tokens, a setting of 20 gives you roughly 20,000 tokens to use for generating responses. The tradeoff is query latency, but you can experiment with chat replay to find the right balance. 1. Send your first query. The chat models perform best in question and answer exercises. For example, "who gave the Gettysburg speech" or "when was the Gettysburg speech delivered". More complex queries, such as "why was Gettysburg important", perform better if the model has some latitude to answer (lower levels of strictness) or if semantic ranking is enabled.
- Queries that require deeper analysis, such as "how many speeches are in the vector store", might fail to return a response. In RAG pattern chat scenarios, information retrieval is keyword and similarity search against the query string, where the search engine looks for chunks having exact or similar terms, phrases, or construction. The payload might have insufficient data for the model to work with.
+ Queries that require deeper analysis or language understanding, such as "how many speeches are in the vector store" or "what's in this vector store", will probably fail to return a response. In RAG pattern chat scenarios, information retrieval is keyword and similarity search against the query string, where the search engine looks for chunks having exact or similar terms, phrases, or construction. The return payload might be insufficient for handling an open-ended question.
- Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store, which means a different approach, or modifying the generated code to allow for [exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) in the vector search configuration.
+ Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store, which means adopting an approach that allows more than 20 chunks. You could modify the generated code (assuming you [deploy the solution](/azure/ai-services/openai/use-your-data-quickstart#deploy-your-model)) to allow for [exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) on your queries.
:::image type="content" source="media/search-get-started-rag/chat-results.png" lightbox="media/search-get-started-rag/chat-results.png" alt-text="Screenshot of a chat session."::: ## Next steps
-Now that you're familiar with the benefits of Azure OpenAI Studio for scenario testing, review code samples that demonstrate the full range of APIs for RAG applications. Samples are available in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
+In the playground, it's easy to start over with different data and configurations and compare the results. If you didn't try **Hybrid + semantic** the first time, perhaps try again with [semantic ranking enabled](semantic-how-to-enable-disable.md).
+
+We also provide code samples that demonstrate the full range of APIs for RAG applications. Samples are available in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
## Clean up
spring-apps Github Actions Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/github-actions-key-vault.md
The credential you created above can get only general information about the Key
Go to the **Key Vault** dashboard in the Azure portal, select the **Access control** menu, then open the **Role assignments** tab. Select **Apps** for **Type** and `This resource` for **scope**. You should see the credential you created in the previous step:
-![Set access policy](./media/github-actions/key-vault1.png)
+![Set access policy](media/github-actions-key-vault/key-vault1.png)
Copy the credential name, for example, `azure-cli-2020-01-19-04-39-02`. Open the **Access policies** menu, then select the **Add Access Policy** link. Select `Secret Management` for **Template**, then select **Principal**. Paste the credential name in the **Principal**/**Select** input box:
-![Select](./media/github-actions/key-vault2.png)
+![Select](media/github-actions-key-vault/key-vault2.png)
Select the **Add** button in the **Add access policy** dialog, then select **Save**.
Again, results:
Copy the entire JSON string. Go back to the **Key Vault** dashboard. Open the **Secrets** menu, then select the **Generate/Import** button. Enter the secret name, such as `AZURE-CREDENTIALS-FOR-SPRING`. Paste the JSON credential string into the **Value** input box. The value input box is a one-line text field rather than a multi-line text area, but you can still paste the complete JSON string there.
-![Full scope credential](./media/github-actions/key-vault3.png)
+![Full scope credential](media/github-actions-key-vault/key-vault3.png)
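If you prefer the CLI over the portal for these steps, the following sketch grants the credential read access to secrets and stores the JSON credential string in the vault. The vault name, client ID, and local file name are placeholders.

```bash
# Grant the service principal (the credential created earlier) permission to read secrets.
az keyvault set-policy \
  --name <your-key-vault-name> \
  --spn <credential-client-id> \
  --secret-permissions get list

# Store the full JSON credential string as a secret; azure-credentials.json is a placeholder file.
az keyvault secret set \
  --vault-name <your-key-vault-name> \
  --name AZURE-CREDENTIALS-FOR-SPRING \
  --value "$(cat azure-credentials.json)"
```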
## Combine credentials in GitHub Actions
spring-apps How To Configure Health Probes Graceful Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-health-probes-graceful-termination.md
Azure Spring Apps offers default health probe rules for every application. This
- *Liveness probes* determine when to restart an application. For example, liveness probes can identify a deadlock, such as when an application is running but unable to make progress. Restarting the application in a deadlock state can make the application available despite errors. -- *Readiness probes* determine when an app instance is ready to start accepting traffic. For example, readiness probes can control which app instances are used as backends for the application. When an app instance isn't ready, it's removed from Kubernetes service discovery. For more information, see [Discover and register your Spring Boot applications](how-to-service-registration.md).
+- *Readiness probes* determine when an app instance is ready to start accepting traffic. For example, readiness probes can control which app instances are used as backends for the application. When an app instance isn't ready, it's removed from Kubernetes service discovery. For more information, see [Discover and register your Spring Boot applications](how-to-service-registration.md). For more information about service discovery with the Enterprise plan, see [Use Tanzu Service Registry](how-to-enterprise-service-registry.md).
- *Startup probes* determine when an application has started. A startup probe disables liveness and readiness checks until startup succeeds, ensuring that liveness and readiness probes don't interfere with application startup. You can use startup probes to perform liveness checks on slow starting applications, preventing the app from terminating before it's up and running.
spring-apps How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-ingress-to-app-tls.md
This article describes secure communications in Azure Spring Apps. The article a
The following picture shows the overall secure communication support in Azure Spring Apps. ## Secure communication model within Azure Spring Apps
To enable ingress-to-app TLS in the [Azure portal](https://portal.azure.com/), f
3. Select **Ingress-to-app TLS**. 4. Switch **Ingress-to-app TLS** to *Yes*. ### Verify ingress-to-app TLS status
spring-apps How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-system-assigned-managed-identity.md
An app can use its managed identity to get tokens to access other resources prot
You may need to [configure the target resource to allow access from your application](/entra/identity/managed-identities-azure-resources/howto-assign-access-portal). For example, if you request a token to access Key Vault, make sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault are rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that can use managed identities to access other services](/entra/identity/managed-identities-azure-resources/managed-identities-status).
-Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machine. We recommend using Java SDK or spring boot starters to acquire a token. See [How to use VM token](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token) for various code and script examples and guidance on important topics such as handling token expiration and HTTP errors.
+Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machine. We recommend using Java SDK or spring boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token).
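As a minimal sketch of what that shared endpoint looks like from inside an app instance, the following request acquires a token for Azure Key Vault. The resource URI is only an example; the response JSON contains the `access_token` field your code would use.

```bash
# Request a token from the managed identity endpoint (same flow as an Azure VM).
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
# The JSON response includes access_token, expires_on, resource, and token_type.
```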
## Disable system-assigned identity from an app
spring-apps How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-github-actions.md
The command should output a JSON object:
This example uses the [steeltoe sample on GitHub](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/steeltoe-sample). Fork the repository, open the GitHub repository page for the fork, and select the **Settings** tab. Open the **Secrets** menu, and select **New secret**: Set the secret name to `AZURE_CREDENTIALS` and its value to the JSON string that you found under the heading *Set up your GitHub repository and authenticate*. You can also get the Azure login credential from Key Vault in GitHub Actions as explained in [Authenticate Azure Spring with Key Vault in GitHub Actions](./github-actions-key-vault.md).
The command should output a JSON object:
This example uses the [PiggyMetrics](https://github.com/Azure-Samples/piggymetrics) sample on GitHub. Fork the sample, uncheck **Copy the Azure branch only**, open the GitHub repository page, and select the **Settings** tab. Open **Secrets** menu, and select **Add a new secret**: Set the secret name to `AZURE_CREDENTIALS` and its value to the JSON string that you found under the heading *Set up your GitHub repository and authenticate*. You can also get the Azure login credential from Key Vault in GitHub Actions as explained in [Authenticate Azure Spring with Key Vault in GitHub Actions](./github-actions-key-vault.md).
GitHub **Actions** should be enabled automatically after you push *.github/workf
To verify that the action has been enabled, select the **Actions** tab on the GitHub repository page: If your action runs in error, for example, if you haven't set the Azure credential, you can rerun checks after fixing the error. On the GitHub repository page, select **Actions**, select the specific workflow task, and then select the **Rerun checks** button to rerun checks: ## Next steps
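For reference, a JSON string in the shape that the `AZURE_CREDENTIALS` secret expects can be produced with a command along these lines; the service principal name and scope are placeholders, and `--sdk-auth` is the flag that emits the JSON format used by the login action.

```bash
# Create a service principal scoped to the resource group and print the JSON credential.
az ad sp create-for-rbac \
  --name my-spring-apps-github-actions \
  --role contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group> \
  --sdk-auth
```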
spring-apps How To Integrate Azure Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-integrate-azure-load-balancers.md
To integrate with Azure Spring Apps service, complete the following configuratio
1. **Override with new host name:** select **No**. 1. **Use custom probe**: select **Yes** and pick the custom probe created above.
- :::image type="content" source="media/how-to-integrate-azure-load-balancers/app-gateway-3.png" alt-text="Screenshot of the Azure portal that shows the Add Backend setting page." lightbox="media/how-to-integrate-azure-load-balancers/app-gateway-3.png":::
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/app-gateway-3.png" alt-text="Screenshot of the Azure portal that shows the Add Backend setting page." lightbox="media/how-to-integrate-azure-load-balancers/app-gateway-3.png":::
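If you script the gateway instead of using the portal, the custom probe referenced above might be created roughly like this; the gateway name, app host name, and probe settings are placeholders.

```bash
# Sketch of creating the custom health probe for the Azure Spring Apps backend.
az network application-gateway probe create \
  --resource-group <resource-group> \
  --gateway-name <app-gateway-name> \
  --name spring-apps-probe \
  --protocol Https \
  --host <app-name>.azuremicroservices.io \
  --path / \
  --interval 30 \
  --timeout 30 \
  --threshold 3
```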
## Integrate Azure Spring Apps with Azure Front Door
To integrate with Azure Spring Apps service and configure an origin group, use t
1. **Add origin group**. 1. Specify the backend endpoints by adding origins for the different Azure Spring Apps instances.
- :::image type="content" source="media/how-to-integrate-azure-load-balancers/front-door-1.png" alt-text="Screenshot of the Azure portal that shows the Add an origin group page with the Add an origin button highlighted." lightbox="media/how-to-integrate-azure-load-balancers/front-door-1.png":::
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/front-door-1.png" alt-text="Screenshot of the Azure portal that shows the Add an origin group page with the Add an origin button highlighted." lightbox="media/how-to-integrate-azure-load-balancers/front-door-1.png":::
1. Specify **origin type** as *Azure Spring Apps*. 1. Select your Azure Spring Apps instance for the **host name**. 1. Keep the **origin host header** empty, so that the incoming host header will be used towards the backend. For more information, see [Azure Front Door configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation#azure-front-door).
- :::image type="content" source="media/how-to-integrate-azure-load-balancers/front-door-2.png" alt-text="Screenshot of the Azure portal that shows the Add an origin page." lightbox="media/how-to-integrate-azure-load-balancers/front-door-2.png":::
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/front-door-2.png" alt-text="Screenshot of the Azure portal that shows the Add an origin page." lightbox="media/how-to-integrate-azure-load-balancers/front-door-2.png":::
## Next steps
spring-apps How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-permissions.md
The Developer role includes permissions to restart apps and see their log stream
1. In the search box, search for **Microsoft.app**. Select **Microsoft Azure Spring Apps**:
- :::image type="content" source="media/how-to-permissions/spring-cloud-permissions.png" alt-text="Screenshot that shows the results of searching for Microsoft.app." lightbox="media/how-to-permissions/spring-cloud-permissions.png":::
+ :::image type="content" source="media/how-to-permissions/permissions.png" alt-text="Screenshot that shows the results of searching for Microsoft.app." lightbox="media/how-to-permissions/permissions.png":::
1. Select the permissions for the Developer role.
spring-apps How To Set Up Sso With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-set-up-sso-with-azure-ad.md
You can also add redirect URIs after app registration by following these steps:
1. Select **Web**, then select **Add URI** under *Redirect URIs*. 1. Add a new redirect URI, then select **Save**. For more information on Application Registration, see [Quickstart: Register an application with the Microsoft identity platform](/entra/identity-platform/quickstart-register-app).
The application uses a client secret to authenticate itself in SSO workflow. You
## Configure scope
-The `scope` property of SSO is a list of scopes to be included in JWT identity tokens. They're often referred to permissions. Identity platform supports several OpenID Connect scopes, such as `openid`, `email` and `profile`. For more information, see the [OpenID Connect scopes](/entra/identity-platform/scopes-oidc#openid-connect-scopes) section of [Scopes and permissions in the Microsoft identity platform](/entra/identity-platform/scopes-oidc).
+The `scope` property of SSO is a list of scopes to be included in JWT identity tokens. They're often referred to as permissions. Identity platform supports several OpenID Connect scopes, such as `openid`, `email`, and `profile`. For more information, see the [OpenID Connect scopes](/entra/identity-platform/scopes-oidc#openid-connect-scopes) section of [Scopes and permissions in the Microsoft identity platform](/entra/identity-platform/scopes-oidc).
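As an illustration of where these scopes end up, the following Enterprise plan sketch passes them to VMware Spring Cloud Gateway. The command flags and values are assumptions based on a typical SSO setup, so verify them against the `az spring gateway update` reference before use.

```bash
# Hypothetical SSO configuration; client ID, secret, tenant ID, and scopes are placeholders.
az spring gateway update \
  --resource-group <resource-group> \
  --service <spring-apps-instance> \
  --client-id <application-client-id> \
  --client-secret <client-secret> \
  --issuer-uri https://login.microsoftonline.com/<tenant-id>/v2.0 \
  --scope "openid,profile,email"
```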
## Configure issuer URI
spring-apps How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-setup-autoscale.md
There are two options for Autoscale demand management:
In the Azure portal, choose how you want to scale. The following figure shows the **Custom autoscale** option and mode settings. ## Set up Autoscale settings for your application in Azure CLI
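A rough CLI sketch of such a setting is shown below; the deployment resource ID is a placeholder and the metric name in the rule condition is an assumption, so substitute a metric that your deployment actually emits.

```bash
# Create an autoscale setting on a Spring Apps deployment (resource ID is a placeholder).
az monitor autoscale create \
  --resource-group <resource-group> \
  --name demo-autoscale \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppPlatform/Spring/<service-name>/apps/<app-name>/deployments/<deployment-name>" \
  --min-count 1 --max-count 5 --count 1

# Add a scale-out rule; the metric name here is illustrative only.
az monitor autoscale rule create \
  --resource-group <resource-group> \
  --autoscale-name demo-autoscale \
  --scale out 1 \
  --condition "tomcat.global.request.total.count > 100 avg 5m"
```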
spring-apps How To Use Flush Dns Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-flush-dns-settings.md
Use the following steps to flush the DNS settings for an existing Azure Spring A
1. Select **Flush DNS settings (preview)**. ### [Azure CLI](#tab/azure-cli)
spring-apps How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-tls-certificate.md
You need to grant Azure Spring Apps access to your key vault before you import y
1. In the left navigation pane, select **Access policies**, then select **Create**. 1. Select **Certificate permissions**, then select **Get** and **List**.
- :::image type="content" source="media/how-to-use-tls-certificates/grant-key-vault-permission.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Permission pane showing and Get and List permissions highlighted." lightbox="media/how-to-use-tls-certificates/grant-key-vault-permission.png":::
+ :::image type="content" source="media/how-to-use-tls-certificate/grant-key-vault-permission.png" alt-text="Screenshot of the Azure portal that shows the Create an access policy page with Permission pane showing and Get and List permissions highlighted." lightbox="media/how-to-use-tls-certificate/grant-key-vault-permission.png":::
1. Under **Principal**, select your **Azure Spring Cloud Resource Provider**.
- :::image type="content" source="media/how-to-use-tls-certificates/select-service-principal.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Principal pane showing and Azure Spring Apps Resource Provider highlighted." lightbox="media/how-to-use-tls-certificates/select-service-principal.png":::
+ :::image type="content" source="media/how-to-use-tls-certificate/select-service-principal.png" alt-text="Screenshot of the Azure portal that shows the Create an access policy page with Principal pane showing and Azure Spring Apps Resource Provider highlighted." lightbox="media/how-to-use-tls-certificate/select-service-principal.png":::
1. Select **Review + Create**, then select **Create**.
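The equivalent access policy can also be granted from the CLI, roughly as follows; the vault name and the resource provider's object ID are placeholders you need to look up in your tenant.

```bash
# Grant the Azure Spring Apps resource provider read access to certificates and secrets.
az keyvault set-policy \
  --name <your-key-vault-name> \
  --object-id <azure-spring-apps-resource-provider-object-id> \
  --certificate-permissions get list \
  --secret-permissions get list
```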
To load a certificate into your application in Azure Spring Apps, start with the
1. From the left navigation pane of your app, select **Certificate management**. 1. Select **Add certificate** to choose certificates accessible for the app. ### Load a certificate from code
spring-apps Quickstart Automate Deployments Github Actions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-automate-deployments-github-actions-enterprise.md
The automation associated with the sample application requires a Storage account
1. This example uses the [fitness store](https://github.com/Azure-Samples/acme-fitness-store) sample on GitHub. Fork the sample, open the GitHub repository page, and then select the **Settings** tab. Open the **Secrets** menu, then select **Add a new secret**, as shown in the following screenshot.
- :::image type="content" source="media/github-actions/actions1.png" alt-text="Screenshot showing GitHub Settings Add new secret." lightbox="media/github-actions/actions1.png"
+ :::image type="content" source="media/quickstart-automate-deployments-github-actions-enterprise/actions1.png" alt-text="Screenshot showing GitHub Settings Add new secret." lightbox="media/quickstart-automate-deployments-github-actions-enterprise/actions1.png":::
1. Set the secret name to `AZURE_CREDENTIALS` and set its value to the JSON string that you found under the heading **Set up your GitHub repository and authenticate**.
- :::image type="content" source="media/github-actions/actions2.png" alt-text="Screenshot showing GitHub Settings Set secret data." lightbox="media/github-actions/actions2.png"
+ :::image type="content" source="media/quickstart-automate-deployments-github-actions-enterprise/actions2.png" alt-text="Screenshot showing GitHub Settings Set secret data." lightbox="media/quickstart-automate-deployments-github-actions-enterprise/actions2.png":::
1. Add the following secrets to GitHub Actions:
The automation associated with the sample application requires a Storage account
Now you can run GitHub Actions in your repository. The [provision workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/.github/workflows/provision.yml) provisions all resources necessary to run the example application. The following screenshot shows an example run: Each application has a [deploy workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/.github/workflows/catalog.yml) that will redeploy the application when changes are made to that application. The following screenshot shows some example output from the catalog service: The [cleanup workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/.github/workflows/cleanup.yml) can be manually run to delete all resources created by the `provision` workflow. The following screenshot shows the output: ## Clean up resources
spring-apps Quickstart Deploy Restful Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-restful-api-app.md
The following diagram shows the architecture of the system:
- A custom directory role that includes the [permission to grant permissions to applications](/entra/identity/role-based-access-control/custom-consent-permissions), for the permissions required by the application. For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal).-- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant).
The following diagram shows the architecture of the system:
- A custom directory role that includes the [permission to grant permissions to applications](/entra/identity/role-based-access-control/custom-consent-permissions), for the permissions required by the application. For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal).-- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant).
Use the following steps to access the RESTful APIs of the `ToDo` app in the Swag
## 7. Next steps > [!div class="nextstepaction"]
-> [Quickstart: Deploy an event-driven application to Azure Spring Apps](./quickstart-deploy-event-driven-app-standard-consumption.md)
+> [Quickstart: Deploy an event-driven application to Azure Spring Apps](quickstart-deploy-event-driven-app.md)
> [!div class="nextstepaction"]
-> [Quickstart: Deploy microservice applications to Azure Spring Apps](./quickstart-deploy-microservice-apps.md)
+> [Quickstart: Deploy microservice applications to Azure Spring Apps](quickstart-deploy-microservice-apps.md)
> [!div class="nextstepaction"]
-> [Structured application log for Azure Spring Apps](./structured-app-log.md)
+> [Structured application log for Azure Spring Apps](structured-app-log.md)
> [!div class="nextstepaction"]
-> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
+> [Map an existing custom domain to Azure Spring Apps](how-to-custom-domain.md)
> [!div class="nextstepaction"]
-> [Use Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
+> [Use Azure Spring Apps CI/CD with GitHub Actions](how-to-github-actions.md)
> [!div class="nextstepaction"]
-> [Automate application deployments to Azure Spring Apps](./how-to-cicd.md)
+> [Automate application deployments to Azure Spring Apps](how-to-cicd.md)
For more information, see the following articles:
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-mysql.md
Use [Service Connector](../service-connector/overview.md) to connect the app hos
1. Under **Settings**, select **Apps**, and then select the `customers-service` application from the list. 1. Select **Service Connector** from the left table of contents and select **Create**.
- :::image type="content" source="./media/quickstart-integrate-azure-database-mysql/create-service-connection.png" alt-text="Screenshot of the Azure portal, in the Azure Spring Apps instance, create a connection with Service Connector.":::
+ :::image type="content" source="media/quickstart-integrate-azure-database-mysql/create-service-connection.png" alt-text="Screenshot of the Azure portal, in the Azure Spring Apps instance, create a connection with Service Connector.":::
1. Select or enter the following settings in the table.
Use [Service Connector](../service-connector/overview.md) to connect the app hos
| **MySQL database** | *petclinic* | Select the database you created earlier. | | **Client type** | *SpringBoot* | Select the application stack that works with the target service you selected. |
- :::image type="content" source="./media/quickstart-integrate-azure-database-mysql/basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the basics tab in Service Connector.":::
+ :::image type="content" source="media/quickstart-integrate-azure-database-mysql/basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the basics tab in Service Connector.":::
1. Select **Next: Authentication** to select the authentication type. Then select **Connection string > Database credentials** and enter your database username and password.
Username and password validated. success
Azure Spring Apps connections are displayed under **Settings > Service Connector**. Select **Validate** to check your connection status, and select **Learn more** to review the connection validation details.
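A CLI sketch of the same connection is shown below for comparison; the `az spring connection create mysql-flexible` command and its parameters are assumptions here, so confirm them against the Service Connector CLI reference before relying on them.

```bash
# Hypothetical Service Connector setup for the customers-service app; all values are placeholders.
az spring connection create mysql-flexible \
  --resource-group <spring-apps-resource-group> \
  --service <spring-apps-instance> \
  --app customers-service \
  --target-resource-group <mysql-resource-group> \
  --server <mysql-server-name> \
  --database petclinic \
  --secret name=<database-username> secret=<database-password> \
  --client-type springBoot
```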
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
Azure Firewall provides the FQDN tag `AzureKubernetesService` to simplify the fo
## Azure Spring Apps optional FQDN for Application Insights
-You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or the Application Insights Agent to send data to the portal. For more information, see the [outgoing ports](../azure-monitor/ip-addresses.md#outgoing-ports) section of [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md).
+You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or the Application Insights Agent to send data to the portal. For more information, see the [Outgoing ports](../azure-monitor/ip-addresses.md#outgoing-ports) section of [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md).
## Next steps
static-web-apps Gitlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/gitlab.md
Now that the repository is created, you can create a static web app from the Azu
Next you add a workflow task responsible for building and deploying your site as you make changes. ### Add deployment token-
+When following the steps below, make sure you select `*` in the environments section. The default value might appear to cover all environments, but you need to open the dropdown and manually select `*`.
1. Go to the repository in GitLab. 1. Select **Settings**. 1. Select **CI/CD**.
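If you'd rather fetch the deployment token from the command line than copy it from the portal, something like the following works; the app and resource group names are placeholders.

```bash
# Print the static web app deployment token (API key) for use in the GitLab CI/CD variable.
az staticwebapp secrets list \
  --name <static-web-app-name> \
  --resource-group <resource-group> \
  --query "properties.apiKey" \
  --output tsv
```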
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
description: Learn about file shares hosted in Azure Files using the Network Fil
Previously updated : 10/16/2023 Last updated : 01/26/2024 # NFS file shares in Azure Files+ Azure Files offers two industry-standard file system protocols for mounting Azure file shares: the [Server Message Block (SMB)](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) protocol and the [Network File System (NFS)](https://en.wikipedia.org/wiki/Network_File_System) protocol, allowing you to pick the protocol that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file share with both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same FileStorage storage account. Azure Files offers enterprise-grade file shares that can scale up to meet your storage needs and can be accessed concurrently by thousands of clients. This article covers NFS Azure file shares. For information about SMB Azure file shares, see [SMB file shares in Azure Files](files-smb-protocol.md).
This article covers NFS Azure file shares. For information about SMB Azure file
> NFS Azure file shares aren't supported for Windows. Before using NFS Azure file shares in production, see [Troubleshoot NFS Azure file shares](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-nfs?toc=/azure/storage/files/toc.json) for a list of known issues. NFS access control lists (ACLs) aren't supported. ## Common scenarios+ NFS file shares are often used in the following scenarios: - Backing storage for Linux/UNIX-based applications, such as line-of-business applications written using Linux or POSIX file system APIs (even if they don't require POSIX-compliance).
NFS file shares are often used in the following scenarios:
- New application and service development, particularly if that application or service has a requirement for random I/O and hierarchical storage. ## Features+ - Fully POSIX-compliant file system. - Hard link support. - Symbolic link support.
NFS file shares are often used in the following scenarios:
> Creating a hard link from an existing symbolic link isn't currently supported. ## Security and networking+ All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the SMB and NFS protocols.
-For encryption in transit, Azure provides a layer of encryption for all data in transit between Azure datacenters using [MACSec](https://en.wikipedia.org/wiki/IEEE_802.1AE). Through this, encryption exists when data is transferred between Azure datacenters.
+For encryption in transit, Azure provides a layer of encryption for all data in transit between Azure datacenters using [MACSec](https://en.wikipedia.org/wiki/IEEE_802.1AE). Through this, encryption exists when data is transferred between Azure data centers.
Unlike Azure Files using the SMB protocol, file shares using the NFS protocol don't offer user-based authentication. Authentication for NFS shares is based on the configured network security rules. Due to this, to ensure only secure connections are established to your NFS share, you must set up either a private endpoint or a service endpoint for your storage account.
The status of items that appear in this table might change over time as support
| [Azure file share soft delete](storage-files-prevent-file-share-deletion.md) | ⛔ | | [Azure File Sync](../file-sync/file-sync-introduction.md)| ⛔ | | [Azure file share backups](../../backup/azure-file-share-backup-overview.md)| ⛔ |
-| [Azure file share snapshots](storage-snapshots-files.md)| ✔️ (preview) |
+| [Azure file share snapshots](storage-snapshots-files.md)| ✔️ |
| [GRS or GZRS redundancy types](storage-files-planning.md#redundancy)| ⛔ | | [AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json)| ⛔ | | Azure Storage Explorer| ⛔ |
The status of items that appear in this table might change over time as support
[!INCLUDE [files-nfs-regional-availability](../../../includes/files-nfs-regional-availability.md)] ## Performance+ NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned model](understanding-billing.md#provisioned-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress might face additional latencies due to the high number of open and close operations. > [!NOTE] > You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md). ## Workloads+ > [!IMPORTANT] > Before using NFS Azure file shares in production, see [Troubleshoot NFS Azure file shares](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-nfs?toc=/azure/storage/files/toc.json) for a list of known issues. NFS has been validated to work well with workloads such as SAP application layer, database backups, database replication, messaging queues, home directories for general purpose file servers, and content repositories for application workloads. ## Next steps+ - [Create an NFS file share](storage-files-how-to-create-nfs-shares.md) - [Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS](../common/nfs-comparison.md?toc=/azure/storage/files/toc.json)
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 10/23/2023 Last updated : 01/28/2024
-# What's new in Azure Files
-Azure Files is updated regularly to offer new features and enhancements. This article provides detailed information about what's new in Azure Files and Azure File Sync.
+# What's new in Azure Files and Azure File Sync
-## What's new in 2023
+Azure Files and Azure File Sync are updated regularly to offer new features and enhancements. This article provides detailed information about what's new in Azure Files and Azure File Sync.
-### 2023 quarter 4 (October, November, December)
+## What's new in 2024
+
+### 2024 quarter 1 (January, February, March)
+
+#### Snapshot support for NFS Azure premium file shares is generally available
+
+Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this feature can perform share-level Snapshot management operations via REST API, PowerShell, and Azure CLI. This feature is now available in all Azure public cloud regions. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots).
+
+#### Sync upload performance improvements for Azure File Sync
-#### Snapshot support for NFS Azure premium file shares is in public preview
+Sync upload performance has improved, and performance numbers will be posted when they are available. This improvement will mainly benefit file share migrations (initial upload) and high churn events on the server in which a large number of files need to be uploaded.
-Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this preview feature can perform share-level Snapshot management operations via REST API, PowerShell, and Azure CLI.
+#### Expanded character support for Azure File Sync
-This preview feature is now available in all Azure public cloud regions. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview).
+Azure File Sync now supports an expanded list of characters. This expansion allows users to create and sync SMB file shares with file and directory names on par with the NTFS file system for valid Unicode characters. For more information on unsupported characters, refer to the documentation [here](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=%2Fazure%2Fstorage%2Ffile-sync%2Ftoc.json&tabs=portal1%2Cazure-portal#handling-unsupported-characters).
+
+#### New cloud tiering low disk space mode metric for Azure File Sync
+
+You can now configure an alert to let you know if a server is in low disk space mode. To learn more, see [Monitor Azure File Sync](../file-sync/file-sync-monitoring.md).
+
+## What's new in 2023
+
+### 2023 quarter 4 (October, November, December)
#### Azure Files now supports all valid Unicode characters
Azure Files geo-redundancy for large file shares preview significantly improves
Azure Files now offers a 99.99 percent SLA per file share for all Azure Files Premium shares, regardless of protocol (SMB, NFS, and REST) or redundancy type. This means that you can benefit from this SLA immediately, without any configuration changes or extra costs. If the availability drops below the guaranteed 99.99 percent uptime, you're eligible for service credits.
-#### Azure Active Directory support for Azure Files REST API with OAuth authentication is in public preview
+#### Support for Azure Files REST API with OAuth authentication is in public preview
This preview enables share-level read and write access to SMB Azure file shares for users, groups, and managed identities when accessing file share data through the REST API. Cloud native and modern applications that use REST APIs can utilize identity-based authentication and authorization to access file shares. For more information, [read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-introducing-azure-ad-support-for-azure-files-smb/ba-p/3826733).
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files
description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 01/05/2024 Last updated : 01/26/2024
* <a id="backup-nfs-data"></a> **How do I backup data stored in NFS shares?**
- Backing up your data on NFS shares can either be orchestrated using familiar tooling like rsync or products from one of our third-party backup partners. Multiple backup partners including [Commvault](https://documentation.commvault.com/https://docsupdatetracker.net/index.html), [Veeam](https://www.veeam.com/blog/?p=123438), and [Veritas](https://players.brightcove.net/4396107486001/default_default/https://docsupdatetracker.net/index.html?videoId=6189967101001) have extended their solutions to work with both SMB 3.x and NFS 4.1 for Azure Files. You can also use [NFS file share snapshots (preview)](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview).
+ Backing up your data on NFS shares can either be orchestrated using familiar tooling like rsync or products from one of our third-party backup partners. Multiple backup partners including [Commvault](https://documentation.commvault.com/https://docsupdatetracker.net/index.html), [Veeam](https://www.veeam.com/blog/?p=123438), and [Veritas](https://players.brightcove.net/4396107486001/default_default/https://docsupdatetracker.net/index.html?videoId=6189967101001) have extended their solutions to work with both SMB 3.x and NFS 4.1 for Azure Files. You can also use [NFS Azure file share snapshots](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots).
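As a minimal sketch of the rsync approach, assuming both the NFS share and the backup target are already mounted, a one-way copy could look like this; both paths are placeholders.

```bash
# Copy the mounted NFS share to a backup location, deleting files removed from the source.
rsync -avh --delete /mount/<storage-account>/<share-name>/ /mount/<backup-target>/
```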
* <a id="migrate-nfs-data"></a> **Can I migrate existing data to an NFS share?**
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
description: Learn how to mount a Network File System (NFS) Azure file share on
Previously updated : 12/04/2023 Last updated : 01/28/2024 - # Mount NFS Azure file share on Linux
Azure file shares can be mounted in Linux distributions using either the Server Message Block (SMB) protocol or the Network File System (NFS) protocol. This article is focused on mounting with NFS. For details on mounting SMB Azure file shares, see [Use Azure Files with Linux](storage-how-to-use-files-linux.md). For details on each of the available protocols, see [Azure file share protocols](storage-files-planning.md#available-protocols). ## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
For more information, enter the command `man fstab` from the Linux command line.
If your mount failed, it's possible that your private endpoint wasn't set up correctly or isn't accessible. For details on confirming connectivity, see [Verify connectivity](storage-files-networking-endpoints.md#verify-connectivity).
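For context, a typical NFS mount of an Azure file share looks roughly like the following; the storage account and share names are placeholders, and the `nconnect` option only applies on Linux kernels that support it.

```bash
# Mount an NFS Azure file share over a private endpoint; names are placeholders.
sudo mkdir -p /mount/<storage-account-name>/<share-name>
sudo mount -t nfs <storage-account-name>.file.core.windows.net:/<storage-account-name>/<share-name> \
  /mount/<storage-account-name>/<share-name> \
  -o vers=4,minorversion=1,sec=sys,nconnect=4
```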
-## NFS file share snapshots (preview)
+## NFS file share snapshots
-Customers using NFS Azure file shares can now create, list, and delete NFS Azure file share snapshots. This capability allows users to roll back entire file systems or recover files that were accidentally deleted or corrupted.
+Customers using NFS Azure file shares can create, list, and delete NFS Azure file share snapshots. This capability allows users to roll back entire file systems or recover files that were accidentally deleted or corrupted. This feature is now available in all Azure public cloud regions.
> [!IMPORTANT] > You should mount your file share before creating snapshots. If you create a new NFS file share and take snapshots before mounting the share, attempting to list the snapshots for the share will return an empty list. We recommend deleting any snapshots taken before the first mount and re-creating them after you've mounted the share. ### Limitations
-Only file management APIs (`AzRmStorageShare`) are supported for NFS Azure file shares. File data plane APIs (`AzStorageShare`) aren't supported.
+Only file management APIs (`AzRmStorageShare`) are supported for NFS Azure file share snapshots. File data plane APIs (`AzStorageShare`) aren't supported.
Azure Backup isn't currently supported for NFS file shares. AzCopy isn't currently supported for NFS file shares. To copy data from an NFS Azure file share or share snapshot, use file system copy tools such as rsync or fpsync.
-### Regional availability
+### Create a snapshot
-NFS Azure file share snapshots preview is now available in all Azure public cloud regions.
+You can create a snapshot of an NFS Azure file share using the Azure portal, Azure PowerShell, or Azure CLI. A share can support the creation of up to 200 share snapshots.
-### Create a snapshot
+# [Azure portal](#tab/portal)
+
+To create a snapshot of an existing file share, sign in to the Azure portal and follow these steps.
+
+1. In the search box at the top of the Azure portal, type and select *storage accounts*.
+
+1. Select the FileStorage storage account that contains the NFS Azure file share that you want to take a snapshot of.
-You can create a snapshot of an NFS Azure file share using Azure PowerShell or Azure CLI. A share can support the creation of up to 200 share snapshots.
+1. Select **Data storage** > **File shares**.
+
+1. Select the file share that you want to snapshot, then select **Operations** > **Snapshots**.
+
+1. Select **+ Add snapshot**. Add an optional comment, and select **OK**.
+
+ :::image type="content" source="media/storage-files-how-to-mount-nfs-shares/add-file-share-snapshot.png" alt-text="Screenshot of adding a file share snapshot.":::
# [Azure PowerShell](#tab/powershell)
az storage share snapshot --name <file-share-name> --account-name <storage-accou
```
-### List file shares and snapshots
+### List file share snapshots
+
+You can list all the snapshots for a file share using the Azure portal, Azure PowerShell, or Azure CLI.
+
+# [Azure portal](#tab/portal)
+
+To list all the snapshots for an existing file share, sign in to the Azure portal and follow these steps.
-You can list all file shares in a storage account, including the share snapshots, using Azure PowerShell or Azure CLI.
+1. In the search box at the top of the Azure portal, type and select *storage accounts*.
+
+1. Select the FileStorage storage account that contains the NFS Azure file share that you want to list the snapshots of.
+
+1. Select **Data storage** > **File shares**.
+
+1. Select the file share for which you want to list the snapshots.
+
+1. Select **Operations** > **Snapshots**, and any existing snapshots for the file share will be listed.
# [Azure PowerShell](#tab/powershell)
az storage share list --account-name <storage-account-name> --include-snapshots
### Delete snapshots
-Existing share snapshots are never overwritten. They must be deleted explicitly. You can delete share snapshots using Azure PowerShell or Azure CLI.
+Existing share snapshots are never overwritten. They must be deleted explicitly. You can delete share snapshots using the Azure portal, Azure PowerShell, or Azure CLI.
+
+# [Azure portal](#tab/portal)
+
+To delete a snapshot of an existing file share, sign in to the Azure portal and follow these steps.
+
+1. In the search box at the top of the Azure portal, type and select *storage accounts*.
+
+1. Select the FileStorage storage account that contains the NFS Azure file share for which you want to delete snapshots.
+
+1. Select **Data storage** > **File shares**.
+
+1. Select the file share for which you want to delete one or more snapshots, then select **Operations** > **Snapshots**. Any existing snapshots for the file share will be listed.
+
+1. Select the snapshot(s) that you want to delete, and then select **Delete**.
+
+ :::image type="content" source="media/storage-files-how-to-mount-nfs-shares/delete-file-share-snapshot.png" alt-text="Screenshot of deleting file share snapshots.":::
# [Azure PowerShell](#tab/powershell)
storage Storage Snapshots Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md
description: A share snapshot is a read-only version of an Azure file share that
Previously updated : 10/16/2023 Last updated : 01/26/2024 # Overview of share snapshots for Azure Files
-Azure Files provides the capability to take snapshots of SMB file shares. Share snapshots capture the share state at that point in time. This article describes the capabilities that file share snapshots provide and how you can take advantage of them in your use case.
-Snapshots for NFS file shares are currently in [public preview](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview) with limited regional availability.
+Azure Files provides the capability to take snapshots of file shares. Share snapshots capture the share state at that point in time. This article describes the capabilities that file share snapshots provide and how you can take advantage of them in your use case.
## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
Snapshots for NFS file shares are currently in [public preview](storage-files-ho
### Protection against application error and data corruption
-Applications that use file shares perform operations such as writing, reading, storage, transmission, and processing. If an application is misconfigured or an unintentional bug is introduced, accidental overwrite or damage can happen to a few blocks. To help protect against these scenarios, you can take a share snapshot before you deploy new application code. If a bug or application error is introduced with the new deployment, you can go back to a previous version of your data on that file share.
+Applications that use file shares perform operations such as writing, reading, storage, transmission, and processing. If an application is misconfigured or an unintentional bug is introduced, accidental overwrite or damage can happen to a few blocks. To help protect against these scenarios, you can take a share snapshot before you deploy new application code. If a bug or application error is introduced with the new deployment, you can go back to a previous version of your data on that file share.
### Protection against accidental deletions or unintended changes
After you create a file share, you can periodically create a share snapshot of t
## Capabilities
-A share snapshot is a point-in-time, read-only copy of your data. Share snapshot capability is provided at the file share level. Retrieval is provided at the individual file level, to allow for restoring individual files. You can restore a complete file share by using SMB, NFS (preview), REST API, the Azure portal, the client library, or PowerShell/CLI.
+A share snapshot is a point-in-time, read-only copy of your data. Share snapshot capability is provided at the file share level. Retrieval is provided at the individual file level, to allow for restoring individual files. You can restore a complete file share by using SMB, NFS, REST API, the Azure portal, the client library, or PowerShell/CLI.
-You can view snapshots of a share by using the REST API, SMB, or NFS (preview). You can retrieve the list of versions of the directory or file, and you can mount a specific version directly as a drive (only available on Windows - see [Limits](#limits)).
+You can view snapshots of a share by using the REST API, SMB, or NFS. You can retrieve the list of versions of the directory or file, and you can mount a specific version directly as a drive (only available on Windows - see [Limits](#limits)).
After a share snapshot is created, it can be read, copied, or deleted, but not modified. You can't copy a whole share snapshot to another storage account. You have to do that file by file, by using AzCopy or other copying mechanisms.
Snapshots don't count towards the maximum share size limit, which is 100 TiB for
## Limits
-The maximum number of share snapshots that Azure Files allows today is 200 per share. After 200 share snapshots, you must delete older share snapshots in order to create new ones. You can retain snapshots for up to 10 years.
+The maximum number of share snapshots that Azure Files allows is 200 per share. After 200 share snapshots, you must delete older share snapshots in order to create new ones. You can retain snapshots for up to 10 years.
There's no limit to the simultaneous calls for creating share snapshots. There's no limit to the amount of space that share snapshots of a particular file share can consume.
-Taking snapshots of NFS Azure file shares is currently in [public preview](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview) with limited regional availability. The preview only supports management APIs (`AzRmStorageShare`), not data plane APIs (`AzStorageShare`), allowing users to create, list, and delete snapshots of NFS Azure file shares.
+Only file management APIs (`AzRmStorageShare`) are supported for NFS Azure file share snapshots. File data plane APIs (`AzStorageShare`) aren't supported.
## Copying data back to a share from share snapshot Copy operations that involve files and share snapshots follow these rules:
-You can copy individual files in a file share snapshot over to its base share or any other location. You can restore an earlier version of a file or restore the complete file share by copying file by file from the share snapshot. The share snapshot is not promoted to base share.
+You can copy individual files in a file share snapshot over to its base share or any other location. You can restore an earlier version of a file or restore the complete file share by copying file by file from the share snapshot. The share snapshot isn't promoted to base share.
The share snapshot remains intact after copying, but the base file share is overwritten with a copy of the data that was available in the share snapshot. All the restored files count toward "changed content."
Before you deploy the share snapshot scheduler, carefully consider your share sn
Share snapshots provide only file-level protection. Share snapshots don't prevent fat-finger deletions on a file share or storage account. To help protect a storage account from accidental deletions, you can either [enable soft delete](storage-files-prevent-file-share-deletion.md), or lock the storage account and/or the resource group. ## See also+ - Working with share snapshots in: - [Azure file share backup](../../backup/azure-file-share-backup-overview.md) - [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare)
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
To create your data warehouse solution, you can choose from different kinds of i
| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Logi Analytics](https://insightsoftware.com/logi-analytics/)<br>| | :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Report**<br>Logi Report is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Logi Report](https://insightsoftware.com/logi-analytics/logi-report/)<br> | | :::image type="content" source="./media/business-intelligence/looker_logo.png" alt-text="The logo of Looker."::: |**Looker for Business Intelligence**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Looker for BI](https://looker.com/)<br> [Looker Analytics Platform Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
-| :::image type="content" source="./media/business-intelligence/microstrategy_logo.png" alt-text="The logo of Microstrategy."::: |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensures you have everything you need to extend access to analytics across every team.|[MicroStrategy](https://www.microstrategy.com/en/business-intelligence)<br> [MicroStrategy Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud)<br> |
+| :::image type="content" source="./media/business-intelligence/microstrategy_logo.png" alt-text="The logo of Microstrategy."::: |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensure you have everything you need to extend access to analytics across every team.|[MicroStrategy](https://www.microstrategy.com/enterprise-analytics)<br> [MicroStrategy Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud)<br> |
| :::image type="content" source="./media/business-intelligence/mode-logo.png" alt-text="The logo of Mode Analytics."::: |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Mode](https://mode.com/)<br> | | :::image type="content" source="./media/business-intelligence/pyramid-logo.png" alt-text="The logo of Pyramid Analytics."::: |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help, on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Pyramid Analytics](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Pyramid Analytics in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020-25-102) | | :::image type="content" source="./media/business-intelligence/qlik_logo.png" alt-text="The logo of Qlik."::: |**Qlik Sense**<br>Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions. |[Qlik Sense](https://www.qlik.com/us/products/qlik-sense)<br> [Qlik Sense in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense) |
virtual-network Virtual Network Troubleshoot Cannot Delete Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-cannot-delete-vnet.md
# Aborts a pending classic-to-Resource Manager migration of the virtual network, which otherwise blocks deletion (classic Azure PowerShell module).
Move-AzureVirtualNetwork -VirtualNetworkName "Name" -Abort
### Check whether the virtual network was used by a web app for VNet integration
-If the virtual network was integrated with a web app in the past, then the web app was deleted without disconnecting the VNet integration, see [Deleting the App Service plan or web app before disconnecting the VNet integration](https://github.com/MicrosoftDocs/azure-docs/blob/046310ca15df6c82612b11971b9481b98125dd64/includes/app-service-web-vnet-troubleshooting.md).
+If the virtual network was integrated with a web app in the past, then the web app was deleted without disconnecting the VNet integration, see [Deleting the App Service plan or web app before disconnecting the VNet integration](../azure-functions/functions-networking-options.md#troubleshooting).
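As a quick check from the command line, you can also inspect the integration subnet for leftover delegations or service association links before retrying the delete. The following is a minimal sketch; the resource names are hypothetical, and the `serviceAssociationLinks` property is assumed to be populated when an App Service integration was left behind.

```azurecli
# Minimal sketch with hypothetical resource names: inspect the subnet for leftover
# App Service delegations or service association links that can block VNet deletion.
az network vnet subnet show \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name integration-subnet \
  --query "{delegations:delegations, serviceAssociationLinks:serviceAssociationLinks}"
```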
## Next steps
virtual-wan Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md
The following section details the configuration of metric-based alerts only. How
### ExpressRoute gateway
-This section of the article focuses on metric-based alerts. There are no diagnostic logs currently available for Virtual WAN ExpressRoute gateways. In addition to the alerts described below, which focus on the gateway component, we recommend that you use the available metrics, logs, and tools to monitor the ExpressRoute circuit. To learn more about ExpressRoute monitoring, see [ExpressRoute monitoring, metrics, and alerts](../expressroute/expressroute-monitoring-metrics-alerts.md). To learn about how you can use the ExpressRoute Traffic Collector tool, see [Configure ExpressRoute Traffic Collector for ExpressRoute Direct](../expressroute/how-to-configure-traffic-collector.md).
+The following section focuses on metric-based alerts. In addition to the alerts described below, which focus on the gateway component, we recommend that you use the available metrics, logs, and tools to monitor the ExpressRoute circuit. To learn more about ExpressRoute monitoring, see [ExpressRoute monitoring, metrics, and alerts](../expressroute/expressroute-monitoring-metrics-alerts.md). To learn about how you can use the ExpressRoute Traffic Collector tool, see [Configure ExpressRoute Traffic Collector for ExpressRoute Direct](../expressroute/how-to-configure-traffic-collector.md).
**Design checklist - metric alerts**
-* Create alert rule for Bits Received Per Second.
+* Create alert rule for bits received per second.
* Create alert rule for CPU overutilization.
-* Create alert rule for Packets per Second.
+* Create alert rule for packets per second.
* Create alert rule for number of routes advertised to peer.
* Create alert rule for number of routes learned from peer.
* Create alert rule for high frequency in route changes.
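As one illustration of the checklist above, the following is a minimal Azure CLI sketch of an alert rule for CPU overutilization. The resource names are hypothetical, and the metric name `ExpressRouteGatewayCpuUtilization` is an assumption to verify against the metrics your gateway actually exposes.

```azurecli
# Minimal sketch with hypothetical names; verify the metric name against your gateway's
# available metrics before relying on this alert rule.
az monitor metrics alert create \
  --name ergw-cpu-overutilization \
  --resource-group rg-vwan \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/rg-vwan/providers/Microsoft.Network/expressRouteGateways/ergw-hub1" \
  --condition "avg ExpressRouteGatewayCpuUtilization > 80" \
  --description "ExpressRoute gateway CPU utilization above 80 percent"
```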
This section of the article focuses on metric-based alerts. There are no diagnos
## Virtual hub
-We're working to support alerts based on virtual hub metrics soon. Currently, you can retrieve information for the Metrics, but alerting is unsupported. There are no diagnostic logs available for virtual hubs at this time.
+The following section focuses on metric-based alerts for virtual hubs.
+
+**Design checklist - metric alerts**
+
+* Create alert rule for BGP peer status.
+
+|Recommendation | Description|
+|---|---|
+|Create alert rule to monitor BGP peer status.| Select the **BGP Peer Status** metric when creating the alert rule. Using a **static** threshold, choose the **Average** aggregation type and configure the alert to be triggered whenever the value is **less than 1**.<br><br> This will allow you to identify when the virtual hub router is having connectivity issues with ExpressRoute, Site-to-Site VPN, and Point-to-Site VPN gateways deployed in the hub.|
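The following is a minimal Azure CLI sketch of the alert rule described in the table. The resource names are hypothetical, and the metric is assumed to surface as `BgpPeerStatus` on the virtual hub resource.

```azurecli
# Minimal sketch with hypothetical names; fires when the average BGP peer status
# reported by the virtual hub router drops below 1.
az monitor metrics alert create \
  --name vhub-bgp-peer-down \
  --resource-group rg-vwan \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/rg-vwan/providers/Microsoft.Network/virtualHubs/hub1" \
  --condition "avg BgpPeerStatus < 1" \
  --description "Virtual hub router BGP peer status dropped below 1"
```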
+
## Azure Firewall
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Additional things to note:
* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you'll need to update your virtual network connection after the virtual hub upgrade (for example, you can configure the virtual network connection to propagate to a dummy label).
+* If your hub is connected to a large number of spoke virtual networks (60 or more), you may notice that one or more spoke VNet peerings enter a failed state after the upgrade. To restore these peerings to a successful state, configure the virtual network connections to propagate to a dummy label, or delete and recreate the affected VNet connections.
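A minimal Azure CLI sketch of the dummy-label workaround follows. The names are hypothetical, and the `--labels` parameter on `az network vhub connection update` is an assumption; if it isn't available in your CLI version, delete and recreate the affected connection instead.

```azurecli
# Minimal sketch with hypothetical names; repoints the connection's propagation to a
# placeholder label, which can nudge a failed spoke peering back to a succeeded state.
az network vhub connection update \
  --name spoke42-conn \
  --vhub-name hub1 \
  --resource-group rg-vwan \
  --labels dummy
```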
+
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
The route limit for OpenVPN clients is 1000.
vpn-gateway Vpn Gateway About Compliance Crypto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-compliance-crypto.md
description: Learn how to configure Azure VPN gateways to satisfy cryptographic
Previously updated : 05/02/2023
Last updated : 01/26/2024