Updates from: 07/06/2024 01:09:43
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
For how to use Pronunciation Assessment in streaming mode in your own applicatio
## Set configuration parameters ::: zone pivot="programming-language-go"+ > [!NOTE] > Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide. Select another programming language for your solution.++
+In the `SpeechRecognizer`, you can specify the language that you're learning or practicing pronouncing. The default locale is `en-US`. The following sample code shows how to specify the learning language for pronunciation assessment in your own application.
++
+```csharp
+var recognizer = new SpeechRecognizer(config, "en-US", audioInput);
+```
+++
+```cpp
+auto recognizer = SpeechRecognizer::FromConfig(config, "en-US", audioConfig);
+```
+++
+```java
+SpeechRecognizer recognizer = new SpeechRecognizer(config, "en-US", audioInput);
+```
+ ::: zone-end
-In the `SpeechRecognizer`, you can specify the language to learn or practice improving pronunciation. The default locale is `en-US`. To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
+
+```python
+speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, language="en-US", audio_config=audio_config)
+```
+++
+```javascript
+speechConfig.speechRecognitionLanguage = "en-US";
+```
+++
+```objectivec
+SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"en-US" audioConfiguration:pronAudioSource];
+```
+++
+```swift
+let reco = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, language: "en-US", audioConfiguration: audioInput)
+```
+++ > [!TIP] > If you aren't sure which locale to set for a language that has multiple locales, try each locale separately. For instance, for Spanish, try `es-ES` and `es-MX`. Determine which locale scores higher for your scenario.
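As a minimal, hypothetical sketch of the tip above: if you run the same audio through a separate assessment per candidate locale and collect the resulting accuracy scores, picking the winning locale is a one-liner. The scores below are illustrative placeholders, not real SDK output.

```python
def pick_best_locale(scores):
    """Return the locale with the highest pronunciation accuracy score."""
    return max(scores, key=scores.get)

# Hypothetical accuracy scores gathered by running the same audio through
# the recognizer once per locale (es-ES, then es-MX).
scores = {"es-ES": 78.5, "es-MX": 84.2}
print(pick_best_locale(scores))  # es-MX
```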
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
With the cross-lingual feature, you can transfer your custom neural voice model
# [Pronunciation assessment](#tab/pronunciation-assessment)
-The table in this section summarizes the 32 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 31 more languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. If you're interested in languages not listed in the following table, fill out this [intake form](https://aka.ms/speechpa/intake) for further assistance.
+The table in this section summarizes the 33 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 32 more languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. You should specify the language that you're learning or practicing pronouncing. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine which one achieves the highest score for your specific scenario. If you're interested in languages not listed in the following table, fill out this [intake form](https://aka.ms/speechpa/intake) for further assistance.
[!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)]
aks Advanced Network Observability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/advanced-network-observability-cli.md
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
1. Set up port forwarding for Hubble UI using the `kubectl port-forward` command. ```azurecli-interactive
- kubectl port-forward svc/hubble-ui 12000:80
+ kubectl -n kube-system port-forward svc/hubble-ui 12000:80
``` 1. Access Hubble UI by entering `http://localhost:12000/` into your web browser.
aks Cluster Autoscaler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler-overview.md
# Cluster autoscaling in Azure Kubernetes Service (AKS) overview
-To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
+To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects unscheduled pods, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes that don't have any scheduled pods and scales down the number of nodes as needed.
This article helps you understand how the cluster autoscaler works in AKS. It also provides guidance, best practices, and considerations when configuring the cluster autoscaler for your AKS workloads. If you want to enable, disable, or update the cluster autoscaler for your AKS workloads, see [Use the cluster autoscaler in AKS](./cluster-autoscaler.md).
Clusters often need a way to scale automatically to adjust to changing applicati
:::image type="content" source="media/cluster-autoscaler/cluster-autoscaler.png" alt-text="Screenshot of how the cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands.":::
-It's a common practice to enable cluster autoscaler for nodes and either the Vertical Pod Autoscaler or Horizontal Pod Autoscaler for pods. When you enable the cluster autoscaler, it applies the specified scaling rules when the node pool size is lower than the minimum or greater than the maximum. The cluster autoscaler waits to take effect until a new node is needed in the node pool or until a node might be safely deleted from the current node pool. For more information, see [How does scale down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
+It's a common practice to enable cluster autoscaler for nodes and either the Vertical Pod Autoscaler or Horizontal Pod Autoscaler for pods. When you enable the cluster autoscaler, it applies the specified scaling rules when the node pool size is lower than the minimum node count, up to the maximum node count. The cluster autoscaler waits to take effect until a new node is needed in the node pool or until a node might be safely deleted from the current node pool. For more information, see [How does scale down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
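As a hedged illustration of the minimum and maximum node counts mentioned above (resource names are placeholders), the cluster autoscaler's scaling window is set when you enable it on a node pool:

```azurecli-interactive
# Enable the cluster autoscaler on an existing node pool with a
# scaling window of 1 to 5 nodes (resource names are placeholders).
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```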
## Best practices and considerations
It's a common practice to enable cluster autoscaler for nodes and either the Ver
* To **effectively run workloads concurrently on both Spot and Fixed node pools**, consider using [*priority expanders*](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders). This approach allows you to schedule pods based on the priority of the node pool. * Exercise caution when **assigning CPU/Memory requests on pods**. The cluster autoscaler scales up based on pending pods rather than CPU/Memory pressure on nodes. * For **clusters concurrently hosting both long-running workloads, like web apps, and short/bursty job workloads**, we recommend separating them into distinct node pools with [Affinity Rules](./operator-best-practices-advanced-scheduler.md#node-affinity)/[expanders](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) or using [PriorityClass](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) to help prevent unnecessary node drain or scale down operations.
-* In an autoscaler-enabled node pool, scale down nodes by removing workloads, instead of manually reducing the node count. This can be problematic if the node pool is already at maximum capacity or if there are active workloads running on the nodes, potentially causing unexpected behavior by the cluster autoscaler
+* In an autoscaler-enabled node pool, scale down nodes by removing workloads, instead of manually reducing the node count. This can be problematic if the node pool is already at maximum capacity or if there are active workloads running on the nodes, potentially causing unexpected behavior by the cluster autoscaler.
* Nodes don't scale up if pods have a PriorityClass value below -10. Priority -10 is reserved for [overprovisioning pods](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler). For more information, see [Using the cluster autoscaler with Pod Priority and Preemption](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption). * **Don't combine other node autoscaling mechanisms**, such as Virtual Machine Scale Set autoscalers, with the cluster autoscaler. * The cluster autoscaler **might be unable to scale down if pods can't move, such as in the following situations**:
It's a common practice to enable cluster autoscaler for nodes and either the Ver
* A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node. For more information, see [What types of pods can prevent the cluster autoscaler from removing a node?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node). >[!IMPORTANT]
-> **Do not make changes to individual nodes within the autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints and system pods running on them.
+> **Don't make changes to individual nodes within the autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints and system pods running on them.
+* The cluster autoscaler isn't responsible for enforcing a "maximum node count" in a cluster node pool irrespective of pod scheduling considerations. If any non-cluster autoscaler actor sets the node pool count to a number beyond the cluster autoscaler's configured maximum, the cluster autoscaler doesn't automatically remove nodes. The cluster autoscaler's scale down behavior remains scoped to removing only nodes that have no scheduled pods. The sole purpose of the cluster autoscaler's max node count configuration is to enforce an upper limit for scale up operations; it doesn't have any effect on scale down considerations.
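As a minimal sketch of the overprovisioning note above, a placeholder pod's `PriorityClass` at the reserved -10 value might look like this (the name is illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
# -10 is reserved for overprovisioning pods; pods at or below this
# priority don't trigger a scale up.
value: -10
globalDefault: false
description: "Priority class for placeholder (overprovisioning) pods."
```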
## Cluster autoscaler profile
It's important to note that the cluster autoscaler profile settings are cluster-
#### Example 1: Optimizing for performance
-For clusters that handle substantial and bursty workloads with a primary focus on performance, we recommend increasing the `scan-interval` and decreasing the `scale-down-utilization-threshold`. These settings help batch multiple scaling operations into a single call, optimizing scaling time and the utilization of compute read/write quotas. It also helps mitigate the risk of swift scale down operations on underutilized nodes, enhancing the pod scheduling efficiency. Also increase `ok-total-unready-count`and `max-total-unready-percentage`.
+For clusters that handle substantial and bursty workloads with a primary focus on performance, we recommend increasing the `scan-interval` and decreasing the `scale-down-utilization-threshold`. These settings help batch multiple scaling operations into a single call, optimizing scaling time and the utilization of compute read/write quotas. It also helps mitigate the risk of swift scale down operations on underutilized nodes, enhancing the pod scheduling efficiency. Also increase `ok-total-unready-count` and `max-total-unready-percentage`.
For clusters with daemonset pods, we recommend setting `ignore-daemonset-utilization` to `true`, which effectively ignores node utilization by daemonset pods and minimizes unnecessary scale down operations. See [profile for bursty workloads](./cluster-autoscaler.md#configure-cluster-autoscaler-profile-for-bursty-workloads)
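As a hedged example (resource names are placeholders, and the values shown are illustrative rather than recommendations), the profile settings discussed above are applied through `--cluster-autoscaler-profile`:

```azurecli-interactive
# Illustrative tuning for bursty workloads: batch scaling decisions with a
# longer scan interval and scale down less aggressively.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile scan-interval=30s,scale-down-utilization-threshold=0.4
```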
If you want a [cost-optimized profile](./cluster-autoscaler.md#configure-cluster
* Increase `scale-down-utilization-threshold`, which is the utilization threshold for removing nodes. * Increase `max-empty-bulk-delete`, which is the maximum number of nodes that can be deleted in a single call. * Set `skip-nodes-with-local-storage` to false.
-* Increase `ok-total-unready-count`and `max-total-unready-percentage`
+* Increase `ok-total-unready-count` and `max-total-unready-percentage`.
## Common issues and mitigation recommendations View scaling failures and scale-up not triggered events via [CLI or Portal](./cluster-autoscaler.md#retrieve-cluster-autoscaler-logs-and-status).
Depending on how long the scaling operations have been experiencing failures, it
<!-- LINKS > [vertical-pod-autoscaler]: vertical-pod-autoscaler.md [horizontal-pod-autoscaler]:concepts-scale.md#horizontal-pod-autoscaler-
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service (AKS) cluster description: Learn how to create a private Azure Kubernetes Service (AKS) cluster ++ Last updated 06/29/2023
The API server endpoint has no public IP address. To manage the API server, you'
* Use an [Express Route or VPN][express-route-or-VPN] connection. * Use the [AKS `command invoke` feature][command-invoke]. * Use a [private endpoint][private-endpoint-service] connection.
+* Use a [Cloud Shell][cloud-shell-vnet] instance deployed into a subnet that's connected to the API server for the cluster.
> [!NOTE] > Creating a VM in the same VNet as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
For associated best practices, see [Best practices for network connectivity and
[az-network-vnet-peering-create]: /cli/azure/network/vnet/peering#az_network_vnet_peering_create [az-network-vnet-peering-list]: /cli/azure/network/vnet/peering#az_network_vnet_peering_list [intro-azure-linux]: ../azure-linux/intro-azure-linux.md
+[cloud-shell-vnet]: ../cloud-shell/vnet/overview.md
aks Stop Cluster Upgrade Api Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-cluster-upgrade-api-breaking-changes.md
description: Learn how to stop Azure Kubernetes Service (AKS) cluster upgrades a
Previously updated : 10/19/2023 Last updated : 07/05/2024 - # Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes
+This article shows you how to stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes.
+
+## Overview
+ To stay within a supported Kubernetes version, you have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and Container Storage Interface (CSI). It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
-AKS now automatically stops upgrade operations consisting of a minor version change with deprecated APIs and sends you an error message to alert you about the issue.
+You can configure your AKS cluster to automatically stop upgrade operations consisting of a minor version change with deprecated APIs and alert you to the issue. This feature helps you avoid unexpected disruptions and gives you time to address the deprecated APIs before proceeding with the upgrade.
## Before you begin
Bad Request({
}) ```
-You have two options to mitigate the issue. You can either [remove usage of deprecated APIs (recommended)](#remove-usage-of-deprecated-apis-recommended) or [bypass validation to ignore API changes](#bypass-validation-to-ignore-api-changes).
+You have two options to mitigate the issue: you can [remove usage of deprecated APIs (recommended)](#remove-usage-of-deprecated-apis-recommended) or [bypass validation to ignore API changes](#bypass-validation-to-ignore-api-changes).
### Remove usage of deprecated APIs (recommended)
-1. In the Azure portal, navigate to your cluster's overview page, and select **Diagnose and solve problems**.
-
-2. Navigate to the **Create, Upgrade, Delete, and Scale** category, and select **Kubernetes API deprecations**.
+1. In the Azure portal, navigate to your cluster resource and select **Diagnose and solve problems**.
+2. Select **Create, Upgrade, Delete, and Scale** > **Kubernetes API deprecations**.
:::image type="content" source="./media/upgrade-cluster/applens-api-detection-full-v2.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::
-3. Wait 12 hours from the time the last deprecated API usage was seen. Check the verb in the deprecated API usage to know if it's a [watch][k8s-api].
-
+3. Wait 12 hours from the time the last deprecated API usage was seen. Check the verb in the deprecated API usage to know if it's a [watch][k8s-api]. If it's a watch, you can wait for the usage to drop to zero. (You can also check past API usage by enabling [Container insights][container-insights] and exploring kube audit logs.)
4. Retry your cluster upgrade.
-You can also check past API usage by enabling [Container Insights][container-insights] and exploring kube audit logs. Check the verb in the deprecated API usage to understand if it's a [watch][k8s-api] use case.
- ### Bypass validation to ignore API changes > [!NOTE]
-> This method requires you to use the Azure CLI version 2.53 or later. If you have the `aks-preview` CLI extension installed, you'll need to update to version `0.5.154` or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.
+> This method requires you to use the Azure CLI version 2.53 or later. If you have the `aks-preview` CLI extension installed, you need to update to version `0.5.154` or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version might not work long term. We recommend removing them as soon as possible after the upgrade completes.
-* Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command. Specify the `enable-force-upgrade` flag and set the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
+1. Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command. Specify the `enable-force-upgrade` flag and set the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
```azurecli-interactive
- az aks update --name myAKSCluster --resource-group myResourceGroup --enable-force-upgrade --upgrade-override-until 2023-10-01T13:00:00Z
+ az aks update --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --enable-force-upgrade --upgrade-override-until 2023-10-01T13:00:00Z
``` > [!NOTE] > `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
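The example above uses a fixed date. A sketch of how a valid future UTC timestamp (with the `Z` zone designator) for `--upgrade-override-until` could be generated, assuming the default three-day window:

```python
from datetime import datetime, timedelta, timezone

# Build an ISO 8601 UTC timestamp three days from now, suitable for the
# --upgrade-override-until flag.
override_until = (datetime.now(timezone.utc) + timedelta(days=3)).strftime("%Y-%m-%dT%H:%M:%SZ")
print(override_until)  # e.g. 2024-07-10T13:00:00Z
```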
-* Once the previous command has succeeded, you can retry the upgrade operation.
+2. Retry your cluster upgrade using the [`az aks upgrade`][az-aks-upgrade] command.
```azurecli-interactive
- az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version <KUBERNETES_VERSION>
+ az aks upgrade --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --kubernetes-version $KUBERNETES_VERSION
``` - ## Next steps This article showed you how to stop AKS cluster upgrades automatically on API breaking changes. To learn more about more upgrade options for AKS clusters, see [Upgrade options for Azure Kubernetes Service (AKS) clusters](./upgrade-cluster.md).
This article showed you how to stop AKS cluster upgrades automatically on API br
<!-- LINKS - internal --> [az-aks-update]: /cli/azure/aks#az_aks_update
+[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
[container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs-
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
The virtual network integration feature supports two virtual interfaces per work
Virtual network integration depends on a dedicated subnet. When you create a subnet, the Azure subnet consumes five IPs from the start. One address is used from the integration subnet for each App Service plan instance. If you scale your app to four instances, then four addresses are used.
-When you scale up/down in instance size, the amount of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. The scale operation affects the real, available supported instances for a given subnet size. Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this operation can be up to 12 hours and if you rapidly scaling in/out or up/down, you need more IPs than the maximum scale.
+When you scale up/down in instance size, the number of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. The scale operation affects the real, available supported instances for a given subnet size. Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this operation can take up to 12 hours, and if you rapidly scale in/out or up/down, you need more IPs than the maximum scale.
Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. You should also reserve IP addresses for platform upgrades. To avoid any issues with subnet capacity, we recommend allocating double the IPs of your planned maximum scale. A `/26` with 64 addresses covers the maximum scale of a single multitenant App Service plan. When you're creating subnets in the Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required. If the subnet already exists before integrating through the portal, you can use a `/28` subnet.
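The sizing guidance above can be checked with a quick back-of-the-envelope helper (an illustrative sketch, not an official formula; the five reserved addresses and the double-the-IPs rule come from the text):

```python
import math

def required_prefix(planned_max_instances, reserved=5):
    """Smallest subnet prefix whose address count covers double the planned
    maximum scale plus Azure's five reserved addresses."""
    needed = planned_max_instances * 2 + reserved
    host_bits = math.ceil(math.log2(needed))
    return 32 - host_bits

print(required_prefix(20))  # 26 -> a /26 covers a plan scaling to 20 instances
```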
When you're using virtual network integration, you can configure how parts of th
#### Content share
-Bringing your own storage for content in often used in Functions where [content share](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
-
-To route content share traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure content share routing](./configure-vnet-integration-routing.md#content-share).
+By default, Azure Functions uses a [content share](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) as the deployment source when scaling function apps in a Premium plan. You must configure an extra setting to guarantee traffic is routed to this content share through the virtual network integration. For more information, see [how to configure content share routing](./configure-vnet-integration-routing.md#content-share).
In addition to configuring the routing, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allow traffic to port 443 and 445.
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 01/03/2024 Last updated : 07/03/2024
Connectivity options include public endpoint, proxy server, and private link or
> [!TIP] > To take advantage of the full range of offerings for Arc-enabled servers, such as extensions and remote connectivity, ensure that you allow the additional URLs that apply to your scenario. For more information, see [Connected machine agent networking requirements](network-requirements.md).
+## Required Certificate Authorities
+
+The following [Certificate Authorities](/azure/security/fundamentals/azure-ca-details?tabs=root-and-subordinate-cas-list) are required for Extended Security Updates for Windows Server 2012:
+
+- [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt)
+- [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt)
+- [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt)
+- [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt)
+
+If necessary, these Certificate Authorities can be [manually downloaded and installed](troubleshoot-extended-security-updates.md#option-2-manually-download-and-install-the-intermediate-ca-certificates).
+ ## Next steps * Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy).
azure-arc Troubleshoot Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md
Title: How to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 05/22/2024 Last updated : 07/03/2024
Once the network changes are made to allow access to the PKI URL, try installing
If you're unable to allow access to the PKI URL from your servers, you can manually download and install the certificates on each machine. 1. On any computer with internet access, download these intermediate CA certificates:
- 1. [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt)
- 1. [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt)
- 1. [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005%20-%20xsign.crt)
- 1. [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006%20-%20xsign.crt)
+ 1. [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt)
1. [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt)
+ 1. [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt)
+ 1. [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt)
1. Copy the certificate files to your Windows Server 2012 (R2) machines. 1. Run any one set of the following commands in an elevated command prompt or PowerShell session to add the certificates to the "Intermediate Certificate Authorities" store for the local computer. The command should be run from the same directory as the certificate files. The commands are idempotent and won't make any changes if you've already imported the certificate: ```
- certutil -addstore CA "Microsoft Azure TLS Issuing CA 01 - xsign.crt"
- certutil -addstore CA "Microsoft Azure TLS Issuing CA 02 - xsign.crt"
- certutil -addstore CA "Microsoft Azure TLS Issuing CA 05 - xsign.crt"
- certutil -addstore CA "Microsoft Azure TLS Issuing CA 06 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 03 - xsign.crt"
certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 04 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 07 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure RSA TLS Issuing CA 08 - xsign.crt"
``` 1. Try installing the Windows updates again. You may need to reboot your computer for the validation logic to recognize the newly imported intermediate CA certificates.
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Title: Configure monitoring for Azure Functions
description: Learn how to connect your function app to Application Insights for monitoring and how to configure data collection. Previously updated : 06/17/2024 Last updated : 07/05/2024 # Customer intent: As a developer, I want to understand how to configure monitoring for my functions correctly, so I can collect the data that I need.
For a function app to send data to Application Insights, it needs to connect to
| Setting name | Description | | - | - |
-| **[APPLICATIONINSIGHTS_CONNECTION_STRING](functions-app-settings.md#applicationinsights_connection_string)** | This setting is recommended and is required when your Application Insights instance runs in a sovereign cloud. The connection string supports other [new capabilities](../azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md#new-capabilities). |
-| **[APPINSIGHTS_INSTRUMENTATIONKEY](functions-app-settings.md#appinsights_instrumentationkey)** | Legacy setting, which Application Insights has deprecated in favor of the connection string setting. |
+| **[`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string)** | This setting is recommended and is required when your Application Insights instance runs in a sovereign cloud. The connection string supports other [new capabilities](../azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md#new-capabilities). |
+| **[`APPLICATIONINSIGHTS_AUTHENTICATION_STRING`](./functions-app-settings.md#applicationinsights_authentication_string)** | Connects to Application Insights using Microsoft Entra authentication. The value contains the client ID of either a system-assigned or a user-assigned managed identity that is authorized to publish telemetry to your Application Insights workspace. The string has a format of `ClientId=<YOUR_CLIENT_ID>;Authorization=AAD`. For more information, see [Microsoft Entra authentication for Application Insights](../azure-monitor/app/azure-ad-authentication.md).|
+| **[`APPINSIGHTS_INSTRUMENTATIONKEY`](functions-app-settings.md#appinsights_instrumentationkey)** | Legacy setting, which Application Insights has deprecated in favor of the connection string setting. |
When you create your function app in the [Azure portal](./functions-get-started.md) from the command line by using [Azure Functions Core Tools](./create-first-function-cli-csharp.md) or [Visual Studio Code](./create-first-function-vs-code-csharp.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and is created either in the same region or in the nearest region.
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
You're now ready to route your function app's traffic to go through the virtual
1. Enable [content share routing](../app-service/overview-vnet-integration.md#content-share) to enable your function app to communicate with your new storage account through its virtual network. In the same page as the previous step, under **Configuration routing**, select **Content storage**. + ### 4. Update application settings Finally, you need to update your application settings to point to the new secure storage account:
azure-functions Flex Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md
Keep these other considerations in mind when using Flex Consumption plan during
+ **Triggers**: All triggers are fully supported except for Kafka, Azure SQL, and SignalR triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version. + **Regions**: + Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
- + There is a temporary limitation in West US 3. If you see the following error "This region has quota of 0 instances for your subscription. Try selecting different region or SKU." in that region please raise a support ticket so that your app can be unblocked.
+ + There is a temporary limitation where App Service quota limits for creating new apps are also being applied to Flex Consumption apps. If you see the following error, "This region has quota of 0 instances for your subscription. Try selecting different region or SKU.", please raise a support ticket so that your app creation can be unblocked.
+ **Deployments**: These deployment-related features aren't currently supported: + Deployment slots + Continuous deployment using Azure DevOps Tasks (`AzureFunctionApp@2`)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECT
[!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)]
+## APPLICATIONINSIGHTS_AUTHENTICATION_STRING
+
+The connection string used to connect to Application Insights with Microsoft Entra authentication. Use this setting when you must connect to your Application Insights workspace by using Microsoft Entra authentication. The string contains the client ID of either a system-assigned or a user-assigned managed identity that is authorized to publish telemetry to your Application Insights workspace. For more information, see [Microsoft Entra authentication for Application Insights](../azure-monitor/app/azure-ad-authentication.md).
+
+|Key|Sample value|
+|||
+|APPLICATIONINSIGHTS_AUTHENTICATION_STRING|`ClientId=<YOUR_CLIENT_ID>;Authorization=AAD`|
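The authentication string is a simple semicolon-delimited list of `key=value` pairs. As a minimal illustrative sketch (the `parse_auth_string` helper is hypothetical, not part of the Functions runtime or any SDK), the format can be broken down like this:

```python
def parse_auth_string(value: str) -> dict:
    """Split a semicolon-delimited key=value string, for example
    'ClientId=<YOUR_CLIENT_ID>;Authorization=AAD'."""
    pairs = (part.split("=", 1) for part in value.split(";") if part)
    return {key.strip(): val.strip() for key, val in pairs}

# Illustrative client ID only; use your managed identity's real client ID.
settings = parse_auth_string("ClientId=11111111-2222-3333-4444-555555555555;Authorization=AAD")
```

The `Authorization=AAD` pair is what switches telemetry publishing to Microsoft Entra authentication.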
+ ## APPLICATIONINSIGHTS_CONNECTION_STRING The connection string for Application Insights. Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. While the use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended in all cases, it's required in the following cases:
For more information, see [Connection strings](../azure-monitor/app/sdk-connecti
||| |APPLICATIONINSIGHTS_CONNECTION_STRING|`InstrumentationKey=...`|
+To connect to Application Insights with Microsoft Entra authentication, you should instead use [`APPLICATIONINSIGHTS_AUTHENTICATION_STRING`](#applicationinsights_authentication_string).
+ ## AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL > [!IMPORTANT]
Add `EnableProxies` to this list to re-enable proxies on version 4.x of the Func
## AzureWebJobsKubernetesSecretName
-Indicates the Kubernetes Secrets resource used for storing keys. Supported only when running in Kubernetes. This setting requires you to set `AzureWebJobsSecretStorageType` to `kubernetes`. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.
+Indicates the Kubernetes Secrets resource used for storing keys. Supported only when running in Kubernetes. This setting requires you to set `AzureWebJobsSecretStorageType` to `kubernetes`. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.
|Key|Sample value| |||
Specifies the repository or provider to use for key storage. Keys are always enc
|AzureWebJobsSecretStorageType|`blob`|Keys are stored in a Blob storage container in the account provided by the `AzureWebJobsStorage` setting. Blob storage is the default behavior when `AzureWebJobsSecretStorageType` isn't set.<br/>To specify a different storage account, use the `AzureWebJobsSecretStorageSas` setting to indicate the SAS URL of a second storage account. | |AzureWebJobsSecretStorageType | `files` | Keys are persisted on the file system. This is the default behavior for Functions v1.x.| |AzureWebJobsSecretStorageType |`keyvault` | Keys are stored in a key vault instance set by `AzureWebJobsSecretStorageKeyVaultName`. |
-|AzureWebJobsSecretStorageType | `kubernetes` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.|
+|AzureWebJobsSecretStorageType | `kubernetes` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.|
To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
Specifies the connection string for an Azure Storage account that the Functions
||| |AzureWebJobsStorage|`DefaultEndpointsProtocol=https;AccountName=...`|
-Instead of a connection string, you can use an identity based connection for this storage account. For more information, see [Connecting to host storage with an identity](functions-reference.md#connecting-to-host-storage-with-an-identity).
+Instead of a connection string, you can use an identity-based connection for this storage account. For more information, see [Connecting to host storage with an identity](functions-reference.md#connecting-to-host-storage-with-an-identity).
## AzureWebJobsStorage__accountName
The configuration is specific to Python function apps. It defines the prioritiza
|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`1`| Prioritize loading the Python libraries from application's package defined in requirements.txt. This prevents your libraries from colliding with internal Python worker's libraries. | ## PYTHON_ENABLE_DEBUG_LOGGING
-Enables debug-level logging in a Python function app. A value of `1` enables debug-level logging. Without this setting or with a value of `0`, only information and higher level logs are sent from the Python worker to the Functions host. Use this setting when debugging or tracing your Python function executions.
+Enables debug-level logging in a Python function app. A value of `1` enables debug-level logging. Without this setting or with a value of `0`, only information and higher-level logs are sent from the Python worker to the Functions host. Use this setting when debugging or tracing your Python function executions.
When debugging Python functions, make sure to also set a debug or trace [logging level](functions-host-json.md#logging) in the host.json file, as needed. To learn more, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
A value of `1` enables your function app to scale when you have your storage acc
This app setting is required on the [Elastic Premium](functions-premium-plan.md) and [Dedicated (App Service) plans](dedicated-plan.md) (Standard and higher). Not supported when running on a [Consumption plan](consumption-plan.md). + ## WEBSITE\_CONTENTSHARE The name of the file share that Functions uses to store function app code and configuration files. This content is required by event-driven scaling plans. Used with `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`. Default is a unique string generated by the runtime, which begins with the function app name. For more information, see [Storage account connection setting](storage-considerations.md#storage-account-connection-setting).
Some configurations must be maintained at the App Service level as site settings
### alwaysOn
-On a function app running in a [Dedicated (App Service) plan](./dedicated-plan.md), the functions runtime goes idle after a few minutes of inactivity, a which point only requests to an HTTP triggers _wakes-up_ your functions. To make sure that your non-HTTP triggered functions run correctly, including Timer trigger, enable Always On for the function app by setting the `alwaysOn` site setting to a value of `true`.
+On a function app running in a [Dedicated (App Service) plan](./dedicated-plan.md), the Functions runtime goes idle after a few minutes of inactivity, at which point only requests to an HTTP trigger _wake up_ your function app. To make sure that your non-HTTP triggered functions run correctly, including Timer trigger functions, enable Always On for the function app by setting the `alwaysOn` site setting to a value of `true`.
### linuxFxVersion
When running locally, you instead use the [`FUNCTIONS_WORKER_RUNTIME_VERSION`](f
Apps running in a Premium plan use a file share to store content. The name of this content share is stored in the [`WEBSITE_CONTENTSHARE`](#website_contentshare) app setting and its connection string is stored in [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](#website_contentazurefileconnectionstring). To route traffic between your function app and content share through a virtual network, you must also set `vnetContentShareEnabled` to `true`. Enabling this site property is a requirement when [restricting your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) in the Elastic Premium and Dedicated hosting plans. + This site property replaces the legacy [`WEBSITE_CONTENTOVERVNET`](#website_contentovervnet) setting. ### vnetImagePullEnabled
In the [Flex Consumption plan](./flex-consumption-plan.md), these site propertie
| `properties.use32BitWorkerProcess` |32-bit not supported | | `properties.vnetBackupRestoreEnabled` |Not used for networking in Flex Consumption| | `properties.vnetContentShareEnabled` |Not used for networking in Flex Consumption|
-| `properties.vnetImagePullEnabled` |Not used for networking in Flex Consumptionlid|
+| `properties.vnetImagePullEnabled` |Not used for networking in Flex Consumption|
| `properties.vnetRouteAllEnabled` |Not used for networking in Flex Consumption| | `properties.windowsFxVersion` |Not valid|
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-app-portal.md
Title: Create your first function in the Azure portal description: Learn how to create your first Azure Function for serverless execution using the Azure portal. Previously updated : 06/03/2024 Last updated : 07/03/2024 zone_pivot_groups: programming-languages-set-functions
Choose your preferred programming language at the top of the article.
>Because of [development limitations in the Azure portal](functions-how-to-use-azure-function-app-settings.md#development-limitations-in-the-azure-portal), you should instead [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure. Use one of the following links to get started with your chosen local development environment: >+ [Visual Studio Code](./create-first-function-vs-code-node.md) >+ [Terminal/command prompt](./create-first-function-cli-node.md)
+>[!NOTE]
+>Because of [development limitations in the Azure portal](functions-how-to-use-azure-function-app-settings.md#development-limitations-in-the-azure-portal), you should instead [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure. Use one of the following links to get started with your chosen local development environment:
+>+ [Visual Studio Code](./create-first-function-vs-code-python.md)
+>+ [Terminal/command prompt](./create-first-function-cli-python.md)
::: zone pivot="programming-language-typescript" >[!NOTE] >Editing your TypeScript function code in the Azure portal isn't currently supported. For more information, see [Development limitations in the Azure portal](functions-how-to-use-azure-function-app-settings.md#development-limitations-in-the-azure-portal).
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
Use one of the following procedures to update this XML file to run in the isolat
+Changing your project's target framework might also require changes to parts of your toolchain, outside of project code. For example, in VS Code, you might need to update the `azureFunctions.deploySubpath` extension setting through user settings or your project's `.vscode/settings.json` file. Check for any dependencies on the framework version that may exist outside of your project code, as part of build steps or a CI/CD pipeline.
+ ### Package references When migrating to the isolated worker model, you need to change the packages your application references.
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
Title: How to target Azure Functions runtime versions
-description: Azure Functions supports multiple versions of the runtime. Learn how to specify the runtime version of a function app hosted in Azure.
-
+description: Learn how to specify the runtime version of a function app hosted in Azure.
++ Last updated : 07/03/2024 Previously updated : 03/11/2024 zone_pivot_groups: app-service-platform-windows-linux+
+#Customer intent: As a function developer, I want to learn how to view and edit the runtime version of my function app so that I can pin it to a specific minor version, if necessary.
# How to target Azure Functions runtime versions
-A function app runs on a specific version of the Azure Functions runtime. By default, function apps are created in latest 4.x version of the Functions runtime. Your function apps are only supported when running on a [supported major version](functions-versions.md). This article explains how to configure a function app in Azure to target, or _pin_ to, a specific version when required.
-
+A function app runs on a specific version of the Azure Functions runtime. By default, function apps are created in the latest 4.x version of the Functions runtime. Your function apps are supported only when they run on a [supported major version](functions-versions.md). This article explains how to configure a function app in Azure to target, or _pin_ to, a specific version when required.
+ The way that you target a specific version depends on whether you're running Windows or Linux. This version of the article supports Windows. Choose your operating system at the top of the article. ::: zone-end The way that you target a specific version depends on whether you're running Windows or Linux. This version of the article supports Linux. Choose your operating system at the top of the article. ::: zone-end >[!IMPORTANT]
->When possible, you should always run your functions on the latest supported version of the Azure Functions runtime. You should only pin your app to a specific version when instructed to do so because of an issue in the latest version. You should always move up to the latest runtime version as soon as your functions can run correctly.
+>When possible, always run your functions on the latest supported version of the Azure Functions runtime. You should only pin your app to a specific version if you're instructed to do so due to an issue with the latest version. Always move up to the latest runtime version as soon as your functions can run correctly.
-During local development, your installed version of Azure Functions Core Tools must match major runtime version used by the function app in Azure. For more information, see [Core Tools versions](functions-run-local.md#v2).
+During local development, your installed version of Azure Functions Core Tools must match the major runtime version used by the function app in Azure. For more information, see [Core Tools versions](functions-run-local.md#v2).
## Update your runtime version
When possible, you should always run your function apps on the latest supported
[!INCLUDE [functions-migrate-apps](../../includes/functions-migrate-apps.md)]
-To determine your current runtime version, see [View the current runtime version](#view-the-current-runtime-version).
+To determine your current runtime version, see [View the current runtime version](#view-the-current-runtime-version).
## View the current runtime version You can view the current runtime version of your function app in one of these ways:
-### [Portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
[!INCLUDE [Set the runtime version in the portal](../../includes/functions-view-update-version-portal.md)]
-### [Azure CLI](#tab/azurecli)
+### [Azure CLI](#tab/azure-cli)
-You can view the `FUNCTIONS_EXTENSION_VERSION` from the Azure CLI.
+You can view the `FUNCTIONS_EXTENSION_VERSION` app setting from the Azure CLI.
-Using the Azure CLI, view the current runtime version with the [`az functionapp config appsettings list`](/cli/azure/functionapp/config/appsettings) command.
+Using the Azure CLI, view the current runtime version with the [`az functionapp config appsettings list`](/cli/azure/functionapp/config/appsettings) command:
```azurecli-interactive az functionapp config appsettings list --name <function_app> \ --resource-group <my_resource_group> ```
-In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app.
+In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app.
-You see the `FUNCTIONS_EXTENSION_VERSION` in the following output, which has been truncated for clarity:
+You see the `FUNCTIONS_EXTENSION_VERSION` in the following partial output:
```output [
You see the `FUNCTIONS_EXTENSION_VERSION` in the following output, which has bee
Choose **Open Cloud Shell** in the previous code example to run the command in [Azure Cloud Shell](../cloud-shell/overview.md). You can also run the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command. When running locally, you must first run [`az login`](/cli/azure/reference-index#az-login) to sign in.
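The `az functionapp config appsettings list` command returns a JSON array of `{name, value}` objects. As a hedged sketch (the `raw` string below is sample data in that shape, not real command output), the runtime version can be pulled out of the JSON like this:

```python
import json

# Sample shape of `az functionapp config appsettings list` output (illustrative values).
raw = '[{"name": "FUNCTIONS_EXTENSION_VERSION", "value": "~4", "slotSetting": false}]'

settings = json.loads(raw)
# Find the runtime version setting among the returned app settings.
version = next(s["value"] for s in settings if s["name"] == "FUNCTIONS_EXTENSION_VERSION")
```

In practice you could pipe the command output into a script like this, or use the Azure CLI's own `--query` JMESPath filtering instead.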
-### [PowerShell](#tab/powershell)
+### [Azure PowerShell](#tab/azure-powershell)
-To check the Azure Functions runtime, use the following cmdlet:
+To check the Azure Functions runtime, use the following cmdlet:
-```powershell
+```azurepowershell-interactive
Get-AzFunctionAppSetting -Name "<FUNCTION_APP>" -ResourceGroupName "<RESOURCE_GROUP>" ```+ Replace `<FUNCTION_APP>` with the name of your function app and `<RESOURCE_GROUP>` with the name of the resource group for your function app. The current value of the `FUNCTIONS_EXTENSION_VERSION` setting is returned in the hash table.
-## <a name="manual-version-updates-on-linux"></a>Pinning to a specific version
+## <a name="manual-version-updates-on-linux"></a>Pin to a specific version
-Azure Functions lets you use the `FUNCTIONS_EXTENSION_VERSION` application setting to target the runtime version used by a given function app. When you specify only the major version (`~4`), the function app is automatically updated to new minor versions of the runtime when they become available. Minor version updates are done automatically because new minor versions shouldn't introduce breaking changes.
+Azure Functions lets you use the `FUNCTIONS_EXTENSION_VERSION` app setting to target the runtime version used by a given function app. If you specify only the major version (`~4`), the function app is automatically updated to new minor versions of the runtime as they become available. Minor version updates are done automatically because new minor versions aren't likely to introduce changes that would break your functions.
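As an illustrative sketch (the `is_pinned` helper is hypothetical, not part of any tooling), the difference between a floating major version and a pinned minor version comes down to the leading tilde in the setting value:

```python
def is_pinned(functions_extension_version: str) -> bool:
    """A value like '~4' floats: the app automatically gets new minor
    releases of that major version. An explicit value like '4.0.12345'
    pins the app to that exact runtime version."""
    return not functions_extension_version.startswith("~")
```

For example, `~4` floats on the latest 4.x minor release, while `4.0.12345` stays fixed until you change the setting back.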
::: zone pivot="platform-linux"
-Linux apps use the [`linuxFxVersion` site setting](./functions-app-settings.md#linuxfxversion) along with `FUNCTIONS_EXTENSION_VERSION` to determine the correct Linux base image in which to run your functions. When you create a new funtion app on Linux, the runtime automatically chooses the correct base image for you based on the runtime version of your language stack.
+Linux apps use the [`linuxFxVersion` site setting](./functions-app-settings.md#linuxfxversion) along with `FUNCTIONS_EXTENSION_VERSION` to determine the correct Linux base image in which to run your functions. When you create a new function app on Linux, the runtime automatically chooses the correct base image for you based on the runtime version of your language stack.
::: zone-end Pinning to a specific runtime version causes your function app to restart.
-When you specify a specific minor version (such as `4.0.12345`) in `FUNCTIONS_EXTENSION_VERSION`, the function app is pinned to that specific version of the runtime until you explicitly choose to move back to automatic updates. You should only pin to a specific minor version long enough to resolve any issues with your function app that prevent you from targeting the major version. Older minor versions are regularly removed from the production environment. When you're pinned to a minor version that gets removed, your function app is instead run on the closest existing version instead of the version set in `FUNCTIONS_EXTENSION_VERSION`. Minor version removals are announced in [App Service announcements](https://github.com/Azure/app-service-announcements/issues).
+When you specify a specific minor version (such as `4.0.12345`) in `FUNCTIONS_EXTENSION_VERSION`, the function app is pinned to that specific version of the runtime until you explicitly choose to move back to automatic version updates. You should only pin to a specific minor version long enough to resolve any issues with your function app that prevent you from targeting the major version. Older minor versions are regularly removed from the production environment. When your function app is pinned to a minor version that is later removed, your function app runs on the closest existing version instead of the version set in `FUNCTIONS_EXTENSION_VERSION`. Minor version removals are announced in [App Service announcements](https://github.com/Azure/app-service-announcements/issues).
> [!NOTE]
-> When you try to publish from Visual Studio to an app that is pinned to a specific minor version of the runtime, a dialog prompts you to update to the latest version or cancel the publish. To avoid this check when you must use a specific minor version, add the `<DisableFunctionExtensionVersionUpdate>true</DisableFunctionExtensionVersionUpdate>` property in your `.csproj` file.
+> When you try to publish from Visual Studio to an app that is pinned to a specific minor version of the runtime, a dialog prompts you to update to the latest version or cancel the publish. To avoid this check when you must use a specific minor version, add the `<DisableFunctionExtensionVersionUpdate>true</DisableFunctionExtensionVersionUpdate>` property in your `.csproj` file.
-Use one of these methods to temporarily pin your app to a specific version of the runtime:
+Use one of these methods to temporarily pin your app to a specific version of the runtime:
-### [Portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
[!INCLUDE [Set the runtime version in the portal](../../includes/functions-view-update-version-portal.md)]
-3. To pin your app to a specific minor version, select **Application settings** > **FUNCTIONS_EXTENSION_VERSION**, change **Value** to your required minor version, and select **OK**.
+3. To pin your app to a specific minor version, in the left pane, expand **Settings**, and then select **Environment variables**.
+
+4. From the **App settings** tab, select **FUNCTIONS_EXTENSION_VERSION**, change **Value** to your required minor version, and then select **Apply**.
-4. Select **Save** > **Continue** to apply changes and restart the app.
+5. Select **Apply**, and then select **Confirm** to apply the changes and restart the app.
-### [Azure CLI](#tab/azurecli)
+### [Azure CLI](#tab/azure-cli)
-You can update the `FUNCTIONS_EXTENSION_VERSION` setting in the function app with the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings) command.
+You can update the `FUNCTIONS_EXTENSION_VERSION` app setting in the function app with the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings) command.
```azurecli-interactive az functionapp config appsettings set --name <FUNCTION_APP> \
az functionapp config appsettings set --name <FUNCTION_APP> \
Replace `<FUNCTION_APP>` with the name of your function app and `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<VERSION>` with the specific minor version you temporarily need to target.
-Choose **Try it** in the previous code example to run the command in [Azure Cloud Shell](../cloud-shell/overview.md). You can also run the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command. When running locally, you must first run [`az login`](/cli/azure/reference-index#az-login) to sign in.
+Choose **Open Cloud Shell** in the previous code example to run the command in [Azure Cloud Shell](../cloud-shell/overview.md). You can also run the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command. When running locally, you must first run [`az login`](/cli/azure/reference-index#az-login) to sign in.
-### [PowerShell](#tab/powershell)
+### [Azure PowerShell](#tab/azure-powershell)
Use this script to pin the Functions runtime:
-```powershell
+```azurepowershell-interactive
Update-AzFunctionAppSetting -Name "<FUNCTION_APP>" -ResourceGroupName "<RESOURCE_GROUP>" -AppSetting @{"FUNCTIONS_EXTENSION_VERSION" = "<VERSION>"} -Force ```
-Replace `<FUNCTION_APP>` with the name of your function app and `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<VERSION>` with the specific minor version you temporarily need to target. You can verify the updated value of the `FUNCTIONS_EXTENSION_VERSION` setting in the returned hash table.
+Replace `<FUNCTION_APP>` with the name of your function app and `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<VERSION>` with the specific minor version you temporarily need to target. You can verify the updated value of the `FUNCTIONS_EXTENSION_VERSION` setting in the returned hash table.
-The function app restarts after the change is made to the application setting.
+The function app restarts after the change is made to the application setting.
::: zone-end
-To pin your function app to a specific runtime version on Linux, you set a version-specific base image URL in the [`linuxFxVersion` site setting][`linuxFxVersion`] in the format `DOCKER|<PINNED_VERSION_IMAGE_URI>`.
+To pin your function app to a specific runtime version on Linux, you set a version-specific base image URL in the [`linuxFxVersion` site setting][`linuxFxVersion`] in the format `DOCKER|<PINNED_VERSION_IMAGE_URI>`.
> [!IMPORTANT]
-> Pinned function apps on Linux don't receive regular security and host functionality updates. Unless recommended by a support professional, use the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) setting and a standard [`linuxFxVersion`] value for your language and version, such as `Python|3.9`. For valid values, see the [`linuxFxVersion` reference article][`linuxFxVersion`].
+> Pinned function apps on Linux don't receive regular security and host functionality updates. Unless recommended by a support professional, use the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) setting and a standard [`linuxFxVersion`] value for your language and version, such as `Python|3.9`. For valid values, see the [`linuxFxVersion` reference article][`linuxFxVersion`].
>
-> Pinning to a specific runtime isn't currently supported for Linux function apps running in a Consumption plan.
+> Pinning to a specific runtime isn't currently supported for Linux function apps running in a Consumption plan.
-The following is an example of the [`linuxFxVersion`] value required to pin a Node.js 16 function app to a specific runtime version of 4.14.0.3:
+The following example shows the [`linuxFxVersion`] value required to pin a Node.js 16 function app to a specific runtime version of 4.14.0.3:
-`DOCKER|mcr.microsoft.com/azure-functions/node:4.14.0.3-node16`
+`DOCKER|mcr.microsoft.com/azure-functions/node:4.14.0.3-node16`
-When needed, a support professional can provide you with a valid base image URI for your application.
+When needed, a support professional can provide you with a valid base image URI for your application.
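To illustrate the format (a hypothetical helper, not part of any Azure SDK or CLI), composing a pinned `linuxFxVersion` value from a base image URI looks like:

```python
def pinned_linux_fx_version(image_uri: str) -> str:
    """Build a pinned linuxFxVersion value in the DOCKER|<PINNED_VERSION_IMAGE_URI> format."""
    if "|" in image_uri:
        raise ValueError("pass only the image URI, without a DOCKER| prefix")
    return f"DOCKER|{image_uri}"

# The Node.js 16 example from this article:
value = pinned_linux_fx_version("mcr.microsoft.com/azure-functions/node:4.14.0.3-node16")
print(value)  # DOCKER|mcr.microsoft.com/azure-functions/node:4.14.0.3-node16
```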
-Use the following Azure CLI commands to view and set the [`linuxFxVersion`]. You can't currently set [`linuxFxVersion`] in the portal or by using Azure PowerShell.
+Use the following Azure CLI commands to view and set the [`linuxFxVersion`]. You can't currently set [`linuxFxVersion`] in the portal or by using Azure PowerShell:
-+ To view the current runtime version, use with the [az functionapp config show](/cli/azure/functionapp/config) command.
++ To view the current runtime version, use the [az functionapp config show](/cli/azure/functionapp/config) command:
+
+  ```azurecli-interactive
+  az functionapp config show --name <function_app> \
+    --resource-group <my_resource_group> --query 'linuxFxVersion' -o tsv
+  ```
-
- In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app. The current value of [`linuxFxVersion`] is returned.
-
-+ To update the [`linuxFxVersion`] setting in the function app, use the [az functionapp config set](/cli/azure/functionapp/config) command.
+
+ In this code, replace `<function_app>` with the name of your function app. Also, replace `<my_resource_group>` with the name of the resource group for your function app. The current value of [`linuxFxVersion`] is returned.
+
++ To update the [`linuxFxVersion`] setting in the function app, use the [az functionapp config set](/cli/azure/functionapp/config) command:
+
+  ```azurecli-interactive
+  az functionapp config set --name <FUNCTION_APP> \
+    --resource-group <RESOURCE_GROUP> \
+    --linux-fx-version <LINUX_FX_VERSION>
+  ```
-
- Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Finally, replace `<LINUX_FX_VERSION>` with the value of a specific image provided to you by a support professional.
+
+ Replace `<FUNCTION_APP>` with the name of your function app. Also, replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Finally, replace `<LINUX_FX_VERSION>` with the value of a specific image provided to you by a support professional.
You can run these commands from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Open Cloud Shell** in the preceding code examples. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to run these commands after you sign in with [`az login`](/cli/azure/reference-index#az-login).
The function app restarts after the change is made to the site config.
## Next steps

> [!div class="nextstepaction"]
-> [See Release notes for runtime versions](https://github.com/Azure/azure-webjobs-sdk-script/releases)
+> [Release notes for runtime versions](https://github.com/Azure/azure-webjobs-sdk-script/releases)
[`linuxFxVersion`]: functions-app-settings.md#linuxfxversion
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
To limit the potential impact of any broadly scoped storage permissions, conside
[!INCLUDE [functions-shared-storage](../../includes/functions-shared-storage.md)]
+### Consistent routing through virtual networks
+
+Multiple function apps hosted in the same plan can also use the same storage account for the Azure Files content share (defined by `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`). When this storage account is also secured by a virtual network, all of these apps should also use the same value for `vnetContentShareEnabled` (formerly `WEBSITE_CONTENTOVERVNET`) to guarantee that traffic is routed consistently through the intended virtual network. A mismatch in this setting between apps using the same Azure Files storage account might result in traffic being routed through public networks, which causes access to be blocked by storage account network rules.
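As a sketch of this consistency check (the helper and the dictionary shape are illustrative; only the two setting names come from this article), you can flag apps that share an Azure Files connection but disagree on the routing setting:

```python
def find_routing_mismatches(apps):
    """Group function apps by their Azure Files storage connection and flag groups
    where vnetContentShareEnabled differs between apps sharing the same share."""
    by_share = {}
    for app in apps:
        share = app["WEBSITE_CONTENTAZUREFILECONNECTIONSTRING"]
        by_share.setdefault(share, set()).add(app["vnetContentShareEnabled"])
    # A share with more than one distinct setting value indicates inconsistent routing.
    return [share for share, values in by_share.items() if len(values) > 1]

apps = [
    {"WEBSITE_CONTENTAZUREFILECONNECTIONSTRING": "conn-A", "vnetContentShareEnabled": True},
    {"WEBSITE_CONTENTAZUREFILECONNECTIONSTRING": "conn-A", "vnetContentShareEnabled": False},
]
print(find_routing_mismatches(apps))  # ['conn-A']
```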
+
## Working with blobs

A key scenario for Functions is processing files in a blob container, such as for image processing or sentiment analysis. To learn more, see [Process file uploads](./functions-scenarios.md#process-file-uploads).
The Azure Files service provides a shared file system that supports high-scale s
By default, function apps hosted in Premium and Consumption plans use [zip deployment](./deployment-zip-push.md), with deployment packages stored in this Azure file share. This section is only relevant to these hosting plans.
-Using Azure Files requires the use of a connection string, which is stored in your app settings as [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring). Azure Files doesn't currently supported identity-based connections. If your scenario requires you to not store any secrets in app settings, you must remove your app's dependency on Azure Files. You can do this by creating your app without the default Azure Files dependency.
+Using Azure Files requires the use of a connection string, which is stored in your app settings as [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](functions-app-settings.md#website_contentazurefileconnectionstring). Azure Files doesn't currently support identity-based connections. If your scenario requires you to not store any secrets in app settings, you must remove your app's dependency on Azure Files. You can do this by creating your app without the default Azure Files dependency.
>[!NOTE]
>You should also consider running your function app in the Flex Consumption plan, which is currently in preview. The Flex Consumption plan provides greater control over the deployment package, including the ability to use managed identity connections. For more information, see [Configure deployment settings](flex-consumption-how-to.md#configure-deployment-settings) in the Flex Consumption article.
azure-functions Update Language Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/update-language-versions.md
Use these steps to update the project on your local computer:
1. Update your project's target framework to the new version. For C# projects, you must update the `<TargetFramework>` element in the `.csproj` file. See [Target frameworks](/dotnet/standard/frameworks) for specifics related to the chosen version.
+ Changing your project's target framework might also require changes to parts of your toolchain, outside of project code. For example, in VS Code, you might need to update the `azureFunctions.deploySubpath` extension setting through user settings or your project's `.vscode/settings.json` file. Check for any dependencies on the framework version that might exist outside of your project code, such as in build steps or a CI/CD pipeline.
+ 1. Make any updates to your project code that are required by the new .NET version. Check the version's release notes for specifics. You can also use the [.NET Upgrade Assistant](/dotnet/core/porting/upgrade-assistant-overview) to help you update your code in response to changes across major versions.
-After you've made those changes, rebuild your project and test it to confirm your app runs as expected.
+After you've made those changes, rebuild your project and test it to confirm your app runs as expected.
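As an illustration of the `<TargetFramework>` update (a minimal sketch assuming an SDK-style `.csproj`; real projects might need additional edits):

```python
import xml.etree.ElementTree as ET

def update_target_framework(csproj_xml: str, new_tfm: str) -> str:
    """Rewrite the <TargetFramework> element of an SDK-style .csproj document."""
    root = ET.fromstring(csproj_xml)
    for element in root.iter("TargetFramework"):
        element.text = new_tfm
    return ET.tostring(root, encoding="unicode")

csproj = '<Project Sdk="Microsoft.NET.Sdk"><PropertyGroup><TargetFramework>net6.0</TargetFramework></PropertyGroup></Project>'
updated = update_target_framework(csproj, "net8.0")
# updated now contains <TargetFramework>net8.0</TargetFramework>
```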
+
::: zone-end

### 2. Move to the latest Functions runtime
azure-monitor Edge Pipeline Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/edge-pipeline-configure.md
Replace the properties in the following table before deploying the template.
```json
{
- "type": "Microsoft.monitor/pipelineGroups",
- "location": "eastus",
- "apiVersion": "2023-10-01-preview",
- "name": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.ExtendedLocation/customLocations/my-custom-location",
-
- "extendedLocation": {
- "name": "my-custom-location",
- "type": "CustomLocation"
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "metadata": {
+ "description": "This template deploys an edge pipeline for azure monitor."
},
- "properties": {
- "receivers": [
- {
- "type": "OTLP",
- "name": "receiver-OTLP",
- "otlp": {
- "endpoint": "0.0.0.0:4317"
- }
+ "resources": [
+ {
+ "type": "Microsoft.monitor/pipelineGroups",
+ "location": "eastus",
+ "apiVersion": "2023-10-01-preview",
+ "name": "my-pipeline-group-name",
+ "extendedLocation": {
+ "name": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.ExtendedLocation/customLocations/my-custom-location",
+ "type": "CustomLocation"
},
- {
- "type": "Syslog",
- "name": "receiver-Syslog",
- "syslog": {
- "endpoint": "0.0.0.0:514"
- }
- }
- ],
- "processors": [],
- "exporters": [
- {
- "type": "AzureMonitorWorkspaceLogs",
- "name": "exporter-log-analytics-workspace",
- "azureMonitorWorkspaceLogs": {
- "api": {
- "dataCollectionEndpointUrl": "https://my-dce-4agr.eastus-1.ingest.monitor.azure.com",
- "dataCollectionRule": "dcr-00000000000000000000000000000000",
- "stream": "Custom-OTLP",
- "cache": {
- "maxStorageUsage": "10000",
- "retentionPeriod": "60"
- },
- "schema": {
- "recordMap": [
- {
- "from": "body",
- "to": "Body"
- },
- {
- "from": "severity_text",
- "to": "SeverityText"
- },
- {
- "from": "time_unix_nano",
- "to": "TimeGenerated"
+ "properties": {
+ "receivers": [
+ {
+ "type": "OTLP",
+ "name": "receiver-OTLP",
+ "otlp": {
+ "endpoint": "0.0.0.0:4317"
+ }
+ },
+ {
+ "type": "Syslog",
+ "name": "receiver-Syslog",
+ "syslog": {
+ "endpoint": "0.0.0.0:514"
+ }
+ }
+ ],
+ "processors": [],
+ "exporters": [
+ {
+ "type": "AzureMonitorWorkspaceLogs",
+ "name": "exporter-log-analytics-workspace",
+ "azureMonitorWorkspaceLogs": {
+ "api": {
+ "dataCollectionEndpointUrl": "https://my-dce-4agr.eastus-1.ingest.monitor.azure.com",
+ "dataCollectionRule": "dcr-00000000000000000000000000000000",
+ "stream": "Custom-OTLP",
+ "schema": {
+ "recordMap": [
+ {
+ "from": "body",
+ "to": "Body"
+ },
+ {
+ "from": "severity_text",
+ "to": "SeverityText"
+ },
+ {
+ "from": "time_unix_nano",
+ "to": "TimeGenerated"
+ }
+ ]
}
- ]
+ },
+ "cache": {
+ "maxStorageUsage": 10000,
+ "retentionPeriod": 60
+ }
}
}
- }
- }
- ],
- "service": {
- "pipelines": [
- {
- "name": "DefaultOTLPLogs",
- "receivers": [
- "receiver-OTLP"
- ],
- "processors": [],
- "exporters": [
- "exporter-log-analytics-workspace"
+ ],
+ "service": {
+ "pipelines": [
+ {
+ "name": "DefaultOTLPLogs",
+ "receivers": [
+ "receiver-OTLP"
+ ],
+ "processors": [],
+ "exporters": [
+ "exporter-log-analytics-workspace"
+ ],
+ "type": "logs"
+ },
+ {
+ "name": "DefaultSyslogs",
+ "receivers": [
+ "receiver-Syslog"
+ ],
+ "processors": [],
+ "exporters": [
+ "exporter-log-analytics-workspace"
+ ],
+ "type": "logs"
+ }
],
- "type": "logs"
+ "persistence": {
+ "persistentVolumeName": "my-persistent-volume"
+ }
},
- {
- "name": "DefaultSyslogs",
- "receivers": [
- "receiver-Syslog"
- ],
- "processors": [],
- "exporters": [
- "exporter-log-analytics-workspace"
- ],
- "type": "logs"
- }
- ],
- "persistence": {
- "persistentVolume": "my-persistent-volume"
- }
- },
- "networkingConfigurations": [
- {
- "externalNetworkingMode": "LoadBalancerOnly",
- "routes": [
+ "networkingConfigurations": [
{
- "receiver": "receiver-OTLP"
- },
- {
- "receiver": "receiver-Syslog"
+ "externalNetworkingMode": "LoadBalancerOnly",
+ "routes": [
+ {
+ "receiver": "receiver-OTLP"
+ },
+ {
+ "receiver": "receiver-Syslog"
+ }
+ ]
}
]
}
- ]
- }
+ }
+ ]
}
```
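Before deploying, one quick sanity check over this structure is that every receiver and exporter referenced under `service.pipelines` is declared in `receivers` or `exporters`. A hypothetical validation sketch (not an Azure tool):

```python
def validate_pipeline_refs(properties: dict) -> list:
    """Return names referenced by service.pipelines that aren't declared
    under properties.receivers or properties.exporters."""
    declared = {r["name"] for r in properties["receivers"]}
    declared |= {e["name"] for e in properties["exporters"]}
    missing = []
    for pipeline in properties["service"]["pipelines"]:
        for name in pipeline["receivers"] + pipeline["exporters"]:
            if name not in declared:
                missing.append(name)
    return missing

# Shape mirrors the pipeline group properties above.
props = {
    "receivers": [{"name": "receiver-OTLP"}, {"name": "receiver-Syslog"}],
    "exporters": [{"name": "exporter-log-analytics-workspace"}],
    "service": {"pipelines": [
        {"receivers": ["receiver-OTLP"], "exporters": ["exporter-log-analytics-workspace"]},
        {"receivers": ["receiver-Syslog"], "exporters": ["exporter-log-analytics-workspace"]},
    ]},
}
print(validate_pipeline_refs(props))  # []
```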
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources
description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 03/20/2024 Last updated : 07/05/2024 # What is Bicep?
Bicep provides the following advantages:
You can also create Bicep files in Visual Studio with the [Bicep extension for Visual Studio](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep). -- **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates.
+- **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates. For example, the following file creates a storage account. If you deploy this file and a storage account with the specified properties already exists, no changes are made.
+
+ # [Bicep](#tab/bicep)
+
+ ```bicep
+ param location string = resourceGroup().location
+
+ resource mystore 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'mystorageaccount'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ }
+ ```
+
+ # [JSON](#tab/json)
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": {
+ "mystore": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2023-04-01",
+ "name": "mystorageaccount",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2"
+ }
+ }
+ }
+ ```
++
+
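To observe the idempotent behavior, you can preview a repeat deployment with the what-if operation, which reports no changes for a resource that already matches the file (a sketch assuming the file is saved as `main.bicep`):

```azurecli-interactive
az deployment group what-if --resource-group <RESOURCE_GROUP> --template-file main.bicep
```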
- **Orchestration**: You don't have to worry about the complexities of ordering operations. Resource Manager orchestrates the deployment of interdependent resources so they're created in the correct order. When possible, Resource Manager deploys resources in parallel so your deployments finish faster than serial deployments. You deploy the file through one command, rather than through multiple imperative commands. :::image type="content" source="./media/overview/bicep-processing.png" alt-text="Bicep deployment comparison" border="false":::
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md
Title: Templates overview
description: Describes the benefits using Azure Resource Manager templates (ARM templates) for deployment of resources. Previously updated : 06/23/2023 Last updated : 07/05/2024 # What are ARM templates?
If you're trying to decide between using ARM templates and one of the other infr
* **Declarative syntax**: ARM templates allow you to create and deploy an entire Azure infrastructure declaratively. For example, you can deploy not only virtual machines, but also the network infrastructure, storage systems, and any other resources you may need.
-* **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Templates are idempotent, which means you can deploy the same template many times and get the same resource types in the same state. You can develop one template that represents the desired state, rather than developing lots of separate templates to represent updates.
+* **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Templates are idempotent, which means you can deploy the same template many times and get the same resource types in the same state. You can develop one template that represents the desired state, rather than developing lots of separate templates to represent updates. For example, the following file creates a storage account. If you deploy this template and a storage account with the specified properties already exists, no changes are made.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": {
+ "mystore": {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2023-04-01",
+ "name": "mystorageaccount",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2"
+ }
+ }
+}
+```
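To see the idempotent behavior, deploy the template twice; the second deployment makes no changes. You can also preview this with the what-if operation (a sketch assuming the template is saved as `azuredeploy.json`):

```azurecli-interactive
az deployment group create --resource-group <RESOURCE_GROUP> --template-file azuredeploy.json
az deployment group what-if --resource-group <RESOURCE_GROUP> --template-file azuredeploy.json
```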
* **Orchestration**: You don't have to worry about the complexities of ordering operations. Resource Manager orchestrates the deployment of interdependent resources so they're created in the correct order. When possible, Resource Manager deploys resources in parallel so your deployments finish faster than serial deployments. You deploy the template through one command, rather than through multiple imperative commands.
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-protection-matrix.md
Title: MABS (Azure Backup Server) V4 protection matrix description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server v4 protects. Previously updated : 04/30/2024 Last updated : 07/05/2024
The following sections detail the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** |
| --- | --- | --- | --- | --- |
| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2022, 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V4 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
-| Azure Stack HCI | V1, 20H2, 21H2, and 22H2 | Physical server <br><br> Hyper-V / Azure Stack HCI virtual machine <br><br> VMware virtual machine | V4 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| Azure Stack HCI | V1, 20H2, 21H2, 22H2, and 23H2 | Physical server <br><br> Hyper-V / Azure Stack HCI virtual machine <br><br> VMware virtual machine | V4 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> Recovery of Arc VMs is supported in a limited capacity in Azure Stack HCI, version 23H2. [Learn more](back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md). |
| VMware VMs | VMware server 6.5, 6.7, 7.0, 8.0 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V4 | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. <br><br> vSphere 8.0 DataSets feature isn't supported for backup. |

>[!NOTE]
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
- ignite-2023 Previously updated : 02/17/2023 Last updated : 07/02/2024
The following quotas are on a per-subscription basis for Azure Container Apps.
-You can [request a quota increase in the Azure portal](/azure/quotas/quickstart-increase-quota-portal).
+You can [request a quota increase in the Azure portal](/azure/quotas/quickstart-increase-quota-portal). Whenever the maximum quota is larger than the default quota, you can request a quota increase. When requesting an increase, make sure to select the _Container Apps_ quota type. For more information, see [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-).
-The *Is Configurable* column in the following tables denotes a feature maximum may be increased. For more information, see [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-).
-
-| Feature | Scope | Default Quota | Is Configurable | Remarks |
+| Feature | Scope | Default Quota | Maximum Quota | Remarks |
|--|--|--|--|--|
-| Environments | Region | Up to 15 | Yes | Up to 15 environments per subscription, per region. |
-| Environments | Global | Up to 20 | Yes | Up to 20 environments per subscription, across all regions. |
-| Container Apps | Environment | Unlimited | n/a | |
-| Revisions | Container app | Up to 100 | No | |
-| Replicas | Revision | Unlimited | No | Maximum replicas configurable are 300 in Azure portal and 1000 in Azure CLI. There must also be enough cores quota available. |
-| Session pools | Global | Up to 6 | Yes | Maximum number of dynamic session pools per subscription. |
-
-## Consumption plan
+| Environments | Region | 15 | Unlimited | Up to 15 environments per subscription, per region. Quota name: Managed Environment Count |
+| Environments | Global | 20 | Unlimited | Up to 20 environments per subscription, across all regions. Adjusted through Managed Environment Count quota (usually 20% more than Managed Environment Count) |
+| Container Apps | Environment | Unlimited | Unlimited | |
+| Revisions | Container app | Up to 100 | Unlimited | |
+| Replicas | Revision | Unlimited | Unlimited | Maximum replicas configurable are 300 in Azure portal and 1000 in Azure CLI. There must also be enough cores quota available. |
+| Session pools | Global | Up to 6 | 10,000 | Maximum number of dynamic session pools per subscription. There's no official Azure quota for this yet; raise a support case to request an increase. |
-| Feature | Scope | Default | Is Configurable | Remarks |
-|--|--|--|--|--|
-| Cores | Replica | 2 | No | Maximum number of cores available to a revision replica. |
-| Cores | Environment | 100 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
## Workload Profiles Environments

### Consumption workload profile
-| Feature | Scope | Default | Is Configurable | Remarks |
+| Feature | Scope | Default Quota | Maximum Quota | Remarks |
|--|--|--|--|--|
-| Cores | Replica | 4 | No | Maximum number of cores available to a revision replica. |
-| Cores | Environment | 100 | Yes | Maximum number of cores the Consumption workload profile in a Dedicated plan environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
+| Cores | Replica | 4 | 4 | Maximum number of cores available to a revision replica. |
+| Cores | Environment | 100 | 5,000 | Maximum number of cores the Consumption workload profile in a Dedicated plan environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. Quota name: Managed Environment General Purpose Cores |
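The environment core quota is calculated as the sum of cores requested by each active replica across all revisions. As a sketch with hypothetical numbers:

```python
def environment_core_usage(replicas):
    """Sum the cores requested by each active replica across all revisions."""
    return sum(r["cores"] for r in replicas if r["active"])

replicas = [
    {"cores": 4, "active": True},   # revision 1, replica 1
    {"cores": 4, "active": True},   # revision 1, replica 2
    {"cores": 2, "active": False},  # replica of a deactivated revision doesn't count
]
usage = environment_core_usage(replicas)
print(usage, usage <= 100)  # 8 True (within the default 100-core quota)
```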
### Dedicated workload profiles
-| Feature | Scope | Default | Is Configurable | Remarks |
+| Feature | Scope | Default Quota | Maximum Quota | Remarks |
|--|--|--|--|--|
-| Cores | Subscription | 2000 | Yes | Maximum number of dedicated workload profile cores within one subscription |
-| Cores | Replica | Up to maximum cores a workload profile supports | No | Maximum number of cores available to a revision replica. |
-| Cores | General Purpose Workload Profiles | 100 | Yes | The total cores available to all general purpose (D-series) profiles within an environment. |
-| Cores | Memory Optimized Workload Profiles | 50 | Yes | The total cores available to all memory optimized (E-series) profiles within an environment. |
-
-For more information regarding quotas, see the [Quotas roadmap](https://github.com/microsoft/azure-container-apps/issues/503) in the Azure Container Apps GitHub repository.
+| Cores | Subscription | 2,000 | Unlimited | Maximum number of dedicated workload profile cores within one subscription |
+| Cores | Replica | Maximum cores a workload profile supports | Same as default quota | Maximum number of cores available to a revision replica. |
+| Cores | Environment | 100 | 5,000 | The total cores available to all general purpose (D-series) profiles within an environment. Maximum assumes appropriate network size. Quota name: Managed Environment General Purpose Cores |
+| Cores | Environment | 50 | 5,000 | The total cores available to all memory optimized (E-series) profiles within an environment. Maximum assumes appropriate network size. Quota name: Managed Environment Memory Optimized Cores |
> [!NOTE]
> For GPU-enabled workload profiles, you need to request capacity via a [request for a quota increase in the Azure portal](/azure/quotas/quickstart-increase-quota-portal).
For more information regarding quotas, see the [Quotas roadmap](https://github.c
> [!NOTE]
> [Free trial](https://azure.microsoft.com/offers/ms-azr-0044p) and [Azure for Students](https://azure.microsoft.com/free/students/) subscriptions are limited to one environment per subscription globally and ten (10) cores per environment.
+
+
+## Consumption plan
+
+All new environments use the Consumption workload profile architecture listed above. Only environments created before January 2024 use the consumption plan below.
+
+| Feature | Scope | Default Quota | Maximum Quota | Remarks |
+|--|--|--|--|--|
+| Cores | Replica | 2 | 2 | Maximum number of cores available to a revision replica. |
+| Cores | Environment | 100 | 1,500 | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. Quota name: Managed Environment Consumption Cores |
+
+
+
## Considerations

* If an environment runs out of allowed cores:
cosmos-db Analytics And Business Intelligence Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytics-and-business-intelligence-use-cases.md
+
+ Title: Near real-time analytics use cases for Azure Cosmos DB
+description: Learn how real-time analytics is used in supply chain analytics, forecasting, reporting, real-time personalization, and IoT predictive maintenance.
++++ Last updated : 06/25/2024+++
+# Azure Cosmos DB: No-ETL analytics use cases
+
+Azure Cosmos DB provides various analytics options for no-ETL, near real-time analytics over operational data. You can enable analytics on your Azure Cosmos DB data using the following options:
+* Mirroring Azure Cosmos DB in Microsoft Fabric
+* Azure Synapse Link for Azure Cosmos DB
+
+To learn more about these options, see ["Analytics and BI on your Azure Cosmos DB data."](analytics-and-business-intelligence-overview.md)
+
+> [!IMPORTANT]
+> Mirroring Azure Cosmos DB in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you're considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started, see [Mirroring Azure Cosmos DB](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context).
+
+No-ETL, near real-time analytics can open up various possibilities for your business. Here are three sample scenarios:
+
+* Supply chain analytics, forecasting & reporting
+* Real-time personalization
+* Predictive maintenance and anomaly detection in IoT scenarios
+
+## Supply chain analytics, forecasting & reporting
+
+Research studies show that embedding big data analytics in supply chain operations leads to improvements in order-to-cycle delivery times and supply chain efficiency.
+
+Manufacturers are adopting cloud-native technologies to break out of the constraints of legacy Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) systems. With supply chains generating increasing volumes of operational data every minute (order, shipment, and transaction data), manufacturers need an operational database that scales to handle those volumes, along with an analytical platform that provides real-time contextual intelligence to stay ahead of the curve.
+
+The following architecture shows the power of using Azure Cosmos DB as the cloud-native operational database in supply chain analytics:
++
+Based on the previous architecture, you can achieve the following use cases:
+
+* **Prepare & train predictive pipeline:** Generate insights over the operational data across the supply chain using machine learning. This way, you can lower inventory and operations costs and reduce order-to-delivery times for customers.
+
+ Mirroring and Synapse Link allow you to analyze the changing operational data in Azure Cosmos DB without any manual ETL processes. These offerings save you from additional cost, latency, and operational complexity. They enable data engineers and data scientists to build robust predictive pipelines:
+
+ * Query operational data from Azure Cosmos DB by using native integration with Apache Spark pools in Microsoft Fabric or Azure Synapse Analytics. You can query the data in an interactive notebook or scheduled remote jobs without complex data engineering.
+
+ * Build Machine Learning (ML) models with Spark ML algorithms and Azure Machine Learning (AML) integration in Microsoft Fabric or Azure Synapse Analytics.
+
+ * Write back the results after model inference into Azure Cosmos DB for operational near-real-time scoring.
+
+* **Operational reporting:** Supply chain teams need flexible and custom reports over real-time, accurate operational data. These reports are required to obtain a snapshot view of supply chain effectiveness, profitability, and productivity. This reporting allows data analysts and other key stakeholders to constantly reevaluate the business and identify areas to tweak to reduce operational costs.
+
+ Mirroring and Synapse Link for Azure Cosmos DB enable rich business intelligence (BI)/reporting scenarios:
+
+ * Query operational data from Azure Cosmos DB by using native integration with the full expressiveness of the T-SQL language.
+
+ * Model and publish auto-refreshing BI dashboards over Azure Cosmos DB through Power BI integrated in Microsoft Fabric or Azure Synapse Analytics.
+
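For the write-back step described above, a minimal sketch (the helper and in-memory stub are illustrative; with the `azure-cosmos` Python SDK, `container` would be a container client whose `upsert_item` method persists the document):

```python
def write_back_scores(container, scored_items):
    """Upsert model-inference results into a Cosmos DB container for
    near-real-time operational scoring."""
    for item in scored_items:
        container.upsert_item(item)  # with azure-cosmos, this persists the document
    return len(scored_items)

class _StubContainer:
    """In-memory stand-in so the sketch runs without a Cosmos DB account."""
    def __init__(self):
        self.items = {}
    def upsert_item(self, item):
        self.items[item["id"]] = item

container = _StubContainer()
count = write_back_scores(container, [{"id": "cust-1", "score": 0.92}])
print(count)  # 1
```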
+The following guidance covers integrating batch and streaming data into Azure Cosmos DB:
+
+* **Batch data integration & orchestration:** With supply chains getting more complex, supply chain data platforms need to integrate with a variety of data sources and formats. Microsoft Fabric and Azure Synapse come built-in with the same data integration engine and experiences as Azure Data Factory. This integration allows data engineers to create rich data pipelines without a separate orchestration engine:
+
+ * Move data from 85+ supported data sources to [Azure Cosmos DB with Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
+
+ * Write code-free ETL pipelines to Azure Cosmos DB including [relational-to-hierarchical and hierarchical-to-hierarchical mappings with mapping data flows](../data-factory/how-to-sqldb-to-cosmosdb.md).
+
+* **Streaming data integration & processing:** With the growth of Industrial IoT (sensors tracking assets from 'floor-to-store', connected logistics fleets, and so on), an explosion of real-time data is being generated in a streaming fashion and needs to be integrated with traditional slow-moving data to generate insights. Azure Stream Analytics is a recommended service for streaming ETL and processing on Azure, with a [wide range of scenarios](../stream-analytics/streaming-technologies.md). Azure Stream Analytics supports [Azure Cosmos DB as a native data sink](../stream-analytics/stream-analytics-documentdb-output.md).
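
To make the streaming aggregation idea concrete, here's a minimal Python sketch of a tumbling-window average of the kind Stream Analytics computes before writing results to a sink such as Azure Cosmos DB (the event data and window size are made up for illustration):

```python
from collections import defaultdict

def tumbling_window_avg(events, window_seconds=60):
    """Average sensor readings per (window, device) over non-overlapping
    (tumbling) time windows. Each event is (timestamp, device_id, value)."""
    totals = defaultdict(lambda: [0.0, 0])
    for ts, device, value in events:
        # Each event belongs to exactly one window, keyed by its start time.
        window_start = (ts // window_seconds) * window_seconds
        totals[(window_start, device)][0] += value
        totals[(window_start, device)][1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

events = [
    (0, "dev1", 20.0), (30, "dev1", 22.0),  # both fall in the window starting at t=0
    (65, "dev1", 30.0),                     # window starting at t=60
]
print(tumbling_window_avg(events))
```

In a real pipeline the equivalent grouping is expressed in the Stream Analytics query language with `GROUP BY ... TumblingWindow(...)`, and the output is routed to the Cosmos DB sink.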
+
+## Real-time personalization
+
+Retailers today must build secure and scalable e-commerce solutions that meet the demands of both customers and the business. These e-commerce solutions need to engage customers through customized products and offers, process transactions quickly and securely, and focus on fulfillment and customer service. Azure Cosmos DB, along with Synapse Link for Azure Cosmos DB, allows retailers to generate personalized recommendations for customers in real time. These solutions use low-latency and tunable consistency settings for immediate insights, as shown in the following architecture:
++
+* **Prepare & train predictive pipeline:** You can generate insights over the operational data across your business units or customer segments using Fabric or Synapse Spark and machine learning models. This translates to personalized delivery to target customer segments, predictive end-user experiences, and targeted marketing to fit your end-user requirements.
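
As a toy illustration of the kind of personalized scoring such a pipeline might produce (not the actual Fabric or Synapse Spark ML pipeline), here's a cosine-similarity recommender over a made-up user-item rating matrix:

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, items):
    """Recommend the item that the most similar user rated highest
    among items the target user hasn't rated yet (rating 0)."""
    best_user = max(others, key=lambda u: cosine(target, u))
    candidates = [(rating, item)
                  for rating, own, item in zip(best_user, target, items)
                  if own == 0]
    return max(candidates)[1] if candidates else None

items = ["shoes", "hat", "scarf"]
target = [5, 0, 0]                 # the target user rated only "shoes"
others = [[4, 5, 1], [1, 2, 5]]    # rating rows for two other users
print(recommend(target, others, items))
```

In a production scenario the trained model's scores would be written back into Azure Cosmos DB so the serving layer can read recommendations with low latency.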
+## IoT predictive maintenance
+
+Industrial IoT innovations have drastically reduced machinery downtime and increased overall efficiency across all fields of industry. One such innovation is predictive maintenance analytics for machinery at the cloud edge.
+
+The following architecture uses the cloud-native HTAP capabilities for IoT predictive maintenance:
++
+* **Prepare & train predictive pipeline:** The historical operational data from IoT device sensors could be used to train predictive models such as anomaly detectors. These anomaly detectors are then deployed back to the edge for real-time monitoring. Such a virtuous loop allows for continuous retraining of the predictive models.
+
+* **Operational reporting:** With the growth of digital twin initiatives, companies are collecting vast amounts of operational data from a large number of sensors to build a digital copy of each machine. This data powers BI needs for understanding trends over historical data in addition to recent hot data.
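
The anomaly-detector training sketched in the first bullet can be illustrated with a simple z-score rule fit on historical sensor readings (the threshold and readings are illustrative, not a production model):

```python
import statistics

def train_detector(history, z_threshold=3.0):
    """Fit mean/stdev on historical readings and return a predicate that
    flags readings more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return lambda reading: abs(reading - mean) / stdev > z_threshold

history = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]  # normal temperature readings
is_anomaly = train_detector(history)
print(is_anomaly(20.1), is_anomaly(35.0))
```

The fitted detector would then be deployed back to the edge for real-time monitoring, and retrained as new operational data accumulates.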
+
+## Related content
++
+* [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context)
+
+* [Getting started with mirroring](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context)
+
+* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md)
+
+* [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md)
+
cosmos-db Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-security-overview.md
Title: Security overview - Azure Cosmos DB for PostgreSQL description: Information protection and network security for Azure Cosmos DB for PostgreSQL.--++ Previously updated : 01/14/2022 Last updated : 07/04/2024 # Security in Azure Cosmos DB for PostgreSQL
This page outlines the multiple layers of security available to protect the data
### In transit
-Whenever data is ingested into a node, Azure Cosmos DB for PostgreSQL secures your data by encrypting it in-transit with Transport Layer Security 1.2. Encryption (SSL/TLS) is always enforced, and can't be disabled.
+Whenever data is ingested into a node, Azure Cosmos DB for PostgreSQL secures your data by encrypting it in-transit with Transport Layer Security (TLS) 1.2 or higher. Encryption (SSL/TLS) is always enforced, and can't be disabled.
+
+The minimum TLS version required to connect to the cluster can be enforced by setting the **ssl_min_protocol_version** coordinator and worker node parameter to *TLSv1.2* or *TLSv1.3* for TLS 1.2 or TLS 1.3, respectively.
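
For example, you can confirm the effective value on a node with a read-only check from any SQL session (changing the parameter itself is done through the cluster's node parameters):

```sql
SHOW ssl_min_protocol_version;
```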
### At rest
cosmos-db Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-initial.md
Title: Initial cluster size - Azure Cosmos DB for PostgreSQL description: Pick the right initial size for your use case--++ Previously updated : 01/30/2023 Last updated : 07/04/2024 # Pick initial size for cluster in Azure Cosmos DB for PostgreSQL
Last updated 01/30/2023
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] The size of a cluster, both number of nodes and their hardware capacity,
-is [easy to change](howto-scale-grow.md)). However you still need to
+is [easy to change](howto-scale-grow.md). However, you still need to
choose an initial size for a new cluster. Here are some tips for a reasonable choice.
cosmos-db Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-upgrade.md
Title: Upgrade cluster - Azure Cosmos DB for PostgreSQL description: See how you can upgrade PostgreSQL and Citus in Azure Cosmos DB for PostgreSQL.--++ Previously updated : 01/30/2023 Last updated : 07/04/2024 # Upgrade cluster in Azure Cosmos DB for PostgreSQL
works properly, upgrade the original cluster.
> [!NOTE] > If you're already running the latest PostgreSQL version, the selection and button are grayed out.
+## Post-upgrade tasks
+
+After a major PostgreSQL version upgrade, run the `ANALYZE` operation to refresh the `pg_statistic` table. `pg_statistic` is a system catalog table in PostgreSQL that stores statistical data about the content of table columns and index expressions. Entries in `pg_statistic` are created by the [ANALYZE](https://www.postgresql.org/docs/16/sql-analyze.html) command and used by the query planner.
+
+Run the `ANALYZE` command without any parameters to generate statistics for the tables in the database on your cluster. The default database name is 'citus'. If a custom database name was used at cluster creation time, you can find it on the **Overview** page of your cluster's properties. Use the optional `VERBOSE` flag to see progress.
+
+```sql
+ANALYZE VERBOSE;
+```
+
+> [!NOTE]
+> Database performance might be impacted if you don't run the `ANALYZE` operation after a major PostgreSQL version upgrade on your cluster.
+ ## Next steps * Learn about [supported PostgreSQL versions](reference-versions.md).
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
Previously updated : 06/17/2024 Last updated : 06/26/2024 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
These generic properties are supported for a SQL server linked service when you
| type | The type property must be set to **SqlServer**. | Yes | | server | The name or network address of the SQL server instance you want to connect to. | Yes | | database | The name of the database. | Yes |
-| authenticationType |The type used for authentication. Allowed values are [**SQL**](#sql-authentication) (default), [**Windows**](#windows-authentication). Go to the relevant authentication section on specific properties and prerequisites. | Yes |
+| authenticationType |The type used for authentication. Allowed values are [**SQL**](#sql-authentication) (default), [**Windows**](#windows-authentication), and [**UserAssignedManagedIdentity**](#user-assigned-managed-identity-authentication) (only for [SQL Server on Azure VMs](/azure/azure-sql/virtual-machines)). Go to the relevant authentication section on specific properties and prerequisites. | Yes |
| alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No | | encrypt |Indicate whether TLS encryption is required for all data sent between the client and server. Options: mandatory (for true, default)/optional (for false)/strict. | No | | trustServerCertificate | Indicate whether the channel will be encrypted while bypassing the certificate chain to validate trust. | No |
To use Windows authentication, in addition to the generic properties that are de
} ```
+#### User-assigned managed identity authentication
+
+>[!Note]
+>The user-assigned managed identity authentication only applies to [SQL Server on Azure VMs](/azure/azure-sql/virtual-machines).
+
+A data factory or Synapse workspace can be associated with a [user-assigned managed identity](data-factory-service-identity.md#user-assigned-managed-identity) that represents the service when authenticating to other resources in Azure. You can use this managed identity for [SQL Server on Azure VMs](/azure/azure-sql/virtual-machines) authentication. The designated factory or Synapse workspace can access and copy data from or to your database by using this identity.
+
+To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
+
+You also need to follow the steps below:
+
+1. [Grant permissions to your user-assigned managed identity](/azure/azure-sql/virtual-machines/windows/configure-azure-ad-authentication-for-sql-vm#grant-permissions).
+
+1. [Enable Microsoft Entra authentication](/azure/azure-sql/virtual-machines/windows/configure-azure-ad-authentication-for-sql-vm#enable-microsoft-entra-authentication) to your [SQL Server on Azure VMs](/azure/azure-sql/virtual-machines).
+
+1. [Create contained database users](/azure/azure-sql/database/authentication-aad-configure#create-contained-users-mapped-to-azure-ad-identities) for the user-assigned managed identity. Connect to the database from or to which you want to copy data by using tools like SQL Server Management Studio, with a Microsoft Entra identity that has at least ALTER ANY USER permission. Run the following T-SQL:
+
+ ```sql
+ CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;
+ ```
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant the user-assigned managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more options, see [this document](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
+
+ ```sql
+ ALTER ROLE [role name] ADD MEMBER [your_resource_name];
+ ```
+1. Assign one or multiple user-assigned managed identities to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.
+
+1. Configure a SQL Server linked service.
+
+**Example**
+
+```json
+{
+ "name": "SqlServerLinkedService",
+ "properties": {
+ "type": "SqlServer",
+ "typeProperties": {
+ "server": "<name or network address of the SQL server instance>",
+ "database": "<database name>",
+ "encrypt": "<encrypt>",
+ "trustServerCertificate": false,
+ "authenticationType": "UserAssignedManagedIdentity",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ### Legacy version These generic properties are supported for a SQL server linked service when you apply **Legacy** version:
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Cloud Security Posture Management (CSPM) description: Learn more about Cloud Security Posture Management (CSPM) in Microsoft Defender for Cloud and how it helps improve your security posture. Previously updated : 06/30/2024 Last updated : 07/04/2024 #customer intent: As a reader, I want to understand the concept of Cloud Security Posture Management (CSPM) in Microsoft Defender for Cloud.
The following table summarizes each plan and their cloud availability.
| [Code-to-cloud mapping for IaC](iac-template-mapping.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure DevOps | | [PR annotations](review-pull-request-annotations.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | GitHub, Azure DevOps | | Internet exposure analysis | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
-| [External attack surface management (EASM)](concept-easm.md) (for details see [Defender CSPM integration](concept-easm.md#defender-cspm-integration)) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [External attack surface management (EASM)](concept-easm.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Permissions Management (CIEM)](permissions-management.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Regulatory compliance assessments](concept-regulatory-compliance-standards.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [ServiceNow Integration](integration-servicenow.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
Title: Microsoft Defender for Cloud integration with Defender External attack surface management (EASM)
+ Title: External attack surface management in Defender for Cloud
description: Learn about Defender for Cloud integration with Defender External attack surface management (EASM) to enhance security and reduce the risk of attacks. Previously updated : 05/20/2024 Last updated : 07/03/2024 #customer intent: As a reader, I want to learn about the integration between Defender for Cloud and Defender External attack surface management (EASM) so that I can enhance my organization's security.
-# Integration with Defender EASM
+# External attack surface management in Defender for Cloud
-You can use Microsoft Defender for Cloud's integration with Microsoft Defender External Attack Surface Management (EASM) to improve your organization's security posture, and reduce the potential risk of being attacked.
+Microsoft Defender for Cloud can perform external attack surface management (EASM), or outside-in, scans on multicloud environments. Defender for Cloud accomplishes this through its integration with Microsoft Defender EASM. The integration allows organizations to improve their security posture while reducing the potential risk of being attacked by exploring their external attack surface. The integration is included with the Defender Cloud Security Posture Management (CSPM) plan by default and doesn't require a license for Defender EASM or any special configuration.
-An external attack surface is the entire area of an organization or system that is susceptible to an attack from an external source. The attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it's to protect.
+Defender EASM applies Microsoft's crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization, such as:
-Defender EASM continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. This visibility enables security and IT teams to identify unknowns, prioritize risk, eliminate threats, and extend vulnerability and exposure control beyond the firewall.
+- Discover digital assets, always-on inventory.
+- Analyze and prioritize risks and threats.
+- Pinpoint attacker-exposed weaknesses, anywhere and on-demand.
+- Gain visibility into third-party attack surfaces.
-Defender EASM applies Microsoft's crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization, such as:
+With this information, security and IT teams can identify unknowns, prioritize risks, eliminate threats, and extend vulnerability and exposure control beyond the firewall. The attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it is to protect.
+
+EASM collects data on publicly exposed assets (“outside-in”), which Defender for Cloud's Cloud Security Posture Management (CSPM) (“inside-out”) plan uses to assist with internet-exposure validation and discovery capabilities.
-- Discover digital assets, always-on inventory -- Analyze and prioritize risks and threats-- Pinpoint attacker-exposed weaknesses, anywhere and on-demand-- Gain visibility into third-party attack surfaces
+Learn more about [Defender EASM](../external-attack-surface-management/overview.md).
-EASM collects data for publicly exposed assets (“outside-in”). Defender for Cloud CSPM (“inside-out”) can use that data to assist with internet-exposure validation and discovery capabilities, to provide better visibility to customers.
+## EASM capabilities in Defender CSPM
-## Defender CSPM integration
+The [Defender CSPM](concept-cloud-security-posture-management.md) plan utilizes the data collected through the Defender EASM integration to provide the following capabilities within the Defender for Cloud portal:
-While [Defender CSPM](concept-cloud-security-posture-management.md) includes some external attack surface management capabilities, it doesn't include the full EASM solution. Instead, it provides detection of internet accessible assets via Defender for Cloud recommendations and attack paths.
+- Discovery of all internet-facing cloud resources through an outside-in scan.
+- Attack path analysis, which finds all exploitable paths starting from internet-exposed IPs.
+- Custom queries in the cloud security explorer that correlate all internet-exposed IPs with the rest of Defender for Cloud data.
-## Next steps
-- Learn about [cloud security explorer and attack paths](concept-attack-path.md) in Defender for Cloud.-- Learn about [Defender EASM](../external-attack-surface-management/overview.md).-- Learn how to [deploy Defender for EASM](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md).
+## Related content
+- [Detect internet exposed IP addresses](detect-exposed-ip-addresses.md)
+- [Cloud security explorer and attack paths](concept-attack-path.md) in Defender for Cloud.
+- [Deploy Defender for EASM](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md).
defender-for-cloud Detect Exposed Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/detect-exposed-ip-addresses.md
+
+ Title: Detect internet exposed IP addresses
+description: Learn how to detect exposed IP addresses with cloud security explorer in Microsoft Defender for Cloud to proactively identify security risks.
+ Last updated : 07/03/2024++
+ai-usage: ai-assisted
+#customer intent: As a security professional, I want to learn how to detect exposed IP addresses with cloud security explorer in Microsoft Defender for Cloud so that I can proactively identify security risks in my cloud environment and improve my security posture.
++
+# Detect internet exposed IP addresses
+
+Microsoft Defender for Cloud provides organizations the capability to perform external attack surface management (EASM), or outside-in, scans to improve their security posture through its integration with Defender EASM. Defender for Cloud's EASM scans use the information provided by the Defender EASM integration to provide actionable recommendations and visualizations of attack paths, reducing the risk of bad actors exploiting internet-exposed IP addresses.
+
+By using Defender for Cloud's cloud security explorer, security teams can build queries and proactively hunt for security risks. Security teams can also use attack path analysis to visualize the potential attack paths that an attacker could use to reach their critical assets.
+
+## Prerequisites
+
+- You need a Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free subscription](https://azure.microsoft.com/pricing/free-trial/).
+
+- You must [enable the Defender Cloud Security Posture Management (CSPM) plan](tutorial-enable-cspm-plan.md).
+
+## Detect internet exposed IP addresses with the cloud security explorer
+
+The cloud security explorer allows you to build queries, such as an outside-in scan, that can proactively hunt for security risks in your environments, including IP addresses that are exposed to the internet.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Search for and select **Microsoft Defender for Cloud** > **Cloud security explorer**.
+
+1. In the dropdown menu, search for and select **IP addresses**.
+
+ :::image type="content" source="media/detect-exposed-ip-addresses/search-ip-addresses.png" alt-text="Screenshot that shows where to navigate to in Defender for Cloud to search for and select the IP addresses option." lightbox="media/detect-exposed-ip-addresses/search-ip-addresses.png":::
+
+1. Select **Done**.
+
+1. Select **+**.
+
+1. In the select condition dropdown menu, select **DEASM Findings**.
+
+ :::image type="content" source="media/detect-exposed-ip-addresses/deasm-findings.png" alt-text="Screenshot that shows where to locate the DEASM Findings option." lightbox="media/detect-exposed-ip-addresses/deasm-findings.png":::
+
+1. Select the **+** button.
+
+1. In the select condition dropdown menu, select **Routes traffic to**.
+
+1. In the select resource type dropdown menu, select **Select all**.
+
+ :::image type="content" source="media/detect-exposed-ip-addresses/select-all.png" alt-text="Screenshot that shows where the select all option is located." lightbox="media/detect-exposed-ip-addresses/select-all.png":::
+
+1. Select **Done**.
+
+1. Select the **+** button.
+
+1. In the select condition dropdown menu, select **Routes traffic to**.
+
+1. In the select resource type dropdown menu, select **Virtual machine**.
+
+1. Select **Done**.
+
+1. Select **Search**.
+
+ :::image type="content" source="media/detect-exposed-ip-addresses/search-results.png" alt-text="Screenshot that shows the fully built query and where the search button is located." lightbox="media/detect-exposed-ip-addresses/search-results.png":::
+
+1. Select a result to review the findings.
+
+## Detect exposed IP addresses with attack path analysis
+
+Using the attack path analysis, you can view a visualization of the attack paths that an attacker could use to reach your critical assets.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Search for and select **Microsoft Defender for Cloud** > **Attack path analysis**.
+
+1. Search for **Internet exposed**.
+
+1. Review and select a result.
+
+1. [Remediate the attack path](how-to-manage-attack-path.md#remediate-attack-paths).
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Identify and remediate attack paths](how-to-manage-attack-path.md)
energy-data-services How To Deploy Gcz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-deploy-gcz.md
Last updated 05/11/2024
-zone_pivot_groups: energy-data-services-gcz-options
+zone_pivot_groups: gcz-aks-or-windows
# Deploy Geospatial Consumption Zone
There are two main deployment options for the GCZ service:
- **Azure Kubernetes Service (AKS)**: Deploy the GCZ service on an AKS cluster. This deployment option is recommended for production environments. It requires more setup, configuration, and maintenance. It also has some limitations in the provided container images. - **Windows**: Deploy the GCZ service on Windows. This deployment option is recommended for development and testing environments, as it's easier to set up and configure, and requires less maintenance. [!INCLUDE [Azure Kubernetes Service (AKS)](includes/how-to/how-to-deploy-gcz/deploy-gcz-on-aks.md)] ::: zone-end [!INCLUDE [Windows](includes/how-to/how-to-deploy-gcz/deploy-gcz-on-windows.md)]
Through APIM we can add policies to secure, monitor, and manage the APIs.
- url: "http://<GCZ-Service-External-IP>/ignite-provider" ```
+##### [Azure portal](#tab/portal)
[!INCLUDE [Azure portal](includes/how-to/how-to-deploy-gcz/deploy-gcz-apim-portal.md)] -
+##### [Azure CLI](#tab/cli)
[!INCLUDE [Azure CLI](includes/how-to/how-to-deploy-gcz/deploy-gcz-apim-cli.md)] + ## Testing the GCZ service
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 05/27/2024 Last updated : 07/05/2024 # Azure HDInsight release notes
To subscribe, click the **watch** button in the banner and watch out for [HDInsi
## Release Information
-### Release date: May 16, 2024
+### Release date: Jun 19, 2024
This release note applies to + :::image type="icon" source="./media/hdinsight-release-notes/yes-icon.svg" border="false"::: HDInsight 5.0 version. :::image type="icon" source="./media/hdinsight-release-notes/yes-icon.svg" border="false"::: HDInsight 4.0 version.
-HDInsight release will be available to all regions over several days. This release note is applicable for image number **2405081840**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+HDInsight release will be available to all regions over several days. This release note is applicable for image number **2406180258**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions. **OS versions**
+* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4
* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 * HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md). ## Fixed issues
+* Security enhancements
+ * Improvements on using Tags for clusters in line with the [SFI](https://www.microsoft.com/microsoft-cloud/resources/secure-future-initiative) requirements.
+ * Improvements in probes scripts as per the [SFI](https://www.microsoft.com/microsoft-cloud/resources/secure-future-initiative) requirements.
+* Improvements in the HDInsight Log Analytics with System Managed Identity support for HDInsight Resource Provider.
+* Addition of new activity to upgrade the `mdsd` agent version for old image (created before 2024).
+* Enabling MISE in gateway as part of the continued improvements for [MSAL Migration](/entra/identity-platform/msal-overview).
+* Incorporate Spark Thrift Server `Httpheader hiveConf` to the Jetty HTTP ConnectionFactory.
+* Revert RANGER-3753 and RANGER-3593.
+
+ The `setOwnerUser` implementation in the Ranger 2.3.0 release has a critical regression issue when used by Hive. In Ranger 2.3.0, when HiveServer2 tries to evaluate the policies, the Ranger client tries to get the owner of the Hive table by calling the Metastore in the `setOwnerUser` function, which essentially makes a call to storage to check access for that table. This issue causes queries to run slowly when Hive runs with Ranger 2.3.0.
-* Added API in gateway to get token for Keyvault, as part of the SFI initiative.
-* In the new Log monitor `HDInsightSparkLogs` table, for log type `SparkDriverLog`, some of the fields were missing. For example, `LogLevel & Message`. This release adds the missing fields to schemas and fixed formatting for `SparkDriverLog`.
-* Livy logs not available in Log Analytics monitoring `SparkDriverLog` table, which was due to an issue with Livy log source path and log parsing regex in `SparkLivyLog` configs.
-* Any HDInsight cluster, using ADLS Gen2 as a primary storage account can leverage MSI based access to any of the Azure resources (for example, SQL, Keyvaults) which is used within the application code.
-
-
## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon * [Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/). * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
-* Retirement Notifications for [HDInsight 4.0](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/) and [HDInsight 5.0](https://azure.microsoft.com/updates/hdinsight5retire/).
+* Retirement Notifications for [HDInsight 4.0](https://azure.microsoft.com/updates/azure-hdinsight-40-will-be-retired-on-31-march-2025-migrate-your-hdinsight-clusters-to-51) and [HDInsight 5.0](https://azure.microsoft.com/updates/hdinsight5retire/).
If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
---+++ Last updated 10/05/2023 #Customer intent: As a full-stack machine learning pro, I want to use Apache Spark in Azure Machine Learning.
machine-learning Apache Spark Environment Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-environment-configuration.md
Title: Apache Spark - environment configuration description: Learn how to configure your Apache Spark environment for interactive data wrangling.---+++
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
---+++ Last updated 07/13/2023 #Customer intent: As an experienced Python developer, I need secure access to my data in my Azure storage solutions, and I need to use that data to accomplish my machine learning tasks.
machine-learning Concept Top Level Entities In Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-top-level-entities-in-managed-feature-store.md
Title: Top-level entities in managed feature store description: Learn about how Azure Machine Learning uses managed feature stores to create data transformation features and make these features available for training and deployment.
machine-learning Concept What Is Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-what-is-managed-feature-store.md
Title: What is managed feature store? description: Learn about the managed feature store in Azure Machine Learning.
machine-learning Dsvm Common Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-common-identity.md
description: Learn how to create common user accounts that can be used across mu
keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process Last updated 04/10/2024
machine-learning Dsvm Enterprise Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-enterprise-overview.md
keywords: deep learning, AI, data science tools, data science virtual machine, g
Last updated 04/10/2024
machine-learning Dsvm Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-pools.md
keywords: deep learning, AI, data science tools, data science virtual machine, g
Last updated 04/11/2024
machine-learning Dsvm Samples And Walkthroughs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-samples-and-walkthroughs.md
Last updated 04/16/2024
machine-learning Dsvm Secure Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-secure-access-keys.md
Last updated 04/16/2024
machine-learning Dsvm Tools Data Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-data-platforms.md
keywords: data science tools, data science virtual machine, tools for data scien
Last updated 04/16/2024
machine-learning Dsvm Tools Data Science https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-data-science.md
Last updated 04/17/2024
machine-learning Dsvm Tools Deep Learning Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks.md
Last updated 04/17/2024
machine-learning Dsvm Tools Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md
keywords: data science tools, data science virtual machine, tools for data scien
Last updated 04/17/2024
machine-learning Dsvm Tools Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-ingestion.md
Last updated 04/19/2024
machine-learning Dsvm Tools Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-languages.md
Last updated 04/22/2024
machine-learning Dsvm Tools Productivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-productivity.md
description: Learn about the productivity tools on the Data Science Virtual Mach
keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process Last updated 04/22/2024
machine-learning Dsvm Tutorial Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-bicep.md
Title: 'Quickstart: Create an Azure Data Science VM - Bicep'
description: In this quickstart, you use Bicep to quickly deploy a Data Science Virtual Machine. Last updated 04/22/2024
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
Title: 'Quickstart: Create a Data Science VM - Resource Manager template'
description: Learn how to use an Azure Resource Manager template to quickly deploy a Data Science Virtual Machine. Last updated 04/23/2024
machine-learning Dsvm Ubuntu Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
Title: 'Quickstart: Create an Ubuntu Data Science Virtual Machine'
description: Configure and create a Data Science Virtual Machine for Linux (Ubuntu) to do analytics and machine learning. Last updated 04/23/2024 #Customer intent: As a data scientist, I want to learn how to provision the Linux DSVM so that I can move my existing workflow to the cloud.
machine-learning Linux Dsvm Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough.md
Last updated 04/25/2024
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
keywords: data science tools, data science virtual machine, tools for data scien
Last updated 04/26/2024
machine-learning Provision Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/provision-vm.md
description: Learn how to configure and create a Data Science Virtual Machine on Azure for analytics and machine learning. Last updated 04/27/2024
machine-learning Reference Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-known-issues.md
description: Get a list of the known issues, workarounds, and troubleshooting for Azure Data Science Virtual Machine. Last updated 04/29/2024
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
Title: 'Reference: Ubuntu Data Science Virtual Machine' description: Details on tools included in the Ubuntu Data Science Virtual Machine. Last updated 04/30/2024
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Title: What's new on the Data Science Virtual Machine description: Release notes for the Azure Data Science Virtual Machine. Last updated 05/21/2024
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
keywords: data science tools, data science virtual machine, tools for data scien
Last updated 05/21/2024
machine-learning Ubuntu Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/ubuntu-upgrade.md
keywords: deep learning, AI, data science tools, data science virtual machine, t
Last updated 05/08/2024
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
description: Perform data exploration and modeling tasks on the Windows Data Sci
Last updated 06/05/2024
machine-learning Feature Retrieval Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-retrieval-concepts.md
description: The feature retrieval specification, and how to use it for training
Last updated 12/06/2023
machine-learning Feature Set Materialization Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-set-materialization-concepts.md
description: Build and use feature set materialization resources.
Last updated 12/06/2023
machine-learning Feature Set Specification Transformation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-set-specification-transformation-concepts.md
description: The feature set specification, transformations, and best practices.
Last updated 12/06/2023
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
Last updated 09/05/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
Last updated 09/26/2023
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Last updated 06/19/2023 # Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Last updated 06/20/2023
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
Last updated 02/20/2024 # Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute resource to train my machine learning models.
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
Last updated 04/18/2024
machine-learning How To Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-label-data.md
Title: Labeling images and text documents title.suffix: Azure Machine Learning description: Use data labeling tools to rapidly label text or images for a machine learning data labeling project.
machine-learning How To Manage Imported Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-imported-data-assets.md
Last updated 06/19/2023
machine-learning How To Manage Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-synapse-spark-pool.md
Title: Attach and manage a Synapse Spark pool in Azure Machine Learning description: Learn how to attach and manage Spark pools with Azure Synapse.
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
Last updated 04/18/2024 # Customer intent: As an experienced Python developer, I need to make my Azure storage data available to my remote compute, to train my machine learning models.
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Last updated 02/06/2024
machine-learning How To Schedule Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-data-import.md
Last updated 06/19/2023
machine-learning How To Setup Access Control Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-access-control-feature-store.md
Title: Manage access to managed feature store description: Learn how to manage access to an Azure Machine Learning managed feature store using Azure role-based access control (Azure RBAC).
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
Title: Submit Spark jobs in Azure Machine Learning description: Learn how to submit standalone and pipeline Spark jobs in Azure Machine Learning.
machine-learning How To Troubleshoot Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-data-access.md
Title: Troubleshoot data access description: Learn how to troubleshoot and resolve data access issues.
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
Title: Interactive data wrangling with Apache Spark in Azure Machine Learning description: Learn how to use Apache Spark to wrangle data with Azure Machine Learning.
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
Last updated 04/15/2024 monikerRange: 'azureml-api-1 || azureml-api-2'
machine-learning Migrate To V2 Resource Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-compute.md
Last updated 04/15/2024 monikerRange: 'azureml-api-1 || azureml-api-2'
machine-learning Offline Retrieval Point In Time Join Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/offline-retrieval-point-in-time-join-concepts.md
description: Use a point-in-time join for offline feature retrieval.
Last updated 12/06/2023
machine-learning Quickstart Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md
Title: "Configure Apache Spark jobs in Azure Machine Learning" description: Learn how to submit Apache Spark jobs with Azure Machine Learning.
machine-learning Reference Yaml Component Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-spark.md
Last updated 05/11/2023 # CLI (v2) Spark component YAML schema
machine-learning Reference Yaml Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-data.md
Last updated 04/15/2024 # CLI (v2) data YAML schema
machine-learning Reference Yaml Datastore Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-blob.md
Last updated 04/15/2024 # CLI (v2) Azure Blob datastore YAML schema
machine-learning Reference Yaml Datastore Data Lake Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen1.md
Last updated 04/15/2024 # CLI (v2) Azure Data Lake Gen1 YAML schema
machine-learning Reference Yaml Datastore Data Lake Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen2.md
Last updated 12/15/2023 # CLI (v2) Azure Data Lake Gen2 YAML schema
machine-learning Reference Yaml Datastore Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-files.md
Last updated 12/18/2023 # CLI (v2) Azure Files datastore YAML schema
machine-learning Reference Yaml Feature Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-entity.md
Last updated 05/23/2023
machine-learning Reference Yaml Feature Retrieval Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-retrieval-spec.md
Last updated 05/23/2023
machine-learning Reference Yaml Feature Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-set.md
Last updated 05/23/2023
machine-learning Reference Yaml Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-store.md
Last updated 05/23/2023
machine-learning Reference Yaml Featureset Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-featureset-spec.md
Last updated 05/23/2023
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
Last updated 03/06/2024 # CLI (v2) pipeline job YAML schema
machine-learning Reference Yaml Job Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-spark.md
Last updated 05/11/2023 # CLI (v2) Spark job YAML schema
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
Last updated 03/05/2024 # CLI (v2) sweep job YAML schema
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
Last updated 02/14/2024 # CLI (v2) MLtable YAML schema
machine-learning Reference Yaml Schedule Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule-data-import.md
Last updated 05/25/2023 # CLI (v2) import schedule YAML schema
machine-learning Troubleshooting Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/troubleshooting-managed-feature-store.md
Last updated 10/31/2023
machine-learning Tutorial Develop Feature Set With Custom Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-develop-feature-set-with-custom-source.md
Last updated 11/28/2023
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
Last updated 11/28/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Experiment Train Models Using Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md
Last updated 10/27/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Explore Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-explore-data.md
Last updated 07/05/2023 #Customer intent: As a data scientist, I want to know how to prototype and develop machine learning models on a cloud workstation.
machine-learning Tutorial Feature Store Domain Specific Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-feature-store-domain-specific-language.md
Last updated 03/29/2024 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Last updated 11/28/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Network Isolation For Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-network-isolation-for-feature-store.md
Last updated 03/20/2024 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Online Materialization Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-online-materialization-inference.md
Last updated 11/28/2023 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
The service allows update to Cassandra YAML configuration on a datacenter via th
## Update Cassandra version
> [!IMPORTANT]
-> Cassandra 4.1, 5.0 and Turnkey Version Updates, are in public preview.
+> Cassandra 5.0 and Turnkey Version Updates, are in public preview.
> These features are provided without a service level agreement, and they're not recommended for production workloads.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
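The YAML configuration update mentioned for this service passes the fragment to the CLI base64-encoded. A minimal sketch, assuming the `az managed-cassandra` extension and hypothetical cluster, datacenter, and resource-group names; the encode/decode round trip below is a sanity check you can run locally before touching a real cluster.

```shell
# Two example Cassandra settings; the keys shown are assumptions for illustration.
YAML_FRAGMENT='column_index_size_in_kb: 16
read_request_timeout_in_ms: 10000'

# The CLI expects the fragment as a single base64 string.
FRAGMENT_B64=$(printf '%s' "$YAML_FRAGMENT" | base64 | tr -d '\n')

# Verify the encoding round-trips before sending anything.
DECODED=$(printf '%s' "$FRAGMENT_B64" | base64 --decode)

# Uncomment to apply against a real datacenter (hypothetical names):
#   az managed-cassandra datacenter update \
#     --resource-group my-rg --cluster-name my-cluster \
#     --data-center-name dc1 \
#     --base64-encoded-cassandra-yaml-fragment "$FRAGMENT_B64"
echo "$DECODED"
```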
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
Previously updated : 12/07/2023 Last updated : 07/05/2023 ms.custom: engagement-fy24 # Support matrix for Hyper-V assessment > [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly.
+> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes prerequisites and support requirements when you discover and assess on-premises servers running in a Hyper-V environment for migration to Azure by using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate servers running on Hyper-V to Azure, see the [migration support matrix](migrate-support-matrix-hyper-v-migration.md).
Support | ASP.NET web apps | Java web apps
--- | --- | ---
Stack | VMware, Hyper-V, and physical servers. | VMware, Hyper-V, and physical servers.
Windows servers | Windows Server 2008 R2 and later are supported. | Not supported.
-Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, and Red Hat Enterprise Linux 5/6/7.
+Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, and Red Hat Enterprise Linux 5/6/7.
Web server versions | IIS 7.5 and later. | Tomcat 8 or later.
Required privileges | Local admin. | Root or sudo user.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
Previously updated : 03/13/2024 Last updated : 07/05/2024 # Support matrix for physical server discovery and assessment > [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly.
+> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes prerequisites and support requirements when you assess physical servers for migration to Azure by using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate physical servers to Azure, see the [migration support matrix](migrate-support-matrix-physical-migration.md).
For Linux servers, based on the features you want to perform, you can create a u
Operating system | Versions
--- | ---
Red Hat Enterprise Linux | 5.1, 5.3, 5.11, 6.x, 7.x, 8.x, 9.x
- CentOS | 5.1, 5.9, 5.11, 6.x, 7.x, 8.x
Ubuntu | 12.04, 14.04, 16.04, 18.04, 20.04
Oracle Linux | 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5
SUSE Linux | 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3
Support | ASP.NET web apps | Java web apps
--- | --- | ---
Stack | VMware, Hyper-V, and physical servers. | VMware, Hyper-V, and physical servers.
Windows servers | Windows Server 2008 R2 and later are supported. | Not supported.
-Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, and Red Hat Enterprise Linux 5/6/7.
+Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, and Red Hat Enterprise Linux 5/6/7.
Web server versions | IIS 7.5 and later. | Tomcat 8 or later.
Required privileges | Local admin. | Root or sudo user.
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
Previously updated : 03/18/2024 Last updated : 07/05/2024
The following table summarizes the steps performed automatically for the operati
Learn more about steps for [running a Linux VM on Azure](../virtual-machines/linux/create-upload-generic.md), and get instructions for some of the popular Linux distributions.
-Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, CentOS 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12/11 SP4/11 SP3, Debian 9/8/7, and Oracle 7 when using the agentless method of VMware migration.
+Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements) to install Linux VM agent. Azure Migrate installs the Linux VM agent automatically for RHEL 8.x/7.x/6.x, Ubuntu 14.04/16.04/18.04/19.04/19.10/20.04, SUSE 15 SP0/15 SP1/12/11 SP4/11 SP3, Debian 9/8/7, and Oracle 7 when using the agentless method of VMware migration.
## Check Azure VM requirements
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
Previously updated : 04/03/2024 Last updated : 07/05/2024 # Java web app containerization and migration to Azure App Service
Before you begin this tutorial, you should:
| **Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the Java web applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. **Application servers** | Enable Secure Shell (SSH) connection on port 22 on the server(s) running the Java application(s) to be containerized. <br/>
-**Java web application** | The tool currently supports: <br/><br/> - Applications running on Tomcat 8 or Tomcat 9.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java 7 or Java 8. <br/> If you have version outside of this, find an image that supports your required versions and modify the dockerfile to replace image <br/><br/> The tool currently doesn't support: <br/><br/> - Application servers running multiple Tomcat instances <br/>
+**Java web application** | The tool currently supports: <br/><br/> - Applications running on Tomcat 8 or Tomcat 9.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java 7 or Java 8. <br/> If you have version outside of this, find an image that supports your required versions and modify the dockerfile to replace image <br/><br/> The tool currently doesn't support: <br/><br/> - Application servers running multiple Tomcat instances <br/>
## Prepare an Azure user account
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
Previously updated : 04/03/2024 Last updated : 07/05/2024 # Java web app containerization and migration to Azure Kubernetes Service
Before you begin this tutorial, you should:
| **Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the Java web applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. **Application servers** | - Enable Secure Shell (SSH) connection on port 22 on the server(s) running the Java application(s) to be containerized. <br/>
-**Java web application** | The tool currently supports <br/><br/> - Applications running on Tomcat 8 or Tomcat 9.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java 7 or Java 8. <br/> If you have version outside of this, find an image that supports your required versions and modify the dockerfile to replace image <br/><br/> The tool currently doesn't support <br/><br/> - Applications servers running multiple Tomcat instances <br/>
+**Java web application** | The tool currently supports <br/><br/> - Applications running on Tomcat 8 or Tomcat 9.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java 7 or Java 8. <br/> If you have version outside of this, find an image that supports your required versions and modify the dockerfile to replace image <br/><br/> The tool currently doesn't support <br/><br/> - Applications servers running multiple Tomcat instances <br/>
## Prepare an Azure user account
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
Previously updated : 03/18/2024 Last updated : 07/05/2024
Operating system names provided in the CSV must contain and match. If they don't
**A-H** | **I-R** | **S-T** | **U-Z**
--- | --- | --- | ---
-Asianux 3<br/>Asianux 4<br/>Asianux 5<br/>CentOS<br/>CentOS 4/5<br/>CoreOS Linux<br/>Debian GNU/Linux 4<br/>Debian GNU/Linux 5<br/>Debian GNU/Linux 6<br/>Debian GNU/Linux 7<br/>Debian GNU/Linux 8<br/>FreeBSD | IBM OS/2<br/>macOS X 10<br/>MS-DOS<br/>Novell NetWare 5<br/>Novell NetWare 6<br/>Oracle Linux<br/>Oracle Linux 4/5<br/>Oracle Solaris 10<br/>Oracle Solaris 11<br/>Red Hat Enterprise Linux 2<br/>Red Hat Enterprise Linux 3<br/>Red Hat Enterprise Linux 4<br/>Red Hat Enterprise Linux 5<br/>Red Hat Enterprise Linux 6<br/>Red Hat Enterprise Linux 7<br/>Red Hat Fedora | SCO OpenServer 5<br/>SCO OpenServer 6<br/>SCO UnixWare 7<br/> Serenity Systems eComStation<br/>Serenity Systems eComStation 1<br/>Serenity Systems eComStation 2<br/>Sun Microsystems Solaris 8<br/>Sun Microsystems Solaris 9<br/><br/>SUSE Linux Enterprise 10<br/>SUSE Linux Enterprise 11<br/>SUSE Linux Enterprise 12<br/>SUSE Linux Enterprise 8/9<br/>SUSE Linux Enterprise 11<br/>SUSE openSUSE | Ubuntu Linux<br/>VMware ESXi 4<br/>VMware ESXi 5<br/>VMware ESXi 6<br/>Windows 10<br/>Windows 2000<br/>Windows 3<br/>Windows 7<br/>Windows 8<br/>Windows 95<br/>Windows 98<br/>Windows NT<br/>Windows Server (R) 2008<br/>Windows Server 2003<br/>Windows Server 2008<br/>Windows Server 2008 R2<br/>Windows Server 2012<br/>Windows Server 2012 R2<br/>Windows Server 2016<br/>Windows Server 2019<br/>Windows Server Threshold<br/>Windows Vista<br/>Windows Web Server 2008 R2<br/>Windows XP Professional
+Asianux 3<br/>Asianux 4<br/>Asianux 5<br/>CoreOS Linux<br/>Debian GNU/Linux 4<br/>Debian GNU/Linux 5<br/>Debian GNU/Linux 6<br/>Debian GNU/Linux 7<br/>Debian GNU/Linux 8<br/>FreeBSD | IBM OS/2<br/>macOS X 10<br/>MS-DOS<br/>Novell NetWare 5<br/>Novell NetWare 6<br/>Oracle Linux<br/>Oracle Linux 4/5<br/>Oracle Solaris 10<br/>Oracle Solaris 11<br/>Red Hat Enterprise Linux 2<br/>Red Hat Enterprise Linux 3<br/>Red Hat Enterprise Linux 4<br/>Red Hat Enterprise Linux 5<br/>Red Hat Enterprise Linux 6<br/>Red Hat Enterprise Linux 7<br/>Red Hat Fedora | SCO OpenServer 5<br/>SCO OpenServer 6<br/>SCO UnixWare 7<br/> Serenity Systems eComStation<br/>Serenity Systems eComStation 1<br/>Serenity Systems eComStation 2<br/>Sun Microsystems Solaris 8<br/>Sun Microsystems Solaris 9<br/><br/>SUSE Linux Enterprise 10<br/>SUSE Linux Enterprise 11<br/>SUSE Linux Enterprise 12<br/>SUSE Linux Enterprise 8/9<br/>SUSE Linux Enterprise 11<br/>SUSE openSUSE | Ubuntu Linux<br/>VMware ESXi 4<br/>VMware ESXi 5<br/>VMware ESXi 6<br/>Windows 10<br/>Windows 2000<br/>Windows 3<br/>Windows 7<br/>Windows 8<br/>Windows 95<br/>Windows 98<br/>Windows NT<br/>Windows Server (R) 2008<br/>Windows Server 2003<br/>Windows Server 2008<br/>Windows Server 2008 R2<br/>Windows Server 2012<br/>Windows Server 2012 R2<br/>Windows Server 2016<br/>Windows Server 2019<br/>Windows Server Threshold<br/>Windows Vista<br/>Windows Web Server 2008 R2<br/>Windows XP Professional
## Business case considerations

- If you import servers by using a CSV file and build a business case:
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
Previously updated : 03/20/2024 Last updated : 07/05/2024

# Discover, assess, and migrate Amazon Web Services (AWS) VMs to Azure

> [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly.
+> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This tutorial shows you how to discover, assess, and migrate Amazon Web Services (AWS) virtual machines (VMs) to Azure VMs by using Azure Migrate: Server Assessment and the Migration and modernization tool.
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware.md
ms. Previously updated : 02/27/2024 Last updated : 07/05/2024 zone_pivot_groups: vmware-discovery-requirements
# Support matrix for VMware discovery

> [!CAUTION]
-> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly.
+> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes prerequisites and support requirements for using the [Azure Migrate: Discovery and assessment](../migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool to discover and assess servers in a VMware environment for migration to Azure.
Support | ASP.NET web apps | Java web apps
--- | --- | ---
Stack | VMware, Hyper-V, and physical servers. | VMware, Hyper-V, and physical servers.
Windows servers | Windows Server 2008 R2 and later are supported. | Not supported.
-Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, and Red Hat Enterprise Linux 5/6/7.
+Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, and Red Hat Enterprise Linux 5/6/7.
Web server versions | IIS 7.5 and later. | Tomcat 8 or later.
Protocol | WinRM port 5985 (HTTP) | SSH port 22 (TCP)
Required privileges | Local admin. | Root or sudo user.
Support | Details
--- | ---
Supported servers | You can enable agentless dependency analysis on up to 1,000 servers (across multiple vCenter Servers) discovered per appliance.
Windows servers | Windows Server 2022 <br/> Windows Server 2019<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br /> Windows Server 2008 (32-bit)
-Linux servers | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> CentOS 5.1, 5.9, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04 <br /> OracleLinux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11
+Linux servers | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04 <br /> OracleLinux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11
Server requirements | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers.
vCenter Server account | The read-only account used by Azure Migrate and Modernize for assessment must have privileges for guest operations on VMware VMs.
Windows server access | A user account (local or domain) with administrator permissions on servers.
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
Following described are the ways to review your migration schedule once you rece
- **Review** the private endpoints listed to be migrated. Ensure that they're marked as **Ready to Migrate**. If they're marked as ineligible, select the appropriate subscription and private DNS zone.
- Select the **confirmation checkbox** after performing the listed prerequisite checks for migrating private endpoints.
- Select the **Authenticate** button to authenticate the ARM connection required to migrate the private endpoints from the source to the target server.
+ > [!NOTE]
+ > If the mandatory inputs for migration are not provided at least 7 days before the scheduled migration, the migration will be rescheduled to a later date.
## Prerequisite checks for in-place automigration
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md
Previously updated : 05/20/2024 Last updated : 07/05/2024 #CustomerIntent: As an Azure administrator, I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
The following sections provide details for these steps.
## Install monitoring agents
- > [!NOTE]
- > Connection monitor now supports auto enablement of monitoring extensions for Azure & Non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection monitor.
-
Connection monitor relies on lightweight executable files to run connectivity checks. It supports connectivity checks from both Azure environments and on-premises environments. The executable file that you use depends on whether your VM is hosted on Azure or on-premises.
+> [!NOTE]
+> Monitoring extensions for Azure and non-Azure endpoints are automatically enabled when you use the Azure portal to create a connection monitor.
+
### Agents for Azure virtual machines and virtual machine scale sets

To make Connection monitor recognize your Azure VMs or virtual machine scale sets as monitoring sources, install the Network Watcher Agent virtual machine extension on them. This extension is also known as the *Network Watcher extension*. Azure virtual machines and scale sets require the extension to trigger end-to-end monitoring and other advanced functionality.
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
If your source server is configured with a *private access* virtual network, you
## Post-restore tasks
-After you restore the database, you can perform the following tasks to get your users and applications back up and running:
+After you restore the server, you can perform the following tasks to get your users and applications back up and running:
- If the new server is meant to replace the original server, redirect clients and client applications to the new server. Change the server name of your connection string to point to the new server.
After you restore the database, you can perform the following tasks to get your
- If the source server from which you restored was configured with high availability, and you want to configure the restored server with high availability, you can then follow [these steps](./how-to-manage-high-availability-portal.md).
-- If the source server from which you restored was configured with read replicase, and you want to configure read replicas on the restored server, you can then follow [these steps](./how-to-read-replicas-portal.md).
+- If the source server from which you restored was configured with read replicas, and you want to configure read replicas on the restored server, you can then follow [these steps](./how-to-read-replicas-portal.md).
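The first post-restore task above, redirecting clients to the new server, usually amounts to changing the host in each application's connection string. A minimal Python sketch with hypothetical server and database names:

```python
# Sketch: repoint a libpq-style key=value connection string at a restored
# server. Server and database names below are hypothetical examples.

def repoint_connection_string(conn_str: str, new_host: str) -> str:
    """Replace the host= entry, leaving all other key=value pairs intact."""
    parts = []
    for part in conn_str.split():
        key, _, _ = part.partition("=")
        parts.append(f"host={new_host}" if key == "host" else part)
    return " ".join(parts)

original = "host=source-server.postgres.database.azure.com port=5432 dbname=appdb user=appuser"
restored = repoint_connection_string(original, "restored-server.postgres.database.azure.com")
print(restored)
# host=restored-server.postgres.database.azure.com port=5432 dbname=appdb user=appuser
```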
## Long-term retention (preview)
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
In the SAP workload documentation space, you can find the following areas:
- June 26, 2024: Adapt [Azure Storage types for SAP workload](./planning-guide-storage.md) to the latest features, like snapshot capabilities for Premium SSD v2 and Ultra disk. Adapt ANF to support a mix of NFS and block storage between /hana/data and /hana/log
- June 26, 2024: Fix incorrectly stated memory for some VMs in [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md) and [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md)
+- June 19, 2024: Update the SAP high availability guides to lift the restriction of using floating IP on the NIC secondary IP address in load-balancing scenarios
- May 21, 2024: Update timeouts and added start delay for pacemaker scheduled events in [Set up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) and [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](./high-availability-guide-suse-pacemaker.md).
- April 1, 2024: Reference the considerations section for sizing HANA shared file system in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md), [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md), [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md), and [Azure Files NFS for SAP](planning-guide-storage-azure-files.md)
- March 18, 2024: Added considerations for sizing the HANA shared file system in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
sentinel Microsoft Sentinel Defender Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-sentinel-defender-portal.md
The following table describes the new or improved capabilities available in the
| Capabilities | Description |
| -- | -- |
-| Advanced hunting | Query from a single portal across different data sets to make hunting more efficient and remove the need for context-switching. View and query all data including data from Microsoft security services and Microsoft Sentinel. Use all your existing Microsoft Sentinel workspace content, including queries and functions.<br><br> For more information, see [Advanced hunting in the Microsoft Defender portal](https://go.microsoft.com/fwlink/p/?linkid=2264410). |
+| Advanced hunting | Query from a single portal across different data sets to make hunting more efficient and remove the need for context-switching. Use Copilot for Security to help generate your KQL. View and query all data including data from Microsoft security services and Microsoft Sentinel. Use all your existing Microsoft Sentinel workspace content, including queries and functions.<br><br> For more information, see the following articles:<br>- [Advanced hunting in the Microsoft Defender portal](https://go.microsoft.com/fwlink/p/?linkid=2264410)<br>- [Copilot for Security in advanced hunting](/defender-xdr/advanced-hunting-security-copilot) |
| Attack disrupt | Deploy automatic attack disruption for SAP with both the unified security operations platform and the Microsoft Sentinel solution for SAP applications. For example, contain compromised assets by locking suspicious SAP users in case of a financial process manipulation attack. <br><br>Attack disruption capabilities for SAP are available in the Defender portal only. To use attack disruption for SAP, update your data connector agent version and ensure that the relevant Azure role is assigned to your agent's identity. <br><br> For more information, see [Automatic attack disruption for SAP](sap/deployment-attack-disrupt.md). |
| SOC optimizations | Get high-fidelity and actionable recommendations to help you identify areas to:<br>- Reduce costs <br>- Add security controls<br>- Add missing data<br>SOC optimizations are available in the Defender and Azure portals, are tailored to your environment, and are based on your current coverage and threat landscape. <br><br>For more information, see the following articles:<br>- [Optimize your security operations](soc-optimization/soc-optimization-access.md) <br>- [SOC optimization reference of recommendations](soc-optimization/soc-optimization-reference.md) |
| Unified entities | Entity pages for devices, users, IP addresses, and Azure resources in the Defender portal display information from Microsoft Sentinel and Defender data sources. These entity pages give you an expanded context for your investigations of incidents and alerts in the Defender portal.<br><br>For more information, see [Investigate entities with entity pages in Microsoft Sentinel](/azure/sentinel/entity-pages). |
-| Unified incidents | Manage and investigate security incidents in a single location and from a single queue in the Defender portal. Incidents include:<br>- Data from the breadth of sources<br>- AI analytics tools of security information and event management (SIEM)<br>- Context and mitigation tools offered by extended detection and response (XDR) <br><br> For more information, see [Incident response in the Microsoft Defender portal](/microsoft-365/security/defender/incidents-overview). |
+| Unified incidents | Manage and investigate security incidents in a single location and from a single queue in the Defender portal. Use Copilot for Security to summarize, respond, and report. Incidents include:<br>- Data from the breadth of sources<br>- AI analytics tools of security information and event management (SIEM)<br>- Context and mitigation tools offered by extended detection and response (XDR) <br><br> For more information, see the following articles:<br>- [Incident response in the Microsoft Defender portal](/microsoft-365/security/defender/incidents-overview)<br>- [Investigate Microsoft Sentinel incidents in Copilot for Security](sentinel-security-copilot.md) |
## Capability differences between portals
sentinel Sentinel Security Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-security-copilot.md
+
+ Title: Microsoft Sentinel plugin (Preview) in Copilot for Security
+description: Learn about Microsoft Sentinel capabilities in Copilot for Security. Understand the best prompts to use and how to get timely, accurate results for natural language to KQL.
+keywords: security copilot, Microsoft Defender XDR, embedded experience, incident summary, query assistant, incident report, incident response automated, automatic incident response, summarize incidents, summarize incident report, plugins, Microsoft plugins, preinstalled plugins, Microsoft Copilot for Security, Copilot for Security, Microsoft Defender, Copilot in Sentinel, NL2KQL, natural language to KQL, generate queries
++
+ms.pagetype: security
++
+ms.localizationpriority: medium
+audience: ITPro
+
+appliesto:
+ - Microsoft Sentinel
+ - Copilot for Security
Last updated : 07/04/2024
+#Customer intent: As a SOC administer or analyst, understand how to use Microsoft Sentinel data with Copilot for Security.
++
+# Investigate Microsoft Sentinel incidents in Copilot for Security
+
+Microsoft Copilot for Security is a platform that helps you defend your organization at machine speed and scale. Microsoft Sentinel provides a plugin for Copilot to help analyze incidents and generate hunting queries.
+
+Together with iterative prompts and the other sophisticated Copilot for Security sources you enable, your Microsoft Sentinel incidents and data provide wider visibility into threats and their context for your organization.
+
+For more information on Copilot for Security, see the following articles:
+- [Get started with Microsoft Copilot for Security](/copilot/security/get-started-security-copilot)
+- [Understand authentication in Microsoft Copilot for Security](/copilot/security/authentication)
+
+## Integrate Microsoft Sentinel with Copilot for Security
+
+Microsoft Sentinel provides two plugins to integrate with Copilot for Security:
+- **Microsoft Sentinel (Preview)**
+- **Natural language to KQL for Microsoft Sentinel (Preview)**.
+
+> [!IMPORTANT]
+> The "Microsoft Sentinel" and "Natural Language to KQL for Microsoft Sentinel" plugins are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+### Configure a default Microsoft Sentinel workspace
+
+Increase your prompt accuracy by configuring a Microsoft Sentinel workspace as the default.
+
+1. Navigate to Copilot for Security at [https://securitycopilot.microsoft.com/](https://securitycopilot.microsoft.com/).
+
+1. Open **Sources** :::image type="icon" source="media/sentinel-security-copilot/sources.png"::: in the prompt bar.
+
+1. On the **Manage plugins** page, set the toggle to **On**.
+
+1. Select the gear icon on the Microsoft Sentinel (Preview) plugin.
+
+ :::image type="content" source="media/sentinel-security-copilot/sentinel-plugins.png" alt-text="Screenshot of the personalization selection gear icon for the Microsoft Sentinel plugin.":::
+
+1. Configure the default workspace name.
+
+ :::image type="content" source="media/sentinel-security-copilot/configure-default-sentinel-workspace.png" alt-text="Screenshot of the plugin personalization options for the Microsoft Sentinel plugin.":::
+
+> [!TIP]
+> Specify the workspace in your prompt when it doesn't match the configured default.
+>
+> Example: `What are the top 5 high priority Sentinel incidents in workspace "soc-sentinel-workspace"?`
+
+### Integrate Microsoft Sentinel with Copilot in Defender
+
+Use the unified security operations platform with your Microsoft Sentinel data for an embedded Copilot for Security experience. Microsoft Sentinel's unified incidents in the Defender portal allow Copilot in Defender to use its capabilities with Microsoft Sentinel data.
+
+For example:
+
+- The [SAP (Preview) solution]() is installed in your workspace for Microsoft Sentinel.
+- The near real-time rule [**SAP - (Preview) File Downloaded From a Malicious IP Address**](sap/sap-solution-security-content.md#data-exfiltration) triggers an alert, creating a Microsoft Sentinel incident.
+- [Microsoft Sentinel was added to the unified security operations platform](/defender-xdr/microsoft-sentinel-onboard).
+- Microsoft Sentinel incidents are now unified with Defender XDR incidents.
+- Use Copilot in Microsoft Defender for incident summaries, guided responses, and incident reports.
++
+For more information, see the following resources:
+
+- [Microsoft Sentinel in the Microsoft Defender portal](microsoft-sentinel-defender-portal.md#new-and-improved-capabilities).
+- [Copilot in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)
+
+### Integrate Microsoft Sentinel with Copilot for Security in advanced hunting
+
+The Natural language to KQL for Microsoft Sentinel (Preview) plugin generates and runs KQL hunting queries using Microsoft Sentinel data. This capability is available in the standalone experience and the advanced hunting section of the Microsoft Defender portal.
+
+> [!NOTE]
+> In the unified Microsoft Defender portal, you can prompt Copilot for Security to generate advanced hunting queries for both Defender XDR and Microsoft Sentinel tables. Not all Microsoft Sentinel tables are currently supported, but support for these tables can be expected in the future.
+
+For more information, see [Copilot for Security in advanced hunting](/defender-xdr/advanced-hunting-security-copilot).
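As an illustration of the capability described above, a prompt such as "show me the top failed sign-ins from the last day" could translate into KQL along these lines. This is a hand-written sketch against the standard `SigninLogs` table, not actual plugin output:

```kusto
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"        // nonzero result types indicate failed sign-ins
| summarize FailedAttempts = count() by UserPrincipalName, IPAddress
| top 10 by FailedAttempts desc
```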
+
+## Improve your Microsoft Sentinel prompts
+
+Consider the **Microsoft Sentinel incident investigation** promptbook as a starting point for creating effective prompts. This promptbook delivers a report about a specific incident, along with related alerts, reputation scores, users, and devices.
+
+| Guidance | Prompt |
+|||
+|Nudge Copilot to provide human readable information instead of responding with object IDs. |`Show me Sentinel incidents that were closed as a false positive. Supply the Incident number, Incident Title, and the time they were created.`|
+|Copilot knows who you are. Use the "me" pronoun to find incidents related to you. The following prompt targets incidents assigned to you. |`What Sentinel incidents created in the last 24 hours are assigned to me? List them with highest priority incidents at the top.` |
+|When you narrow a prompt response down to a single incident, Copilot knows the context.|`Tell me about the entities associated with that incident.`|
+|Copilot is good at summarizing. Describe a specific audience you want the prompts and responses summarized for. |`Write an executive report summarizing this investigation. It should be suited for a nontechnical audience.`|
+
+For more prompt guidance and samples, see the following resources:
+
+- [Using promptbooks](/copilot/security/using-promptbooks)
+- [Prompting in Microsoft Copilot for Security](/copilot/security/prompting-security-copilot)
+- [Rod Trent's Copilot for Security Prompt Library](https://github.com/rod-trent/Copilot-for-Security/tree/main/Prompts)
+
+## Related articles
+
+- [Microsoft Copilot in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)
+- [Microsoft Defender XDR integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md)
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
description: Learn how to deploy Azure File Sync storage sync service using the
Previously updated : 06/03/2024 Last updated : 07/05/2024 ms.devlang: azurecli
storage Geo Redundant Storage For Large File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md
description: Azure Files geo-redundancy for large file shares significantly impr
Previously updated : 05/29/2024 Last updated : 07/05/2024
If the primary region becomes unavailable for any reason, you can [initiate an a
## New limits for geo-redundant shares
-In regions that are now generally available, all standard SMB file shares that are geo-redundant (both new and existing) now support up to 100TiB capacity and have higher performance limits:
+All standard SMB file shares that are geo-redundant (both new and existing) now support up to 100 TiB capacity and have higher performance limits:
| **Attribute** | **Previous limit** | **New limit** |
|--|--|--|
In regions that are now generally available, all standard SMB file shares that a
| Max throughput per share | Up to 60 MiB/s | Up to [storage account limits](./storage-files-scale-targets.md#storage-account-scale-targets) (150x increase) |

## Region availability
-Azure Files geo-redundancy for large file shares is generally available in all regions except China East 2 and China North 2, which are still in preview.
+Azure Files geo-redundancy for large file shares is generally available in all regions.
## Pricing

Pricing is based on the standard file share tier and redundancy option configured for the storage account. To learn more, see [Azure Files Pricing](https://azure.microsoft.com/pricing/details/storage/files/).
-## Register for the feature
-
-To get started, register for the feature using Azure portal or PowerShell. This step is required for regions that are in preview and is no longer required for regions that are generally available.
-
-# [Azure portal](#tab/portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
-2. Search for and select **Preview features**.
-3. Click the **Type** filter and select **Microsoft.Storage**.
-4. Select **Azure Files geo-redundancy for large file shares** and click **Register**.
-
-# [Azure PowerShell](#tab/powershell)
-
-To register your subscription using Azure PowerShell, run the following commands. Replace `<your-subscription-id>` and `<your-tenant-id>` with your own values.
-
-```azurepowershell-interactive
-Connect-AzAccount -SubscriptionId <your-subscription-id> -TenantId <your-tenant-id>
-Register-AzProviderFeature -FeatureName AllowLfsForGRS -ProviderNamespace Microsoft.Storage
-```
--

## Configure geo-redundancy and 100 TiB capacity for standard SMB file shares
-In regions that are now generally available:
-- All standard SMB file shares (new and existing) support up to 100 TiB capacity and you can select any redundancy option supported in the region. Since all standard SMB file shares now support up to 100 TiB capacity, the large file share (LargeFileSharesState) property on storage accounts is no longer used and will be removed in the future.
+In all regions that support geo-redundancy:
+- Standard SMB file shares (new and existing) support up to 100 TiB capacity and you can select any redundancy option supported in the region. Since all standard SMB file shares now support up to 100 TiB capacity, the large file share (LargeFileSharesState) property on storage accounts is no longer used and will be removed in the future.
- If you have existing file shares, you can now increase the file share size up to 100 TiB (share quotas aren't automatically increased).
- Performance limits (IOPS and throughput) for your file shares have automatically increased to the storage account limits.
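Because share quotas aren't increased automatically, you can raise an existing share's quota explicitly. A sketch using the Az.Storage `Update-AzRmStorageShare` cmdlet (resource names are hypothetical placeholders; verify the parameters against your installed module version before relying on this):

```azurepowershell
# Hypothetical names: raise an existing standard file share's quota to 100 TiB (102,400 GiB).
Update-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "myfileshare" `
    -QuotaGiB 102400
```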
update-manager Guidance Migration Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md
description: Patching guidance overview for Microsoft Configuration Manager to A
Previously updated : 04/19/2024 Last updated : 07/05/2024
Deploy software updates (install patches) | Provides three modes of deploying up
As a first step in an MCM user's journey toward Azure Update Manager, you need to enable Azure Update Manager on your existing MCM-managed servers (that is, ensure that Azure Update Manager and MCM coexistence is achieved). The following sections address a few challenges that you might encounter in this first step.
+### Prerequisites for Azure Update Manager and MCM co-existence
+
+- Ensure that automatic updates are disabled on the machine. For more information, see [Manage additional Windows Update settings](https://learn.microsoft.com/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry).
+
+ Ensure that the registry path *HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU, NoAutoUpdate* is set to 1.
+
+- Azure Update Manager can get updates from a WSUS server. For this, ensure that you configure the WSUS server as part of SCCM.
+
+ - Ensure that the WSUS server has enough space.
+ - Ensure that you update the language options for downloading packages in the WSUS configuration. We recommend that you select only the languages that are required. For more information, see [Step 2 - Configure WSUS](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/deploy/2-configure-wsus#to-configure-wsus).
+ - Ensure that you create a rule in WSUS for automatically approving updates so that the applicable packages are downloaded to the WSUS server and Azure Update Manager can get the updates from it.
+ - Select the classifications you want based on your requirements, or keep them the same as those selected in SCCM.
+ - Select products based on your requirements, or keep them the same as those selected in SCCM.
+ - To start, create a test computer group and apply this rule to it to test these changes.
+ - After testing with the test group, you can expand the rule to all computer groups.
+ - Create an exclusion computer group in WSUS if needed.
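For the automatic-updates prerequisite at the top of this list, the registry value can be set and verified from an elevated command prompt on the managed Windows server. A sketch using the key path given earlier in this list:

```cmd
:: Set the policy value that disables automatic updates, then verify it.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoUpdate /t REG_DWORD /d 1 /f
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoUpdate
```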
+
### Overview of current MCM setup

The MCM client uses the WSUS server to scan for first-party updates; therefore, you have a WSUS server configured as part of the initial setup.
update-manager Guidance Patching Sql Server Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-patching-sql-server-azure-vm.md
description: An overview on patching guidance for SQL Server on Azure VMs using
Previously updated : 04/15/2024 Last updated : 07/06/2024
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article provides the details on how to integrate [Azure Update Manager](overview.md) with your [SQL virtual machines](/azure/azure-sql/virtual-machines/windows/manage-sql-vm-portal) resource for your [SQL Server on Azure Virtual Machines (VMs)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)
+This article provides the details on how to integrate [Azure Update Manager](overview.md) with your [SQL virtual machines](/azure/azure-sql/virtual-machines/windows/manage-sql-vm-portal) resource for your [SQL Server on Azure Virtual Machines (VMs)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
-> [!NOTE]
-> This feature isn't available in Azure US Government and Azure China operated by 21 Vianet.
## Overview
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Previously updated : 05/13/2024 Last updated : 07/05/2024

# What's new in Azure Update Manager

[Azure Update Manager](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Azure Update Manager.
+## June 2024
+
+### New region support
+
+General Availability: Azure Update Manager is now supported in US Government and Microsoft Azure operated by 21Vianet. [Learn more](support-matrix.md#supported-regions).
+
## May 2024

### Migration portal experience and scripts: Generally Available
virtual-machines Network Watcher Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-update.md
Previously updated : 03/17/2024 Last updated : 07/05/2024
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 04/17/2024 Last updated : 07/05/2024
-# Tutorial: Create and manage a VPN gateway by using the Azure portal
+# Tutorial: Create and manage a VPN gateway using the Azure portal
-This tutorial helps you create and manage a virtual network gateway (VPN gateway) by using the Azure portal. The VPN gateway is just one part of a connection architecture to help you securely access resources within a virtual network.
+This tutorial helps you create and manage a virtual network gateway (VPN gateway) using the Azure portal. The VPN gateway is one part of the connection architecture that helps you securely access resources within a virtual network.
:::image type="content" source="./media/tutorial-create-gateway-portal/gateway-diagram.png" alt-text="Diagram that shows a virtual network and a VPN gateway." lightbox="./media/tutorial-create-gateway-portal/gateway-diagram-expand.png":::
You need an Azure account with an active subscription. If you don't have one, [c
## <a name="CreatVNet"></a>Create a virtual network
-Create a virtual network by using the following values:
+Create a virtual network using the following example values:
* **Resource group:** TestRG1
* **Name:** VNet1
-* **Region:** (US) East US
+* **Region:** (US) East US (or region of your choosing)
* **IPv4 address space:** 10.1.0.0/16
* **Subnet name:** FrontEnd
* **Subnet address space:** 10.1.0.0/24

[!INCLUDE [Create a VNet](../../includes/vpn-gateway-basic-vnet-rm-portal-include.md)]
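If you'd rather script these values than click through the portal, the same virtual network can be sketched with the Azure CLI. This isn't part of the tutorial: it assumes the Azure CLI is installed and you're signed in (`az login`), and uses `eastus` as a stand-in region. The commands are wrapped in a function so nothing runs until you call it:

```shell
# Sketch only: Azure CLI equivalent of the portal values above.
# Assumes `az login` has been done; "eastus" is a stand-in region.
create_vnet() {
  # Create the resource group, then the virtual network with one subnet.
  az group create --name TestRG1 --location eastus
  az network vnet create \
    --resource-group TestRG1 \
    --name VNet1 \
    --location eastus \
    --address-prefix 10.1.0.0/16 \
    --subnet-name FrontEnd \
    --subnet-prefix 10.1.0.0/24
}
# create_vnet   # uncomment to run against your subscription
```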
-After you create your virtual network, you can optionally configure Azure DDoS Protection. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. For more information about Azure DDoS Protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
+After you create your virtual network, you can optionally configure Azure DDoS Protection. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. For more information about Azure DDoS Protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
## Create a gateway subnet
The virtual network gateway requires a specific subnet named **GatewaySubnet**.
[!INCLUDE [Create gateway subnet](../../includes/vpn-gateway-create-gateway-subnet-portal-include.md)]

## <a name="VNetGateway"></a>Create a VPN gateway
-In this step, you create the virtual network gateway (VPN gateway) for your virtual network. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
+In this section, you create the virtual network gateway (VPN gateway) for your virtual network. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
-Create a virtual network gateway by using the following values:
+Create a gateway using the following values:
* **Name**: VNet1GW
-* **Region**: East US
* **Gateway type**: VPN
-* **SKU**: VpnGw2
+* **SKU**: VpnGw2AZ
* **Generation**: Generation 2
* **Virtual network**: VNet1
* **Gateway subnet address range**: 10.1.255.0/27
* **Public IP address**: Create new
-* **Public IP address name**: VNet1GWpip
-
-For this exercise, you won't select a zone-redundant SKU. If you want to learn about zone-redundant SKUs, see [About zone-redundant virtual network gateways](about-zone-redundant-vnet-gateways.md). Additionally, these steps aren't intended to configure an active-active gateway. For more information, see [Configure active-active gateways](active-active-portal.md).
+* **Public IP address name:** VNet1GWpip1
+* **Public IP address SKU:** Standard
+* **Assignment:** Static
+* **Second Public IP address name:** VNet1GWpip2
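For reference, the gateway values above map to an Azure CLI sketch. This is not part of the tutorial: it assumes the resource group, virtual network, and **GatewaySubnet** from the previous sections already exist, that you're signed in with `az login`, and that `eastus` stands in for your region:

```shell
# Sketch only: CLI equivalent of the gateway values above.
# Assumes TestRG1, VNet1, and GatewaySubnet already exist.
create_gateway() {
  # The AZ SKUs expect Standard-SKU, statically allocated public IPs.
  az network public-ip create --resource-group TestRG1 --name VNet1GWpip1 \
    --sku Standard --allocation-method Static
  az network public-ip create --resource-group TestRG1 --name VNet1GWpip2 \
    --sku Standard --allocation-method Static
  az network vnet-gateway create \
    --resource-group TestRG1 \
    --name VNet1GW \
    --location eastus \
    --vnet VNet1 \
    --public-ip-addresses VNet1GWpip1 VNet1GWpip2 \
    --gateway-type Vpn \
    --vpn-type RouteBased \
    --sku VpnGw2AZ \
    --vpn-gateway-generation Generation2 \
    --no-wait
  # --no-wait returns immediately; creation still takes 45+ minutes.
}
# create_gateway   # uncomment to run against your subscription
```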
A gateway can take 45 minutes or more to fully create and deploy. You can see the deployment status on the **Overview** page for your gateway. After the gateway is created, you can view the IP address assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
-## <a name="view"></a>View the public IP address
-
-You can view the gateway public IP address on the **Overview** page for your gateway. The public IP address is used when you configure a site-to-site connection to your VPN gateway.
+## <a name="view"></a>View public IP address
+To view the public IP addresses associated with your virtual network gateway, navigate to your gateway in the portal.
-To see more information about the public IP address object, select the name/IP address link next to **Public IP address**.
+1. On the portal page for your virtual network gateway, under **Settings**, open the **Properties** page.
+1. To view more information about the IP address object, select the associated IP address link.
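The same information can be read from the CLI; a minimal sketch using the tutorial's example names (assumes a signed-in Azure CLI, and note that the exact JMESPath property casing can vary by CLI version):

```shell
# Sketch only: read the gateway's public IP addresses from the CLI.
show_gateway_ips() {
  # IDs of the public IP resources attached to the gateway.
  az network vnet-gateway show --resource-group TestRG1 --name VNet1GW \
    --query "ipConfigurations[].publicIpAddress.id" --output tsv
  # The address itself, per public IP resource.
  az network public-ip show --resource-group TestRG1 --name VNet1GWpip1 \
    --query ipAddress --output tsv
}
# show_gateway_ips   # uncomment to run against your subscription
```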
## <a name="resize"></a>Resize a gateway SKU

There are specific rules for resizing versus changing a gateway SKU. In this section, you resize the SKU. For more information, see [Resize or change gateway SKUs](about-gateway-skus.md#resizechange).
+The basic steps are:
+
+1. Go to the **Configuration** page for your virtual network gateway.
+1. On the right side of the page, select the dropdown arrow to show the list of available SKUs. The list contains only the SKUs that your current SKU can be resized to. If the SKU you want isn't listed, you must change to a new SKU instead of resizing.
+1. Select the SKU from the dropdown list and save your changes.
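The steps above can also be sketched from the CLI. The target SKU name here is an assumption for illustration; the command only succeeds if that SKU is a valid resize target for the gateway's current SKU:

```shell
# Sketch only: resize within the same SKU family from the CLI.
resize_gateway() {
  # VpnGw3AZ is an assumed example target; it must be in the
  # allowed resize list for the gateway's current SKU.
  az network vnet-gateway update --resource-group TestRG1 \
    --name VNet1GW --sku VpnGw3AZ
}
# resize_gateway   # uncomment to run against your subscription
```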
## <a name="reset"></a>Reset a gateway
+Gateway resets behave differently, depending on your gateway configuration. For more information, see [Reset a VPN gateway or a connection](reset-gateway.md).
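A reset can also be issued from the CLI; a minimal sketch using the tutorial's example names (assumes a signed-in Azure CLI):

```shell
# Sketch only: request a gateway reset from the CLI.
reset_gateway() {
  az network vnet-gateway reset --resource-group TestRG1 --name VNet1GW
}
# reset_gateway   # uncomment to run against your subscription
```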
+
+The basic steps are:
[!INCLUDE [reset a gateway](../../includes/vpn-gateway-reset-gw-portal-include.md)]

## Clean up resources
If you're not going to continue to use this application or go to the next tutorial, delete these resources.

1. Enter the name of your resource group in the **Search** box at the top of the portal and select it from the search results.
1. Select **Delete resource group**.
1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.

## Next steps
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
After you create your virtual network, you can optionally configure Azure DDoS P
[!INCLUDE [Create gateway subnet](../../includes/vpn-gateway-create-gateway-subnet-portal-include.md)]

## <a name="VNetGateway"></a>Create a VPN gateway

In this step, you create the virtual network gateway for your virtual network. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
Create a virtual network gateway (VPN gateway) by using the following values:
[!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)]
-You can see the deployment status on the **Overview** page for your gateway. A gateway can take up to 45 minutes to fully create and deploy. After the gateway is created, you can view the IP address that was assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+A gateway can take 45 minutes or more to fully create and deploy. You can see the deployment status on the **Overview** page for your gateway. After the gateway is created, you can view the IP address assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]

### <a name="view"></a>View the public IP address
-You can view the gateway public IP address on the **Overview** page for your gateway.
-
+To view the public IP addresses associated with your virtual network gateway, navigate to your gateway in the portal.
-To see more information about the public IP address object, select the name/IP address link next to **Public IP address**.
+1. On the portal page for your virtual network gateway, under **Settings**, open the **Properties** page.
+1. To view more information about the IP address object, select the associated IP address link.
## <a name="LocalNetworkGateway"></a>Create a local network gateway