Updates from: 11/08/2024 02:04:30
Service Microsoft Docs article Related commit history on GitHub Change details
api-center Set Up Api Center Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-azure-cli.md
-ms.daate: 06/27/2024
Last updated : 06/27/2024
app-service Configure Gateway Required Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-gateway-required-vnet-integration.md
You can't use gateway-required virtual network integration:
To create a gateway:
-1. [Create the VPN gateway and subnet](../vpn-gateway/point-to-site-certificate-gateway.md#creategw). Select a route-based VPN type.
+1. [Create the VPN gateway and subnet](../vpn-gateway/tutorial-create-gateway-portal.md). Select a route-based VPN type.
1. [Set the point-to-site addresses](../vpn-gateway/point-to-site-certificate-gateway.md#addresspool). If the gateway isn't in the basic SKU, then IKEV2 must be disabled in the point-to-site configuration and SSTP must be selected. The point-to-site address space must be in the RFC 1918 address blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
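The same gateway setup can be sketched with the Azure CLI. The resource names, SKU, and address pool below are placeholder assumptions; the linked portal steps remain the reference procedure.

```bash
# Sketch only: placeholder names and ranges; adjust for your environment.
az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyGatewayIp \
  --sku Standard

# Route-based VPN gateway (required for gateway-required integration).
az network vnet-gateway create \
  --resource-group MyResourceGroup \
  --name MyVpnGateway \
  --vnet MyVNet \
  --public-ip-address MyGatewayIp \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1

# Point-to-site settings: for non-basic SKUs use SSTP (not IKEv2), and keep the
# client address pool inside 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
az network vnet-gateway update \
  --resource-group MyResourceGroup \
  --name MyVpnGateway \
  --address-prefixes 172.16.201.0/24 \
  --client-protocol SSTP
```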
azure-functions Durable Functions Azure Storage Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-azure-storage-provider.md
If you are not seeing the throughput numbers you expect and your CPU and memory
> [!TIP] > In some cases you can significantly increase the throughput of external events, activity fan-in, and entity operations by increasing the value of the `controlQueueBufferThreshold` setting in **host.json**. Increasing this value beyond its default causes the Durable Task Framework storage provider to use more memory to prefetch these events more aggressively, reducing delays associated with dequeueing messages from the Azure Storage control queues. For more information, see the [host.json](durable-functions-bindings.md#host-json) reference documentation.
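As a minimal sketch, the setting sits in host.json under the Azure Storage provider section (Durable Functions 2.x layout assumed; the value shown is illustrative, not a recommendation):

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "controlQueueBufferThreshold": 512
      }
    }
  }
}
```

Raising the threshold trades additional memory per worker for more aggressive prefetching, so increase it gradually while watching memory usage.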
+### Flex Consumption Plan
+The [Flex Consumption plan](../flex-consumption-plan.md) is an Azure Functions hosting plan that provides many of the benefits of the Consumption plan, including a serverless billing model, while also adding useful features, such as private networking, instance memory size selection, and full support for managed identity authentication.
+
+Azure Storage is currently the only supported [storage provider](durable-functions-storage-providers.md) for Durable Functions when hosted in the Flex Consumption plan.
+
+You should follow these performance recommendations when hosting Durable Functions in the Flex Consumption plan:
+
+* Set the [always ready instance count](../flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. This ensures that there is always one instance ready to handle Durable Functions related requests, thus reducing the application's cold start.
+* Reduce the [queue polling interval](durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less. Since this plan type is more sensitive to queue polling delays, lowering the polling interval will help increase the frequency of polling operations, thus ensuring requests are handled faster. However, more frequent polling operations will lead to a higher Azure Storage account cost.
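The queue polling recommendation maps to a host.json setting. A minimal sketch, assuming the Durable Functions 2.x Azure Storage provider layout, with the 10-second value from the bullet above:

```json
{
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "maxQueuePollingInterval": "00:00:10"
      }
    }
  }
}
```

The always-ready instance count for the `durable` group is configured on the app's scale settings, as described in the linked how-to article.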
+ ### High throughput processing
The architecture of the Azure Storage backend puts certain limitations on the maximum theoretical performance and scalability of Durable Functions. If your testing shows that Durable Functions on Azure Storage won't meet your throughput requirements, you should consider instead using the [Netherite storage provider for Durable Functions](durable-functions-storage-providers.md#netherite).
azure-functions Durable Functions Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-entities.md
Entity functions define operations for reading and updating small pieces of stat
Entities provide a means for scaling out applications by distributing the work across many entities, each with a modestly sized state.

::: zone pivot="csharp,javascript,python"

> [!NOTE]
-> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET in-proc, .NET isolated worker, JavaScript, and Python, but not in PowerShell or Java.
+> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET in-proc, .NET isolated worker, JavaScript, and Python, but not in PowerShell or Java. Furthermore, entity functions for .NET Isolated are supported when using the Azure Storage or Netherite state providers, but not when using the MSSQL state provider.
::: zone-end

::: zone pivot="powershell,java"

>[!IMPORTANT]
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
There are many significant tradeoffs between the various supported storage provi
| Maximum throughput | Moderate | Very high | Moderate |
| Maximum orchestration/entity scale-out (nodes) | 16 | 32 | N/A |
| Maximum activity scale-out (nodes) | N/A | 32 | N/A |
+| Durable Entities support | ✅ Fully supported | ✅ Fully supported | ⚠️ Supported except when using .NET Isolated |
| [KEDA 2.0](https://keda.sh/) scaling support<br/>([more information](../functions-kubernetes-keda.md)) | ❌ Not supported | ❌ Not supported | ✅ Supported using the [MSSQL scaler](https://keda.sh/docs/scalers/mssql/) ([more information](https://microsoft.github.io/durabletask-mssql/#/scaling)) |
| Support for [extension bundles](../functions-bindings-register.md#extension-bundles) (recommended for non-.NET apps) | ✅ Fully supported | ✅ Fully supported | ✅ Fully supported |
| Price-performance configurable? | ❌ No | ✅ Yes (Event Hubs TUs and CUs) | ✅ Yes (SQL vCPUs) |
| Disconnected environment support | ❌ Azure connectivity required | ❌ Azure connectivity required | ✅ Fully supported |
| Identity-based connections | ✅ Fully supported | ❌ Not supported | ⚠️ Requires runtime-driven scaling |
+| [Flex Consumption plan](../flex-consumption-plan.md) | ✅ Fully supported ([see notes](./durable-functions-azure-storage-provider.md#flex-consumption-plan)) |❌ Not supported | ❌ Not supported |
## Next steps
azure-functions Quickstart Mssql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-mssql.md
Durable Functions supports several [storage providers](durable-functions-storage
> [!NOTE] >
-> - The MSSQL back end was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub data so that users get the benefits of a modern, enterprise-grade database management system (DBMS) infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers overview](durable-functions-storage-providers.md).
+> - The MSSQL backend was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub data so that users get the benefits of a modern, enterprise-grade database management system (DBMS) infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers overview](durable-functions-storage-providers.md).
> > - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the MSSQL back end. Similarly, the task hub contents that are created by using MSSQL can't be preserved if you switch to a different storage provider.
+>
+> - The MSSQL backend currently isn't supported by Durable Functions when running on the [Flex Consumption plan](../flex-consumption-plan.md).
## Prerequisites
azure-functions Quickstart Netherite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-netherite.md
Durable Functions offers several [storage providers](durable-functions-storage-p
> - Netherite was designed and developed by [Microsoft Research](https://www.microsoft.com/research) for [high throughput](https://microsoft.github.io/durabletask-netherite/#/scenarios) scenarios. In some [benchmarks](https://microsoft.github.io/durabletask-netherite/#/throughput?id=multi-node-throughput), throughput increased by more than an order of magnitude compared to the default Azure Storage provider. To learn more about when to use the Netherite storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation.
>
> - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the Netherite back end. Similarly, the task hub contents that are created by using Netherite can't be preserved if you switch to a different storage provider.
+>
+> - The Netherite backend currently isn't supported by Durable Functions when running on the [Flex Consumption plan](../flex-consumption-plan.md).
## Prerequisites
azure-functions Flex Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md
Title: Azure Functions Flex Consumption plan hosting
description: Running your function code in the Azure Functions Flex Consumption plan provides virtual network integration, dynamic scale (to zero), and reduced cold starts.
Previously updated : 08/22/2024
Last updated : 11/01/2024
# Customer intent: As a developer, I want to understand the benefits of using the Flex Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
In Flex Consumption, many of the standard application settings and site configur
Keep these other considerations in mind when using the Flex Consumption plan during the current preview:

+ **Host**: There is a 30-second timeout for app initialization. If your function app takes longer than 30 seconds to start, you'll see gRPC-related System.TimeoutException entries. This timeout will be configurable, and a clearer exception will be implemented as part of [this host work item](https://github.com/Azure/azure-functions-host/issues/10482).
-+ **Durable Functions**: Due to the per function scaling nature of Flex Consumption, to ensure the best performance for Durable Functions we recommend setting the [Always Ready instance count](./flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. Also, with the Azure Storage provider, consider reducing the [queue polling interval](./durable/durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less. Only Azure Storage is supported as a backend storage providers for Flex Consumption hosted durable functions.
++ **Durable Functions**: Azure Storage is currently the only supported [storage provider](./durable/durable-functions-storage-providers.md) for Durable Functions when hosted in the Flex Consumption plan. See [recommendations](./durable/durable-functions-azure-storage-provider.md#flex-consumption-plan) when hosting Durable Functions in the Flex Consumption plan. + **VNet Integration** Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`. + **Triggers**: All triggers are fully supported except for Kafka and Azure SQL triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version. + **Regions**: Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
Last updated 07/14/2023
# Azure guidance for secure isolation
-Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides you with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help you increase efficiency and unlock insights into your operations and performance.
+Microsoft Azure is a hyperscale public multitenant cloud services platform that provides you with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help you increase efficiency and unlock insights into your operations and performance.
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
+A multitenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multitenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
-Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:
+Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multitenant, cryptographically certain, logically isolated cloud services using a common set of principles:
1. User access controls with authentication and identity separation
2. Compute isolation for processing
You can also use the [Azure Key Vault solution in Azure Monitor](/azure/key-vaul
> For a comprehensive list of Azure Key Vault security recommendations, see **[Azure security baseline for Key Vault](/security/benchmark/azure/baselines/key-vault-security-baseline)**.

#### Vault
-**[Vaults](/azure/key-vault/general/overview)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](/azure/key-vault/general/about-keys-secrets-certificates). They can be either software-protected (standard tier) or HSM-protected (premium tier). For a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. If you require extra assurances, you can choose to safeguard your secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication.
+**[Vaults](/azure/key-vault/general/overview)** provide a multitenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](/azure/key-vault/general/about-keys-secrets-certificates). They can be either software-protected (standard tier) or HSM-protected (premium tier). For a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. If you require extra assurances, you can choose to safeguard your secrets, keys, and certificates in vaults protected by multitenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication.
-Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs, and use them to encrypt data at rest for [many Azure services](../security/fundamentals/encryption-models.md#services-supporting-customer-managed-keys-cmks). As mentioned previously, you can [import or generate encryption keys](/azure/key-vault/keys/hsm-protected-keys) in HSMs ensuring that keys never leave the HSM boundary to support *bring your own key (BYOK)* scenarios.
+Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs, and use them to encrypt data at rest for [many Azure services](../security/fundamentals/encryption-customer-managed-keys-support.md). As mentioned previously, you can [import or generate encryption keys](/azure/key-vault/keys/hsm-protected-keys) in HSMs ensuring that keys never leave the HSM boundary to support *bring your own key (BYOK)* scenarios.
Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling you to enroll and automatically renew certificates from supported public Certificate Authorities. Key Vault certificates support provides for the management of your X.509 certificates, which are built on top of keys and provide an automated renewal feature. Certificate owner can [create a certificate](/azure/key-vault/certificates/create-certificate) through Azure Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
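A hedged Azure CLI sketch of these vault capabilities; the vault, key, and certificate names are placeholders, and the premium SKU is chosen only to show the HSM-protected tier:

```bash
# Sketch only: premium (HSM-backed) vault with an HSM-protected key and a
# self-signed certificate created from the default policy. Names are placeholders.
az keyvault create \
  --resource-group MyResourceGroup \
  --name my-contoso-vault \
  --location eastus \
  --sku premium

az keyvault key create \
  --vault-name my-contoso-vault \
  --name my-hsm-key \
  --protection hsm

az keyvault certificate create \
  --vault-name my-contoso-vault \
  --name my-tls-cert \
  --policy "$(az keyvault certificate get-default-policy)"
```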
When a managed HSM is created, the requestor also provides a list of data plane
> [!IMPORTANT]
> Unlike with key vaults, granting your users management plane access to a managed HSM doesn't grant them any access to the data plane to access keys or data plane role assignments (managed HSM local RBAC). This isolation is implemented by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
-As mentioned previously, managed HSM supports [importing keys generated](/azure/key-vault/managed-hsm/hsm-protected-keys-byok) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as *bring your own key (BYOK)* scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](/azure/azure-sql/database/transparent-data-encryption-byok-overview), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others. For a more complete list of Azure services that work with Managed HSM, see [Data encryption models](../security/fundamentals/encryption-models.md#services-supporting-customer-managed-keys-cmks).
+As mentioned previously, managed HSM supports [importing keys generated](/azure/key-vault/managed-hsm/hsm-protected-keys-byok) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as *bring your own key (BYOK)* scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](/azure/azure-sql/database/transparent-data-encryption-byok-overview), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others. For a more complete list of Azure services that work with Managed HSM, see [Data encryption models](../security/fundamentals/encryption-customer-managed-keys-support.md).
-Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.
+Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multitenant vault or single-tenant managed HSM.
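To illustrate that the management surface is the same, provisioning a single-tenant managed HSM uses the same `az keyvault` command group; the values below are placeholders and activation of the security domain is omitted:

```bash
# Sketch only: placeholder names; the initial administrator object ID and the
# soft-delete retention period are required at creation time.
az keyvault create \
  --resource-group MyResourceGroup \
  --hsm-name my-contoso-mhsm \
  --location eastus \
  --administrators <ADMIN_OBJECT_ID> \
  --retention-days 28
```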
## Compute isolation

Microsoft Azure compute platform is based on [machine virtualization](../security/fundamentals/isolation-choices.md). This approach means that your code – whether it's deployed in a PaaS worker role or an IaaS virtual machine – executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there's a [Type 1 Hypervisor](https://en.wikipedia.org/wiki/Hypervisor) that runs directly over the hardware and divides the node into a variable number of Guest virtual machines (VMs), as shown in Figure 4. Each node has one special Host VM, also known as Root VM, which runs the Host OS – a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure [compute isolation](../security/fundamentals/isolation-choices.md#compute-isolation), as described in Microsoft online documentation.
The critical Hypervisor isolation is provided through:
- Defense-in-depth exploits mitigations
- Strong security assurance processes
-These technologies are described in the rest of this section. **They enable Azure Hypervisor to offer strong security assurances for tenant separation in a multi-tenant cloud.**
+These technologies are described in the rest of this section. **They enable Azure Hypervisor to offer strong security assurances for tenant separation in a multitenant cloud.**
##### *Strongly defined security boundaries*

Your code executes in a Hypervisor VM and benefits from Hypervisor enforced security boundaries, as shown in Figure 7. Azure Hypervisor is based on [Microsoft Hyper-V](/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) technology. It divides an Azure node into a variable number of Guest VMs that have separate address spaces where they can load an operating system (OS) and applications operating in parallel to the Host OS that executes in the Root partition of the node.
Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce
- I/O devices that are being accessed directly by Guest partitions.
- **CPU context** – the Hypervisor uses virtualization extensions in the CPU to restrict privileges and CPU context that can be accessed while a Guest partition is running. The Hypervisor also uses these facilities to save and restore state when sharing CPUs between multiple partitions to ensure isolation of CPU state between the partitions.
-The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multi-tenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, and secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in section titled *[Exploitation of vulnerabilities in virtualization technologies](#exploitation-of-vulnerabilities-in-virtualization-technologies)* later in the article, the Azure Hypervisor has been architected to provide robust isolation directly within the hypervisor that helps mitigate many sophisticated side channel attacks.
+The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multitenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, and secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in section titled *[Exploitation of vulnerabilities in virtualization technologies](#exploitation-of-vulnerabilities-in-virtualization-technologies)* later in the article, the Azure Hypervisor has been architected to provide robust isolation directly within the hypervisor that helps mitigate many sophisticated side channel attacks.
-The Azure Hypervisor defined security boundaries provide the base level isolation primitives for strong segmentation of code, data, and resources between potentially hostile multi-tenants on shared hardware. These isolation primitives are used to create multi-tenant resource isolation scenarios including:
+The Azure Hypervisor defined security boundaries provide the base level isolation primitives for strong segmentation of code, data, and resources between potentially hostile multitenants on shared hardware. These isolation primitives are used to create multitenant resource isolation scenarios including:
- **Isolation of network traffic between potentially hostile guests** – Virtual Network (VNet) provides isolation of network traffic between tenants as part of its fundamental design, as described later in the *[Separation of tenant network traffic](#separation-of-tenant-network-traffic)* section. VNet forms an isolation boundary where the VMs within a VNet can only communicate with each other. Any traffic destined to a VM from within the VNet or external senders without the proper policy configured will be dropped by the Host and not delivered to the VM.
- **Isolation for encryption keys and cryptographic material** – You can further augment the isolation capabilities with the use of [hardware security managers or specialized key storage](../security/fundamentals/encryption-overview.md), for example, storing encryption keys in FIPS 140 validated hardware security modules (HSMs) via [Azure Key Vault](/azure/key-vault/general/overview).
Microsoft provides detailed customer guidance on **[Windows](/azure/virtual-mach
Azure Compute offers virtual machine sizes that are [isolated to a specific hardware type](/azure/virtual-machines/isolation) and dedicated to a single customer. These VM instances allow your workloads to be deployed on dedicated physical servers. Using Isolated VMs essentially guarantees that your VM will be the only one running on that specific server node. You can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization).

## Networking isolation
-The logical isolation of tenant infrastructure in a public multi-tenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/solutions/network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet can't communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements.
+The logical isolation of tenant infrastructure in a public multitenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/solutions/network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet can't communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements.
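For example, two VNets stay isolated until you explicitly connect them; a hedged CLI sketch of bidirectional peering (resource names are placeholders):

```bash
# Sketch only: peer vnet-a and vnet-b in both directions; unpeered VNets
# can't exchange traffic.
az network vnet peering create \
  --resource-group MyResourceGroup \
  --name vnet-a-to-vnet-b \
  --vnet-name vnet-a \
  --remote-vnet vnet-b \
  --allow-vnet-access

az network vnet peering create \
  --resource-group MyResourceGroup \
  --name vnet-b-to-vnet-a \
  --vnet-name vnet-b \
  --remote-vnet vnet-a \
  --allow-vnet-access
```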
This section describes how Azure provides isolation of network traffic among tenants and enforces that isolation with cryptographic certainty.
Azure isolation assurances are further enforced by Microsoft's internal use of t
If you're accustomed to a traditional on-premises data center deployment, you would typically conduct a risk assessment to gauge your threat exposure and formulate mitigating measures when migrating to the cloud. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help you with this comparison.

## Logical isolation considerations
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep other customers from accessing your data or applications. If you're migrating from traditional on-premises physically isolated infrastructure to the cloud, this section addresses concerns that may be of interest to you.
+A multitenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses [logical isolation](../security/fundamentals/isolation-choices.md) to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multitenant cloud services while rigorously helping enforce controls designed to keep other customers from accessing your data or applications. If you're migrating from traditional on-premises physically isolated infrastructure to the cloud, this section addresses concerns that may be of interest to you.
### Physical versus logical security considerations

Table 6 provides a summary of key security considerations for physically isolated on-premises deployments (bare metal) versus logically isolated cloud-based deployments (Azure). It's useful to review these considerations prior to examining risks identified to be specific to shared cloud environments.
Of particular interest are efforts to learn the **cryptographic keys of a peer V
Overall, PaaS – or any workload that autocreates VMs – contributes to churn in VM placement that leads to randomized VM allocation. Random placement of your VMs makes it much harder for attackers to get on the same host. In addition, host access is hardened with greatly reduced attack surface that makes these types of exploits difficult to sustain.

## Summary
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
+A multitenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multitenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
-Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:
+Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multitenant, cryptographically certain, logically isolated cloud services using a common set of principles:
- User access controls with authentication and identity separation that uses Microsoft Entra ID and Azure role-based access control (Azure RBAC).
- Compute isolation for processing, including both logical and physical compute isolation.
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md
Microsoft provides [strong customer commitments](https://www.microsoft.com/trust
## Tenant separation
-Azure is a hyperscale public multi-tenant cloud services platform that provides you with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help you increase efficiency and unlock insights into your operations and performance.
+Azure is a hyperscale public multitenant cloud services platform that provides you with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help you increase efficiency and unlock insights into your operations and performance.
-A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
+A multitenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multitenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
-Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:
+Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multitenant, cryptographically certain, logically isolated cloud services using a common set of principles:
- User access controls with authentication and identity separation
- Compute isolation for processing
Data encryption provides isolation assurances that are tied directly to encrypti
The [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. Microsoft maintains an active commitment to meeting the [FIPS 140 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the US National Institute of Standards and Technology (NIST) [Cryptographic Module Validation Program (CMVP)](https://csrc.nist.gov/Projects/cryptographic-module-validation-program). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
-While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 validation for a cloud service, cloud service providers can obtain and operate FIPS 140 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/), all Azure services use FIPS 140 approved algorithms for data security because the operating system uses FIPS 140 approved algorithms while operating at a hyper scale cloud. The corresponding crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, you can store your own cryptographic keys and other secrets in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys](../security/fundamentals/encryption-models.md#server-side-encryption-using-customer-managed-keys-in-azure-key-vault).
+While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 validation for a cloud service, cloud service providers can obtain and operate FIPS 140 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/), all Azure services use FIPS 140 approved algorithms for data security because the operating system uses FIPS 140 approved algorithms while operating at a hyper scale cloud. The corresponding crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, you can store your own cryptographic keys and other secrets in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys](../security/fundamentals/encryption-models.md#server-side-encryption-using-customer-managed-keys-in-azure-key-vault-and-azure-managed-hsm).
### Encryption key management
azure-maps Release Notes Drawing Tools Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-drawing-tools-module.md
This document contains information about new features and other changes to the Azure Maps Drawing Tools Module.
-## [1.0.5] (CDN: November 4, 2024, npm: TBA)
+## [1.0.5] (CDN: November 4, 2024, npm: November 7)
### Bug fixes
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog] -
+[1.0.5]: https://www.npmjs.com/package/azure-maps-drawing-tools/v/1.0.5
[1.0.4]: https://www.npmjs.com/package/azure-maps-drawing-tools/v/1.0.4
[1.0.3]: https://www.npmjs.com/package/azure-maps-drawing-tools/v/1.0.3
[1.0.2]: https://www.npmjs.com/package/azure-maps-drawing-tools/v/1.0.2
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (latest)
-### [3.5.0] (CDN: November 4, 2024, npm: TBA)
+### [3.5.0] (CDN: November 4, 2024, npm: November 7)
#### New features - Add support for fullscreen control.
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.5.0]: https://www.npmjs.com/package/azure-maps-control/v/3.5.0
[3.4.0]: https://www.npmjs.com/package/azure-maps-control/v/3.4.0
[3.3.0]: https://www.npmjs.com/package/azure-maps-control/v/3.3.0
[3.2.1]: https://www.npmjs.com/package/azure-maps-control/v/3.2.1
azure-vmware Configure Vsan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vsan.md
Run the `Set-vSANCompressDedupe` cmdlet to set the preferred space efficiency model.
>[!NOTE]
>Setting Compression to False and Deduplication to True sets vSAN to Dedupe and Compression.
>Setting Compression to False and Dedupe to False disables all space efficiency.
- >Azure VMware Solution default is Dedupe and Compression
- >Compression only provides slightly better performance
+ >Azure VMware Solution default is Dedupe and Compression.
+ >Compression only provides slightly better performance.
>Disabling both compression and deduplication offers the greatest performance gains, however at the cost of space utilization.

1. Check **Notifications** to see the progress.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Adding a disk to a protected VM | Supported.
Resizing a disk on a protected VM | Supported.
Shared storage | Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up.
[Shared disks](/azure/virtual-machines/disks-shared-enable) | Not supported. <br><br> - You can exclude shared disks with Enhanced policy and back up the other supported disks in the VM. <br><br> - You can use S2D to create a shared disk or standalone volumes by combining capacities from disks in different VMs. Azure Backup doesn't support backup of a shared volume (between VMs for database cluster or cluster configuration) created using S2D.
-<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md).[Learn about the disk considerations for Azure VM](/azure/virtual-machines/disks-types#ultra-disk-limitations). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - File-level restore is currently not supported for machines using Ultra disks. <br><br> - GRS vaults and Cross-Region Restore are currently supported in the following regions for machines using Ultra Disks: Southeast Asia, East Asia, North Europe, West Europe, East US, West US, and West US 3.
-<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). [Learn about the disk considerations for Azure VM](/azure/virtual-machines/disks-types#regional-availability). <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks. <br><br> - GRS vaults and Cross-Region Restore are currently supported in the following regions for machines using Premium SSDv2 Disks: Southeast Asia, East Asia, North Europe, West Europe, East US, West US, and West US 3.
+<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md).[Learn about the disk considerations for Azure VM](/azure/virtual-machines/disks-types#ultra-disk-limitations). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - File-level restore is currently not supported for machines using Ultra disks. <br><br> - GRS vaults and Cross-Region Restore are currently supported in the following regions for machines using Ultra Disks: South Central US, Brazil South, Canada East, Canada Central, East US2, South East Asia, West US, Central US, Korea South, Korea Central, South Central US, West Europe, North Central US, East Asia, USGov Texas, USGov Arizona, USGov Texas, West US2, North Europe, East US, West Central US, East US.
+<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). [Learn about the disk considerations for Azure VM](/azure/virtual-machines/disks-types#regional-availability). <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks. <br><br> - GRS vaults and Cross-Region Restore are currently supported in the following regions for machines using Premium SSDv2 Disks: Brazil South, Central US, East Asia, East US, East US2, North Central US, North Europe, South Central US, South East Asia, UK South, UK West, West Europe, West US, West US3.
[Temporary disks](/azure/virtual-machines/managed-disks-overview#temporary-disk) | Azure Backup doesn't back up temporary disks.
NVMe/[ephemeral disks](/azure/virtual-machines/ephemeral-os-disks) | Supported.
[Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
Operation Callback URI is an optional parameter in some mid-call APIs that use e
| `Recognize` | `RecognizeCompleted` / `RecognizeFailed` / `RecognizeCanceled` |
| `StopContinuousDTMFRecognition` | `ContinuousDtmfRecognitionStopped` |
| `SendDTMF` | `ContinuousDtmfRecognitionToneReceived` / `ContinuousDtmfRecognitionToneFailed` |
+| `Hold` | `HoldFailed` |
+| `StartMediaStreaming` | `MediaStreamingStarted` / `MediaStreamingFailed` |
+| `StopMediaStreaming` | `MediaStreamingStopped` / `MediaStreamingFailed` |
+| `StartTranscription` | `TranscriptionStarted` / `TranscriptionFailed` |
+| `UpdateTranscription` | `TranscriptionUpdated` / `TranscriptionFailed` |
+| `StopTranscription` | `TranscriptionStopped` / `TranscriptionFailed` |
## Next steps
container-apps Sessions Code Interpreter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-code-interpreter.md
Authorization: Bearer <TOKEN>
}
```
+## Logging
+
+Code interpreter sessions don't support logging directly. Your application that's interacting with the sessions can log requests to the session pool management API and its responses.
+
+## Billing
+
+Code interpreter sessions are billed based on the duration of each session. See [Billing](billing.md#dynamic-sessions) for more information.
+ ## Next steps

> [!div class="nextstepaction"]
container-apps Sessions Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-custom-container.md
az containerapp sessionpool create \
--registry-username <USER_NAME> \
--registry-password <PASSWORD> \
--container-type CustomContainer \
- --image myregistry.azurecr.io/my-container-image:1.0 \
+ --image myregistry.azurecr.io/my-container-image:1.0 \
--cpu 0.25 --memory 0.5Gi \
--target-port 80 \
--cooldown-period 300 \
This command creates a session pool with the following settings:
| `--env-vars` | `"key1=value1" "key2=value2"` | The environment variables to set in the container. |
| `--location` | `"Supported Location"` | The location of the session pool. |
+To check on the status of the session pool, use the `az containerapp sessionpool show` command:
+
+```bash
+az containerapp sessionpool show \
+ --name <SESSION_POOL_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --query "properties.poolManagementEndpoint" \
+ --output tsv
+```
To update the session pool, use the `az containerapp sessionpool update` command.

# [Azure Resource Manager](#tab/arm)
Before you send the request, replace the placeholders between the `<>` brackets
```json
{
"type": "Microsoft.App/sessionPools",
- "apiVersion": "2024-02-02-preview",
+ "apiVersion": "2024-08-02-preview",
"name": "my-session-pool", "location": "westus2", "properties": {
Before you send the request, replace the placeholders between the `<>` brackets
"executionType": "Timed", "cooldownPeriodInSeconds": 600 },
+ "secrets": [
+ {
+ "name": "registrypassword",
+ "value": "<REGISTRY_PASSWORD>"
+ }
+ ],
"customContainerTemplate": {
+ "registryCredentials": {
+ "server": "myregistry.azurecr.io",
+ "username": "myregistry",
+ "passwordSecretRef": "registrypassword"
+ },
"containers": [ { "image": "myregistry.azurecr.io/my-container-image:1.0",
This template creates a session pool with the following settings:
| `scaleConfiguration.readySessionInstances` | `5` | The target number of sessions that are ready in the session pool all the time. Increase this number if sessions are allocated faster than the pool is being replenished. |
| `dynamicPoolConfiguration.executionType` | `Timed` | The type of execution for the session pool. Must be `Timed` for custom container sessions. |
| `dynamicPoolConfiguration.cooldownPeriodInSeconds` | `600` | The number of seconds that a session can be idle before the session is terminated. The idle period is reset each time the session's API is called. Value must be between `300` and `3600`. |
+| `secrets` | `[{ "name": "registrypassword", "value": "<REGISTRY_PASSWORD>" }]` | A list of secrets. |
+| `customContainerTemplate.registryCredentials.server` | `myregistry.azurecr.io` | The container registry server hostname. |
+| `customContainerTemplate.registryCredentials.username` | `myregistry` | The username to log in to the container registry. |
+| `customContainerTemplate.registryCredentials.passwordSecretRef` | `registrypassword` | The name of the secret that contains the password to log in to the container registry. |
| `customContainerTemplate.containers[0].image` | `myregistry.azurecr.io/my-container-image:1.0` | The container image to use for the session pool. |
| `customContainerTemplate.containers[0].name` | `mycontainer` | The name of the container. |
| `customContainerTemplate.containers[0].resources.cpu` | `0.25` | The required CPU in cores. |
This template creates a session pool with the following settings:
> [!IMPORTANT]
> If the session is used to run untrusted code, don't include information or data that you don't want the untrusted code to access. Assume the code is malicious and has full access to the container, including its environment variables, secrets, and files.
+#### Image caching
+
+When a session pool is created or updated, Azure Container Apps caches the container image in the pool. This caching helps speed up the process of creating new sessions.
+
+Any changes to the image aren't automatically reflected in the sessions. To update the image, update the session pool with a new image tag. Use a unique tag for each image update to ensure that the new image is pulled.
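A hedged sketch of that update, assuming `az containerapp sessionpool update` accepts the same `--image` parameter as the create command shown earlier (the `:1.1` tag is a placeholder):

```bash
# Sketch only: point the pool at a rebuilt image under a new, unique tag so the
# cached image is refreshed.
az containerapp sessionpool update \
  --name my-session-pool \
  --resource-group <RESOURCE_GROUP> \
  --image myregistry.azurecr.io/my-container-image:1.1
```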
+ ### Working with sessions

Your application interacts with a session using the session pool's management API.
Your application interacts with a session using the session pool's management AP
A pool management endpoint for custom container sessions follows this format: `https://<SESSION_POOL>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io`. To retrieve the session pool's management endpoint, use the `az containerapp sessionpool show` command:+ ```bash az containerapp sessionpool show \ --name <SESSION_POOL_NAME> \
This request is forwarded to the custom container session with the identifier fo
In the example, the session's container receives the request at `http://0.0.0.0:<INGRESS_PORT>/<API_PATH_EXPOSED_BY_CONTAINER>`.
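For illustration, a forwarded request could look like the following curl sketch; the API path, identifier value, payload, and token are placeholders:

```bash
# Sketch only: the pool management endpoint routes this call to the session
# named in the identifier query parameter, then on to the container's ingress port.
curl -X POST \
  "https://<SESSION_POOL>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io/<API_PATH_EXPOSED_BY_CONTAINER>?identifier=user-1234" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"input": "example payload"}'
```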
+### Using managed identity
+
+A managed identity from Microsoft Entra ID allows your custom container session pools and their sessions to access other Microsoft Entra protected resources. For more about managed identities in Microsoft Entra ID, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+You can enable managed identities for your custom container session pools. Both system-assigned and user-assigned managed identities are supported.
+
+There are two ways to use managed identities with custom container session pools:
+
+* **Image pull authentication**: Use the managed identity to authenticate with the container registry to pull the container image.
+
+* **Resource access**: Use the session pool's managed identity in a session to access other Microsoft Entra protected resources. Due to its security implications, this capability is disabled by default.
+
+ > [!IMPORTANT]
+ > If you enable access to managed identity in a session, any code or programs running in the session can create Entra tokens for the pool's managed identity. Since sessions typically run untrusted code, use this feature with caution.
+
+# [Azure CLI](#tab/azure-cli)
+
+To enable managed identity for a custom container session pool, use Azure Resource Manager.
+
+# [Azure Resource Manager](#tab/arm)
+
+To enable managed identity for a custom container session pool, add an `identity` property to the session pool resource. The `identity` property must have a `type` property with the value `SystemAssigned` or `UserAssigned`. For details on how to configure this property, see [Configure managed identities](managed-identity.md?tabs=arm%2Cdotnet#configure-managed-identities).
+
+The following example shows an ARM template snippet that enables a user-assigned identity for a custom container session pool and uses it for image pull authentication. Before you send the request, replace the placeholders between the `<>` brackets with the appropriate values for your session pool and session identifier.
+
+```json
+{
+ "type": "Microsoft.App/sessionPools",
+ "apiVersion": "2024-08-02-preview",
+ "name": "my-session-pool",
+ "location": "westus2",
+ "properties": {
+ "environmentId": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerApps/environments/<ENVIRONMENT_NAME>",
+ "poolManagementType": "Dynamic",
+ "containerType": "CustomContainer",
+ "scaleConfiguration": {
+ "maxConcurrentSessions": 10,
+ "readySessionInstances": 5
+ },
+ "dynamicPoolConfiguration": {
+ "executionType": "Timed",
+ "cooldownPeriodInSeconds": 600
+ },
+ "customContainerTemplate": {
+ "registryCredentials": {
+ "server": "myregistry.azurecr.io",
+ "identity": "<IDENTITY_RESOURCE_ID>"
+ },
+ "containers": [
+ {
+ "image": "myregistry.azurecr.io/my-container-image:1.0",
+ "name": "mycontainer",
+ "resources": {
+ "cpu": 0.25,
+ "memory": "0.5Gi"
+ },
+ "command": [
+ "/bin/sh"
+ ],
+ "args": [
+ "-c",
+ "while true; do echo hello; sleep 10;done"
+ ],
+ "env": [
+ {
+ "name": "key1",
+ "value": "value1"
+ },
+ {
+ "name": "key2",
+ "value": "value2"
+ }
+ ]
+ }
+ ],
+ "ingress": {
+ "targetPort": 80
+ }
+ },
+ "sessionNetworkConfiguration": {
+ "status": "EgressEnabled"
+ },
+ "managedIdentitySettings": [
+ {
+ "identity": "<IDENTITY_RESOURCE_ID>",
+ "lifecycle": "None"
+ }
+ ]
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<IDENTITY_RESOURCE_ID>": {}
+ }
+ }
+}
+```
+
+This template contains the following additional settings for managed identity:
+
+| Parameter | Value | Description |
+| --- | --- | --- |
+| `customContainerTemplate.registryCredentials.identity` | `<IDENTITY_RESOURCE_ID>` | The resource ID of the managed identity to use for image pull authentication. |
+| `managedIdentitySettings.identity` | `<IDENTITY_RESOURCE_ID>` | The resource ID of the managed identity to use in the session. |
+| `managedIdentitySettings.lifecycle` | `None` | The session lifecycle where the managed identity is available.<br><br>- `None` (default): The session can't access the identity. It's only used for image pull.<br><br>- `Main`: In addition to image pull, the main session can also access the identity. **Use with caution.** |
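If you wrap the resource above in a complete ARM template file, one way to deploy it is a standard resource group deployment. This is a sketch; the file name `session-pool.json` is an assumption.

```bash
# Deploy the session pool template to an existing resource group.
az deployment group create \
  --resource-group <RESOURCE_GROUP> \
  --template-file session-pool.json
```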
+++
+## Logging
+
+Console logs from custom container sessions are available in the Azure Log Analytics workspace associated with the Azure Container Apps environment in a table named `AppEnvSessionConsoleLogs_CL`.
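For example, you can query that table from the command line. The following is a sketch that assumes you know the workspace GUID and have the Log Analytics CLI extension installed.

```bash
# Return the most recent console log entries emitted by custom container sessions.
az monitor log-analytics query \
  --workspace <LOG_ANALYTICS_WORKSPACE_GUID> \
  --analytics-query "AppEnvSessionConsoleLogs_CL | take 100" \
  --output table
```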
+## Billing

Custom container sessions are billed based on the resources consumed by the session pool. For more information, see [Azure Container Apps billing](billing.md#custom-container).
container-apps Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions.md
Previously updated : 04/04/2024 Last updated : 11/04/2024
You can configure pools to set the maximum number of sessions that can be alloca
A session is a sandboxed environment that runs your code or application. Each session is isolated from other sessions and from the host environment with a [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) sandbox. Optionally, you can enable network isolation to further enhance security.
-#### Session identifiers
+You interact with sessions in a session pool by sending HTTP requests. Each session pool has a unique pool management endpoint.
-When you interact with sessions in a pool, you must define a session identifier to manage each session. The session identifier is a free-form string, meaning you can define it in any way that suits your application's needs. This identifier is a key element in determining the behavior of the session:
+For code interpreter sessions, you can also use an integration with an [LLM framework](./sessions-code-interpreter.md#llm-framework-integrations).
-- Reuse of existing sessions: This session is reused if there's already a running session that matches the identifier.-- Allocation of new sessions: If no running session matches the identifier, a new session is automatically allocated from the pool.
+### Session identifiers
-The session identifier is a string that you define that is unique within the session pool. If you're building a web application, you can use the user's ID. If you're building a chatbot, you can use the conversation ID.
+To send an HTTP request to a session, you must provide a session identifier in the request. You pass the session identifier in a query parameter named `identifier` in the URL when you make a request to a session.
-The identifier must be a string that is 4 to 128 characters long and can contain only alphanumeric characters and special characters from this list: `|`, `-`, `&`, `^`, `%`, `$`, `#`, `(`, `)`, `{`, `}`, `[`, `]`, `;`, `<`, and `>`.
+* If a session with the identifier already exists, the request is sent to the existing session.
+* If a session with the identifier doesn't exist, a new session is automatically allocated before the request is sent to it.
++
+#### Identifier format
-You pass the session identifier in a query parameter named `identifier` in the URL when you make a request to a session.
+The session identifier is a free-form string, meaning you can define it in any way that suits your application's needs.
-For code interpreter sessions, you can also use an integration with an [LLM framework](./sessions-code-interpreter.md#llm-framework-integrations). The framework handles the token generation and management for you. Ensure that the application is configured with a managed identity that has the necessary role assignments on the session pool.
+The session identifier is a string that you define that is unique within the session pool. If you're building a web application, you can use the user's ID as the session identifier. If you're building a chatbot, you can use the conversation ID.
-##### Protecting session identifiers
+The identifier must be a string that is 4 to 128 characters long and can contain only alphanumeric characters and special characters from this list: `|`, `-`, `&`, `^`, `%`, `$`, `#`, `(`, `)`, `{`, `}`, `[`, `]`, `;`, `<`, and `>`.
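As a quick sanity check, here's a sketch of validating a candidate identifier against these constraints in bash; the sample identifier is an assumption.

```bash
# Validate length (4-128) and allowed characters: alphanumerics plus | - & ^ % $ # ( ) { } [ ] ; < >
identifier="user-1234"
pattern='^[][A-Za-z0-9|&^%$#(){};<>-]{4,128}$'
if [[ $identifier =~ $pattern ]]; then
  echo "identifier is valid"
else
  echo "identifier is invalid"
fi
```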
+
+#### Protecting session identifiers
-The session identifier is sensitive information which requires a secure process as you create and manage its value. To protect this value, your application must ensure each user or tenant only has access to their own sessions.
+The session identifier is sensitive information which you must manage securely. Your application needs to ensure each user or tenant only has access to their own sessions.
The specific strategies that prevent misuse of session identifiers differ depending on the design and architecture of your app. However, your app must always have complete control over the creation and use of session identifiers so that a malicious user can't access another user's session.
Example strategies include:
> [!IMPORTANT] > Failure to secure access to sessions may result in misuse or unauthorized access to data stored in your users' sessions.
-### Authentication
+### <a name="authentication"></a>Authentication and authorization
-Authentication is handled using Microsoft Entra (formerly Azure Active Directory) tokens. Valid Microsoft Entra tokens are generated by an identity belonging to the *Azure ContainerApps Session Executor* and *Contributor* roles on the session pool.
+When you send requests to a session using the pool management API, authentication is handled using Microsoft Entra (formerly Azure Active Directory) tokens. Only Microsoft Entra tokens from an identity belonging to the *Azure ContainerApps Session Executor* role on the session pool are authorized to call the pool management API.
-To assign the roles to an identity, use the following Azure CLI commands:
+To assign the role to an identity, use the following Azure CLI command:
```bash
az role assignment create \
    --role "Azure ContainerApps Session Executor" \
    --assignee <PRINCIPAL_ID> \
    --scope <SESSION_POOL_RESOURCE_ID>
```
-az role assignment create \
- --role "Contributor" \
- --assignee <PRINCIPAL_ID> \
- --scope <SESSION_POOL_RESOURCE_ID>
If you're using an [LLM framework integration](sessions-code-interpreter.md#llm-framework-integrations), the framework handles the token generation and management for you. Ensure that the application is configured with a managed identity that has the necessary role assignments on the session pool.
access_token = token.token
> [!IMPORTANT]
-> A valid token can be used to create and access any session in the pool. Keep your tokens secure and don't share them with untrusted parties. End users should access sessions through your application, not directly.
+> A valid token can be used to create and access any session in the pool. Keep your tokens secure and don't share them with untrusted parties. End users should access sessions through your application, not directly. They should never have access to the tokens used to authenticate requests to the session pool.
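For ad-hoc testing from a trusted environment, you can generate a token with the Azure CLI using an identity that holds the role above. This is a sketch; the resource URI for dynamic sessions is an assumption that you should confirm against the session pool documentation.

```bash
# Acquire a Microsoft Entra token for calling the pool management API (audience value is an assumption).
ENTRA_TOKEN=$(az account get-access-token \
  --resource https://dynamicsessions.io \
  --query accessToken \
  --output tsv)
```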
#### Lifecycle
The Container Apps runtime automatically manages the lifecycle for each session
Azure Container Apps dynamic sessions are built to run untrusted code and applications in a secure and isolated environment. While sessions are isolated from one another, anything within a single session, including files and environment variables, is accessible by users of the session. You should only configure or upload sensitive data to a session if you trust the users of the session.
+By default, sessions are prevented from making outbound network requests. You can control network access by configuring network status settings on the session pool.
+
+In addition, follow the guidance in the [authentication and authorization](#authentication) section to ensure that only authorized users can access sessions and in the [protecting session identifiers](#protecting-session-identifiers) section to ensure that session identifiers are secure.
+## Preview limitations

Azure Container Apps dynamic sessions is currently in preview. The following limitations apply:
Azure Container Apps dynamic sessions is currently in preview. The following lim
| North Central US | ✔ | - |
| North Europe | ✔ | ✔ |
| West US 2 | ✔ | ✔ |
-
-* Logging isn't supported. Your application can log requests to the session pool management API and its responses.
## Next steps
cost-management-billing Tutorial Improved Exports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md
For Azure Storage accounts:
- Your Azure storage account must be configured for blob or file storage.
- Don't configure exports to a storage container that is configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
- To export to storage accounts with configured firewalls, you need other privileges on the storage account. The other privileges are only required during export creation or modification. They are:
- - Owner role on the storage account.
- Or
- - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
- Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.
+ - **Owner** role or any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
+
+ - Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall.
- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.

  :::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" :::
You can retrieve up to 13 months of historical data through the portal UI for al
- All available prices: - MCA/MPA: Up to 13 months.
-
+
- EA: Up to 25 months (starting from December 2022).
-
+
+#### Why do I get the 'Unauthorized' error while trying to create an Export?
+
+When attempting to create an Export to a storage account with a firewall, the user must have the Owner role or a custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions. If these permissions are missing, you will encounter an error like:
++
+```json
+{
+ "error":{
+ "code":"Unauthorized",
+ "message":"The user does not have authorization to perform 'Microsoft.Authorization/roleAssignments/write' action on specified storage account, please use a storage account with sufficient permissions. If the permissions have changed recently then retry after some time."
+ }
+}
+```
+
+You can check for the permissions on the storage account by referring to the steps in [Check access for a user to a single Azure resource](../../role-based-access-control/check-access.md).
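Alternatively, here's a CLI sketch for listing your role assignments on the storage account; the placeholders are assumptions.

```bash
# List role assignments for a user or service principal scoped to the storage account.
az role assignment list \
  --assignee <USER_OR_SERVICE_PRINCIPAL_ID> \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT>" \
  --output table
```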
+## Next steps

- Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md).
cost-management-billing Cost Usage Details Focus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/dataset-schema/cost-usage-details-focus.md
To learn more about FOCUS, see [FOCUS: A new specification for cloud cost transp
You can view the latest changes to the FOCUS cost and usage details file schema in the [FinOps Open Cost and Usage Specification changelog](https://github.com/FinOps-Open-Cost-and-Usage-Spec/FOCUS_Spec/blob/working_draft/CHANGELOG.md).
+#### Note: Version 1.0r2
+
+FOCUS 1.0r2 is a follow-up release to the FOCUS 1.0 dataset that changes how date columns are formatted, which may impact anyone who is parsing and especially modifying these values. The 1.0r2 dataset is still aligned with the FOCUS 1.0 specification. The "r2" indicates this is the second release of that 1.0 specification. The only change in this release is that all date columns now include seconds to more closely adhere to the FOCUS 1.0 specification. As an example, a 1.0 export may use "2024-01-01T00:00Z" and a 1.0r2 export would use "2024-01-01T00:00:00Z". The only difference is the extra ":00" for seconds at the end of the time segment of the ISO formatted date string.
+## Version 1.0

| Column | Fields | Description |
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md
Previously updated : 10/12/2024 Last updated : 11/05/2024 # Copy data from Couchbase using Azure Data Factory (Preview)
The service provides a built-in driver to enable connectivity, therefore you don
The connector supports the Couchbase version higher than 6.0.
-The connector now uses the following precision. The previous precision is compatible.
- - Double values use 17 significant digits (previously 15 significant digits)
- - Float values use 9 significant digits (previously 7 significant digits)
- ## Prerequisites [!INCLUDE [data-factory-v2-integration-runtime-requirements](includes/data-factory-v2-integration-runtime-requirements.md)]
data-factory Connector Google Bigquery Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery-legacy.md
Previously updated : 05/22/2024 Last updated : 11/05/2024 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics (legacy)
For a list of data stores that are supported as sources or sinks by the copy act
The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install a driver to use this connector.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+
+The connector no longer supports P12 keyfiles. If you rely on service accounts, we recommend that you use JSON keyfiles instead. The P12CustomPwd property that supported the P12 keyfile is also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes).
++ >[!NOTE] >This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis, refer to [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you do not trigger too many concurrent requests to the account.
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
Previously updated : 10/09/2024 Last updated : 11/05/2024 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics
For a list of data stores that are supported as sources or sinks by the copy act
The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install a driver to use this connector.
-The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
-
-The connector no longer supports P12 keyfiles. If you rely on service accounts, you are recommended to use JSON keyfiles instead. The P12CustomPwd property used for supporting the P12 keyfile was also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes#bigquery_6).
- >[!NOTE] >This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis, refer to [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you do not trigger too many concurrent requests to the account.
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md
Previously updated : 09/12/2024 Last updated : 11/05/2024 # Copy data from Shopify using Azure Data Factory or Synapse Analytics (Preview)
The service provides a built-in driver to enable connectivity, therefore you don
The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
-The billing_on column property was removed from the following tables. For more information, see this [article](https://shopify.dev/docs/api/admin-rest/2024-07/resources/usagecharge).
- - Recurring_Application_Charges
- - UsageCharge
+The billing_on column property was removed from the Recurring_Application_Charges and UsageCharge tables due to Shopify's official deprecation of the billing_on field.
## Getting started
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-xero.md
Previously updated : 09/12/2024 Last updated : 11/05/2024 # Copy data from Xero using Azure Data Factory or Synapse Analytics
Specifically, this Xero connector supports:
- All Xero tables (API endpoints) except "Reports". - Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+>[!NOTE]
+>Due to the [sunset of OAuth 1.0 authentication in Xero](https://devblog.xero.com/an-update-on-why-we-are-saying-goodbye-oauth-1-0a-hello-oauth-2-0-6a839230908f), please [upgrade to OAuth 2.0 authentication type](#linked-service-properties) if you are currently using OAuth 1.0 authentication type.
+ ## Getting started [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/release-notes.md
Last updated 04/17/2024
-# What's new
+# What's new in Microsoft Defender for IoT
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
This tutorial will help you learn how to install and authenticate the Defender f
In this tutorial you'll learn how to: > [!div class="checklist"]
+>
> - Download and install the micro agent
> - Authenticate the micro agent
> - Validate the installation
> - Test the system
> - Install a specific micro agent version

## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
dev-box Dev Box Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/dev-box-roadmap.md
Previously updated : 08/26/2024 Last updated : 11/06/2024 #customer intent: As a customer, I want to understand upcoming features and enhancements in Microsoft Dev Box so that I can plan and optimize development and deployment strategies.
Microsoft Dev Box can significantly enhance developer productivity by minimizing
- [Team customizations](https://developercommunity.visualstudio.com/t/Share-customization-files-across-my-team/10729596?sort=newest): as a project lead or a dev center admin, set up a config-as-code Dev Box configuration for an entire team, allowing quicker onboarding of developers without having them deal with onboarding complexity. - [Dev Center Imaging](https://developercommunity.visualstudio.com/t/Speed-up-Dev-Box-customization-using-a-c/10729598): as a project lead or a dev center admin, tailor customizations to each team without losing out Dev Box creation performance. Optimize these team customizations into an image without investing in and maintaining your own custom image generation capabilities. - [Secrets & variables](https://developercommunity.visualstudio.com/t/Customization-YAMLs:-Use-secrets-from-a/10729608?sort=newest): as a project lead or a dev center admin, you can now source secrets from subscriptions that are different from the one your DevCenter is in, allowing you to reuse centralized secret stores with Dev Box-- [Native Run as user support for Dev Box customizations](https://developercommunity.visualstudio.com/t/Improve-run-as-user-support-for-Dev-Box/10719951): some of your Dev Box customization tasks require to be run as the signed in user. Native run as user support provides capability of executing customization under the user context with improved reliability, status tracking, and error reporting. -- [Project Policy](https://developercommunity.visualstudio.com/t/Curation-for-Dev-Center-and-Projects-und/10719953): as a dev center admin, set up guardrails around resources that different projects should and shouldn't access.
+- [Native Run as user support](https://developercommunity.visualstudio.com/t/Improve-run-as-user-support-for-Dev-Box/10719951): some of your Dev Box customization tasks need to run as the signed-in user. Native run as user support provides the capability to execute customizations under the user context with improved reliability, status tracking, and error reporting.
**Enhanced user provided customizations** -- [Improved Dev Box creation flow on Dev Home and Developer Portal](https://developercommunity.visualstudio.com/t/I-would-like-to-use-Dev-Box-customizatio/10719976): as a developer, get started with Dev Box customizations using a UI to choose repositories to clone or packages to install, without having to author a yaml configuration by hand.-- [Native support for WinGet & DSC](https://developercommunity.visualstudio.com/t/I-would-like-my-Dev-Box-to-run-Winget-an/10719983): all Dev Boxes will be able to use WinGet and DSC to install packages and apply configurations, without requiring a catalog to be attached.
+- [Native support for WinGet & DSC](https://developercommunity.visualstudio.com/t/I-would-like-my-Dev-Box-to-run-Winget-an/10719983): all Dev Boxes will be able to use WinGet and DSC to install packages and apply configurations, without requiring a catalog to be attached.
-**First time developer experience**
+**Developer onboarding & experience**
- [Developer Portal landing page and welcome tour](https://developercommunity.microsoft.com/t/Developer-Portal-landing-page-and-welcom/10720999): as a developer getting onboarded to Dev Box, you get to learn about how to use the product and discover features.-- [Pin Developer Portal to task view/desktop](https://developercommunity.visualstudio.com/t/Ping-to-task-view-is-not-quite-working-f/10719957): as a developer, you can quickly access your Developer Box by pinning it to your Windows task view.
+- [Region Selection Optimization for Dev Box Creation](https://developercommunity.visualstudio.com/t/Region-selection-optimization-based-on-l/10784537): as a developer, easily create your new Dev Box in an optimal region based on your location. As a dev center admin, optimize the location of existing Dev Boxes based on end user location and available capacity.
+- [Direct launch via the Windows App](https://developercommunity.visualstudio.com/t/Direct-launch-via-the-Windows-App/10784545): as a developer, quickly launch Dev Box from the developer portal on the Windows App RDP client.
+- [Cross clients multi-monitor settings](https://developercommunity.visualstudio.com/t/Dual-Screen-Settings-adjustment-setting-/10770153): as a developer, your multi-monitor settings will be shared consistently across RDP clients.
+- [Notification center for Developer Portal](https://developercommunity.visualstudio.com/t/Outage-notifications-for-Dev-Box/10720453?q=notifications): as a developer, you will get service notifications and updates right in the Developer Portal.
+- [Pin Dev Box to task view from Developer Portal](https://developercommunity.visualstudio.com/t/Ping-to-task-view-is-not-quite-working-f/10719957): as a developer, you can quickly access your Developer Box by pinning it to your Windows task view.
## Enterprise management
Microsoft Dev Box aims to deliver centralized governance based on organizational
**Streamlined and flexible onboarding for enterprises** -- [Firewall Service Tags](https://developercommunity.visualstudio.com/t/Dev-Box:-Advanced-notice-and-notificatio/10704156?q=firewall): as IT administrator working on setting up Dev Box for your organization, quickly configure traffic roles by utilizing Service Tags in your Firewall set up. -- [Guest Account](https://developercommunity.visualstudio.com/t/Enable-Guest-accountsVendors-to-access-/10290470): as a dev center admin, securely onboard and support external teams and contractors to your Dev Box service.
+- [In product prerequisites](https://developercommunity.visualstudio.com/t/User-License-Assignment-as-Pre-requisite/10523902?q=pre-requisits): as a dev center admin, you will get a dynamic prerequisites page that highlights any missing requirements and helps you track the progress you are making in setting up the Dev Box service.
+- [New Supported Regions](https://devblogs.microsoft.com/develop-from-the-cloud/microsoft-dev-box-regional-availability/): as a dev center admin, you will be able to enable your development team to create dev boxes in new regions including [UAE North](https://developercommunity.visualstudio.com/t/Support-for-Dev-Box-in-UAE-North/10781448) and [Spain Central](https://developercommunity.visualstudio.com/t/Dev-Box-support-in-Spain-Central/10781449).
+- [Expand IPs within existing subnets](https://developercommunity.visualstudio.com/t/Expand-IPs-within-existing-Subnets-in-a/10781464): as a dev center admin, you will be able to expand IP ranges in subnets that are running out of IP addresses.
+- [RRS Integration into QMS](https://developercommunity.visualstudio.com/t/Automatically-approve-higher-amounts-of/10781465): as a dev center admin for a trusted customer, you will be able to request and get a larger amount of quota automatically approved through QMS.
**Enhanced monitoring and cost controls capabilities**
+- [Hibernation on disconnect:](https://developercommunity.visualstudio.com/t/Customize-hibernation-options/10640621?entry=suggestion&q=hibernation+disconnect) as a dev center admin, reduce cost of compute by enabling dev boxes to hibernate on disconnect based on active working hours of developers.
- [Dev Box logs:](https://developercommunity.visualstudio.com/t/When-Microsoft-Monitoring-Agent-will-be/10471575?entry=suggestion&q=Azure+Monitor) as a dev center admin, access user level engagement metrics and connectivity related metrics. -- [Azure Monitor Agent (AMA) scoping](https://developercommunity.visualstudio.com/t/When-Microsoft-Monitoring-Agent-will-be/10471575?entry=suggestion&q=Azure+Monitor): as a dev center admin, focus your monitoring solely on Dev Box devices, which simplifies monitoring and reduces costs. -- [Hibernation on disconnect (preview):](https://developercommunity.visualstudio.com/t/Customize-hibernation-options/10640621?entry=suggestion&q=hibernation+disconnect) as a dev center admin, reduce cost of compute by enabling Dev Boxes to hibernate on disconnect based on active working hours of developers. **Security and privacy**
+- [Project Policy](https://developercommunity.visualstudio.com/t/Curation-for-Dev-Center-and-Projects-und/10719953): as a dev center admin, set up guardrails around resources that different projects should and shouldn't access.
- [Customer Managed Keys (CMK):](https://developercommunity.visualstudio.com/t/Encryption-with-customer-managed-keys-fo/10720463) as a dev center admin, have a greater control over your data encryption by managing your own encryption keys.-- [Privileged Identity Management (PIM)](https://developercommunity.visualstudio.com/t/Only-allows-Dev-Box-projects-to-utilize-/10502335): as a dev center admin, get just-in-time admin access to project configurations.-- [Developer offboarding](https://developercommunity.visualstudio.com/t/Provide-a-means-to-do-external-cleanup/10670632?q=delete+unused+): as a dev center admin, configure your Dev Box service to offload users from Dev Boxes when they leave the organization and switch between teams.
+- [Developer offboarding](https://developercommunity.visualstudio.com/t/Provide-a-means-to-do-external-cleanup/10670632?q=delete+unused+): as a dev center admin, configure your Dev Box service to offload users from Dev Boxes when they leave the organization and switch between teams.
+- [Firewall Service Tags](https://developercommunity.visualstudio.com/t/Dev-Box:-Advanced-notice-and-notificatio/10704156?q=firewall): as IT administrator working on setting up Dev Box for your organization, quickly configure traffic roles by utilizing Service Tags in your Firewall set up.
## Fundamental performance & reliability
Microsoft Dev Box aims to provide a "like-local" developer experience that is as
**Seamless and reliable connectivity** - [Single Sign On (SSO)](https://developercommunity.visualstudio.com/t/Enable-single-sign-on-for-dev-boxes/10720478): as a developer, you no longer need to provide your sign-in credentials every time you access your Dev Box.-- [Simple Multiple Independent Links Evaluation & Switching](https://developercommunity.microsoft.com/t/Reliable-Connectivity-to-Dev-Box/10720996) [(SMILES):](https://developercommunity.microsoft.com/t/Reliable-Connectivity-to-Dev-Box/10720996) as a developer, you get an uninterrupted reliable Dev Box connection by automatically switching to backup links as needed without disconnecting your active session.-- [Azure region optimizations based on user locations:](https://developercommunity.visualstudio.com/t/Move-VM-to-different-poolregion/10277787) as a developer, easily create your new Dev Box in an optimal region based on your location. As a dev center admin, optimize the location of existing Dev Boxes based on end user location and available capacity. -- [Visual Studio 2022 and Visual Studio Code RDP optimizations](https://developercommunity.microsoft.com/t/VS-and-VS-Code-optimizations-for-Dev-Box/10720946): as a developer, type and navigate your code without any noticeable latency.
+- [Visual Studio 2022 RDP optimizations](https://developercommunity.microsoft.com/t/VS-and-VS-Code-optimizations-for-Dev-Box/10720946): as a developer, type and navigate your code without any noticeable latency.
+- [Auto network repair](https://developercommunity.visualstudio.com/t/Enable-Network-Adapter-after-wrongly-dis/10656306): as a developer, if you lose connectivity to your Dev Box due to misconfiguring your Dev Box network adapter, Dev Box will automatically reset your network connection.
+- [Azure region optimizations based on user locations:](https://developercommunity.visualstudio.com/t/Move-VM-to-different-poolregion/10277787) as a dev center admin, optimize the location of existing Dev Boxes based on end user location and available capacity.
**Service health & reliability** -- [Backup SKUs:](https://developercommunity.visualstudio.com/t/Back-up-SKUs-in-case-of-capacity-outage/10720451) as a dev center admin, you get the option to select backup SKUs to be automatically utilized to avoid interruptions during a service outage.
+- [Startup optimizations](https://developercommunity.visualstudio.com/t/Startup-optimizations-for-Dev-box/10781438): as a developer, you will experience a more reliable and stable Dev Box startup experience.
+- [Backup SKUs:](https://developercommunity.visualstudio.com/t/Back-up-SKUs-in-case-of-capacity-outage/10720451) as a developer, you will be able to smoothly resume working on existing dev boxes during service outages by opting to use a fallback SKU.
- [Self-service snapshot and restore](https://developercommunity.visualstudio.com/t/Self-serve-snapshot-and-restore/10719611): as a developer, you can recover your Dev Box by restoring it to a previous snapshot.-- [Outage notifications:](https://developercommunity.visualstudio.com/t/Outage-notifications-for-Dev-Box/10720453) developers and admins can stay informed about ongoing service outages via outage notification shared within the developer and Azure status portals.
+- [Outage notifications:](https://developercommunity.visualstudio.com/t/Outage-notifications-for-Dev-Box/10720453) developers and admins can stay informed about ongoing service outages via outage notifications shared in the developer portal and in the [Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health) and [Azure Status](https://azure.status.microsoft/status) portals.
++
+This roadmap outlines our current priorities, and we remain flexible to adapt based on customer feedback. We invite you to [share your thoughts and suggest more capabilities you would like to see](https://aka.ms/DevBox/Feedback). Your insights help us refine our focus and deliver even greater value.
## Related content -- [What is Microsoft Dev Box?](overview-what-is-microsoft-dev-box.md)
+- [What is Microsoft Dev Box?](overview-what-is-microsoft-dev-box.md)
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
Last updated 07/31/2024
To scale your event processing application, you can run multiple instances of the application and have the load balanced among themselves. In the older and deprecated versions, `EventProcessorHost` allowed you to balance the load between multiple instances of your program and checkpoint events when receiving the events. In the newer versions (5.0 onwards), **EventProcessorClient** (.NET and Java), or **EventHubConsumerClient** (Python and JavaScript) allows you to do the same. The development model is made simpler by using events. You can subscribe to the events that you're interested in by registering an event handler. If you're using the old version of the client library, see the following migration guides: [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md), [Java](https://github.com/Azure/azure-sdk-for-jav).
-This article describes a sample scenario for using multiple instances of client `applications to read events from an event hub. It also gives you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
+This article describes a sample scenario for using multiple instances of client applications to read events from an event hub. It also gives you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
> [!NOTE] > The key to scale for Event Hubs is the idea of partitioned consumers. In contrast to the [competing consumers](/previous-versions/msp-n-p/dn568101(v=pandp.10)) pattern, the partitioned consumer pattern enables high scale by removing the contention bottleneck and facilitating end to end parallelism.
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
Previously updated : 09/02/2024 Last updated : 11/7/2024
While FastPath supports many configurations, it doesn't support the following fe
* Load Balancers: If you deploy an Azure internal load balancer in your virtual network or the Azure PaaS service you deploy in your virtual network, the network traffic from your on-premises network to the virtual IPs hosted on the load balancer is sent to the virtual network gateway.
+* Gateway Transit: If you deploy two peered hub virtual networks connected to one circuit, make sure you set **Allow gateway transit** to false on the virtual network peering; otherwise, you'll experience connectivity issues (see the CLI sketch after this list).
+
+* Use Remote Gateway: If you deploy a spoke virtual network peered to two hub virtual networks, you can only use one hub gateway as the remote gateway. If you use both as remote gateways, you'll experience connectivity issues.
* Private Link: FastPath connectivity to a private endpoint or Private Link service over an ExpressRoute Direct circuit is supported for limited scenarios. For more information, see [enable FastPath and Private Link for 100-Gbps ExpressRoute Direct](expressroute-howto-linkvnet-arm.md#fastpath-virtual-network-peering-user-defined-routes-udrs-and-private-link-support-for-expressroute-direct-connections). FastPath connectivity to a private endpoint/Private Link service isn't supported for ExpressRoute partner provider circuits.

* DNS Private Resolver: Azure ExpressRoute FastPath doesn't support connectivity to [DNS Private Resolver](../dns/dns-private-resolver-overview.md).
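For the gateway transit limitation noted above, the following sketch turns off gateway transit on a hub virtual network peering; the resource names are placeholders and the property path is an assumption to verify for your deployment.

```bash
# Disable gateway transit on an existing virtual network peering.
az network vnet peering update \
  --resource-group <RESOURCE_GROUP> \
  --vnet-name <HUB_VNET_NAME> \
  --name <PEERING_NAME> \
  --set allowGatewayTransit=false
```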
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 10/03/2024 Last updated : 11/07/2024
The following table shows locations by service provider. If you want to view ava
| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | &check; | &check; | Taipei | | **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | &check; |&check; | Milan | | **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | &check; | &check; | Montreal<br/>Quebec City<br/>Toronto2 |
-| **[Flo Networks](https://flo.net/)** | &check; | &check; | Dallas<br/>Los Angeles<br/>Miami<br/>Queretaro(Mexico City)<br/>Sao Paulo<br/>Washington DC<br/>**Locations are listed under Neutrona (company name is Neutrona Networks) Networks and Transtelco as providers for circuit creation* |
+| **[Flo Networks](https://flo.net/microsoft)** | &check; | &check; | Dallas<br/>Los Angeles<br/>Miami<br/>Queretaro(Mexico City)<br/>Sao Paulo<br/>Washington DC<br/>**Locations are listed under Neutrona (company name is Neutrona Networks) Networks and Transtelco as providers for circuit creation* |
| **[GBI](https://www.gbiinc.com/microsoft-azure/)** | &check; | &check; | Dubai2<br/>Frankfurt | | **[GÉANT](https://www.geant.org/Networks)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>Madrid2<br/>Marseille | | **[GlobalConnect](https://www.globalconnect.no/)** | &check; | &check; | Amsterdam<br/>Copenhagen<br/>Oslo<br/>Stavanger<br/>Stockholm |
firmware-analysis Automate Firmware Analysis Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/automate-firmware-analysis-service-principals.md
+
+ Title: Use service principals to automate workflows in Firmware analysis
+description: Learn about how to use service principals to automate workflows for Firmware Analysis.
+++ Last updated : 11/04/2024++
+# How to Use Service Principals to Automate Workflows in Firmware analysis
+
+Many users of the firmware analysis service may need to automate their workflow. The command `az login` creates an interactive login experience with two-factor authentication, which makes it difficult to fully automate a workflow. A service principal ([Apps & service principals in Microsoft Entra ID](/entra/identity-platform/app-objects-and-service-principals)) is a secure identity with proper permissions that authenticates to Azure on the command line without requiring two-factor authentication or an interactive log-in. This article explains how to create a service principal and use it to interact with the firmware analysis service. For more information on creating service principals, visit [Create Azure service principals using the Azure CLI](/cli/azure/azure-cli-sp-tutorial-1#create-a-service-principal). To authenticate securely, we recommend creating a service principal and authenticating using certificates. To learn more, visit [Create a service principal containing a certificate using Azure CLI](/cli/azure/azure-cli-sp-tutorial-3).
+
+1. Log in to your Azure account using the portal.
+
+2. Navigate to your subscription and assign yourself `User Access Administrator` or `Role Based Access Control Administrator` permissions, or higher, in your subscription. This gives you permission to create a service principal.
+
+3. Navigate to your command line
+
+ 1. Log in, specifying the tenant ID during login
+
+ ```azurecli
+ az login --tenant <TENANT_ID>
+ ```
+
+ 2. Switch to your subscription where you have proper permissions to create a service principal
+
+ ```azurecli
+ az account set --subscription <SUBSCRIPTION_ID>
+ ```
+
+ 3. Create a service principal, assigning it the proper permissions at the proper scope. We recommend assigning your service principal the Firmware Analysis Admin role at the subscription level.
+
+ ```azurecli
+ az ad sp create-for-rbac --name <SERVICE_PRINCIPAL_NAME> --role "Firmware Analysis Admin" --scopes /subscriptions/<SUBSCRIPTION_ID>
+ ```
+
+4. Note down your service principal's client ID (`appId`), tenant ID (`tenant`), and secret (`password`) in a safe place. You'll need these for the next step.
+
+5. Log in to your service principal
+
+ ```azurecli
+ az login --service-principal --username $clientID --password $secret --tenant $tenantID
+ ```
+
+6. Once logged in, refer to the following Quickstarts for scripts to interact with the Firmware analysis service via Azure PowerShell, Azure CLI, or Python:
+ - [Upload firmware using Azure CLI](quickstart-upload-firmware-using-azure-command-line-interface.md)
+ - [Upload firmware using Azure PowerShell](quickstart-upload-firmware-using-powershell.md)
+ - [Upload firmware using Python](quickstart-upload-firmware-using-python.md)
++
firmware-analysis Firmware Analysis Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/firmware-analysis-faq.md
Last updated 01/10/2024
# Frequently asked questions about Firmware analysis
-This article addresses frequent questions about Firmware analysis.
+This article addresses frequent questions about Firmware analysis.
[Firmware analysis](./overview-firmware-analysis.md) is a tool that analyzes firmware images and provides an understanding of security vulnerabilities in the firmware images.
Firmware analysis supports unencrypted images that contain file systems with emb
* POSIX tarball archive * UBI erase count header * UBI file system superblock node
+* UEFI file system
* xz compressed data * YAFFS filesystem, big endian * YAFFS filesystem, little endian
firmware-analysis Interpreting Extractor Paths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/interpreting-extractor-paths.md
+
+ Title: Interpreting extractor paths from SBOM view in Firmware analysis
+description: Learn how to interpret extractor paths from the SBOM view in Firmware analysis results.
+++ Last updated : 11/04/2024++
+# Overview of How Firmware Images are Structured
+
+A firmware image is a collection of files and file systems containing software that operates hardware. Often, it includes compressed files, executables, and system files. These file systems may or may not include other file systems within each file. For example, a firmware image that's a .zip file may include individual files such as executables within it but may also include other compressed file systems, such as a SquashFS file. You can visualize it like the following:
++
+*Each circle represents a file that may or may not have more file systems within it. The extractor repeatedly extracts each circle until there are no more circles (files) within it to be extracted.*
+
+If the large, encompassing oval represents the firmware image, the three circles within the large oval may represent individual file systems within this firmware image. The circles may even represent executables with embedded file systems within them.
+
+Because of the complex structure of firmware images (any given layer could be an executable or a file system with another embedded executable or file system), we need a comprehensive way to present the extraction results to accurately reflect a firmware image's structure.
+
+**How the Extractor Works**
+
+The Firmware Analysis extractor identifies and decompresses data found within firmware images. There are multiple types of extractors, one for each type of file. For a full list of file formats that Firmware Analysis supports, check [Firmware analysis Frequently Asked Questions](firmware-analysis-faq.md).
+
+For example, a `ZipArchive` extractor would extract a `ZipArchive` file. The extractor extracts the image as it sits on the disk in your system, and you need to correlate the file path to the structure of files in your build environment. When you upload your firmware images to the Firmware Analysis service, the extractor recursively extracts the image until it can't extract further. This means that the original firmware image is decompressed into individual files, and each individual file is sent to the extractor again to see whether it can be decompressed further. This repeats until the extractor can't decompress anything further.
+
+Sometimes, numerous files are concatenated into one. The extractor identifies that the one file contains multiple files, uses the appropriate extractor to extract each of them, and puts each file into its own directory. For example, if four files compressed with `GZip` were concatenated into one file, the extractor identifies four `GZip` files at that level of extraction. It puts the first `GZip` file into a directory named `GZipExtractor/1`, the second into a directory named `GZipExtractor/2`, and so on.
+
+## Interpret File Paths Created by the Extractor
+
+In the Firmware Analysis service, the SBOM view of the analysis results contains the file paths:
++
+Here is an example of a file path that might be seen in analysis results, and how to visualize the path in a file-system structure:
++
+The following file-system structure is a visual representation of the SBOM file path:
++
+In this sample file path, a `ZipArchiveExtractor` extracted a `ZipArchive` and put the contents into a directory named `ZipArchiveExtractor/1`. Again, the '1' means that this was the first, and possibly the only, `ZipArchive` file at this level of extraction. The extractor assigns a default name called `zip-root` to the `ZipArchive` file.
++
+> [!Note]
+> Usually, you can assume that a subdirectory with the suffix `-root` is created by Extractor and does not actually exist in your environment. It is just a subdirectory created by Extractor to hold the contents of that file type.
+>
+
+Within `zip-root` is the `adhoc` file:
++
+Within the `adhoc` file is the `lede-17.01.4-arc770-generic-nsim-initramfs.elf` file:
++
+Since the `lede…` file ends with `.extracted`, this means that there is something within this `.elf` file that needs to be extracted further. The next extractor used was a `CPIOArchiveExtractor`, which means that there was a `CPIOArchive` file system embedded in the `.elf` file. The contents of the `CPIOArchive` file were placed in a `cpio-root` subdirectory:
++
+and within the `CPIOArchive` file, there was a `bin` file, and that `bin` file had a file named `busybox` within it:
++
+## Locate the Path in your Environment
+
+Since the first extractor that was used was a `ZipArchiveExtractor`, this means that everything exists in a `Zip` file. Locate the `Zip` file, and within that, the full path on your environment would be `/adhoc/lede-17.01.4-arc770-generic-nsim-initramfs.elf.extracted/bin/busybox`. However, assume that you can only see into the first level of extraction: the `.elf` file. To see further, you would need your own extractor to extract beyond the first layer. This means that, tangibly, the file path to go to would be: `/adhoc/lede-17.01.4-arc770-generic-nsim-initramfs.elf`.
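For instance, if the firmware ships as a plain `.zip` file, a sketch of inspecting that first layer locally could look like the following; the archive name and the availability of `unzip` are assumptions.

```bash
# List the archive contents, extract the first layer, and locate the .elf file referenced by the SBOM path.
unzip -l firmware.zip
unzip firmware.zip -d extracted/
ls -l extracted/adhoc/lede-17.01.4-arc770-generic-nsim-initramfs.elf
```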
+
+## Multiple Extractor Paths
+
+In some cases, you may notice a `(+1)` or `(+2)` next to the file path:
++
+When you hover over the number, you'll see a pop-up that looks like this:
++
+This means that the SBOM can be found at these two executable paths.
+
firmware-analysis Quickstart Upload Firmware Using Azure Command Line Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/quickstart-upload-firmware-using-azure-command-line-interface.md
The output of this command includes a `name` property, which is your firmware ID
2. Generate a SAS URL, which you'll use in the next step to send your firmware image to Azure Storage. Replace `sampleFirmwareID` with the firmware ID that you saved from the previous step. You can store the SAS URL in a variable for easier access for future commands: ```azurecli
- $sasURL = $(az firmwareanalysis workspace generate-upload-url --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID --query "url")
+ set resourceGroup=myResourceGroup
+ set subscription=123e4567-e89b-12d3-a456-426614174000
+ set workspace=default
+ set firmwareID=sampleFirmwareID
+
+ for /f "tokens=*" %i in ('az firmwareanalysis workspace generate-upload-url --resource-group %resourceGroup% --subscription %subscription% --workspace-name %workspace% --firmware-id %firmwareID% --query "url"') do set sasURL=%i
``` 3. Upload your firmware image to Azure Storage. Replace `pathToFile` with the path to your firmware image on your local machine. ```azurecli
- az storage blob upload -f pathToFile --blob-url $sasURL
+ az storage blob upload -f pathToFile --blob-url %sasURL%
``` Here's an example workflow of how you could use these commands to create and upload a firmware image. To learn more about using variables in CLI commands, visit [How to use variables in Azure CLI commands](/cli/azure/azure-cli-variables?tabs=bash): ```azurecli
-$filePath='/path/to/image'
-$resourceGroup='myResourceGroup'
-$workspace='default'
+set filePath="/path/to/image"
+set resourceGroup="myResourceGroup"
+set workspace="default"
-$fileName='file1'
-$vendor='vendor1'
-$model='model'
-$version='test'
+set fileName="file1"
+set vendor="vendor1"
+set model="model"
+set version="test"
-$FWID=$(az firmwareanalysis firmware create --resource-group $resourceGroup --workspace-name $workspace --file-name $fileName --vendor $vendor --model $model --version $version --query "name")
+for /f "tokens=*" %i in ('az firmwareanalysis firmware create --resource-group %resourceGroup% --workspace-name %workspace% --file-name %fileName% --vendor %vendor% --model %model% --version %version% --query "name"') do set FWID=%i
-$URL=$(az firmwareanalysis workspace generate-upload-url --resource-group $resourceGroup --workspace-name $workspace --firmware-id $FWID --query "url")
+for /f "tokens=*" %i in ('az firmwareanalysis workspace generate-upload-url --resource-group %resourceGroup% --workspace-name %workspace% --firmware-id %FWID% --query "url"') do set URL=%i
-$OUTPUT=(az storage blob upload -f $filePath --blob-url $URL)
+az storage blob upload -f %filePath% --blob-url %URL%
``` ## Retrieve firmware analysis results
If you would like to automate the process of checking your analysis's status, yo
The `az resource wait` command has a `--timeout` parameter, which is the time in seconds that the analysis will end if "status" does not reach "Ready" within the timeout frame. The default timeout is 3600, which is one hour. Large images may take longer to analyze, so you can set the timeout using the `--timeout` parameter according to your needs. Here's an example of how you can use the `az resource wait` command with the `--timeout` parameter to automate checking your analysis's status, assuming that you have already created a firmware and stored the firmware ID in a variable named `$FWID`: ```azurecli
-$ID=$(az firmwareanalysis firmware show --resource-group $resourceGroup --workspace-name $workspace --firmware-id $FWID --query "id")
+set resourceGroup="myResourceGroup"
+set workspace="default"
+set FWID="yourFirmwareID"
+
+for /f "tokens=*" %i in ('az firmwareanalysis firmware show --resource-group %resourceGroup% --workspace-name %workspace% --firmware-id %FWID% --query "id"') do set ID=%i
-Write-Host (ΓÇÿSuccessfully created a firmware image with the firmware ID of ΓÇÿ + $FWID + ΓÇÿ, recognized in Azure by this resource ID: ΓÇÿ + $ID + ΓÇÿ.ΓÇÖ)
+echo Successfully created a firmware image with the firmware ID of %FWID%, recognized in Azure by this resource ID: %ID%.
-$WAIT=$(az resource wait --ids $ID --custom "properties.status=='Ready'" --timeout 10800)
+for /f "tokens=*" %i in ('az resource wait --ids %ID% --custom "properties.status=='Ready'" --timeout 10800') do set WAIT=%i
-$STATUS=$(az resource show --ids $ID --query 'properties.status')
+for /f "tokens=*" %i in ('az resource show --ids %ID% --query "properties.status"') do set STATUS=%i
-Write-Host ('Firmware analysis completed with status: ' + $STATUS)
+echo Firmware analysis completed with status: %STATUS%
``` Once you've confirmed that your analysis status is "Ready", you can run commands to pull the results.
firmware-analysis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/release-notes.md
+
+ Title: What's new in Firmware analysis
++
+description: Learn about the latest updates for Firmware analysis.
+ Last updated : 11/04/2024++
+# What's new in Firmware analysis
+
+This article lists new features and feature enhancements in the Firmware analysis service.
+
+Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## October 2024
+
+- **Support for UEFI images**: Firmware analysis service now analyzes UEFI images and identifies PKFail instances.
+
+- **New upload form with drag and drop**: You can now upload files with the new drag and drop functionality.
+
+- **Enhanced navigation menu**: Firmware analysis service in the Azure portal now has a menu blade for improved navigation.
+
+- **Race condition resolution**: Resolved a race condition that occasionally prevented the UI from fully populating the firmware list on initial load.
+
+- **Certificate encoding update**: Deprecated the encoding value for certificates to avoid confusion. Now also reporting DER certificates in embedded binaries as well as flat files.
+
+- **Paged analysis results**: Added support to fetch analysis results in pages to ensure safe retrieval of very large data sets.
+
+- **Additional bug fixes and improvements**: Multiple bug fixes for filters, styling, and accessibility.
++
+## March 2024
+
+- **Azure CLI and PowerShell commands**: Automate your workflow of analyzing firmware images by using the [Firmware Analysis Azure CLI](/cli/azure/service-page/firmware%20analysis) or the [Firmware Analysis PowerShell commands](/powershell/module/az.firmwareanalysis).
+- **User choice in resource group**: Pick your own resource group or create a new resource group during the onboarding process.
+
+ :::image type="content" source="media/whats-new-firmware-analysis/pick-resource-group.png" alt-text="Screenshot that shows resource group picker while onboarding." lightbox="media/whats-new-firmware-analysis/pick-resource-group.png":::
+
+- **New UI format with Firmware inventory**: Subtabs to organize Getting started, Subscription management, and Firmware inventory.
+
+ :::image type="content" source="media/whats-new-firmware-analysis/firmware-inventory-tab.png" alt-text="Screenshot that shows the firmware inventory in the new UI." lightbox="media/whats-new-firmware-analysis/firmware-inventory-tab.png":::
+
+- **Enhanced documentation**: Updates to [Tutorial: Analyze an IoT/OT firmware image](tutorial-analyze-firmware.md) documentation addressing the new onboarding experience.
+
+## January 2024
+
+- **PDF report generator**: Addition of a **Download as PDF** capability on the **Overview page** that generates and downloads a PDF report of the firmware analysis results.
+
+ :::image type="content" source="media/whats-new-firmware-analysis/overview-pdf-download.png" alt-text="Screenshot that shows the new Download as PDF button." lightbox="media/whats-new-firmware-analysis/overview-pdf-download.png":::
+
+- **Reduced analysis time**: Analysis time has been shortened by 30-80%, depending on image size.
+
+- **CODESYS libraries detection**: Firmware analysis now detects the use of CODESYS libraries, which Microsoft recently identified as having high-severity vulnerabilities. These vulnerabilities can be exploited for attacks such as remote code execution (RCE) or denial of service (DoS). For more information, see [Multiple high severity vulnerabilities in CODESYS V3 SDK could lead to RCE or DoS](https://www.microsoft.com/en-us/security/blog/2023/08/10/multiple-high-severity-vulnerabilities-in-codesys-v3-sdk-could-lead-to-rce-or-dos/).
+
+- **Enhanced documentation**: Addition of documentation addressing the following concepts:
+ - [Azure role-based access control for Firmware Analysis](firmware-analysis-rbac.md), which explains roles and permissions needed to upload firmware images and share analysis results, and an explanation of how the **FirmwareAnalysisRG** resource group works
+ - [Frequently asked questions](firmware-analysis-FAQ.md)
+
+- **Improved filtering for each report**: Each subtab report now includes more fine-grained filtering capabilities.
+
+- **Firmware metadata**: Addition of a collapsible tab with firmware metadata that is available on each page.
+
+ :::image type="content" source="media/whats-new-firmware-analysis/overview-firmware-metadata.png" alt-text="Screenshot that shows the new metadata tab in the Overview page." lightbox="media/whats-new-firmware-analysis/overview-firmware-metadata.png":::
+
+- **Improved version detection**: Improved version detection of the following libraries:
+ - pcre
+ - pcre2
+ - net-tools
+ - zebra
+ - dropbear
+ - bluetoothd
+ - WolfSSL
+ - sqlite3
+
+- **Added support for file systems**: Firmware analysis now supports extraction of the following file systems. For more information, see [Firmware analysis FAQs](firmware-analysis-faq.md#what-types-of-firmware-images-does-firmware-analysis-support):
+ - ISO
+ - RomFS
+ - Zstandard and non-standard LZMA implementations of SquashFS
++
+## July 2023
+
+Microsoft Defender for IoT Firmware Analysis is now available in public preview. Defender for IoT can analyze your device firmware for common weaknesses and vulnerabilities, and provide insight into your firmware security. This analysis is useful whether you build the firmware in-house or receive firmware from your supply chain.
+
+For more information, see [Firmware analysis for device builders](overview-firmware-analysis.md).
++
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Azure Front Door private link is available in the following regions:
| West US 3 | Sweden Central | | | | US Gov Arizona | | | | | US Gov Texas | | | |
+| US Gov Virginia | | | |
## Limitations
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 10/29/2024 Last updated : 11/07/2024 # Azure HDInsight release notes
For workload specific versions, see [HDInsight 5.x component versions](./hdinsig
* MSI based authentication support available for Azure blob storage.
-Azure HDInsight now supports OAuth-based authentication for accessing Azure Blob storage by leveraging Azure Active Directory (AAD) and managed identities (MSI). With this enhancement, HDInsight uses user-assigned managed identities to access Azure blob storage. For more information, see [Managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview).
+ * Azure HDInsight now supports OAuth-based authentication for accessing Azure Blob storage by leveraging Azure Active Directory (AAD) and managed identities (MSI). With this enhancement, HDInsight uses user-assigned managed identities to access Azure blob storage. For more information, see [Managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview).
+
+* HDInsight service is transitioning to use standard load balancers for all its cluster configurations because of the [deprecation announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer#main) of the Azure basic load balancer.
+
+ * This change will be rolled out in a phased manner for different regions.
+
+ > [!NOTE]
+ > If you use your own virtual network (custom VNet) during cluster creation, cluster creation won't succeed once this change is enabled. We recommend following the [migration guide to recreate the cluster](./load-balancer-migration-guidelines.md).
+ > For any assistance, contact [support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon
Azure HDInsight now supports OAuth-based authentication for accessing Azure Blob
* On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
-* HDInsight service is transitioning to use standard load balancers for all its cluster configurations because of [deprecation announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer#main) of Azure basic load balancer.
- * This change will be rolled out in a phased manner for different regions between November 07, 2024 and November 21, 2024. Watch out our release notes for more updates.
- * Retirement Notifications for [HDInsight 4.0](https://azure.microsoft.com/updates/azure-hdinsight-40-will-be-retired-on-31-march-2025-migrate-your-hdinsight-clusters-to-51) and [HDInsight 5.0](https://azure.microsoft.com/updates/hdinsight5retire/). If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
iot-dps Concepts X509 Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-x509-attestation.md
The Device Provisioning Service only accepts X.509 certificates that use either
If you use ECC methods to generate X.509 certificates for device attestation, we recommend the following elliptic curves: * nistP256
-* nistP284
+* nistP384
* nistP521 ### DPS certificate naming requirements
iot-operations Howto Deploy Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md
A cluster host:
If you deployed Azure IoT Operations to your cluster previously, uninstall those resources before continuing. For more information, see [Update Azure IoT Operations](./howto-manage-update-uninstall.md#upgrade).
-* Verify that your cluster host is configured correctly for deployment by using the [verify-host](/cli/azure/iot/ops#az-iot-ops-verify-host) command on the cluster host:
-
- ```azurecli
- az iot ops verify-host
- ```
- * (Optional) Prepare your cluster for observability before deploying Azure IoT Operations: [Configure observability](../configure-observability-monitoring/howto-configure-observability.md). ## Deploy
migrate How To Create Azure Vmware Solution Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-vmware-solution-assessment.md
Last updated 05/09/2024
This article describes how to create an Azure VMware Solution assessment for on-premises VMs in a VMware vSphere environment with Azure Migrate: Discovery and assessment.
-[Azure Migrate](migrate-services-overview.md) helps you to migrate to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
+[Azure Migrate](migrate-services-overview.md) helps you to migrate to Azure. Azure Migrate provides a centralized hub to track discovery, assessment, and migration of on-premises infrastructure, applications, and data to Azure. The hub provides Azure tools for assessment and migration, as well as partner independent software vendor (ISV) offerings.
## Before you start -- Make sure you've [created](./create-manage-projects.md) an Azure Migrate project.-- If you've already created a project, make sure you've [added](how-to-assess.md) the Azure Migrate: Discovery and assessment tool.-- To create an assessment, you need to set up an Azure Migrate appliance for [VMware vSphere](how-to-set-up-appliance-vmware.md), which discovers the on-premises servers, and sends metadata and performance data to Azure Migrate: Discovery and assessment. [Learn more](migrate-appliance.md).-- You could also [import the server metadata](./tutorial-discover-import.md) in comma-separated values (CSV) format or [import your RVTools XLSX file](./tutorial-import-vmware-using-rvtools-xlsx.md).
+- [Create](./create-manage-projects.md) an Azure Migrate project.
+- [Add](how-to-assess.md) the Azure Migrate: Discovery and assessment tool if you've already created a project.
+- Set up an Azure Migrate appliance for [VMware vSphere](how-to-set-up-appliance-vmware.md), which discovers the on-premises servers, and sends metadata and performance data to Azure Migrate: Discovery and assessment. [Learn more](migrate-appliance.md).
+- [Import](./tutorial-discover-import.md) the server metadata in comma-separated values (CSV) format or [import your RVTools XLSX file](./tutorial-import-vmware-using-rvtools-xlsx.md).
## Azure VMware Solution (AVS) Assessment overview
There are two types of sizing criteria that you can use to create Azure VMware S
**Assessment** | **Details** | **Data** | |
-**Performance-based** | For RVTools & CSV file-based assessments and performance-based assessment will consider the "In Use MiB" & "Storage In Use" respectively for storage configuration of each VM. For appliance-based assessments and performance-based assessments will consider the collected CPU & memory performance data of on-premises servers. | **Recommended Node size**: Based on CPU and memory utilization data along with node type, storage type, and FTT setting that you select for the assessment.
+**Performance-based** | For performance-based assessments created from RVTools or CSV imports, the assessment considers **In Use MiB** and **Storage In Use**, respectively, for the storage configuration of each VM. For performance-based assessments created from the appliance, the assessment considers the collected CPU and memory performance data of the on-premises servers. | **Recommended Node size**: Based on CPU and memory utilization data along with the node type, storage type, and FTT setting that you select for the assessment.
**As on-premises** | Assessments based on on-premises sizing. | **Recommended Node size**: Based on the on-premises server size along with the node type, storage type, and FTT setting that you select for the assessment. ## Run an Azure VMware Solution (AVS) assessment
-1. On the **Overview** page > **Servers, databases and web apps**, click **Assess and migrate servers**.
+1. On the **Overview** page > **Servers, databases and web apps**, select **Assess and migrate servers**.
-1. In **Azure Migrate: Discovery and assessment**, click **Assess**.
+1. In **Azure Migrate: Discovery and assessment**, select **Assess**.
1. In **Assess servers** > **Assessment type**, select **Azure VMware Solution (AVS)**.
There are two types of sizing criteria that you can use to create Azure VMware S
- If you discovered servers using the appliance, select **Servers discovered from Azure Migrate appliance**. - If you discovered servers using an imported CSV or RVTools file, select **Imported servers**.
-1. Click **Edit** to review the assessment properties.
+1. Select **Edit** to review the assessment properties.
:::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/assess-servers.png" alt-text="Page for selecting the assessment settings":::
There are two types of sizing criteria that you can use to create Azure VMware S
- In **Target location**, specify the Azure region to which you want to migrate. - Size and cost recommendations are based on the location that you specify.
- - The **Storage type** is defaulted to **vSAN**. This is the default storage type for an Azure VMware Solution private cloud.
+ - The **Storage type** defaults to **vSAN** and the **Azure NetApp Files (ANF) - Standard**, **ANF - Premium**, and **ANF - Ultra** tiers. ANF is an external storage type in AVS that is used when storage is the limiting factor for the configuration or performance of the incoming VMs. When performance metrics are provided through the Azure Migrate appliance or the CSV file, the assessment selects the tier that satisfies the performance requirements of the incoming VMs' disks. If the assessment is performed using an RVTools file or without performance metrics such as throughput and IOPS, the **ANF - Standard** tier is used by default.
- In **Reserved Instances**, specify whether you want to use reserve instances for Azure VMware Solution nodes when you migrate your VMs.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**
- - [Learn more](../azure-vmware/reserved-instance.md)
+ - If you select to use a reserved instance, you can't specify **Discount (%)**. [Learn more](../azure-vmware/reserved-instance.md).
1. In **VM Size**: - The **Node type** is defaulted to **AV36**. Azure Migrate recommends the number of nodes needed to migrate the servers to Azure VMware Solution.
- - In **FTT setting, RAID level**, select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS.
+ - In **FTT setting, RAID level**, select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS.
- In **CPU Oversubscription**, specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads.
- - In **Memory overcommit factor**, specify the ratio of memory over commit on the cluster. A value of 1 represents 100% memory use, 0.5 for example is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10 up to one decimal place.
- - In **Dedupe and compression factor**, specify the anticipated deduplication and compression factor for your workloads. Actual value can be obtained from on-premises vSAN or storage config and this may vary by workload. A value of 3 would mean 3x so for 300GB disk only 100GB storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place.
+ - In **Memory overcommit factor**, specify the ratio of memory over commit on the cluster. A value of 1 represents 100% memory use, 0.5, for example, is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10 up to one decimal place.
+ - In **Dedupe and compression factor**, specify the anticipated deduplication and compression factor for your workloads. The actual value can be obtained from your on-premises vSAN or storage configuration, and it might vary by workload. A value of 3 would mean 3x, so for a 300 GB disk only 100 GB of storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place.
1. In **Node Size**: - In **Sizing criterion**, select if you want to base the assessment on static metadata, or on performance-based data. If you use performance data: - In **Performance history**, indicate the data duration on which you want to base the assessment
There are two types of sizing criteria that you can use to create Azure VMware S
- In **Currency**, select the billing currency for your account. - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
-1. Click **Save** if you make changes.
+1. Select **Save** if you make changes.
- :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/avs-view-all.png" alt-text="Assessment properties":::
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/avs-view-all-inline.png" alt-text="Assessment properties" lightbox="./media/tutorial-assess-vmware-azure-vmware-solution/avs-view-all-expanded.png":::
-1. In **Assess Servers**, click **Next**.
+1. In **Assess Servers**, select **Next**.
1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
There are two types of sizing criteria that you can use to create Azure VMware S
:::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/assess-group.png" alt-text="Add servers to a group":::
-1. Select the appliance, and select the servers you want to add to the group. Then click **Next**.
+1. Select the appliance, and select the servers you want to add to the group. Then select **Next**.
-1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
+1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
An Azure VMware Solution (AVS) assessment describes:
- **Azure VMware Solution (AVS) readiness**: Whether the on-premises VMs are suitable for migration to Azure VMware Solution (AVS). - **Number of Azure VMware Solution nodes**: Estimated number of Azure VMware Solution nodes required to run the servers. - **Utilization across AVS nodes**: Projected CPU, memory, and storage utilization across all nodes.
- - Utilization includes up front factoring in the following cluster management overheads such as the vCenter Server, NSX Manager (large),
-NSX Edge, if HCX is deployed also the HCX Manager and IX appliance consuming ~ 44vCPU (11 CPU), 75GB of RAM and 722GB of storage before
-compression and deduplication.
+ - Utilization factors in, up front, the following cluster management overheads: the vCenter Server, NSX Manager (large), NSX Edge, and, if HCX is deployed, the HCX Manager and IX appliance. Together these consume approximately 44 vCPU (11 CPU), 75 GB of RAM, and 722 GB of storage before compression and deduplication.
- Limiting factor determines the number of hosts/nodes required to accommodate the resources. - **Monthly cost estimation**: The estimated monthly costs for all Azure VMware Solution (AVS) nodes running the on-premises VMs.
-You can click on **Sizing assumptions** to understand the assumptions that went in node sizing and resource utilization calculations. You can also edit the assessment properties, or recalculate the assessment.
+You can select **Sizing assumptions** to understand the assumptions that went into node sizing and resource utilization calculations. You can also edit the assessment properties, or recalculate the assessment.
### View an assessment
-1. In **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment**, click the number next to **Azure VMware Solution**.
+1. In **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment**, select the number next to **Azure VMware Solution**.
-1. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only):
+1. In **Assessments**, select an assessment to open it. As an example (the estimations and costs shown are examples only):
- :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/avs-assessment-summary.png" alt-text="AVS Assessment summary":::
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/avs-assessment-summary-inline.png" alt-text="Assessment Summary" lightbox="./media/tutorial-assess-vmware-azure-vmware-solution/avs-assessment-summary-expanded.png":::
-1. Review the assessment summary. You can click on **Sizing assumptions** to understand the assumptions that went in node sizing and resource utilization calculations. You can also edit the assessment properties, or recalculate the assessment.
+1. Review the assessment summary. You can select **Sizing assumptions** to understand the assumptions that went into node sizing and resource utilization calculations. You can also edit the assessment properties, or recalculate the assessment.
+
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/avs-utilization-inline.png" alt-text="Screenshot of AVS Utilization summary." lightbox="./media/tutorial-assess-vmware-azure-vmware-solution/avs-utilization-expanded.png":::
### Review Azure VMware Solution (AVS) readiness 1. In **Azure readiness**, verify whether servers are ready for migration to AVS.
-2. Review the server status:
- - **Ready for AVS**: The server can be migrated as-is to Azure (AVS) without any changes. It will start in AVS with full AVS support.
- - **Ready with conditions**: There might be some compatibility issues example internet protocol or deprecated OS in VMware and need to be remediated before migrating to Azure VMware Solution. To fix any readiness problems, follow the remediation guidance the assessment suggests.
- - **Not ready for AVS**: The VM will not start in AVS. For example, if the on-premises VMware VM has an external device attached such as a cd-rom the VMware vMotion operation will fail (if using VMware vMotion).
+2. [Review](concepts-azure-vmware-solution-assessment-calculation.md#server-properties) the server status:
+ - **Ready for AVS**: The server can be migrated as-is to Azure (AVS) without any changes. It starts in AVS with full AVS support.
+ - **Ready with conditions**: There might be some compatibility issues, for example, internet protocol or a deprecated OS in VMware, that need to be remediated before migrating to Azure VMware Solution. To fix any readiness problems, follow the remediation guidance the assessment suggests.
+ - **Not ready for AVS**: The VM won't start in AVS. For example, if the on-premises VMware VM has an external device attached, such as a CD-ROM, the VMware vMotion operation fails (if using VMware vMotion).
- **Readiness unknown**: Azure Migrate couldn't determine the readiness of the server because of insufficient metadata collected from the on-premises environment. 3. Review the Suggested tool: - **VMware HCX Advanced or Enterprise**: For VMware vSphere VMs, VMware Hybrid Cloud Extension (HCX) solution is the suggested migration tool to migrate your on-premises workload to your Azure VMware Solution (AVS) private cloud. [Learn More](../azure-vmware/configure-vmware-hcx.md).
- - **Unknown**: For servers imported via a CSV or RVTools file, the default migration tool is unknown. Though for VMware vSphere VMs, it is suggested to use the VMware Hybrid Cloud Extension (HCX) solution.
+ - **Unknown**: For servers imported via a CSV or RVTools file, the default migration tool is unknown. However, for VMware vSphere VMs, we suggest using the VMware Hybrid Cloud Extension (HCX) solution.
-4. Click on an **AVS readiness** status. You can view VM readiness details, and drill down to see VM details, including compute, storage, and network settings.
+4. Select an **AVS readiness** status. You can view VM readiness details, and drill down to see VM details, including compute, storage, and network settings.
### Review cost details
This view shows the estimated cost of running servers in Azure VMware Solution.
1. Review the monthly total costs. Costs are aggregated for all servers in the assessed group. - Cost estimates are based on the number of AVS nodes required considering the resource requirements of all the servers in total.
- - As the pricing for Azure VMware Solution is per node, the total cost does not have compute cost and storage cost distribution.
+ - As the pricing for Azure VMware Solution is per node, the total cost doesn't have compute cost and storage cost distribution.
- The cost estimation is for running the on-premises servers in AVS. AVS assessment doesn't consider PaaS or SaaS costs.+
+2. Review **Estimated AVS cost**: This cost indicates the estimated monthly AVS cost incurred for hosting the imported or discovered VMs. It includes the cost of the AVS nodes, external storage costs, and the associated networking costs (if applicable).
2. You can review monthly storage cost estimates. This view shows aggregated storage costs for the assessed group, split over different types of storage disks.
migrate How To View A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md
There are four major reports that you need to review:
- **Current on-premises vs Future**: This report covers the breakdown of the total cost of ownership by cost categories and insights on savings. - **On-premises with Azure Arc**: This report covers the breakdown of the total cost of ownership for your on-premises estate with and without Arc. - **Azure IaaS**: This report covers the Azure and on-premises footprint of the servers and workloads recommended for migrating to Azure IaaS.-- **On-premises vs AVS (Azure VMware Solution)**: If you build a business case to *Migrate to AVS*, youΓÇÖll see this report which covers the AVS and on-premises footprint of the workloads for migrating to AVS.
+- **On-premises vs AVS (Azure VMware Solution)**: If you build a business case to *Migrate to AVS*, you see this report which covers the AVS and on-premises footprint of the workloads for migrating to AVS.
- **Azure PaaS**: This report covers the Azure and on-premises footprint of the workloads recommended for migrating to Azure PaaS. ## View a business case
It covers cost components for on-premises and AVS, savings, and insights to unde
#### [AVS (Azure VMware Solution)](#tab/avs-azure) This section contains the cost estimate by recommended target (Annual cost includes Compute, Storage, Network, labor components) and savings from Hybrid benefits.-- AVS cost estimate:
- - **Estimated AVS cost**: This card includes the total cost of ownership for hosting all workloads on AVS including the AVS nodes cost (which includes storage cost), networking, and labor cost. The node cost is computed by taking the most cost optimum AVS node SKU. A default CPU over-subscription of 4:1, 100% memory overcommit and compression and deduplication factor of 1.5 is assumed to get the compute cost of AVS. You can learn more about this [here](concepts-azure-vmware-solution-assessment-calculation.md#whats-in-an-azure-vmware-solution-assessment). External storage options like ANF arenΓÇÖt a part of the business case yet.
- - **Compute and license cost**: This card shows the comparison of compute and license cost when using Azure hybrid benefit and without Azure hybrid benefit.
-- Savings and optimization:
- - **Savings with 3-year RI**: This card shows the node cost with 3-year RI.
- - **Savings with Azure Hybrid Benefit & Extended Security Updates**: This card displays the estimated maximum savings when using Azure hybrid benefit and with extended security updates over a period of one year.
+
+#### AVS cost estimate
+
+**Estimated AVS cost**: This card includes the total cost of ownership for hosting all workloads on AVS, including the AVS node cost (which includes storage cost), networking, and labor cost. The node cost is computed by using the most cost-optimal AVS node SKU. The infrastructure settings used are as follows:
+
+- The number and SKU of AVS hosts used in a business case align with the SKUs available in the given region and are optimized to use the fewest nodes required to host all VMs that are ready to be migrated.
+- Azure NetApp Files (ANF) is used when it can reduce the number of AVS hosts required. The ANF Standard tier is used when the VMs are imported using RVTools. For an Azure Migrate appliance-based business case, the ANF tier used depends on the IOPS and throughput data for the VMs.
+- CPU over-subscription of 4:1.
+- Memory overcommit of 100%.
+- Compression and deduplication factor of 1.5. You can learn more about this [here](concepts-azure-vmware-solution-assessment-calculation.md#whats-in-an-azure-vmware-solution-assessment).
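For example (purely illustrative numbers, not product output): a group of VMs that needs 200 vCPUs, 800 GB of RAM, and 30 TB of disk would, under these defaults, be sized against 200 / 4 = 50 physical cores, 800 GB of physical RAM (100% memory use means no overcommit headroom is added), and 30 / 1.5 = 20 TB of vSAN capacity before any FTT/RAID overhead.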
+
+**Compute and license cost**: This card shows the comparison of compute and license cost when using Azure hybrid benefit and without Azure hybrid benefit.
+
+#### Savings and optimization
+
+**Savings with 3-year RI**: This card shows the node cost with 3-year RI.
+
+**Savings with Azure Hybrid Benefit & Extended Security Updates**: This card displays the estimated maximum savings when using Azure hybrid benefit and with extended security updates over a period of one year.
+ #### [On-premises](#tab/avs-on-premises)
This section contains the cost estimate by recommended target (Annual cost inclu
## On-premises with Azure Arc report This section contains the cost and savings estimate by Arc-enabling your on-premises estate: -- Arc cost estimate
+#### Arc cost estimate
- **Compute and license cost**: Estimated as a sum of total server hardware acquisition cost on-premises, software cost (Windows license, SQL license, Virtualization software cost), and maintenance cost, SQL license cost is assumed to be using pay-as-you-go model via Arc-enabled SQL Server. ESU licenses for Windows Server and SQL Server are also assumed to be paid via Azure through ESUs enabled by Azure Arc. - **Security and Management Cost**: Security cost is estimated as sum of total protection cost for general servers and SQL workloads using MDC via Azure Arc and management cost is estimated as sum of total management cost for general servers. - **Storage, Network and facilities cost** : Storage cost is Cost per GB and can be customized in the assumptions. Network and facilities cost is considered same as that of current on-premises costs.-- Arc savings
+#### Arc savings
- **Estimated ESU savings**: This report includes the savings by paying ESUs monthly instead of annual licensing and deploying them seamlessly to your on-premises servers. - **IT Productivity Savings**: Azure Arc improves IT productivity by reducing the time they spend on routine activities. This report includes that and management savings.
- - **Threat protection and Savings by using MDC**: The report also includes the savings by using Microsoft Defender for Cloud to secure your on-premises server. You can mitigate threats 50% faster and improve your security posture with Microsoft Defender for cloud.
+ - **Threat protection and Savings using MDC**: The report also includes the savings from using Microsoft Defender for Cloud to secure your on-premises servers. You can mitigate threats 50% faster and improve your security posture with Microsoft Defender for Cloud.
:::image type="content" source="./media/how-to-view-a-business-case/azure-arc-inline.png" alt-text="Screenshot of comparison of on-premises servers with Arc and without Arc." lightbox="./media/how-to-view-a-business-case/azure-arc-expanded.png":::
migrate Tutorial Assess Vmware Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-assess-vmware-azure-vmware-solution.md
Before you follow this tutorial to assess your servers for migration to AVS, mak
- To discover servers using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-vmware.md). - To discover servers using an imported CSV file, [follow this tutorial](../tutorial-discover-import.md).-- To import servers using an RVTools file, [follow this tutorial](tutorial-import-vmware-using-rvtools-xlsx.md).
+- To import servers using an RVTools file, [follow this tutorial](tutorial-import-vmware-using-rvtools-xlsx.md).
## Decide which assessment to run
Run an assessment as follows:
1. In **Discovery source**: - If you discovered servers using the appliance, select **Servers discovered from Azure Migrate appliance**.
- - If you discovered servers using an imported CSV file or an RVTools XLSX file, select **Imported servers**.
+ - If you discovered servers using an imported RVTools XLSX or CSV file, select **Imported servers**.
1. Select **Edit** to review the assessment properties.
Run an assessment as follows:
**Section** | **Setting** | **Details** | | | Target settings | **Target location** | The Azure region to which you want to migrate. Size and cost recommendations are based on the location that you specify.
- Target settings | **Storage type** | Defaulted to **vSAN**. This is the default storage type for an AVS private cloud.
+ Target settings | **Storage type** | Defaults to vSAN and the ANF - Standard, ANF - Premium, and ANF - Ultra tiers. ANF is used when external storage can reduce the cost by reducing the number of nodes required. ANF - Standard is used by default when IOPS/throughput for storage isn't provided.
Target settings | **Reserved instance** | Specify whether you want to use reserve instances for Azure VMware Solution nodes when you migrate your VMs. If you decide to use a reserved instance, you can't specify **Discount (%)**. [Learn more](/azure/azure-vmware/reserved-instance) about reserved instances.
- VM size | **Node type** | Defaulted to **AV36**. Azure Migrate recommends the node needed to migrate the servers to AVS.
- VM size | **FTT setting, RAID level** | Select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS.
+ VM size | **Node type** | Defaults to using all the node types available in the given region.
+ VM size | **FTT setting, RAID level** | Select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS.
VM size | **CPU Oversubscription** | Specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads. VM size | **Memory overcommit factor** | Specify the ratio of memory over commit on the cluster. A value of 1 represents 100% memory use, 0.5, for example, is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10 up to one decimal place.
- VM size | **Dedupe and compression factor** | Specify the anticipated dedupe and compression factor for your workloads. The actual value can be obtained from on-premises vSAN or storage config and this might vary by workload. A value of 3 would mean 3x so for a 300GB disk only 100GB storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place.
+ VM size | **Dedupe and compression factor** | Specify the anticipated dedupe and compression factor for your workloads. The actual value can be obtained from on-premises vSAN or storage config and this might vary by workload. A value of 3 would mean 3x so for a 300 GB disk only 100 GB storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place.
Node size | **Sizing criteria** | Set to be *Performance-based* by default, which means Azure Migrate collects performance metrics based on which it provides recommendations. Node size | **Performance history** | Indicate the data duration on which you want to base the assessment. (Default is one day) Node size | **Percentile utilization** | Indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
Run an assessment as follows:
1. Select **Save** if you make changes.
- :::image type="content" source="../media/tutorial-assess-vmware-azure-vmware-solution/avs-view-all.png" alt-text="Assessment properties":::
+ :::image type="content" source="../media/tutorial-assess-vmware-azure-vmware-solution/avs-view-all-inline.png" alt-text="Assessment properties" lightbox="../media/tutorial-assess-vmware-azure-vmware-solution/avs-view-all-expanded.png":::
1. In **Assess Servers**, select **Next**.
To view an assessment:
1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to **Azure VMware Solution**.
-1. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only):
+1. In **Assessments**, select an assessment to open it. As an example (the estimations and costs shown are examples only):
- :::image type="content" source="../media/tutorial-assess-vmware-azure-vmware-solution/avs-assessment-summary.png" alt-text="AVS Assessment summary":::
+ :::image type="content" source="../media/tutorial-assess-vmware-azure-vmware-solution/avs-assessment-summary-inline.png" alt-text="Assessment Summary" lightbox="../media/tutorial-assess-vmware-azure-vmware-solution/avs-assessment-summary-expanded.png":::
1. Review the assessment summary.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (November 2024)
+
+- AVS assessments now support cost assessments with AV64 SKU and Azure NetApp Files (ANF) as an external storage option. [Learn more](how-to-create-azure-vmware-solution-assessment.md).
+- Support for the cost of SKUs when porting an on-premises VCF subscription to AVS.
+ ## Update (October 2024) The RVTools XLSX (preview) file import now reads storage data, when available, from vPartition and vMemory (for storage required for unreserved memory) sheets. [Learn more](vmware/tutorial-import-vmware-using-rvtools-xlsx.md#prerequisites).
networking Azure Network Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/azure-network-latency.md
Use the following tabs to view latency statistics for each region.
| North Europe | 156 | 152 | | Norway East | 172 | 171 | | Norway West | 167 | 167 |
-| Poland North | 175 | 177 |
+| Poland Central | 175 | 177 |
| Qatar Central | 264 | 254 | | South Africa North | 294 | 291 | | South Africa West | 278 | 275 |
Use the following tabs to view latency statistics for each region.
| North Europe | 112 | 104 | 122 | 132 | | Norway East | 136 | 127 | 136 | 150 | | Norway West | 129 | 118 | 133 | 145 |
-| Poland North | 141 | 129 | 141 | 155 |
+| Poland Central | 141 | 129 | 141 | 155 |
| Qatar Central | 226 | 214 | 224 | 240 | | South Africa North | 257 | 245 | 255 | 271 | | South Africa West | 241 | 228 | 239 | 255 |
Use the following tabs to view latency statistics for each region.
| North Europe | 88 | 95 | | Norway East | 108 | 110 | | Norway West | 102 | 103 |
-| Poland North | 113 | 112 |
+| Poland Central | 113 | 112 |
| Qatar Central | 196 | 199 | | South Africa North | 227 | 230 | | South Africa West | 210 | 214 |
Use the following tabs to view latency statistics for each region.
| North Europe | 187 | 99 | 106 | | Norway East | 211 | 114 | 124 | | Norway West | 199 | 110 | 115 |
-| Poland North | 210 | 123 | 129 |
+| Poland Central | 210 | 123 | 129 |
| Qatar Central | 296 | 208 | 213 | | South Africa North | 326 | 239 | 244 | | South Africa West | 311 | 223 | 228 |
Use the following tabs to view latency statistics for each region.
| North Europe | 296 | 283 | 290 | 281 | | Norway East | 309 | 300 | 303 | 290 | | Norway West | 305 | 294 | 307 | 286 |
-| Poland North | 306 | 297 | 303 | 286 |
+| Poland Central | 306 | 297 | 303 | 286 |
| Qatar Central | 184 | 182 | 182 | 170 | | South Africa North | 280 | 278 | 279 | 266 | | South Africa West | 295 | 294 | 295 | 282 |
Use the following tabs to view latency statistics for each region.
| North Europe | 252 | 257 | | Norway East | 272 | 276 | | Norway West | 264 | 269 |
-| Poland North | 277 | 279 |
+| Poland Central | 277 | 279 |
| Qatar Central | 155 | 161 | | South Africa North | 252 | 259 | | South Africa West | 268 | 274 |
Use the following tabs to view latency statistics for each region.
| North Europe | 19 | 28 | 41 | 19 | | Norway East | 32 | 41 | 39 | 25 | | Norway West | 25 | 35 | 39 | 18 |
-| Poland North | 28 | 35 | 34 | 23 |
+| Poland Central | 28 | 35 | 34 | 23 |
| Qatar Central | 124 | 114 | 126 | 132 | | South Africa North | 156 | 154 | 167 | 163 | | South Africa West | 141 | 138 | 152 | 147 |
Use the following tabs to view latency statistics for each region.
#### [Central Europe](#tab/CentralEurope/Europe)
-| Source | Germany North | Germany West Central | Poland North | Switzerland North | Switzerland West |
+| Source | Germany North | Germany West Central | Poland Central | Switzerland North | Switzerland West |
|||-|--|-|| | Australia Central | 296 | 292 | 305 | 284 | 284 | | Australia Central 2 | 288 | 284 | 297 | 278 | 279 |
Use the following tabs to view latency statistics for each region.
| North Europe | 33 | 28 | 39 | 33 | 27 | | Norway East | 23 | 27 | 31 | 30 | 33 | | Norway West | 27 | 27 | 35 | 30 | 33 |
-| Poland North | 13 | 22 | | 25 | 27 |
+| Poland Central | 13 | 22 | | 25 | 27 |
| Qatar Central | 136 | 131 | 145 | 125 | 122 | | South Africa North | 170 | 164 | 179 | 165 | 161 | | South Africa West | 154 | 151 | 163 | 148 | 145 |
Use the following tabs to view latency statistics for each region.
| North Europe | 39 | 34 | 43 | | Norway East | | 10 | 11 | | Norway West | 11 | | 17 |
-| Poland North | 29 | 34 | 26 |
+| Poland Central | 29 | 34 | 26 |
| Qatar Central | 149 | 144 | 153 | | South Africa North | 183 | 176 | 186 | | South Africa West | 167 | 160 | 170 |
Use the following tabs to view latency statistics for each region.
| North Europe | | 13 | 16 | | Norway East | 40 | 30 | 34 | | Norway West | 35 | 25 | 27 |
-| Poland North | 41 | 31 | 35 |
+| Poland Central | 41 | 31 | 35 |
| Qatar Central | 140 | 130 | 133 | | South Africa North | 172 | 162 | 165 | | South Africa West | 158 | 147 | 151 |
Use the following tabs to view latency statistics for each region.
| North Europe | 261 | 254 | | Norway East | 271 | 265 | | Norway West | 267 | 260 |
-| Poland North | 267 | 261 |
+| Poland Central | 267 | 261 |
| Qatar Central | 152 | 145 | | South Africa North | 250 | 241 | | South Africa West | 265 | 257 |
Use the following tabs to view latency statistics for each region.
| North Europe | 146 | 160 | 164 | | Norway East | 154 | 170 | 176 | | Norway West | 148 | 162 | 172 |
-| Poland North | 151 | 165 | 173 |
+| Poland Central | 151 | 165 | 173 |
| Qatar Central | 41 | 55 | 65 | | South Africa North | 138 | 153 | 158 | | South Africa West | 153 | 168 | 174 |
Use the following tabs to view latency statistics for each region.
| North Europe | 229 | 198 | | Norway East | 240 | 208 | | Norway West | 236 | 204 |
-| Poland North | 237 | 205 |
+| Poland Central | 237 | 205 |
| Qatar Central | 121 | 90 | | South Africa North | 218 | 188 | | South Africa West | 233 | 203 |
Use the following tabs to view latency statistics for each region.
| North Europe | 70 | 139 | 127 | 129 | | Norway East | 83 | 150 | 138 | 140 | | Norway West | 81 | 146 | 134 | 136 |
-| Poland North | 69 | 146 | 134 | 136 |
+| Poland Central | 69 | 146 | 134 | 136 |
| Qatar Central | 153 | | 16 | 17 | | South Africa North | 195 | 117 | 105 | 102 | | South Africa West | 183 | 132 | 121 | 118 |
Use the following tabs to view latency statistics for each region.
| North Europe | 172 | 155 | | Norway East | 184 | 167 | | Norway West | 177 | 160 |
-| Poland North | 180 | 163 |
+| Poland Central | 180 | 163 |
| Qatar Central | 118 | 131 | | South Africa North | | 20 | | South Africa West | 21 | |
openshift Confidential Containers Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/confidential-containers-deploy.md
Last updated 11/04/2024
-# Deploy Confidential Containers in an Azure Red Hat OpenShift (ARO) cluster
+# Deploy Confidential Containers in an Azure Red Hat OpenShift (ARO) cluster (Preview)
This article describes the steps required to deploy Confidential Containers for an ARO cluster. This process involves two main parts and multiple steps:
openshift Confidential Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/confidential-containers-overview.md
Last updated 11/04/2024
-# Confidential Containers with Azure Red Hat OpenShift
+# Confidential Containers with Azure Red Hat OpenShift (Preview)
Confidential Containers offer a robust solution to protect sensitive data within cloud environments. By using hardware-based trusted execution environments (TEEs), Confidential Containers provide a secure enclave within the host system, isolating applications and their data from potential threats. This isolation ensures that even if the host system is compromised, the confidential data remains protected.
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
Note the following important changes to make before you upgrade to any of the av
| Kubernetes Version | Version Bundle | Components | OS Components | Breaking Changes | Notes | | - | -- | -- | -- | -- | | | 1.30.3 | 1 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 3.0.20240824](https://github.com/microsoft/azurelinux/releases/tag/3.0.20240824-3.0) | No breaking changes | |
-| 1.29.7 | 3 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | Extended Available patches 1.29.4-1 |
+| 1.29.7 | 3 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | Extended Available patches 1.29.4-1 |
| 1.29.7 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
-| 1.29.6 | 4 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.29.6 | 4 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
| 1.29.6 | 3 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
-| 1.28.12 | 3 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | Extended Available patches 1.28.9-2, 1.28.0-6 |
+| 1.28.12 | 3 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | Extended Available patches 1.28.9-2, 1.28.0-6 |
| 1.28.12 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
-| 1.28.11 | 4 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
+| 1.28.11 | 4 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
| 1.28.11 | 3 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240731](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240731-2.0) | No breaking changes | |
-| 1.27.13 | 3 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | Extended Available patches 1.27.3-7, 1.27.1-8 |
+| 1.27.13 | 3 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | Extended Available patches 1.27.3-7, 1.27.1-8 |
| 1.27.13 | 2 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
-| 1.27.9 | 5 | Calico v3.28.2<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
+| 1.27.9 | 5 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.7.0<br>Csi-nfs v4.9.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
| 1.27.9 | 4 | Calico v3.27.4<br>metrics-server v0.7.2<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | | | 1.26.12 | 4 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | | | 1.26.12 | 3 | Calico v3.27.4<br>metrics-server v0.7.1<br>Multus v4.0.2<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.15<br>sriov-dp v3.5.1<br>Csi-nfs v4.8.0<br>csi-volume v0.1.0<br>metallb v0.14.5-3 | [Azure Linux 2.0.20240425](https://github.com/microsoft/azurelinux/releases/tag/2.0.20240425-2.0) | No breaking changes | |
operator-service-manager Best Practices Onboard Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md
As of today, if dependsOnProfile provided in the NFDV is invalid, the NF operati
} } ```
+## injectArtifactStoreDetails considerations
+In some cases, third-party Helm charts may not be fully compliant with AOSM requirements for registryURL. In such cases, the injectArtifactStoreDetails feature can be used to avoid making changes to Helm packages.
+
+### How to enable
+To use injectArtifactStoreDetails, set it to true under installOptions (and upgradeOptions) in the NF resource roleOverrideValues section, then use whatever registryURL value is needed to keep the registry URL valid. See the following example with the injectArtifactStoreDetails parameter enabled.
+
+```bicep
+resource networkFunction 'Microsoft.HybridNetwork/networkFunctions@2023-09-01' = {
+ name: nfName
+ location: location
+ properties: {
+ nfviType: 'AzureArcKubernetes'
+ networkFunctionDefinitionVersionResourceReference: {
+ id: nfdvId
+ idType: 'Open'
+ }
+ allowSoftwareUpdate: true
+ nfviId: nfviId
+ deploymentValues: deploymentValues
+ configurationType: 'Open'
+ roleOverrideValues: [
+ // Use inject artifact store details feature on test app 1
+ '{"name":"testapp1", "deployParametersMappingRuleProfile":{"helmMappingRuleProfile":{"options":{"installOptions":{"atomic":"false","wait":"false","timeout":"60","injectArtifactStoreDetails":"true"},"upgradeOptions": {"atomic": "false", "wait": "true", "timeout": "100", "injectArtifactStoreDetails": "true"}}}}}'
+ ]
+ }
+}
+```
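In this example, each entry in roleOverrideValues is a JSON string keyed by the application name (`testapp1`), and injectArtifactStoreDetails is set to true under both installOptions and upgradeOptions, so the behavior applies to install and upgrade operations alike.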
security Encryption Customer Managed Keys Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-customer-managed-keys-support.md
+
+ Title: Services that support customer managed keys (CMKs) in Azure Key Vault and Azure Managed HSM
+description: Services that support customer managed keys (CMKs) in Azure Key Vault and Azure Managed HSM
++ Last updated : 10/25/2024+++++
+# Services that support customer managed keys (CMKs) in Azure Key Vault and Azure Managed HSM
+
+The following services support server-side encryption with customer managed keys in [Azure Key Vault](/azure/key-vault/) and [Azure Managed HSM](/azure/key-vault/managed-hsm/). For implementation details, see the service-specific documentation or the service's [Microsoft Cloud Security Benchmark: security baseline](/security/benchmark/azure/security-baselines-overview) (section DP-5).
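+
+For example, many services follow a broadly similar pattern: create a key in Key Vault (or Managed HSM), give the service's managed identity access to that key, and point the service's encryption settings at it. The following Azure CLI sketch illustrates that pattern for Azure Storage. The resource names (`myRG`, `mykv`, `mykey`, `mystorage`) are placeholders, and the commands assume the vault uses the access policy permission model (vaults using Azure RBAC need role assignments instead). See the service-specific documentation linked in the tables for the exact steps each service requires.
+
+```bash
+# Create a key vault that uses the access policy model, with purge protection
+# (typically required for customer-managed key scenarios), and add an RSA key.
+az keyvault create --name mykv --resource-group myRG --location eastus \
+  --enable-purge-protection true --enable-rbac-authorization false
+az keyvault key create --vault-name mykv --name mykey --kty RSA --size 2048
+
+# Give the storage account a system-assigned identity and let it use the key.
+az storage account update --name mystorage --resource-group myRG --assign-identity
+principalId=$(az storage account show --name mystorage --resource-group myRG --query identity.principalId -o tsv)
+az keyvault set-policy --name mykv --object-id "$principalId" --key-permissions get wrapKey unwrapKey
+
+# Point the storage account's encryption at the customer-managed key.
+az storage account update --name mystorage --resource-group myRG \
+  --encryption-key-source Microsoft.Keyvault \
+  --encryption-key-vault "https://mykv.vault.azure.net" \
+  --encryption-key-name mykey
+```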
+
+## AI and machine learning
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure AI Search](/azure/search/) | Yes | | [Configure customer-managed keys for data encryption in Azure AI Search](/azure/search/search-security-manage-encryption-keys) |
+| [Azure AI services](/azure/ai-services/) | Yes | Yes | [Customer-managed keys for encryption](/azure/ai-services/encryption/cognitive-services-encryption-keys-portal) |
+| [Azure AI Studio](/azure/ai-studio) | Yes | | [Encryption of data at rest in Azure AI services](/azure/ai-studio/concepts/encryption-keys-portal) |
+| [Azure Bot Service](/azure/bot-service/) | Yes | | [Encryption of bot data in Azure Bot Service](/azure/bot-service/bot-service-encryption) |
+| [Azure Health Bot](/azure/health-bot/) | Yes | | [Configure customer-managed keys (CMK) for Azure Health Bot](/azure/health-bot/cmk) |
+| [Azure Machine Learning](/azure/machine-learning/) | Yes | | [Customer-managed keys for workspace encryption in Azure Machine Learning](/azure/machine-learning/concept-customer-managed-keys) |
+| [Azure OpenAI](/azure/ai-services/openai/) | Yes | Yes | [Azure OpenAI Service encryption of data at rest](/azure/ai-services/openai/encrypt-data-at-rest) |
+| [Content Moderator](/azure/ai-services/content-moderator/) | Yes | Yes | [Content Moderator encryption of data at rest](/azure/ai-services/content-moderator/encrypt-data-at-rest) |
+| [Dataverse](/powerapps/maker/data-platform/) | Yes | Yes | [Customer-managed keys in Dataverse](/power-platform/admin/customer-managed-key) |
+| [Dynamics 365](/dynamics365/) | Yes | Yes | [Customer-managed keys for encryption](/dynamics365/fin-ops-core/dev-itpro/sysadmin/customer-managed-keys) |
+| [Face](/azure/ai-services/computer-vision/overview-identity) | Yes | Yes | [Face service encryption of data at rest](/azure/ai-services/computer-vision/identity-encrypt-data-at-rest) |
+| [Language Understanding](/azure/ai-services/luis/what-is-luis) | Yes | Yes | [Customer-managed keys with Azure Key Vault](/azure/ai-services/luis/encrypt-data-at-rest) |
+| [Personalizer](/azure/ai-services/personalizer/) | Yes | Yes | [Encryption of data at rest in Personalizer](/azure/ai-services/personalizer/encrypt-data-at-rest) |
+| [Power Platform](/power-platform/) | Yes | Yes | [Customer-managed keys in Power Platform](/power-platform/admin/customer-managed-key) |
+| [QnA Maker](/azure/ai-services/qnamaker/) | Yes | Yes | [QnA Maker encryption of data at rest](/azure/ai-services/qnamaker/encrypt-data-at-rest) |
+| [Speech Services](/azure/ai-services/speech-service/) | Yes | Yes | [Speech service encryption of data at rest](/azure/ai-services/speech-service/speech-encryption-of-data-at-rest) |
+| [Translator Text](/azure/ai-services/translator/) | Yes | Yes | [Translator encryption of data at rest](/azure/ai-services/translator/encrypt-data-at-rest) |
+
+## Analytics
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure Data Explorer](/azure/data-explorer/) | Yes | | [Configure customer-managed keys (CMK) in Azure Data Explorer](/azure/data-explorer/customer-managed-keys-portal) |
+| [Azure Data Factory](/azure/data-factory/) | Yes | Yes | [Encryption with customer-managed keys for Azure Data Factory](/azure/data-factory/enable-customer-managed-key) |
+| [Azure Data Lake Store](/azure/data-lake-store/) | Yes (RSA 2048-bit) | | |
+| [Azure Data Manager for Energy](/azure/energy-data-services/) | Yes | | [Manage data security and encryption](/azure/energy-data-services/how-to-manage-data-security-and-encryption) |
+| [Azure Databricks](/azure/databricks/) | Yes | Yes | [Customer-managed keys for managed services](/azure/databricks/security/keys/customer-managed-key-managed-services-azure) |
+| [Azure HDInsight](/azure/hdinsight/) | Yes | | [Azure HDInsight double encryption for data at rest](/azure/hdinsight/disk-encryption) |
+| [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview) | Yes | | [Customer-managed keys in Azure Monitor](/azure/azure-monitor/logs/customer-managed-keys) |
+| [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) | Yes | Yes | [Customer-managed keys in Azure Monitor](/azure/azure-monitor/logs/customer-managed-keys) |
+| [Azure Stream Analytics](/azure/stream-analytics/) | Yes\* | Yes | [Data protection in Azure Stream Analytics](/azure/stream-analytics/data-protection) |
+| [Azure Synapse Analytics](/azure/synapse-analytics/) | Yes (RSA 3072-bit) | Yes | [Configure encryption at rest with customer-managed keys](/azure/synapse-analytics/security/workspaces-encryption) |
+| [Microsoft Fabric](/fabric) | Yes | | [Customer-managed key (CMK) encryption and Microsoft Fabric](/fabric/security/security-scenario#customer-managed-key-cmk-encryption-and-microsoft-fabric) |
+| [Power BI Embedded](/power-bi) | Yes | | [Using your own key for Power BI encryption (Preview)](/power-bi/enterprise/service-encryption-byok) |
+
+## Containers
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure Kubernetes Service](/azure/aks/) | Yes | Yes | [Enable host encryption on your AKS cluster nodes](/azure/aks/enable-host-encryption) |
+| [Azure Red Hat OpenShift](/azure/openshift/) | Yes | | [Bring your own keys (BYOK) with Azure Red Hat OpenShift](/azure/openshift/howto-byok) |
+| [Container Instances](/azure/container-instances/) | Yes | | [Encrypt data with a customer-managed key](/azure/container-instances/container-instances-encrypt-data#encrypt-data-with-a-customer-managed-key) |
+| [Container Registry](/azure/container-registry/) | Yes | | [Encrypt container images with a customer-managed key](/azure/container-registry/container-registry-customer-managed-keys) |
+
+## Compute
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [App Service](/azure/app-service/) | Yes\* | Yes | [Configure customer-managed keys for App Service](/azure/app-service/configure-encrypt-at-rest-using-cmk) |
+| [Azure Functions](/azure/azure-functions/) | Yes\* | Yes | [Configure customer-managed keys for Azure Functions](/azure/azure-functions/configure-encrypt-at-rest-using-cmk) |
+| [Azure HPC Cache](/azure/hpc-cache/) | Yes | | [Use customer-managed keys with HPC Cache](/azure/hpc-cache/customer-keys) |
+| [Azure Managed Applications](/azure/azure-resource-manager/managed-applications/) | Yes\* | Yes | [Azure managed applications overview](/azure/azure-resource-manager/managed-applications/overview) |
+| [Azure portal](/azure/azure-portal/) | Yes\* | Yes | [Security in the Azure portal](/azure/security/fundamentals/overview) |
+| [Azure VMware Solution](/azure/azure-vmware/) | Yes | Yes | [Configure customer-managed keys in Azure VMware Solution](/azure/azure-vmware/configure-customer-managed-keys) |
+| [Batch](/azure/batch/) | Yes | | [Use customer-managed keys with Batch accounts](/azure/batch/batch-customer-managed-key) |
+| [SAP HANA](/azure/sap/large-instances/hana-overview-architecture) | Yes | | |
+| [Site Recovery](/azure/site-recovery/) | Yes | | [Enable replication with customer-managed keys](/azure/site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks) |
+| [Virtual Machine Scale Set](/azure/virtual-machine-scale-sets/) | Yes | Yes | [Encrypt virtual machine scale sets using the portal](/azure/virtual-machines/linux/disk-encryption-key-vault) |
+| [Virtual Machines](/azure/virtual-machines/) | Yes | Yes | [Azure Disk Encryption for Windows and Linux VMs](/azure/virtual-machines/disk-encryption#customer-managed-keys) |
+
+## Databases
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure Cosmos DB](/azure/cosmos-db/) | Yes | Yes | [Configure customer-managed keys using Azure Key Vault](/azure/cosmos-db/how-to-setup-cmk), [Configure customer-managed keys using Azure Key Vault Managed HSM](/azure/cosmos-db/how-to-setup-customer-managed-keys-mhsm) |
+| [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/) | Yes | | [Data encryption with customer-managed keys in Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/concepts-customer-managed-key) |
+| [Azure Database for MySQL - Single Server](/azure/mysql/single-server/) | Yes | | [Azure Database for MySQL data encryption with a customer-managed key](/previous-versions/azure/mysql/single-server/concepts-data-encryption-mysql) |
+| [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/) | Yes | | [Data encryption with customer-managed keys in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-data-encryption) |
+| [Azure Database for PostgreSQL - Single Server](/azure/postgresql/) | Yes | Yes | [Data encryption with customer-managed keys in Azure Database for PostgreSQL - Single Server](/previous-versions/azure/postgresql/single-server/concepts-data-encryption-postgresql) |
+| [Azure Managed Instance for Apache Cassandra](/azure/managed-instance-apache-cassandra/) | Yes | | [Configure customer-managed keys for encryption](/azure/managed-instance-apache-cassandra/customer-managed-keys) |
+| [Azure SQL Database](/azure/azure-sql/database/) | Yes (RSA 3072-bit) | Yes | [Bring your own key (BYOK) support for Transparent Data Encryption (TDE)](/azure/azure-sql/database/transparent-data-encryption-byok-overview) |
+| [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/) | Yes (RSA 3072-bit) | Yes | [Bring your own key (BYOK) support for Transparent Data Encryption (TDE)](/azure/azure-sql/database/transparent-data-encryption-byok-overview) |
+| [SQL Server on Azure VM](/azure/azure-sql/virtual-machines/) | Yes | | [Configure Azure Key Vault integration for SQL Server on Azure VMs ](/azure/azure-sql/virtual-machines/windows/azure-key-vault-integration-configure) |
+| [SQL Server on Virtual Machines](/azure/virtual-machines/windows/sql/) | Yes | | [Transparent data encryption for SQL Server on Azure VM](/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-security#transparent-data-encryption) |
+| [SQL Server Stretch Database](/azure/sql-server-stretch-database/) | Yes (RSA 3072-bit) | | |
+| [Table Storage](/azure/storage/tables/) | Yes | | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+
+## Hybrid + multicloud
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure Stack Edge](/azure/databox-online/) | Yes | | [Protect data at rest on Azure Stack Edge Pro R](/azure/databox-online/azure-stack-edge-pro-r-security#protect-data-at-rest) |
+
+## Integration
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure Health Data Services](/azure/healthcare-apis/) | Yes | | [Configure customer-managed keys for Azure Health Data Services DICOM](/azure/healthcare-apis/dicom/configure-customer-managed-keys), [Configure customer-managed keys for Azure Health Data Services FHIR](/azure/healthcare-apis/fhir/configure-customer-managed-keys) |
+| [Event Hubs](/azure/event-hubs/) | Yes | | [Configure customer-managed keys for encryption](/azure/event-hubs/configure-customer-managed-key) |
+| [Logic Apps](/azure/logic-apps/) | Yes | | |
+| [Service Bus](/azure/service-bus-messaging/) | Yes | | [Configure customer-managed keys for encryption](/azure/service-bus-messaging/configure-customer-managed-key) |
+
+## IoT services
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Device Update for IoT Hub](/azure/iot-hub-device-update/) | Yes | Yes | [Data encryption for Device Update for IoT Hub](/azure/iot-hub-device-update/device-update-data-encryption) |
+| [IoT Hub Device Provisioning](/azure/iot-dps/) | Yes | | |
+
+## Management and governance
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [App Configuration](/azure/azure-app-configuration/) | Yes | | [Use customer-managed keys to encrypt data](/azure/azure-app-configuration/concept-customer-managed-keys) |
+| [Automation](/azure/automation/) | Yes | | [Encryption of automation assets](/azure/automation/automation-secure-asset-encryption) |
+| [Azure Migrate](/azure/migrate/) | Yes | | [Tutorial: Migrate VMware VMs to Azure](/azure/migrate/tutorial-migrate-vmware) |
+| [Azure Monitor](/azure/azure-monitor) | Yes | | [Customer-managed keys in Azure Monitor](/azure/azure-monitor/logs/customer-managed-keys) |
+
+## Media
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure Communication Services](/azure/communication-services/) | Yes | | [Data encryption in Azure Communication Services](/azure/communications-gateway/security#data-retention-data-security-and-encryption-at-rest) |
+| [Media Services](/azure/media-services/) | Yes | | [Use your own encryption keys with Azure Media Services](/azure/media-services/latest/concept-use-customer-managed-keys-byok) |
+
+## Security
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Azure Information Protection](/azure/information-protection/) | Yes | | [How are the Azure Rights Management cryptographic keys managed and secured?](/azure/information-protection/how-does-it-work#how-the-azure-rms-cryptographic-keys-are-stored-and-secured) |
+| [Microsoft Defender for Cloud](/azure/defender-for-cloud/) | Yes | | [Customer-managed keys in Azure Monitor](/azure/azure-monitor/logs/customer-managed-keys) |
+| [Microsoft Defender for IoT](/azure/defender-for-iot/) | Yes | | |
+| [Microsoft Sentinel](/azure/sentinel/) | Yes | Yes | [Encryption at rest in Microsoft Sentinel](/azure/sentinel/customer-managed-keys) |
+
+## Storage
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Archive Storage](/azure/storage/blobs/archive-blob) | Yes | | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+| [Azure Backup](/azure/backup/) | Yes | Yes | [Encrypt backup data using customer-managed keys](/azure/backup/encryption-at-rest-with-cmk) |
+| [Azure Cache for Redis](/azure/azure-cache-for-redis/) | Yes\*\* | Yes | [Configure disk encryption for Azure Cache for Redis instances using customer managed keys](/azure/azure-cache-for-redis/cache-how-to-encryption) |
+| [Azure Data Box](/azure/databox/) | Yes | | [Use a customer-managed key to secure your Data Box](/azure/databox/data-box-customer-managed-encryption-key-portal) |
+| [Azure Managed Lustre](/azure/azure-managed-lustre/) | Yes | | [Use customer-managed encryption keys with Azure Managed Lustre](/azure/azure-managed-lustre/customer-managed-encryption-keys) |
+| [Azure NetApp Files](/azure/azure-netapp-files/) | Yes | Yes | [Configure customer-managed keys for Azure NetApp Files volume encryption](/azure/azure-netapp-files/configure-customer-managed-keys?tabs=azure-portal) |
+| [Blob Storage](/azure/storage/blobs/) | Yes | Yes | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+| [Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction/) | Yes | Yes | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+| [Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | [Azure Disk Encryption for Windows and Linux VMs](/azure/virtual-machines/disk-encryption#customer-managed-keys) |
+| [File Storage](/azure/storage/files/) | Yes | Yes | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+| [File Sync](/azure/storage/file-sync/file-sync-introduction) | Yes | Yes | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+| [Managed Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | [Azure Disk Encryption for Windows and Linux VMs](/azure/virtual-machines/disk-encryption#customer-managed-keys) |
+| [Premium Blob Storage](/azure/storage/blobs/) | Yes | Yes | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+| [Queue Storage](/azure/storage/queues/) | Yes | Yes | [Customer-managed keys for Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview) |
+| [StorSimple](/azure/storsimple/) | Yes | | [Azure StorSimple security features](/azure/storsimple/storsimple-security#data-encryption) |
+| [Ultra Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | [Azure Disk Encryption for Windows and Linux VMs](/azure/virtual-machines/disk-encryption#customer-managed-keys) |
+
+## Other
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+|---|---|---|---|
+| [Universal Print](/universal-print/) | Yes | | [Data encryption in Universal Print](/universal-print/fundamentals/universal-print-encryption) |
+
+## Caveats
+
+\* This service supports storing data in your own Key Vault, Storage Account, or other data persisting service that already supports Server-Side Encryption with Customer-Managed Key.
+
+\*\* Any transient data stored temporarily on disk, such as pagefiles or swap files, is encrypted with a Microsoft key (all tiers) or a customer-managed key (using the Enterprise and Enterprise Flash tiers). For more information, see [Configure disk encryption in Azure Cache for Redis](../../azure-cache-for-redis/cache-how-to-encryption.md).
+
+## Related content
+
+- [Data encryption models in Microsoft Azure](encryption-models.md)
+- [How encryption is used in Azure](encryption-overview.md)
+- [Double encryption](double-encryption.md)
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
There are three scenarios for server-side encryption:
- Customer controls keys on customer-controlled hardware - Full cloud functionality
-Server-side Encryption models refer to encryption that is performed by the Azure service. In that model, the Resource Provider performs the encrypt and decrypt operations. For example, Azure Storage might receive data in plain text operations and will perform the encryption and decryption internally. The Resource Provider might use encryption keys that are managed by Microsoft or by the customer depending on the provided configuration.
+Server-side encryption models refer to encryption that is performed by the Azure service. In that model, the Resource Provider performs the encrypt and decrypt operations. For example, Azure Storage might receive data in plain text operations and perform the encryption and decryption internally. The Resource Provider might use encryption keys that are managed by Microsoft or by the customer, depending on the provided configuration.
:::image type="content" source="media/encryption-models/azure-security-encryption-atrest-fig3.png" alt-text="Screenshot of Server." lightbox="media/encryption-models/azure-security-encryption-atrest-fig3.png":::
-Each of the server-side encryption at rest models implies distinctive characteristics of key management. This includes where and how encryption keys are created, and stored as well as the access models and the key rotation procedures.
+Each of the server-side encryption at rest models implies distinctive characteristics of key management, including where and how encryption keys are created and stored, as well as the access models and the key rotation procedures.
-For client-side encryption, consider the following:
+For client-side encryption, consider:
- Azure services cannot see decrypted data - Customers manage and store keys on-premises (or in other secure stores). Keys are not available to Azure services
The supported encryption models in Azure split into two main groups: "Client Enc
## Client encryption model
-Client Encryption model refers to encryption that is performed outside of the Resource Provider or Azure by the service or calling application. The encryption can be performed by the service application in Azure, or by an application running in the customer data center. In either case, when leveraging this encryption model, the Azure Resource Provider receives an encrypted blob of data without the ability to decrypt the data in any way or have access to the encryption keys. In this model, the key management is done by the calling service/application and is opaque to the Azure service.
+Client Encryption model refers to encryption that is performed outside of the Resource Provider or Azure by the service or calling application. The encryption can be performed by the service application in Azure, or by an application running in the customer data center. In either case, when using this encryption model, the Azure Resource Provider receives an encrypted blob of data without the ability to decrypt the data in any way or have access to the encryption keys. In this model, the key management is done by the calling service/application and is opaque to the Azure service.
:::image type="content" source="media/encryption-models/azure-security-encryption-atrest-fig2.png" alt-text="Screenshot of Client."::: ## Server-side encryption using service-managed keys
-For many customers, the essential requirement is to ensure that the data is encrypted whenever it is at rest. Server-side encryption using service-managed Keys enables this model by allowing customers to mark the specific resource (Storage Account, SQL DB, etc.) for encryption and leaving all key management aspects such as key issuance, rotation, and backup to Microsoft. Most Azure services that support encryption at rest typically support this model of offloading the management of the encryption keys to Azure. The Azure resource provider creates the keys, places them in secure storage, and retrieves them when needed. This means that the service has full access to the keys and the service has full control over the credential lifecycle management.
+For many customers, the essential requirement is to ensure that the data is encrypted whenever it is at rest. Server-side encryption using service-managed keys enables this model by allowing customers to mark the specific resource (Storage Account, SQL DB, etc.) for encryption and leaving all key management aspects such as key issuance, rotation, and backup to Microsoft. Most Azure services that support encryption at rest typically support this model of offloading the management of the encryption keys to Azure. The Azure resource provider creates the keys, places them in secure storage, and retrieves them when needed. The service has full access to the keys and full control over the credential lifecycle management.
:::image type="content" source="media/encryption-models/azure-security-encryption-atrest-fig4.png" alt-text="Screenshot of managed.":::
-Server-side encryption using service-managed keys therefore quickly addresses the need to have encryption at rest with low overhead to the customer. When available a customer typically opens the Azure portal for the target subscription and resource provider and checks a box indicating, they would like the data to be encrypted. In some Resource Managers server-side encryption with service-managed keys is on by default.
+Server-side encryption using service-managed keys therefore quickly addresses the need to have encryption at rest with low overhead to the customer. When available, a customer typically opens the Azure portal for the target subscription and resource provider and checks a box indicating that they would like the data to be encrypted. In some Resource Managers, server-side encryption with service-managed keys is on by default.
Server-side encryption with Microsoft-managed keys does imply the service has full access to store and manage the keys. While some customers might want to manage the keys because they feel they gain greater security, the cost and risk associated with a custom key storage solution should be considered when evaluating this model. In many cases, an organization might determine that resource constraints or risks of an on-premises solution might be greater than the risk of cloud management of the encryption at rest keys. However, this model might not be sufficient for organizations that have requirements to control the creation or lifecycle of the encryption keys or to have different personnel manage a service's encryption keys than those managing the service (that is, segregation of key management from the overall management model for the service). ### Key access
-When Server-side encryption with service-managed keys is used, the key creation, storage, and service access are all managed by the service. Typically, the foundational Azure resource providers will store the Data Encryption Keys in a store that is close to the data and quickly available and accessible while the Key Encryption Keys are stored in a secure internal store.
+When Server-side encryption with service-managed keys is used, the key creation, storage, and service access are all managed by the service. Typically, the foundational Azure resource providers store the Data Encryption Keys in a store that is close to the data and quickly available and accessible while the Key Encryption Keys are stored in a secure internal store.
**Advantages**
When Server-side encryption with service-managed keys is used, the key creation,
- No customer control over the encryption keys (key specification, lifecycle, revocation, etc.) - No ability to segregate key management from overall management model for the service
-## Server-side encryption using customer-managed keys in Azure Key Vault
+## Server-side encryption using customer-managed keys in Azure Key Vault and Azure Managed HSM
For scenarios where the requirement is to encrypt the data at rest and control the encryption keys, customers can use server-side encryption using customer-managed keys in Key Vault. Some services might store only the root Key Encryption Key in Azure Key Vault and store the encrypted Data Encryption Key in an internal location closer to the data. In that scenario, customers can bring their own keys to Key Vault (BYOK, or Bring Your Own Key), or generate new ones, and use them to encrypt the desired resources. While the Resource Provider performs the encryption and decryption operations, it uses the configured key encryption key as the root key for all encryption operations. Loss of key encryption keys means loss of data. For this reason, keys should not be deleted. Keys should be backed up whenever created or rotated. [Soft-Delete and purge protection](/azure/key-vault/general/soft-delete-overview) must be enabled on any vault storing key encryption keys to protect against accidental or malicious cryptographic erasure. Instead of deleting a key, it is recommended to set enabled to false on the key encryption key. Use access controls to revoke access to individual users or services in [Azure Key Vault](/azure/key-vault/general/security-features#access-model-overview) or [Managed HSM](/azure/key-vault/managed-hsm/secure-your-managed-hsm).
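+
+Because loss or premature deletion of a key encryption key means loss of data, the protections described above can be applied directly to the vault and the key. A minimal Azure CLI sketch with placeholder names (`myRG`, `mykv`, `mykey`):
+
+```bash
+# Purge protection guards against accidental or malicious cryptographic erasure;
+# once enabled on a vault it can't be turned off.
+az keyvault update --name mykv --resource-group myRG --enable-purge-protection true
+
+# Rather than deleting a key encryption key, disable it to revoke its use
+# while keeping it recoverable.
+az keyvault key set-attributes --vault-name mykv --name mykey --enabled false
+```
+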
+> [!NOTE]
+> For a list of services that support customer-managed keys in Azure Key Vault and Azure Managed HSM, see [Services that support CMKs in Azure Key Vault and Azure Managed HSM](encryption-customer-managed-keys-support.md).
+ ### Key Access
-The server-side encryption model with customer-managed keys in Azure Key Vault involves the service accessing the keys to encrypt and decrypt as needed. Encryption at rest keys are made accessible to a service through an access control policy. This policy grants the service identity access to receive the key. An Azure service running on behalf of an associated subscription can be configured with an identity in that subscription. The service can perform Microsoft Entra authentication and receive an authentication token identifying itself as that service acting on behalf of the subscription. That token can then be presented to Key Vault to obtain a key it has been given access to.
+The server-side encryption model with customer-managed keys in Azure Key Vault involves the service accessing the keys to encrypt and decrypt as needed. Encryption-at-rest keys are made accessible to a service through an access control policy. This policy grants the service identity access to receive the key. An Azure service running on behalf of an associated subscription can be configured with an identity in that subscription. The service can perform Microsoft Entra authentication and receive an authentication token identifying itself as that service acting on behalf of the subscription. That token can then be presented to Key Vault to obtain a key to which it has been given access.
For operations using encryption keys, a service identity can be granted access to any of the following operations: decrypt, encrypt, unwrapKey, wrapKey, verify, sign, get, list, update, create, import, delete, backup, and restore.
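
When the key is stored in Azure Managed HSM rather than Key Vault, the equivalent grant is a local RBAC role assignment instead of a vault access policy. A minimal Azure CLI sketch with placeholder names, assigning a built-in role that covers the get/wrapKey/unwrapKey operations a service typically needs:

```bash
# Managed HSM uses local RBAC: assign a built-in role, scoped to keys,
# to the service's identity instead of setting a vault access policy.
az keyvault role assignment create --hsm-name myhsm \
  --role "Managed HSM Crypto Service Encryption User" \
  --assignee-object-id <service-principal-object-id> \
  --scope /keys
```
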
When server-side encryption using customer-managed keys in customer-controlled h
- Significant setup, configuration, and ongoing maintenance costs - Increased dependency on network availability between the customer datacenter and Azure datacenters.
-## Services supporting customer managed keys (CMKs)
-
-Here are the services that support server-side encryption using customer managed keys:
-
-| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
-| | | | |
-| **AI and Machine Learning** | | | |
-| [Azure AI Search](/azure/search/) | Yes | | |
-| [Azure AI services](/azure/cognitive-services/) | Yes | Yes | |
-| [Azure AI Studio](/azure/ai-studio) | Yes | | [CMKs for encryption](/azure/ai-studio/concepts/encryption-keys-portal) |
-| [Azure Machine Learning](/azure/machine-learning/) | Yes | | |
-| [Azure OpenAI](/azure/ai-services/openai/) | Yes | Yes | |
-| [Content Moderator](/azure/cognitive-services/content-moderator/) | Yes | Yes | |
-| [Dataverse](/powerapps/maker/data-platform/) | Yes | Yes | |
-| [Dynamics 365](/dynamics365/) | Yes | Yes | |
-| [Face](/azure/cognitive-services/face/) | Yes | Yes | |
-| [Language Understanding](/azure/cognitive-services/luis/) | Yes | Yes | |
-| [Personalizer](/azure/cognitive-services/personalizer/) | Yes | Yes | |
-| [Power Platform](/power-platform/) | Yes | Yes | |
-| [QnA Maker](/azure/cognitive-services/qnamaker/) | Yes | Yes | |
-| [Speech Services](/azure/cognitive-services/speech-service/) | Yes | Yes | |
-| [Translator Text](/azure/cognitive-services/translator/) | Yes | Yes | |
-| **Analytics** | | | |
-| [Azure Data Explorer](/azure/data-explorer/) | Yes | | |
-| [Azure Data Factory](/azure/data-factory/) | Yes | Yes | |
-| [Azure Data Lake Store](/azure/data-lake-store/) | Yes, RSA 2048-bit | | |
-| [Azure HDInsight](/azure/hdinsight/) | Yes | | |
-| [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview) | Yes | | |
-| [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) | Yes | Yes | |
-| [Azure Stream Analytics](/azure/stream-analytics/) | Yes\*\* | Yes | |
-| [Event Hubs](/azure/event-hubs/) | Yes | | |
-| [Functions](/azure/azure-functions/) | Yes | | |
-| [Microsoft Fabric](/fabric) | Yes | | [CMK encryption](/fabric/security/security-scenario#customer-managed-key-cmk-encryption-and-microsoft-fabric) |
-| [Power BI Embedded](/power-bi) | Yes | | [BYOK for Power BI](/power-bi/enterprise/service-encryption-byok) |
-| **Containers** | | | |
-| [App Configuration](/azure/azure-app-configuration/) | Yes | | [Use CMKs to encrypt App Configuration data](/azure/azure-app-configuration/concept-customer-managed-keys) |
-| [Azure Kubernetes Service](/azure/aks/) | Yes | Yes | |
-| [Azure Red Hat OpenShift](/azure/openshift/) | Yes | | [CMK encryption](/azure/openshift/howto-byok) |
-| [Container Instances](/azure/container-instances/) | Yes | | |
-| [Container Registry](/azure/container-registry/) | Yes | | |
-| **Compute** | | | |
-| [App Service](/azure/app-service/) | Yes\*\* | Yes | |
-| [Automation](/azure/automation/) | Yes | | |
-| [Azure Functions](/azure/azure-functions/) | Yes\*\* | Yes | |
-| [Azure portal](/azure/azure-portal/) | Yes\*\* | Yes | |
-| [Azure VMware Solution](/azure/azure-vmware/) | Yes | Yes | |
-| [Azure-managed applications](/azure/azure-resource-manager/managed-applications/overview) | Yes\*\* | Yes | |
-| [Batch](/azure/batch/) | Yes | | [Configure CMKs](/azure/batch/batch-customer-managed-key) |
-| [Logic Apps](/azure/logic-apps/) | Yes | | |
-| [SAP HANA](/azure/sap/large-instances/hana-overview-architecture) | Yes | | |
-| [Service Bus](/azure/service-bus-messaging/) | Yes | | |
-| [Site Recovery](/azure/site-recovery/) | Yes | | |
-| [Virtual Machine Scale Set](/azure/virtual-machine-scale-sets/) | Yes | Yes | |
-| [Virtual Machines](/azure/virtual-machines/) | Yes | Yes | |
-| **Databases** | | | |
-| [Azure Cosmos DB](/azure/cosmos-db/) | Yes | Yes | [Configure CMKs (Key Vault)](/azure/cosmos-db/how-to-setup-cmk) and [Configure CMKs (Managed HSM)](/azure/cosmos-db/how-to-setup-customer-managed-keys-mhsm) |
-| [Azure Database for MySQL](/azure/mysql/) | Yes | Yes | |
-| [Azure Database for PostgreSQL](/azure/postgresql/) | Yes | Yes | |
-| [Azure Database Migration Service](/azure/dms/) | N/A\* | | |
-| [Azure Databricks](/azure/databricks/) | Yes | Yes | |
-| [Azure Managed Instance for Apache Cassandra](/azure/managed-instance-apache-cassandra/) | Yes | | [CMKs](/azure/managed-instance-apache-cassandra/customer-managed-keys) |
-| [Azure SQL Database](/azure/azure-sql/database/) | Yes, RSA 3072-bit | Yes | |
-| [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/) | Yes, RSA 3072-bit | Yes | |
-| [Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW) only)](/azure/synapse-analytics/) | Yes, RSA 3072-bit | Yes | |
-| [SQL Server on Virtual Machines](/azure/virtual-machines/windows/sql/) | Yes | | |
-| [SQL Server Stretch Database](/sql/sql-server/stretch-database/) | Yes, RSA 3072-bit | | |
-| [Table Storage](/azure/storage/tables/) | Yes | | |
-| **Hybrid + multicloud** | | | |
-| [Azure Stack Edge](/azure/databox-online/) | Yes | | [Azure Stack Edge: Security baseline](/security/benchmark/azure/baselines/azure-stack-edge-security-baseline#dp-5-use-customer-managed-key-option-in-data-at-rest-encryption-when-required) |
-| **Identity** | | | |
-| [Microsoft Entra Domain Services](/azure/active-directory-domain-services/) | Yes | | |
-| **Integration** | | | |
-| [Azure Health Data Services](/azure/healthcare-apis/) | Yes | | [Configure CMKs for DICOM](/azure/healthcare-apis/dicom/configure-customer-managed-keys), [Configure CMKs for FHIR](/azure/healthcare-apis/fhir/configure-customer-managed-keys) |
-| [Service Bus](/azure/service-bus-messaging/) | Yes | | |
-| **IoT Services** | | | |
-| [IoT Hub](/azure/iot-hub/) | Yes | | |
-| [IoT Hub Device Provisioning](/azure/iot-dps/) | Yes | | |
-| **Management and Governance** | | | |
-| [Azure Migrate](/azure/migrate/) | Yes | | |
-| [Azure Monitor](/azure/azure-monitor) | Yes | | [CMKs](/azure/azure-monitor/logs/customer-managed-keys?tabs=portal) |
-| **Media** | | | |
-| [Media Services](/azure/media-services/) | Yes | | |
-| **Security** | | | |
-| [Microsoft Defender for Cloud](/azure/defender-for-cloud/) | Yes | | [Security baseline: CMKs](/security/benchmark/azure/baselines/microsoft-defender-for-cloud-security-baseline#dp-5-use-customer-managed-key-option-in-data-at-rest-encryption-when-required) |
-| [Microsoft Defender for IoT](/azure/defender-for-iot/) | Yes | | |
-| [Microsoft Sentinel](/azure/sentinel/) | Yes | Yes | |
-| **Storage** | | | |
-| [Archive Storage](/azure/storage/blobs/archive-blob) | Yes | | |
-| [Azure Backup](/azure/backup/) | Yes | Yes | |
-| [Azure Cache for Redis](/azure/azure-cache-for-redis/) | Yes\*\* | Yes | |
-| [Azure Managed Lustre](/azure/azure-managed-lustre/) | Yes | | [CMKs](/azure/azure-managed-lustre/customer-managed-encryption-keys) |
-| [Azure NetApp Files](/azure/azure-netapp-files/) | Yes | Yes | |
-| [Azure Stack Edge](/azure/databox-online/azure-stack-edge-overview/) | Yes | | |
-| [Blob Storage](/azure/storage/blobs/) | Yes | Yes | |
-| [Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction/) | Yes | Yes | |
-| [Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | |
-| [File Premium Storage](/azure/storage/files/) | Yes | Yes | |
-| [File Storage](/azure/storage/files/) | Yes | Yes | |
-| [File Sync](/azure/storage/file-sync/file-sync-introduction) | Yes | Yes | |
-| [Managed Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | |
-| [Premium Blob Storage](/azure/storage/blobs/) | Yes | Yes | |
-| [Queue Storage](/azure/storage/queues/) | Yes | Yes | |
-| [StorSimple](/azure/storsimple/) | Yes | | |
-| [Ultra Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | |
-| **Other** | | | |
-| [Azure Data Manager for Energy](/azure/energy-data-services/overview-microsoft-energy-data-services) | Yes | | |
-
-\* This service doesn't persist data. Transient caches, if any, are encrypted with a Microsoft key.
-
-\*\* This service supports storing data in your own Key Vault, Storage Account, or other data persisting service that already supports Server-Side Encryption with Customer-Managed Key.
-
-\*\*\* Any transient data stored temporarily on disk such as pagefiles or swap files are encrypted with a Microsoft key (all tiers) or a customer-managed key (using the Enterprise and Enterprise Flash tiers). For more information, see [Configure disk encryption in Azure Cache for Redis](../../azure-cache-for-redis/cache-how-to-encryption.md).
- ## Related content
+- [Services that support CMKs in Azure Key Vault and Azure Managed HSM](encryption-customer-managed-keys-support.md)
- [How encryption is used in Azure](encryption-overview.md) - [Double encryption](double-encryption.md)
sentinel Cross Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/cross-workspace.md
To do this, use the following steps:
- **Use Log Analytics in Azure Monitor to manage access to data by resource**. For more information, see [Manage access to Microsoft Sentinel data by resource](../resource-context-rbac.md). -- **Associate SAP resources with an Azure resource ID**. Specify the required `azure_resource_id` field in the connector configuration section on the data collector that you use to ingest data from the SAP system into Microsoft Sentinel. For more information, see [Connector configuration](reference-systemconfig-json.md#connector-configuration).
+- **Associate SAP resources with an Azure resource ID**. This option is supported only for a data connector agent deployed via CLI. Specify the required `azure_resource_id` field in the connector configuration section on the data collector that you use to ingest data from the SAP system into Microsoft Sentinel. For more information, see [Deploy an SAP data connector agent from the command line](deploy-command-line.md) and [Connector configuration](reference-systemconfig-json.md#connector-configuration).
:::image type="content" source="media/cross-workspace/sap-cross-workspace-combined.png" alt-text="Diagram that shows how to work with the Microsoft Sentinel solution for SAP applications by using the same workspace for SAP and SOC data." border="false":::
sentinel Deploy Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-command-line.md
This procedure describes how to create a new agent and connect it to your SAP sy
docker update --restart unless-stopped <container-name> ```
-The deployment procedure generates a **systemconfig.json** file that contains the configuration details for the SAP data connector agent. For more information, see [SAP data connector agent configuration file](deployment-overview.md#sap-data-connector-agent-configuration-file).
+The deployment procedure generates a [**systemconfig.json**](reference-systemconfig-json.md) file that contains the configuration details for the SAP data connector agent. The file is located in the `/sapcon-app/sapcon/config/system` directory on your VM.
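+
+For example, once deployment completes you can inspect the generated configuration on the collector VM and confirm the container restart policy that the procedure sets. This is a sketch that assumes the default path above and a container named `sapcon-<SID>`:
+
+```bash
+# View the generated agent configuration on the collector VM.
+cat /sapcon-app/sapcon/config/system/systemconfig.json
+
+# Confirm the agent container is set to restart automatically.
+docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' sapcon-<SID>
+```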
## Deploy the data connector using a configuration file
Azure Key Vault is the recommended method to store your authentication credentia
docker update --restart unless-stopped <container-name> ```
-The deployment procedure generates a **systemconfig.json** file that contains the configuration details for the SAP data connector agent. For more information, see [SAP data connector agent configuration file](deployment-overview.md#sap-data-connector-agent-configuration-file).
+The deployment procedure generates a [**systemconfig.json**](reference-systemconfig-json.md) file that contains the configuration details for the SAP data connector agent. The file is located in the `/sapcon-app/sapcon/config/system` directory on your VM.
## Prepare the kickstart script for secure communication with SNC
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
Previously updated : 09/15/2024 Last updated : 10/28/2024 appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal
While deployment is also supported from the command line, we recommend that you
:::image type="content" source="media/deploy-data-connector-agent-container/logs-page.png" alt-text="Screenshot of the Logs tab in the Add new system side pane.":::
+1. (Optional) For optimal results in monitoring the SAP PAHI table, select **Configuration History**. For more information, see [Verify that the PAHI table is updated at regular intervals](preparing-sap.md#verify-that-the-pahi-table-is-updated-at-regular-intervals).
+ 1. Review the settings you defined. Select **Previous** to modify any settings, or select **Deploy** to deploy the system.
-The system configuration you defined is deployed into Azure Key Vault. You can now see the system details in the table under **Configure an SAP system and assign it to a collector agent**. This table displays the associated agent name, SAP System ID (SID), and health status for systems that you added via the portal or otherwise.
+The system configuration you defined is deployed into the Azure key vault you defined during the deployment. You can now see the system details in the table under **Configure an SAP system and assign it to a collector agent**. This table displays the associated agent name, SAP System ID (SID), and health status for systems that you added via the portal or otherwise.
At this stage, the system's **Health** status is **Pending**. If the agent is updated successfully, it pulls the configuration from Azure Key vault, and the status changes to **System healthy**. This update can take up to 10 minutes.
-The deployment procedure generates a **systemconfig.json** file that contains the configuration details for the SAP data connector agent. For more information, see [SAP data connector agent configuration file](deployment-overview.md#sap-data-connector-agent-configuration-file).
- ## Check connectivity and health After you deploy the SAP data connector agent, check your agent's health and connectivity. For more information, see [Monitor the health and role of your SAP systems](../monitor-sap-system-health.md).
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
We recommend that you involve all relevant teams when planning your deployment t
- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) - [Deploy SAP connector manually](sap-solution-deploy-alternate.md)
-## SAP data connector agent configuration file
-
-The deployment procedure generates a **systemconfig.json** file that contains the configuration details for the SAP data connector agent. The file is located in the `/sapcon-app/sapcon/config/system` directory on your VM. You can use this file to update the configuration of your SAP data connector agent.
-
-Earlier versions of the deployment script, released before June 2023, generated a **systemconfig.ini** file instead. For more information, see:
--- [Systemconfig.json file reference](reference-systemconfig-json.md)-- [Systemconfig.ini file reference](reference-systemconfig.md) (legacy)- ## Stop SAP data collection If you need to stop Microsoft Sentinel from collecting your SAP data, stop log ingestion and disable the connector. Then remove the extra user role and any optional CRs installed on your SAP system.
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
The SAP PAHI table includes data on the history of the SAP system, the database,
If the PAHI table is updated regularly, the `SAP_COLLECTOR_FOR_PERFMONITOR` job is scheduled and runs hourly. If the `SAP_COLLECTOR_FOR_PERFMONITOR` job doesn't exist, make sure to configure it as needed.
-For more information, see:
--- SAP documentation: [Database Collector in Background Processing](https://help.sap.com/doc/saphelp_nw75/7.5.5/c4/3a735b505211d189550000e829fbbd/frameset.htm) and [Configuring the Data Collector](https://help.sap.com/docs/SAP_NETWEAVER_AS_ABAP_752/3364beced9d145a5ad185c89a1e04658/c43a818c505211d189550000e829fbbd.html)-- [Optimize SAP PAHI table monitoring (recommended)](deploy-command-line.md#optimize-sap-pahi-table-monitoring-recommended)
+For more information, see [Database Collector in Background Processing](https://help.sap.com/doc/saphelp_nw75/7.5.5/c4/3a735b505211d189550000e829fbbd/frameset.htm) and [Configuring the Data Collector](https://help.sap.com/docs/SAP_NETWEAVER_AS_ABAP_752/3364beced9d145a5ad185c89a1e04658/c43a818c505211d189550000e829fbbd.html).
## Configure your system to use SNC for secure connections
sentinel Reference Kickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md
For more information, see:
- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md) - [Troubleshoot your Microsoft Sentinel solution for SAP applications solution deployment](sap-deploy-troubleshoot.md)-- [Systemconfig.json file reference](reference-systemconfig-json.md)
sentinel Reference Systemconfig Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig-json.md
# Microsoft Sentinel solution for SAP applications `systemconfig.json` file reference
-The *systemconfig.json* file is used to configure the behavior of the Microsoft Sentinel for SAP applications data connector agent. This article describes the options available in each section of the configuration file.
+The *systemconfig.json* file is used to configure the behavior of the Microsoft Sentinel for SAP applications data connector agent when [deployed from the command line](deploy-command-line.md). This article describes the options available in each section of the configuration file.
-Content in this article is intended for your **SAP BASIS** teams.
+Content in this article is intended for your **SAP BASIS** teams, and is only relevant when your data connector agent is deployed from the command line. We recommend [deploying your data connector agent from the portal](deploy-data-connector-agent-container.md) instead.
> [!IMPORTANT] > Microsoft Sentinel solution for SAP applications uses the *systemconfig.json* file for agent versions released on or after June 22, 2023. For previous agent versions, you must still use the *[systemconfig.ini file](reference-systemconfig.md)*.
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
# Microsoft Sentinel solution for SAP applications `systemconfig.ini` file reference
-The *systemconfig.ini* file is used to configure the behavior of the Microsoft Sentinel for SAP applications data connector agent. This article describes the options available in each section of the configuration file.
+The *systemconfig.ini* file is the legacy file used to configure the behavior of the Microsoft Sentinel for SAP applications data connector agent in versions earlier than June 22, 2023. This article describes the options available in each section of the configuration file.
-Content in this article is intended for your **SAP BASIS** teams.
+Content in this article is intended for your **SAP BASIS** teams. This article is not relevant if you've used the [recommended deployment procedure](deploy-data-connector-agent-container.md) from the portal. If you've installed a newer version of the agent from the command line, use the [Microsoft Sentinel solution for SAP applications `systemconfig.json` file reference](reference-systemconfig-json.md) instead.
-> [!IMPORTANT]
-> Microsoft Sentinel solution for SAP applications uses the *[systemconfig.json file](reference-systemconfig-json.md)* for agent versions released on or after June 22, 2023. For previous agent versions, you must still use the *systemconfig.ini* file.
->
-> If you update the agent version, the configuration file is automatically migrated.
## Systemconfig configuration file sections
For more information, see:
- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md) - [Troubleshoot your Microsoft Sentinel solution for SAP applications solution deployment](sap-deploy-troubleshoot.md)-- [Systemconfig.json file reference](reference-systemconfig-json.md)
sentinel Reference Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-update.md
For more information, see:
- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md) - [Troubleshoot your Microsoft Sentinel solution for SAP applications solution deployment](sap-deploy-troubleshoot.md)-- [Systemconfig.json file reference](reference-systemconfig-json.md)
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
This article includes troubleshooting steps to help you ensure accurate and timely data ingestion and monitoring for your SAP environment with Microsoft Sentinel.
-In this article, we refer to the [**systemconfig.json**](reference-systemconfig-json.md) file, which is used for agent versions released on or after June 22, 2023. If you're using an earlier version of the agent, refer to the [**systemconfig.ini**](reference-systemconfig.md) file instead.
+Selected troubleshooting procedures are only relevant when your data connector agent is [deployed via the command line](deploy-command-line.md). If you used the recommended procedure to [deploy the agent from the portal](deploy-data-connector-agent-container.md), use the portal to make any configuration changes.
## Useful Docker commands
docker logs -f sapcon-[SID]
## Enable/disable debug mode printing
+This procedure is only supported if you've deployed the [data connector agent from the command line](deploy-command-line.md).
+ 1. On your data collector agent container virtual machine, edit the [**/opt/sapcon/[SID]/systemconfig.json**](reference-systemconfig-json.md) file. 1. Define the **General** section if it wasn't previously defined. In this section, define `logging_debug = True` to enable debug mode printing, or `logging_debug = False` to disable it.
Connector execution logs for your Microsoft Sentinel solution for SAP applicatio
## Review and update the Microsoft Sentinel for SAP agent connector configuration file
-If you [deployed your agent via the portal](deploy-data-connector-agent-container.md#deploy-the-data-connector-agent-from-the-portal-preview), you can continue to maintain and change configuration settings via the portal.
+This procedure is only supported if you've deployed the [data connector agent from the command line](deploy-command-line.md). If you [deployed your agent via the portal](deploy-data-connector-agent-container.md#deploy-the-data-connector-agent-from-the-portal-preview), continue to maintain and change configuration settings via the portal.
-If you deployed via the command line, or want to make manual updates directly to the configuration file, perform the following steps:
+If you deployed via the command line, perform the following steps:
1. On your VM, open the configuration file: **sapcon/[SID]/systemconfig.json**
docker cp nwrfc750P_8-70002752.zip /sapcon-app/inst/
### ABAP runtime errors appear on a large system
+This procedure is only supported if you've deployed the [data connector agent from the command line](deploy-command-line.md).
+ If ABAP runtime errors appear on large systems, try setting a smaller chunk size: 1. Edit the [**/opt/sapcon/[SID]/systemconfig.json**](reference-systemconfig-json.md) file and in the **Connector Configuration** section define `timechunk = 5`.
docker restart sapcon-[SID]
### Incorrect SAP ABAP user credentials in a fixed configuration
+This section is only supported if you've deployed the [data connector agent from the command line](deploy-command-line.md).
+ A fixed configuration is when the password is stored directly in the [**systemconfig.json**](reference-systemconfig-json.md) configuration file. If your credentials there are incorrect, verify your credentials.
Common issues include:
### Retrieving an audit log fails with warnings
+This section is only supported if you've deployed the [data connector agent from the command line](deploy-command-line.md).
+ If you attempt to retrieve an audit log without the [required configurations](preparing-sap.md#configure-sap-auditing) and the process fails with warnings, verify that the SAP Auditlog can be retrieved using one of the following methods: - Using a compatibility mode called *XAL* on older versions
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-deploy-alternate.md
# Deploy the Microsoft Sentinel for SAP data connector agent container with expert options
-This article provides procedures for deploying and configuring the Microsoft Sentinel for SAP data connector agent container using expert, custom, or manual configuration options. For typical deployments we recommend that you use the [portal](deploy-data-connector-agent-container.md#deploy-the-data-connector-agent-from-the-portal-preview) instead.
+This article provides procedures for deploying and configuring the Microsoft Sentinel for SAP data connector agent container with expert, custom, or manual configuration options. For typical deployments we recommend that you use the [portal](deploy-data-connector-agent-container.md#deploy-the-data-connector-agent-from-the-portal-preview) instead.
Content in this article is intended for your **SAP BASIS** teams. For more information, see [Deploy a SAP data connector agent from the command line](deploy-command-line.md).
For more information, see the [Quickstart: Create a key vault using the Azure CL
## Perform an expert / custom installation
-This procedure describes how to deploy the Microsoft Sentinel for SAP data connector using an expert or custom installation, such as when installing on-premises.
+This procedure describes how to deploy the Microsoft Sentinel for SAP data connector via the CLI using an expert or custom installation, such as when installing on-premises.
**Prerequisites:** Azure Key Vault is the recommended method to store your authentication credentials and configuration data. We recommend that you perform this procedure only after you have a key vault ready with your SAP credentials.
This procedure describes how to deploy the Microsoft Sentinel for SAP data conne
## Manually configure the Microsoft Sentinel for SAP data connector
-The Microsoft Sentinel for SAP data connector is configured in the **systemconfig.json** file, which you cloned to your SAP data connector machine as part of the [deployment procedure](#perform-an-expert--custom-installation). Use the content in this section to manually configure data connector settings.
+When deployed via the CLI, the Microsoft Sentinel for SAP data connector is configured in the **systemconfig.json** file, which you cloned to your SAP data connector machine as part of the [deployment procedure](#perform-an-expert--custom-installation). Use the content in this section to manually configure data connector settings.
For more information, see [Systemconfig.json file reference](reference-systemconfig-json.md), or [Systemconfig.ini file reference](reference-systemconfig.md) for legacy systems.
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
Schema field descriptions are based on the field descriptions in the relevant [S
### ABAP DB table data log (PREVIEW)
-To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). This log isn't supported when using the recommended procedure to [install the data connector agent from the portal](deploy-data-connector-agent-container.md).
- **Microsoft Sentinel function for querying this log**: SAPTableDataLog
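If you want to spot-check the ingested data after enabling the log, one option is to call the function from the command line. A sketch, assuming you have the Log Analytics workspace ID at hand (the GUID below is a placeholder):

```bash
# Sketch only: run the Sentinel-provided function against the workspace.
# Replace the workspace GUID placeholder with your Log Analytics workspace ID.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "SAPTableDataLog | take 10" \
  --timespan P1D
```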
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP Gateway log (PREVIEW)
-To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). This log isn't supported when using the recommended procedure to [install the data connector agent from the portal](deploy-data-connector-agent-container.md).
- **Microsoft Sentinel function for querying this log**: SAPOS_GW
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP ICM log (PREVIEW)
-To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). This log isn't supported when using the recommended procedure to [install the data connector agent from the portal](deploy-data-connector-agent-container.md).
- **Microsoft Sentinel function for querying this log**: SAPOS_ICM
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP Syslog
-To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). This log isn't supported when using the recommended procedure to [install the data connector agent from the portal](deploy-data-connector-agent-container.md).
- **Microsoft Sentinel function for querying this log**: SAPOS_Syslog
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
### ABAP WorkProcess log
-To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). This log isn't supported when using the recommended procedure to [install the data connector agent from the portal](deploy-data-connector-agent-container.md).
- **Microsoft Sentinel function for querying this log**: SAPOS_WP
Collecting the HANA DB Audit Trail log is an example of how Microsoft Sentinel c
### JAVA files
-To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
+To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.json** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). This log isn't supported when using the recommended procedure to [install the data connector agent from the portal](deploy-data-connector-agent-container.md).
- **Microsoft Sentinel function for querying this log**: SAPJAVAFilesLogs
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
This section lists the data tables that are retrieved directly from the SAP system and ingested into Microsoft Sentinel exactly as they are.
-To have the data from these tables ingested into Microsoft Sentinel, configure the relevant settings in the **systemconfig.json** file. For more information, see [Configuring user master data collection](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
- The data retrieved from these tables provides a clear view of the authorization structure, group membership, and user profiles. It also allows you to track the process of authorization grants and revokes, and identify and govern the risks associated with those processes. The tables listed below are required to enable functions that identify privileged users, map users to roles, groups, and authorizations.
sentinel Sap Suspicious Configuration Security Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-suspicious-configuration-security-parameters.md
For more information, see:
- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md) - [Troubleshoot your Microsoft Sentinel solution for SAP applications solution deployment](sap-deploy-troubleshoot.md)-- [Systemconfig.json file reference](reference-systemconfig-json.md)
storage-actions Storage Task Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-operations.md
The following table shows the supported operations, parameters, and parameter va
| Operation | Parameters | Values |
|--|--|--|
-| SetBlobTier | tier | Hot \| Cold \| Archive |
+| SetBlobTier | tier | Hot \| Cool \| Archive |
| SetBlobExpiry | expiryTime, expiryOption | (expiryTime): Number of milliseconds<br>(expiryOption): Absolute \| NeverExpire \| RelativeToCreation \| RelativeToNow |
| DeleteBlob | None | None |
| UndeleteBlob | None | None |
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
If the deleted storage account used customer-managed keys with Azure Key Vault a
> [!IMPORTANT] > Recovery of a deleted storage account is not guaranteed. Recovery is a best-effort attempt. Microsoft recommends locking resources to prevent accidental account deletion. For more information about resource locks, see [Lock resources to prevent changes](../../azure-resource-manager/management/lock-resources.md). >
+> When a storage account is deleted, any linked private endpoints are also removed. These private endpoints are not automatically recreated when the storage account is recovered.
+>
> Another best practice to avoid accidental account deletion is to limit the number of users who have permissions to delete an account via role-based access control (Azure RBAC). For more information, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md). ## Recover a deleted account from the Azure portal
storage Container Storage Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-faq.md
* <a id="azure-container-storage-ephemeralosdisk"></a> **Does Azure Container Storage use the capacity from Ephemeral OS disks for ephemeral disk storage pool?** No, Azure Container Storage only discovers and uses the capacity from ephemeral data disks for ephemeral disk storage pool.
-
+
+* <a id="azure-container-storage-endpoints"></a>
+ **What endpoints need to be allowlisted in the Azure Firewall for Azure Container Storage to work?**
+
+ To ensure Azure Container Storage functions correctly, you must allowlist specific endpoints in your Azure Firewall. These endpoints are required for Azure
+ Container Storage components to communicate with necessary Azure services. Failure to allowlist these endpoints can cause installation or runtime issues.
+
+ Endpoints to allowlist:
+
+ `linuxgeneva-microsoft.azurecr.io`,
+ `eus2azreplstore137.blob.core.windows.net`,
+ `eus2azreplstore70.blob.core.windows.net`,
+ `eus2azreplstore155.blob.core.windows.net`,
+ `eus2azreplstore162.blob.core.windows.net`,
+ `*.hcp.eastus2.azmk8s.io`,
+ `management.azure.com`,
+ `login.microsoftonline.com`,
+ `packages.microsoft.com`,
+ `acs-mirror.azureedge.net`,
+ `eastus2.dp.kubernetesconfiguration.azure.com`,
+ `mcr.microsoft.com`.
+
+ For additional details, refer to the [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters](/azure/aks/outbound-rules-control-egress) documentation and [Azure Arc-enabled Kubernetes network requirements](/azure/azure-arc/kubernetes/network-requirements?tabs=azure-cloud).
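As an illustration only, the FQDNs can be allowed with an application rule on an existing Azure Firewall. In this sketch the resource group, firewall name, rule collection, priority, and source range are placeholders, and only a subset of the endpoints listed above is included:

```bash
# Sketch only: allow a subset of the Azure Container Storage FQDNs through an existing
# Azure Firewall. Resource names, the source range, and the priority are placeholders.
az network firewall application-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name acstor-egress \
  --name acstor-fqdns \
  --action Allow \
  --priority 200 \
  --protocols Https=443 \
  --source-addresses 10.224.0.0/16 \
  --target-fqdns "mcr.microsoft.com" "management.azure.com" "login.microsoftonline.com" \
    "packages.microsoft.com" "acs-mirror.azureedge.net" "linuxgeneva-microsoft.azurecr.io" \
    "eastus2.dp.kubernetesconfiguration.azure.com"
```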
+ ## See also - [What is Azure Container Storage?](container-storage-introduction.md)
storage File Sync Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-overview.md
description: Learn how to configure networking and connect your Windows Server t
Previously updated : 06/05/2024 Last updated : 11/06/2024
Azure Files and Azure File Sync support the following mechanisms to tunnel traff
In addition to the default public endpoints Azure Files and Azure File Sync provide through the storage account and Storage Sync Service, they provide the option to have one or more private endpoints per resource. This allows you to privately and securely connect to Azure file shares from on-premises using VPN or ExpressRoute and from within an Azure VNET. When you create a private endpoint for an Azure resource, it gets a private IP address from within the address space of your virtual network, much like how your on-premises Windows file server has an IP address within the dedicated address space of your on-premises network.
-> [!IMPORTANT]
-> In order to use private endpoints on the Storage Sync Service resource, you must use Azure File Sync agent version 10.1 or greater. Agent versions prior to 10.1 don't support private endpoints on the Storage Sync Service. All prior agent versions support private endpoints on the storage account resource.
- An individual private endpoint is associated with a specific Azure virtual network subnet. Storage accounts and Storage Sync Services may have private endpoints in more than one virtual network. Using private endpoints enables you to:
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
For more information on these choices, see [Planning for an Azure Files deployme
| Microsoft.Storage | Pay-as-you-go | HDD (standard) | GeoZone (GZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ## Prerequisites-- This article assumes that you have an Azure subscription. If you have an Azure subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- This article assumes that you have an Azure subscription. If you don't have an Azure subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- If you intend to use Azure PowerShell, [install the latest version](/powershell/azure/install-azure-powershell). - If you intend to use Azure CLI, [install the latest version](/cli/azure/install-azure-cli).
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/6-microsoft-third-party-migration-tools.md
The following sections discuss a range of products and services that Microsoft o
#### AzCopy
-[AzCopy](../../../storage/common/storage-use-azcopy-v10.md) is a command line utility that copies files to Azure Blob Storage over a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, delimited text files before loading them into Azure Synapse using [PolyBase](#polybase). AzCopy can upload individual files, file selections, or file folders. If the exported files are in Parquet format, use a native Parquet reader instead.
+[AzCopy](../../../storage/common/storage-use-azcopy-v10.md) is a command-line utility that copies files to Azure Blob Storage over a standard internet, secure VPN, or private ExpressRoute connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, delimited text files before loading them into Azure Synapse using [PolyBase](#polybase). AzCopy can upload individual files, file selections, or file folders. If the exported files are in Parquet format, use a native Parquet reader instead.
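A typical upload of an export folder might look like the following sketch; the storage account, container, and SAS token are placeholders:

```bash
# Sketch only: recursively upload extracted, compressed, delimited text files to a
# Blob Storage container used as a staging area. Account, container, and SAS are placeholders.
azcopy copy "./teradata-export/" \
  "https://mystagingaccount.blob.core.windows.net/staging?<SAS-token>" \
  --recursive
```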
#### Azure Data Box
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/sql-authentication.md
Previously updated : 03/07/2022 Last updated : 11/07/2024
Azure Synapse Analytics has two SQL form-factors that enable you to control your resource consumption. This article explains how the two form-factors control the user authentication.
-To authorize to Synapse SQL, you can use two authorization types:
+To authenticate to Synapse SQL, you can use two options:
-- Microsoft Entra authorization-- SQL authorization
+- Microsoft Entra authentication
+- SQL authentication
-SQL authorization enables legacy applications to connect to Azure Synapse SQL in a familiar way. However, Microsoft Entra authentication allows you to centrally manage access to Azure Synapse resources, such as SQL pools. Azure Synapse Analytics supports disabling local authentication, such as SQL authentication, both during and after workspace creation. Once disabled, local authentication can be enabled at any time by authorized users. For more information on Microsoft Entra-only authentication, see [Disabling local authentication in Azure Synapse Analytics](active-directory-authentication.md).
+SQL authentication enables legacy applications to connect to Azure Synapse SQL in a familiar way, with a user name and password. However, Microsoft Entra authentication allows you to centrally manage access to Azure Synapse resources, such as SQL pools. Azure Synapse Analytics supports disabling local authentication, such as SQL authentication, both during and after workspace creation. Once disabled, local authentication can be enabled at any time by authorized users. For more information on Microsoft Entra-only authentication, see [Disabling local authentication in Azure Synapse Analytics](active-directory-authentication.md).
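For illustration, here's how the two options might look from a client such as `sqlcmd`; the workspace, database, user, and password values are placeholders:

```bash
# Sketch only: connect to a Synapse SQL endpoint with each authentication option.
# Workspace, database, user, and password values are placeholders.

# SQL authentication (user name and password):
sqlcmd -S myworkspace.sql.azuresynapse.net -d mydb -U sqladminuser -P '<password>'

# Microsoft Entra authentication (here, Entra password authentication via the -G flag):
sqlcmd -S myworkspace.sql.azuresynapse.net -d mydb -G -U user@contoso.com -P '<password>'
```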
## Administrative accounts
-There are two administrative accounts (**SQL admin username** and **SQL Active Directory admin**) that act as administrators. To identify these administrator accounts for your SQL pools open the Azure portal, and navigate to the Properties tab of your Synapse workspace.
+There are two administrative accounts (**SQL admin username** and **Microsoft Entra admin**) that act as administrators. To identify these administrator accounts for your SQL pools, open the Azure portal and navigate to the Properties tab of your Synapse workspace.
![SQL Server Admins](./media/sql-authentication/sql-admins.png)
There are two administrative accounts (**SQL admin username** and **SQL Active D
When you create an Azure Synapse Analytics, you must name a **Server admin login**. SQL server creates that account as a login in the `master` database. This account connects using SQL Server authentication (user name and password). Only one of these accounts can exist. -- **SQL Active Directory admin**
+- **Microsoft Entra admin**
One Microsoft Entra account, either an individual or security group account, can also be configured as an administrator. It's optional to configure a Microsoft Entra administrator, but a Microsoft Entra administrator **must** be configured if you want to use Microsoft Entra accounts to connect to Synapse SQL. - The Microsoft Entra admin account controls access to dedicated SQL pools, while Synapse RBAC roles can be used to control access to serverless pools, for example, with the **Synapse Administrator** and **Synapse SQL Administrator** role.
-The **SQL admin username** and **SQL Active Directory admin** accounts have the following characteristics:
+The **SQL admin username** and **Microsoft Entra admin** accounts have the following characteristics:
- Are the only accounts that can automatically connect to any SQL Database on the server. (To connect to a user database, other accounts must either be the owner of the database, or have a user account in the user database.) - These accounts enter user databases as the `dbo` user and they have all the permissions in the user databases. (The owner of a user database also enters the database as the `dbo` user.)
The **SQL admin username** and **SQL Active Directory admin** accounts have the
- Can view the `sys.sql_logins` system table. >[!Note]
->If a user is configured as an Active Directory admin and Synapse Administrator, and then removed from the Active Directory admin role, then the user will lose access to the dedicated SQL pools in Synapse. They must be removed and then added to the Synapse Administrator role to regain access to dedicated SQL pools.
>If a user is configured as a Microsoft Entra admin and Synapse Administrator, and then removed from the Microsoft Entra admin role, the user will lose access to the dedicated SQL pools in Synapse. They must be removed and then added to the Synapse Administrator role to regain access to dedicated SQL pools.
## [Serverless SQL pool](#tab/serverless)
Once login and user are created, you can use the regular SQL Server syntax to gr
### Administrator access path
-When the workspace-level firewall is properly configured, the **SQL admin username** and the **SQL Active Directory admin** can connect using client tools such as SQL Server Management Studio or SQL Server Data Tools. Only the latest tools provide all the features and capabilities.
+When the workspace-level firewall is properly configured, the **SQL admin username** and the **SQL Microsoft Entra admin** can connect using client tools such as SQL Server Management Studio or SQL Server Data Tools. Only the latest tools provide all the features and capabilities.
The following diagram shows a typical configuration for the two administrator accounts:
When managing logins and users in SQL Database, consider the following points:
- To `CREATE/ALTER/DROP` a user requires the `ALTER ANY USER` permission on the database. - When the owner of a database role tries to add or remove another database user to or from that database role, the following error may occur: **User or role 'Name' does not exist in this database.** This error occurs because the user isn't visible to the owner. To resolve this issue, grant the role owner the `VIEW DEFINITION` permission on the user.
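For example, the grant might be issued as follows; the server, database, user, and role owner names are placeholders:

```bash
# Sketch only: grant the role owner VIEW DEFINITION on the user it cannot see.
# The server, database, admin account, user, and role owner names are placeholders.
sqlcmd -S myworkspace.sql.azuresynapse.net -d mydb -G -U admin@contoso.com -P '<password>' \
  -Q "GRANT VIEW DEFINITION ON USER::[app_user] TO [role_owner];"
```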
-## Next steps
+## Related content
For more information, see [Contained Database Users - Making Your Database Portable](/sql/relational-databases/security/contained-database-users-making-your-database-portable).
virtual-desktop Connect Legacy Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-legacy-windows.md
description: Learn how to connect Connect to Azure Virtual Desktop with the lega
+ Last updated 09/24/2024
Once you've subscribed to a workspace, here's how to connect:
- To learn more about the features of the Remote Desktop client for Windows, check out [Use features of the Remote Desktop client for Windows when connecting to Azure Virtual Desktop](client-features-windows.md). -- If you want to use Teams on Azure Virtual Desktop with media optimization, see [Use Microsoft Teams on Azure Virtual Desktop](../teams-on-avd.md).
+- If you want to use Teams on Azure Virtual Desktop with media optimization, see [Use Microsoft Teams on Azure Virtual Desktop](../teams-on-avd.md).
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients Previously updated : 10/30/2024 Last updated : 11/07/2024 # What's new in the Remote Desktop client for Windows
virtual-network-manager Concept Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-limitations.md
This article provides an overview of the current limitations when you're using [
* Azure Virtual Network Manager policies don't support the standard evaluation cycle for policy compliance. For more information, see [Evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). * The move of the subscription where the Azure Virtual Network Manager instance exists to another tenant is not supported.
-## Limitations and limits for peering and connected groups
+## Limitations for peerings and connected groups
* A virtual network can be peered with up to 1000 virtual networks using Azure Virtual Network Manager's hub and spoke topology. This means that you can peer up to 1000 spoke virtual networks to a hub virtual network. * By default, a [connected group](concept-connectivity-configuration.md) can have up to 250 virtual networks. This is a soft limit and can be increased up to 1000 virtual networks by submitting a request using [this form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbRzeHatNxLHpJshECDnD5QidURTM2OERMQlYxWkE1UTNBMlRNUkJUNkhDTy4u&route=shorturl). * By default, a virtual network can be part of up to two connected groups. For example, a virtual network: * Can be part of two mesh configurations. * Can be part of a mesh topology and a network group that has direct connectivity enabled in a hub-and-spoke topology.
- * Can be part of two network groups with direct connectivity enabled in the same or a different hub-and-spoke configuration.
+ * Can be part of two network groups with direct connectivity enabled in the same or a different hub-and-spoke configuration.
+ * This is a soft limit and can be adjusted by submitting a request using [this form](https://forms.office.com/r/xXxYrQt0NQ).
* The following BareMetal Infrastructures are not supported: * [Azure NetApp Files](../azure-netapp-files/index.yml) * [Azure VMware Solution](../azure-vmware/index.yml)
This article provides an overview of the current limitations when you're using [
## Limitations for security admin rules * The maximum number of IP prefixes in all [security admin rules](concept-security-admins.md) combined is 1,000.- * The maximum number of admin rules in one level of Azure Virtual Network Manager is 100.
+* The service tags AzurePlatformDNS, AzurePlatformIMDS, and AzurePlatformLKM are not currently supported in security admin rules.
## Related content
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
Protocols currently supported with security admin rules are:
#### Source and destination types * **IP addresses**: You can provide IPv4 or IPv6 addresses or blocks of address in CIDR notation. To list multiple IP address, separate each IP address with a comma.
-* **Service Tag**: You can define specific service tags based on regions or a whole service. See [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags), for the list of supported tags.
+* **Service Tag**: You can define specific service tags based on regions or a whole service. See the public documentation on [available service tags](../virtual-network/service-tags-overview.md#available-service-tags) for the list of supported tags. Out of this list, security admin rules currently do not support the AzurePlatformDNS, AzurePlatformIMDS, and AzurePlatformLKM service tags.
#### Source and destination ports
virtual-network-manager Concept User Defined Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-user-defined-route.md
description: Learn to automate and simplifying routing behaviors using user-defi
Previously updated : 11/05/2024 Last updated : 11/07/2024 # Customer Intent: As a network engineer, I want learn how I can automate and simplify routing within my Azure Network using User-defined routes.
This article provides an overview of UDR management, why it's important, how it works, and common routing scenarios that you can simplify and automate using UDR management. > [!IMPORTANT]
-> User-defined routes management with Azure Virtual Network Manager is in public preview. Public previews are made available to you on the condition that you agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some features might not be supported or might have constrained capabilities. This preview version is provided without a service level agreement, and it's not recommended for production workloads.
+> **User-defined routes management with Azure Virtual Network Manager is generally available in select regions. For more information and a list of regions, see [General availability](#general-availability).**
+>
+> Regions that aren't listed in the previous link are in public preview. Public previews are made available to you on the condition that you agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some features might not be supported or might have constrained capabilities. This preview version is provided without a service level agreement, and it's not recommended for production workloads.
## What is UDR management?
-Azure Virtual Network Manager (AVNM) allows you to describe your desired routing behavior and orchestrate user-defined routes (UDRs) to create and maintain the desired routing behavior. User-defined routes address the need for automation and simplification in managing routing behaviors. Currently, youΓÇÖd manually create User-Defined Routes (UDRs) or utilize custom scripts. However, these methods are prone to errors and overly complicated. You can utilize the Azure-managed hub in Virtual WAN. This option has certain limitations (such as the inability to customize the hub or lack of IPV6 support) not be relevant to your organization. With UDR management in your virtual network manager, you have a centralized hub for managing and maintaining routing behaviors.
+Azure Virtual Network Manager allows you to describe your desired routing behavior and orchestrate user-defined routes (UDRs) to create and maintain that behavior. UDR management addresses the need for automation and simplification in managing routing behaviors. Currently, you'd manually create user-defined routes (UDRs) or rely on custom scripts; however, these methods are prone to errors and overly complicated. You can instead use the Azure-managed hub in Virtual WAN, but that option has certain limitations (such as the inability to customize the hub or the lack of IPv6 support) that might make it unsuitable for your organization. With UDR management in your virtual network manager, you have a centralized hub for managing and maintaining routing behaviors.
## How does UDR management work?
Newly created or deleted subnets have their route table updated with eventual co
The following are impacts of UDR management with Azure Virtual Network Manager on routes and route tables: -- When conflicting routing rules exist (rules with same destination but different next hops), they aren't supported within or across rule collections that target the same virtual network or subnet.-- When you create a route rule with the same destination as an existing route in the route table, the routing rule is ignored.-- When a virtual network manager-created UDR is manually modified in the route table, the route isn't up when an empty commit is performed. Also, any update to the rule isn't reflected in the route with the same destination.
+- When conflicting routing rules exist (rules with the same destination but different next hops), only one of the conflicting rules is applied and the others are ignored. Any of the conflicting rules may be selected at random. Conflicting rules within or across rule collections that target the same virtual network or subnet aren't supported.
+- When you create a routing rule with the same destination as an existing route in the route table, the routing rule is ignored.
+- When a route table with existing UDRs is present, Azure Virtual Network Manager will create a new managed route table that includes both the existing routes and new routes based on the deployed routing configuration.
+- Any additional UDRs added to a managed route table will remain unaffected and will not be deleted when the routing configuration is removed. Only routes created by Azure Virtual Network Manager will be removed.
+- If an Azure Virtual Network Manager managed UDR is manually edited in the route table, that route will be deleted when the configuration is removed from the region.
- Existing Azure services in the Hub virtual network maintain their existing limitations with respect to Route Table and UDRs.-- Azure Virtual Network Manager requires a managed resource group to store the route table. If you need to delete the resource group, deletion must happen before any new deployments are attempted for resources in the same subscription.
+- Azure Virtual Network Manager requires a managed resource group to store the route table. If an Azure Policy enforces specific tags or properties on resource groups, those policies must be disabled or adjusted for the managed resource group to prevent deployment issues. Furthermore, if you need to delete this managed resource group, ensure that deletion occurs prior to initiating any new deployments for resources within the same subscription.
- UDR management allows users to create up to 1000 UDRs per route table.
+## General availability
+
+User-defined route management with Azure Virtual Network Manager is generally available in the following regions:
+
+- Australia Central
+
+- Australia Central 2
+
+- Australia East
+
+- Australia Southeast
+
+- Brazil South
+
+- Brazil Southeast
+
+- Canada Central
+
+- Canada East
+
+- Central India
+
+- Central US
+
+- East Asia
+
+- East US
+
+- France Central
+
+- Germany North
+
+- Germany West Central
+
+- Jio India Central
+
+- Jio India West
+
+- Japan East
+
+- Korea Central
+
+- Korea South
+
+- North Central US
+
+- North Europe
+
+- Norway East
+
+- Norway West
+
+- Poland Central
+
+- Qatar Central
+
+- South Africa North
+
+- South Africa West
+
+- South India
+
+- Southeast Asia
+
+- Sweden Central
+
+- Sweden South
+
+- Switzerland North
+
+- Switzerland West
+
+- UAE Central
+
+- UAE North
+
+- UK South
+
+- UK West
+
+- West Europe
+
+- West India
+
+- West US
+
+- West US 2
+
+- West Central US
+
+- Central US (EUAP)
+
+- East US 2 (EUAP)
+
+For regions not included in the previous list, user-defined route management with Azure Virtual Network Manager remains in public preview.
++ ## Next step > [!div class="nextstepaction"] > [Learn how to create user-defined routes in Azure Virtual Network Manager](how-to-create-user-defined-route.md).-
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
No. Azure Virtual Network Manager doesn't store any customer data.
No. Azure Virtual Network Manager doesn't currently support that capability. If you need to move an instance, you can consider deleting it and using the Azure Resource Manager template to create another one in another location.
+### Can I move a subscription with an Azure Virtual Network Manager to another tenant?
+Yes, but there are some considerations to keep in mind:
+- The target tenant can't already have an Azure Virtual Network Manager instance.
+- The spoke virtual networks in the network group may lose their reference when changing tenants, and therefore lose connectivity to the hub virtual network. To resolve this, after moving the subscription to another tenant, manually add the spoke virtual networks back to the network group in Azure Virtual Network Manager.
+ ### How can I see what configurations are applied to help me troubleshoot? You can view Azure Virtual Network Manager settings under **Network Manager** for a virtual network. The settings show both connectivity and security admin configurations that are applied. For more information, see [View configurations applied by Azure Virtual Network Manager](how-to-view-applied-configurations.md).
virtual-network-manager How To Create User Defined Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-user-defined-route.md
In this article, you learn how to deploy [User-Defined Routes (UDRs)](concept-us
- Routing configuration to create UDRs for the network group > [!IMPORTANT]
-> User-defined routes management with Azure Virtual Network Manager is in public preview. Public previews are made available to you on the condition that you agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some features might not be supported or might have constrained capabilities. This preview version is provided without a service level agreement, and it's not recommended for production workloads.
+> User-defined routes management with Azure Virtual Network Manager is generally available in select regions. For more information and a list of regions, see [General availability](./concept-user-defined-route.md#general-availability).
+>
+> Regions that aren't listed in the previous link are in public preview. Public previews are made available to you on the condition that you agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some features might not be supported or might have constrained capabilities. This preview version is provided without a service level agreement, and it's not recommended for production workloads.
## Prerequisites
In this step, you deploy the routing configuration to create the UDRs for the ne
| **Include user defined routing configurations in your goal state** | Select checkbox. | | **User defined routing configurations** | Select **routing-configuration**. | | **Region** | |
- | **Target regions** | Select **(US) West US 2)**. |
+ | **Target regions** | Select **(US) West US 2**. |
1. Select **Next** and then **Deploy** to deploy the routing configuration.
In this step, you deploy the routing configuration to create the UDRs for the ne
> [!div class="nextstepaction"] > [Learn more about User-Defined Routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md)---
virtual-network-manager How To Manage User Defined Routes Multiple Hub Spoke Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-manage-user-defined-routes-multiple-hub-spoke-topologies.md
Title: "Manage User-defined Routes (UDRs) across multiple hub-and-spoke topologi
description: Learn to manage User Defined Routes (UDRs) across multiple hub-and-spoke topologies with Azure Virtual Network Manager. Previously updated : 10/23/2024 Last updated : 11/07/2024 # customer intent: As a network administrator, I want to deploy a Spoke-to-Spoke topology with two hubs using Virtual Network Manager.
In this article, you learn how to deploy multiple hub-and-spoke topologies and manage user-defined routes (UDRs) with Azure Virtual Network Manager. This scenario is useful when you have a hub-and-spoke architecture in multiple Azure regions. In the past, customers with firewalls or network virtual appliances performed many manual operations to route traffic across hub-and-spoke topologies. Many user-defined routes (UDRs) had to be set up by hand, and when spoke virtual networks changed, such as when new spoke virtual networks and subnets were added, the user-defined routes and route tables also had to be updated. UDR management with Virtual Network Manager can help you automate these tasks. > [!IMPORTANT]
-> User-defined routes management with Azure Virtual Network Manager is in public preview. Public previews are made available to you on the condition that you agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some features might not be supported or might have constrained capabilities. This preview version is provided without a service level agreement, and it's not recommended for production workloads.
+> **User-defined routes management with Azure Virtual Network Manager is generally available in select regions. For more information and a list of regions, see [General availability](./concept-user-defined-route.md#general-availability).**
+>
+> Regions that aren't listed in the previous link are in public preview. Public previews are made available to you on the condition that you agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some features might not be supported or might have constrained capabilities. This preview version is provided without a service level agreement, and it's not recommended for production workloads.
## Prerequisites
virtual-network-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/overview.md
Previously updated : 3/22/2024 Last updated : 03/22/2024 #Customer intent: As an IT administrator, I want to learn about Azure Virtual Network Manager and what I can use it for.
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
# Virtual network peering
-Virtual network peering enables you to seamlessly connect two or more [Virtual Networks](virtual-networks-overview.md) in Azure. The virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. Like traffic between virtual machines in the same network, traffic is routed through Microsoft's *private* network only.
+Virtual network peering enables you to seamlessly connect two or more [Virtual Networks](virtual-networks-overview.md) in Azure. The virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. Like traffic between virtual machines in the same network, traffic is routed through Microsoft's *private* network only. By default, a virtual network can be peered with up to 500 other virtual networks. Using [Azure Virtual Network Manager's connectivity configuration](../virtual-network-manager/concept-connectivity-configuration.md), you can increase this limit to peer up to 1,000 virtual networks to a single virtual network. This allows you to, for instance, create a hub-and-spoke topology with 1,000 spoke virtual networks, or create a mesh of 1,000 spoke virtual networks where all spoke virtual networks are directly interconnected.
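For reference, an individual peering can be created in each direction with the Azure CLI, as sketched below with placeholder resource group and virtual network names (Azure Virtual Network Manager creates these peerings for you when you deploy a connectivity configuration):

```bash
# Sketch only: peer hub-vnet with spoke-vnet-001 in both directions.
# Resource group and virtual network names are placeholders.
az network vnet peering create --resource-group MyResourceGroup --vnet-name hub-vnet \
  --name hub-to-spoke-001 --remote-vnet spoke-vnet-001 --allow-vnet-access
az network vnet peering create --resource-group MyResourceGroup --vnet-name spoke-vnet-001 \
  --name spoke-001-to-hub --remote-vnet hub-vnet --allow-vnet-access
```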
Azure supports the following types of peering:
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
You create custom routes by either creating [user-defined](#user-defined) routes
To customize your traffic routes, you shouldn't modify the default routes but you should create custom, or user-defined(static) routes which override Azure's default system routes. In Azure, you create a route table, then associate the route table to zero or more virtual network subnets. Each subnet can have zero or one route table associated to it. To learn about the maximum number of routes you can add to a route table and the maximum number of user-defined route tables you can create per Azure subscription, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits).
-By default, a route table can contain up to 1000 user-defined routes (UDRs). With Azure Virtual Network ManagerΓÇÖs [routing configuration](../virtual-network-manager/concept-user-defined-route.md), this can be expanded to 1000 UDRs per route table. This increased limit supports more advanced routing setups, such as directing traffic from on-premises data centers through a firewall to each spoke virtual network in a hub-and-spoke topology when you have a higher number of spoke virtual networks.
+By default, a route table can contain up to 400 user-defined routes (UDRs). With Azure Virtual Network Manager's [routing configuration](../virtual-network-manager/concept-user-defined-route.md), this can be expanded to 1000 UDRs per route table. This increased limit supports more advanced routing setups, such as directing traffic from on-premises data centers through a firewall to each spoke virtual network in a hub-and-spoke topology when you have a higher number of spoke virtual networks.
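For comparison, a single manually maintained UDR of the kind this feature orchestrates looks like the following sketch; the resource group, route table name, destination prefix, and firewall IP address are placeholders:

```bash
# Sketch only: one manually created UDR that sends spoke-bound traffic through a firewall.
# Resource group, route table name, address prefix, and next-hop IP are placeholders.
az network route-table create --resource-group MyResourceGroup --name spoke1-rt --location eastus
az network route-table route create --resource-group MyResourceGroup --route-table-name spoke1-rt \
  --name to-spoke2-via-fw --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
```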
When you create a route table and associate it to a subnet, the table's routes are combined with the subnet's default routes. If there are conflicting route assignments, user-defined routes override the default routes.
vpn-gateway Azure Vpn Client Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-versions.md
# Azure VPN Client versions
-This article helps you view each of the versions of the Azure VPN Client. As new client versions become available, they're added to this article. To view the version number of an installed Azure VPN Client, launch the client and select **Help**. For the list of Azure VPN Client instructions, including how to download the Azure VPN Client, see the table in [VPN Client configuration requirements](point-to-site-about.md#what-are-the-client-configuration-requirements).
+This article helps you view each of the versions of the Azure VPN Client. As new client versions become available, they're added to this article. To view the version number of an installed Azure VPN Client, launch the client and select **Help**. For the list of Azure VPN Client instructions, including how to download the Azure VPN Client, see the table in [VPN Client configuration requirements](point-to-site-about.md#client).
## Azure VPN Client - Windows
vpn-gateway Point To Site About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md
A RADIUS server can also integrate with other external identity systems. This op
For P2S gateway configuration steps, see [Configure P2S - RADIUS](point-to-site-how-to-radius-ps.md).
-## What are the client configuration requirements?
+## <a name="client"></a>What are the client configuration requirements?
The client configuration requirements vary, based on the VPN client that you use, the authentication type, and the protocol. The following table shows the available clients and the corresponding articles for each configuration.
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Create a virtual network using the following example values:
* **Name:** VNet1 * **Region:** (US) East US (or region of your choosing) * **IPv4 address space:** 10.1.0.0/16
-* **Subnet name:** FrontEnd
+* **Subnet name:** Use the default name, or specify a name. Example: FrontEnd
* **Subnet address space:** 10.1.0.0/24 [!INCLUDE [Create a VNet](../../includes/vpn-gateway-basic-vnet-rm-portal-include.md)]
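If you prefer the command line over the portal, an equivalent sketch with the Azure CLI using the example values above might look like this; the resource group name is a placeholder:

```bash
# Sketch only: create VNet1 with the example values listed above.
# The resource group name is a placeholder.
az group create --name TestRG1 --location eastus
az network vnet create --resource-group TestRG1 --name VNet1 --location eastus \
  --address-prefixes 10.1.0.0/16 --subnet-name FrontEnd --subnet-prefixes 10.1.0.0/24
```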