Updates from: 10/28/2023 01:15:56
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
description: Learn how to enable custom domains in your redirect URLs for Azure
-
Copy the URL, change the domain name manually, and then paste it back to your br
Azure Front Door passes the user's original IP address. It's the IP address that you'll see in the audit reporting or your custom policy.
+> [!IMPORTANT]
+> If the client sends an `x-forwarded-for` header to Azure Front Door, Azure AD B2C will use the originator's `x-forwarded-for` as the user's IP address for [Conditional Access Evaluation](./conditional-access-identity-protection-overview.md) and the `{Context:IPAddress}` [claims resolver](./claim-resolver-overview.md).
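As an illustration of how the originator's address is derived, here's a minimal sketch (not part of the Azure AD B2C docs) of parsing an `x-forwarded-for` header, which carries a comma-separated chain of addresses with the originating client first; the header value shown is a made-up example.

```python
def originating_ip(x_forwarded_for: str) -> str:
    """Return the first (originating) address from an x-forwarded-for chain."""
    # The header lists the original client first, followed by any proxies the request passed through.
    return x_forwarded_for.split(",")[0].strip()

# Hypothetical header value: client followed by two intermediate proxies.
print(originating_ip("203.0.113.7, 198.51.100.10, 192.0.2.1"))  # -> 203.0.113.7
```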
+ ### Can I use a third-party Web Application Firewall (WAF) with B2C? Yes, Azure AD B2C supports BYO-WAF (Bring Your Own Web Application Firewall). However, you must test your WAF to ensure that it doesn't block or alert on legitimate requests to Azure AD B2C user flows or custom policies. Learn how to configure [Akamai WAF](partner-akamai.md) and [Cloudflare WAF](partner-cloudflare.md) with Azure AD B2C.
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
Previously updated : 10/26/2022 Last updated : 10/17/2023
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-With Azure Active Directory B2C (Azure AD B2C) [HTML templates](customize-ui-with-html.md), you can craft your users' identity experiences. Your HTML templates can contain only certain HTML tags and attributes. Basic HTML tags, such as <b>, <i>, <u>, <h1>, and <hr> are allowed. More advanced tags such as <script>, and <iframe> are removed for security reasons.
+With Azure Active Directory B2C (Azure AD B2C) [HTML templates](customize-ui-with-html.md), you can craft your users' identity experiences. Your HTML templates can contain only certain HTML tags and attributes. Basic HTML tags, such as &lt;b&gt;, &lt;i&gt;, &lt;u&gt;, &lt;h1&gt;, and &lt;hr&gt;, are allowed. More advanced tags, such as &lt;script&gt; and &lt;iframe&gt;, are removed for security reasons; the `<script>` tag should instead be added in the `<head>` tag.
+
+The `<script>` tag can be added in the `<head>` tag in one of two ways:
+
+1. Adding the `defer` attribute, which specifies that the script is downloaded in parallel with parsing the page and executed after the page has finished parsing:
+
+ ```javascript
+ <script src="my-script.js" defer></script>
+ ```
++
+2. Adding the `async` attribute, which specifies that the script is downloaded in parallel with parsing the page and executed as soon as it's available (before parsing completes):
+
+ ```javascript
+ <script src="my-script.js" async></script>
+ ```
To enable JavaScript and advanced HTML tags and attributes:
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Title: Operational excellence recommendations description: Operational excellence recommendations ++ Previously updated : 02/02/2022 Last updated : 10/05/2023 # Operational excellence recommendations
You can get these recommendations on the **Operational Excellence** tab of the A
1. On the **Advisor** dashboard, select the **Operational Excellence** tab.
-## Azure Spring Apps
+## AI + machine learning
-### Update your outdated Azure Spring Apps SDK to the latest version
+### Upgrade to the latest version of the Immersive Reader SDK
-We have identified API calls from an outdated Azure Spring Apps SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. The latest version of the Immersive Reader SDK provides you with updated security, performance, and an expanded set of features for customizing and enhancing your integration experience.
-Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
+Learn more about [Azure AI Immersive Reader](/azure/ai-services/immersive-reader/).
-### Update Azure Spring Apps API Version
+### Upgrade to the latest version of the Immersive Reader SDK
-We have identified API calls from outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Azure Spring Apps API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
+We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. The latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
-Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
+Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](https://aka.ms/ImmersiveReaderAzureAdvisorSDKLearnMore).
-## Automation
-### Upgrade to Start/Stop VMs v2
-This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+## Analytics
-Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs).
+### Reduce the cache policy on your Data Explorer tables
+
+Reduce the table cache policy to match your usage patterns (query lookback period).
+
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy).
++++
+## Compute
+
+### Update your outdated Azure Spring Apps SDK to the latest version
-## Azure VMware
+We have identified API calls from an outdated Azure Spring Apps SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
+
+### Update Azure Spring Apps API Version
+
+We have identified API calls from outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Azure Spring Apps API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version, which ensures you receive the latest features and performance improvements.
+
+Learn more about the [Azure Spring Apps service](../spring-apps/index.yml).
### New HCX version is available for upgrade
-Your HCX version is not latest. New HCX version is available for upgrade. Updating a VMware HCX system installs the latest features, problem fixes, and security patches.
+Your HCX version isn't the latest. A new HCX version is available for upgrade. Updating a VMware HCX system installs the latest features, problem fixes, and security patches.
Learn more about [AVS Private cloud - HCXVersion (New HCX version is available for upgrade)](https://aka.ms/vmware/hcxdoc).
-## Batch
- ### Recreate your pool to get the latest node agent features and fixes Your pool has an old node agent. Consider recreating your pool to get the latest node agent updates and bug fixes.
Learn more about [Batch account - OldPool (Recreate your pool to get the latest
### Delete and recreate your pool to remove a deprecated internal component
-Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance.
+Your pool is using a deprecated internal component. Delete and recreate your pool for improved stability and performance.
Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](https://aka.ms/batch_deprecatedcomponent_learnmore).
-### Upgrade to the latest API version to ensure your Batch account remains operational.
+### Upgrade to the latest API version to ensure your Batch account remains operational
In the past 14 days, you have invoked a Batch management or service API version that is scheduled for deprecation. Upgrade to the latest API version to ensure your Batch account remains operational. Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](https://aka.ms/batch_deprecatedapi_learnmore).
-### Delete and recreate your pool using a VM size that will soon be retired
+### Delete and recreate your pool using a different VM size
-Your pool is using A8-A11 VMs, which are set to be retired in March 2021. Please delete your pool and recreate it with a different VM size.
+Your pool is using A8-A11 VMs, which are set to be retired in March 2021. Delete your pool and recreate it with a different VM size.
-Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your pool using a VM size that will soon be retired)](https://aka.ms/batch_a8_a11_retirement_learnmore).
+Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your pool using a different VM size)](https://aka.ms/batch_a8_a11_retirement_learnmore).
### Recreate your pool with a new image
-Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
+Your pool is using an image with an imminent expiration date. Recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
Learn more about [Batch account - EolImage (Recreate your pool with a new image)](https://aka.ms/batch_expiring_image_learn_more).
-## Cache for Redis
+### Increase the number of compute resources you can deploy by 10 vCPU
-### Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications
+If quota limits are exceeded, new VM deployments are blocked until quota is increased. Increase your quota now to enable deployment of more resources. Learn More
-Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. It's difficult to configure the network accurately and avoid affecting cache functionality. It's easy to break the cache accidentally while making configuration changes for other network resources. This is a common source of incidents affecting customer applications
+Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits).
-Learn more about [Redis Cache Server - PrivateLink (Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications)](https://aka.ms/VnetToPrivateLink).
+### Add Azure Monitor to your virtual machine (VM) labeled as production
-### TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.
+Azure Monitor for VMs monitors your Azure virtual machines (VM) and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
-TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses. We highly recommend that you configure your cache to use TLS 1.2 only and your application should use TLS 1.2 or later. See https://aka.ms/TLSVersions for more information.
+Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview).
-Learn more about [Redis Cache Server - TLSVersion (TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.)](https://aka.ms/TLSVersions).
+### Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers
-## Azure AI services
+Excessive NTP client traffic is caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers. Frequent DNS lookups and NTP sync can be viewed as malicious traffic and blocked by the DDoS service in the Azure environment.
-### Upgrade to the latest version of the Immersive Reader SDK
+Learn more about [Virtual machine - GetVmlistFortigateNtpIssue (Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.)](https://docs.fortinet.com/document/fortigate/6.2.3/fortios-release-notes/236526/known-issues).
-We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
+### An Azure environment update has been rolled out that might affect your Checkpoint Firewall
-Learn more about [Azure AI Immersive Reader](/azure/ai-services/immersive-reader/).
+The image version of the Checkpoint firewall installed might have been affected by the recent Azure environment update. A kernel panic resulting in a reboot to factory defaults can occur in certain circumstances.
-## Compute
+Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that might affect your Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal).
-### Increase the number of compute resources you can deploy by 10 vCPU
+### The iControl REST interface has an unauthenticated remote command execution vulnerability
-If quota limits are exceeded, new VM deployments will be blocked until quota is increased. Increase your quota now to enable deployment of more resources. Learn More
+An unauthenticated remote command execution vulnerability allows unauthenticated attackers with network access to the iControl REST interface, through the BIG-IP management interface and self IP addresses, to execute arbitrary system commands, create or delete files, and disable services. This vulnerability can only be exploited through the control plane and can't be exploited through the data plane. Exploitation can lead to complete system compromise. The BIG-IP system in Appliance mode is also vulnerable.
-Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits).
+Learn more about [Virtual machine - GetF5vulnK03009991 (The iControl REST interface has an unauthenticated remote command execution vulnerability.)](https://support.f5.com/csp/article/K03009991).
-### Add Azure Monitor to your virtual machine (VM) labeled as production
+### NVA Accelerated Networking enabled but potentially not working
-Azure Monitor for VMs monitors your Azure virtual machines (VM) and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
+The desired state for Accelerated Networking is set to 'true' for one or more interfaces on your VM, but the actual state for accelerated networking isn't enabled.
-Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview).
+Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](../virtual-network/create-vm-accelerated-networking-cli.md).
-### Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.
+### Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled might disconnect during maintenance operation
-Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers. This can be viewed as malicious traffic and blocked by the DDOS service in the Azure environment
+We have identified that you're running a network virtual appliance (NVA) called Citrix Application Delivery Controller (ADC), and the NVA has accelerated networking enabled. The virtual machine that this NVA is deployed on might experience connectivity issues during a platform maintenance operation. We recommend that you follow the article provided by the vendor: https://aka.ms/Citrix_CTX331516
-Learn more about [Virtual machine - GetVmlistFortigateNtpIssue (Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.)](https://docs.fortinet.com/document/fortigate/6.2.3/fortios-release-notes/236526/known-issues).
+Learn more about [Virtual machine - GetCitrixVFRevokeError (Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled might disconnect during maintenance operation)](https://aka.ms/Citrix_CTX331516).
-### An Azure environment update has been rolled out that may affect your Checkpoint Firewall.
+### Update your outdated Azure Spring Cloud SDK to the latest version
-The image version of the Checkpoint firewall installed may have been affected by the recent Azure environment update. A kernel panic resulting in a reboot to factory defaults can occur in certain circumstances.
+We have identified API calls from an outdated Azure Spring Cloud SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
-Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that may affect your Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal).
+Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Cloud SDK to the latest version)](/azure/spring-cloud).
-### The iControl REST interface has an unauthenticated remote command execution vulnerability.
+### Update Azure Spring Cloud API Version
-This vulnerability allows for unauthenticated attackers with network access to the iControl REST interface, through the BIG-IP management interface and self IP addresses, to execute arbitrary system commands, create or delete files, and disable services. This vulnerability can only be exploited through the control plane and cannot be exploited through the data plane. Exploitation can lead to complete system compromise. The BIG-IP system in Appliance mode is also vulnerable
+We have identified API calls from outdated Azure Spring Cloud API for resources under this subscription. We recommend switching to the latest Spring Cloud API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version, which ensures you receive the latest features and performance improvements.
+
+Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Cloud API Version)](/azure/spring-cloud).
-Learn more about [Virtual machine - GetF5vulnK03009991 (The iControl REST interface has an unauthenticated remote command execution vulnerability.)](https://support.f5.com/csp/article/K03009991).
-### NVA Accelerated Networking enabled but potentially not working.
-Desired state for Accelerated Networking is set to ΓÇÿtrueΓÇÖ for one or more interfaces on this VM, but actual state for accelerated networking is not enabled.
-Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](../virtual-network/create-vm-accelerated-networking-cli.md).
-### Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled may disconnect during maintenance operation
+## Containers
-We have identified that you are running a Network virtual Appliance (NVA) called Citrix Application Delivery Controller (ADC), and the NVA has accelerated networking enabled. The Virtual machine that this NVA is deployed on may experience connectivity issues during a platform maintenance operation. It is recommended that you follow the article provided by the vendor: https://aka.ms/Citrix_CTX331516
+### The api version you use for Microsoft.App is deprecated, use latest api version
-Learn more about [Virtual machine - GetCitrixVFRevokeError (Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled may disconnect during maintenance operation)](https://aka.ms/Citrix_CTX331516).
+The API version you use for Microsoft.App is deprecated. Use the latest API version.
-## Kubernetes
+Learn more about [Microsoft App Container App - UseLatestApiVersion (The api version you use for Microsoft.App is deprecated, use latest api version)](https://aka.ms/containerappsapiversion).
### Update cluster's service principal
-This cluster's service principal is expired and the cluster will not be healthy until the service principal is updated
+This cluster's service principal has expired, and the cluster isn't healthy until the service principal is updated.
Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's service principal)](../aks/update-credentials.md).
Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn116IsFound (Depr
### Enable the Cluster Autoscaler
-This cluster has not enabled AKS Cluster Autoscaler, and it will not adapt to changing load conditions unless you have other ways to autoscale your cluster
+This cluster hasn't enabled the AKS Cluster Autoscaler, and it can't adapt to changing load conditions unless you have other ways to autoscale your cluster.
Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](/azure/aks/cluster-autoscaler). ### The AKS node pool subnet is full
-Some of the subnets for this cluster's node pools are full and cannot take any more worker nodes. Using the Azure CNI plugin requires to reserve IP addresses for each node and all the pods for the node at node provisioning time. If there is not enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster cannot be upgraded if the node subnet is full.
+Some of the subnets for this cluster's node pools are full and can't take any more worker nodes. Using the Azure CNI plugin requires reserving IP addresses for each node and all the pods for the node at node provisioning time. If there isn't enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster can't be upgraded if the node subnet is full.
Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/create-node-pools.md#add-a-node-pool-with-a-unique-subnet).
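To make the sizing constraint concrete, here's a rough back-of-the-envelope sketch (the numbers are illustrative, not from the recommendation) of how many subnet IPs Azure CNI reserves when every node pre-allocates addresses for itself and its maximum pod count.

```python
def required_ips(node_count: int, max_pods_per_node: int) -> int:
    """Estimate IPs Azure CNI reserves: one per node plus one per potential pod on that node."""
    return node_count * (1 + max_pods_per_node)

# Illustrative numbers: 50 nodes, 30 pods per node.
nodes, max_pods = 50, 30
print(required_ips(nodes, max_pods))   # 1550 addresses reserved at node provisioning time
print(2 ** (32 - 21) - 5)              # usable hosts in a /21 subnet (Azure reserves 5 addresses per subnet)
```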
+### Expired ETCD cert
+
+The cluster's ETCD certificate has expired; update it.
+
+Learn more about [Kubernetes service - ExpiredETCDCertPre03012022 (Expired ETCD cert)](https://aka.ms/AKSUpdateCredentials).
+ ### Disable the Application Routing Addon This cluster has Pod Security Policies enabled, which are going to be deprecated in favor of Azure Policy for AKS
Learn more about [Kubernetes service - UseAzurePolicyForKubernetes (Disable the
### Use Ephemeral OS disk
-This cluster is not using ephemeral OS disks which can provide lower read/write latency, along with faster node scaling and cluster upgrades
+This cluster isn't using ephemeral OS disks which can provide lower read/write latency, along with faster node scaling and cluster upgrades
Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](../aks/concepts-storage.md#ephemeral-os-disk).
+### Outdated Azure Linux (Mariner) OS SKUs Found
+
+We found outdated Azure Linux (Mariner) OS SKUs. The 'CBL-Mariner' SKU isn't supported. The 'Mariner' SKU is equivalent to 'AzureLinux', but it's advisable to switch to the 'AzureLinux' SKU for future updates and support, as 'AzureLinux' is the generally available version.
+
+Learn more about [Kubernetes service - ClustersWithDeprecatedMarinerSKU (Outdated Azure Linux (Mariner) OS SKUs Found)](https://aka.ms/AzureLinuxOSSKU).
+ ### Free and Standard tiers for AKS control plane management
-This cluster has not enabled the Standard tier which includes the Uptime SLA by default, and is limited to an SLO of 99.5%.
+This cluster has not enabled the Standard tier that includes the Uptime SLA by default, and is limited to an SLO of 99.5%.
Learn more about [Kubernetes service - Free and Standard Tier](../aks/free-standard-pricing-tiers.md).
Deprecated Kubernetes API in 1.22 has been found. Avoid using deprecated APIs.
Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn122IsFound (Deprecated Kubernetes API in 1.22 has been found)](https://aka.ms/aks-deprecated-k8s-api-1.22).
-## MySQL
++
+## Databases
+
+### Azure SQL IaaS Agent must be installed in full mode
+
+Full mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. There is no cost associated with using the full manageability mode. System administrator permissions are required. Note that installing or upgrading to full mode is an online operation; no restart is required.
+
+Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent must be installed in full mode)](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management).
+
+### Install SQL best practices assessment on your SQL VM
+
+SQL best practices assessment provides a mechanism to evaluate the configuration of your Azure SQL VM for best practices like indexes, deprecated features, trace flag usage, statistics, etc. Assessment results are uploaded to your Log Analytics workspace using Azure Monitoring Agent (AMA).
+
+Learn more about [SQL virtual machine - SqlAssessmentAdvisorRec (Install SQL best practices assessment on your SQL VM)](/azure/azure-sql/virtual-machines/windows/sql-assessment-for-sql-vm).
+
+### Migrate Azure Cosmos DB attachments to Azure Blob Storage
+
+We noticed that your Azure Cosmos DB collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data.
+
+Learn more about [Azure Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage).
+
+### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup
+
+Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup might also be more cost-effective as a single copy of your data is retained.
+
+Learn more about [Azure Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md).
+
+### Enable partition merge to configure an optimal database partition layout
+
+Your account has collections that could benefit from enabling partition merge. Minimizing the number of partitions reduces rate limiting and resolves storage fragmentation problems. Containers are likely to benefit from this if the throughput per physical partition is less than 3,000 RU/s and storage is less than 20 GB.
+
+Learn more about [Cosmos DB account - CosmosDBPartitionMerge (Enable partition merge to configure an optimal database partition layout)](/azure/cosmos-db/merge?tabs=azure-powershell).
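As a quick worked example of the eligibility rule above, the sketch below (with illustrative values only) checks whether a container's per-physical-partition throughput and storage fall under the stated thresholds.

```python
def merge_candidate(total_rus: int, physical_partitions: int, storage_gb: float) -> bool:
    """Return True when RU/s per physical partition is below 3,000 and storage is below 20 GB."""
    rus_per_partition = total_rus / physical_partitions
    return rus_per_partition < 3000 and storage_gb < 20

# Illustrative container: 12,000 RU/s spread over 8 physical partitions, 15 GB of data.
print(merge_candidate(12_000, 8, 15))  # True -> likely to benefit from partition merge
```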
++ ### Your Azure Database for MySQL - Flexible Server is vulnerable using weak, deprecated TLSv1 or TLSv1.1 protocols
-To support modern security standards, MySQL community edition discontinued the support for communication over Transport Layer Security (TLS) 1.0 and 1.1 protocols. Microsoft will also stop supporting connection over TLSv1 and TLSv1.1 to Azure Database for MySQL - Flexible server soon to comply with the modern security standards. We recommend you upgrade your client driver to support TLSv1.2.
+To support modern security standards, MySQL community edition discontinued support for communication over the Transport Layer Security (TLS) 1.0 and 1.1 protocols. Microsoft also stopped supporting connections over TLSv1 and TLSv1.1 to Azure Database for MySQL - Flexible Server to comply with modern security standards. We recommend that you upgrade your client driver to support TLSv1.2.
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlTlsDeprecation (Your Azure Database for MySQL - Flexible Server is vulnerable using weak, deprecated TLSv1 or TLSv1.1 protocols)](https://aka.ms/encrypted_connection_deprecated_protocols).
-## Desktop Virtualization
+### Optimize or partition tables in your database which has huge tablespace size
-### Permissions missing for start VM on connect
+The maximum supported tablespace size in Azure Database for MySQL - Flexible Server is 4 TB. To effectively manage large tables, we recommend that you optimize the table or implement partitioning, which helps distribute the data across multiple files and prevents reaching the 4 TB tablespace hard limit.
-We have determined you enabled start VM on connect but didn't grant the Azure Virtual Desktop the rights to power manage VMs in your subscription. As a result your users connecting to host pools won't receive a remote desktop session. Review feature documentation for requirements.
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerSingleTablespace4TBLimit2bf9 (Optimize or partition tables in your database which has huge tablespace size)](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/how-to-reclaim-storage-space-with-azure-database-for-mysql/ba-p/3615876).
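If you want to see which tables are approaching the limit before deciding whether to optimize or partition, a minimal sketch along these lines can help. It assumes the PyMySQL package and placeholder connection details (TLS options omitted for brevity), and simply sums data and index size per table from `information_schema`.

```python
import pymysql

# Placeholder connection details for an Azure Database for MySQL - Flexible Server instance.
conn = pymysql.connect(host="<server>.mysql.database.azure.com",
                       user="<admin-user>", password="<password>",
                       database="information_schema")

query = """
    SELECT table_schema, table_name,
           ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM tables
    WHERE table_schema NOT IN ('mysql', 'performance_schema', 'sys', 'information_schema')
    ORDER BY size_gb DESC
    LIMIT 10;
"""

with conn.cursor() as cur:
    cur.execute(query)
    for schema, table, size_gb in cur.fetchall():
        # Flag the largest tables, which are the ones most at risk of hitting the 4 TB limit.
        print(f"{schema}.{table}: {size_gb} GB")
```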
-Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start VM on connect)](https://aka.ms/AVDStartVMRequirement).
+### Enable storage autogrow for MySQL Flexible Server
-### No validation environment enabled
+Storage auto-growth prevents a server from running out of storage and becoming read-only.
-We have determined that you do not have a validation environment enabled in current subscription. When creating your host pools, you have selected "No" for "Validation environment" in the properties tab. Having at least one host pool with a validation environment enabled ensures the business continuity through Azure Virtual Desktop service deployments with early detection of potential issues.
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerStorageAutogrow43b64 (Enable storage autogrow for MySQL Flexible Server)](/azure/mysql/flexible-server/concepts-service-tiers-storage#storage-auto-grow).
-Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](../virtual-desktop/create-validation-host-pool.md).
+### Apply resource delete lock
-### Not enough production environments enabled
+Lock your MySQL Flexible Server to protect it from accidental deletion and modification by users.
-We have determined that too many of your host pools have Validation Environment enabled. In order for Validation Environments to best serve their purpose, you should have at least one, but never more than half of your host pools in Validation Environment. By having a healthy balance between your host pools with Validation Environment enabled and those with it disabled, you will best be able to utilize the benefits of the multistage deployments that Azure Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select "No" next to the "Validation Environment" setting.
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerResourceLockbe19e (Apply resource delete lock)](/azure/azure-resource-manager/management/lock-resources).
-Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](../virtual-desktop/create-host-pools-powershell.md).
+### Add firewall rules for MySQL Flexible Server
-## Azure Cosmos DB
+Add firewall rules to protect your server from unauthorized access.
-### Migrate Azure Cosmos DB attachments to Azure Blob Storage
+Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerNoFirewallRule6e523 (Add firewall rules for MySQL Flexible Server)](/azure/mysql/flexible-server/how-to-manage-firewall-portal).
-We noticed that your Azure Cosmos DB collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data.
-Learn more about [Azure Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage).
+### Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration, which is a common source of incidents affecting customer applications
-### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup
+Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. It's difficult to configure the network accurately and avoid affecting cache functionality. It's easy to break the cache accidentally while making configuration changes for other network resources, which is a common source of incidents affecting customer applications.
-Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup may also be more cost-effective as a single copy of your data is retained.
+Learn more about [Redis Cache Server - PrivateLink (Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications)](https://aka.ms/VnetToPrivateLink).
-Learn more about [Azure Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md).
+### Support for TLS versions 1.0 and 1.1 is retiring on September 30, 2024
+
+Support for TLS 1.0/1.1 is retiring on September 30, 2024. Configure your cache to use TLS 1.2 only and your application to use TLS 1.2 or later. See https://aka.ms/TLSVersions for more information.
+
+Learn more about [Redis Cache Server - TLSVersion (Support for TLS versions 1.0 and 1.1 is retiring on September 30, 2024.)](https://aka.ms/TLSVersions).
+
+### TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses
+
+TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses. We highly recommend that you configure your cache to use TLS 1.2 only and your application to use TLS 1.2 or later. See https://aka.ms/TLSVersions for more information.
+
+Learn more about [Redis Cache Server - TLSVersion (TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.)](https://aka.ms/TLSVersions).
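To confirm from the client side what your application actually negotiates, here's a minimal sketch using only Python's standard `ssl` module; the cache hostname is a placeholder, and the context is pinned to TLS 1.2 as a floor.

```python
import socket
import ssl

host, port = "<your-cache>.redis.cache.windows.net", 6380  # placeholder cache endpoint, TLS port

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1 on the client side

with socket.create_connection((host, port), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Prints the negotiated protocol, for example 'TLSv1.2' or 'TLSv1.3'.
        print(tls.version())
```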
+
+### Cloud service caches are being retired in August 2024, migrate before then to avoid any problems
+
+This instance of Azure Cache for Redis has a dependency on Cloud Services (classic), which is being retired in August 2024. Follow the instructions found in the following link to migrate to an instance without this dependency. If you need to upgrade your cache to Redis 6, note that upgrading a cache with a dependency on cloud services isn't supported; you must migrate your cache instance to a Virtual Machine Scale Set before upgrading. For more information, see the following link. Note: If you have completed your migration away from Cloud Services, allow up to 24 hours for this recommendation to be removed.
+
+Learn more about [Redis Cache Server - MigrateFromCloudService (Cloud service caches are being retired in August 2024, migrate before then to avoid any problems)](/azure/azure-cache-for-redis/cache-faq#caches-with-a-dependency-on-cloud-services-%28classic%29).
+
+### Redis persistence allows you to persist data stored in a cache so you can reload data from an event that caused data loss.
+
+Redis persistence allows you to persist data stored in Redis. You can also take snapshots and back up the data. If there's a hardware failure, the persisted data is automatically loaded in your cache instance. Data loss is possible if a failure occurs where Cache nodes are down.
+
+Learn more about [Redis Cache Server - Persistence (Redis persistence allows you to persist data stored in a cache so you can reload data from an event that caused data loss.)](https://aka.ms/redis/persistence).
+
+### Using persistence with soft delete enabled can increase storage costs.
+
+Check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see the following link.
+
+Learn more about [Redis Cache Server - PersistenceSoftEnable (Using persistence with soft delete enabled can increase storage costs.)](https://aka.ms/redis/persistence).
+
+### You might benefit from using an Enterprise tier cache instance
+
+This instance of Azure Cache for Redis is using one or more advanced features from the list: more than six shards, geo-replication, zone redundancy, or persistence. Consider switching to an Enterprise tier cache to get the most out of your Redis experience. Enterprise tier caches offer higher availability, better performance, and more powerful features like active geo-replication.
+
+Learn more about [Redis Cache Server - ConsiderUsingRedisEnterprise (You might benefit from using an Enterprise tier cache instance)](https://aka.ms/redisenterpriseupgrade).
+++++
+## Integration
+
+### Use Azure AD-based authentication for more fine-grained control and simplified management
+
+You can use Azure AD-based authentication instead of gateway tokens, which allows you to use standard procedures to create, assign, and manage permissions and control expiry times. Additionally, you gain fine-grained control across gateway deployments and can easily revoke access in case of a breach.
+
+Learn more about [Api Management - ShgwUseAdAuth (Use Azure AD-based authentication for more fine-grained control and simplified management)](https://aka.ms/apim/shgw/how-to/use-ad-auth).
+
+### Validate JWT policy is being used with security keys that have insecure key size for validating Json Web Token (JWT).
+
+The validate JWT policy is being used with security keys that have an insecure key size for validating JSON Web Tokens (JWTs). We recommend using longer key sizes to improve security for JWT-based authentication and authorization.
+
+Learn more about [Api Management - validate-jwt-with-insecure-key-size (Validate JWT policy is being used with security keys that have insecure key size for validating Json Web Token (JWT).)]().
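As an illustration of moving to a longer key, the sketch below (assuming the `cryptography` and `PyJWT` packages; not part of the API Management docs) generates a 2048-bit RSA key and signs a token with RS256.

```python
import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key; shorter keys (for example 1024-bit) are considered insecure for JWT signing.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

# Sign a sample payload with the 2048-bit key using RS256.
token = jwt.encode({"sub": "example-user"}, pem, algorithm="RS256")
print(token)
```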
+
+### Use self-hosted gateway v2
+
+We have identified one or more instances of your self-hosted gateway(s) that are using a deprecated version of the self-hosted gateway (v0.x and/or v1.x).
+
+Learn more about [Api Management - shgw-legacy-image-usage (Use self-hosted gateway v2)](https://aka.ms/apim/shgw/migration/v2).
-## Monitor
+### Use Configuration API v2 for self-hosted gateways
+
+We have identified one or more instances of your self-hosted gateway(s) that are using the deprecated Configuration API v1.
+
+Learn more about [Api Management - shgw-config-api-v1-usage (Use Configuration API v2 for self-hosted gateways)](https://aka.ms/apim/shgw/migration/v2).
+
+### Only allow tracing on subscriptions intended for debugging purposes. Sharing subscription keys with tracing allowed with unauthorized users could lead to disclosure of sensitive information contained in tracing logs such as keys, access tokens, passwords, internal hostnames, and IP addresses.
+
+Traces generated by the Azure API Management service might contain sensitive information that is intended for the service owner and must not be exposed to clients using the service. Using tracing-enabled subscription keys in production or automated scenarios creates a risk of exposing sensitive information if a client calling the service requests a trace.
+
+Learn more about [Api Management - heavy-tracing-usage (Only allow tracing on subscriptions intended for debugging purposes. Sharing subscription keys with tracing allowed with unauthorized users could lead to disclosure of sensitive information contained in tracing logs such as keys, access tokens, passwords, internal hostnames, and IP addresses.)](/azure/api-management/api-management-howto-api-inspector).
+
+### Self-hosted gateway instances were identified that use gateway tokens that expire soon
+
+At least one deployed self-hosted gateway instance was identified that uses a gateway token that expires in the next seven days. To ensure that it can connect to the control plane, generate a new gateway token and update your deployed self-hosted gateways (this doesn't affect data-plane traffic).
+
+Learn more about [Api Management - ShgwGatewayTokenNearExpiry (Self-hosted gateway instance(s) were identified that use gateway tokens that expire soon)]().
++
+## Internet of Things
+
+### IoT Hub Fallback Route Disabled
+
+We have detected that the Fallback Route on your IoT Hub has been disabled. When the Fallback Route is disabled, messages stop flowing to the default endpoint. If you're no longer able to ingest telemetry downstream, consider re-enabling the Fallback Route.
+
+Learn more about [IoT hub - IoTHubFallbackDisabledAdvisor (IoT Hub Fallback Route Disabled)](/azure/iot-hub/iot-hub-devguide-messages-d2c#fallback-route).
++++
+## Management and governance
+
+### Upgrade to Start/Stop VMs v2
+
+The new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+
+Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs).
### Repair your log alert rule
-We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid overtime due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
+We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries might become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](https://aka.ms/aa_logalerts_queryrepair).
The alert rule was disabled by Azure Monitor as it was causing service issues. T
Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](https://aka.ms/aa_logalerts_queryrepair).
-## Key Vault
+### Update Azure Managed Grafana SDK Version
-### Create a backup of HSM
+We have identified that an older SDK version has been used to manage or access your Grafana workspace. To get access to all the latest functionality, we recommend that you switch to the latest SDK version.
-Create a periodic HSM backup to prevent data loss and have ability to recover the HSM in case of a disaster.
+Learn more about [Grafana Dashboard - UpdateAzureManagedGrafanaSDK (Update Azure Managed Grafana SDK Version)](https://aka.ms/GrafanaPortalLearnMore).
-Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](../key-vault/managed-hsm/best-practices.md#create-backups).
+### Switch to Azure Monitor based alerts for backup
-## Data Explorer
+Switch to Azure Monitor based alerts for backup to get benefits such as standardized, at-scale alert management experiences offered by Azure, the ability to route alerts to different notification channels of choice, and greater flexibility in alert configuration.
+
+Learn more about [Recovery Services vault - SwitchToAzureMonitorAlerts (Switch to Azure Monitor based alerts for backup)](https://aka.ms/AzMonAlertsBackup).
-### Reduce the cache policy on your Data Explorer tables
-Reduce the table cache policy to match the usage patterns (query lookback period)
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy).
## Networking
+### Resolve Certificate Update issue for your Application Gateway
+
+We have detected that one or more of your Application Gateways is unable to fetch the latest version of the certificate present in your Key Vault. If you intend to use a particular version of the certificate, ignore this message.
+
+Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdateErrors (Resolve Certificate Update issue for your Application Gateway)]().
+ ### Resolve Azure Key Vault issue for your Application Gateway
-We've detected that one or more of your Application Gateways is unable to obtain a certificate due to misconfigured Key Vault. You should fix this configuration immediately to avoid operational issues with your gateway.
+We've detected that one or more of your Application Gateways is unable to obtain a certificate due to a misconfigured Key Vault. You must fix this configuration immediately to avoid operational issues with your gateway.
Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](https://aka.ms/agkverror).
Traffic Analytics is a cloud-based solution that provides visibility into user a
Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](https://aka.ms/aa_enableta_learnmore).
-## SQL Virtual Machine
-
-### SQL IaaS Agent should be installed in full mode
-
-Full mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. There is no cost associated with using the full manageability mode. System administrator permissions are required. Note that installing or upgrading to full mode is an online operation, there is no restart required.
-
-Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should be installed in full mode)](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management?tabs=azure-powershell).
-
-## Storage
-
-### Prevent hitting subscription limit for maximum storage accounts
-
-A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Please evaluate the recommended action below to avoid hitting the limit.
-
-Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit).
-
-### Update to newer releases of the Storage Java v12 SDK for better reliability.
-
-We noticed that one or more of your applications use an older version of the Azure Storage Java v12 SDK to write data to Azure Storage. Unfortunately, the version of the SDK being used has a critical issue that uploads incorrect data during retries (for example, in case of HTTP 500 errors), resulting in an invalid object being written. The issue is fixed in newer releases of the Java v12 SDK.
-
-Learn more about [Storage Account - UpdateStorageJavaSDK (Update to newer releases of the Storage Java v12 SDK for better reliability.)](/azure/developer/java/sdk/?view=azure-java-stable&preserve-view=true).
-
-## Subscription
- ### Set up staging environments in Azure App Service
-Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, no requests are dropped because of swap operations.
+Deploy an app to a slot first and then swap it into production to ensure that all instances of the slot are warmed up before being swapped and to eliminate downtime. The traffic redirection is seamless; no requests are dropped because of swap operations.
Learn more about [Subscription - AzureApplicationService (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md). ### Enforce 'Add or replace a tag on resources' using Azure Policy
-Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. Does not modify tags on resource groups.
+Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that adds or replaces the specified tag and value when any resource is created or updated. Existing resources can be remediated by triggering a remediation task, which does not modify tags on resource groups.
Learn more about [Subscription - AddTagPolicy (Enforce 'Add or replace a tag on resources' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Allowed locations' using Azure Policy
-Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements.
+Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that enables you to restrict the locations your organization can specify when deploying resources. Use the policy to enforce your geo-compliance requirements.
Learn more about [Subscription - AllowedLocationsPolicy (Enforce 'Allowed locations' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Audit VMs that do not use managed disks' using Azure Policy
-Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy audits VMs that do not use managed disks.
+Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that audits VMs that do not use managed disks.
Learn more about [Subscription - AuditForManagedDisksPolicy (Enforce 'Audit VMs that do not use managed disks' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Allowed virtual machine SKUs' using Azure Policy
-Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to specify a set of virtual machine SKUs that your organization can deploy.
+Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that enables you to specify a set of virtual machine SKUs that your organization can deploy.
Learn more about [Subscription - AllowedVirtualMachineSkuPolicy (Enforce 'Allowed virtual machine SKUs' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Inherit a tag from the resource group' using Azure Policy
-Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task.
+Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task.
Learn more about [Subscription - InheritTagPolicy (Enforce 'Inherit a tag from the resource group' using Azure Policy)](../governance/policy/overview.md).
Using Azure Lighthouse improves security and reduces unnecessary access to your
Learn more about [Subscription - OnboardCSPSubscriptionsToLighthouse (Use Azure Lighthouse to simply and securely manage customer subscriptions at scale)](../lighthouse/concepts/cloud-solution-provider.md).
-## Web
+### Subscription with more than 10 VNets must be managed using AVNM
-### Set up staging environments in Azure App Service
+A subscription with more than 10 VNets must be managed using AVNM. Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions.
-Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, no requests are dropped because of swap operations.
+Learn more about [Subscription - ManageVNetsUsingAVNM (Subscription with more than 10 VNets must be managed using AVNM)](/azure/virtual-network-manager/).
-Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md).
+### VNet with more than 5 peerings must be managed using AVNM connectivity configuration
-### Update Service Connector API Version
+A VNet with more than 5 peerings must be managed using an AVNM connectivity configuration. Azure Virtual Network Manager (AVNM) is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions.
-We have identified API calls from outdated Service Connector API for resources under this subscription. We recommend switching to the latest Service Connector API version. You need to update your existing code or tools to use the latest API version.
+Learn more about [Virtual network - ManagePeeringsUsingAVNM (VNet with more than 5 peerings must be managed using AVNM connectivity configuration)]().
-Learn more about [App service - UpgradeServiceConnectorAPI (Update Service Connector API Version)](/azure/service-connector).
+### Upgrade NSG flow logs to VNet flow logs
-### Update Service Connector SDK to the latest version
+Virtual network flow logs allow you to record IP traffic flowing in a virtual network. They provide several benefits over network security group flow logs, such as simplified enablement, enhanced coverage, accuracy, performance, and observability of Virtual Network Manager rules and encryption status.
-We have identified API calls from an outdated Service Connector SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+Learn more about [Resource - UpgradeNSGToVnetFlowLog (Upgrade NSG flow logs to VNet flow logs)](https://aka.ms/vnetflowlogspreviewdocs).
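A hedged sketch of what enabling a virtual network flow log can look like with the `azure-mgmt-network` package follows; the resource names are hypothetical placeholders, flow logs are managed under the regional Network Watcher, and the VNet-targeting capability may still be in preview in your region.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.flow_logs.begin_create_or_update(
    resource_group_name="NetworkWatcherRG",
    network_watcher_name="NetworkWatcher_westeurope",
    flow_log_name="vnet-flowlog",
    parameters={
        "location": "westeurope",
        # Targeting a virtual network (rather than an NSG) is what makes this
        # a VNet flow log.
        "target_resource_id": "<virtual-network-resource-id>",
        "storage_id": "<storage-account-resource-id>",
        "enabled": True,
    },
)
print(poller.result().provisioning_state)
```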
-Learn more about [App service - UpgradeServiceConnectorSDK (Update Service Connector SDK to the latest version)](/azure/service-connector).
-## Azure Center for SAP
-### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP
-Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP.
-Learn more about [App Server Instance - VM_0001 (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+## SAP for Azure
+
+### Ensure the HANA DB VM type supports the HANA scenario in your SAP workload
+
+The correct VM type needs to be selected for the specific HANA scenario. The HANA scenarios can be 'OLAP', 'OLTP', 'OLAP: Scaleup' and 'OLTP: Scaleup'. See SAP note 1928533 for the correct VM type for your SAP workload. The correct VM type helps ensure better performance and support for your SAP systems.
+
+Learn more about [Database Instance - HanaDBSupport (Ensure the HANA DB VM type supports the HANA scenario in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Ensure the Operating system in App VM is supported in combination with DB type in your SAP workload
+
+The operating system in the VMs in your SAP workload needs to be supported for the DB type selected. See SAP note 1928533 for the correct OS-DB combinations for the ASCS, Database, and Application VMs to ensure better performance and support for your SAP systems.
+
+Learn more about [App Server Instance - AppOSDBSupport (Ensure the Operating system in App VM is supported in combination with DB type in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Set the parameter net.ipv4.tcp_keepalive_time to '300' in the Application VM OS in SAP workloads
+
+In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_keepalive_time = 300 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads.
+
+Learn more about [App Server Instance - AppIPV4TCPKeepAlive (Set the parameter net.ipv4.tcp_keepalive_time to '300' in the Application VM OS in SAP workloads)](https://launchpad.support.sap.com/#/notes/1410736).
+
+### Ensure the Operating system in DB VM is supported for the DB type in your SAP workload
+
+The operating system in the VMs in your SAP workload needs to be supported for the DB type selected. See SAP note 1928533 for the correct OS-DB combinations for the ASCS, Database, and Application VMs to ensure better performance and support for your SAP systems.
+
+Learn more about [Database Instance - DBOSDBSupport (Ensure the Operating system in DB VM is supported for the DB type in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Set the parameter net.ipv4.tcp_retries2 to '15' in the Application VM OS in SAP workloads
+
+In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_retries2 = 15 to enable faster reconnection after an ASCS failover. This is recommended for all Application VM OS in SAP workloads.
+
+Learn more about [App Server Instance - AppIpv4Retries2 (Set the parameter net.ipv4.tcp_retries2 to '15' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning).
+
+### Set the parameter net.ipv4.tcp_keepalive_probes to '9' in the Application VM OS in SAP workloads
+
+In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_keepalive_probes = 9 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads.
+
+Learn more about [App Server Instance - AppIPV4Probes (Set the parameter net.ipv4.tcp_keepalive_probes to '9' in the Application VM OS in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide).
+
+### Set the parameter net.ipv4.tcp_tw_recycle to '0' in the Application VM OS in SAP workloads
+
+In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_tw_recycle = 0 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads.
+
+Learn more about [App Server Instance - AppIpv4Recycle (Set the parameter net.ipv4.tcp_tw_recycle to '0' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning).
+
+### Ensure the Operating system in ASCS VM is supported in combination with DB type in your SAP workload
+
+The operating system in the VMs in your SAP workload needs to be supported for the DB type selected. See SAP note 1928533 for the correct OS-DB combinations for the ASCS, Database, and Application VMs. The correct OS-DB combinations help ensure better performance and support for your SAP systems.
+
+Learn more about [Central Server Instance - ASCSOSDBSupport (Ensure the Operating system in ASCS VM is supported in combination with DB type in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP
+
+Azure Center for SAP solutions recommendation: All VMs in SAP system must be certified for SAP.
+
+Learn more about [App Server Instance - VM_0001 (Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Set the parameter net.ipv4.tcp_retries1 to '3' in the Application VM OS in SAP workloads
+
+In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_retries1 = 3 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads.
+
+Learn more about [App Server Instance - AppIpv4Retries1 (Set the parameter net.ipv4.tcp_retries1 to '3' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning).
+
+### Set the parameter net.ipv4.tcp_tw_reuse to '0' in the Application VM OS in SAP workloads
+
+In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_tw_reuse = 0 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads.
+
+Learn more about [App Server Instance - AppIpv4TcpReuse (Set the parameter net.ipv4.tcp_tw_reuse to '0' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning).
+
+### Set the parameter net.ipv4.tcp_keepalive_intvl to '75' in the Application VM OS in SAP workloads
+
+In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_keepalive_intvl = 75 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads.
+
+Learn more about [App Server Instance - AppIPV4intvl (Set the parameter net.ipv4.tcp_keepalive_intvl to '75' in the Application VM OS in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide).
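The net.ipv4 tuning recommendations in this section all follow the same pattern: add the parameter to /etc/sysctl.conf and reload the kernel settings. A minimal Python sketch that applies the values recommended above, assuming root access on the application server VM, is shown here; adapt the list to the parameters that apply to your kernel.

```python
import subprocess

# Kernel parameters recommended in this section for Application VM OS
# in SAP workloads (faster reconnection after an ASCS failover).
SAP_TCP_SETTINGS = {
    "net.ipv4.tcp_keepalive_time": "300",
    "net.ipv4.tcp_retries1": "3",
    "net.ipv4.tcp_retries2": "15",
    "net.ipv4.tcp_keepalive_probes": "9",
    "net.ipv4.tcp_keepalive_intvl": "75",
    "net.ipv4.tcp_tw_reuse": "0",
    "net.ipv4.tcp_tw_recycle": "0",  # only present on older kernels (< 4.12)
}

# Persist the settings so they survive a reboot.
with open("/etc/sysctl.conf", "a") as conf:
    for key, value in SAP_TCP_SETTINGS.items():
        conf.write(f"{key} = {value}\n")

# Apply immediately; parameters missing on this kernel only print an error.
for key, value in SAP_TCP_SETTINGS.items():
    subprocess.run(["sysctl", "-w", f"{key}={value}"])
```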
+### Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads
+
+Network latency between App VMs and DB VMs for SAP workloads is required to be 0.7 ms or less. If accelerated networking isn't enabled, network latency can increase beyond the threshold of 0.7 ms.
+
+Learn more about [Database Instance - NIC_0001_DB (Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads
+
+Network latency between App VMs and DB VMs for SAP workloads is required to be 0.7 ms or less. If accelerated networking isn't enabled, network latency can increase beyond the threshold of 0.7 ms.
+
+Learn more about [App Server Instance - NIC_0001 (Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads)](https://launchpad.support.sap.com/#/notes/1928533).
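One way to check and enable accelerated networking on an existing NIC is through the `azure-mgmt-network` package; the sketch below uses hypothetical resource names and assumes the VM size and OS image support the feature (if the update is rejected while the VM is running, deallocate the VM first).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

resource_group = "<resource-group>"   # hypothetical names
nic_name = "<sap-app-vm-nic>"

nic = client.network_interfaces.get(resource_group, nic_name)
if not nic.enable_accelerated_networking:
    nic.enable_accelerated_networking = True
    client.network_interfaces.begin_create_or_update(
        resource_group, nic_name, nic
    ).result()
```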
### Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces
Azure Center for SAP solutions recommendation: Ensure Accelerated networking is
Learn more about [Central Server Instance - NIC_0001_ASCS (Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces)](https://launchpad.support.sap.com/#/notes/1928533).
-### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP
+### Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP
-Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP.
+Azure Center for SAP solutions recommendation: All VMs in SAP system must be certified for SAP.
-Learn more about [Central Server Instance - VM_0001_ASCS (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+Learn more about [Central Server Instance - VM_0001_ASCS (Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
-### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP
+### Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP
-Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP.
+Azure Center for SAP solutions recommendation: All VMs in SAP system must be certified for SAP.
-Learn more about [Database Instance - VM_0001_DB (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+Learn more about [Database Instance - VM_0001_DB (Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads
+
+fstrim scans the filesystem and sends 'UNMAP' commands for each unused block it finds, which is useful in a thin-provisioned system that is over-provisioned. Running SAP HANA on an over-provisioned storage array isn't recommended, and an active fstrim can cause XFS metadata corruption. See SAP note 2205917.
+
+Learn more about [App Server Instance - GetFsTrimForApp (Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019447).
+
+### Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads
+
+fstrim scans the filesystem and sends 'UNMAP' commands for each unused block it finds, which is useful in a thin-provisioned system that is over-provisioned. Running SAP HANA on an over-provisioned storage array isn't recommended, and an active fstrim can cause XFS metadata corruption. See SAP note 2205917.
+
+Learn more about [Central Server Instance - GetFsTrimForAscs (Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019447).
+
+### Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads
+
+fstrim scans the filesystem and sends 'UNMAP' commands for each unused block it finds, which is useful in a thin-provisioned system that is over-provisioned. Running SAP HANA on an over-provisioned storage array isn't recommended, and an active fstrim can cause XFS metadata corruption. See SAP note 2205917.
+
+Learn more about [Database Instance - GetFsTrimForDb (Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019447).
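A minimal sketch of disabling the scheduled trim, assuming a systemd-based SLES image where fstrim runs through the fstrim.timer unit:

```python
import subprocess

# Disable the timer (and stop any in-flight run) so no scheduled trims occur.
subprocess.run(["systemctl", "disable", "--now", "fstrim.timer"], check=True)

# Confirm the timer stays disabled.
status = subprocess.run(
    ["systemctl", "is-enabled", "fstrim.timer"],
    capture_output=True, text=True,
)
print(status.stdout.strip())  # expected: "disabled"
```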
+
+### For better performance and support, ensure HANA data filesystem type is supported for HANA DB
+
+For different volumes of SAP HANA, where asynchronous I/O is used, SAP only supports filesystems validated as part of an SAP HANA appliance certification. Using an unsupported filesystem might lead to various operational issues, e.g. hanging recovery and index server crashes. See SAP note 2972496.
+
+Learn more about [Database Instance - HanaDataFileSystemSupported (For better performance and support, ensure HANA data filesystem type is supported for HANA DB)](https://launchpad.support.sap.com/#/notes/2972496).
+
+### For better performance and support, ensure HANA shared filesystem type is supported for HANA DB
+
+For different volumes of SAP HANA, where asynchronous I/O is used, SAP only supports filesystems validated as part of an SAP HANA appliance certification. Using an unsupported filesystem might lead to various operational issues, e.g. hanging recovery and index server crashes. See SAP note 2972496.
+
+Learn more about [Database Instance - HanaSharedFileSystem (For better performance and support, ensure HANA shared filesystem type is supported for HANA DB)](https://launchpad.support.sap.com/#/notes/2972496).
+### For better performance and support, ensure HANA log filesystem type is supported for HANA DB
+
+For different volumes of SAP HANA, where asynchronous I/O is used, SAP only supports filesystems validated as part of an SAP HANA appliance certification. Using an unsupported filesystem might lead to various operational issues, e.g. hanging recovery and index server crashes. See SAP note 2972496.
+
+Learn more about [Database Instance - HanaLogFileSystemSupported (For better performance and support, ensure HANA log filesystem type is supported for HANA DB)](https://launchpad.support.sap.com/#/notes/2972496).
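To see which filesystem backs each HANA volume before checking it against SAP note 2972496, a small sketch like the following can read /proc/mounts on the database VM; it assumes the typical /hana/data, /hana/log, and /hana/shared layout, so adjust the paths to your installation.

```python
# Report the filesystem type backing each HANA volume.
HANA_MOUNTS = ["/hana/data", "/hana/log", "/hana/shared"]  # typical layout

mounts = {}
with open("/proc/mounts") as f:
    for line in f:
        device, mount_point, fs_type = line.split()[:3]
        mounts[mount_point] = fs_type

for path in HANA_MOUNTS:
    print(f"{path}: {mounts.get(path, 'not a dedicated mount')}")
```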
### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET
-Azure Center for SAP recommendation: Ensure all NICs for a system should be attached to the same VNET.
+Azure Center for SAP recommendation: All NICs for a system must be attached to the same VNET.
Learn more about [App Server Instance - AllVmsHaveSameVnetApp (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.).
-### Azure Center for SAP recommendation: Swap space on HANA systems should be 2GB
+### Azure Center for SAP recommendation: Swap space on HANA systems must be 2GB
-Azure Center for SAP solutions recommendation: Swap space on HANA systems should be 2GB.
+Azure Center for SAP solutions recommendation: Swap space on HANA systems must be 2GB.
-Learn more about [Database Instance - SwapSpaceForSap (Azure Center for SAP recommendation: Swap space on HANA systems should be 2GB)](https://launchpad.support.sap.com/#/notes/1999997).
+Learn more about [Database Instance - SwapSpaceForSap (Azure Center for SAP recommendation: Swap space on HANA systems must be 2GB)](https://launchpad.support.sap.com/#/notes/1999997).
### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET
-Azure Center for SAP recommendation: Ensure all NICs for a system should be attached to the same VNET.
+Azure Center for SAP recommendation: All NICs for a system must be attached to the same VNET.
Learn more about [Central Server Instance - AllVmsHaveSameVnetAscs (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.).

### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET
-Azure Center for SAP recommendation: Ensure all NICs for a system should be attached to the same VNET.
+Azure Center for SAP recommendation: All NICs for a system must be attached to the same VNET.
Learn more about [Database Instance - AllVmsHaveSameVnetDb (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.).
Azure Center for SAP solutions recommendation: Ensure network configuration is
Learn more about [Database Instance - NetworkConfigForSap (Azure Center for SAP recommendation: Ensure network configuration is optimized for HANA and OS)](https://launchpad.support.sap.com/#/notes/2382421).
+## Storage
+
+### Create a backup of HSM
+
+Create a periodic HSM backup to prevent data loss and have ability to recover the HSM in case of a disaster.
+
+Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](../key-vault/managed-hsm/best-practices.md#create-backups).
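A hedged sketch of triggering a full HSM backup with the `azure-keyvault-administration` package follows; the HSM, storage account, and SAS token are hypothetical placeholders, and the identity used is assumed to hold a Managed HSM backup role while the SAS token grants write access to the blob container.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.administration import KeyVaultBackupClient

hsm_url = "https://<your-managed-hsm>.managedhsm.azure.net"
container_url = "https://<storage-account>.blob.core.windows.net/hsm-backups"
sas_token = "<container-sas-token>"

client = KeyVaultBackupClient(vault_url=hsm_url, credential=DefaultAzureCredential())

# Start a full backup and wait for it to complete.
backup = client.begin_backup(container_url, sas_token).result()
print(backup.folder_url)  # blob folder containing the HSM backup
```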
+
+### Application Volume Group SDK Recommendation
+
+The minimum API version for Azure NetApp Files application volume group feature must be 2022-01-01. We recommend using 2022-03-01 when possible to fully leverage the API.
+
+Learn more about [Volume - Application Volume Group SDK version recommendation (Application Volume Group SDK Recommendation)](https://aka.ms/anf-sdkversion).
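Because this recommendation is about the API version your tooling calls, one simple way to confirm you are on a supported version is to call the Resource Manager REST API directly with an explicit `api-version`; the sketch below uses hypothetical resource names and the `azure-identity` and `requests` packages.

```python
import requests
from azure.identity import DefaultAzureCredential

# Hypothetical resource names; the point is the explicit api-version parameter.
sub, rg, account, pool, volume = "<sub-id>", "<rg>", "<anf-account>", "<pool>", "<volume>"
url = (
    "https://management.azure.com"
    f"/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.NetApp/netAppAccounts/{account}"
    f"/capacityPools/{pool}/volumes/{volume}"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.get(
    url,
    params={"api-version": "2022-03-01"},  # meets the version noted above
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()
print(resp.json()["properties"]["provisioningState"])
```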
+
+### Availability Zone Volumes SDK Recommendation
+
+The minimum SDK version of 2022-05-01 is recommended for the Azure NetApp Files Availability zone volume placement feature, to enable deployment of new Azure NetApp Files volumes in the Azure availability zone (AZ) that you specify.
+
+Learn more about [Volume - Azure NetApp Files AZ Volume SDK version recommendation (Availability Zone Volumes SDK Recommendation)](https://aka.ms/anf-sdkversion).
+
+### Cross Zone Replication SDK recommendation
+
+The minimum SDK version of 2022-05-01 is recommended for the Azure NetApp Files Cross Zone Replication feature, to enable you to replicate volumes across availability zones within the same region.
+
+Learn more about [Volume - Azure NetApp Files Cross Zone Replication SDK recommendation (Cross Zone Replication SDK recommendation)](https://aka.ms/anf-sdkversion).
+
+### Volume Encryption using Customer Managed Keys with Azure Key Vault SDK Recommendation
+
+The minimum API version for Azure NetApp Files Customer Managed Keys with Azure Key Vault feature is 2022-05-01.
+
+Learn more about [Volume - CMK with AKV SDK Recommendation (Volume Encryption using Customer Managed Keys with Azure Key Vault SDK Recommendation)]().
+
+### Cool Access SDK Recommendation
+
+The minimum SDK version of 2022-03-01 is recommended for Standard service level with cool access feature to enable moving inactive data to an Azure storage account (the cool tier) and free up storage that resides within Azure NetApp Files volumes, resulting in overall cost savings.
+
+Learn more about [Capacity Pool - Azure NetApp Files Cool Access SDK version recommendation (Cool Access SDK Recommendation)](https://aka.ms/anf-sdkversion).
+
+### Large Volumes SDK Recommendation
+
+The minimum SDK version of 2022-xx-xx is recommended for automation of large volume creation, resizing and deletion.
+
+Learn more about [Volume - Large Volumes SDK Recommendation (Large Volumes SDK Recommendation)](/azure/azure-netapp-files/azure-netapp-files-resource-limits).
+
+### Prevent hitting subscription limit for maximum storage accounts
+
+A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you're unable to create any more storage accounts in that subscription/region combination. Evaluate the recommended action below to avoid hitting the limit.
+
+Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit).
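To see how close each region in a subscription is to the limit, a short sketch with the `azure-mgmt-storage` package can count accounts per region; the warning threshold of 200 is an arbitrary example.

```python
from collections import Counter

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Count storage accounts per region and flag regions approaching the
# 250-accounts-per-subscription-per-region limit.
per_region = Counter(account.location for account in client.storage_accounts.list())
for region, count in per_region.most_common():
    flag = "  <-- nearing limit" if count >= 200 else ""
    print(f"{region}: {count}{flag}")
```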
+
+### Update to newer releases of the Storage Java v12 SDK for better reliability.
+
+We noticed that one or more of your applications use an older version of the Azure Storage Java v12 SDK to write data to Azure Storage. Unfortunately, the version of the SDK being used has a critical issue that uploads incorrect data during retries (for example, in case of HTTP 500 errors), resulting in an invalid object being written. The issue is fixed in newer releases of the Java v12 SDK.
+
+Learn more about [Storage Account - UpdateStorageJavaSDK (Update to newer releases of the Storage Java v12 SDK for better reliability.)](/azure/developer/java/sdk/?view=azure-java-stable&preserve-view=true).
+## Virtual desktop infrastructure
+
+### Permissions missing for start VM on connect
+
+We have determined that you enabled Start VM on Connect but didn't grant Azure Virtual Desktop the rights to power manage VMs in your subscription. As a result, your users connecting to host pools won't receive a remote desktop session. Review the feature documentation for requirements.
+
+Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start VM on connect)](https://aka.ms/AVDStartVMRequirement).
+
+### No validation environment enabled
+
+We have determined that you don't have a validation environment enabled in the current subscription. When creating your host pools, you selected "No" for "Validation environment" in the properties tab. Having at least one host pool with a validation environment enabled ensures business continuity through Azure Virtual Desktop service deployments, with early detection of potential issues.
+
+Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](../virtual-desktop/create-validation-host-pool.md).
+
+### Not enough production environments enabled
+
+We have determined that too many of your host pools have a validation environment enabled. For validation environments to best serve their purpose, you must have at least one, but never more than half, of your host pools in a validation environment. A healthy balance between host pools with the validation environment enabled and those with it disabled lets you best utilize the multistage deployments that Azure Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select "No" next to the "Validation Environment" setting.
+
+Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](../virtual-desktop/create-host-pools-powershell.md).
+## Web
+
+### Set up staging environments in Azure App Service
+
+Deploy an app to a slot first and then swap it into production to ensure that all instances of the slot are warmed up before being swapped, eliminating downtime. The traffic redirection is seamless, and no requests are dropped because of swap operations.
+
+Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md).
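As a hedged sketch, the swap itself can be driven through the Resource Manager REST endpoint for slot swaps; the resource names below are hypothetical, and the snippet assumes a deployment slot named `staging` that has already been warmed up.

```python
import requests
from azure.identity import DefaultAzureCredential

sub, rg, app = "<sub-id>", "<resource-group>", "<app-name>"
url = (
    "https://management.azure.com"
    f"/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.Web/sites/{app}/slotsswap"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.post(
    url,
    params={"api-version": "2022-03-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={"targetSlot": "staging", "preserveVnet": True},
)
resp.raise_for_status()  # accepted; the swap completes asynchronously
```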
+
+### Update Service Connector API Version
+
+We have identified API calls from outdated Service Connector API for resources under this subscription. We recommend switching to the latest Service Connector API version. You need to update your existing code or tools to use the latest API version.
+
+Learn more about [App service - UpgradeServiceConnectorAPI (Update Service Connector API Version)](/azure/service-connector).
+
+### Update Service Connector SDK to the latest version
+
+We have identified API calls from an outdated Service Connector SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [App service - UpgradeServiceConnectorSDK (Update Service Connector SDK to the latest version)](/azure/service-connector).
++++++ ## Next steps Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Title: Performance recommendations
description: Full list of available performance recommendations in Advisor. Previously updated : 02/03/2022++ Last updated : 10/15/2023 # Performance recommendations
The performance recommendations in Azure Advisor can help improve the speed and
1. On the **Advisor** dashboard, select the **Performance** tab.
-## Attestation
+## AI + machine learning
-### Update Attestation API Version
+### 429 Throttling Detected on this resource
-We have identified API calls from outdated Attestation API for resources under this subscription. We recommend switching to the latest Attestation API versions. You need to update your existing code to use the latest API version. This ensures you receive the latest features and performance improvements.
+We observed that there have been 1,000 or more 429 throttling errors on this resource in a one day timeframe. Consider enabling autoscale to better handle higher call volumes and reduce the number of 429 errors.
-Learn more about [Attestation provider - UpgradeAttestationAPI (Update Attestation API Version)](/rest/api/attestation).
+Learn more about [Azure AI services autoscale](/azure/ai-services/autoscale?tabs=portal).
-## Azure VMware Solution
+### Text Analytics Model Version Deprecation
-### vSAN capacity utilization has crossed critical threshold
+Upgrade to a newer model version, or to the latest version, to utilize the latest and highest quality models.
-Your vSAN capacity utilization has reached 75%. The cluster utilization is required to remain below the 75% critical threshold for SLA compliance. Add new nodes to VSphere cluster to increase capacity or delete VMs to reduce consumption or adjust VM workloads
+Learn more about [Cognitive Service - TAUpgradeToLatestModelVersion (Text Analytics Model Version Deprecation)](https://aka.ms/language-model-lifecycle).
-Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](../azure-vmware/concepts-private-clouds-clusters.md).
+### Text Analytics Model Version Deprecation
-## Azure Cache for Redis
+Upgrade to a newer model version, or to the latest version, to utilize the latest and highest quality models.
-### Improve your Cache and application performance when running with high network bandwidth
+Learn more about [Cognitive Service - TAUpgradeModelVersiontoLatest (Text Analytics Model Version Deprecation)](https://aka.ms/language-model-lifecycle).
-Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity.
+### Upgrade to the latest Cognitive Service Text Analytics API version
-Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
+Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as personal data recognition, entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints, we have Opinion Mining in SA endpoint, redacted text property in personal data endpoint
-### Improve your Cache and application performance when running with many connected clients
+Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest Cognitive Service Text Analytics API version)](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api).
-Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
+### Upgrade to the latest API version of Azure Cognitive Service for Language
-Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
+Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability.
-### Improve your Cache and application performance when running with high server load
+Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](https://aka.ms/language-api).
-Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
+### Upgrade to the latest Cognitive Service Text Analytics SDK version
-Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance, and service availability. There are also new features available as new endpoints starting from V3.0, such as personal data recognition, entity recognition, and entity linking, available as separate endpoints. Changes to preview endpoints include opinion mining in the sentiment analysis endpoint and a redacted text property in the personal data endpoint.
-### Improve your Cache and application performance when running with high memory pressure
+Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest Cognitive Service Text Analytics SDK version)](/azure/cognitive-services/text-analytics/quickstarts/text-analytics-sdk?tabs=version-3-1&pivots=programming-language-csharp).
-Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
+### Upgrade to the latest Cognitive Service Language SDK version
-Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability.
-### Improve your Cache and application performance when memory rss usage is high.
+Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](https://aka.ms/language-api).
-Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
+### Upgrade to the latest Azure AI Language SDK version
-Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSS (Improve your Cache and application performance when memory rss usage is high.)](https://aka.ms/redis/recommendations/memory).
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as personal data recognition, entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints, we have Opinion Mining in SA endpoint, redacted text property in personal data endpoint.
-### Improve your Cache and application performance when memory rss usage is high.
+Learn more about [Azure AI Language](/azure/ai-services/language-service/language-detection/overview).
-Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSSHigh (Improve your Cache and application performance when memory rss usage is high.)](https://aka.ms/redis/recommendations/memory).
-### Improve your Cache and application performance when running with high network bandwidth
-Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity.
+## Analytics
-Learn more about [Redis Cache Server - RedisCacheNetworkBandwidthHigh (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
+### Right-size Data Explorer resources for optimal performance.
-### Improve your Cache and application performance when running with high memory pressure
+This recommendation surfaces all Data Explorer resources that exceed the recommended data capacity (80%). The recommended action to improve the performance is to scale to the recommended configuration shown.
-Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
+Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance).
-Learn more about [Redis Cache Server - RedisCacheUsedMemoryHigh (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
+### Review table cache policies for Data Explorer tables
-### Improve your Cache and application performance when running with many connected clients
+This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy) - you see the top 10 tables by query percentage that access out-of-cache data. The recommended action to improve the performance: Limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value.
-Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
+Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy).
-Learn more about [Redis Cache Server - RedisCacheConnectedClientsHigh (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
+### Reduce Data Explorer table cache policy for better performance
-### Improve your Cache and application performance when running with high server load
+Reducing the table cache policy frees up unused data from the resource's cache and improves performance.
-Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy).
-Learn more about [Redis Cache Server - RedisCacheServerLoadHigh (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
+### Increase the cache in the cache policy
-### Cache instances perform best when the host machines where client application runs is able to keep up with responses from the cache.
+Based on your actual usage during the last month, update the cache policy to increase the hot cache for the table. The retention period must always be larger than the cache period. If you increase the cache and the retention period is lower than the cache period, update the retention policy. The analysis is based only on user queries that scanned data.
-Cache instances perform best when the host machines where client application runs is able to keep up with responses from the cache. If client host machine is running hot on memory, CPU or network bandwidth, the cache responses will not reach your application fast enough and could result in higher latency.
+Learn more about [Data explorer resource - IncreaseCacheForAzureDataExplorerTablesToImprovePerformance (Increase the cache in the cache policy)](https://aka.ms/adxcachepolicy).
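A hedged sketch of applying a cache policy change with the `azure-kusto-data` package is shown below; the cluster, database, and table names are hypothetical, the identity is assumed to have database admin rights, and the 14-day value stands in for whatever period Advisor recommends for your table.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-cluster>.<region>.kusto.windows.net"
)
client = KustoClient(kcsb)

# Extend the hot cache for the table to the recommended period (here: 14 days).
client.execute_mgmt("<database>", ".alter table MyTable policy caching hot = 14d")
```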
-Learn more about [Redis Cache Server - UnresponsiveClient (Cache instances perform best when the host machines where client application runs is able to keep up with responses from the cache.)](/azure/azure-cache-for-redis/cache-troubleshoot-client).
+### Enable Optimized Autoscale for Data Explorer resources
-## CDN
+Looks like your resource could have automatically scaled to improve performance (based on your actual usage during the last week, cache utilization, ingestion utilization, CPU, and streaming ingests utilization). To optimize costs and performance, we recommend enabling Optimized Autoscale.
-### Upgrade SDK version recommendation
+Learn more about [Data explorer resource - PerformanceEnableOptimizedAutoscaleAzureDataExplorer (Enable Optimized Autoscale for Data Explorer resources)](https://aka.ms/adxoptimizedautoscale).
-The latest version of Azure Front Door Standard and Premium Client Library or SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Front Door Standard and Premium.
+### Reads happen on most recent data
-Learn more about [Front Door Profile - UpgradeCDNToLatestSDKLanguage (Upgrade SDK version recommendation)](https://aka.ms/afd/tiercomparison).
+More than 75% of your read requests are landing on the memstore, indicating that the reads are primarily on recent data. Recent data reads suggest that even if a flush happens on the memstore, the recent file needs to be accessed and put in the cache.
-## Azure AI services
+Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](../hdinsight/hbase/apache-hbase-advisor.md).
-### 429 Throttling Detected on this resource
+### Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.
-We observed that there have been 1,000 or more 429 throttling errors on this resource in a one day timeframe. Consider enabling autoscale to better handle higher call volumes and reduce the number of 429 errors.
+You're seeing this advisor recommendation because HDInsight team's system log shows that in the past seven days, your cluster has encountered the following scenarios:
-Learn more about [Azure AI services autoscale](/azure/ai-services/autoscale?tabs=portal).
+1. High WAL sync time latency
-### Upgrade to the latest Azure AI Language SDK version
+2. High write request count (at least 3 one hour windows of over 1000 avg_write_requests/second/node)
-Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as personally identifiable information recognition, Entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints we have Opinion Mining in SA endpoint, redacted text property in personally identifiable information endpoint.
+These conditions are indicators that your cluster is suffering from high write latencies, which can be due to heavy workload on your cluster.
-Learn more about [Azure AI Language](/azure/ai-services/language-service/language-detection/overview).
+To improve the performance of your cluster, consider utilizing the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD-managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, it provides low write-latency and better resiliency for your applications.
-## Communication services
+To read more on this feature, see the following article:
-### Use recommended version of Chat SDK
+Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](../hdinsight/hbase/apache-hbase-accelerated-writes.md).
-Azure Communication Services Chat SDK can be used to add rich, real-time chat to your applications. Update to the recommended version of Chat SDK to ensure the latest fixes and features.
+### More than 75% of your queries are full scan queries
-Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](../communication-services/concepts/chat/sdk-features.md).
+More than 75% of the scan queries on your cluster are doing a full region/table scan. Modify your scan queries to avoid full region or table scans.
-### Use recommended version of Resource Manager SDK
+Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](../hdinsight/hbase/apache-hbase-advisor.md).
-Resource Manager SDK can be used to provision and manage Azure Communication Services resources. Update to the recommended version of Resource Manager SDK to ensure the latest fixes and features.
+### Check your region counts as you have blocking updates
-Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](../communication-services/quickstarts/create-communication-resource.md?pivots=platform-net&tabs=windows).
+Region counts needs to be adjusted to avoid updates getting blocked. It might require a scale up of the cluster by adding new nodes.
-### Use recommended version of Identity SDK
+Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](../hdinsight/hbase/apache-hbase-advisor.md).
-Azure Communication Services Identity SDK can be used to manage identities, users, and access tokens. Update to the recommended version of Identity SDK to ensure the latest fixes and features.
+### Consider increasing the flusher threads
-Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](../communication-services/concepts/sdk-options.md).
+The flush queue size in your region servers is more than 100 or there are updates getting blocked frequently. Tuning of the flush handler is recommended.
-### Use recommended version of SMS SDK
+Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](../hdinsight/hbase/apache-hbase-advisor.md).
-Azure Communication Services SMS SDK can be used to send and receive SMS messages. Update to the recommended version of SMS SDK to ensure the latest fixes and features.
+### Consider increasing your compaction threads for compactions to complete faster
-Learn more about [Communication service - UpgradeSmsSdk (Use recommended version of SMS SDK)](/azure/communication-services/concepts/telephony-sms/sdk-features).
+The compaction queue in your region servers is more than 2000 suggesting that more data requires compaction. Slower compactions can affect read performance as the number of files to read are more. More files without compaction can also affect the heap usage related to how files interact with Azure file system.
-### Use recommended version of Phone Numbers SDK
+Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor).
-Azure Communication Services Phone Numbers SDK can be used to acquire and manage phone numbers. Update to the recommended version of Phone Numbers SDK to ensure the latest fixes and features.
+### Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows
-Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](../communication-services/concepts/sdk-options.md).
+Clustered columnstore tables are organized in data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. You can measure segment quality by the number of rows in a compressed row group.
-### Use recommended version of Calling SDK
+Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
-Azure Communication Services Calling SDK can be used to enable voice, video, screen-sharing, and other real-time communication. Update to the recommended version of Calling SDK to ensure the latest fixes and features.
+### Update SynapseManagementClient SDK Version
-Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](../communication-services/concepts/voice-video-calling/calling-sdk-features.md).
+The new SynapseManagementClient uses .NET SDK 4.0 or above.
-### Use recommended version of Call Automation SDK
+Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK).
-Azure Communication Services Call Automation SDK can be used to make and manage calls, play audio, and configure recording. Update to the recommended version of Call Automation SDK to ensure the latest fixes and features.
-Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](../communication-services/concepts/voice-video-calling/call-automation-apis.md).
-### Use recommended version of Network Traversal SDK
+## Compute
-Azure Communication Services Network Traversal SDK can be used to access TURN servers for low-level data transport. Update to the recommended version of Network Traversal SDK to ensure the latest fixes and features.
+### vSAN capacity utilization has crossed critical threshold
-Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](../communication-services/concepts/sdk-options.md).
+Your vSAN capacity utilization has reached 75%. The cluster utilization is required to remain below the 75% critical threshold for SLA compliance. Add new nodes to VSphere cluster to increase capacity or delete VMs to reduce consumption or adjust VM workloads
-## Compute
+Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](../azure-vmware/concepts-private-clouds-clusters.md).
### Update Automanage to the latest API Version
-We have identified sdk calls from outdated API for resources under this subscription. We recommend switching to the latest sdk versions. This ensures you receive the latest features and performance improvements.
+We have identified SDK calls from outdated API for resources under this subscription. We recommend switching to the latest SDK versions to ensure you receive the latest features and performance improvements.
Learn more about [Virtual machine - UpdateToLatestApi (Update Automanage to the latest API Version)](/azure/automanage/reference-sdk).

### Improve user experience and connectivity by deploying VMs closer to user's location.
-We have determined that your VMs are located in a region different or far from where your users are connecting from, using Azure Virtual Desktop. This may lead to prolonged connection response times and will impact overall user experience on Azure Virtual Desktop.
+We have determined that your VMs are located in a region different from, or far from, where your users are connecting with Azure Virtual Desktop. Distant user regions might lead to prolonged connection response times and affect the overall user experience.
Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md).
Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disk
### Convert Managed Disks from Standard HDD to Premium SSD for performance
-We have noticed your Standard HDD disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which will take three to five minutes.
+We have noticed your Standard HDD disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which takes three to five minutes.
Learn more about [Disk - MDHDDtoPremiumForPerformance (Convert Managed Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).

### Enable Accelerated Networking to improve network performance and latency
-We have detected that Accelerated Networking is not enabled on VM resources in your existing deployment that may be capable of supporting this feature. If your VM OS image supports Accelerated Networking as detailed in the documentation, make sure to enable this free feature on these VMs to maximize the performance and latency of your networking workloads in cloud
+We have detected that Accelerated Networking isn't enabled on VM resources in your existing deployment that might be capable of supporting this feature. If your VM OS image supports Accelerated Networking as detailed in the documentation, make sure to enable this free feature on these VMs to maximize the performance and latency of your networking workloads in cloud
Learn more about [Virtual machine - AccelNetConfiguration (Enable Accelerated Networking to improve network performance and latency)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).

### Use SSD Disks for your production workloads
-We noticed that you are using SSD disks while also using Standard HDD disks on the same VM. Standard HDD managed disks are generally recommended for dev-test and backup; we recommend you use Premium SSDs or Standard SSDs for production. Premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Standard SSDs provide consistent and lower latency. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which will take three to five minutes.
+We noticed that you're using SSD disks while also using Standard HDD disks on the same VM. Standard HDD managed disks are recommended for dev-test and backup; we recommend you use Premium SSDs or Standard SSDs for production. Premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Standard SSDs provide consistent and lower latency. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which takes three to five minutes.
Learn more about [Virtual machine - MixedDiskTypeToSSDPublic (Use SSD Disks for your production workloads)](/azure/virtual-machines/windows/disks-types#disk-comparison).

### Match production Virtual Machines with Production Disk for consistent performance and better latency
-Production virtual machines need production disks if you want to get the best performance. We see that you are running a production level virtual machine, however, you are using a low performing disk with standard HDD. Upgrading your disks that are attached to your production disks, either Standard SSD or Premium SSD, will benefit you with a more consistent experience and improvements in latency.
+Production virtual machines need production disks if you want to get the best performance. We see that you're running a production-level virtual machine; however, you're using a low-performing Standard HDD disk. Upgrading the disks attached to your production VMs to either Standard SSD or Premium SSD gives you a more consistent experience and improvements in latency.
Learn more about [Virtual machine - MatchProdVMProdDisks (Match production Virtual Machines with Production Disk for consistent performance and better latency)](/azure/virtual-machines/windows/disks-types#disk-comparison).
-### Accelerated Networking may require stopping and starting the VM
+### Accelerated Networking might require stopping and starting the VM
-We have detected that Accelerated Networking is not engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it may be necessary to stop and start your VM, at your convenience, to re-engage AccelNet.
+We have detected that Accelerated Networking isn't engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it might be necessary to stop and start your VM, at your convenience, to re-engage AccelNet.
-Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking may require stopping and starting the VM)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
+Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking might require stopping and starting the VM)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.
+### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance
-Ultra disk is available in the same region as your database workload. Ultra disk offers high throughput, high IOPS, and consistent low latency disk storage for your database workloads: For Oracle DBs, you can now use either 4k or 512E sector sizes with Ultra disk depending on your Oracle DB version. For SQL server, leveraging Ultra disk for your log disk might offer more performance for your database. See instructions here for migrating your log disk to Ultra disk.
+Ultra disk is available in the same region as your database workload. Ultra disk offers high throughput, high IOPS, and consistent low latency disk storage for your database workloads. For Oracle DBs, you can now use either 4k or 512E sector sizes with Ultra disk depending on your Oracle DB version. For SQL Server, using Ultra disk for your log disk might offer better performance for your database. See instructions here for migrating your log disk to Ultra disk.
Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal).
-### Upgrade the size of your virtual machines close to resource exhaustion
+### Upgrade the size of your most active virtual machines to prevent resource exhaustion and improve performance
+
+We analyzed data for the past seven days and identified virtual machines (VMs) with high utilization across different metrics (that is, CPU, Memory, and VM IO). Those VMs might experience performance issues since they're nearing or at their SKU's limits. Consider upgrading their SKU to improve performance.
+
+Learn more about [Virtual machine - UpgradeSizeHighVMUtilV0 (Upgrade the size of your most active virtual machines to prevent resource exhaustion and improve performance)](https://aka.ms/aa_resizehighusagevmrec_learnmore).
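As a rough illustration of the resize itself, the following sketch uses the azure-mgmt-compute Python SDK; the resource names and the target size `Standard_D8s_v5` are only example values, and the operation typically restarts the VM:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import HardwareProfile, VirtualMachineUpdate

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Move the VM to a larger size; pick a SKU with enough CPU, memory, and IO headroom.
compute.virtual_machines.begin_update(
    "my-rg",
    "my-vm",
    VirtualMachineUpdate(hardware_profile=HardwareProfile(vm_size="Standard_D8s_v5")),
).result()
```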
+
-We analyzed data for the past 7 days and identified virtual machines (VMs) with high utilization across different metrics (i.e., CPU, Memory, and VM IO). Those VMs may experience performance issues since they are nearing/at their SKU's limits. Consider upgrading their SKU to improve performance.
-Learn more about [Virtual machine - Improve the performance of highly used VMs using Azure Advisor](https://aka.ms/aa_resizehighusagevmrec_learnmore)
-## Kubernetes
+## Containers
### Unsupported Kubernetes version is detected
Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with a supported version.
Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
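To check which versions a cluster can move to before upgrading, one option is the azure-mgmt-containerservice Python SDK; the resource names below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

aks = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

# List the Kubernetes versions the control plane can upgrade to.
profile = aks.managed_clusters.get_upgrade_profile("my-rg", "my-aks-cluster")
for upgrade in profile.control_plane_profile.upgrades or []:
    print(upgrade.kubernetes_version)
```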
-## DataFactory
+### Unsupported Kubernetes version is detected
+
+Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with a supported version.
+
+Learn more about [HDInsight Cluster Pool - UnsupportedHiloAKSVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
+
+### Clusters with a single node pool
+
+We recommend that you add one or more node pools instead of using a single node pool. Multiple pools help to isolate critical system pods from your application to prevent misconfigured or rogue application pods from accidentally killing system pods.
+
+Learn more about [Kubernetes service - ClustersWithASingleNodePool (Clusters with a Single Node Pool)](/azure/aks/use-system-pools?tabs=azure-cli#system-and-user-node-pools).
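As one illustration (not the only way), a user node pool could be added with the azure-mgmt-containerservice Python SDK; the pool name, node count, and VM size below are placeholder values:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import AgentPool

aks = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

# Add a user node pool so application pods can be kept off the system pool.
aks.agent_pools.begin_create_or_update(
    "my-rg",
    "my-aks-cluster",
    "userpool1",
    AgentPool(count=2, vm_size="Standard_DS3_v2", mode="User", os_type="Linux"),
).result()
```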
+
+### Update Fleet API to the latest version
+
+We have identified SDK calls from outdated Fleet API for resources under your subscription. We recommend switching to the latest SDK version, which ensures you receive the latest features and performance improvements.
+
+Learn more about [Kubernetes fleet manager | PREVIEW - UpdateToLatestFleetApi (Update Fleet API to the latest Version)](/azure/kubernetes-fleet/update-orchestration).
+## Databases
+
+### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1
+
+You're using a query page size of 100 for queries against your Azure Cosmos DB container. We recommend using a page size of -1 for faster scans.
+
+Learn more about [Azure Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count).
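With the azure-cosmos Python SDK, for instance, the page size is the `max_item_count` argument of `query_items`; everything else in this sketch (endpoint, key, database, container, and query) is a placeholder:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("mydb").get_container_client("mycontainer")

# max_item_count=-1 lets the service pick the page size instead of capping it at 100.
items = container.query_items(
    query="SELECT * FROM c WHERE c.status = @status",
    parameters=[{"name": "@status", "value": "active"}],
    enable_cross_partition_query=True,
    max_item_count=-1,
)
for item in items:
    print(item["id"])
```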
+
+### Add composite indexes to your Azure Cosmos DB container
+
+Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It's recommended to add composite indexes to your containers' indexing policy to improve the RU consumption and decrease the latency of these queries.
+
+Learn more about [Azure Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes).
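For illustration, a composite index is declared in the container's indexing policy. The sketch below uses the azure-cosmos Python SDK with placeholder property names, and assumes the partition key passed to `replace_container` matches the container's existing key:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.get_database_client("mydb")

# Composite index matching a query such as:
#   SELECT * FROM c ORDER BY c.lastName ASC, c.firstName ASC
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": '/"_etag"/?'}],
    "compositeIndexes": [
        [
            {"path": "/lastName", "order": "ascending"},
            {"path": "/firstName", "order": "ascending"},
        ]
    ],
}

database.replace_container(
    "mycontainer",
    partition_key=PartitionKey(path="/pk"),
    indexing_policy=indexing_policy,
)
```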
+
+### Optimize your Azure Cosmos DB indexing policy to only index what's needed
+
+Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries.
+
+Learn more about [Azure Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md).
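A sketch of what a narrower policy might look like (the property paths are placeholders); it could be applied with `replace_container` as in the previous example:

```python
# Exclude everything by default and include only the properties your queries
# actually filter or sort on.
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [
        {"path": "/customerId/?"},
        {"path": "/orderDate/?"},
    ],
    "excludedPaths": [{"path": "/*"}],
}
```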
+
+### Use hierarchical partition keys for optimal data distribution
+
+Your account has a custom setting that allows the logical partition size in a container to exceed the limit of 20 GB. The Azure Cosmos DB team applied this setting as a temporary measure to give you time to rearchitect your application with a different partition key. It isn't recommended as a long-term solution, as SLA guarantees aren't honored when the limit is increased. You can now use hierarchical partition keys (preview) to rearchitect your application. The feature allows you to exceed the 20-GB limit by setting up to three partition keys, ideal for multitenant scenarios or workloads that use synthetic keys.
-### Review your throttled Data Factory Triggers
+Learn more about [Azure Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/).
+
+### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK
+
+We noticed that your Azure Cosmos DB applications are using Gateway mode via the Azure Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
+
+Learn more about [Azure Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
+
+### Enhance Performance by Scaling Up for Optimal Resource Utilization
+
+Making efficient use of your system's resources is crucial for maintaining good performance. Our system monitors CPU usage and triggers an alert when it crosses the 90% threshold over a 12-hour period. The alert informs Azure Cosmos DB for MongoDB vCore users of the elevated CPU consumption and provides guidance on scaling up to a higher tier. Upgrading to a more powerful tier can improve performance.
+
+Learn more about [Scaling and configuring Your Azure Cosmos DB for MongoDB vCore cluster](/azure/cosmos-db/mongodb/vcore/how-to-scale-cluster).
-A high volume of throttling has been detected in an event-based trigger that runs in your Data Factory resource. This is causing your pipeline runs to drop from the run queue. Review the trigger definition to resolve issues and increase performance.
+### PerformanceBoostervCore
-Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](https://aka.ms/adf-create-event-trigger).
+When CPU usage surpasses 90% within a 12-hour timeframe, users are notified about the high usage and advised to scale up to a higher tier to get better performance.
+
+Learn more about [Cosmos DB account - ScaleUpvCoreRecommendation (PerformanceBoostervCore)](/azure/cosmos-db/mongodb/vcore/how-to-scale-cluster).
-## MariaDB
### Scale the storage limit for MariaDB server
-Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases
+Our system shows that the server might be constrained because it's approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases.

Learn more about [MariaDB server - OrcasMariaDbStorageLimit (Scale the storage limit for MariaDB server)](https://aka.ms/mariadbstoragelimits).

### Increase the MariaDB server vCores
-Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
Learn more about [MariaDB server - OrcasMariaDbCpuOverload (Increase the MariaDB server vCores)](https://aka.ms/mariadbpricing).

### Scale the MariaDB server to higher SKU
-Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connections requests which adversely affect the performance. To improve performance, we recommend moving to higher memory SKU by increasing vCore or switching to Memory-Optimized SKUs.
+Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connection requests that adversely affect performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory-Optimized SKUs.

Learn more about [MariaDB server - OrcasMariaDbConcurrentConnection (Scale the MariaDB server to higher SKU)](https://aka.ms/mariadbconnectionlimits).

### Move your MariaDB server to Memory Optimized SKU
-Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS.
+Our system shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.

Learn more about [MariaDB server - OrcasMariaDbMemoryCache (Move your MariaDB server to Memory Optimized SKU)](https://aka.ms/mariadbpricing).

### Increase the reliability of audit logs
-Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
+Our system shows that the server's audit logs might have been lost over the past day. Lost audit logs can occur when your server is experiencing a CPU-heavy workload, or a server generates a large number of audit logs over a short time period. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](https://aka.ms/mariadb-audit-logs).
-## MySQL
- ### Scale the storage limit for MySQL server
-Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases
+Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases.

Learn more about [MySQL server - OrcasMySQLStorageLimit (Scale the storage limit for MySQL server)](https://aka.ms/mysqlstoragelimits).

### Scale the MySQL server to higher SKU
-Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connections requests which adversely affect the performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCore or switching to Memory-Optimized SKUs.
+Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connection requests that adversely affect performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory-Optimized SKUs.

Learn more about [MySQL server - OrcasMySQLConcurrentConnection (Scale the MySQL server to higher SKU)](https://aka.ms/mysqlconnectionlimits).

### Increase the MySQL server vCores
-Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
Learn more about [MySQL server - OrcasMySQLCpuOverload (Increase the MySQL server vCores)](https://aka.ms/mysqlpricing).

### Move your MySQL server to Memory Optimized SKU
-Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS.
+Our system shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.

Learn more about [MySQL server - OrcasMySQLMemoryCache (Move your MySQL server to Memory Optimized SKU)](https://aka.ms/mysqlpricing).

### Add a MySQL Read Replica server
-Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
+Our system shows that you might have a read intensive workload running, which results in resource contention for this server. Resource contention might lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
Learn more about [MySQL server - OrcasMySQLReadReplica (Add a MySQL Read Replica server)](https://aka.ms/mysqlreadreplica).

### Improve MySQL connection management
-Our internal telemetry indicates that your application connecting to MySQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. This can be done by configuring a server side connection-pooler, such as ProxySQL.
+Our system shows that your application connecting to MySQL server might be managing connections poorly, which might result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. You can do this by configuring a server-side connection pooler, such as ProxySQL.
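As one illustration of reducing short-lived connections from the application side, a small client-side pool with mysql-connector-python is sketched below (server and credential values are placeholders); a server-side pooler such as ProxySQL complements this by pooling across all clients:

```python
from mysql.connector import pooling

# A small client-side pool; connections are reused instead of opened per request.
pool = pooling.MySQLConnectionPool(
    pool_name="app_pool",
    pool_size=5,
    host="<server-name>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
    database="appdb",
)

conn = pool.get_connection()        # borrow a connection from the pool
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    cur.fetchall()
finally:
    conn.close()                    # returns the connection to the pool
```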
Learn more about [MySQL server - OrcasMySQLConnectionPooling (Improve MySQL connection management)](https://aka.ms/azure_mysql_connection_pooling).

### Increase the reliability of audit logs
-Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
+Our system shows that the server's audit logs might have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short time period. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](https://aka.ms/mysql-audit-logs).

### Improve performance by optimizing MySQL temporary-table sizing
-Our internal telemetry indicates that your MySQL server may be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This may result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions.
+Our system shows that your MySQL server might be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This might result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions.
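For example, both values can be raised for a single session with plain SQL (the 64 MB figure is only an example, and on Azure Database for MySQL they can also be changed as server parameters); a sketch with mysql-connector-python and placeholder connection details:

```python
import mysql.connector

conn = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
    database="appdb",
)
cur = conn.cursor()

# Raise both values together so in-memory temporary tables aren't capped by the smaller one.
cur.execute("SET SESSION tmp_table_size = 64 * 1024 * 1024")
cur.execute("SET SESSION max_heap_table_size = 64 * 1024 * 1024")
```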
Learn more about [MySQL server - OrcasMySqlTmpTables (Improve performance by optimizing MySQL temporary-table sizing)](https://aka.ms/azure_mysql_tmp_table).

### Improve MySQL connection latency
-Our internal telemetry indicates that your application connecting to MySQL server may not be managing connections efficiently. This may result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver.
+Our system shows that your application connecting to MySQL server might be managing connections poorly. This might result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver.
Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](https://aka.ms/azure_mysql_connection_redirection).

### Increase the storage limit for MySQL Flexible Server
-Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
+Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlStorageUpsell (Increase the storage limit for MySQL Flexible Server)](https://aka.ms/azure_mysql_flexible_server_storage).

### Scale the MySQL Flexible Server to a higher SKU
-Our telemetry indicates that your Flexible Server is exceeding the connection limits associated with your current SKU. A large number of failed connection requests may adversely affect server performance. To improve performance, we recommend increasing the number of vCores or switching to a higher SKU.
+Our system shows that your Flexible Server is exceeding the connection limits associated with your current SKU. A large number of failed connection requests might adversely affect server performance. To improve performance, we recommend increasing the number of vCores or switching to a higher SKU.
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlConnectionUpsell (Scale the MySQL Flexible Server to a higher SKU)](https://aka.ms/azure_mysql_flexible_server_storage).

### Increase the MySQL Flexible Server vCores.
-Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlCpuUpcell (Increase the MySQL Flexible Server vCores.)](https://aka.ms/azure_mysql_flexible_server_pricing).

### Improve performance by optimizing MySQL temporary-table sizing.
-Our internal telemetry indicates that your MySQL server may be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This may result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions.
+Our system shows that your MySQL server might be incurring unnecessary I/O overhead due to low temporary-table parameter settings. Unnecessary I/O overhead might result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions.
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlTmpTable (Improve performance by optimizing MySQL temporary-table sizing.)](https://dev.mysql.com/doc/refman/8.0/en/internal-temporary-tables.html#internal-temporary-tables-engines).

### Move your MySQL server to Memory Optimized SKU
-Our internal telemetry shows that there is high memory usage for this server which can result in slower query performance and increased IOPS. To improve performance, please review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS.
+Our system shows that there is high memory usage for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.

Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlMemoryUpsell (Move your MySQL server to Memory Optimized SKU)](https://aka.ms/azure_mysql_flexible_server_storage).

### Add a MySQL Read Replica server
-Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
+Our system shows that you might have a read intensive workload running, which results in resource contention for this server. This might lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlReadReplicaUpsell (Add a MySQL Read Replica server)](https://aka.ms/flexible-server-mysql-read-replicas).
-## PostgreSQL
- ### Increase the work_mem to avoid excessive disk spilling from sort and hash
-Our internal telemetry shows that the configuration work_mem is too small for your PostgreSQL server which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server which will help to reduce the scenarios when the sort or hash happens on disk, thereby improving the overall query performance.
+Our system shows that the configuration work_mem is too small for your PostgreSQL server which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server, which helps to reduce the scenarios when the sort or hash happens on disk and improves the overall query performance.
Learn more about [PostgreSQL server - OrcasPostgreSqlWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
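For instance, work_mem can be raised for a single session around a sort-heavy query; the sketch below uses psycopg2, and the 64MB value, table, and column names are placeholders rather than recommendations:

```python
import psycopg2

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="appdb",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)

with conn.cursor() as cur:
    # Raise work_mem for this session only, so the sort can happen in memory.
    cur.execute("SET work_mem = '64MB'")
    cur.execute("SELECT customer_id FROM orders ORDER BY order_date DESC LIMIT 100")
    rows = cur.fetchall()
```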
-### Scale the storage limit for PostgreSQL server
-
-Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
+### Boost your workload performance by 30% with the new Ev5 compute hardware
-### Distribute data in server group to distribute workload among nodes
+With the new Ev5 compute hardware, you can boost workload performance by 30% with higher concurrency and better throughput. Navigate to the Compute+Storage option in the Azure portal and switch to Ev5 compute at no extra cost. Ev5 compute provides the best performance among comparable VM series in terms of QPS and latency.
-It looks like the data has not been distributed in this server group but stays on the coordinator. For full Hyperscale (Citus) benefits distribute data on worker nodes in this server group.
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlComputeSeriesUpgradeEv5 (Boost your workload performance by 30% with the new Ev5 compute hardware)](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698).
-Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusDistributeData (Distribute data in server group to distribute workload among nodes)](https://go.microsoft.com/fwlink/?linkid=2135201).
-### Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly
+### Scale the storage limit for PostgreSQL server
-It looks like the data is not well balanced between worker nodes in this Hyperscale (Citus) server group. In order to use each worker node of the Hyperscale (Citus) server group effectively rebalance data in this server group.
+Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases.
-Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusRebalanceData (Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly)](https://go.microsoft.com/fwlink/?linkid=2148869).
+Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
### Scale the PostgreSQL server to higher SKU
-Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connections requests which adversely affect the performance. To improve performance, we recommend moving to higher memory SKU by increasing vCore or switching to Memory-Optimized SKUs.
+Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connection requests that adversely affect performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory-Optimized SKUs.

Learn more about [PostgreSQL server - OrcasPostgreSqlConcurrentConnection (Scale the PostgreSQL server to higher SKU)](https://aka.ms/postgresqlconnectionlimits).

### Move your PostgreSQL server to Memory Optimized SKU
-Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS.
+Our system shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.

Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your PostgreSQL server to Memory Optimized SKU)](https://aka.ms/postgresqlpricing).

### Add a PostgreSQL Read Replica server
-Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
+Our system shows that you might have a read intensive workload running, which results in resource contention for this server. Resource contention can lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica).

### Increase the PostgreSQL server vCores
-Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
Learn more about [PostgreSQL server - OrcasPostgreSqlCpuOverload (Increase the PostgreSQL server vCores)](https://aka.ms/postgresqlpricing).

### Improve PostgreSQL connection management
-Our internal telemetry indicates that your PostgreSQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. This can be done by configuring a server side connection-pooler, such as PgBouncer.
+Our system shows that your PostgreSQL server might not be managing connections efficiently, which can result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections by configuring a server-side connection pooler, such as PgBouncer.
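One client-side way to cut down on short-lived connections is a small psycopg2 connection pool, sketched below with placeholder connection details; a server-side pooler such as PgBouncer addresses the same problem for all clients at once:

```python
from psycopg2 import pool

# Reuse a small set of long-lived connections instead of opening one per request.
pg_pool = pool.SimpleConnectionPool(
    1,      # minimum connections kept open
    10,     # maximum connections
    host="<server-name>.postgres.database.azure.com",
    dbname="appdb",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)

conn = pg_pool.getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
finally:
    pg_pool.putconn(conn)   # hand the connection back rather than closing it
```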
Learn more about [PostgreSQL server - OrcasPostgreSqlConnectionPooling (Improve PostgreSQL connection management)](https://aka.ms/azure_postgresql_connection_pooling).

### Improve PostgreSQL log performance
-Our internal telemetry indicates that your PostgreSQL server has been configured to output VERBOSE error logs. This can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting.
+Our system shows that your PostgreSQL server has been configured to output VERBOSE error logs. This setting can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting.
Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve PostgreSQL log performance)](https://aka.ms/azure_postgresql_log_settings).

### Optimize query statistics collection on an Azure Database for PostgreSQL
-Our internal telemetry indicates that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
+Our system shows that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).

### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting
-Our internal telemetry indicates that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
+Our system shows that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store).

### Increase the storage limit for PostgreSQL Flexible Server
-Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
+Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode.
Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits).
-### Optimize logging settings by setting LoggingCollector to -1
+#### Optimize logging settings by setting LoggingCollector to -1
Optimize logging settings by setting LoggingCollector to -1
-### Optimize logging settings by setting LogDuration to OFF
+Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
+
+#### Optimize logging settings by setting LogDuration to OFF
Optimize logging settings by setting LogDuration to OFF
-### Optimize logging settings by setting LogStatement to NONE
+Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
+
+#### Optimize logging settings by setting LogStatement to NONE
Optimize logging settings by setting LogStatement to NONE
-### Optimize logging settings by setting ReplaceParameter to OFF
+Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
+
+#### Optimize logging settings by setting ReplaceParameter to OFF
Optimize logging settings by setting ReplaceParameter to OFF
-### Optimize logging settings by setting LoggingCollector to OFF
+Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
+
+#### Optimize logging settings by setting LoggingCollector to OFF
Optimize logging settings by setting LoggingCollector to OFF
+Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
+ ### Increase the storage limit for Hyperscale (Citus) server group
-Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).

### Optimize log_statement settings for PostgreSQL on Azure Database
-Our internal telemetry indicates that you have log_statement enabled, for better performance, set it to NONE
+Our system shows that you have log_statement enabled. For better performance, set it to NONE.

Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).

### Increase the work_mem to avoid excessive disk spilling from sort and hash
-Our internal telemetry shows that the configuration work_mem is too small for your PostgreSQL server which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server which will help to reduce the scenarios when the sort or hash happens on disk, thereby improving the overall query performance.
+Our system shows that the configuration work_mem is too small for your PostgreSQL server, resulting in disk spilling and degraded query performance. We recommend increasing the work_mem limit for the server, which helps to reduce the scenarios when the sort or hash happens on disk and improves the overall query performance.
Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).

### Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning
-Our internal telemetry suggests that you can improve storage performance by enabling Intelligent tuning
+Our system suggests that you can improve storage performance by enabling Intelligent tuning.

Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](../postgresql/flexible-server/concepts-intelligent-tuning.md).

### Optimize log_duration settings for PostgreSQL on Azure Database
-Our internal telemetry indicates that you have log_duration enabled, for better performance, set it to OFF
+Our system shows that you have log_duration enabled. For better performance, set it to OFF.

Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogDuration (Optimize log_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).

### Optimize log_min_duration settings for PostgreSQL on Azure Database
-Our internal telemetry indicates that you have log_min_duration enabled, for better performance, set it to -1
+Our system shows that you have log_min_duration enabled. For better performance, set it to -1.

Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).

### Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database
-Our internal telemetry indicates that you have pg_qs.query_capture_mode enabled, for better performance, set it to NONE
+Our system shows that you have pg_qs.query_capture_mode enabled. For better performance, set it to NONE.

Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-query-store-best-practices.md).

### Optimize PostgreSQL performance by enabling PGBouncer
-Our Internal telemetry indicates that you can improve PostgreSQL performance by enabling PGBouncer
+Our system shows that you can improve PostgreSQL performance by enabling PgBouncer.
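After the built-in PgBouncer is enabled on a flexible server, clients opt in by connecting to port 6432 instead of 5432; a minimal psycopg2 sketch with placeholder connection details:

```python
import psycopg2

# Port 6432 is the built-in PgBouncer endpoint on Azure Database for PostgreSQL
# flexible server; port 5432 still connects directly to the database engine.
conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    port=6432,
    dbname="appdb",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)
```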
Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](../postgresql/flexible-server/concepts-pgbouncer.md).

### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
-Our internal telemetry indicates that you have log_error_verbosity enabled, for better performance, set it to DEFAULT
+Our system shows that you have log_error_verbosity enabled. For better performance, set it to DEFAULT.

Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).

### Increase the storage limit for Hyperscale (Citus) server group
-Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
Learn more about [Hyperscale (Citus) server group - MarlinStorageLimitRecommendation (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).

### Migrate your database from SSPG to FSPG
-Consider our new offering Azure Database for PostgreSQL Flexible Server that provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls and simplified developer experience. Learn more.
+Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience.
Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](../postgresql/how-to-upgrade-using-dump-and-restore.md).

### Move your PostgreSQL Flexible Server to Memory Optimized SKU
-Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, please review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS.
+Our system shows that there is high churn in the buffer pool for this server, resulting in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
Learn more about [PostgreSQL server - OrcasMeruMemoryUpsell (Move your PostgreSQL Flexible Server to Memory Optimized SKU)](https://aka.ms/azure_postgresql_flexible_server_pricing).
-## DesktopVirtualization
-
-### Improve user experience and connectivity by deploying VMs closer to user's location.
+### Improve your Cache and application performance when running with high network bandwidth
-We have determined that your VMs are located in a region different or far from where your users are connecting from, using Azure Virtual Desktop. This may lead to prolonged connection response times and will impact overall user experience on Azure Virtual Desktop. When creating VMs for your host pools, you should attempt to use a region closer to the user. Having close proximity ensures continuing satisfaction with the Azure Virtual Desktop service and a better overall quality of experience.
+Cache instances perform best when not running under high network bandwidth that might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce network bandwidth or scale to a different size or SKU with more capacity.
-Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md).
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
-### Change the max session limit for your depth first load balanced host pool to improve VM performance
+### Improve your Cache and application performance when running with many connected clients
-Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions will be directed to the same session host and this may cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, you should also set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs. To fix this, open your host pool's properties and change the value next to the "Max session limit" setting.
+Cache instances perform best when not running with a high number of connected clients, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the number of connections or scale to a different size or SKU with more capacity.
-Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance )](../virtual-desktop/configure-host-pool-load-balancing.md).
+Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
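One common client-side mitigation is to share a single connection pool per process instead of opening a new connection per request; a sketch with redis-py (cache name and access key are placeholders):

```python
import redis

# One shared pool per process keeps the connected-client count predictable.
pool = redis.ConnectionPool.from_url(
    "rediss://:<access-key>@<cache-name>.redis.cache.windows.net:6380/0",
    max_connections=50,
)
client = redis.Redis(connection_pool=pool)
client.ping()
```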
-## Azure Cosmos DB
+### Improve your Cache and application performance when running with many connected clients
-### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1
+Cache instances perform best when not running with a high number of connected clients, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the number of connections or scale to a different size or SKU with more capacity.
-You are using the query page size of 100 for queries for your Azure Cosmos DB container. We recommend using a page size of -1 for faster scans.
+Learn more about [Redis Cache Server - RedisCacheConnectedClientsHigh (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
-Learn more about [Azure Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count).
+### Improve your Cache and application performance when running with high server load
-### Add composite indexes to your Azure Cosmos DB container
+Cache instances perform best when not running under high server load, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity.
-Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It is recommended to add composite indexes to your containers' indexing policy to improve the RU consumption and decrease the latency of these queries.
+Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
-Learn more about [Azure Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes).
+### Improve your Cache and application performance when running with high server load
-### Optimize your Azure Cosmos DB indexing policy to only index what's needed
+Cache instances perform best when not running under high server load, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity.
-Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries.
+Learn more about [Redis Cache Server - RedisCacheServerLoadHigh (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
-Learn more about [Azure Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md).
+### Improve your Cache and application performance when running with high memory pressure
-### Use hierarchical partition keys for optimal data distribution
+Cache instances perform best when not running under high memory pressure, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce used memory or scale to a different size or SKU with more capacity.
-This account has a custom setting that allows the logical partition size in a container to exceed the limit of 20 GB. This setting was applied by the Azure Cosmos DB team as a temporary measure to give you time to re-architect your application with a different partition key. It is not recommended as a long-term solution, as SLA guarantees are not honored when the limit is increased. You can now use hierarchical partition keys (preview) to re-architect your application. The feature allows you to exceed the 20 GB limit by setting up to three partition keys, ideal for multi-tenant scenarios or workloads that use synthetic keys.
+Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
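To see how close the cache is to its memory limit, and to keep volatile keys from accumulating, a redis-py sketch such as the following can help (connection string, key names, and the 3600-second TTL are placeholders):

```python
import redis

client = redis.Redis.from_url(
    "rediss://:<access-key>@<cache-name>.redis.cache.windows.net:6380/0"
)

# INFO MEMORY shows how close the cache is to its maxmemory limit.
mem = client.info("memory")
print(mem["used_memory_human"], "of", mem["maxmemory_human"])

# Giving keys a TTL is one way to keep memory pressure down.
client.set("session:12345", "payload", ex=3600)
```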
-Learn more about [Azure Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/).
+### Improve your Cache and application performance when memory RSS usage is high.
-### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK
+Cache instances perform best when not running under high memory pressure, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce used memory or scale to a different size or SKU with more capacity.
-We noticed that your Azure Cosmos DB applications are using Gateway mode via the Azure Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
+Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSS (Improve your Cache and application performance when memory rss usage is high.)](https://aka.ms/redis/recommendations/memory).
-Learn more about [Azure Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
+### Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache
-### Enhance Performance by Scaling Up for Optimal Resource Utilization
+Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache. If the client host machine is running hot on memory, CPU, or network bandwidth, cache responses don't reach your application fast enough and can result in higher latency.
-Maximizing the efficiency of your system's resources is crucial for maintaining top-notch performance. Our system closely monitors CPU usage, and when it crosses the 90% threshold over a 12-hour period, a proactive alert is triggered. This alert not only informs Azure Cosmos DB for MongoDB vCore users of the elevated CPU consumption but also provides valuable guidance on scaling up to a higher tier. By upgrading to a more robust tier, you can unlock improved performance and ensure your system operates at its peak potential.
+Learn more about [Redis Cache Server - UnresponsiveClient (Cache instances perform best when the host machines where client application runs is able to keep up with responses from the cache.)](/azure/azure-cache-for-redis/cache-troubleshoot-client).
-Learn more about [Scaling and configuring Your Azure Cosmos DB for MongoDB vCore cluster](../cosmos-db/mongodb/vcore/how-to-scale-cluster.md)
-## HDInsight
+## DevOps
-### Unsupported Kubernetes version is detected
+### Update to the latest AMS API Version
-Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with a supported version.
+We have identified calls to an Azure Media Services (AMS) API version that is not recommended. We recommend switching to the latest AMS API version to ensure uninterrupted access to AMS, latest features, and performance improvements.
-Learn more about [HDInsight Cluster Pool - UnsupportedHiloAKSVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
+Learn more about [Monitor - UpdateToLatestAMSApiVersion (Update to the latest AMS API Version)](https://aka.ms/AMSAdvisor).
-### Reads happen on most recent data
+### Upgrade to the latest Workloads SDK version
-More than 75% of your read requests are landing on the memstore. That indicates that the reads are primarily on recent data. This suggests that even if a flush happens on the memstore, the recent file needs to be accessed and that file needs to be in the cache.
+Upgrade to the latest Workloads SDK version to get the best results in terms of model quality, performance and service availability.
-Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](../hdinsight/hbase/apache-hbase-advisor.md).
+Learn more about [Monitor - UpgradeToLatestAMSSdkVersion (Upgrade to the latest Workloads SDK version)](https://aka.ms/AMSAdvisor).
-### Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.
-You are seeing this advisor recommendation because HDInsight team's system log shows that in the past 7 days, your cluster has encountered the following scenarios:
- 1. High WAL sync time latency
- 2. High write request count (at least 3 one hour windows of over 1000 avg_write_requests/second/node)
-These conditions are indicators that your cluster is suffering from high write latencies. This could be due to heavy workload performed on your cluster.
-To improve the performance of your cluster, you may want to consider utilizing the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD-managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, provides low write-latency and better resiliency for your applications.
-To read more on this feature, please visit link:
+## Integration
-Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](../hdinsight/hbase/apache-hbase-accelerated-writes.md).
+### Upgrade your API Management resource to an alternative version
-### More than 75% of your queries are full scan queries.
+Your subscription is running on versions that have been scheduled for deprecation. On 30 September 2023, all API versions for the Azure API Management service prior to 2021-08-01 retire and API calls fail. Upgrade to a newer API version to prevent disruption to your services.
-More than 75% of the scan queries on your cluster are doing a full region/table scan. Modify your scan queries to avoid full region or table scans.
+Learn more about [Api Management - apimgmtdeprecation (Upgrade your API Management resource to an alternative version)](https://azure.microsoft.com/updates/api-versions-being-retired-for-azure-api-management/).
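If you call the API Management control plane directly, the fix is to use a supported `api-version` (2021-08-01 or later) on your Azure Resource Manager requests. A minimal sketch, assuming the Azure CLI and placeholder subscription, resource group, and service names:

```bash
# Sketch: query an API Management service with a supported control-plane API version.
# <subscription-id>, contoso-rg, and contoso-apim are placeholders.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.ApiManagement/service/contoso-apim?api-version=2021-08-01"
```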
-Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](../hdinsight/hbase/apache-hbase-advisor.md).
-### Check your region counts as you have blocking updates.
-Region counts needs to be adjusted to avoid updates getting blocked. It might require a scale up of the cluster by adding new nodes.
-Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](../hdinsight/hbase/apache-hbase-advisor.md).
-### Consider increasing the flusher threads
+## Mobile
-The flush queue size in your region servers is more than 100 or there are updates getting blocked frequently. Tuning of the flush handler is recommended.
+### Use recommended version of Chat SDK
-Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](../hdinsight/hbase/apache-hbase-advisor.md).
+Azure Communication Services Chat SDK can be used to add rich, real-time chat to your applications. Update to the recommended version of Chat SDK to ensure the latest fixes and features.
-### Consider increasing your compaction threads for compactions to complete faster
+Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](../communication-services/concepts/chat/sdk-features.md).
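For a JavaScript app, updating usually means bumping the package versions. A minimal sketch, assuming an npm-based project that already depends on the Chat SDK:

```bash
# Sketch: update the Azure Communication Services Chat SDK in an npm project.
npm install @azure/communication-chat@latest @azure/communication-common@latest
```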
-The compaction queue in your region servers is more than 2000 suggesting that more data requires compaction. Slower compactions can impact read performance as the number of files to read are more. More files without compaction can also impact the heap usage related to how files interact with Azure file system.
+### Use recommended version of Resource Manager SDK
-Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor).
+Resource Manager SDK can be used to create and manage Azure Communication Services resources. Update to the recommended version of Resource Manager SDK to ensure the latest fixes and features.
-## Automanage
+Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](../communication-services/quickstarts/create-communication-resource.md?pivots=platform-net&tabs=windows).
-### Update Automanage to the latest API Version
+### Use recommended version of Identity SDK
-We have identified sdk calls from outdated API for resources under this subscription. We recommend switching to the latest sdk versions. This ensures you receive the latest features and performance improvements.
+Azure Communication Services Identity SDK can be used to manage identities, users, and access tokens. Update to the recommended version of Identity SDK to ensure the latest fixes and features.
-Learn more about [Machine - Azure Arc - UpdateToLatestApiHci (Update Automanage to the latest API Version)](/azure/automanage/reference-sdk).
+Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](../communication-services/concepts/sdk-options.md).
-## KeyVault
+### Use recommended version of SMS SDK
-### Update Key Vault SDK Version
+Azure Communication Services SMS SDK can be used to send and receive SMS messages. Update to the recommended version of SMS SDK to ensure the latest fixes and features.
-New Key Vault Client Libraries are split to keys, secrets, and certificates SDKs, which are integrated with recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. It also contains several performance fixes to issues reported by customers and proactively identified through our QA process.<br><br>**PLEASE DISMISS:**<br>If Key Vault is integrated with Azure Storage, Disk or other Azure services which can use old Key Vault SDK and when all your current custom applications are using .NET SDK 4.0 or above.
+Learn more about [Communication service - UpgradeSmsSdk (Use recommended version of SMS SDK)](/azure/communication-services/concepts/telephony-sms/sdk-features).
-Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md).
+### Use recommended version of Phone Numbers SDK
-### Update Key Vault SDK Version
+Azure Communication Services Phone Numbers SDK can be used to acquire and manage phone numbers. Update to the recommended version of Phone Numbers SDK to ensure the latest fixes and features.
-New Key Vault Client Libraries are split to keys, secrets, and certificates SDKs, which are integrated with recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. It also contains several performance fixes to issues reported by customers and proactively identified through our QA process.
+Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](../communication-services/concepts/sdk-options.md).
-> [!IMPORTANT]
-> Please be aware that you can only remediate recommendation for custom applications you have access to. Recommendations can be shown due to integration with other Azure services like Storage, Disk encryption, which are in process to update to new version of our SDK. If you use .NET 4.0 in all your applications please dismiss.
+### Use recommended version of Calling SDK
-Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md).
+Azure Communication Services Calling SDK can be used to enable voice, video, screen-sharing, and other real-time communication. Update to the recommended version of Calling SDK to ensure the latest fixes and features.
-## Data Exporer
+Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](../communication-services/concepts/voice-video-calling/calling-sdk-features.md).
-### Right-size Data Explorer resources for optimal performance.
+### Use recommended version of Call Automation SDK
-This recommendation surfaces all Data Explorer resources which exceed the recommended data capacity (80%). The recommended action to improve the performance is to scale to the recommended configuration shown.
+Azure Communication Services Call Automation SDK can be used to make and manage calls, play audio, and configure recording. Update to the recommended version of Call Automation SDK to ensure the latest fixes and features.
-Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance).
+Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](../communication-services/concepts/voice-video-calling/call-automation-apis.md).
-### Review table cache policies for Data Explorer tables
+### Use recommended version of Network Traversal SDK
-This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy). (You'll see the top 10 tables by query percentage that access out-of-cache data). The recommended action to improve the performance: Limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value.
+Azure Communication Services Network Traversal SDK can be used to access TURN servers for low-level data transport. Update to the recommended version of Network Traversal SDK to ensure the latest fixes and features.
-Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy).
+Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](../communication-services/concepts/sdk-options.md).
-### Reduce Data Explorer table cache policy for better performance
+### Use recommended version of Rooms SDK
-Reducing the table cache policy will free up unused data from the resource's cache and improve performance.
+Azure Communication Services Rooms SDK can be used to control who can join a call, when they can meet, and how they can collaborate. Update to the recommended version of Rooms SDK to ensure the latest fixes and features. A non-recommended version was detected in the last 48-60 hours.
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy).
+Learn more about [Communication service - UpgradeRoomsSdk (Use recommended version of Rooms SDK)](/azure/communication-services/concepts/rooms/room-concept).
-## Networking
-### Configure DNS Time to Live to 60 seconds
-Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a health endpoint as quickly as possible.
-Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
+## Networking
-### Configure DNS Time to Live to 20 seconds
+### Upgrade SDK version recommendation
-Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a health endpoint as quickly as possible.
+The latest version of Azure Front Door Standard and Premium Client Library or SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Front Door Standard and Premium.
-Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).
+Learn more about [Front Door Profile - UpgradeCDNToLatestSDKLanguage (Upgrade SDK version recommendation)](https://aka.ms/afd/tiercomparison).
-### Configure DNS Time to Live to 60 seconds
+### Upgrade SDK version recommendation
-Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a health endpoint as quickly as possible.
+The latest version of the Azure Traffic Collector SDK contains fixes to issues proactively identified through our QA process, supports the latest resource model, and has reliability and performance optimizations that can improve your overall experience of using ATC.
-Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
+Learn more about [Azure Traffic Collector - UpgradeATCToLatestSDKLanguage (Upgrade SDK version recommendation)](/azure/expressroute/traffic-collector).
### Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs
-You have been using over 90% of your procured circuit bandwidth recently. If you exceed your allocated bandwidth, you will experience an increase in dropped packets sent over ExpressRoute. Upgrade your circuit bandwidth to maintain performance if your bandwidth needs remain this high.
+You have been using over 90% of your procured circuit bandwidth recently. If you exceed your allocated bandwidth, you experience an increase in dropped packets sent over ExpressRoute. Upgrade your circuit bandwidth to maintain performance if your bandwidth needs remain this high.
Learn more about [ExpressRoute circuit - UpgradeERCircuitBandwidth (Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs)](../expressroute/about-upgrade-circuit-bandwidth.md).
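Bandwidth can be increased in place on an existing circuit. A minimal sketch, assuming the Azure CLI and placeholder circuit and resource group names; choose a bandwidth your connectivity provider supports and that exceeds your current usage:

```bash
# Sketch: increase an ExpressRoute circuit's bandwidth to 1000 Mbps.
# contoso-er-circuit and contoso-rg are placeholder names.
az network express-route update --name contoso-er-circuit \
  --resource-group contoso-rg --bandwidth 1000
```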
-### Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use
-
-Under high traffic load, the VPN gateway may drop packets due to high CPU.
-
-Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).
-
-### Consider increasing the size of your VNet Gateway SKU to address high P2S use
-
-Each gateway SKU can only support a specified count of concurrent P2S connections. Your connection count is close to your gateway limit, so additional connection attempts may fail.
-
-Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consider increasing the size of your VNet Gateway SKU to address high P2S use)](https://aka.ms/HighP2SConnectionsVNetGateway).
-
-### Make sure you have enough instances in your Application Gateway to support your traffic
-
-Your Application Gateway has been running on high utilization recently and under heavy load, you may experience traffic loss or increase in latency. It is important that you scale your Application Gateway according to your traffic and with a bit of a buffer so that you are prepared for any traffic surges or spikes and minimizing the impact that it may have in your QoS. Application Gateway v1 SKU (Standard/WAF) supports manual scaling and v2 SKU (Standard_v2/WAF_v2) support manual and autoscaling. In case of manual scaling, increase your instance count and if autoscaling is enabled, make sure your maximum instance count is set to a higher value so Application Gateway can scale out as the traffic increases
-
-Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw).
-
-## SQL
-
-### Create statistics on table columns
-
-We have detected that you are missing table statistics which may be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
-
-Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics).
-
-### Remove data skew to increase query performance
-
-We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks.
-
-Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew).
-
-### Update statistics on table columns
-
-We have detected that you do not have up-to-date table statistics which may be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
-
-Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics).
-
-### Scale up to optimize cache utilization with SQL Data Warehouse
-
-We have detected that you had high cache used percentage with a low hit percentage. This indicates high cache eviction which can impact the performance of your workload.
-
-Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache).
-
-### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse
-
-We have detected that you had high tempdb utilization which can impact the performance of your workload.
-
-Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb).
-
-### Convert tables to replicated tables with SQL Data Warehouse
-
-We have detected that you may benefit from using replicated tables. When using replicated tables, this will avoid costly data movement operations and significantly increase the performance of your workload.
-
-Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables).
+### Experience more predictable, consistent latency with a private connection to Azure
-### Split staged files in the storage account to increase load performance
+Improve the performance, privacy, and reliability of your business-critical apps by extending your on-premises networks to Azure with Azure ExpressRoute. Establish private ExpressRoute connections directly from your WAN, through a cloud exchange facility, or through POP and IPVPN connections.
-We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more to maximize the parallelism of your load.
+Learn more about [Subscription - AzureExpressRoute (Experience more predictable, consistent latency with a private connection to Azure)](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager).
-Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit).
+### Upgrade Workloads API to the latest version (Azure Center for SAP solutions API)
-### Increase batch size when loading to maximize load throughput, data compression, and query performance
+We have identified calls to an outdated Workloads API version for resources under this resource group. We recommend switching to the latest Workloads API version to ensure uninterrupted access to latest features and performance improvements in Azure Center for SAP solutions. If there are multiple Virtual Instances for SAP solutions (VIS) shown in the recommendation, ensure you update the API version for all VIS resources.
-We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP - a good rule of thumb is a batch size between 100K to 1M rows.
+Learn more about [Subscription - UpdateToLatestWaasApiVersionAtSub (Upgrade Workloads API to the latest version (Azure Center for SAP solutions API))](https://go.microsoft.com/fwlink/?linkid=2228001).
-Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize).
+### Upgrade Workloads SDK to the latest version (Azure Center for SAP solutions SDK)
-### Co-locate the storage account within the same region to minimize latency when loading
+We have identified calls to an outdated Workloads SDK version from resources in this Resource Group. Upgrade to the latest Workloads SDK version to get the latest features and the best results in terms of model quality, performance and service availability for Azure Center for SAP solutions. If there are multiple Virtual Instances for SAP solutions (VIS) shown in the recommendation, ensure you update the SDK version for all VIS resources.
-We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
+Learn more about [Subscription - UpgradeToLatestWaasSdkVersionAtSub (Upgrade Workloads SDK to the latest version (Azure Center for SAP solutions SDK))](https://go.microsoft.com/fwlink/?linkid=2228000).
-Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation).
+### Configure DNS Time to Live to 60 seconds
-## Storage
+Time to Live (TTL) affects how recent a response is when a client makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint more quickly in the case of a failover. Configure your TTL to 60 seconds to route traffic to a healthy endpoint as quickly as possible.
-### Use "Put Blob" for blobs smaller than 256 MB
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
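As a minimal sketch, assuming the Azure CLI and a placeholder profile name, the DNS TTL can be set directly on the profile; the same command with `--ttl 20` applies to the 20-second recommendation that follows:

```bash
# Sketch: set the DNS TTL on a Traffic Manager profile to 60 seconds.
# contoso-tm-profile and contoso-rg are placeholder names.
az network traffic-manager profile update --name contoso-tm-profile \
  --resource-group contoso-rg --ttl 60
```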
-When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized.
+### Configure DNS Time to Live to 20 seconds
-Learn more about [Storage Account - StorageCallPutBlob (Use \""Put Blob\"" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs).
+Time to Live (TTL) affects how recent a response is when a client makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint more quickly in the case of a failover. Configure your TTL to 20 seconds to route traffic to a healthy endpoint as quickly as possible.
-### Upgrade your Storage Client Library to the latest version for better reliability and performance
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).
-The latest version of Storage Client Library/ SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage.
+### Configure DNS Time to Live to 60 seconds
-Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca).
+Time to Live (TTL) affects how recent a response is when a client makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint more quickly in the case of a failover. Configure your TTL to 60 seconds to route traffic to a healthy endpoint as quickly as possible.
-### Upgrade to Standard SSD Disks for consistent and improved performance
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
-Because you are running IaaS virtual machine workloads on Standard HDD managed disks, we wanted to let you know that a Standard SSD disk option is now available for all Azure VM types. Standard SSD disks are a cost-effective storage option optimized for enterprise workloads that need consistent performance. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which will take three to five minutes.
+### Consider increasing the size of your virtual network Gateway SKU to address consistently high CPU use
-Learn more about [Storage Account - StandardSSDForNonPremVM (Upgrade to Standard SSD Disks for consistent and improved performance)](/azure/virtual-machines/windows/disks-types#standard-ssd).
+Under high traffic load, the VPN gateway might drop packets due to high CPU.
-### Use premium performance block blob storage
+Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your virtual network (VNet) Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).
-One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
+### Consider increasing the size of your virtual network Gateway SKU to address high P2S use
-Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob).
+Each gateway SKU can only support a specified count of concurrent P2S connections. Your connection count is close to your gateway limit, so more connection attempts might fail.
-### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance
+Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consider increasing the size of your VNet Gateway SKU to address high P2S use)](https://aka.ms/HighP2SConnectionsVNetGateway).
-We have noticed your Unmanaged HDD Disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which will take three to five minutes.
+### Make sure you have enough instances in your Application Gateway to support your traffic
-Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unmanaged Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
+Your Application Gateway has been running at high utilization recently, and under heavy load you might experience traffic loss or increased latency. It's important that you scale your Application Gateway accordingly and add a buffer so that you're prepared for traffic surges or spikes and minimize the effect they might have on your QoS. Application Gateway v1 SKU (Standard/WAF) supports manual scaling, and v2 SKU (Standard_v2/WAF_v2) supports manual scaling and autoscaling. With manual scaling, increase your instance count. If autoscaling is enabled, make sure your maximum instance count is set to a higher value so Application Gateway can scale out as the traffic increases.
+Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw).
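As a minimal sketch, assuming a v2 SKU gateway and the Azure CLI with placeholder names, the autoscale ceiling (or the manual instance count on v1) can be raised with the generic `--set` argument:

```bash
# Sketch: raise the autoscale maximum instance count on an Application Gateway v2 SKU.
# contoso-appgw and contoso-rg are placeholder names.
az network application-gateway update --name contoso-appgw \
  --resource-group contoso-rg --set autoscaleConfiguration.maxCapacity=10

# For a v1 SKU (manual scaling), increase the instance count instead.
az network application-gateway update --name contoso-appgw \
  --resource-group contoso-rg --set sku.capacity=4
```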
-## Subscription
-### Experience more predictable, consistent latency with a private connection to Azure
-Improve the performance, privacy, and reliability of your business-critical apps by extending your on-premises networks to Azure with Azure ExpressRoute. Establish private ExpressRoute connections directly from your WAN, through a cloud exchange facility, or through POP and IPVPN connections.
-Learn more about [Subscription - AzureExpressRoute (Experience more predictable, consistent latency with a private connection to Azure)](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager).
-## Synapse
-### Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows
+## SAP for Azure
-Clustered columnstore tables are organized in data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. Segment quality can be measured by the number of rows in a compressed row group.
+### To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the App VM OS in SAP workloads
-Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
+To avoid sporadic soft-lockup in the Mellanox driver, reduce the can_queue value in the OS. The value can't be set directly. Add the following kernel boot line options to achieve the same effect: 'hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024'.
-### Update SynapseManagementClient SDK Version
+Learn more about [App Server Instance - AppSoftLockup (To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the App VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000020248).
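A minimal sketch of how the boot options might be added on a SLES VM using GRUB2; the exact file and rebuild command can differ by distribution, and a reboot is required for the change to take effect:

```bash
# Sketch (assumes SLES with GRUB2): append the storvsc options to the kernel command line.
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024"/' /etc/default/grub

# Rebuild the GRUB configuration, then reboot for the options to take effect.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
```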
-New SynapseManagementClient is using .NET SDK 4.0 or above.
+### To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the ASCS VM OS in SAP workloads
-Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK).
+To avoid sporadic soft-lockup in the Mellanox driver, reduce the can_queue value in the OS. The value can't be set directly. Add the following kernel boot line options to achieve the same effect: 'hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024'.
-## Web
+Learn more about [Central Server Instance - AscsoftLockup (To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the ASCS VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000020248).
-### Move your App Service Plan to PremiumV2 for better performance
-
-Your app served more than 1000 requests per day for the past 3 days. Your app may benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and doubled memory-to-core ratio when compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation.
+### To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the DB VM OS in SAP workloads
-Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2).
+To avoid sporadic soft-lockup in the Mellanox driver, reduce the can_queue value in the OS. The value can't be set directly. Add the following kernel boot line options to achieve the same effect: 'hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024'.
-### Check outbound connections from your App Service resource
+Learn more about [Database Instance - DBSoftLockup (To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the DB VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000020248).
-Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
+### For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter
-Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
+The parameter net.ipv4.tcp_wmem specifies minimum, default, and maximum send buffer sizes that are used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value must not exceed the net.core.wmem_max parameter.
-## SAP on Azure Workloads
+Learn more about [Database Instance - WriteBuffersAllocated (For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
-### For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter
+### For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter
-The parameter net.ipv4.tcp_wmem specifies minimum, default, and maximum send buffer sizes that are used for a TCP socket. Set the parameter as per SAP note: 302436 to certify HANA DB to run with ANF and improve file system performance. The maximum value should not exceed net.core.wmem_max parameter
+The parameter net.ipv4.tcp_rmem specifies minimum, default, and maximum receive buffer sizes used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value must not exceed the net.core.rmem_max parameter.
-Learn more about [Database Instance - WriteBuffersAllocated (For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+Learn more about [Database Instance - OptimiseReadTcp (For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
### For improved file system performance in HANA DB with ANF, optimize wmem_max OS parameter
-In HANA DB with ANF storage type, the maximum write socket buffer, defined by the parameter, net.core.wmem_max must be set large enough to handle outgoing network packets. This configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note: 3024346
+In HANA DB with ANF storage type, the maximum write socket buffer, defined by the parameter net.core.wmem_max, must be set large enough to handle outgoing network packets. The net.core.wmem_max configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note 3024346.
Learn more about [Database Instance - MaxWriteBuffer (For improved file system performance in HANA DB with ANF, optimize wmem_max OS parameter)](https://launchpad.support.sap.com/#/notes/3024346). ### For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter
-The parameter net.ipv4.tcp_rmem specifies minimum, default, and maximum receive buffer sizes used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value should not exceed net.core.rmem_max parameter
+The parameter net.ipv4.tcp_rmem specifies minimum, default, and maximum receive buffer sizes used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value must not exceed the net.core.rmem_max parameter.
Learn more about [Database Instance - OptimizeReadTcp (For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346). ### For improved file system performance in HANA DB with ANF, optimize rmem_max OS parameter
-In HANA DB with ANF storage type, the maximum read socket buffer, defined by the parameter, net.core.rmem_max must be set large enough to handle incoming network packets. This configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note: 3024346.
+In HANA DB with ANF storage type, the maximum read socket buffer, defined by the parameter net.core.rmem_max, must be set large enough to handle incoming network packets. The net.core.rmem_max configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note 3024346.
Learn more about [Database Instance - MaxReadBuffer (For improved file system performance in HANA DB with ANF, optimize rmem_max OS parameter)](https://launchpad.support.sap.com/#/notes/3024346). ### For improved file system performance in HANA DB with ANF, set receiver backlog queue size to 300000
-The parameter net.core.netdev_max_backlog specifies the size of the receiver backlog queue, used if a Network interface receives packets faster than the kernel can process. Set the parameter as per SAP note: 3024346. This configuration certifies HANA DB to run with ANF and improves file system performance.
+The parameter net.core.netdev_max_backlog specifies the size of the receiver backlog queue, used if a network interface receives packets faster than the kernel can process them. Set the parameter as per SAP note: 3024346. The net.core.netdev_max_backlog configuration certifies HANA DB to run with ANF and improves file system performance.
Learn more about [Database Instance - BacklogQueueSize (For improved file system performance in HANA DB with ANF, set receiver backlog queue size to 300000)](https://launchpad.support.sap.com/#/notes/3024346). ### To improve file system performance in HANA DB with ANF, enable the TCP window scaling OS parameter
-Enable the TCP window scaling parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads
+Enable the TCP window scaling parameter as per SAP note 3024346. The TCP window scaling configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.
Learn more about [Database Instance - EnableTCPWindowScaling (To improve file system performance in HANA DB with ANF, enable the TCP window scaling OS parameter )](https://launchpad.support.sap.com/#/notes/3024346). ### For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS
-Disable IPv6 as per recommendation for SAP on Azure for HANA DB with ANF to improve file system performance
+Disable IPv6 as per recommendation for SAP on Azure for HANA DB with ANF to improve file system performance.
-Learn more about [Database Instance - DisableIPv6Protocol (For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - DisableIPv6Protocol (For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).
### To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle
-The parameter net.ipv4.tcp_slow_start_after_idle disables the need to scale-up incrementally the TCP window size for TCP connections which were idle for some time. By setting this parameter to zero as per SAP note: 302436, the maximum speed is used from beginning for previously idle TCP connections
+The parameter net.ipv4.tcp_slow_start_after_idle disables the need to scale up the TCP window size incrementally for TCP connections that were idle for some time. By setting this parameter to zero as per SAP note 3024346, the maximum speed is used from the beginning for previously idle TCP connections.
Learn more about [Database Instance - ParameterSlowStart (To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle)](https://launchpad.support.sap.com/#/notes/3024346). ### For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter
-To prevent the kernel from using SYN cookies in a situation where lots of connection requests are sent in a short timeframe and to prevent a warning about a potential SYN flooding attack in the system log, the size of the SYN backlog should be set to a reasonably high value. See SAP note 2382421
+To prevent the kernel from using SYN cookies in a situation where lots of connection requests are sent in a short timeframe and to prevent a warning about a potential SYN flooding attack in the system log, the size of the SYN backlog must be set to a reasonably high value. See SAP note 2382421.
-Learn more about [Database Instance - TCPMaxSynBacklog (For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - TCPMaxSynBacklog (For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).
### For improved file system performance in HANA DB with ANF, enable the tcp_sack OS parameter
-Enable the tcp_sack parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads
+Enable the tcp_sack parameter as per SAP note 3024346. The tcp_sack configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.
Learn more about [Database Instance - TCPSackParameter (For improved file system performance in HANA DB with ANF, enable the tcp_sack OS parameter)](https://launchpad.support.sap.com/#/notes/3024346). ### In high-availability scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter
-Disable the tcp_timestamps parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in high-availability scenarios for HANA DB with ANF in SAP workloads
+Disable the tcp_timestamps parameter as per SAP note 3024346. The tcp_timestamps configuration certifies HANA DB to run with ANF and improves file system performance in high-availability scenarios for HANA DB with ANF in SAP workloads.
Learn more about [Database Instance - DisableTCPTimestamps (In high-availability scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346). ### For improved file system performance in HANA DB with ANF, enable the tcp_timestamps OS parameter
-Enable the tcp_timestamps parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads
+Enable the tcp_timestamps parameter as per SAP note 3024346. The tcp_timestamps configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.
Learn more about [Database Instance - EnableTCPTimestamps (For improved file system performance in HANA DB with ANF, enable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346). ### To improve file system performance in HANA DB with ANF, enable auto-tuning TCP receive buffer size
-The parameter net.ipv4.tcp_moderate_rcvbuf enables TCP to perform receive buffer auto-tuning, to automatically size the buffer (no greater than tcp_rmem to match the size required by the path for full throughput. Enable this parameter as per SAP note: 302436 for improved file system performance
+The parameter net.ipv4.tcp_moderate_rcvbuf enables TCP to perform receive buffer auto-tuning, automatically sizing the buffer (no greater than tcp_rmem) to match the size required by the path for full throughput. Enable this parameter as per SAP note 3024346 for improved file system performance.
Learn more about [Database Instance - EnableAutoTuning (To improve file system performance in HANA DB with ANF, enable auto-tuning TCP receive buffer size)](https://launchpad.support.sap.com/#/notes/3024346).
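The TCP buffer, backlog, window scaling, timestamp, and SACK settings recommended in the preceding items are usually applied together in one sysctl drop-in file. A minimal sketch with illustrative values and an illustrative file name; treat SAP note 3024346 and the linked Azure guide as the authoritative source for exact values, and note that tcp_timestamps is disabled instead in high-availability scenarios behind Azure Load Balancer:

```bash
# Sketch: consolidated TCP tuning for HANA DB with ANF (illustrative values only).
cat <<'EOF' | sudo tee /etc/sysctl.d/91-NetApp-HANA.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
EOF

# Load the new settings without a reboot.
sudo sysctl --system
```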
Learn more about [Database Instance - IPV4LocalPortRange (For improved file syst
### To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries
-Set the parameter sunrpc.tcp_slot_table_entries to 128 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads
+Set the parameter sunrpc.tcp_slot_table_entries to 128 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
-Learn more about [Database Instance - TCPSlotTableEntries (To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - TCPSlotTableEntries (To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).
-### All disks in LVM for /hana/data volume should be of the same type to ensure high performance in HANA DB
+### All disks in LVM for /hana/data volume must be of the same type to ensure high performance in HANA DB
-If multiple disk types are selected in the /hana/data volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Data volume disks are of the same type and are configured as per recommendation for SAP on Azure
+If multiple disk types are selected in the /hana/data volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Data volume disks are of the same type and are configured as per recommendation for SAP on Azure.
-Learn more about [Database Instance - HanaDataDiskTypeSame (All disks in LVM for /hana/data volume should be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Configuration%20for%20SAP%20/hana/data%20volume).
+Learn more about [Database Instance - HanaDataDiskTypeSame (All disks in LVM for /hana/data volume must be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage).
-### Stripe size for /hana/data should be 256 kb for improved performance of HANA DB in SAP workloads
+### Stripe size for /hana/data must be 256 kb for improved performance of HANA DB in SAP workloads
-If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. Based on experience with recentLinux versions, Azure recommends using stripe size of 256 kb for /hana/data filesystem for better performance of HANA DB
+If you're using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. Based on experience with recent Linux versions, Azure recommends using a stripe size of 256 KB for the /hana/data filesystem for better performance of HANA DB.
-Learn more about [Database Instance - HanaDataStripeSize (Stripe size for /hana/data should be 256 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use).
+Learn more about [Database Instance - HanaDataStripeSize (Stripe size for /hana/data must be 256 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage).
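A minimal sketch of building a striped /hana/data logical volume with a 256-KB stripe size, assuming three premium data disks already attached; the device names, disk count, and volume names are placeholders:

```bash
# Sketch: striped LVM volume for /hana/data with a 256 KB stripe size.
# /dev/sdc, /dev/sdd, and /dev/sde are placeholder device names.
sudo pvcreate /dev/sdc /dev/sdd /dev/sde
sudo vgcreate vg-hana-data /dev/sdc /dev/sdd /dev/sde
sudo lvcreate --extents 100%FREE --stripes 3 --stripesize 256 \
  --name hana-data vg-hana-data
sudo mkfs.xfs /dev/vg-hana-data/hana-data
```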
### To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness
-Set the OS parameter vm.swappiness to 10 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads
+Set the OS parameter vm.swappiness to 10 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
-Learn more about [Database Instance - VmSwappiness (To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - VmSwappiness (To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).
### To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter
-Disable the reverse path filter linux OS parameter, net.ipv4.conf.all.rp_filter as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads
+Disable the reverse path filter linux OS parameter, net.ipv4.conf.all.rp_filter as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
-Learn more about [Database Instance - DisableIPV4Conf (To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - DisableIPV4Conf (To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).
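Several of the OS-level recommendations above (disabling IPv6, raising the SYN backlog, disabling rp_filter, setting sunrpc.tcp_slot_table_entries, and vm.swappiness) are typically collected in a single configuration file, as in the linked Azure guide. A minimal sketch using the values cited in these recommendations and an illustrative SYN backlog size; verify against the linked guide before applying:

```bash
# Sketch: OS settings commonly applied together for HANA DB with ANF (illustrative).
cat <<'EOF' | sudo tee /etc/sysctl.d/ms-az.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.tcp_max_syn_backlog = 16348
net.ipv4.conf.all.rp_filter = 0
sunrpc.tcp_slot_table_entries = 128
vm.swappiness = 10
EOF

sudo sysctl --system
```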
-### If using Ultradisk, the IOPS for /hana/data volume should be >=7000 for better HANA DB performance
+### If using Ultradisk, the IOPS for /hana/data volume must be >=7000 for better HANA DB performance
-IOPS of at least 7000 in /hana/data volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/data volume as per this requirement to ensure high performance of the DB
+IOPS of at least 7000 in /hana/data volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/data volume as per this requirement to ensure high performance of the DB.
-Learn more about [Database Instance - HanaDataIOPS (If using Ultradisk, the IOPS for /hana/data volume should be >=7000 for better HANA DB performance)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).
+Learn more about [Database Instance - HanaDataIOPS (If using Ultradisk, the IOPS for /hana/data volume must be >=7000 for better HANA DB performance)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana).
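A minimal sketch of creating an Ultra disk provisioned for the recommended data-volume IOPS, assuming the Azure CLI; the names, region, zone, size, and throughput are placeholders to adjust for your sizing:

```bash
# Sketch: create an Ultra disk for /hana/data provisioned with at least 7000 IOPS.
# contoso-rg, hana-data-disk, the region, zone, size, and throughput are placeholders.
az disk create --resource-group contoso-rg --name hana-data-disk \
  --size-gb 1024 --location westeurope --zone 1 \
  --sku UltraSSD_LRS --disk-iops-read-write 7000 --disk-mbps-read-write 400
```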
### To improve file system performance in HANA DB with ANF, change parameter tcp_max_slot_table_entries
-Set the OS parameter tcp_max_slot_table_entries to 128 as per SAP note: 302436 for improved file transfer performance in HANA DB with ANF in SAP workloads
+Set the OS parameter tcp_max_slot_table_entries to 128 as per SAP note 3024346 for improved file transfer performance in HANA DB with ANF in SAP workloads.
Learn more about [Database Instance - OptimizeTCPMaxSlotTableEntries (To improve file system performance in HANA DB with ANF, change parameter tcp_max_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings). ### Ensure the read performance of /hana/data volume is >=400 MB/sec for better performance in HANA DB
-Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes is recommended for SAP workloads on Azure. Select the disk type for /hana/data as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA
+Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes is recommended for SAP workloads on Azure. Select the disk type for /hana/data as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA.
Learn more about [Database Instance - HanaDataVolumePerformance (Ensure the read performance of /hana/data volume is >=400 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read%20activity%20of%20at%20least%20400%20MB/sec%20for%20/hana/data).
-### Read/write performance of /hana/log volume should be >=250 MB/sec for better performance in HANA DB
+### Read/write performance of /hana/log volume must be >=250 MB/sec for better performance in HANA DB
-Read/Write activity of at least 250 MB/sec for /hana/log for 1 MB I/O size is recommended for SAP workloads on Azure. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA
+Read/Write activity of at least 250 MB/sec for /hana/log for 1 MB I/O size is recommended for SAP workloads on Azure. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA.
-Learn more about [Database Instance - HanaLogReadWriteVolume (Read/write performance of /hana/log volume should be >=250 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read/write%20on%20/hana/log%20of%20250%20MB/sec%20with%201%20MB%20I/O%20sizes).
+Learn more about [Database Instance - HanaLogReadWriteVolume (Read/write performance of /hana/log volume must be >=250 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read/write%20on%20/hana/log%20of%20250%20MB/sec%20with%201%20MB%20I/O%20sizes).
-### If using Ultradisk, the IOPS for /hana/log volume should be >=2000 for better performance in HANA DB
+### If using Ultradisk, the IOPS for /hana/log volume must be >=2000 for better performance in HANA DB
-IOPS of at least 2000 in /hana/log volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB
+IOPS of at least 2000 in /hana/log volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB.
-Learn more about [Database Instance - HanaLogIOPS (If using Ultradisk, the IOPS for /hana/log volume should be >=2000 for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).
+Learn more about [Database Instance - HanaLogIOPS (If using Ultradisk, the IOPS for /hana/log volume must be >=2000 for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).
-### All disks in LVM for /hana/log volume should be of the same type to ensure high performance in HANA DB
+### All disks in LVM for /hana/log volume must be of the same type to ensure high performance in HANA DB
-If multiple disk types are selected in the /hana/log volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Data volume disks are of the same type and are configured as per recommendation for SAP on Azure
+If multiple disk types are selected in the /hana/log volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Data volume disks are of the same type and are configured as per recommendation for SAP on Azure.
-Learn more about [Database Instance - HanaDiskLogVolumeSameType (All disks in LVM for /hana/log volume should be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=For%20the%20/hana/log%20volume.%20the%20configuration%20would%20look%20like).
+Learn more about [Database Instance - HanaDiskLogVolumeSameType (All disks in LVM for /hana/log volume must be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=For%20the%20/hana/log%20volume.%20the%20configuration%20would%20look%20like).
### Enable Write Accelerator on /hana/log volume with Premium disk for improved write latency in HANA DB
Azure Write Accelerator is a functionality for Azure M-Series VMs. It improves I
Learn more about [Database Instance - WriteAcceleratorEnabled (Enable Write Accelerator on /hana/log volume with Premium disk for improved write latency in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=different%20SAP%20applications.-,Solutions%20with%20premium%20storage%20and%20Azure%20Write%20Accelerator%20for%20Azure%20M%2DSeries%20virtual%20machines,-Azure%20Write%20Accelerator).
-### Stripe size for /hana/log should be 64 kb for improved performance of HANA DB in SAP workloads
+### Stripe size for /hana/log must be 64 kb for improved performance of HANA DB in SAP workloads
+
+If you're using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. To get enough throughput with larger I/O sizes, Azure recommends using a stripe size of 64 kb for the /hana/log filesystem for better HANA DB performance.
+
+Learn more about [Database Instance - HanaLogStripeSize (Stripe size for /hana/log must be 64 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use).
+++++
+## Security
+
+### Update Attestation API Version
+
+We have identified API calls from an outdated Attestation API for resources under this subscription. We recommend switching to the latest Attestation API version. You need to update your existing code to use the latest API version. Using the latest API version ensures that you receive the latest features and performance improvements.
+
+Learn more about [Attestation provider - UpgradeAttestationAPI (Update Attestation API Version)](/rest/api/attestation).
+
+### Update Key Vault SDK Version
+
+New Key Vault client libraries are split into keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes for issues reported by customers and proactively identified through our QA process. If Key Vault is integrated with Azure Storage, Disk, or other Azure services that can use the old Key Vault SDK, and all your current custom applications use .NET SDK 4.0 or above, dismiss the recommendation.
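As an illustration only, here's a minimal Python sketch of the newer split SDK pattern; the vault name `myvault` and secret name `mysecret` are placeholders, not values from this recommendation:

```python
# Minimal sketch: read a secret with the newer split Key Vault SDK (azure-keyvault-secrets)
# combined with the Azure Identity library. Vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # picks up managed identity, environment, or developer sign-in
client = SecretClient(vault_url="https://myvault.vault.azure.net", credential=credential)

secret = client.get_secret("mysecret")  # omitting the version returns the latest version
print(secret.name, secret.properties.version)
```

The same pattern applies to the keys and certificates packages (`azure-keyvault-keys`, `azure-keyvault-certificates`).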
+
+Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md).
+
+### Update Key Vault SDK Version
+
+New Key Vault client libraries are split into keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes for issues reported by customers and proactively identified through our QA process.
+
+> [!IMPORTANT]
+> Be aware that you can only remediate the recommendation for custom applications you have access to. Recommendations can be shown due to integration with other Azure services, such as Storage and Disk encryption, which are in the process of updating to the new version of our SDK. If you use .NET 4.0 in all your applications, dismiss the recommendation.
+
+Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md).
++++
+## Storage
+
+### Use "Put Blob" for blobs smaller than 256 MB
+
+When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation by using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized.
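For illustration, a minimal sketch with the Python `azure-storage-blob` package; the account, container, and blob names are placeholders, and the single-put threshold is raised so small uploads go out as one `Put Blob` call:

```python
# Sketch: upload a small block blob in one "Put Blob" request using azure-storage-blob.
# max_single_put_size is the largest payload sent as a single Put Blob call; larger
# payloads are split into Put Block / Put Block List operations instead.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",  # placeholder account
    credential=DefaultAzureCredential(),
    max_single_put_size=256 * 1024 * 1024,  # allow single-shot uploads up to 256 MB
)

blob = service.get_blob_client(container="mycontainer", blob="report.json")
with open("report.json", "rb") as data:
    blob.upload_blob(data, overwrite=True)  # small payloads are written in a single operation
```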
+
+Learn more about [Storage Account - StorageCallPutBlob (Use "Put Blob" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs).
+
+### Increase provisioned size of premium file share to avoid throttling of requests
+
+Your requests for the premium file share are being throttled because the I/O operations per second (IOPS) or throughput limits for the file share have been reached. To protect your requests from being throttled, increase the size of the premium file share.
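As a hedged sketch, the provisioned size can also be raised programmatically with the Python `azure-storage-file-share` package; the connection string, share name, and target size below are placeholders:

```python
# Sketch: increase the provisioned size (quota) of a premium file share.
# A larger provisioned size raises the share's IOPS and throughput limits.
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(
    conn_str="<premium-storage-account-connection-string>",  # placeholder
    share_name="myshare",                                    # placeholder
)
share.set_share_quota(quota=2048)  # new provisioned size in GiB
```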
+
+Learn more about [Storage Account - AzureStorageAdvisorAvoidThrottlingPremiumFiles (Increase provisioned size of premium file share to avoid throttling of requests)]().
+
+### Create statistics on table columns
+
+We have detected that you're missing table statistics, which might be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result, which enables it to create a high quality query plan.
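For example, a minimal sketch using `pyodbc` against a dedicated SQL pool; the server, database, table, and column names are hypothetical:

```python
# Sketch: create (and later refresh) single-column statistics on a dedicated SQL pool table.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=mypool;"   # placeholders
    "UID=sqladminuser;PWD=<password>"
)
cursor = conn.cursor()

# The optimizer uses these statistics to estimate row counts on join and filter columns.
cursor.execute("CREATE STATISTICS stats_customer_id ON dbo.FactSales (customer_id)")
cursor.execute("UPDATE STATISTICS dbo.FactSales")  # refresh periodically as the data changes
conn.commit()
```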
+
+Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics).
+
+### Remove data skew to increase query performance
+
+We have detected distribution data skew greater than 15%, which can cause costly performance bottlenecks.
+
+Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew).
+
+### Update statistics on table columns
+
+We have detected that you don't have up-to-date table statistics, which might be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result that enables the query optimizer to create a high quality query plan.
+
+Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics).
+
+### Scale up to optimize cache utilization with SQL Data Warehouse
+
+We have detected that you had high cache used percentage with low hit percentage, indicating a high cache eviction rate that can affect the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache).
+
+### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse
+
+We have detected that you had high tempdb utilization that can affect the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb).
+
+### Convert tables to replicated tables with SQL Data Warehouse
+
+We have detected that you might benefit from using replicated tables. Replicated tables avoid costly data movement operations and significantly increase the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables).
+
+### Split staged files in the storage account to increase load performance
+
+We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more files to maximize the parallelism of your load.
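As a rough illustration (the file names and the split count of 60 are assumptions, not values from this recommendation), staged data can be re-split with plain Python before loading:

```python
# Sketch: split one large staged CSV into 60 smaller gzip-compressed parts so the
# SQL pool can read them in parallel during a load.
import gzip

SPLITS = 60
outputs = [gzip.open(f"staged/part_{i:02d}.csv.gz", "wt", encoding="utf-8") for i in range(SPLITS)]

with open("staged/big_export.csv", "r", encoding="utf-8") as source:
    header = source.readline()
    for part in outputs:
        part.write(header)                          # repeat the header in every part
    for line_number, line in enumerate(source):
        outputs[line_number % SPLITS].write(line)   # round-robin rows across the parts

for part in outputs:
    part.close()
```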
+
+Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit).
+
+### Increase batch size when loading to maximize load throughput, data compression, and query performance
+
+We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. Consider using the COPY statement. If you're unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP - a good rule of thumb is a batch size between 100K and 1M rows.
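For illustration, a hedged sketch that issues a single `COPY INTO` statement through `pyodbc`; the server, table, storage path, and file options are placeholders, and storage authentication options are omitted for brevity:

```python
# Sketch: load staged files with one COPY statement instead of many small client-side batches.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=mypool;"   # placeholders
    "UID=sqladminuser;PWD=<password>",
    autocommit=True,
)
conn.cursor().execute("""
    COPY INTO dbo.FactSales
    FROM 'https://mystorageaccount.blob.core.windows.net/staged/part_*.csv.gz'
    WITH (FILE_TYPE = 'CSV', COMPRESSION = 'GZIP', FIRSTROW = 2)
""")
```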
+
+Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize).
+
+### Co-locate the storage account within the same region to minimize latency when loading
+
+We have detected that you're loading from a region that is different from your SQL pool. Consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
+
+Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation).
+
+### Upgrade your Storage Client Library to the latest version for better reliability and performance
+
+The latest version of Storage Client Library/ SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage.
+
+Learn more about [Storage Account - UpdateStorageSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/learnmorestoragecolocation).
+
+### Upgrade your Storage Client Library to the latest version for better reliability and performance
+
+The latest version of Storage Client Library/ SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage.
+
+Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca).
+
+### Upgrade to Standard SSD Disks for consistent and improved performance
+
+Because you're running IaaS virtual machine workloads on Standard HDD managed disks, be aware that a Standard SSD disk option is now available for all Azure VM types. Standard SSD disks are a cost-effective storage option optimized for enterprise workloads that need consistent performance. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which takes three to five minutes.
+
+Learn more about [Storage Account - StandardSSDForNonPremVM (Upgrade to Standard SSD Disks for consistent and improved performance)](/azure/virtual-machines/windows/disks-types#standard-ssd).
+
+### Use premium performance block blob storage
+
+One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
+
+Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob).
+
+### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance
+
+We have noticed your Unmanaged HDD Disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which takes three to five minutes.
+
+Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unmanaged Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
+
+### Distribute data in server group to distribute workload among nodes
+
+It looks like the data is not distributed in this server group but stays on the coordinator. For full Hyperscale (Citus) benefits, distribute data on worker nodes in the server group.
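As an illustrative sketch, distribution is done with the Citus `create_distributed_table()` function on the coordinator; the connection details, table, and distribution column below are placeholders:

```python
# Sketch: distribute a coordinator-only table across worker nodes using Citus.
import psycopg2

conn = psycopg2.connect(
    host="mygroup-c.postgres.database.azure.com",   # placeholder coordinator host
    dbname="citus", user="citus", password="<password>", sslmode="require",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Shards of 'events' are spread across workers, hashed on the distribution column.
    cur.execute("SELECT create_distributed_table('events', 'tenant_id');")
```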
+
+Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusDistributeData (Distribute data in server group to distribute workload among nodes)](https://go.microsoft.com/fwlink/?linkid=2135201).
+
+### Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly
+
+It looks like the data is not well balanced between worker nodes in this Hyperscale (Citus) server group. In order to use each worker node of the Hyperscale (Citus) server group effectively, rebalance data in the server group.
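As an illustrative sketch (connection values are placeholders), rebalancing can be triggered with the Citus `rebalance_table_shards()` function from the coordinator:

```python
# Sketch: move shards so data is spread evenly across the worker nodes.
import psycopg2

conn = psycopg2.connect(
    host="mygroup-c.postgres.database.azure.com",   # placeholder coordinator host
    dbname="citus", user="citus", password="<password>", sslmode="require",
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SELECT rebalance_table_shards();")  # long-running on large server groups
```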
+
+Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusRebalanceData (Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly)](https://go.microsoft.com/fwlink/?linkid=2148869).
++++
+## Virtual desktop infrastructure
+
+### Improve user experience and connectivity by deploying VMs closer to user's location
+
+We have determined that your VMs are located in a region different from or far from where your users are connecting with Azure Virtual Desktop, which might lead to prolonged connection response times and affect the overall user experience. When you create VMs for your host pools, try to use a region closer to the user. Having close proximity ensures continuing satisfaction with the Azure Virtual Desktop service and a better overall quality of experience.
+
+Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md).
+
+### Change the max session limit for your depth first load balanced host pool to improve VM performance
+
+Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions are directed to the same session host and this might cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, also set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs. To fix this, open your host pool's properties and change the value next to the "Max session limit" setting.
+
+Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance )](../virtual-desktop/configure-host-pool-load-balancing.md).
++++
+## Web
+
+### Move your App Service Plan to PremiumV2 for better performance
+
+Your app served more than 1000 requests per day for the past 3 days. Your app might benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and doubled memory-to-core ratio when compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation.
+
+Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2).
+
+### Check outbound connections from your App Service resource
+
+Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
+
+Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
+
-If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. To get enough throughput with larger I/O sizes, Azure recommends using stripe size of 64 kb for /hana/log filesystem for better performance of HANA DB
-Learn more about [Database Instance - HanaLogStripeSize (Stripe size for /hana/log should be 64 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use).
## Next steps
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Azure Advisor helps you ensure and improve the continuity of your business-criti
## AI Services
-### You are close to exceeding storage quota of 2GB. Create a Standard search service
+### You're close to exceeding storage quota of 2GB. Create a Standard search service
You're close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded. Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
-### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service
+### You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service
You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded. Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
-### You are close to exceeding your available storage quota. Add more partitions if you need more storage
+### You're close to exceeding your available storage quota. Add more partitions if you need more storage
You're close to exceeding your available storage quota. Add extra partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
Learn more about [HDInsight cluster - clusterOlderThanAYear (Your cluster was cr
### Your Kafka cluster disks are almost full
-The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails because of the disk full error. To mitigate, find the retention time for every topic, back up the files that are older and restart the brokers.
+The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails because of the disk full error. To mitigate, find the retention time for every Kafka Topic, back up the files that are older and restart the brokers.
Learn more about [HDInsight cluster - KafkaDiskSpaceFull (Your Kafka Cluster Disks are almost full)](https://aka.ms/kafka-troubleshoot-full-disk).
-### Creation of clusters under custom VNet requires more permission
+### Creation of clusters under custom virtual network requires more permission
-Your clusters with custom VNet were created without VNet joining permission. Ensure that the users who perform create operations have permissions to the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023.
+Your clusters with custom virtual network were created without virtual network joining permission. Ensure that the users who perform create operations have permissions to the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023.
Learn more about [HDInsight cluster - EnforceVNetJoinPermissionCheck (Creation of clusters under custom VNet requires more permission)](https://aka.ms/hdinsightEnforceVnet).
Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and re
### Apply critical updates to your HDInsight clusters
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources such as load balancer, network interface and public IP address, associated with your clusters. Do this before January 21, 2021 05:00 PM UTC when the HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update might result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying the update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources associated with your clusters. Change your policy assignment before January 21, 2021 05:00 PM UTC when the HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update might result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters.
Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgr
### Enable virtual machine replication to protect your applications from regional outage
-Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduce any adverse business impact during the time of an Azure region outage. We highly recommend enabling replication of all the business critical virtual machines from the following list so that in an event of an outage, you can quickly bring up your machines in remote Azure region.
+Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduces any adverse business effect during an Azure region outage. We highly recommend enabling replication of all the business critical virtual machines from the following list so that in the event of an outage, you can quickly bring up your machines in a remote Azure region.
Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms).

### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost
Learn more about [Virtual machine - VMRunningDeprecatedPlanLevelImage (Virtual m
Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to a newer version of the image to prevent disruption to your workloads.
+
Learn more about [Virtual machine - VMRunningDeprecatedImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ).

### Use Availability zones for better resiliency and availability
Availability Zones (AZ) in Azure help protect your applications and data from da
Learn more about [Virtual machine - AvailabilityZoneVM (Use Availability zones for better resiliency and availability)](/azure/reliability/availability-zones-overview).
+### Use Managed Disks to improve data reliability
+
+Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
+
+Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
+ ### Access to mandatory URLs missing for your Azure Virtual Desktop environment
-In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list, in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you might also search your application event log for event 3702.
+In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list, in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you might also search your Application event log for event 3702.
Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md).
Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade
## Databases
-### Replication - Add a primary key to the table that currently does not have one
+### Replication - Add a primary key to the table that currently doesn't have one
Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica can synchronize with the primary and keep up with changes, add primary keys to the tables in the primary server. Once the primary keys are added, recreate the replica server. Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerReplicaMissingPKfb41 (Replication - Add a primary key to the table that currently doesn't have one)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
-### High Availability - Add primary key to the table that currently does not have one
+### High Availability - Add primary key to the table that currently doesn't have one
Our internal monitoring system has identified significant replication lag on the High Availability standby server. The standby server replaying relay logs on a table that lacks a primary key, is the main cause of the lag. To address this issue and adhere to best practices, we recommend you add primary keys to all tables. Once you add the primary keys, proceed to disable and then re-enable High Availability to mitigate the problem. Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerHAMissingPKcf38 (High Availability - Add primary key to the table that currently doesn't have one.)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table).
-### Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact
+### Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid
-Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing reservation of memory for fragmentation helps in reducing the cache failures when running under high memory pressure. Memory for fragmentation can be increased via maxfragmentationmemory-reserved setting available in advanced settings blade.
+Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing reservation of memory for fragmentation helps in reducing the cache failures when running under high memory pressure. Memory for fragmentation can be increased with the maxfragmentationmemory-reserved setting available in the advanced settings option area.
-Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies).
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential effect.)](https://aka.ms/redis/recommendations/memory-policies).
### Enable Azure backup for SQL on your virtual machines
Learn more about [SQL virtual machine - EnableAzBackupForSQL (Enable Azure backu
### Improve PostgreSQL availability by removing inactive logical replication slots
-Our internal telemetry indicates that your PostgreSQL server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Our internal system indicates that your PostgreSQL server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding). ### Improve PostgreSQL availability by removing inactive logical replication slots
-Our internal telemetry indicates that your PostgreSQL flexible server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Our internal system indicates that your PostgreSQL flexible server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding). ### Configure Consistent indexing mode on your Azure Cosmos DB container
-We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which might impact the freshness of query results. We recommend switching to Consistent mode.
+We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which might affect the freshness of query results. We recommend switching to Consistent mode.
Learn more about [Azure Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos DB container)](/azure/cosmos-db/how-to-manage-indexing-policy).
Some or all of your devices are using outdated SDK and we recommend you upgrade
Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
-### Upgrade Edge Device Runtime to a supported version for Iot Hub
+### Upgrade IoT Edge Device Runtime to a supported version for IoT Hub
-Some or all of your Edge devices are using outdated versions and we recommend you upgrade to the latest supported version of the runtime. See the details in the link given.
+Some or all of your IoT Edge devices are using outdated versions and we recommend you upgrade to the latest supported version of the IoT Edge runtime. See the details in the link given.
-Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade Edge Device Runtime to a supported version for Iot Hub)](https://aka.ms/IOTEdgeSDKCheck).
+Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade IoT Edge Device Runtime to a supported version for IoT Hub)](https://aka.ms/IOTEdgeSDKCheck).
Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WA
### Extra protection to mitigate Log4j 2 vulnerability (CVE-2021-44228)
-To mitigate the impact of Log4j 2 vulnerability, we recommend these steps:
+To mitigate the effect of Log4j 2 vulnerability, we recommend these steps:
1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link provided. 2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU.
-Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
+Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (More protection to mitigate Log4j 2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
-### Update VNet permission of Application Gateway users
+### Update virtual network permission of Application Gateway users
To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must include at least Microsoft.Network/virtualNetworks/subnets/join/action permission.
All endpoints associated to this proximity profile are in the same region. Users
Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).

-
### Move to production gateway SKUs from Basic gateways

The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you're using the VPN gateway for production purposes. The production SKUs offer higher number of tunnels, BGP support, active-active, custom IPsec/IKE policy in addition to higher stability and availability.
Prevent risk of connectivity failures due to SNAT port exhaustion by using NAT g
Learn more about [Virtual network - natGateway (Use NAT gateway for outbound connectivity)](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet).
+### Update virtual network permission of Application Gateway users
+
+To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must include at least Microsoft.Network/virtualNetworks/subnets/join/action permission.
+
+Learn more about [Application gateway - AppGwLinkedAccessFailureRecmmendation (Update VNet permission of Application Gateway users)](https://aka.ms/agsubnetjoin).
+
+### Use version-less Key Vault secret identifier to reference the certificates
+
+We strongly recommend that you use a version-less secret identifier to allow your application gateway resource to automatically retrieve the new certificate version, whenever available. Example: https://myvault.vault.azure.net/secrets/mysecret/
+
+Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdate (Use version-less Key Vault secret identifier to reference the certificates)](https://aka.ms/agkvversion).
+
### Enable Active-Active gateways for redundancy

In active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic is switched over to the other active IPsec tunnel automatically.
Learn more about [Central Server Instance - ExpectedVotesHAASCSRH (Set the expec
### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads
-The corosync token_retransmits_before_loss_const determines how many token retransmits the system attempts before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup.
+The corosync token_retransmits_before_loss_const parameter determines the number of times that tokens can be retransmitted before a timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup.
Learn more about [Central Server Instance - TokenRestransmitsHAASCSSLE (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
The softdog timer is loaded as a kernel module in linux OS. This timer triggers
Learn more about [Central Server Instance - softdogmoduleloadedHAASCSSLE (Ensure the softdog module is loaded in for Pacemaker in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-### Ensure that there is one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup
+### Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for ASCS HA setup
-The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the pacemaker configuration for ASCS HA setup. The fence_azure_arm requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal.
+The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in your Pacemaker configuration for ASCS HA setup. The fence_azure_arm requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal.
-Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (Ensure that there's one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (Ensure that there's one instance of a fence_azure_arm in your Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
### Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads
Learn more about [Central Server Instance - ASCSHASetIdleTimeOutLB (Set the Idle
### Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads
-Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down.
+Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down.
Learn more about [Central Server Instance - ASCSLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-general-update-november-2021/ba-p/2807619#network-settings-and-tuning-for-sap-on-azure).
Learn more about [Database Instance - PreferSiteTakeoverHDB (Set parameter PREFE
### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads
-The corosync token_retransmits_before_loss_const determines how many token retransmits are attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup.
+The corosync token_retransmits_before_loss_const parameter determines the number of token retransmits that are attempted before a timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup.
Learn more about [Database Instance - TokenRetransmitsHDB (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-### Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads
-
-Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads.
-
-Learn more about [Database Instance - ExpectedVotesSuseHDB (Set the expected votes parameter to 2 in the cluster cofiguration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-
### Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads

In a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure.
The softdog timer is loaded as a kernel module in linux OS. This timer triggers
Learn more about [Database Instance - SoftdogConfigSuseHDB (Create the softdog config file in Pacemaker configuration for HA enable HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
-### Ensure that there is one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup
+### Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for HANA DB HA setup
-The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the pacemaker configuration for HANA DB HA setup. The fence_azure-arm instance requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal.
+The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in your Pacemaker configuration for HANA DB HA setup. The fence_azure-arm instance requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal.
-Learn more about [Database Instance - FenceAzureArmSuseHDB (Ensure that there's one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+Learn more about [Database Instance - FenceAzureArmSuseHDB (Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
### Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads
Learn more about [Database Instance - DBHAEnableLBPorts (Enable HA ports in the
### Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads
-Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down.
+Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down.
Learn more about [Database Instance - DBLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads)](/azure/load-balancer/load-balancer-custom-probe-overview).
+### There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup
+
+The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in the Pacemaker configuration for HANA DB HA setup. The fence_azure_arm is needed if you're using Azure fence agent for fencing with either managed identity or service principal.
+
+Learn more about [Database Instance - FenceAzureArmSuseHDB (There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability).
+ ## Storage
Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete
### Enable Cross Region Restore for your recovery Services Vault
-Enabling cross region restore for your geo-redundant vaults
+Enabling cross region restore for your geo-redundant vaults.
Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your Recovery Services Vault)](../backup/backup-azure-arm-restore-vms.md#cross-region-restore). ### Enable Backups on your virtual machines
-Enable backups for your virtual machines and secure your data
+Enable backups for your virtual machines and secure your data.
Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your virtual machines)](../backup/backup-overview.md). ### Configure blob backup
-Configure blob backup
+Configure blob backup.
Learn more about [Storage Account - ConfigureBlobBackup (Configure blob backup)](/azure/backup/blob-backup-overview). ### Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data
-Keep your information and applications safe with robust, one click backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares.
+Keep your information and applications safe with robust, one select backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares.
Learn more about [Subscription - AzureBackupService (Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data)](/azure/backup/).
As previously announced, Azure Data Lake Storage Gen1 will be retired on Februar
Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).
+### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2
+
+As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend that you migrate your data lake to Azure Data Lake Storage Gen2, which offers advanced capabilities designed for big data analytics. Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage.
+
+Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/).
+ ### Enable Soft Delete to protect your blob data
-After enabling the soft delete option, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
+After you enable the Soft Delete option, deleted data transitions to a "soft" deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to imp
### Implement disaster recovery strategies for your Azure NetApp Files Resources
-To avoid data or functionality loss in the event of a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes
+To avoid data or functionality loss if there's a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes
Learn more about [Volume - ANFCRRCZRRecommendation (Implement disaster recovery strategies for your Azure NetApp Files Resources)](https://aka.ms/anfcrr).
Learn more about [Volume - SAPTimeoutsANF (Review SAP configuration for timeout
### Consider scaling out your App Service Plan to avoid CPU exhaustion
-Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps, to solve this you could scale out your app.
+Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps. To solve this problem, you could scale out your app.
Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service
### Use deployment slots for your App Service resource
-You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and reduce the effect of deployments on your production web app.
Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging).
Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the back
### Move your App Service resource to Standard or higher and use deployment slots
-You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and reduce the effect of deployments on your production web app.
Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scali
### Application code needs fixing when the worker process crashes due to Unhandled Exception
-We identified the following thread that resulted in an unhandled exception for your App and the application code must be fixed to prevent impact to application availability. A crash happens when an exception in your code terminates the process.
+We identified the following thread that resulted in an unhandled exception for your App, and the application code must be fixed to avoid affecting application availability. A crash happens when an exception in your code terminates the process.
Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code must be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html). ### Consider changing your App Service configuration to 64-bit
-We identified your application is running in 32-bit and the memory is reaching the 2GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly.
+We identified your application is running in 32-bit and the memory is reaching the 2-GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly.
Learn more about [App service 32-bit limitations](/troubleshoot/azure/app-service/web-apps-performance-faqs#i-see-the-message-worker-process-requested-recycle-due-to-percent-memory-limit-how-do-i-address-this-issue).
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure F
### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU
-The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100GB. Consider upgrading these apps to Standard SKU to avoid throttling.
+The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100 GB. Consider upgrading these apps to Standard SKU to avoid throttling.
Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.)](https://azure.microsoft.com/pricing/details/app-service/static/). - ## Next steps Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview)
ai-services Video Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/video-retrieval.md
Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and e
## Input requirements
-### Supported file formats
+### Supported formats
+
| File format | Description |
| -- | -- |
| `asf` | ASF (Advanced / Active Streaming Format) |
+| `avi` | AVI (Audio Video Interleaved) |
| `flv` | FLV (Flash Video) |
| `matroskamm`, `webm` | Matroska / WebM |
-| `mov`, `mp4`, `m4a`, `3gp`, `3g2`, `mj2` | QuickTime / MOV |
-| `mpegts` | MPEG-TS (MPEG-2 Transport Stream) |
-| `rawvideo` | raw video |
-| `rm` | RealMedia |
-| `rtsp` | RTSP input |
+| `mov`,`mp4`,`m4a`,`3gp`,`3g2`,`mj2` | QuickTime / MOV |
+
+### Supported video codecs
-### Supported codecs
| Codec | Format |
| -- | -- |
| `h264` | H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 |
-| `rawvideo` | raw video |
-| `h265` | HEVC
+| `h265` | H.265/HEVC |
| `libvpx-vp9` | libvpx VP9 (codec vp9) |
+| `mpeg4` | MPEG-4 part 2 |
+
+### Supported audio codecs
+
+| Codec | Format |
+| -- | -- |
+| `aac` | AAC (Advanced Audio Coding) |
+| `mp3` | MP3 (MPEG audio layer 3) |
+| `pcm` | PCM (uncompressed) |
+| `vorbis` | Vorbis |
+| `wmav2` | Windows Media Audio 2 |
## Call the Video Retrieval APIs
The Spatial Analysis Video Retrieval APIs allows a user to add metadata to video
### Step 1: Create an Index
-To begin, you need to create an index to store and organize the video files and their metadata. The example below demonstrates how to create an index named "my-video-index."
+To begin, you need to create an index to store and organize the video files and their metadata. The example below demonstrates how to create an index named "my-video-index" using the **[Create Index](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc4779b)** API.
```bash curl.exe -v -X PUT "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
Connection: close
### Step 2: Add video files to the index
-Next, you can add video files to the index with their associated metadata. The example below demonstrates how to add two video files to the index using SAS URLs to provide access.
+Next, you can add video files to the index with their associated metadata. The example below demonstrates how to add two video files to the index using SAS URLs with the **[Create Ingestion](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc4779f)** API.
```bash
Connection: close
### Step 3: Wait for ingestion to complete
-After you add video files to the index, the ingestion process starts. It might take some time depending on the size and number of files. To ensure the ingestion is complete before performing searches, you can use the **Get Ingestion** call to check the status. Wait for this call to return `"state" = "Completed"` before proceeding to the next step.
+After you add video files to the index, the ingestion process starts. It might take some time depending on the size and number of files. To ensure the ingestion is complete before performing searches, you can use the **[Get Ingestion](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc477a0)** API to check the status. Wait for this call to return `"state" = "Completed"` before proceeding to the next step.
```bash curl.exe -v -X GET "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions?api-version=2023-05-01-preview&$top=20" -H "ocp-apim-subscription-key: <YOUR_SUBSCRIPTION_KEY>"
After you add video files to the index, you can search for specific videos using
#### Search with "vision" feature
-To perform a search using the "vision" feature, specify the query text and any desired filters.
+To perform a search using the "vision" feature, use the [Search By Text](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc477a2) API with the `vision` filter, specifying the query text and any other desired filters.
```bash curl.exe -v -X POST "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index:queryByText?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
Connection: close
#### Search with "speech" feature
-To perform a search using the "speech" feature, provide the query text and any desired filters.
+To perform a search using the "speech" feature, use the **[Search By Text](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc477a2)** API with the `speech` filter, providing the query text and any other desired filters.
```bash curl.exe -v -X POST "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index:queryByText?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
ai-services Fine Tuning Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/fine-tuning-considerations.md
+
+ Title: Azure OpenAI Service fine-tuning considerations
+description: Learn more about what you should take into consideration before fine-tuning with Azure OpenAI Service
++++ Last updated : 10/23/2023++
+recommendations: false
+++
+# When to use Azure OpenAI fine-tuning
+
+When deciding whether or not fine-tuning is the right solution to explore for a given use case, there are some key terms that it's helpful to be familiar with:
+
+- [Prompt Engineering](/azure/ai-services/openai/concepts/prompt-engineering) is a technique that involves designing prompts for natural language processing models. This process improves accuracy and relevancy in responses, optimizing the performance of the model.
+- [Retrieval Augmented Generation (RAG)](/azure/machine-learning/concept-retrieval-augmented-generation?view=azureml-api-2&preserve-view=true) improves Large Language Model (LLM) performance by retrieving data from external sources and incorporating it into a prompt. RAG allows businesses to achieve customized solutions while maintaining data relevance and optimizing costs.
+- [Fine-tuning](/azure/ai-services/openai/how-to/fine-tuning?pivots=programming-language-studio) retrains an existing Large Language Model using example data, resulting in a new "custom" Large Language Model that has been optimized using the provided examples.
+
+## What is fine-tuning with Azure OpenAI?
+
+When we talk about fine-tuning, we really mean *supervised fine-tuning*, not continuous pre-training or Reinforcement Learning from Human Feedback (RLHF). Supervised fine-tuning refers to the process of retraining pre-trained models on specific datasets, typically to improve model performance on specific tasks or introduce information that wasn't well represented when the base model was originally trained.
+
+Fine-tuning is an advanced technique that requires expertise to use appropriately. The questions below will help you evaluate whether you are ready for fine-tuning, and how well you've thought through the process. You can use these to guide your next steps or identify other approaches that might be more appropriate.
+
+## Why do you want to fine-tune a model?
+
+- You should be able to clearly articulate a specific use case for fine-tuning and identify the [model](models.md#fine-tuning-models-preview) you hope to fine-tune.
+- Good use cases for fine-tuning include steering the model to output content in a specific and customized style, tone, or format, or scenarios where the information needed to steer the model is too long or complex to fit into the prompt window.
+
+**Common signs you might not be ready for fine-tuning yet:**
+
+- No clear use case for fine-tuning, or an inability to articulate much more than "I want to make a model better".
+- If you identify cost as your primary motivator, proceed with caution. Fine-tuning might reduce costs for certain use cases by shortening prompts or allowing you to use a smaller model, but there's a higher upfront cost to training, and you have to pay for hosting your own custom model. Refer to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for more information on Azure OpenAI fine-tuning costs.
+- If you want to add out of domain knowledge to the model, you should start with retrieval augmented generation (RAG) with features like Azure OpenAI's [on your data](./use-your-data.md) or [embeddings](../tutorials/embeddings.md). Often, this is a cheaper, more adaptable, and potentially more effective option depending on the use case and data.
+
+## What have you tried so far?
+
+Fine-tuning is an advanced capability, not the starting point for your generative AI journey. You should already be familiar with the basics of using Large Language Models (LLMs). You should start by evaluating the performance of a base model with prompt engineering and/or Retrieval Augmented Generation (RAG) to get a baseline for performance.
+
+Having a baseline for performance without fine-tuning is essential for knowing whether or not fine-tuning has improved model performance. Fine-tuning with bad data makes the base model worse, but without a baseline, it's hard to detect regressions.
+
+**If you are ready for fine-tuning, you:**
+
+- Should be able to demonstrate evidence and knowledge of Prompt Engineering and RAG based approaches.
+- Be able to share specific experiences and challenges with techniques other than fine-tuning that were already tried for your use case.
+- Need to have quantitative assessments of baseline performance, whenever possible.
+
+**Common signs you might not be ready for fine-tuning yet:**
+
+- Starting with fine-tuning without having tested any other techniques.
+- Insufficient knowledge or understanding on how fine-tuning applies specifically to Large Language Models (LLMs).
+- No benchmark measurements to assess fine-tuning against.
+
+## What isn't working with alternate approaches?
+
+Understanding where prompt engineering falls short should provide guidance on going about your fine-tuning. Is the base model failing on edge cases or exceptions? Is the base model not consistently providing output in the right format, and you can't fit enough examples in the context window to fix it?
+
+Examples of failure with the base model and prompt engineering will help you identify the data you need to collect for fine-tuning, and how you should evaluate your fine-tuned model.
+
+Here's an example: A customer wanted to use GPT-3.5-Turbo to turn natural language questions into queries in a specific, non-standard query language. They provided guidance in the prompt ("Always return GQL") and used RAG to retrieve the database schema. However, the syntax wasn't always correct and often failed for edge cases. They collected thousands of examples of natural language questions and the equivalent queries for their database, including cases where the model had failed before, and used that data to fine-tune the model. Combining their new fine-tuned model with their engineered prompt and retrieval brought the accuracy of the model outputs up to acceptable standards for use.
+
+**If you are ready for fine-tuning, you:**
+
+- Have clear examples of how you've approached the challenges in alternate approaches and what's been tested as possible resolutions to improve performance.
+- You've identified shortcomings using a base model, such as inconsistent performance on edge cases, inability to fit enough few-shot prompts in the context window to steer the model, high latency, etc.
+
+**Common signs you might not be ready for fine-tuning yet:**
+
+- Insufficient knowledge from the model or data source.
+- Inability to find the right data to serve the model.
+
+## What data are you going to use for fine-tuning?
+
+Even with a great use case, fine-tuning is only as good as the quality of the data that you're able to provide. You need to be willing to invest the time and effort to make fine-tuning work. Different models require different data volumes, but you often need to be able to provide fairly large quantities of high-quality curated data.
+
+Another important point: even with high-quality data, if your data isn't in the format required for fine-tuning, you need to commit engineering resources to format it properly.
+
+| Data | Babbage-002 & Davinci-002 | GPT-3.5-Turbo |
+||||
+| Volume | Thousands of Examples | Thousands of Examples |
+| Format | Prompt/Completion | Conversational Chat |
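To make those two formats concrete, here's a minimal sketch that writes one sample record of each kind to local JSONL files. The file names and record contents are purely illustrative; real training sets need thousands of carefully curated examples.

```bash
# Prompt/completion format (babbage-002 and davinci-002) - one JSON object per line
cat > prompt-completion-sample.jsonl <<'EOF'
{"prompt": "Classify the sentiment of: I love this product!", "completion": " positive"}
EOF

# Conversational chat format (GPT-3.5-Turbo) - one conversation per line
cat > chat-sample.jsonl <<'EOF'
{"messages": [{"role": "system", "content": "You answer in a formal, concise tone."}, {"role": "user", "content": "How do I reset my password?"}, {"role": "assistant", "content": "To reset your password, open the account portal and select Forgot password."}]}
EOF
```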
+
+**If you are ready for fine-tuning, you:**
+
+- Have identified a dataset for fine-tuning.
+- The dataset is in the appropriate format for training.
+- Some level of curation has been employed to ensure dataset quality.
+
+**Common signs you might not be ready for fine-tuning yet:**
+
+- Dataset hasn't been identified yet.
+- Dataset format doesn't match the model you wish to fine-tune.
+
+## How will you measure the quality of your fine-tuned model?
+
+There isn't a single right answer to this question, but you should have clearly defined goals for what success with fine-tuning looks like. Ideally, this shouldn't just be qualitative but should include quantitative measures of success, such as using a holdout set of data for validation, as well as user acceptance testing or A/B testing the fine-tuned model against a base model.
+
+## Next steps
+
+- Watch the [Azure AI Show episode: "To fine-tune or not to fine-tune, that is the question"](https://www.youtube.com/watch?v=0Jo-z-MFxJs)
+- Learn more about [Azure OpenAI fine-tuning](../how-to/fine-tuning.md)
+- Explore our [fine-tuning tutorial](../tutorials/fine-tune.md)
ai-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md
Audio files can have silence at the beginning and end of the recording. If possi
Custom Speech projects require audio files with these properties:
+> [!IMPORTANT]
+> These are requirements for Audio + human-labeled transcript training and testing. They differ from the ones for Audio only training and testing. If you want to use Audio only training and testing, [see this section](#audio-data-for-training-or-testing).
+
| Property | Value |
|--|-|
| File format | RIFF (WAV) |
Audio data is optimal for testing the accuracy of Microsoft's baseline speech to
Custom Speech projects require audio files with these properties:
+> [!IMPORTANT]
+> These are requirements for Audio only training and testing. They differ from the ones for Audio + human-labeled transcript training and testing. If you want to use Audio + human-labeled transcript training and testing, [see this section](#audio--human-labeled-transcript-data-for-training-or-testing).
+
| Property | Value |
|--|--|
| File format | RIFF (WAV) |
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
Title: Regions - Speech service description: A list of available regions and endpoints for the Speech service, including speech to text, text to speech, and speech translation.- Previously updated : 09/16/2022 Last updated : 10/27/2023 -+ # Speech service supported regions
The following regions are supported for Speech service features such as speech t
| Europe | France Central | `francecentral` | | Europe | Germany West Central | `germanywestcentral` | | Europe | Norway East | `norwayeast` |
+| Europe | Sweden Central | `swedencentral` |
| Europe | Switzerland North | `switzerlandnorth` <sup>6</sup>| | Europe | Switzerland West | `switzerlandwest` | | Europe | UK South | `uksouth` <sup>1,2,3,4,7</sup>|
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Net
In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Extra nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
-A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pod, which provides connectivity performance between pods on par with VMs in a VNet.
+A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods, which provides connectivity performance between pods on par with VMs in a VNet. Workloads running within the pods are not even aware that network address manipulation is happening.
:::image type="content" source="media/azure-cni-Overlay/azure-cni-overlay.png" alt-text="A diagram showing two nodes with three pods each running in an Overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::
Communication with endpoints outside the cluster, such as on-premises and peered
You can provide outbound (egress) connectivity to the internet for Overlay pods using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).
-You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md).
+You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You cannot configure ingress connectivity using Azure Application Gateway. For details, see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay).
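As a sketch of how these pieces come together, a cluster that uses Overlay networking could be created with the Azure CLI as follows. The resource names and pod CIDR are placeholders, not values from this article.

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myOverlayCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
    --generate-ssh-keys
```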
## Differences between Kubenet and Azure CNI Overlay
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Last updated 09/26/2023
# Automatically scale a cluster to meet application demands on Azure Kubernetes Service (AKS)
-To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
+To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
This article shows you how to enable and manage the cluster autoscaler in an AKS cluster, which is based on the open source [Kubernetes][kubernetes-cluster-autoscaler] version.
To adjust to changing application demands, such as between workdays and evenings
* The **[Horizontal Pod Autoscaler][horizontal-pod-autoscaler]** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand. * **[Vertical Pod Autoscaler][vertical-pod-autoscaler]** (preview) automatically sets resource requests and limits on containers per workload based on past usage to ensure pods are scheduled onto nodes that have the required CPU and memory resources. The Horizontal Pod Autoscaler scales the number of pod replicas as needed, and the cluster autoscaler scales the number of nodes in a node pool as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity after a period of time. Any pods on a node removed by the cluster autoscaler are safely scheduled elsewhere in the cluster.
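For example, the Horizontal Pod Autoscaler half of this pairing can be enabled for a workload with a single command; a minimal sketch where the deployment name and thresholds are illustrative:

```bash
# Keep between 2 and 10 replicas of the deployment, targeting 70% average CPU utilization
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
```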
The cluster autoscaler and Horizontal Pod Autoscaler can work together and are o
> [!NOTE] > Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler-on-a-cluster).
-With cluster autoscaler enabled, when the node pool size is lower than the minimum or greater than the maximum it applies the scaling rules. Next, the autoscaler waits to take effect until a new node is needed in the node pool or until a node may be safely deleted from the current node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
+With cluster autoscaler enabled, when the node pool size is lower than the minimum or greater than the maximum, it applies the scaling rules. Next, the autoscaler waits to take effect until a new node is needed in the node pool or until a node can be safely deleted from the current node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
-The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations:
+The cluster autoscaler might be unable to scale down if pods can't move, such as in the following situations:
* A directly created pod not backed by a controller object, such as a deployment or replica set. * A pod disruption budget (PDB) is too restrictive and doesn't allow the number of pods to fall below a certain threshold.
The cluster autoscaler uses startup parameters for things like time intervals be
> [!IMPORTANT] > The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings. For more information, see [Can I modify the AKS resources in the node resource group?][aks-faq-node-resource-group]
+#### [Azure CLI](#tab/azure-cli)
+ * Update an existing cluster using the [`az aks update`][az-aks-update] command and enable and configure the cluster autoscaler on the node pool using the `--enable-cluster-autoscaler` parameter and specifying a node `--min-count` and `--max-count`. The following example command updates an existing AKS cluster to enable the cluster autoscaler on the node pool for the cluster and sets a minimum of one and maximum of three nodes: ```azurecli-interactive
The cluster autoscaler uses startup parameters for things like time intervals be
It takes a few minutes to update the cluster and configure the cluster autoscaler settings.
+#### [Portal](#tab/azure-portal)
+
+1. To enable cluster autoscaler on your existing cluster's node pools, navigate to *Node pools* from your cluster's overview page in the Azure portal. Select the *scale method* for the node pool you'd like to adjust scaling settings for.
+
+ :::image type="content" source="./media/cluster-autoscaler/main-blade-column-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools. The column for 'Scale method' is highlighted." lightbox="./media/cluster-autoscaler/main-blade-column.png":::
+
+1. From here, you can enable or disable autoscaling, adjust minimum and maximum node count, and learn more about your node pool's size, capacity, and usage. Select *Apply* to save your changes.
+
+ :::image type="content" source="./media/cluster-autoscaler/menu-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools is shown with the 'Scale node pool' menu expanded. The 'Apply' button is highlighted." lightbox="./media/cluster-autoscaler/menu.png":::
+++ ### Disable the cluster autoscaler on a cluster * Disable the cluster autoscaler using the [`az aks update`][az-aks-update-preview] command and the `--disable-cluster-autoscaler` parameter.
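A minimal sketch of that command, using placeholder resource names:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --disable-cluster-autoscaler
```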
The cluster autoscaler uses startup parameters for things like time intervals be
Nodes aren't removed when the cluster autoscaler is disabled. > [!NOTE]
-> You can manually scale your cluster after disabling the cluster autoscaler using the [`az aks scale`][az-aks-scale] command. If you use the horizontal pod autoscaler, that feature continues to run with the cluster autoscaler disabled, but pods may end up unable to be scheduled if all node resources are in use.
+> You can manually scale your cluster after disabling the cluster autoscaler using the [`az aks scale`][az-aks-scale] command. If you use the horizontal pod autoscaler, that feature continues to run with the cluster autoscaler disabled, but pods might end up unable to be scheduled if all node resources are in use.
### Re-enable a disabled cluster autoscaler
Monitor the performance of your applications and services, and adjust the cluste
## Use the cluster autoscaler profile
-You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale down event happens after nodes are under-utilized after 10 minutes. If you have workloads that run every 15 minutes, you may want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings you can update:
+You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale-down event happens after a node has been under-utilized for 10 minutes. If you have workloads that run every 15 minutes, you might want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings you can update:
* Example profile update that scales after 15 minutes and changes after 10 minutes of idle use.
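One way such an update could look, using the `--cluster-autoscaler-profile` parameter; the specific keys and values below are illustrative rather than prescriptive:

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile scan-interval=30s scale-down-delay-after-add=15m scale-down-unneeded-time=10m
```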
You can also configure more granular details of the cluster autoscaler by changi
| scale-down-delay-after-failure | How long after scale down failure that scale down evaluation resumes | 3 minutes | | scale-down-unneeded-time | How long a node should be unneeded before it's eligible for scale down | 10 minutes | | scale-down-unready-time | How long an unready node should be unneeded before it's eligible for scale down | 20 minutes |
+| ignore-daemonsets-utilization (Preview) | Whether DaemonSet pods will be ignored when calculating resource utilization for scaling down | false |
+| daemonset-eviction-for-empty-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from empty nodes | false |
+| daemonset-eviction-for-occupied-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from non-empty nodes | true |
| scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, in which a node can be considered for scale down | 0.5 | | max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false | | expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random | | skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | true |
You can also configure more granular details of the cluster autoscaler by changi
> > * The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you set the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile. > * The cluster autoscaler profile requires Azure CLI version *2.11.1* or later. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+> * To access preview features, use the aks-preview extension version 0.5.126 or later.
+ ### Set the cluster autoscaler profile on a new cluster
You can also configure more granular details of the cluster autoscaler by changi
You can retrieve logs and status updates from the cluster autoscaler to help diagnose and debug autoscaler events. AKS manages the cluster autoscaler on your behalf and runs it in the managed control plane. You can enable control plane node to see the logs and operations from the cluster autoscaler.
+### [Azure CLI](#tab/azure-cli)
+ Use the following steps to configure logs to be pushed from the cluster autoscaler into Log Analytics: 1. Set up a rule for resource logs to push cluster autoscaler logs to Log Analytics using the [instructions here][aks-view-master-logs]. Make sure you check the box for `cluster-autoscaler` when selecting options for **Logs**.
Use the following steps to configure logs to be pushed from the cluster autoscal
As long as there are logs to retrieve, you should see logs similar to the following logs:
- :::image type="content" source="media/autoscaler/autoscaler-logs.png" alt-text="Screenshot of Log Analytics logs.":::
+ :::image type="content" source="media/cluster-autoscaler/autoscaler-logs.png" alt-text="Screenshot of Log Analytics logs.":::
The cluster autoscaler also writes out the health status to a `configmap` named `cluster-autoscaler-status`. You can retrieve these logs using the following `kubectl` command:
Use the following steps to configure logs to be pushed from the cluster autoscal
kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml ```
+### [Portal](#tab/azure-portal)
+
+1. Navigate to *Node pools* from your cluster's overview page in the Azure portal. Select any of the tiles for autoscale events, autoscale warnings, or scale-ups not triggered to get more details.
+
+ :::image type="content" source="./media/cluster-autoscaler/main-blade-tiles-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools. The section displaying autoscaler events, warning, and scale-ups not triggered is highlighted." lightbox="./media/cluster-autoscaler/main-blade-tiles.png":::
+
+1. You'll see a list of Kubernetes events filtered to `source: cluster-autoscaler` that have occurred within the last hour. With this information, you'll be able to troubleshoot and diagnose any issues that might arise while scaling your nodes.
+
+ :::image type="content" source="./media/cluster-autoscaler/events-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's events. The filter for source is highlighted, showing 'source: cluster-autoscaler'." lightbox="./media/cluster-autoscaler/events.png":::
+++ To learn more about the autoscaler logs, see the [Kubernetes/autoscaler GitHub project FAQ][kubernetes-faq]. ## Use the cluster autoscaler with node pools
aks Use Oidc Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md
Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS)
description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS) Previously updated : 07/26/2023 Last updated : 10/27/2023 # Create an OpenID Connect provider on Azure Kubernetes Service (AKS)
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
``` > [!IMPORTANT]
-> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid.
+> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice and restart the pods using projected service account tokens. Then key2 and key3 are valid, and key1 is invalid.
## Check the OIDC keys
The output should resemble the following:
https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/ ```
-By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key.
+By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key, which is an immutable, randomly generated GUID for each cluster.
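If you need to retrieve the issuer URL again later, you can query the cluster directly; for example (cluster and resource group names are placeholders):

```azurecli-interactive
az aks show --name myAKSCluster --resource-group myResourceGroup --query "oidcIssuerProfile.issuerUrl" --output tsv
```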
### Get the discovery document
aks Windows Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-best-practices.md
+
+ Title: Best practices for Windows containers on Azure Kubernetes Service (AKS)
+description: Learn about best practices for running Windows containers in Azure Kubernetes Service (AKS).
+++ Last updated : 10/27/2023++
+# Best practices for Windows containers on Azure Kubernetes Service (AKS)
+
+In AKS, you can create node pools that run Linux or Windows Server as the operating system (OS) on the nodes. Windows Server nodes can run native Windows container applications, such as .NET Framework. The Linux OS and Windows OS have different container support and configuration considerations. For more information, see [Windows container considerations in Kubernetes][windows-vs-linux].
+
+This article outlines best practices for running Windows containers on AKS.
+
+## Create an AKS cluster with Linux and Windows node pools
+
+When you create a new AKS cluster, the Azure platform creates a Linux node pool by default. This node pool contains system services needed for the cluster to function. Azure also creates and manages a control plane abstracted from the user, which means you aren't exposed to the underlying OS of the nodes hosting the main control plane components. We recommend that you run at least *two nodes* on the default Linux node pool to ensure the reliability and performance of your cluster. You can't delete the default Linux node pool unless you delete the entire cluster.
+
+There are some cases where you should consider deploying a Linux node pool when planning to run Windows-based workloads on your AKS cluster, such as:
+
+* If you want to run Linux and Windows workloads, you can deploy a Linux node pool and a Windows node pool in the same cluster.
+* If you want to deploy infrastructure-related components based on Linux, such as NGINX, you need a Linux node pool alongside your Windows node pool. You can use control plane nodes for development and testing scenarios. For production workloads, we recommend that you deploy separate Linux node pools to ensure reliability and performance.
+
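A minimal sketch of adding a Windows Server node pool to an existing cluster looks like the following. The names and node count are placeholders, and the cluster is assumed to already use Azure CNI networking with Windows administrator credentials configured:

```azurecli-interactive
# Windows node pool names are limited to six characters
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --node-count 2
```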
+## Modernize existing applications with Windows on AKS
+
+You might want to containerize existing applications and run them using Windows on AKS. Before starting the containerization process, it's important to understand the application architecture and dependencies. For more information, see [Containerize existing applications using Windows containers](/virtualization/windowscontainers/quick-start/lift-shift-to-containers).
+
+## Windows OS version
+
+> **Best practice guidance**
+>
+> Windows Server 2022 provides the latest security and performance improvements and is the recommended OS for Windows node pools on AKS.
+
+AKS uses Windows Server 2019 and Windows Server 2022 as the host OS versions and only supports process isolation. AKS doesn't support container images built by other versions of Windows Server. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility).
+
+Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information, see the [AKS release notes][aks-release-notes].
+
+## Networking
+
+### Networking modes
+
+> **Best practice guidance**
+>
+> AKS clusters with Windows node pools only support Azure Container Networking Interface (Azure CNI) and use it by default.
+
+Windows doesn't support kubenet networking. AKS clusters with Windows node pools must use Azure CNI. For more information, see [Network concepts for applications in AKS][network-concepts-for-aks-applications].
+
+Azure CNI offers two networking modes based on your workload requirements:
+
+* [**Azure CNI Overlay**][azure-cni-overlay] is an overlay network similar to kubenet. The overlay network allows you to use virtual network (VNet) IPs for nodes and private address spaces for pods within those nodes that you can reuse across the cluster. Azure CNI Overlay is the **recommended networking mode**. It provides simplified network configuration and management and the best scalability in AKS networking.
+* [**Azure CNI with Dynamic IP Allocation**][azure-cni-dynamic-ip-allocation] requires extra planning and consideration for IP address management. This mode provides VNet IPs for nodes *and* pods. This configuration allows you direct access to pod IPs. However, it comes with increased complexity and reduced scalability.
+
+To help you decide which networking mode to use, see [Choosing a network model][azure-cni-choose-network-model].
+
+### Network policies
+
+> **Best practice guidance**
+>
+> Use network policies to secure traffic between pods. Windows supports Azure Network Policy Manager and Calico Network Policy. For more information, see [Differences between Azure Network Policy Manager and Calico Network Policy][azurenpm-vs-calico].
+
+When managing traffic between pods, you should apply the principle of least privilege. The Network Policy feature in Kubernetes allows you to define and enforce ingress and egress traffic rules between the pods in your cluster. For more information, see [Secure traffic between pods using network policies in AKS][network-policies-aks].
+
+Windows pods on AKS clusters that use the Calico Network Policy enable [Floating IP][dsr] by default.
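Network policy enforcement is chosen when the cluster is created. A sketch that enables Calico on a new cluster (resource names are placeholders):

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-policy calico \
    --generate-ssh-keys
```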
+
+## Upgrades and updates
+
+It's important to keep your Windows environment up-to-date to ensure your systems have the latest security updates, feature sets, and compliance requirements. In a Kubernetes environment like AKS, you need to maintain the Kubernetes version, Windows nodes, and Windows container images and pods.
+
+### Kubernetes version upgrades
+
+As a managed Kubernetes service, AKS provides the necessary tools to upgrade your cluster to the latest Kubernetes version. For more information, see [Upgrade an AKS cluster][upgrade-aks-cluster].
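For example, you could check which versions are available and then upgrade; the version number and resource names below are placeholders:

```azurecli-interactive
# List available upgrades, then move the cluster to one of them
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.27.7
```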
+
+### Windows node monthly updates
+
+Windows nodes on AKS follow a monthly update schedule. Every month, AKS creates a new VHD with the latest available updates for Windows node pools. The VHD includes the host image, latest Nano Server image, latest Server Core image, and container. We recommend performing monthly updates to your Windows node pools to ensure your nodes have the latest security patches. For more information, see [Upgrade AKS node images][upgrade-aks-node-images].
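A sketch of applying the latest node image to a Windows node pool without changing the Kubernetes version (names are placeholders):

```azurecli-interactive
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --node-image-only
```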
+
+> [!NOTE]
+> Upgrades on Windows systems include both OS version upgrades and monthly node OS updates.
+
+You can stay up to date with the availability of new monthly releases using the [AKS release tracker][aks-release-tracker] and [AKS release notes][aks-release-notes].
+
+### Windows node OS version upgrades
+
+Windows has a release cadence for new versions of the OS, including Windows Server 2019 and Windows Server 2022. When upgrading your Windows node OS version, ensure the Windows container image version matches the Windows container host version and the node pools have only one version of Windows Server.
+
+To upgrade the Windows node OS version, you need to complete the following steps:
+
+1. Create a new node pool with the new Windows Server version.
+2. Deploy your workloads with the new Windows container images to the new node pool.
+3. Decommission the old node pool.
+
+For more information, see [Upgrade Windows Server workloads on AKS][upgrade-windows-workloads-aks].
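For step 1, the new node pool can pin the newer OS version explicitly. A sketch, with placeholder names and the `--os-sku` value shown only as an example:

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npw22 \
    --os-type Windows \
    --os-sku Windows2022 \
    --node-count 2
```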
+
+> [!NOTE]
+> Microsoft announced a new [Windows Server Annual Channel for Containers](https://techcommunity.microsoft.com/t5/windows-server-news-and-best/windows-server-annual-channel-for-containers/ba-p/3866248) that supports portability and mixed versions of Windows nodes and containers. This feature isn't yet supported in AKS.
+>
+> To track AKS feature plans, see the [Public AKS roadmap](https://github.com/Azure/AKS/projects/1#card-90806240).
+
+## Next steps
+
+To learn more about Windows containers on AKS, see the following resources:
+
+* [Learn how to deploy, manage, and monitor Windows containers on AKS](/training/paths/deploy-manage-monitor-wincontainers-aks).
+* Open an issue or provide feedback in the [Windows containers GitHub repository](https://github.com/microsoft/Windows-Containers/issues).
+* Review the [third-party partner solutions for Windows on AKS][windows-on-aks-partner-solutions].
+
+<!-- LINKS - internal -->
+[azure-cni-overlay]: ./azure-cni-overlay.md
+[azure-cni-dynamic-ip-allocation]: ./configure-azure-cni-dynamic-ip-allocation.md
+[azure-cni-choose-network-model]: ./azure-cni-overlay.md#choosing-a-network-model-to-use
+[network-concepts-for-aks-applications]: ./concepts-network.md
+[windows-vs-linux]: ./windows-vs-linux-containers.md
+[azurenpm-vs-calico]: ./use-network-policies.md#differences-between-azure-network-policy-manager-and-calico-network-policy-and-their-capabilities
+[network-policies-aks]: ./use-network-policies.md
+[dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip
+[upgrade-aks-cluster]: ./upgrade-cluster.md
+[upgrade-aks-node-images]: ./node-image-upgrade.md
+[upgrade-windows-workloads-aks]: ./upgrade-windows-2019-2022.md
+[windows-on-aks-partner-solutions]: ./windows-aks-partner-solutions.md
+
+<!-- LINKS - external -->
+[aks-release-notes]: https://github.com/Azure/AKS/releases
+[aks-release-tracker]: https://releases.aks.azure.com/
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
In Azure Kubernetes Service (AKS), you can create a node pool that runs Windows
This article outlines some of the frequently asked questions and OS concepts for Windows Server nodes in AKS.
-## Which Windows operating systems are supported?
-
-AKS uses Windows Server 2019 and Windows Server 2022 as the host OS version and only supports process isolation. Container images built by using other Windows Server versions are not supported. For more information, see [Windows container version compatibility][windows-container-compat]. For Kubernetes version 1.25 and higher, Windows Server 2022 is the default operating system. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
- ## What kind of disks are supported for Windows? Azure Disks and Azure Files are the supported volume types, and are accessed as NTFS volumes in the Windows Server container.
Azure Disks and Azure Files are the supported volume types, and are accessed as
Generation 2 VMs are supported on Linux and Windows for WS2022 only. For more information, see [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
-## Can I run Windows only clusters in AKS?
-
-The master nodes (the control plane) in an AKS cluster are hosted by the AKS service. You won't be exposed to the operating system of the nodes hosting the master components. All AKS clusters are created with a default first node pool, which is Linux-based. This node pool contains system services that are needed for the cluster to function. We recommend that you run at least two nodes in the first node pool to ensure the reliability of your cluster and the ability to do cluster operations. The first Linux-based node pool can't be deleted unless the AKS cluster itself is deleted.
-
-In some cases, if you are planning to run Windows-based workloads on an AKS cluster, you should consider deploying a Linux node pool for the following reasons:
-- If you are planning to run Windows and Linux workloads, you can deploy a Windows and Linux node pool on the same AKS cluster to run the workloads side by side.-- When deploying infrastructure-related components based on Linux, such as Ngix and others, these workloads require a Linux node pool alongside your Windows node pools. For development and test scenarios, you can use control plane nodes. For production workloads, we recommend deploying separate Linux node pools for performance and reliability.- ## How do I patch my Windows nodes? To get the latest patches for Windows nodes, you can either [upgrade the node pool][nodepool-upgrade] or [upgrade the node image][upgrade-node-image]. Windows Updates are not enabled on nodes in AKS. AKS releases new node pool images as soon as patches are available, and it's the user's responsibility to upgrade node pools to stay current on patches and hotfixes. This patch process is also true for the Kubernetes version being used. [AKS release notes][aks-release-notes] indicate when new versions are available. For more information on upgrading the Windows Server node pool, see [Upgrade a node pool in AKS][nodepool-upgrade]. If you're only interested in updating the node image, see [AKS node image upgrades][upgrade-node-image].
To get the latest patches for Windows nodes, you can either [upgrade the node po
> [!NOTE] > The updated Windows Server image will only be used if a cluster upgrade (control plane upgrade) has been performed prior to upgrading the node pool.
-## What network plug-ins are supported?
-
-AKS clusters with Windows node pools must use the Azure Container Networking Interface (Azure CNI) (advanced) networking model. Kubenet (basic) networking is not supported. For more information on the differences in network models, see [Network concepts for applications in AKS][azure-network-models]. The Azure CNI network model requires extra planning and consideration for IP address management. For more information on how to plan and implement Azure CNI, see [Configure Azure CNI networking in AKS][configure-azure-cni].
-
-Windows nodes on AKS clusters also have [Direct Server Return (DSR)][dsr] enabled by default when Calico is enabled.
- ## Is preserving the client source IP supported? At this time, [client source IP preservation][client-source-ip] is not supported with Windows nodes.
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
Title: Add custom functionality to the Azure API Management developer portal
+ Title: Add custom functionality to developer portal - Azure API Management
description: How to customize the managed API Management developer portal with custom functionality such as custom widgets. Previously updated : 11/01/2022 Last updated : 10/27/2023
-# Extend the developer portal with custom features
+# Extend the developer portal with custom widgets
The API Management [developer portal](api-management-howto-developer-portal.md) features a visual editor and built-in widgets so that you can customize and style the portal's appearance. However, you may need to customize the developer portal further with custom functionality. For example, you might want to integrate your developer portal with a support system that involves adding a custom interface. This article explains ways to add custom functionality such as custom widgets to your API Management developer portal.
The following table summarizes three options, with links to more detail.
|Method |Description | ||| |[Custom HTML code widget](#use-custom-html-code-widget) | - Lightweight solution for API publishers to add custom logic for basic use cases<br/><br/>- Copy and paste custom HTML code into a form, and developer portal renders it in an iframe |
-|[Create and upload custom widget](#create-and-upload-custom-widget) | - Developer solution for more advanced widget use cases<br/><br/>- Requires local implementation in React, Vue, or plain TypeScript<br/><br/>- Widget scaffold and tools provided to help developers create widget and upload to developer portal<br/><br/>- Supports workflows for source control, versioning, and code reuse<br/><br/> |
+|[Create and upload custom widget](#create-and-upload-custom-widget) | - Developer solution for more advanced widget use cases<br/><br/>- Requires local implementation in React, Vue, or plain TypeScript<br/><br/>- Widget scaffold and tools provided to help developers create widget and upload to developer portal<br/><br/>- Widget creation, testing, and deployment can be scripted through open source [React Component Toolkit](#create-custom-widgets-using-open-source-react-component-toolkit)<br/><br/>- Supports workflows for source control, versioning, and code reuse |
|[Self-host developer portal](developer-portal-self-host.md) | - Legacy extensibility option for customers who need to customize source code of the entire portal core<br/><br/> - Gives complete flexibility for customizing portal experience<br/><br/>- Requires advanced configuration<br/><br/>- Customer responsible for managing complete code lifecycle: fork code base, develop, deploy, host, patch, and upgrade |-- ## Use Custom HTML code widget The managed developer portal includes a **Custom HTML code** widget where you can insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe).
The managed developer portal includes a **Custom HTML code** widget where you ca
## Create and upload custom widget
-For more advanced widget use cases, API Management provides a scaffold and tools to help developers create a widget and upload it to the developer portal.
-
-### Prerequisites
+For more advanced use cases, you can create and upload a custom widget to the developer portal. API Management provides a code scaffold for developers to create custom widgets in React, Vue, or plain TypeScript. The scaffold includes tools to help you develop and deploy your widget to the developer portal.
+### Prerequisites
+
* Install [Node.JS runtime](https://nodejs.org/en/) locally * Basic knowledge of programming and web development
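With Node.js installed, you can generate the widget scaffold locally. A sketch, assuming the scaffolder is published as the `@azure/api-management-custom-widgets-scaffolder` npm package:

```bash
# Runs an interactive generator that asks for the widget name, framework, and portal details
npx @azure/api-management-custom-widgets-scaffolder
```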
To implement your widget using another JavaScript UI framework and libraries, yo
* If your framework of choice isn't compatible with [Vite build tool](https://vitejs.dev/), configure it so that it outputs compiled files to the `./dist` folder. Optionally, redefine where the compiled files are located by providing a relative path as the fourth argument for the [`deployNodeJs`](#azureapi-management-custom-widgets-toolsdeploynodejs) function. * For local development, the `config.msapim.json` file must be accessible at the URL `localhost:<port>/config.msapim.json` when the server is running.
+## Create custom widgets using open source React Component Toolkit
+
+The open source [React Component Toolkit](https://github.com/microsoft/react-component-toolkit) provides a suite of npm package scripts to help you convert a React application to the custom widget framework, test it, and deploy the custom widget to the developer portal. If you have access to an Azure OpenAI service, the toolkit can also create a widget from a text description that you provide.
+
+Currently, you can use the toolkit in two ways to deploy a custom widget:
+
+* Manually, by installing the toolkit and running the npm package scripts locally. You run the scripts sequentially to create, test, and deploy a React component as a custom widget to the developer portal.
+* Using an [Azure Developer CLI (azd) template](https://github.com/Azure-Samples/react-component-toolkit-openai-demo) for an end-to-end deployment. The `azd` template deploys an Azure API Management instance and an Azure OpenAI instance. After resources are provisioned, an interactive script helps you create, test, and deploy a custom widget to the developer portal from a description that you provide.
+
+> [!NOTE]
+> The React Component Toolkit and Azure Developer CLI sample template are open source projects. Support is provided only through GitHub issues in the respective repositories.
-## Next steps
+## Related content
Learn more about the developer portal:
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
Set-AzWebApp $webapp
By default, App Service starts your app from the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. Such an app would be accessible at `http://contoso.com/public`, for example, but you typically want to direct `http://contoso.com` to the `public` directory instead. If your app's startup file is in a different folder, or if your repository has more than one application, you can edit or add virtual applications and directories.
+> [!IMPORTANT]
+> Mapping a virtual directory to a physical path is only available on Windows apps.
+ # [Azure portal](#tab/portal) 1. In the [Azure portal], search for and select **App Services**, and then select your app.
application-gateway Ingress Controller Autoscale Pods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md
Previously updated : 04/27/2023 Last updated : 10/26/2023
Use following two components:
* [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller. * [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) - We use HPA to use Application Gateway metrics and target a deployment for scaling.
+> [!NOTE]
+> The Azure Kubernetes Metrics Adapter is no longer maintained. Kubernetes Event-driven Autoscaling (KEDA) is an alternative.<br>
+> Also see [Application Gateway for Containers](for-containers/overview.md).
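As a rough sketch of the KEDA alternative, the add-on can be enabled on an AKS cluster before you define scalers that target Application Gateway metrics (cluster and resource group names are placeholders; check the current `az aks` documentation for the exact flag):

```azurecli
# Enable the KEDA add-on on an existing AKS cluster
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-keda
```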
+ ## Setting up Azure Kubernetes Metric Adapter 1. First, create a Microsoft Entra service principal and assign it `Monitoring Reader` access over Application Gateway's resource group.
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md
# Monitoring App Configuration data reference
-This article is a reference for the monitoring data collected by App Configuration. See [Monitoring App Configuration](monitor-app-configuration.md) for a walk through on to collect and analyze monitoring data for App Configuration.
+This article is a reference for the monitoring data collected by App Configuration. See [Monitoring App Configuration](monitor-app-configuration.md) for how to collect and analyze monitoring data for App Configuration.
## Metrics Resource Provider and Type: [App Configuration Platform Metrics](../azure-monitor/essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores)
Resource Provider and Type: [App Configuration Platform Metrics](../azure-monito
| Http Incoming Request Duration | Milliseconds | Server side duration of an Http Request | | Throttled Http Request Count | Count | Throttled requests are Http requests that receive a response with a status code of 429 | | Daily Storage Usage | Percent | Represents the amount of storage in use as a percentage of the maximum allowance. This metric is updated at least once daily. |
+| Request Quota Usage | Percent | Represents the current total request usage in percentage. |
| Replication Latency | Milliseconds | Represents the average time it takes for a replica to be consistent with current state. | For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
App Configuration has the following dimensions associated with its metr
| Metric Name | Dimension description | |-|--|
-| Http Incoming Request Count | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. |
-| Http Incoming Request Duration | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. |
+| Http Incoming Request Count | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by "AAD" or "HMAC" authentication. |
+| Http Incoming Request Duration | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by "AAD" or "HMAC" authentication. |
| Throttled Http Request Count | The **Endpoint** of each request is included as a dimension. | | Daily Storage Usage | This metric does not have any dimensions. |
+| Request Quota Usage | The supported dimensions are the **OperationType** ("Read" or "Write") and **Endpoint** of each request. |
| Replication Latency | The **Endpoint** of the replica that data was replicated to is included as a dimension. | For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md
You can analyze metrics for App Configuration with metrics from other Azure serv
* Http Incoming Request Duration * Throttled Http Request Count (Http status code 429 Responses) * Daily Storage Usage
+* Request Quota Usage
* Replication Latency In the portal, navigate to the **Metrics** section and select the **Metric Namespaces** and **Metrics** you want to analyze. This screenshot shows you the metrics view when selecting **Http Incoming Request Count** for your configuration store.
azure-app-configuration Rest Api Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-throttling.md
Last updated 08/17/2020
# Throttling
-Configuration stores have limits on the requests that they may serve. Any requests that exceed an allotted quota for a configuration store will receive an HTTP 429 (Too Many Requests) response.
+Configuration stores have limits on the requests that they can serve. Any requests that exceed an allotted quota for a configuration store will receive an HTTP 429 (Too Many Requests) response.
Throttling is divided into different quota policies:
In the above example, the client has exceeded its allowed quota and is advised t
## Other retry
-The service may identify situations other than throttling that need a client retry (ex: 503 Service Unavailable). In all such cases, the `retry-after-ms` response header will be provided. To increase robustness, the client is advised to follow the suggested interval and perform a retry.
+The service might identify situations other than throttling that need a client retry (ex: 503 Service Unavailable). In all such cases, the `retry-after-ms` response header will be provided. To increase robustness, the client is advised to follow the suggested interval and perform a retry.
```http HTTP/1.1 503 Service Unavailable retry-after-ms: 787 ```+
+## Monitoring
+
+To view the **Total Requests** quota usage, App Configuration provides a metric named **Request Quota Usage**. The request quota usage metric shows the current quota usage as a percentage.
+
+For more information on the request quota usage metric and other App Configuration metrics, see [Monitoring App Configuration data reference](./monitor-app-configuration-reference.md).
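For example, the metric can also be retrieved from the command line (a sketch; `RequestQuotaUsage` is an assumed metric name based on the display name, and the resource ID is a placeholder):

```azurecli
# Query the request quota usage metric for a configuration store (metric name assumed)
az monitor metrics list \
  --resource /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppConfiguration/configurationStores/<store-name> \
  --metric "RequestQuotaUsage" \
  --interval PT1H
```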
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters." Previously updated : 10/12/2023 Last updated : 10/27/2023 description: "With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters from anywhere without requiring any inbound port to be enabled on the firewall."
Before you begin, review the [conceptual overview of the cluster connect feature
## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure Arc-enabled Kubernetes connected cluster.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
+ ### [Azure CLI](#tab/azure-cli) -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to the latest version.
Before you begin, review the [conceptual overview of the cluster connect feature
az extension update --name connectedk8s ``` -- An existing Azure Arc-enabled Kubernetes connected cluster.
- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
--- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access:-
- | Endpoint | Port |
- |-|-|
- |`*.servicebus.windows.net` | 443 |
- |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
-
- > [!NOTE]
- > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
- - Replace the placeholders and run the below command to set the environment variables used in this document: ```azurecli
Before you begin, review the [conceptual overview of the cluster connect feature
### [Azure PowerShell](#tab/azure-powershell) -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).- - Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell). -- An existing Azure Arc-enabled Kubernetes connected cluster.
- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
--- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access:-
- | Endpoint | Port |
- |-|-|
- |`*.servicebus.windows.net` | 443 |
- |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
-
- > [!NOTE]
- > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
- - Replace the placeholders and run the below command to set the environment variables used in this document: ```azurepowershell
Before you begin, review the [conceptual overview of the cluster connect feature
+- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access:
+
+ | Endpoint | Port |
+ |-|-|
+ |`*.servicebus.windows.net` | 443 |
+ |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
+
+ > [!NOTE]
+ > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+ [!INCLUDE [arc-region-note](../includes/arc-region-note.md)] ## Set up authentication
On the existing Arc-enabled cluster, create the ClusterRoleBinding with either M
1. Authorize the entity with appropriate permissions.
- - If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. Example:
+ - If you're using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. For example:
```console kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID ```
- - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Microsoft Entra entity. Example:
+ - If you're using Azure RBAC for authorization checks on the cluster, you can create an applicable [Azure role assignment](azure-rbac.md#built-in-roles) mapped to the Microsoft Entra entity. For example:
```azurecli az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
On the existing Arc-enabled cluster, create the ClusterRoleBinding with either M
1. Authorize the entity with appropriate permissions.
- - If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. Example:
+ - If you're using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. For example:
```console kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID ```
- - If you are using [Azure RBAC for authorization checks](azure-rbac.md) on the cluster, you can create an Azure role assignment mapped to the Microsoft Entra entity. Example:
+ - If you're using [Azure RBAC for authorization checks](azure-rbac.md) on the cluster, you can create an applicable [Azure role assignment](azure-rbac.md#built-in-roles) mapped to the Microsoft Entra entity. For example:
- ```azurecli
+ ```azurepowershell
+
az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER az role assignment create --role "Azure Arc Enabled Kubernetes Cluster User Role" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER ```
Use `az connectedk8s show` to check your Arc-enabled Kubernetes agent version.
### [Agent version < 1.11.7](#tab/agent-version)
-When making requests to the Kubernetes cluster, if the Microsoft Entra entity used is a part of more than 200 groups, you may see the following error:
+When making requests to the Kubernetes cluster, if the Microsoft Entra entity used is a part of more than 200 groups, you might see the following error:
`You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.`
This is a known limitation. To get past this error:
### [Agent version >= 1.11.7](#tab/agent-version-latest)
-When making requests to the Kubernetes cluster, if the Microsoft Entra service principal used is a part of more than 200 groups, you may see the following error:
+When making requests to the Kubernetes cluster, if the Microsoft Entra service principal used is a part of more than 200 groups, you might see the following error:
`Overage claim (users with more than 200 group membership) for SPN is currently not supported. For troubleshooting, please refer to aka.ms/overageclaimtroubleshoot`
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to always update to the latest version, or opt in to the
| Release Date | Release notes | Windows | Linux | |:|:|:|:| | October 2023| **Linux** <ul><li>Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics<li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ui> |None|1.28.0|
-| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when AMA vm-extension is provisioned involving disable command</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
+| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
| August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li> Comming soon</li></ui>|1.19.0| Comming Soon | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.<li>MetricExtension updated to 2.2023.609.2051</li></ui> |1.18.0|None| | June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4|
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
Monitoring of your Node.js web applications running on [Azure App Services](../.
The easiest way to enable application monitoring for Node.js applications running on Azure App Services is through Azure portal. Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.
+>[!NOTE]
+> You can configure the automatically attached agent using the APPLICATIONINSIGHTS_CONFIGURATION_CONTENT environment variable in the App Service Environment variable blade. For details on the configuration options that can be passed via this environment variable, see [Node.js Configuration](https://github.com/microsoft/ApplicationInsights-node.js#Configuration).
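For example, one way to set this environment variable is through an app setting (a sketch; `samplingPercentage` is an assumed field from the Node.js SDK's JSON configuration, and the app and resource group names are placeholders):

```azurecli
# Set the agent configuration content as an app setting (configuration field assumed)
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings 'APPLICATIONINSIGHTS_CONFIGURATION_CONTENT={"samplingPercentage": 50}'
```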
+ > [!NOTE]
-> If both autoinstrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) in this article.
+> If both automatic instrumentation and manual SDK-based instrumentation are detected, only the manual instrumentation settings are honored. This is to prevent duplicate data from being sent. For more information, see the [troubleshooting section](#troubleshooting) in this article.
### Autoinstrumentation through Azure portal
Below is our step-by-step troubleshooting guide for extension/agent based monito
If `SDKPresent` is true this indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. - # [Linux](#tab/linux) 1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
Below is our step-by-step troubleshooting guide for extension/agent based monito
``` If `SDKPresent` is true this indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off.++ [!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)]
For the latest updates and bug fixes, [consult the release notes](web-app-extens
* [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold. * Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page. * [Availability overview](availability-overview.md)+
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
# Azure Monitor best practices - Planning your monitoring strategy and configuration
-This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes planning that you should consider before starting your implementation. This ensures that the configuration options you choose meet your particular business requirements.
+This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes planning that you should consider before starting your implementation. This planning ensures that the configuration options you choose meet your particular business requirements.
-If you're not already familiar with monitoring concepts, start with the [Cloud monitoring guide](/azure/cloud-adoption-framework/manage/monitor) which is part of the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/). That guide defines high-level concepts of monitoring and provides guidance for defining requirements for your monitoring environment and supporting processes. This article will refer to sections of that guide that are relevant to particular planning steps.
+If you're not already familiar with monitoring concepts, start with the [Cloud monitoring guide](/azure/cloud-adoption-framework/manage/monitor), which is part of the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/). That guide defines high-level concepts of monitoring and provides guidance for defining requirements for your monitoring environment and supporting processes. This article refers to sections of that guide that are relevant to particular planning steps.
## Understand Azure Monitor costs
-A core goal of your monitoring strategy will be minimizing costs. Some data collection and features in Azure Monitor have no cost while other have costs based on their particular configuration, amount of data collected, or frequency that they're run. The articles in this scenario will identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following for details and guidance on Azure Monitor pricing:
+Minimizing costs is a core goal of your monitoring strategy. Some data collection and features in Azure Monitor have no cost. However, others have costs based on their particular configuration, amount of data collected, or frequency that they're run. The articles in this scenario identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following pages for details and guidance on Azure Monitor pricing:
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) - [Monitor usage and estimated costs in Azure Monitor](usage-estimated-costs.md) ## Define strategy
-Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to leverage the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy.
+Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to use the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy.
-See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for a number of factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) which will assist in comparing completely cloud based monitoring with a hybrid model.
+See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for many factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for assistance with comparing completely cloud based monitoring with a hybrid model.
## Gather required information Before you determine the details of your implementation, you should gather information required to define those details. The following sections described information typically required for a complete implementation of Azure Monitor. ### What needs to be monitored?
- You won't necessarily configure complete monitoring for all of your cloud resources but instead focus on your critical applications and the components they depend on. This will not only reduce your monitoring costs but also reduce the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require.
+ You don't need to necessarily configure complete monitoring for all of your cloud resources but instead focus on your critical applications and the components they depend on. This focus will not only reduce your monitoring costs but also reduce the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require.
### Who needs to have access and be notified
-As you configure your monitoring environment, you need to determine which users should have access to monitoring data and which users need to be notified when an issue is detected. These may be application and resource owners, or you may have a centralized monitoring team. This information will determine how you configure permissions for data access and notifications for alerts. You may also require custom workbooks to present particular sets of information to different users.
+As you configure your monitoring environment, you need to determine the following:
+
+- Which users should have access to monitoring data
+- Which users need to be notified when an issue is detected
+
+These users may be application and resource owners, or you may have a centralized monitoring team. This information determines how you configure permissions for data access and notifications for alerts. You may also require custom workbooks to present particular sets of information to different users.
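For example, read-only access to monitoring data can be granted with a role assignment (a sketch; the assignee and scope are placeholders):

```azurecli
# Grant a user read-only access to monitoring data within a resource group
az role assignment create \
  --role "Monitoring Reader" \
  --assignee user@contoso.com \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```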
### Service level agreements
-Your organization may have SLAs that define your commitments for performance and uptime of your applications. These SLAs may determine how you need to configure time sensitive features of Azure Monitor such as alerts. You will also need to understand [data latency in Azure Monitor](logs/data-ingestion-time.md) since this will affect the responsiveness of monitoring scenarios and your ability to meet SLAs.
+Your organization may have SLAs that define your commitments for performance and uptime of your applications. These SLAs may determine how you need to configure time sensitive features of Azure Monitor such as alerts. You also need to understand [data latency in Azure Monitor](logs/data-ingestion-time.md) since this affects the responsiveness of monitoring scenarios and your ability to meet SLAs.
## Identify monitoring services and products
-Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution will typically involve multiple Azure services and potentially other products. Other monitoring objectives, which may require additional solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements).
+Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution typically involves multiple Azure services and potentially other products. Other monitoring objectives, which may require more solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements).
-The following sections describe other services and products that you may use in conjunction with Azure Monitor. This scenario currently doesn't include guidance on implementing these solutions so you should refer to their documentation.
+The following sections describe other services and products that you may use with Azure Monitor. This scenario currently doesn't include guidance on implementing these solutions so you should refer to their documentation.
### Security monitoring While the operational data stored in Azure Monitor might be useful for investigating security incidents, other services in Azure were designed to monitor security. Security monitoring in Azure is performed by Microsoft Defender for Cloud and Microsoft Sentinel.
While the operational data stored in Azure Monitor might be useful for investiga
### System Center Operations Manager
-You may have an existing investment in System Center Operations Manager for monitoring on-premises resources and workloads running on your virtual machines. You may choose to [migrate this monitoring to Azure Monitor](azure-monitor-operations-manager.md) or continue to use both products together in a hybrid configuration. See [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview) for a comparison of the two products. See [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for guidance on using the two in a hybrid configuration and on determining the most appropriate model for your environment.
+You may have an existing investment in System Center Operations Manager for monitoring on-premises resources and workloads running on your virtual machines. You may choose to [migrate this monitoring to Azure Monitor](azure-monitor-operations-manager.md) or continue to use both products together in a hybrid configuration. See [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview) for a comparison of the two products. See [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for how to use the two in a hybrid configuration and determine the most appropriate model for your environment.
+
+## Frequently asked questions
+
+This section provides answers to common questions.
+### What IP addresses does Azure Monitor use?
+See [IP addresses used by Application Insights and Log Analytics](app/ip-addresses.md) for the IP addresses and ports required for agents and other external resources to access Azure Monitor.
## Next steps
azure-monitor Change Analysis Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md
foreach ($webapp in $webapp_list)
} ```
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How can I enable Change Analysis for a web application?
+
+Enable Change Analysis for web application in-guest changes by using the [Diagnose and solve problems tool](./change-analysis-visualizations.md#diagnose-and-solve-problems-tool).
+ ## Next steps - Learn about [visualizations in Change Analysis](change-analysis-visualizations.md)
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
Currently the following dependencies are supported in **Web App Diagnose and sol
- **Web app deployment and configuration changes**: Since these changes are collected by a site extension and stored on disk space owned by your application, data collection and storage is subject to your application's behavior. Check to see if a misbehaving application is affecting the results. - **Snapshot retention for all changes**: The Change Analysis data for resources is tracked by Azure Resource Graphs (ARG). ARG keeps snapshot history of tracked resources only for 14 days.
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Does using Change Analysis incur cost?
+
+You can use Change Analysis at no extra cost. Enable the `Microsoft.ChangeAnalysis` resource provider, and anything supported by Change Analysis is open to you.
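For example, the resource provider can be registered from the command line (requires permission to register resource providers on the subscription):

```azurecli
# Register the Change Analysis resource provider on the current subscription
az provider register --namespace Microsoft.ChangeAnalysis
```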
## Next steps
azure-monitor Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md
These articles provide detailed information about each of the main steps you'll
| [Configure alerts and automated responses](best-practices-alerts.md) |Configure notifications and processes that are automatically triggered when an alert is fired. | | [Optimize costs](best-practices-cost.md) | Reduce your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. |
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How do I enable Azure Monitor?
+
+Azure Monitor is enabled the moment that you create a new Azure subscription, and [activity log](./essentials/platform-logs-overview.md) and platform [metrics](essentials/data-platform-metrics.md) are automatically collected. Create [diagnostic settings](essentials/diagnostic-settings.md) to collect more detailed information about the operation of your Azure resources, and add [monitoring solutions](/previous-versions/azure/azure-monitor/insights/solutions) and [insights](./monitor-reference.md) to provide extra analysis on collected data for particular services.
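For example, a diagnostic setting that routes a resource's logs to a Log Analytics workspace can be created from the command line (a sketch; the resource and workspace IDs are placeholders, and available log categories vary by resource type):

```azurecli
# Send all resource logs for a resource to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name send-to-workspace \
  --resource <resource-id> \
  --workspace <log-analytics-workspace-id> \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```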
+
+### How do I access Azure Monitor?
+
+Access all Azure Monitor features and data from the **Monitor** menu in the Azure portal. The **Monitoring** section of the menu for different Azure services provides access to the same tools with data filtered to a particular resource. Azure Monitor data is also accessible for various scenarios by using the Azure CLI, PowerShell, and a REST API.
+ ## Next steps
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
For a list of where log queries are used and references to tutorials and other d
![Screenshot that shows queries in Log Analytics.](media/data-platform-logs/log-analytics.png) ## Relationship to Azure Data Explorer
-Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL.
+Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL. For information on KQL, see [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
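As a brief illustration, a KQL query can be run against a Log Analytics workspace from the command line (a sketch; the workspace GUID is a placeholder):

```azurecli
# Count activity log records by category
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "AzureActivity | summarize count() by CategoryValue | top 10 by count_"
```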
The experience of using Log Analytics to work with Azure Monitor queries in the Azure portal is similar to the experience of using the Azure Data Explorer Web UI. You can even [include data from a Log Analytics workspace in an Azure Data Explorer query](/azure/data-explorer/query-monitor-data).
azure-monitor Tutorial Logs Ingestion Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md
Previously updated : 03/20/2023 Last updated : 10/27/2023 # Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)
-The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme).
+The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azingest), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme).
> [!NOTE] > This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal UI to configure these components.
The steps required to configure the Logs ingestion API are as follows:
3. [Create a data collection endpoint (DCE)](#create-data-collection-endpoint) to receive data. 2. [Create a custom table in a Log Analytics workspace](#create-new-table-in-log-analytics-workspace). This is the table you'll be sending data to. 4. [Create a data collection rule (DCR)](#create-data-collection-rule) to direct the data to the target table.
-5. [Give the AD application access to the DCR](#assign-permissions-to-a-dcr).
+5. [Give the Microsoft Entra application access to the DCR](#assign-permissions-to-a-dcr).
6. See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md) for sample code to send data to using the Logs ingestion API. ## Prerequisites
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
You may need to integrate Azure Monitor with other systems or to build custom so
|[Azure Functions](../azure-functions/functions-overview.md)| Similar to Azure Logic Apps, Azure Functions give you the ability to pre process and post process monitoring data as well as perform complex action beyond the scope of typical Azure Monitor alerts. Azure Functions uses code however providing additional flexibility over Logic Apps. |Azure DevOps and GitHub | Azure Monitor Application Insights gives you the ability to create [Work Item Integration](app/release-and-work-item-insights.md?tabs=work-item-integration) with monitoring data embedding in it. Additional options include [release annotations](app/release-and-work-item-insights.md?tabs=release-annotations) and [continuous monitoring](app/release-and-work-item-insights.md?tabs=continuous-monitoring). |
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### What's the difference between Azure Monitor, Log Analytics, and Application Insights?
+
+In September 2018, Microsoft combined Azure Monitor, Log Analytics, and Application Insights into a single service to provide powerful end-to-end monitoring of your applications and the components they rely on. Features in Log Analytics and Application Insights haven't changed, although some features have been rebranded to Azure Monitor to better reflect their new scope. The log data engine and query language of Log Analytics is now referred to as Azure Monitor Logs.
+
+### How much does Azure Monitor cost?
+
+The cost of Azure Monitor is based on your usage of different features and is primarily determined by the amount of data you collect. See [Azure Monitor cost and usage](./usage-estimated-costs.md) for details on how costs are determined and [Cost optimization in Azure Monitor](./best-practices-cost.md) for recommendations on reducing your overall spend.
+
+### Is there an on-premises version of Azure Monitor?
+
+No. Azure Monitor is a scalable cloud service that processes and stores large amounts of data, although Azure Monitor can monitor resources that are on-premises and in other clouds.
+
+### Does Azure Monitor integrate with System Center Operations Manager?
+
+You can connect your existing System Center Operations Manager management group to Azure Monitor to collect data from agents into Azure Monitor Logs. This capability allows you to use log queries and solutions to analyze data collected from agents. You can also configure existing System Center Operations Manager agents to send data directly to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md).
+
## Next steps - [Getting started with Azure Monitor](getting-started.md)
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
You can enable preview features by adding:
The preceding sample enables 'userDefineTypes' and 'extensibility`. The available experimental features include: - **assertions**: Should be enabled in tandem with `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions using the `assert` keyword comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference. For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967).-- **compileTimeImports**: Allows you to use symbols defined in another template. See [Import user-defined data types](./bicep-import.md#import-user-defined-data-types-preview).
+- **compileTimeImports**: Allows you to use symbols defined in another Bicep file. See [Import types, variables and functions](./bicep-import.md#import-types-variables-and-functions-preview).
- **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). - **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file. - **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245).
azure-resource-manager Bicep Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-import.md
Title: Import Bicep namespaces
-description: Describes how to import Bicep namespaces.
+ Title: Imports in Bicep
+description: Describes how to import shared functionality and namespaces in Bicep.
Last updated 09/21/2023
-# Import Bicep namespaces
+# Imports in Bicep
-This article describes the syntax you use to import user-defined data types and the Bicep namespaces including the Bicep extensibility providers.
+This article describes the syntax you use to export and import shared functionality, as well as namespaces for Bicep extensibility providers.
-## Import user-defined data types (Preview)
+## Exporting types, variables and functions (Preview)
-[Bicep version 0.21.1 or newer](./install.md) is required to use this feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features).
+> [!NOTE]
+> [Bicep version 0.23 or newer](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled.
+The `@export()` decorator is used to indicate that a given statement can be imported by another file. This decorator is only valid on type, variable and function statements. Variable statements marked with `@export()` must be compile-time constants.
-The syntax for importing [user-defined data type](./user-defined-data-types.md) is:
+The syntax for exporting functionality for use in other Bicep files is:
```bicep
-import {<user-defined-data-type-name>, <user-defined-data-type-name>, ...} from '<bicep-file-name>'
+@export()
+<statement_to_export>
+```
+
+## Import types, variables and functions (Preview)
+
+> [!NOTE]
+> [Bicep version 0.23.X or newer](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled.
+
+The syntax for importing functionality from another Bicep file is:
+
+```bicep
+import {<symbol_name>, <symbol_name>, ...} from '<bicep_file_name>'
+```
+
+With optional aliasing to rename symbols:
+
+```bicep
+import {<symbol_name> as <alias_name>, ...} from '<bicep_file_name>'
```
-or with wildcard syntax:
+Using the wildcard import syntax:
```bicep
-import * as <namespace> from '<bicep-file-name>'
+import * as <alias_name> from '<bicep_file_name>'
```
-You can mix and match the two preceding syntaxes.
+You can mix and match the preceding syntaxes. To access imported symbols using the wildcard syntax, you must use the `.` operator: `<alias_name>.<exported_symbol>`.
-Only user-defined data types that bear the [@export() decorator](./user-defined-data-types.md#import-types-between-bicep-files-preview) can be imported. Currently, this decorator can only be used on [`type`](./user-defined-data-types.md) statements.
+Only statements that have been [exported](#exporting-types-variables-and-functions-preview) in the file being referenced are available to be imported.
-Imported types can be used anywhere a user-defined type might be, for example, within the type clauses of type, param, and output statements.
+Functionality that has been imported from another file can be used without restrictions. For example, imported variables can be used anywhere a variable declared in-file would normally be valid.
### Example
-myTypes.bicep
+module.bicep
```bicep @export()
-type myString = string
+type myObjectType = {
+ foo: string
+ bar: int
+}
@export()
-type myInt = int
+var myConstant = 'This is a constant value'
+
+@export()
+func sayHello(name string) string => 'Hello ${name}!'
``` main.bicep ```bicep
-import * as myImports from 'myTypes.bicep'
-import {myInt} from 'myTypes.bicep'
+import * as myImports from 'exports.bicep'
+import {myObjectType, sayHello} from 'exports.bicep'
-param exampleString myImports.myString = 'Bicep'
-param exampleInt myInt = 3
+param exampleObject myObjectType = {
+ foo: myImports.myConstant
+ bar: 0
+}
-output outString myImports.myString = exampleString
-output outInt myInt = exampleInt
+output greeting string = sayHello('Bicep user')
+output exampleObject myImports.myObjectType = exampleObject
```
-## Import namespaces and extensibility providers
+## Import namespaces and extensibility providers (Preview)
-The syntax for importing the namespaces is:
+> [!NOTE]
+> The experimental feature `extensibility` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features) to use this feature.
+
+The syntax for importing namespaces is:
```bicep import 'az@1.0.0'
Both `az` and `sys` are Bicep built-in namespaces. They are imported by default.
The syntax for importing Bicep extensibility providers is:
+```bicep
+import '<provider-name>@<provider-version>'
+```
+
+The syntax for importing Bicep extensibility providers which require configuration is:
+ ```bicep import '<provider-name>@<provider-version>' with { <provider-properties>
azure-resource-manager User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-functions.md
When defining a user function, there are some restrictions:
* The function can't access variables. * The function can only use parameters that are defined in the function.
-* The function can't call other user-defined functions.
* The function can't use the [reference](bicep-functions-resource.md#reference) function or any of the [list](bicep-functions-resource.md#list) functions. * Parameters for the function can't have default values.
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 4/20/2023 Last updated : 10/27/2023 # Known issues: Azure VMware Solution This article describes the currently known issues with Azure VMware Solution.
-Refer to the table below to find details about resolution dates or possible workarounds. For more information about the different feature enhancements and bug fixes in Azure VMware Solution, see [What's New](azure-vmware-solution-platform-updates.md).
+Refer to the table to find details about resolution dates or possible workarounds. For more information about the different feature enhancements and bug fixes in Azure VMware Solution, see [What's New](azure-vmware-solution-platform-updates.md).
|Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
-| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
+| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This alert should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
| When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
+| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | Alarm raised because there are more than 4 clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. This scale-up is detected and completed by Microsoft; however, you can also open a support request. | 2023 |
+| When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option is not available. | 2023 | The default VMware HCX Compute Profile does not have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 |
+| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | Microsoft is currently working with its security teams and partners to evaluate the risk to Azure VMware Solution and its customers. Initial investigations have shown that controls in place within Azure VMware Solution reduce the risk of CVE-2023-34048. However, Microsoft is working on a plan to roll out security fixes in the near future to completely remediate the security vulnerability. | October 2023 |
In this article, you learned about the current known issues with the Azure VMware Solution.
azure-web-pubsub Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md
Update the **webpubsub** extension to the latest version, and then run:
```azurecli
az webpubsub replica create --sku Premium_P1 -l eastus --replica-name MyReplica --name MyWebPubSub -g MyResourceGroup
```
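The extension update itself can also be done from the CLI. A minimal sketch; the `--upgrade` flag installs the extension if it's missing and updates it if it's already present:

```azurecli
# Install or update the webpubsub CLI extension.
az extension add --name webpubsub --upgrade
```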
+ ## Pricing and resource unit

Each replica has its **own** `unit` and `autoscale settings`.
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
The following table describes the network topologies supported by each network features configuration:
|Topology |Supported |
| :- |::|
-|Connectivity to BareMetal (BM) in a local VNet| Yes |
-|Connectivity to BM in a peered VNet (Same region)|Yes |
-|Connectivity to BM in a peered VNet\* (Cross region or global peering)\*|No |
+|Connectivity to BareMetal Infrastructure (BMI) in a local VNet| Yes |
+|Connectivity to BMI in a peered VNet (Same region)|Yes |
+|Connectivity to BMI in a peered VNet\* (Cross region or global peering) with VWAN\*|Yes |
+|Connectivity to BMI in a peered VNet* (Cross region or global peering)* without VWAN| No|
|On-premises connectivity to Delegated Subnet via Global and Local ExpressRoute |Yes|
|ExpressRoute (ER) FastPath |No |
-|Connectivity from on-premises to a BM in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
+|Connectivity from on-premises to BMI in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
|On-premises connectivity to Delegated Subnet via VPN GW| Yes |
-|Connectivity from on-premises to a BM in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes |
+|Connectivity from on-premises to BMI in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes |
|Connectivity over Active/Passive VPN gateways| Yes |
|Connectivity over Active/Active VPN gateways| No |
|Connectivity over Active/Active Zone Redundant gateways| No |
The following table describes what's supported for each network features configuration:
| :- | -: |
|Delegated subnet per VNet |1|
|[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No|
-|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets|No|
+|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets with VWAN|Yes|
+|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets without VWAN| No|
|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in the same VNet on Azure-delegated subnets|No|
|Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in a different spoke VNet connected to vWAN|Yes|
|Load balancers for NC2 on Azure traffic|No|
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and known issues to be aware of:
- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers:
  * **Windows:** Microsoft Edge, Google Chrome, and Firefox
  * **MacOS:** Safari, Google Chrome, and Firefox
-- **Terraform** - At present Chaos Studio does not support terraform.
+- **Terraform** - Chaos Studio does not support Terraform at this time.
+- **PowerShell modules** - Chaos Studio does not have dedicated PowerShell modules at this time. For PowerShell, use our REST API.
+- **Azure CLI** - Chaos Studio does not have dedicated Azure CLI commands at this time. Use our REST API from the Azure CLI.
+- **Azure Policy** - Chaos Studio does not support the applicable built-in policies for our service (audit policy for customer-managed keys and Private Link) at this time.
+- **Private Link** - To use Private Link for the agent service, you need to have your subscription allowlisted and use our preview API version. We do not support Azure portal UI experiments for agent-based experiments that use Private Link. These restrictions do NOT apply to our service-direct faults.
+- **Customer-Managed Keys** - You need to use our 2023-10-27-preview REST API via a CLI to create CMK-enabled experiments. We do not support Portal UI experiments using CMK at this time.
+- **Lockbox** - At present, we do not have integration with Customer Lockbox.
+- **Java SDK** - At present, we do not have a dedicated Java SDK. If this is something you would use, reach out to us with your feature request.
- **Built-in roles** - Chaos Studio does not currently have its own built-in roles. Permissions to run a chaos experiment can be granted by assigning either an [Azure built-in role](chaos-studio-fault-providers.md) or a custom role that you create to the experiment's identity.

## Known issues
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
description: This article describes a scenario for using Azure Cloud Shell in a
ms.contributor: jahelmic Last updated 06/21/2023 Title: Using Cloud Shell in an Azure virtual network
+ Title: Use Cloud Shell in an Azure virtual network
-# Using Cloud Shell in an Azure virtual network
+# Use Cloud Shell in an Azure virtual network
-By default, Cloud Shell sessions run in a container in a Microsoft network that's separate from your
-resources. Commands run inside the container can't access resources in a private virtual network.
-For example, you can't use SSH to connect from Cloud Shell to a virtual machine that only has a
-private IP address, or use `kubectl` to connect to a Kubernetes cluster that has locked down access.
+By default, Azure Cloud Shell sessions run in a container in a Microsoft network that's separate
+from your resources. Commands that run inside the container can't access resources in a private
+virtual network. For example, you can't use Secure Shell (SSH) to connect from Cloud Shell to a
+virtual machine that has only a private IP address, or use `kubectl` to connect to a Kubernetes
+cluster that has locked down access.
-To provide access to your private resources, you can deploy Cloud Shell into an Azure Virtual
-Network that you control. This is referred to as _VNET isolation_.
+To provide access to your private resources, you can deploy Cloud Shell into an Azure virtual
+network that you control. This technique is called _virtual network isolation_.
-## Benefits to VNET isolation with Azure Cloud Shell
+## Benefits of virtual network isolation with Cloud Shell
-Deploying Azure Cloud Shell in a private VNET offers several benefits:
+Deploying Cloud Shell in a private virtual network offers these benefits:
-- The resources you want to manage don't have to have public IP addresses.-- You can use command line tools, SSH, and PowerShell remoting from the Cloud Shell container to
+- The resources that you want to manage don't need to have public IP addresses.
+- You can use command-line tools, SSH, and PowerShell remoting from the Cloud Shell container to
manage your resources. - The storage account that Cloud Shell uses doesn't have to be publicly accessible.
-## Things to consider before deploying Azure Cloud Shell in a VNET
+## Things to consider before deploying Azure Cloud Shell in a virtual network
- Starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.-- VNET isolation requires you to use [Azure Relay][01], which is a paid service. In the Cloud Shell
- scenario, one hybrid connection is used for each administrator while they're using Cloud Shell.
- The connection is automatically closed when the Cloud Shell session ends.
+- Virtual network isolation requires you to use [Azure Relay][01], which is a paid service. In the
+ Cloud Shell scenario, one hybrid connection is used for each administrator while they're using
+ Cloud Shell. The connection is automatically closed when the Cloud Shell session ends.
## Architecture

The following diagram shows the resource architecture that you must build to enable this scenario.
-![Illustration of Cloud Shell isolated VNET architecture.][03]
+![Illustration of a Cloud Shell isolated virtual network architecture.][03]
-- **Customer Client Network** - Client users can be located anywhere on the Internet to securely
+- **Customer client network**: Client users can be located anywhere on the internet to securely
access and authenticate to the Azure portal and use Cloud Shell to manage resources contained in
- the customers subscription. For stricter security, you can allow users to launch Cloud Shell only
+ the customer's subscription. For stricter security, you can allow users to open Cloud Shell only
from the virtual network contained in your subscription.-- **Microsoft Network** - Customers connect to the Azure portal on Microsoft's network to
- authenticate and launch Cloud Shell.
-- **Customer Virtual Network** - This is the network that contains the subnets to support VNET
- isolation. Resources such as virtual machines and services are directly accessible from Cloud
- Shell without the need to assign a public IP address.
-- **Azure Relay** - An [Azure Relay][01] allows two endpoints that aren't directly reachable to
+- **Microsoft network**: Customers connect to the Azure portal on Microsoft's network to
+ authenticate and open Cloud Shell.
+- **Customer virtual network**: This is the network that contains the subnets to support virtual
+ network isolation. Resources such as virtual machines and services are directly accessible from
+ Cloud Shell without the need to assign a public IP address.
+- **Azure Relay**: [Azure Relay][01] allows two endpoints that aren't directly reachable to
communicate. In this case, it's used to allow the administrator's browser to communicate with the container in the private network.-- **File share** - Cloud Shell requires a storage account that is accessible from the virtual
- network. The storage account provides the file share used by Cloud Shell users.
+- **File share**: Cloud Shell requires a storage account that's accessible from the virtual network.
+ The storage account provides the file share used by Cloud Shell users.
## Related links
cloud-shell Quickstart Deploy Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-deploy-vnet.md
Title: Deploy Azure Cloud Shell in a virtual network with quickstart templates
-# Deploy Azure Cloud Shell in a virtual network with quickstart templates
+# Deploy Cloud Shell in a virtual network by using quickstart templates
-Before you can deploy Azure Cloud Shell in a virtual network (VNet) configuration using the
-quickstart templates, there are several prerequisites to complete before running the templates.
+Before you run quickstart templates to deploy Azure Cloud Shell in a virtual network (VNet), there
+are several prerequisites to complete.
-This document guides you through the process to complete the configuration.
+This article walks you through the following steps to configure and deploy Cloud Shell in a virtual
+network:
-## Steps to deploy Azure Cloud Shell in a virtual network
-
-This article walks you through the following steps to deploy Azure Cloud Shell in a virtual network:
-
-1. Register resource providers
-1. Collect the required information
-1. Create the virtual networks using the **Azure Cloud Shell - VNet** ARM template
-1. Create the virtual network storage account using the **Azure Cloud Shell - VNet storage** ARM
- template
-1. Configure and use Azure Cloud Shell in a virtual network
+1. Register resource providers.
+1. Collect the required information.
+1. Create the virtual networks by using the **Azure Cloud Shell - VNet** Azure Resource Manager
+ template (ARM template).
+1. Create the virtual network storage account by using the **Azure Cloud Shell - VNet storage** ARM
+ template.
+1. Configure and use Cloud Shell in a virtual network.
## 1. Register resource providers
-Azure Cloud Shell needs access to certain Azure resources. That access is made available through
+Cloud Shell needs access to certain Azure resources. You make that access available through
resource providers. The following resource providers must be registered in your subscription:

- **Microsoft.CloudShell**
- **Microsoft.ContainerInstances**
- **Microsoft.Relay**
-Depending when your tenant was created, some of these providers might already be registered.
+Depending on when your tenant was created, some of these providers might already be registered.
-To see all resource providers, and the registration status for your subscription:
+To see all resource providers and the registration status for your subscription:
1. Sign in to the [Azure portal][04].
1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options.
-1. Select the subscription you want to view.
+1. Select the subscription that you want to view.
1. On the left menu, under **Settings**, select **Resource providers**.
1. In the search box, enter `cloudshell` to search for the resource provider.
-1. Select the **Microsoft.CloudShell** resource provider register from the provider list.
-1. Select **Register** to change the status from **unregistered** to **Registered**.
+1. Select the **Microsoft.CloudShell** resource provider from the provider list.
+1. Select **Register** to change the status from **unregistered** to **registered**.
1. Repeat the previous steps for the **Microsoft.ContainerInstances** and **Microsoft.Relay** resource providers.
- [![Screenshot of selecting resource providers in the Azure portal.][98]][98a]
+[![Screenshot of selecting resource providers in the Azure portal.][98]][98a]
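If you prefer the command line to the portal, the same registration can be done with the Azure CLI. This is a minimal sketch that uses the provider names listed earlier in this article:

```azurecli
# Register each resource provider that Cloud Shell needs.
az provider register --namespace Microsoft.CloudShell
az provider register --namespace Microsoft.ContainerInstances
az provider register --namespace Microsoft.Relay
```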
## 2. Collect the required information
-There are several pieces of information that you need to collect before you can deploy Azure Cloud.
-You can use the default Azure Cloud Shell instance to gather the required information and create the
-necessary resources. You should create dedicated resources for the Azure Cloud Shell VNet
-deployment. All resources must be in the same Azure region and contained in the same resource group.
+You need to collect several pieces of information before you can deploy Cloud Shell.
+
+You can use the default Cloud Shell instance to gather the required information and create the
+necessary resources. You should create dedicated resources for the Cloud Shell virtual network
+deployment. All resources must be in the same Azure region and in the same resource group.
+
+Fill in the following values:
-- **Subscription** - The name of your subscription containing the resource group used for the Azure
- Cloud Shell VNet deployment
-- **Resource Group** - The name of the resource group used for the Azure Cloud Shell VNet deployment-- **Region** - The location of the resource group-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNet-- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group-- **Azure Relay Namespace** - The name that you want to assign to the Relay resource created by the
- template
+- **Subscription**: The name of your subscription that contains the resource group for the Cloud
+ Shell virtual network deployment.
+- **Resource Group**: The name of the resource group for the Cloud Shell virtual network deployment.
+- **Region**: The location of the resource group.
+- **Virtual Network**: The name of the Cloud Shell virtual network.
+- **Azure Container Instance OID**: The ID of the Azure container instance for your resource group.
+- **Azure Relay Namespace**: The name that you want to assign to the Azure Relay resource that the
+ template creates.
### Create a resource group
-You can create the resource group using the Azure portal, Azure CLI, or Azure PowerShell. For more
-information, see the following articles:
+You can create the resource group by using the Azure portal, the Azure CLI, or Azure PowerShell. For
+more information, see the following articles:
- [Manage Azure resource groups by using the Azure portal][02] - [Manage Azure resource groups by using Azure CLI][01]
information, see the following articles:
### Create a virtual network
-You can create the virtual network using the Azure portal, Azure CLI, or Azure PowerShell. For more
-information, see the following articles:
+You can create the virtual network by using the Azure portal, the Azure CLI, or Azure PowerShell.
+For more information, see the following articles:
- [Use the Azure portal to create a virtual network][05] - [Use Azure PowerShell to create a virtual network][06] - [Use Azure CLI to create a virtual network][04] > [!NOTE]
-> When setting the Container subnet address prefix for the Cloud Shell subnet it's important to
-> consider the number of Cloud Shell sessions you need to run concurrently. If the number of Cloud
-> Shell sessions exceeds the available IP addresses in the container subnet, users of those sessions
-> can't connect to Cloud Shell. Increase the container subnet range to accommodate your specific
-> needs. For more information, see the _Change Network Settings_ section of
-> [Add, change, or delete a virtual network subnet][07]
+> When you're setting the container subnet address prefix for the Cloud Shell subnet, it's important
+> to consider the number of Cloud Shell sessions that you need to run concurrently. If the number of
+> Cloud Shell sessions exceeds the available IP addresses in the container subnet, users of those
+> sessions can't connect to Cloud Shell. Increase the container subnet range to accommodate your
+> specific needs. For more information, see the "Change subnet settings" section of
+> [Add, change, or delete a virtual network subnet][07].
-### Azure Container Instance ID
+### Get the Azure container instance ID
-The **Azure Container Instance ID** is a unique value for every tenant. You use this identifier in
-the [quickstart templates][07] to configure virtual network for Cloud Shell.
+The Azure container instance ID is a unique value for every tenant. You use this identifier in
+the [quickstart templates][07] to configure a virtual network for Cloud Shell.
-1. Sign in to the [Azure portal][09]. From the **Home** screen, select **Microsoft Entra ID**. If
- the icon isn't displayed, enter `Microsoft Entra ID` in the top search bar.
-1. In the left menu, select **Overview** and enter `azure container instance service` into the
+1. Sign in to the [Azure portal][09]. From the home page, select **Microsoft Entra ID**. If the icon
+ isn't displayed, enter `Microsoft Entra ID` in the top search bar.
+1. On the left menu, select **Overview**. Then enter `azure container instance service` in the
search bar. [![Screenshot of searching for Azure Container Instance Service.][95]][95a]
-1. In the results under **Enterprise applications**, select the **Azure Container Instance Service**.
-1. Find **ObjectID** listed as a property on the **Overview** page for **Azure Container Instance
- Service**.
-1. You use this ID in the quickstart template for virtual network.
+1. In the results, under **Enterprise applications**, select **Azure Container Instance Service**.
+1. On the **Overview** page for **Azure Container Instance Service**, find the **Object ID** value
+ that's listed as a property.
+
+ You use this ID in the quickstart template for the virtual network.
[![Screenshot of Azure Container Instance Service details.][96]][96a]
-## 3. Create the virtual network using the ARM template
+## 3. Create the virtual network by using the ARM template
Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual
-network. The template creates three subnets under the virtual network created earlier. You might
-choose to change the supplied names of the subnets or use the defaults. The virtual network, along
-with the subnets, require valid IP address assignments. You need at least one IP address for the
-Relay subnet and enough IP addresses in the container subnet to support the number of concurrent
-sessions you expect to use.
-
-The ARM template requires specific information about the resources you created earlier, along with
-naming information for new resources. This information is filled out along with the prefilled
+network. The template creates three subnets under the virtual network that you created earlier. You
+might choose to change the supplied names of the subnets or use the defaults.
+
+The virtual network, along with the subnets, requires valid IP address assignments. You need at
+least one IP address for the Relay subnet and enough IP addresses in the container subnet to support
+the number of concurrent sessions that you expect to use.
+
+The ARM template requires specific information about the resources that you created earlier, along
+with naming information for new resources. This information is filled out along with the prefilled
information in the form.
-Information needed for the template:
+Information that you need for the template includes:
-- **Subscription** - The name of your subscription containing the resource group for Azure Cloud
- Shell VNet
-- **Resource Group** - The resource group name of either an existing or newly created resource group-- **Region** - Location of the resource group-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell virtual network-- **Network Security Group** - The name that you want to assign to the Network Security Group
- created by the template
-- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group
+- **Subscription**: The name of your subscription that contains the resource group for the Cloud
+ Shell virtual network.
+- **Resource Group**: The name of an existing or newly created resource group.
+- **Region**: The location of the resource group.
+- **Virtual Network**: The name of the Cloud Shell virtual network.
+- **Network Security Group**: The name that you want to assign to the network security group (NSG)
+ that the template creates.
+- **Azure Container Instance OID**: The ID of the Azure container instance for your resource group.
Fill out the form with the following information: | Project details | Value | | | -- |
-| Subscription | Defaults to the current subscription context.<br>For this example, we're using `Contoso (carolb)` |
-| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. |
+| **Subscription** | Defaults to the current subscription context.<br>The example in this article uses `Contoso (carolb)`. |
+| **Resource group** | Enter the name of the resource group from the prerequisite information.<br>The example in this article uses `rg-cloudshell-eastus`. |
| Instance details | Value | | - | - |
-| Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
-| Existing VNET Name | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `vnet-cloudshell-eastus`. |
-| Relay Namespace Name | Create a name that you want to assign to the Relay resource created by the template.<br>For this example, we're using `arn-cloudshell-eastus`. |
-| Nsg Name | Enter the name of the Network Security Group (NSG). The deployment creates this NSG and assigns an access rule to it. |
-| Azure Container Instance OID | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. |
-| Container Subnet Name | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. |
-| Container Subnet Address Prefix | For this example, we use `10.1.0.0/16`, which provides 65,543 IP addresses for Cloud Shell instances. |
-| Relay Subnet Name | Defaults to `relaysubnet`. Enter the name of the subnet containing your relay. |
-| Relay Subnet Address Prefix | For this example, we use `10.0.2.0/24`. |
-| Storage Subnet Name | Defaults to `storagesubnet`. Enter the name of the subnet containing your storage. |
-| Storage Subnet Address Prefix | For this example, we use `10.0.3.0/24`. |
-| Private Endpoint Name | Defaults to `cloudshellRelayEndpoint`. Enter the name of the subnet containing your container. |
-| Tag Name | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
-| Location | Defaults to `[resourceGroup().location]`. Leave unchanged. |
-
-Once the form is complete, select **Review + Create** and deploy the network ARM template to your
+| **Region** | Prefilled with your default region.<br>The example in this article uses `East US`. |
+| **Existing VNET Name** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `vnet-cloudshell-eastus`. |
+| **Relay Namespace Name** | Create a name that you want to assign to the Relay resource that the template creates.<br>The example in this article uses `arn-cloudshell-eastus`. |
+| **Nsg Name** | Enter the name of the NSG. The deployment creates this NSG and assigns an access rule to it. |
+| **Azure Container Instance OID** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. |
+| **Container Subnet Name** | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. |
+| **Container Subnet Address Prefix** | The example in this article uses `10.1.0.0/16`, which provides 65,543 IP addresses for Cloud Shell instances. |
+| **Relay Subnet Name** | Defaults to `relaysubnet`. Enter the name of the subnet that contains your relay. |
+| **Relay Subnet Address Prefix** | The example in this article uses `10.0.2.0/24`. |
+| **Storage Subnet Name** | Defaults to `storagesubnet`. Enter the name of the subnet that contains your storage. |
+| **Storage Subnet Address Prefix** | The example in this article uses `10.0.3.0/24`. |
+| **Private Endpoint Name** | Defaults to `cloudshellRelayEndpoint`. Enter the name of the subnet that contains your container. |
+| **Tag Name** | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
+| **Location** | Defaults to `[resourceGroup().location]`. Leave unchanged. |
+
+After the form is complete, select **Review + Create** and deploy the network ARM template to your
subscription.
-## 4. Create the virtual network storage using the ARM template
+## 4. Create the virtual network storage by using the ARM template
Use the [Azure Cloud Shell - VNet storage][09] template to create Cloud Shell resources in a virtual network. The template creates the storage account and assigns it to the private virtual network.
-The ARM template requires specific information about the resources you created earlier, along
+The ARM template requires specific information about the resources that you created earlier, along
with naming information for new resources.
-Information needed for the template:
+Information that you need for the template includes:
-- **Subscription** - The name of the subscription containing the resource group for Azure Cloud
+- **Subscription**: The name of the subscription that contains the resource group for the Cloud
Shell virtual network.-- **Resource Group** - The resource group name of either an existing or newly created resource group-- **Region** - Location of the resource group-- **Existing virtual network name** - The name of the virtual network created earlier-- **Existing Storage Subnet Name** - The name of the storage subnet created with the Network
- quickstart template
-- **Existing Container Subnet Name** - The name of the container subnet created with the Network
- quickstart template
+- **Resource Group**: The name of an existing or newly created resource group.
+- **Region**: The location of the resource group.
+- **Existing virtual network name**: The name of the virtual network that you created earlier.
+- **Existing Storage Subnet Name**: The name of the storage subnet that you created by using the
+ network quickstart template.
+- **Existing Container Subnet Name**: The name of the container subnet that you created by using the
+ network quickstart template.
Fill out the form with the following information: | Project details | Value | | | -- |
-| Subscription | Defaults to the current subscription context.<br>For this example, we're using `Contoso (carolb)` |
-| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. |
+| **Subscription** | Defaults to the current subscription context.<br>The example in this article uses `Contoso (carolb)`. |
+| **Resource group** | Enter the name of the resource group from the prerequisite information.<br>The example in this article uses `rg-cloudshell-eastus`. |
| Instance details | Value | | | |
-| Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
-| Existing VNET Name | For this example, we're using `vnet-cloudshell-eastus`. |
-| Existing Storage Subnet Name | Fill in the name of the resource created by the network template. |
-| Existing Container Subnet Name | Fill in the name of the resource created by the network template. |
-| Storage Account Name | Create a name for the new storage account.<br>For this example, we're using `myvnetstorage1138`. |
-| File Share Name | Defaults to `acsshare`. Enter the name of the file share want to create. |
-| Resource Tags | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
-| Location | Defaults to `[resourceGroup().location]`. Leave unchanged. |
-
-Once the form is complete, select **Review + Create** and deploy the network ARM template to your
+| **Region** | Prefilled with your default region.<br>The example in this article uses `East US`. |
+| **Existing VNET Name** | The example in this article uses `vnet-cloudshell-eastus`. |
+| **Existing Storage Subnet Name** | Fill in the name of the resource that the network template creates. |
+| **Existing Container Subnet Name** | Fill in the name of the resource that the network template creates. |
+| **Storage Account Name** | Create a name for the new storage account.<br>The example in this article uses `myvnetstorage1138`. |
+| **File Share Name** | Defaults to `acsshare`. Enter the name of the file share that you want to create. |
+| **Resource Tags** | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
+| **Location** | Defaults to `[resourceGroup().location]`. Leave unchanged. |
+
+After the form is complete, select **Review + Create** and deploy the network ARM template to your
subscription.
-## 5. Configuring Cloud Shell to use a virtual network
+## 5. Configure Cloud Shell to use a virtual network
-After you have deployed your private Cloud Shell instance, each Cloud Shell user must change their
+After you deploy your private Cloud Shell instance, each Cloud Shell user must change their
configuration to use the new private instance.
-If you have used the default Cloud Shell before deploying the private instance, you must reset your
-user settings.
+If you used the default Cloud Shell instance before you deployed the private instance, you must
+reset your user settings:
-1. Open Cloud Shell
+1. Open Cloud Shell.
1. Select **Cloud Shell settings** from the menu bar (gear icon).
-1. Select **Reset user settings** then select **Reset**
+1. Select **Reset user settings**, and then select **Reset**.
Resetting the user settings triggers the first-time user experience the next time you start Cloud Shell.
-[![Screenshot of Cloud Shell storage dialog box.][97]][97a]
+[![Screenshot of the Cloud Shell storage dialog.][97]][97a]
-1. Choose your preferred shell experience (Bash or PowerShell)
-1. Select **Show advanced settings**
+1. Choose your preferred shell experience (Bash or PowerShell).
+1. Select **Show advanced settings**.
1. Select the **Show VNET isolation settings** checkbox.
-1. Choose the **Subscription** containing your private Cloud Shell instance.
-1. Choose the **Region** containing your private Cloud Shell instance.
-1. Select the **Resource group** name containing your private Cloud Shell instance. If you have
- selected the correct resource group, the **Virtual network**, **Network profile**, and **Relay
- namespace** should be automatically populated with the correct values.
-1. Enter the name for the **File share** you created with the storage template.
+1. Choose the subscription that contains your private Cloud Shell instance.
+1. Choose the region that contains your private Cloud Shell instance.
+1. For **Resource group**, select the resource group that contains your private Cloud Shell
+ instance.
+
+ If you select the correct resource group, **Virtual network**, **Network profile**, and **Relay
+ namespace** are automatically populated with the correct values.
+1. For **File share**, enter the name of the file share that you created by using the storage
+ template.
1. Select **Create storage**. ## Next steps
-You must complete the Cloud Shell configuration steps for each user that needs to use the new
-private Cloud Shell instance.
+You must complete the Cloud Shell configuration steps for each user who needs to use the new private
+Cloud Shell instance.
<!-- link references --> [01]: /azure/azure-resource-manager/management/manage-resource-groups-cli
cloud-shell Vnet Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet-troubleshooting.md
+
+description: >
+ This article provides instructions for troubleshooting a private virtual network deployment of
+ Azure Cloud Shell.
+ms.contributor: jahelmic
Last updated : 10/26/2023+
+ Title: Troubleshoot Azure Cloud Shell in a private virtual network
+
+# Troubleshoot Azure Cloud Shell in a private virtual network
+
+This article provides instructions for troubleshooting a private virtual network deployment of Azure
+Cloud Shell. For best results, and to be supportable, follow the deployment instructions in the
+[Deploy Azure Cloud Shell in a virtual network using quickstart templates][03] article.
+
+## Verify you have set the correct permissions
+
+To configure Azure Cloud Shell in a virtual network, you must have the **Owner** role assignment on
+the subscription. To view and assign roles, see [List Owners of a Subscription][01].
+
+Unless otherwise noted, all the troubleshooting steps start in the **Subscriptions** section of the
+Azure portal.
+
+1. Sign in to the [Azure portal][02].
+1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options.
+1. Select the subscription you want to view.
+
+## Verify resource provider registrations
+
+Azure Cloud Shell needs access to certain Azure resources. That access is made available through
+resource providers. The following resource providers must be registered in your subscription:
+
+- **Microsoft.CloudShell**
+- **Microsoft.ContainerInstances**
+- **Microsoft.Relay**
+
+To see all resource providers, and the registration status for your subscription:
+
+1. Go to the **Settings** section of the left menu of your subscription page.
+1. Select **Resource providers**.
+1. In the search box, enter `cloudshell` to search for the resource provider.
+1. Select the **Microsoft.CloudShell** resource provider from the provider list.
+1. Select **Register** to change the status from **unregistered** to **registered**.
+1. Repeat the previous steps for the **Microsoft.ContainerInstances** and **Microsoft.Relay**
+ resource providers.
+
+ [![Screenshot of selecting resource providers in the Azure portal.][ss01]][ss01x]
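You can also spot-check the registration state from the Azure CLI instead of the portal. A minimal sketch, using the provider names listed above:

```azurecli
# Print the registration state of each required resource provider.
for ns in Microsoft.CloudShell Microsoft.ContainerInstances Microsoft.Relay; do
  az provider show --namespace "$ns" --query registrationState --output tsv
done
```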
+
+## Verify Azure Container Instance Service role assignments
+
+The **Azure Container Instance Service** application needs specific permissions for the **Relay**
+and **Network Profile** resources. Use the following steps to see the resources and the role
+permissions for your subscription:
+
+1. Go to the **Settings** section of the left menu of your subscription page.
+1. Select **Resource groups**.
+1. Select the resource group you provided in the prerequisites for the deployment.
+1. In the **Essentials** section of the **Overview**, select the **Show hidden types** checkbox.
+ This checkbox allows you to see all the resources created by the deployment.
+
+ [![Screenshot showing all the resources in your resource group.][ss02]][ss02x]
+
+1. Select the network profile resource with the type of `microsoft.network/networkprofile`. The name
+ should be `aci-networkProfile-<location>` where `<location>` is the location of the resource
+ group.
+1. On the network profile page, select **Access control (IAM)** in the left menu.
+1. Then select **Role assignments** from the top menu bar.
+1. In the search box, enter `container`.
+1. Verify that **Azure Container Instance Service** has the `Network Contributor` role.
+
+ [![Screenshot showing the network profiles role assignments.][ss03]][ss03x]
+
+1. From the Resources page, select the relay namespace resource with the type of `Relay`. The name
+ should be the name of the relay namespace you provided in the deployment template.
+1. On the relay page, select **Access control (IAM)**, then select **Role assignments** from the top
+ menu bar.
+1. In the search box, enter `container`.
+1. Verify that **Azure Container Instance Service** has the `Contributor` role.
+
+ [![Screenshot showing the network relay role assignments.][ss04]][ss04x]
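If you'd rather verify these role assignments from the command line, the following is a minimal sketch. The resource group and relay namespace names are placeholders for the values you provided in the deployment template:

```azurecli
# List role assignments on the Relay namespace and look for Azure Container Instance Service.
relayId=$(az resource show --resource-group <resource-group> --name <relay-namespace> \
    --resource-type "Microsoft.Relay/namespaces" --query id --output tsv)
az role assignment list --scope "$relayId" --output table
```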
+
+## Redeploy Cloud Shell for a private virtual network
+
+Verify the configurations described in this article. If you continue to receive an error message when
+you try to use your deployment of Cloud Shell, you have two options:
+
+1. Open a support ticket
+1. Redeploy Cloud Shell for a private virtual network
+
+### Open a support ticket
+
+If you want to open a support ticket, you can do so from the Azure portal. Be sure to capture any
+error messages, including the **Correlation Id** and **Activity Id** values. Don't change any
+settings or delete any resources until instructed to by a support technician.
+
+Follow these steps to open a support ticket:
+
+1. Select the **Support & Troubleshooting** icon on the top navigation bar in the Azure portal.
+1. From the **Support & Troubleshooting** pane, select **Help + support**.
+1. Select **Create a support request** at the top of the center pane.
+1. Follow the instructions to create a support ticket.
+
+ [![Screenshot of creating a support ticket in the Azure portal.][ss05]][ss05x]
+
+### Redeploy Cloud Shell for a private virtual network
+
+Before you redeploy Cloud Shell, you must delete the existing deployment. In the prerequisites for
+the deployment, you provided a resource group and a virtual network. If you created these resources
+specifically for this deployment, then it should be safe to delete them. If you used existing
+resources, then you shouldn't delete them.
+
+The following list provides a description of the resources created by the deployment:
+
+- A **microsoft.network/networkprofiles** resource named `aci-networkProfile-<location>` where
+ `<location>` is the location of the resource group.
+- A **Private endpoint** resource named `cloudshellRelayEndpoint`.
+- A **Network Interface** resource named `cloudshellRelayEndpoint.nic.<UUID>` where `<UUID>` is a
+ unique identifier added to the name.
+- A **Virtual Network** resource that you provided from the prerequisites.
+- A **Private DNS zone** named `privatelink.servicebus.windows.net`.
+- A **Network security group** resource with the name you provided in the deployment template.
+- A **microsoft.network/privatednszones/virtualnetworklinks** resource with a name that starts with the name
+ of the relay namespace you provided in the deployment template.
+- A **Relay** resource with the name of the relay namespace you provided in the deployment template.
+- A **Storage account** resource with the name you provided in the deployment template.
+
+Once you have removed the resources, you can redeploy Cloud Shell by following the steps in the
+[Deploy Azure Cloud Shell in a virtual network using quickstart templates][03] article.
+
+You can find these resources by viewing the resource group in the Azure portal.
+
+[![Screenshot of resources created by the deployment.][ss02]][ss02x]
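The same resources can also be listed from the Azure CLI, which can be handy when you're deciding what to delete. A minimal sketch, where the resource group name is the one you provided in the prerequisites:

```azurecli
# List everything in the resource group, including the resources created by the deployment.
az resource list --resource-group <resource-group> --output table
```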
+
+<!-- link references -->
+[01]: /azure/role-based-access-control/role-assignments-list-portal#list-owners-of-a-subscription
+[02]: https://portal.azure.com/
+[03]: quickstart-deploy-vnet.md
+
+[ss01]: ./media/quickstart-deploy-vnet/resource-provider.png
+[ss01x]: ./media/quickstart-deploy-vnet/resource-provider.png#lightbox
+[ss02]: ./media/vnet-troubleshooting/show-resource-group.png
+[ss02x]: ./media/vnet-troubleshooting/show-resource-group.png#lightbox
+[ss03]: ./media/vnet-troubleshooting/network-profile-role.png
+[ss03x]: ./media/vnet-troubleshooting/network-profile-role.png#lightbox
+[ss04]: ./media/vnet-troubleshooting/relay-namespace-role.png
+[ss04x]: ./media/vnet-troubleshooting/relay-namespace-role.png#lightbox
+[ss05]: ./media/vnet-troubleshooting/create-support-ticket.png
+[ss05x]: ./media/vnet-troubleshooting/create-support-ticket.png#lightbox
communication-services End Of Call Survey Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/end-of-call-survey-logs.md
The following are instructions for configuring your Azure Monitor resource to st
### Overview
-The implementation of end-of-call survey logs represents an augmented functionality within ACS (Azure Communication Services), enabling Contoso to submit surveys to gather customers' subjective feedback on their calling experience. This approach aims to supplement the assessment of call quality beyond objective metrics such as audio and video bitrate, jitter, and latency, which may not fully capture whether a customer had a satisfactory or unsatisfactory experience. By leveraging Azure logs to publish and examine survey data, Contoso gains insights for analysis and identification of areas that require improvement. These survey results serve as a valuable resource for Azure Communication Services to continuously monitor and enhance quality and reliability. For more details about [End of call survey](../../../concepts/voice-video-calling/end-of-call-survey-concept.md)
+The implementation of end-of-call survey logs represents an augmented functionality within Azure Communication Services, enabling Contoso to submit surveys to gather customers' subjective feedback on their calling experience. This approach aims to supplement the assessment of call quality beyond objective metrics such as audio and video bitrate, jitter, and latency, which may not fully capture whether a customer had a satisfactory or unsatisfactory experience. By leveraging Azure logs to publish and examine survey data, Contoso gains insights for analysis and identification of areas that require improvement. These survey results serve as a valuable resource for Azure Communication Services to continuously monitor and enhance quality and reliability. For more details, see [End of call survey](../../../concepts/voice-video-calling/end-of-call-survey-concept.md).
The End of Call Survey is a valuable tool that allows you to gather insights into how end-users perceive the quality and reliability of your JavaScript/Web SDK calling solution. The accompanying logs contain crucial data that helps assess end-users' experience, including:
communication-services Rooms Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/rooms-logs.md
Communication Services offers the following types of logs that you can enable:
| `UpsertedRoomParticipantsCount` | The count of participants upserted in a Room. | | `RemovedRoomParticipantsCount` | The count of participants removed from a Room. | | `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
+| `PstnDialOutEnabled` | Indicates whether a room has the ability to make PSTN calls to invite people to a meeting. |
#### Example CreateRoom log
Communication Services offers the following types of logs that you can enable:
"CorrelationId": "Y4x6ZabFE0+E8ERwMpd68w", "Level": "Informational", "OperationName": "CreateRoom",
- "OperationVersion": "2022-03-31-preview",
+ "OperationVersion": "2023-10-30-preview",
"ResultType": "Succeeded", "ResultSignature": 201, "RoomId": "99466898241024408", "RoomLifespan": 61, "AddedRoomParticipantsCount": 4, "TimeGenerated": "5/25/2023, 4:32:49.469 AM",
+ "PstnDialOutEnabled": false,
} ] ```
Communication Services offers the following types of logs that you can enable:
"CorrelationId": "CNiZIX7fvkumtBSpFq7fxg", "Level": "Informational", "OperationName": "GetRoom",
- "OperationVersion": "2022-03-31-preview",
+ "OperationVersion": "2023-10-30-preview",
"ResultType": "Succeeded", "ResultSignature": "200", "RoomId": "99466387192310000",
Communication Services offers the following types of logs that you can enable:
"CorrelationId": "Bwqzh0pdnkGPDwNcMnBkng", "Level": "Informational", "OperationName": "UpdateRoom",
- "OperationVersion": "2022-03-31-preview",
+ "OperationVersion": "2023-10-30-preview",
"ResultType": "Succeeded", "ResultSignature": "200", "RoomId": "99466387192310000", "RoomLifespan": 121, "TimeGenerated": "2022-08-19T17:07:30.3543160Z",
+ "PstnDialOutEnabled": false,
}, ] ```
Communication Services offers the following types of logs that you can enable:
"CorrelationId": "x7rMXmihYEe3GFho9T/H2w", "Level": "Informational", "OperationName": "DeleteRoom",
- "OperationVersion": "2022-02-01",
+ "OperationVersion": "2023-10-30-preview",
"ResultType": "Succeeded", "ResultSignature": "204", "RoomId": "99466387192310000",
Communication Services offers the following types of logs that you can enable:
"CorrelationId": "KibM39CaXkK+HTInfsiY2w", "Level": "Informational", "OperationName": "ListRooms",
- "OperationVersion": "2022-03-31-preview",
+ "OperationVersion": "2023-10-30-preview",
"ResultType": "Succeeded", "ResultSignature": "200", "TimeGenerated": "2022-08-19T17:07:30.5393800Z",
Communication Services offers the following types of logs that you can enable:
"CorrelationId": "zHT8snnUMkaXCRDFfjQDJw", "Level": "Informational", "OperationName": "UpdateParticipants",
- "OperationVersion": "2022-03-31-preview",
+ "OperationVersion": "2023-10-30-preview",
"ResultType": "Succeeded", "ResultSignature": "200", "RoomId": "99466387192310000",
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
The call summary log contains data to help you identify key properties of all ca
| `endpointType` | This value describes the properties of each endpoint that's connected to the call. It can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. | | `sdkVersion` | The version string for the Communication Services Calling SDK version that each relevant endpoint uses (for example, `"1.1.00.20212500"`). | | `osVersion` | A string that represents the operating system and version of each endpoint device. |
-| `participantTenantId` | The ID of the Microsoft tenant associated with the identity of the participant. The tenant can either be the Azure tenant that owns the ACS resource or the Microsoft tenant of an M365 identity. This field is used to guide cross-tenant redaction.
-|`participantType` | Description of the participant as a combination of its client (Azure Communication Services (ACS) or Teams), and its identity, (ACS or Microsoft 365). Possible values include: ACS (ACS identity and ACS SDK), Teams (Teams identity and Teams client), ACS as Teams external user (ACS identity and ACS SDK in Teams call or meeting), and ACS as Microsoft 365 user (M365 identity and ACS client).
+| `participantTenantId` | The ID of the Microsoft tenant associated with the identity of the participant. The tenant can either be the Azure tenant that owns the Azure Communication Services resource or the Microsoft tenant of an M365 identity. This field is used to guide cross-tenant redaction.
+|`participantType` | Description of the participant as a combination of its client (Azure Communication Services (ACS) or Teams), and its identity, (ACS or Microsoft 365). Possible values include: Azure Communication Services (ACS identity and Azure Communication Services SDK), Teams (Teams identity and Teams client), Azure Communication Services as Teams external user (ACS identity and Azure Communication Services SDK in Teams call or meeting), and Azure Communication Services as Microsoft 365 user (M365 identity and Azure Communication Services client).
| `pstnPartcipantCallType `|It represents the type and direction of PSTN participants including Emergency calling, direct routing, transfer, forwarding, etc.| ### Call diagnostic log schema
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
Title: Connect Azure Communication Services to Azure AI services
-description: Provides a how-to guide for connecting ACS to Azure AI services.
+description: Provides a how-to guide for connecting Azure Communication Services to Azure AI services.
communication-services Call Automation Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md
This interoperability with Microsoft Teams over VoIP makes it easy for developer
## Scenario Showcase ΓÇô Expert Consultation A customer service agent, who is using a Contact Center Agent experience, wants to now add a subject matter expert, who is knowledge worker (regular employee) at Contoso and uses Microsoft Teams, into a support call with a customer to provide some expert advice to resolve a customer issue.
-The dataflow diagram depicts a canonical scenario where a Teams user is added to an ongoing ACS call for expert consultation.
+The dataflow diagram depicts a canonical scenario where a Teams user is added to an ongoing Azure Communication Services call for expert consultation.
[ ![Diagram of calling flow for a customer service with Microsoft Teams and Call Automation.](./media/call-automation-teams-interop.png)](./media/call-automation-teams-interop.png#lightbox) 1. Customer is on an ongoing call with a Contact Center customer service agent. 1. During the call, the customer service agent needs expert help from one of the domain experts part of an engineering team. The agent is able to identify a knowledge worker who is available on Teams (presence via Graph APIs) and tries to add them to the call.
-1. Contoso Contact CenterΓÇÖs SBC is already configured with ACS Direct Routing where this add participant request is processed.
-1. Contoso Contact Center provider has implemented a web service, using ACS Call Automation that receives the ΓÇ£add ParticipantΓÇ¥ request.
-1. With Teams interop built into ACS Call Automation, ACS then uses the Teams userΓÇÖs ObjectId to add them to the call. The Teams user receives the incoming call notification. They accept and join the call.
+1. Contoso Contact CenterΓÇÖs SBC is already configured with Azure Communication Services Direct Routing where this add participant request is processed.
+1. Contoso Contact Center provider has implemented a web service, using Azure Communication Services Call Automation that receives the ΓÇ£add ParticipantΓÇ¥ request.
+1. With Teams interop built into Azure Communication Services Call Automation, Azure Communication Services then uses the Teams userΓÇÖs ObjectId to add them to the call. The Teams user receives the incoming call notification. They accept and join the call.
1. Once the Teams user has provided their expertise, they leave the call. The customer service agent and customer continue wrap up their conversation. ## Capabilities
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
Some of the common use cases that can be built using Call Automation include:
- Increase engagement by building automated customer outreach programs for marketing and customer service. - Analyze in a post-call process your unmixed audio recordings for quality assurance purposes.
-Azure Communication Services Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message, connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
+Azure Communication Services Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message, connecting the customer to a live agent on an Azure Communication Services Calling SDK client app to answer the incoming call request. With support for Azure Communication Services PSTN or Direct Routing, you can then connect this workflow back to your contact center.
![Diagram of calling flow for a customer service scenario.](./media/call-automation-architecture.png)
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
# Playing audio in call The play action provided through the Azure Communication Services Call Automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. You can play audio to call participants through one of two methods;-- Providing Azure Communication Services access to prerecorded audio files of WAV format, that ACS can access with support for authentication
+- Providing Azure Communication Services access to prerecorded audio files in WAV format that Azure Communication Services can access, with support for authentication.
- Regular text that can be converted into speech output through the integration with Azure AI services. You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages and locales see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). (Supported in public preview)
communication-services Play Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-ai-action.md
# Playing audio in calls The play action provided through the Azure Communication Services Call Automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. You can play audio to call participants through one of two methods;-- Providing Azure Communication Services access to pre-recorded audio files of WAV format, that ACS can access with support for authentication
+- Providing Azure Communication Services access to pre-recorded audio files in WAV format that Azure Communication Services can access, with support for authentication.
- Regular text that can be converted into speech output through the integration with Azure AI services. You can leverage the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages and locales please see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
# Gathering user input
-With the release of ACS Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common scenarios of recognition is playing a message for the user, which prompts them to provide a response that then gets recognized by the application, once recognized the application then carries out a corresponding action. Input from callers can be received in several ways, which include DTMF (user input via the digits on their calling device), speech or a combination of both DTMF and speech.
+With the release of the Azure Communication Services Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is playing a message for the user that prompts them to provide a response; once the response is recognized, the application carries out a corresponding action. Input from callers can be received in several ways, including DTMF (user input via the digits on their calling device), speech, or a combination of both DTMF and speech.
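For illustration, here's a minimal sketch of the DTMF flavor of this action with the .NET Call Automation SDK; the phone number, prompt URL, and timeout values are placeholders, property names can vary slightly by SDK version, and the collected tones come back later as a `RecognizeCompleted` event on your callback endpoint.

```csharp
using System;
using Azure.Communication;
using Azure.Communication.CallAutomation;

// Illustrative sketch: prompt the caller, then collect up to four DTMF digits.
// callMedia is the CallMedia object of an established call.
var recognizeOptions = new CallMediaRecognizeDtmfOptions(
    new PhoneNumberIdentifier("+16041234567"), // participant whose input is recognized
    4)                                         // maximum number of tones to collect
{
    Prompt = new FileSource(new Uri("https://<mystorage>/enter-account-number.wav")),
    InitialSilenceTimeout = TimeSpan.FromSeconds(10),
    InterToneTimeout = TimeSpan.FromSeconds(5)
};

await callMedia.StartRecognizingAsync(recognizeOptions);
```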
**Voice recognition with speech-to-text (Public Preview)**
communication-services Recognize Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-ai-action.md
-With the release of ACS Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common scenarios of recognition is playing a message for the user which prompts them to provide a response that then gets recognized by the application, once recognized the application then carries out a corresponding action. Input from callers can be received in several ways which include DTMF (user input via the digits on their calling device), speech or a combination of both DTMF and speech.
+With the release of the Azure Communication Services Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is playing a message for the user that prompts them to provide a response; once the response is recognized, the application carries out a corresponding action. Input from callers can be received in several ways, including DTMF (user input via the digits on their calling device), speech, or a combination of both DTMF and speech.
**Voice recognition with speech-to-text**
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
For customers that use Virtual appointments, refer to our Teams Interoperability
- The maximum number of participants allowed in a chat thread is 250. - The maximum message size allowed is approximately 28 KB. - For chat threads with more than 20 participants, read receipts and typing indicator features are not supported.-- For Teams Interop scenarios, it is the number of ACS users, not Teams users that must be below 20 for read receipts and typing indicator features to be supported.
+- For Teams Interop scenarios, it is the number of Azure Communication Services users, not Teams users, that must be below 20 for read receipts and typing indicator features to be supported.
## Chat architecture
communication-services Detailed Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/detailed-call-flows.md
Communication Services is built primarily on two types of traffic: **real-time m
Users of your Communication Services solution will be connecting to your services from their client devices. Communication between these devices and your servers is handled with **signaling**. For example: call initiation and real-time chat are supported by signaling between devices and your service. Most signaling traffic uses HTTPS REST, though in some scenarios, SIP can be used as a signaling traffic protocol. While this type of traffic is less sensitive to latency, low-latency signaling will give the users of your solution a pleasant end-user experience.
-Call flows in ACS are based on the Session Description Protocol (SDP) RFC 4566 offer and answer model over HTTPS. Once the callee accepts an incoming call, the caller and callee agree on the session parameters.
+Call flows in Azure Communication Services are based on the Session Description Protocol (SDP) RFC 4566 offer and answer model over HTTPS. Once the callee accepts an incoming call, the caller and callee agree on the session parameters.
Media traffic is encrypted by, and flows between, the caller and callee using Secure RTP (SRTP), a profile of Real-time Transport Protocol (RTP) that provides confidentiality, authentication, and replay attack protection to RTP traffic. SRTP uses a session key generated by a secure random number generator and exchanged using the signaling TLS channel.
-ACS media traffic between two endpoints participating in ACS audio, video, and application sharing, utilizes SRTP to encrypt the media stream. Cryptographic keys are negotiated between the two endpoints over a signaling protocol which uses TLS 1.2 and AES-256 (in GCM mode) encrypted UDP/TCP channel.
+Azure Communication Services media traffic between two endpoints participating in Azure Communication Services audio, video, and application sharing utilizes SRTP to encrypt the media stream. Cryptographic keys are negotiated between the two endpoints over a signaling protocol that uses a TLS 1.2 and AES-256 (in GCM mode) encrypted UDP/TCP channel.
communication-services Email Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email-metrics.md
Title: Email metric definitions for Azure Communication Services
-description: This document covers definitions of acs email metrics available in the Azure portal.
+description: This document covers definitions of Azure Communication Services email metrics available in the Azure portal.
communication-services Enable Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/enable-closed-captions.md
In this document, we're going to be looking at specifically Teams interoperabili
*Usage of translations through Teams generated captions requires the organizer to have assigned a Teams Premium license, or in the case of Microsoft 365 users they must have a Teams premium license. More information about Teams Premium can be found [here](https://www.microsoft.com/microsoft-teams/premium#tabx93f55452286a4264a2778ef8902fb81a).*
-In scenarios where there's a Teams user on a Teams client or a Microsoft 365 user with Azure Communication Services SDKs in the call, the developer can use Teams caption. This allows developers to work with the Teams captioning technology that may already be familiar with today. With Teams captions developers are limited to what their Teams license allows. Basic captions allow only one spoken and one caption language for the call. With Teams premium license developers can use the translation functionality offered by Teams to provide one spoken language for the call and translated caption languages on a per user basis. In a Teams interop scenario, captions enabled through ACS follows the same policies that are defined in Teams for [meetings](/powershell/module/skype/set-csteamsmeetingpolicy) and [calls](/powershell/module/skype/set-csteamscallingpolicy).
+In scenarios where there's a Teams user on a Teams client or a Microsoft 365 user with Azure Communication Services SDKs in the call, the developer can use Teams captions. This allows developers to work with the Teams captioning technology that they may already be familiar with. With Teams captions, developers are limited to what their Teams license allows. Basic captions allow only one spoken and one caption language for the call. With a Teams Premium license, developers can use the translation functionality offered by Teams to provide one spoken language for the call and translated caption languages on a per-user basis. In a Teams interop scenario, captions enabled through Azure Communication Services follow the same policies that are defined in Teams for [meetings](/powershell/module/skype/set-csteamsmeetingpolicy) and [calls](/powershell/module/skype/set-csteamscallingpolicy).
## Common use cases
In scenarios where there's a Teams user on a Teams client or a Microsoft 365 use
Accessibility – Enable people with hearing impairments, or who are new to the language, to participate in calls and meetings. A key feature requirement in the telemedical industry is to help patients communicate effectively with their health care providers. ### Teams interoperability
-Use Teams ΓÇô Organizations using ACS and Teams can use Teams closed captions to improve their applications by providing closed captions capabilities to users. Those organizations can keep using Microsoft Teams for all calls and meetings without third party applications providing this capability.
+Use Teams – Organizations using Azure Communication Services and Teams can use Teams closed captions to improve their applications by providing closed captions capabilities to users. Those organizations can keep using Microsoft Teams for all calls and meetings without third party applications providing this capability.
### Global inclusivity Provide translation – Use the translation functions to provide translated captions for users who may be new to the language, or for companies that operate at a global scale and have offices around the world, so their teams can have conversations even if some people might not be familiar with the spoken language.
-## Sample architecture of ACS user using captions in a Teams meeting
+## Sample architecture of Azure Communication Services user using captions in a Teams meeting
![Diagram of Teams meeting interop](./media/acs-teams-interop-captions.png)
-## Sample architecture of an ACS user using captions in a meeting with a Microsoft 365 user on ACS SDK
+## Sample architecture of an Azure Communication Services user using captions in a meeting with a Microsoft 365 user on Azure Communication Services SDK
![Diagram of CTE user](./media/m365-captions-interop.png)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
The following table shows supported server-side capabilities available in Azure
|Capability | Supported | | | |
-| [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ |
+| [Manage Azure Communication Services call recording](../../voice-video-calling/call-recording.md) | ❌ |
| [Azure Metrics](../../metrics.md) | ✔️ | | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ | | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ |
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/virtual-visits/overview.md
These three **implementation options** are columns in the table below, while eac
|--||--||| | *Manager* | Configure Business Availability | Bookings | Bookings | Custom | | *Provider* | Managing upcoming appointments | Outlook & Teams | Outlook & Teams | Custom |
-| *Provider* | Join the appointment | Teams | Teams | ACS Calling & Chat |
-| *Consumer* | Schedule an appointment | Bookings | Bookings | ACS Rooms |
-| *Consumer*| Be reminded of an appointment | Bookings | Bookings | ACS SMS |
-| *Consumer*| Join the appointment | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat |
+| *Provider* | Join the appointment | Teams | Teams | Azure Communication Services Calling & Chat |
+| *Consumer* | Schedule an appointment | Bookings | Bookings | Azure Communication Services Rooms |
+| *Consumer*| Be reminded of an appointment | Bookings | Bookings | Azure Communication Services SMS |
+| *Consumer*| Join the appointment | Teams or virtual appointments | Azure Communication Services Calling & Chat | Azure Communication Services Calling & Chat |
There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience: - **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs.
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
The following sections provide information about known issues associated with th
### Chrome M115 - regression
-Chrome version 115 for Android introduced a regression when making video calls - the result of this bug is a user making a call on ACS with this version of Chrome will have no outgoing video in Group and ACS-MS Teams calls.
+Chrome version 115 for Android introduced a regression when making video calls - the result of this bug is that a user making a call on Azure Communication Services with this version of Chrome will have no outgoing video in group calls and in calls between Azure Communication Services and Microsoft Teams.
- This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1469318) - As a short-term mitigation, please instruct users to use Microsoft Edge or Firefox on Android, or avoid using Google Chrome 115/116 on Android
Firefox desktop browser support is now available in public preview. Known issues
### iOS Chrome Known Issues iOS Chrome browser support is now available in public preview. Known issues are: - No outgoing and incoming audio when switching browser to background or locking the device-- No incoming/outgoing audio coming from bluetooth headset. When a user connects bluetooth headset in the middle of ACS call, the audio still comes out from the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it is not reproducible on iOS 16.
+- No incoming/outgoing audio coming from a Bluetooth headset. When a user connects a Bluetooth headset in the middle of an Azure Communication Services call, the audio still comes out of the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it is not reproducible on iOS 16.
### iOS 16 introduced bugs when putting browser in the background during a call
-The iOS 16 release has introduced a bug that can stop the ACS audio\video call when using Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact could be that an ACS call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone.
+The iOS 16 release has introduced a bug that can stop the Azure Communication Services audio/video call when using the Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact could be that an Azure Communication Services call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone.
To reproduce this bug: - Have a user using an iPhone running iOS 16-- Join ACS call (with audio only or with audio and video) using Safari iOS mobile browser
+- Join an Azure Communication Services call (with audio only or with audio and video) using the Safari iOS mobile browser
- If during a call someone puts the Safari browser in the background and views YouTube OR receives a FaceTime\phone call while connected via a Bluetooth device Results: - After a few minutes of this situation, the incoming and outgoing video may stop working.-- The only way to get ACS calling to work again is to have the end user restart their phone.
+- The only way to get Azure Communication Services calling to work again is to have the end user restart their phone.
### Chrome M98 - regression
Chrome version 98 introduced a regression with abnormal generation of video keyf
### No incoming audio during a call
-Occasionally, a user in an ACS call may not be able to hear the audio from remote participants.
+Occasionally, a user in an Azure Communication Services call may not be able to hear the audio from remote participants.
There is a related [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1402250) bug that causes this issue; the issue can be mitigated by reconnecting the PeerConnection. We've added this workaround since SDK 1.9.1 (stable) and SDK 1.10.0 (beta)
-On Android Chrome, if a user joins ACS call several times, the incoming audio can also disappear. The user is not able to hear the audio from other participants until the page is refreshed. We've fixed this issue in SDK 1.10.1-beta.1, and improved the audio resource usage.
+On Android Chrome, if a user joins an Azure Communication Services call several times, the incoming audio can also disappear. The user is not able to hear the audio from other participants until the page is refreshed. We've fixed this issue in SDK 1.10.1-beta.1, and improved the audio resource usage.
### Some Android devices failing call scenarios except for group calls.
A number of specific Android devices fail to start, accept calls, and meetings.
### Android Chrome mutes the call after browser goes to background for one minute
-On Android Chrome, if a user is on an ACS call and puts the browser into background for one minute. The microphone will lose access and the other participants in the call won't hear the audio from the user. Once the user brings the browser to foreground, microphone is available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940)
+On Android Chrome, if a user is on an Azure Communication Services call and puts the browser into the background for one minute, the microphone loses access and the other participants in the call won't hear the audio from the user. Once the user brings the browser to the foreground, the microphone is available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940)
### A mobile (iOS and Android) user has dropped the call but is still showing up on the participant list.
-The problem can occur if a mobile user leaves the ACS group call without using the Call.hangUp() API. When a mobile user closes the browser or refreshes the webpage without hang up, other participants in the group call will still see this mobile user on the participant list for about 60 seconds.
+The problem can occur if a mobile user leaves the Azure Communication Services group call without using the Call.hangUp() API. When a mobile user closes the browser or refreshes the webpage without hanging up, other participants in the group call will still see this mobile user on the participant list for about 60 seconds.
### iOS Safari refreshes the page if the user goes to another app and returns back to the browser
-The problem can occur if a user in an ACS call with iOS Safari, and switches to other app for a while. After the user returns back to the browser,
+The problem can occur if a user is in an Azure Communication Services call with iOS Safari and switches to another app for a while. After the user returns to the browser,
the browser page may refresh. This happens because the OS kills the browser. One way to mitigate this issue is to keep some state and recover it after the page refreshes.
This problem can occur if another application or the operating system takes over
- A user plays a YouTube video, for example, or starts a FaceTime call. Switching to another native application can capture access to the microphone or camera. - A user enables Siri, which will capture access to the microphone.
-On iOS, for example, while on an ACS call, if a PSTN call comes in, then a microphoneMutedUnexepectedly bad UFD will be raised and audio will stop flowing in the ACS call and the call will be marked as muted. Once the PSTN call is over, the user will have to go and unmute the ACS call for audio to start flowing again in the ACS call. In the case of Android Chrome when a PSTN call comes in, audio will stop flowing in the ACS call and the ACS call will not be marked as muted. In this case, there is no microphoneMutedUnexepectedly UFD event. Once the PSTN call is finished, Android Chrome will regain audio automatically and audio will start flowing normally again in the ACS call.
+On iOS, for example, while on an Azure Communication Services call, if a PSTN call comes in, then a microphoneMutedUnexpectedly bad UFD will be raised, audio will stop flowing in the Azure Communication Services call, and the call will be marked as muted. Once the PSTN call is over, the user will have to go and unmute the Azure Communication Services call for audio to start flowing again in the Azure Communication Services call. In the case of Android Chrome, when a PSTN call comes in, audio will stop flowing in the Azure Communication Services call and the Azure Communication Services call will not be marked as muted. In this case, there is no microphoneMutedUnexpectedly UFD event. Once the PSTN call is finished, Android Chrome will regain audio automatically and audio will start flowing normally again in the Azure Communication Services call.
-In case camera is on and an interruption occurs, ACS call may or may not lose the camera. If lost then camera will be marked as off and user will have to go turn it back on after the interruption has released the camera.
+If the camera is on and an interruption occurs, the Azure Communication Services call may or may not lose the camera. If it's lost, the camera will be marked as off and the user will have to turn it back on after the interruption has released the camera.
Occasionally, microphone or camera devices won't be released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously.
The environment in which this problem occurs is the following:
The cause of this problem might be that acquiring your own stream from the same device will have a side effect of running into race conditions. Acquiring streams from other devices might lead the user into insufficient USB/IO bandwidth, and the `sourceUnavailableError` rate will skyrocket.
-### Excessive use of certain APIs like mute/unmute will result in throttling on ACS infrastructure
+### Excessive use of certain APIs like mute/unmute will result in throttling on Azure Communication Services infrastructure
-As a result of the mute/unmute API call, ACS infrastructure informs other participants in the call about the state of audio of a local participant who invoked mute/unmute, so that participants in the call know who is muted/unmuted.
-Excessive use of mute/unmute will be blocked in ACS infrastructure. That will happen if the participant (or application on behalf of participant) will attempt to mute/unmute continuously, every second, more than 15 times in a 30-second rolling window.
+As a result of the mute/unmute API call, Azure Communication Services infrastructure informs other participants in the call about the state of audio of a local participant who invoked mute/unmute, so that participants in the call know who is muted/unmuted.
+Excessive use of mute/unmute will be blocked in Azure Communication Services infrastructure. That will happen if the participant (or application on behalf of the participant) attempts to mute/unmute continuously, every second, more than 15 times in a 30-second rolling window.
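If your application toggles mute programmatically, it can guard itself against this limit on the client side. The following is an illustrative, SDK-agnostic sketch (shown here in C#) of a rolling-window counter built around the 15-calls-per-30-seconds figure described above; it isn't part of any Azure Communication Services SDK.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: keep mute/unmute requests under the documented limit of
// 15 calls in a 30-second rolling window before invoking the SDK.
class MuteToggleGuard
{
    private static readonly TimeSpan Window = TimeSpan.FromSeconds(30);
    private const int MaxTogglesPerWindow = 15;
    private readonly Queue<DateTimeOffset> _recentToggles = new();

    // Returns true if a mute/unmute call may be made now; false if it should be skipped or delayed.
    public bool TryRecordToggle(DateTimeOffset now)
    {
        // Drop timestamps that have aged out of the rolling window.
        while (_recentToggles.Count > 0 && now - _recentToggles.Peek() > Window)
        {
            _recentToggles.Dequeue();
        }

        if (_recentToggles.Count >= MaxTogglesPerWindow)
        {
            return false;
        }

        _recentToggles.Enqueue(now);
        return true;
    }
}
```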
## Communication Services Call Automation APIs
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md
The following operations are available on Rooms API request metrics:
| DeleteRoom | Deletes a Room. | | GetRoom | Gets a Room by Room ID. | | PatchRoom | Updates a Room by Room ID. |
-| ListRooms | Lists all the Rooms for an ACS Resource. |
+| ListRooms | Lists all the Rooms for an Azure Communication Services Resource. |
| AddParticipants | Adds participants to a Room.| | RemoveParticipants | Removes participants from a Room. | | GetParticipants | Gets list of participants for a Room. |
communication-services Number Lookup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-concept.md
Key features of Azure Communication Services Number Lookup include:
## Value Proposition
-The main benefits the solution will provide to ACS customers can be summarized on the below:
+The main benefits the solution provides to Azure Communication Services customers can be summarized as follows:
- **Reduce Cost:** Optimize your communication expenses by sending messages only to phone numbers that are SMS-ready - **Increase efficiency:** Better target customers based on subscribers' data (name, type, location, etc.). You can also decide on the best communication channel to choose based on status (e.g., SMS or email while roaming instead of calls).
The main benefits the solution will provide to ACS customers can be summarized o
- **Validate the number can receive the SMS before you send it:** Check if a number has SMS capabilities or not and decide, if needed, to use different communication channels. *Contoso bank collected the phone numbers of people who are interested in its services on its site. Contoso wants to send an invite to register for the promotional offer. Before sending the link to the offer, Contoso checks whether SMS is a possible channel for the number that the customer provided on the site, and doesn't waste money sending SMS to non-mobile numbers.* - **Estimate the total cost of an SMS campaign before you launch it:** Get the current carrier of the target number and compare that with the list of known carrier surcharges.
-*Contoso, a marketing company, wants to launch a large SMS campaign to promote a product. Contoso checks the current carrier details for the different numbers he is targeting with this campaign to estimate the cost based on what ACS is charging him.*
+*Contoso, a marketing company, wants to launch a large SMS campaign to promote a product. Contoso checks the current carrier details for the different numbers it is targeting with this campaign to estimate the cost based on what Azure Communication Services charges.*
![Diagram showing call recording architecture using calling client sdk.](../numbers/mvp-use-case.png)
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Alice is a Dynamics 365 contact center agent, who makes an outbound call from Om
- One participant on the VoIP leg (Alice) from Omnichannel for Customer Service client application x 10 minutes x $0.004 per participant leg per minute = $0.04 - One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04-- Omnichannel for Customer Service bot doesn't introduce extra ACS charges.
+- Omnichannel for Customer Service bot doesn't introduce extra Azure Communication Services charges.
**Total cost for the call**: $0.04 + $0.04 = $0.08
Note that the service application that uses Call Automation SDK isn't charged to
### Pricing example: Inbound PSTN call redirected to another external telephone number using Call Automation SDK
-Vlad dials your toll-free number (that you acquired from Communication Service) from his mobile phone. Your service application (built with Call Automation SDK) receives the call, and invokes the logic to redirect the call to a mobile phone number of Abraham using ACS direct routing. Abraham picks up the call and they talk with Vlad for 5 minutes.
+Vlad dials your toll-free number (that you acquired from Communication Services) from his mobile phone. Your service application (built with the Call Automation SDK) receives the call, and invokes the logic to redirect the call to Abraham's mobile phone number using Azure Communication Services direct routing. Abraham picks up the call and talks with Vlad for 5 minutes.
- Vlad was on the call as a PSTN endpoint for a total of 5 minutes. - Your service application was on the call for the entire 5 minutes of the call.
Vlad dials your toll-free number (that you acquired from Communication Service)
**Cost calculations** - Inbound PSTN leg by Vlad to toll-free number acquired from Communication Services x 5 minutes x $0.0220 per minute for receiving the call = $0.11-- One participant on the ACS direct routing outbound leg (Abraham) from the service application to an SBC x 5 minutes x $0.004 per participant leg per minute = $0.02
+- One participant on the Azure Communication Services direct routing outbound leg (Abraham) from the service application to an SBC x 5 minutes x $0.004 per participant leg per minute = $0.02
The service application that uses Call Automation SDK isn't charged to be part of the call. The additional monthly cost of leasing a US toll-free number isn't included in this calculation.
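For reference, the arithmetic in this example works out as follows (a minimal sketch using only the rates and durations quoted above; the $0.13 total is derived from them and isn't an official price quote):

```csharp
// Rates quoted in the example above (USD per minute).
const double inboundTollFreeRatePerMinute = 0.0220; // inbound PSTN leg on a toll-free number
const double directRoutingRatePerMinute = 0.004;    // direct routing outbound leg, per participant
const int callMinutes = 5;

double inboundLegCost = callMinutes * inboundTollFreeRatePerMinute;  // $0.11
double outboundLegCost = callMinutes * directRoutingRatePerMinute;   // $0.02
double totalCost = inboundLegCost + outboundLegCost;                 // $0.13
```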
communication-services Raw Id Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/raw-id-use-cases.md
public void CommunicationIdentifierFromGetRawId()
You can find more platform-specific examples in the following article: [Understand identifier types](./identifiers.md) ## Storing CommunicationIdentifier in a database
-One of the typical jobs that may be required from you is mapping ACS users to users coming from Contoso user database or identity provider. This is usually achieved by adding an extra column or field in Contoso user DB or Identity Provider. However, given the characteristics of the Raw ID (stable, globally unique, and deterministic), you may as well choose it as a primary key for the user storage.
+One of the typical jobs you may need to do is mapping Azure Communication Services users to users coming from the Contoso user database or identity provider. This is usually achieved by adding an extra column or field in the Contoso user DB or identity provider. However, given the characteristics of the Raw ID (stable, globally unique, and deterministic), you may as well choose it as a primary key for the user storage.
Assume `ContosoUser` is a class that represents a user of your application, and you want to save it along with a corresponding CommunicationIdentifier to the database. The original value for a `CommunicationIdentifier` can come from the Communication Identity, Calling, or Chat APIs, or from a custom Contoso API, but it can be represented as a `string` data type in your programming language no matter what the underlying type is:
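For example, here's a minimal sketch of round-tripping an identifier through its Raw ID, assuming the .NET `Azure.Communication.Common` library; `ContosoUser` itself and any surrounding storage code are hypothetical application code:

```csharp
using Azure.Communication;

// Illustrative sketch: use the Raw ID string as the stable key for a user record.
static class IdentifierStorage
{
    // Returns a stable, globally unique, deterministic string such as "8:acs:..." or "4:+16041234567".
    public static string ToStorageKey(CommunicationIdentifier identifier) => identifier.RawId;

    // Rebuilds the correct identifier subtype (Communication user, phone number, Teams user, ...)
    // from the stored string.
    public static CommunicationIdentifier FromStorageKey(string rawId) =>
        CommunicationIdentifier.FromRawId(rawId);
}
```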
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Here are the main scenarios where rooms are useful:
- **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services. - **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This will allow only a subset of users with assigned Communication Services identities to join a room call. - **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference.
+- **Rooms enable PSTN calls.** Rooms enable users to invite participants to a meeting by making phone calls through the public switched telephone network (PSTN).
## When to use rooms
The tables below provide detailed capabilities mapped to the roles. At a high le
| - Render a video in multiple places (local camera or remote stream) | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Set/Update video scaling mode | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Render remote video stream | ✔️ | ✔️ | ✔️ |
+| **PSTN calls** | | | |
+| - Call participants using phone calls | ✔️ | ❌ | ❌ |
*) Only available on the web calling SDK. Not available on iOS and Android calling SDKs
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Alphanumeric sender ID is not capable of receiving inbound messages or STOP mess
Short Code availability is currently restricted to paid Azure subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md). ### Can you text to a toll-free number from a short code?
-ACS toll-free numbers are enabled to receive messages from short codes. However, short codes are not typically enabled to send messages to toll-free numbers. If your messages from short codes to ACS toll-free numbers are failing, check with your short code provider if the short code is enabled to send messages to toll-free numbers.
+Azure Communication Services toll-free numbers are enabled to receive messages from short codes. However, short codes are not typically enabled to send messages to toll-free numbers. If your messages from short codes to Azure Communication Services toll-free numbers are failing, check with your short code provider if the short code is enabled to send messages to toll-free numbers.
### How should a short code be formatted? Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes page without any prefix.
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
> In all the examples, if the dialed number does not match the pattern, the call will be dropped unless a purchased number exists for the communication resource, and this number was used as `alternateCallerId` in the application. ## Managing inbound calls
-For general inbound call management use [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listens for and manage inbound calls placed to a phone number or received via ACS direct routing.
+For general inbound call management, use [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listens for and manages inbound calls placed to a phone number or received via Azure Communication Services direct routing.
Omnichannel for Customer Service customers, refer to [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling). ## Next steps
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
```typescript {
- "resourceId": <string>, // stable resource id of the ACS resource recording
+ "resourceId": <string>, // stable resource id of the Azure Communication Services resource recording
"callId": <string>, // id of the call "chunkDocumentId": <string>, // object identifier for the chunk this metadata corresponds to "chunkIndex": <number>, // index of this chunk with respect to all chunks in the recording
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The Azure Communication Services Calling SDK supports the following streaming co
| **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing | | **Maximum # of incoming remote streams that can be rendered simultaneously** | 9 videos + 1 screen sharing on desktop browsers*, 4 videos + 1 screen sharing on web mobile browsers | 9 videos + 1 screen sharing |
-\* Starting from ACS Web Calling SDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24)
+\* Starting from Azure Communication Services Web Calling SDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24)
While the Calling SDK doesn't enforce these limits, your users might experience performance degradation if they're exceeded. Use the [Optimal Video Count](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#remote-video-quality) API to determine how many concurrent incoming video streams your web environment can support. ## Calling SDK timeouts
communication-services Data Channel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/data-channel.md
[!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include.md)] > [!NOTE]
-> This document delves into the Data Channel feature present in the ACS Calling SDK.
+> This document delves into the Data Channel feature present in the Azure Communication Services Calling SDK.
> While the Data Channel in this context bears some resemblance to the Data Channel in WebRTC, it's crucial to recognize subtle differences in their specifics. > Throughout this document, we use terms *Data Channel API* or *API* to denote the Data Channel API within the SDK. > When referring to the Data Channel API in WebRTC, we explicitly use the term *WebRTC Data Channel API* for clarity and precision.
communication-services Manage Call Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/manage-call-quality.md
The following sections detail the tools to implement at different phases of a ca
- **After a call** ## Before a call
-**Pre-call readiness** ΓÇô By using the pre-call checks ACS provides,
+**Pre-call readiness** – By using the pre-call checks Azure Communication Services provides,
you can learn a user's connection status before the call and take proactive action on their behalf. For example, if you learn a user's connection is poor, you can suggest they turn off their video before
Because Azure Communication Services Voice and Video calls run on web and mobile
behavior on the call they're trying to participate in, referred to as the target call. You should make sure there aren't multiple browser tabs open before a call starts, and also monitor during the whole call lifecycle. You can proactively notify customers to close their excess tabs, or help them join a call correctly with useful messaging if they're unable to join a call initially. - To check if a user has multiple instances
- of ACS running in a browser, see: [How to detect if an application using Azure Communication Services' SDK is active in multiple tabs of a browser](../../how-tos/calling-sdk/is-sdk-active-in-multiple-tabs.md).
+ of Azure Communication Services running in a browser, see: [How to detect if an application using Azure Communication Services' SDK is active in multiple tabs of a browser](../../how-tos/calling-sdk/is-sdk-active-in-multiple-tabs.md).
## During a call
Sometimes users can't hear each other, maybe the speaker is too quiet, the liste
Since network conditions can change during a call, users can report poor audio and video quality even if they started the call without issue. Our Media statistics give you detailed quality metrics on each inbound and outbound audio, video, and screen share stream. These detailed insights help you monitor calls in progress, show users their network quality status throughout a call, and debug individual calls. -- These metrics help indicate issues on the ACS client SDK send and receive media streams. As an example, you can actively monitor the outgoing video stream's `availableBitrate`, notice a persistent drop below the recommended 1.5 Mbps and notify the user their video quality is degraded.
+- These metrics help indicate issues with the Azure Communication Services client SDK's send and receive media streams. As an example, you can actively monitor the outgoing video stream's `availableBitrate`, notice a persistent drop below the recommended 1.5 Mbps, and notify the user that their video quality is degraded.
- It's important to note that our Server Log data only give you an overall summary of the call after it ends. Our detailed Media Statistics provide low level metrics throughout the call duration for use in during the call and afterwards for deeper analysis. - To learn more, see: [Media quality statistics](media-quality-sdk.md)
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Communication Services connections require internet connectivity to specific por
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- |
-| Media traffic | Range of Azure public cloud IP addresses 20.202.0.0/16 The range provided above is the range of IP addresses on either Media processor or ACS TURN service. | UDP 3478 through 3481, TCP ports 443 |
+| Media traffic | Range of Azure public cloud IP addresses 20.202.0.0/16 The range provided above is the range of IP addresses on either Media processor or Azure Communication Services TURN service. | UDP 3478 through 3481, TCP ports 443 |
| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, *.office.com| TCP 443, 80 |
communication-services Simulcast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md
The lack of simulcast support leads to a degraded video experience in calls with
Simulcast is supported on Azure Communication Services SDK for WebJS (1.9.1-beta.1+) and native SDK for Android, iOS, and Windows. Currently, simulcast on the sender side is supported on following desktop browsers - Chrome and Edge. Simulcast on receiver side is supported on all platforms that Azure Communication Services Calling supports. Support for Sender side Simulcast capability from mobile browsers will be added in the future. ## How Simulcast works
-Simulcast is a feature that allows a publisher, in this case the ACS calling SDK, to send different qualities of the same video to the SFU. The SFU then forwards the most suitable quality to each other endpoint on a call, based on their bandwidth, CPU, and resolution preferences. This way, the publisher can save resources and the subscribers can receive the best possible quality. The SFU doesn't change the video quality, it only selects which one to forward.
+Simulcast is a feature that allows a publisher, in this case the Azure Communication Services calling SDK, to send different qualities of the same video to the SFU. The SFU then forwards the most suitable quality to each other endpoint on a call, based on their bandwidth, CPU, and resolution preferences. This way, the publisher can save resources and the subscribers can receive the best possible quality. The SFU doesn't change the video quality, it only selects which one to forward.
## Supported number of video qualities available with Simulcast.
-Simulcast streaming from a web endpoint supports a maximum two video qualities. There aren't API controls needed to enable Simulcast for ACS. Simulcast is enabled and available for all video calls.
+Simulcast streaming from a web endpoint supports a maximum of two video qualities. No API controls are needed to enable Simulcast for Azure Communication Services. Simulcast is enabled and available for all video calls.
## Available video resolutions When streaming with simulcast, there are no set resolutions for high or low quality simulcast video streams. Instead, based on many different variables, either a single or multiple video streams are delivered. If every subscriber to video is requesting and capable of receiving the maximum resolution the publisher can provide, only that maximum resolution will be sent. The following resolutions are supported:
communication-services Video Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-constraints.md
[!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include.md)]
-The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set maximum video resolutions, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The ACS video engine is optimized to allow the video quality to change dynamically based on devices ability and network quality. But there might be certain scenarios where you would want to have tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality is not a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality.
+The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set maximum video resolutions, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The Azure Communication Services video engine is optimized to allow the video quality to change dynamically based on the device's ability and network quality. But there might be certain scenarios where you would want to have tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality is not a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality.
Another benefit of the Video Constraints API is that it enables developers to optimize the video call for different devices. For example, if a user is using an older device with limited processing power, developers can set constraints on the video resolution to ensure that the video call runs smoothly on that device
-ACS Web Calling SDK supports setting the maximum video resolution, framerate, or bitrate that a client sends. The sender video constraints are supported on Desktop browsers (Chrome, Edge, Firefox) and when using iOS Safari mobile browser or Android Chrome mobile browser.
+Azure Communication Services Web Calling SDK supports setting the maximum video resolution, framerate, or bitrate that a client sends. The sender video constraints are supported on Desktop browsers (Chrome, Edge, Firefox) and when using iOS Safari mobile browser or Android Chrome mobile browser.
The native Calling SDK (Android, iOS, Windows) supports setting the maximum values of video resolution and framerate for outgoing video streams and setting the maximum resolution for incoming video streams. These constraints can be set at the start of the call and during the call.
communication-services Video Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md
> [!NOTE] > Currently browser support for creating video background effects is only supported on Chrome and Edge Desktop Browser (Windows and Mac) and Mac Safari Desktop.
-The Azure Communication Calling SDK allows you to create video effects that other users on a call are able to see. For example, for a user doing ACS calling using the WebJS SDK you can now enable that the user can turn on background blur. When the background blur is enabled, a user can feel more comfortable in doing a video call that the output video just shows a user, and all other content is blurred.
+The Azure Communication Calling SDK allows you to create video effects that other users on a call are able to see. For example, for a user doing Azure Communication Services calling using the WebJS SDK, you can now enable the user to turn on background blur. When background blur is enabled, a user can feel more comfortable doing a video call because the output video shows only the user, and all other content is blurred.
## Prerequisites ### Install the Azure Communication Services Calling SDK
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
To place a call to a Communication Services user, you need to provide a Communic
```csharp Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
-var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller
var callThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); // person to call CreateCallResult response = await client.CreateCallAsync(callThisPerson, callbackUri); ```
CreateCallResult response = await client.CreateCallAsync(callThisPerson, callbac
```java String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the ACS provisioned phone number for the caller
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the Azure Communication Services provisioned phone number for the caller
CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16471234567"), callerIdNumber); // person to call CreateCallResult response = client.createCall(callInvite, callbackUri).block(); ```
CreateCallResult response = client.createCall(callInvite, callbackUri).block();
```javascript const callInvite = { targetParticipant: { phoneNumber: "+18008008800" }, // person to call
- sourceCallIdNumber: { phoneNumber: "+18888888888" } // This is the ACS provisioned phone number for the caller
+ sourceCallIdNumber: { phoneNumber: "+18888888888" } // This is the Azure Communication Services provisioned phone number for the caller
}; const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events const response = await client.createCall(callInvite, callbackUri);
const response = await client.createCall(callInvite, callbackUri);
callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events caller_id_number = PhoneNumberIdentifier( "+18001234567"
-) # This is the ACS provisioned phone number for the caller
+) # This is the Azure Communication Services provisioned phone number for the caller
call_invite = CallInvite( target=PhoneNumberIdentifier("+16471234567"), source_caller_id_number=caller_id_number,
var pstnEndpoint = new PhoneNumberIdentifier("+16041234567");
var voipEndpoint = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-... var groupCallOptions = new CreateGroupCallOptions(new List<CommunicationIdentifier>{ pstnEndpoint, voipEndpoint }, callbackUri) {
- SourceCallerIdNumber = new PhoneNumberIdentifier("+16044561234"), // This is the ACS provisioned phone number for the caller
+ SourceCallerIdNumber = new PhoneNumberIdentifier("+16044561234"), // This is the Azure Communication Services provisioned phone number for the caller
}; CreateCallResult response = await client.CreateGroupCallAsync(groupCallOptions); ```
CreateCallResult response = await client.CreateGroupCallAsync(groupCallOptions);
```java String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the ACS provisioned phone number for the caller
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the Azure Communication Services provisioned phone number for the caller
List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(new PhoneNumberIdentifier("+16471234567"), new CommunicationUserIdentifier("<user_id_of_target>"))); CreateGroupCallOptions groupCallOptions = new CreateGroupCallOptions(targets, callbackUri); groupCallOptions.setSourceCallIdNumber(callerIdNumber);
const participants = [
{ communicationUserId: "<user_id_of_target>" }, //user id looks like 8:a1b1c1-... ]; const createCallOptions = {
- sourceCallIdNumber: { phoneNumber: "+18888888888" }, // This is the ACS provisioned phone number for the caller
+ sourceCallIdNumber: { phoneNumber: "+18888888888" }, // This is the Azure Communication Services provisioned phone number for the caller
}; const response = await client.createGroupCall(participants, callbackUri, createCallOptions); ```
const response = await client.createGroupCall(participants, callbackUri, createC
callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events caller_id_number = PhoneNumberIdentifier( "+18888888888"
-) # This is the ACS provisioned phone number for the caller
+) # This is the Azure Communication Services provisioned phone number for the caller
pstn_endpoint = PhoneNumberIdentifier("+18008008800") voip_endpoint = CommunicationUserIdentifier( "<user_id_of_target>"
To redirect the call to a phone number, construct the target and caller ID with
# [csharp](#tab/csharp) ```csharp
-var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller
var target = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); ``` # [Java](#tab/java) ```java
-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller
CallInvite target = new CallInvite(new PhoneNumberIdentifier("+18001234567"), callerIdNumber); ```
const target = {
```python caller_id_number = PhoneNumberIdentifier( "+18888888888"
-) # This is the ACS provisioned phone number for the caller
+) # This is the Azure Communication Services provisioned phone number for the caller
call_invite = CallInvite( target=PhoneNumberIdentifier("+16471234567"), source_caller_id_number=caller_id_number,
You can add a participant (Communication Services user or phone number) to an ex
# [csharp](#tab/csharp) ```csharp
-var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller
var addThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); AddParticipantsResult result = await callConnection.AddParticipantAsync(addThisPerson); ```
AddParticipantsResult result = await callConnection.AddParticipantAsync(addThisP
# [Java](#tab/java) ```java
-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller
CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite); Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(addParticipantOptions).block();
Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsyn
# [JavaScript](#tab/javascript) ```javascript
-const callerIdNumber = { phoneNumber: "+16044561234" }; // This is the ACS provisioned phone number for the caller
+const callerIdNumber = { phoneNumber: "+16044561234" }; // This is the Azure Communication Services provisioned phone number for the caller
const addThisPerson = { targetParticipant: { phoneNumber: "+16041234567" }, sourceCallIdNumber: callerIdNumber,
const addParticipantResult = await callConnection.addParticipant(addThisPerson);
```python caller_id_number = PhoneNumberIdentifier( "+18888888888"
-) # This is the ACS provisioned phone number for the caller
+) # This is the Azure Communication Services provisioned phone number for the caller
call_invite = CallInvite( target=PhoneNumberIdentifier("+18008008800"), source_caller_id_number=caller_id_number,
communication-services Control Mid Call Media Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/control-mid-call-media-actions.md
app.logger.info("Started continuous DTMF recognition")
``` --
-When your application no longer wishes to receive DTMF tones from the participant anymore, you can use the `StopContinuousDtmfRecognitionAsync` method to let ACS know to stop detecting DTMF tones.
+When your application no longer wishes to receive DTMF tones from the participant, you can use the `StopContinuousDtmfRecognitionAsync` method to let Azure Communication Services know to stop detecting DTMF tones.
### StopContinuousDtmfRecognitionAsync Stop detecting DTMF tones sent by participant.
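As a rough C# sketch of the stop call (assuming an existing `callAutomationClient` and `callConnectionId`; exact overloads can vary by Call Automation SDK version):

```csharp
// Minimal sketch: stop continuous DTMF recognition for one participant.
// `callAutomationClient` (a CallAutomationClient) and `callConnectionId` are assumed to exist already.
var callMedia = callAutomationClient
    .GetCallConnection(callConnectionId)
    .GetCallMedia();

// Stop detecting DTMF tones sent by this PSTN participant.
await callMedia.StopContinuousDtmfRecognitionAsync(new PhoneNumberIdentifier("+16041234567"));
```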
if event.type == "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived"
``` --
-ACS provides you with a `SequenceId` as part of the `ContinuousDtmfRecognitionToneReceived` event, which your application can use to reconstruct the order in which the participant entered the DTMF tones.
+Azure Communication Services provides you with a `SequenceId` as part of the `ContinuousDtmfRecognitionToneReceived` event, which your application can use to reconstruct the order in which the participant entered the DTMF tones.
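As a small, self-contained C# sketch of that reconstruction (the `(SequenceId, Tone)` pairs below are placeholders for values read from the `ContinuousDtmfRecognitionToneReceived` events your webhook receives):

```csharp
using System;
using System.Collections.Generic;

// Sketch: rebuild the order in which DTMF tones were entered by sorting on SequenceId.
class DtmfOrderingExample
{
    static void Main()
    {
        // Placeholder (SequenceId, Tone) pairs; in a real handler these come from the events.
        var receivedTones = new[] { (SequenceId: 2, Tone: "5"), (SequenceId: 1, Tone: "3"), (SequenceId: 3, Tone: "#") };

        var tonesBySequenceId = new SortedDictionary<int, string>();
        foreach (var tone in receivedTones)
        {
            tonesBySequenceId[tone.SequenceId] = tone.Tone;
        }

        // Prints "35#": the order in which the participant actually pressed the keys.
        Console.WriteLine(string.Concat(tonesBySequenceId.Values));
    }
}
```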
### ContinuousDtmfRecognitionFailed Event Example of how you can handle when DTMF tone detection fails.
communication-services Mute Participants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/mute-participants.md
zone_pivot_groups: acs-csharp-java
With the Azure Communication Services Call Automation SDK, developers can now mute participants through server-based API requests. This feature can be useful when you want your application to mute participants after they've joined the meeting, to avoid interruptions or distractions to ongoing meetings.
-If you're interested in abilities to allow participants to mute/unmute themselves on the call when they've joined with ACS Client Libraries, you can use our [mute/unmute function](../../../communication-services/how-tos/calling-sdk/manage-calls.md) provided through our Calling Library.
+If you're interested in allowing participants to mute/unmute themselves on the call after they've joined with the Azure Communication Services Client Libraries, you can use our [mute/unmute function](../../../communication-services/how-tos/calling-sdk/manage-calls.md) provided through our Calling Library.
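As a hedged C# sketch of a server-side mute request (assuming an existing `callAutomationClient` and `callConnectionId`; verify the `MuteParticipantAsync` method name against the Call Automation SDK version you use):

```csharp
// Sketch: mute a specific participant from the server side.
// `callAutomationClient` and `callConnectionId` are placeholders your application already holds.
var callConnection = callAutomationClient.GetCallConnection(callConnectionId);

// Identify the participant to mute by their Communication Services user ID.
var participantToMute = new CommunicationUserIdentifier("<user_id_of_target>");

await callConnection.MuteParticipantAsync(participantToMute);
```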
## Common use cases
communication-services Call Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/call-transcription.md
zone_pivot_groups: acs-plat-ios-android-windows
# Display call transcription state on the client > [!NOTE]
-> Call transcription state is only available from Teams meetings. Currently there's no support for call transcription state for ACS to ACS calls.
+> Call transcription state is only available from Teams meetings. Currently there's no support for call transcription state for Azure Communication Services to Azure Communication Services calls.
When using call transcription, you may want to let your users know that a call is being transcribed. Here's how.
communication-services Callkit Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/callkit-integration.md
Last updated 01/06/2023
Title: CallKit integration in ACS Calling SDK
+ Title: CallKit integration in Azure Communication Services Calling SDK
-description: Steps on how to integrate CallKit with ACS Calling SDK
+description: Steps on how to integrate CallKit with Azure Communication Services Calling SDK
# Integrate with CallKit
description: Steps on how to integrate CallKit with ACS Calling SDK
## CallKit Integration (within SDK)
- CallKit Integration in the ACS iOS SDK handles interaction with CallKit for us. To perform any call operations like mute/unmute, hold/resume, we only need to call the API on the ACS SDK.
+ CallKit Integration in the Azure Communication Services iOS SDK handles interaction with CallKit for us. To perform any call operations like mute/unmute, hold/resume, we only need to call the API on the Azure Communication Services SDK.
### Initialize call agent with CallKitOptions
description: Steps on how to integrate CallKit with ACS Calling SDK
### Handle incoming push notification payload
- When the app receives incoming push notification payload, we need to call `handlePush` to process it. ACS Calling SDK will raise the `IncomingCall` event.
+ When the app receives incoming push notification payload, we need to call `handlePush` to process it. Azure Communication Services Calling SDK will raise the `IncomingCall` event.
```Swift public func handlePushNotification(_ pushPayload: PKPushPayload)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/capabilities.md
Do I have permission to turn video on, do I have permission to turn mic on, do I
[!INCLUDE [Capabilities JavaScript](./includes/capabilities/capabilities-web.md)] ## Supported Calltype
-The feature is currently supported only for ACS Rooms call type and teams meeting call type
+The feature is currently supported only for the Azure Communication Services Rooms call type and the Teams meeting call type.
## Next steps - [Learn how to manage video](./manage-video.md)
communication-services Manage Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-calls.md
Last updated 08/10/2021
zone_pivot_groups: acs-plat-web-ios-android-windows
-#Customer intent: As a developer, I want to manage calls with the acs sdks so that I can create a calling application that manages calls.
+#Customer intent: As a developer, I want to manage calls with the Azure Communication Services sdks so that I can create a calling application that manages calls.
# Manage calls
communication-services Manage Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-video.md
Last updated 08/10/2021
zone_pivot_groups: acs-plat-web-ios-android-windows
-#Customer intent: As a developer, I want to manage video calls with the acs sdks so that I can create a calling application that provides video capabilities.
+#Customer intent: As a developer, I want to manage video calls with the Azure Communication Services sdks so that I can create a calling application that provides video capabilities.
# Manage video during calls
communication-services Push Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/push-notifications.md
Last updated 08/10/2021
zone_pivot_groups: acs-plat-web-ios-android
-#Customer intent: As a developer, I want to enable push notifications with the acs sdks so that I can create a calling application that provides push notifications to its users.
+#Customer intent: As a developer, I want to enable push notifications with the Azure Communication Services sdks so that I can create a calling application that provides push notifications to its users.
# Enable push notifications for calls
communication-services Local Testing Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/local-testing-event-grid.md
ngrok http 7071
"MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", "From": "15555555555", "To": "15555555555",
- "Message": "Great to connect with ACS events",
+ "Message": "Great to connect with Azure Communication Services events",
"ReceivedTimestamp": "2020-09-18T00:27:45.32Z" }, "eventType": "Microsoft.Communication.SMSReceived",
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/managed-identity.md
Azure Communication Services is a fully managed communication platform that enab
## Using Managed Identity with Azure Communication Services
-ACS supports using Managed Identity to authenticate with the service. By using Managed Identity, you can eliminate the need to manage your own access tokens and credentials.
+Azure Communication Services supports using Managed Identity to authenticate with the service. By using Managed Identity, you can eliminate the need to manage your own access tokens and credentials.
Your Azure Communication Services resource can be assigned two types of identity: 1. A **System Assigned Identity** which is tied to your resource and is deleted when your resource is deleted.
az communication identity assign --system-assigned --name myApp --resource-group
## Add a user-assigned identity
-Assigning a user-assigned identity to your ACS resource requires that you first create the identity and then add its resource identifier to your Communication service resource.
+Assigning a user-assigned identity to your Azure Communication Services resource requires that you first create the identity and then add its resource identifier to your Communication service resource.
# [Azure portal](#tab/portal)
az communication identity assign --name myApp --resource-group myResourceGroup -
--
-## Managed Identity using ACS management SDKs
-Managed Identity can also be assigned to your ACS resource using the Azure Communication Management SDKs.
+## Managed Identity using Azure Communication Services management SDKs
+Managed Identity can also be assigned to your Azure Communication Services resource using the Azure Communication Management SDKs.
This assignment can be achieved by introducing the identity property in the resource definition either on creation or when updating the resource. # [.NET](#tab/dotnet)
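For example, a hedged .NET sketch of creating or updating the resource with a system-assigned identity (the resource names are placeholders, and the type names follow `Azure.ResourceManager.Communication` conventions, so they may differ slightly by SDK version):

```csharp
using System.Threading.Tasks;
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Communication;
using Azure.ResourceManager.Models;
using Azure.ResourceManager.Resources;

class AssignIdentitySample
{
    static async Task Main()
    {
        var armClient = new ArmClient(new DefaultAzureCredential());
        SubscriptionResource subscription = await armClient.GetDefaultSubscriptionAsync();
        ResourceGroupResource resourceGroup =
            (await subscription.GetResourceGroupAsync("myResourceGroup")).Value;

        // The identity property in the resource definition requests a system-assigned identity.
        var data = new CommunicationServiceResourceData(new AzureLocation("global"))
        {
            DataLocation = "United States",
            Identity = new ManagedServiceIdentity(ManagedServiceIdentityType.SystemAssigned),
        };

        await resourceGroup.GetCommunicationServiceResources()
            .CreateOrUpdateAsync(WaitUntil.Completed, "myCommunicationService", data);
    }
}
```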
For more information specific to managing your resource instance, see [Managing
# [JavaScript](#tab/javascript)
-For Node.js apps and JavaScript functions, samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/samples-dev/communicationServicesCreateOrUpdateSample.ts)
+For Node.js apps and JavaScript functions, samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/samples-dev/communicationServicesCreateOrUpdateSample.ts)
For more information on using the JavaScript Management SDK, see [Azure Communication Management SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/README.md) # [Python](#tab/python)
-For Python apps and functions, Code Samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/generated_samples/communication_services/create_or_update_with_system_assigned_identity.py)
+For Python apps and functions, Code Samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/generated_samples/communication_services/create_or_update_with_system_assigned_identity.py)
For more information on using the python Management SDK, see [Azure Communication Management SDK for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/README.md) # [Java](#tab/java)
-For Java apps and functions, Code Samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/communication/azure-resourcemanager-communication/src/samples/java/com/azure/resourcemanager/communication/generated/CommunicationServicesCreateOrUpdateSamples.java).
+For Java apps and functions, Code Samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/communication/azure-resourcemanager-communication/src/samples/java/com/azure/resourcemanager/communication/generated/CommunicationServicesCreateOrUpdateSamples.java).
For more information on using the java Management SDK, see [Azure Communication Management SDK for Java](https://github.com/Azure/azure-sdk-for-jav) # [GoLang](#tab/go)
-For Golang apps and functions, Code Samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Golang](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/services_client_example_test.go).
+For Golang apps and functions, Code Samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Golang](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/services_client_example_test.go).
For more information on using the golang Management SDK, see [Azure Communication Management SDK for Golang](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/README.md)
communication-services Domain Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/telephony/domain-validation.md
To use direct routing in Azure Communication Services, you need to validate that
When you're verifying the ownership of the SBC FQDN, keep in mind that the `*.onmicrosoft.com` and `*.azure.com` domain names aren't supported. For example, if you have two domain names, `contoso.com` and `contoso.onmicrosoft.com`, use `sbc.contoso.com` as the SBC name. Validating the domain part makes sense if you plan to add multiple SBCs from the same domain namespace. For example, if you're using `sbc-eu.contoso.com`, `sbc-us.contoso.com`, and `sbc-af.contoso.com`, you can validate the `contoso.com` domain once and add SBCs from that domain later without extra validation.
-Validating entire FQDN is helpful if you're a service provider and don't want to validate your base domain ownership with every customer. For example if you're running SBCs `customer1.acs.adatum.biz`, `customer2.acs.adatum.biz`, and `customer3.acs.adatum.biz`, you don't need to validate `acs.adatum.biz` for every Communication resource, instead you validate the entire FQDN each time. This option provides more granular security approach.
+Validating the entire FQDN is helpful if you're a service provider and don't want to validate your base domain ownership with every customer. For example, if you're running SBCs `customer1.acs.adatum.biz`, `customer2.acs.adatum.biz`, and `customer3.acs.adatum.biz`, you don't need to validate `acs.adatum.biz` for every Communication resource; instead, you validate the entire FQDN each time. This option provides a more granular security approach.
## Add a new domain name
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
When you have a Communication Services resource, you can set up a Communication
:::image type="content" source="./media/smaller-bot-choose-resource.png" alt-text="Screenshot that shows how to save the selected Communication Service resource to create a new Communication Services user ID." lightbox="./media/bot-choose-resource.png":::
-1. When the resource details are verified, a bot ID is shown in the **Bot ACS Id** column. You can use the bot ID to represent the bot in a chat thread by using the Communication Services Chat AddParticipant API. After you add the bot to a chat as participant, the bot starts to receive chat-related activities, and it can respond in the chat thread.
+1. When the resource details are verified, a bot ID is shown in the **Bot Azure Communication Services Id** column. You can use the bot ID to represent the bot in a chat thread by using the Communication Services Chat AddParticipant API. After you add the bot to a chat as participant, the bot starts to receive chat-related activities, and it can respond in the chat thread.
:::image type="content" source="./media/smaller-acs-chat-channel-saved.png" alt-text="Screenshot that shows the new Communication Services user ID assigned to the bot." lightbox="./media/acs-chat-channel-saved.png":::
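As an illustrative C# sketch of adding that bot ID to a thread with the Chat SDK (the endpoint, user access token, thread ID, and bot ID below are placeholders):

```csharp
using System;
using Azure.Communication;
using Azure.Communication.Chat;

// Sketch: add the bot's Communication Services ID to an existing chat thread as a participant.
var chatClient = new ChatClient(
    new Uri("https://<your-resource>.communication.azure.com"),
    new CommunicationTokenCredential("<user-access-token>"));

ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient("<chat-thread-id>");

// Once added, the bot starts receiving chat-related activities and can reply in the thread.
await chatThreadClient.AddParticipantAsync(
    new ChatParticipant(new CommunicationUserIdentifier("<bot-acs-id>")) { DisplayName = "Contoso Bot" });
```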
namespace Microsoft.BotBuilderSamples.Bots
### Send an adaptive card
+> [!NOTE]
+> Adaptive cards are only supported within Azure Communication Services use cases where all chat participants are Azure Communication Services users, and not for Teams interoperability use cases.
+ You can send an adaptive card to the chat thread to increase engagement and efficiency. An adaptive card also helps you communicate with users in various ways. You can send an adaptive card from a bot by adding the card as a bot activity attachment. Here's an example of how to send an adaptive card:
Verify that the bot's Communication Services ID is used correctly when a request
## Next steps
-Try the [chat bot demo app](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App) for a 1:1 chat between a chat user and a bot via the BotFramework WebChat UI component.
+Try the [chat bot demo app](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App) for a 1:1 chat between a chat user and a bot via the BotFramework WebChat UI component.
communication-services Add Multiple Senders Mgmt Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-multiple-senders-mgmt-sdks.md
Title: How to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries
+ Title: How to add and remove sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries
-description: Learn about adding and removing sender addresses in Azure Communication Services using the ACS Management Client Libraries
+description: Learn about adding and removing sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries
zone_pivot_groups: acs-js-csharp-java-python
-# Quickstart: How to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries
+# Quickstart: How to add and remove sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries
-In this quick start, you will learn how to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries.
+In this quick start, you will learn how to add and remove sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries.
::: zone pivot="programming-language-csharp" [!INCLUDE [Add sender addresses with .NET Management SDK](./includes/add-multiple-senders-net.md)]
communication-services Define Media Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/media-composition/define-media-composition.md
Azure Communication Services Media Composition is made up of three parts: inputs
To retrieve the media sources that will be used in the layout composition, you'll need to define inputs. Inputs can be either multi-source or single source. ### Multi-source inputs
-ACS Group Calls and ACS Rooms are typically made up of multiple participants. We define these as multi-source inputs. They can be used in layouts as a single input or destructured to reference a single participant.
+Azure Communication Services Group Calls and Azure Communication Services Rooms are typically made up of multiple participants. We define these as multi-source inputs. They can be used in layouts as a single input or destructured to reference a single participant.
-ACS Group Call json:
+Azure Communication Services Group Call json:
```json { "inputs": {
ACS Group Call json:
} ```
-ACS Rooms Input json:
+Azure Communication Services Rooms Input json:
```json { "inputs": {
ACS Rooms Input json:
``` ### Single source inputs
-Unlike multi-source inputs, single source inputs reference a single media source. If the single source input is from a multi-source input such as an ACS group call or rooms, it will reference the multi-source input's ID in the `call` property. The following are examples of single source inputs:
+Unlike multi-source inputs, single source inputs reference a single media source. If the single source input is from a multi-source input such as an Azure Communication Services group call or rooms, it will reference the multi-source input's ID in the `call` property. The following are examples of single source inputs:
Participant json: ```json
The custom layout example above will result in the following composition:
## Outputs After media has been composed according to a layout, they can be outputted to your audience in various ways. Currently, you can either send the composed stream to a call or to an RTMP server.
-ACS Group Call json:
+Azure Communication Services Group Call json:
```json { "outputs": {
ACS Group Call json:
} ```
-ACS Rooms Output json:
+Azure Communication Services Rooms Output json:
```json { "outputs": {
communication-services Receive Sms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/receive-sms.md
The `SMSReceived` event generated when an SMS is sent to an Azure Communication
"MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", "From": "15555555555", "To": "15555555555",
- "Message": "Great to connect with ACS events",
+ "Message": "Great to connect with Azure Communication Services events",
"ReceivedTimestamp": "2020-09-18T00:27:45.32Z" }, "eventType": "Microsoft.Communication.SMSReceived",
communication-services Get Started Chat Ui Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-chat-ui-library.md
Get started with Azure Communication Services UI Library to quickly integrate communication experiences into your applications. In this quickstart, learn how to integrate UI Library chat composites into an application and set up the experience for your app users.
-Communication Services UI Library renders a full chat experience right in your application. It takes care of connecting to ACS chat services, and updates participant's presence automatically. As a developer, you need to worry about where in your app's user experience you want the chat experience to launch and only create the ACS resources as required.
+Communication Services UI Library renders a full chat experience right in your application. It takes care of connecting to Azure Communication Services chat services and updates participants' presence automatically. As a developer, you only need to decide where in your app's user experience the chat experience should launch, and create the Azure Communication Services resources as required.
::: zone pivot="platform-web"
communication-services Media Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/media-streaming.md
Title: Media streaming quickstart
-description: Provides a quick start for developers to get audio streams through media streaming APIs from ACS calls.
+description: Provides a quick start for developers to get audio streams through media streaming APIs from Azure Communication Services calls.
communication-services Web Calling Push Notifications Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/web-calling-push-notifications-sample.md
Title: Azure Communication Services Web Calling SDK - Web push notifications
-description: Quickstart tutorial for ACS Web Calling SDK push notifications
+description: Quickstart tutorial for Azure Communication Services Web Calling SDK push notifications
Last updated 04/04/2023
communication-services Meeting Interop Features File Attachment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-interop/meeting-interop-features-file-attachment.md
# Tutorial: Enable file attachment support in your Chat app
-The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, Chat SDK provides a solution to receive file attachments sent by users from Microsoft Teams. Currently this feature is only available in the Chat SDK for JavaScript. Please note that sending file attachments from ACS user to Teams user is not currently supported, see the current capabilities of [Teams Interop Chat](../../concepts/interop/guest/capabilities.md) for details.
+The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, Chat SDK provides a solution to receive file attachments sent by users from Microsoft Teams. Currently this feature is only available in the Chat SDK for JavaScript. Note that sending file attachments from an Azure Communication Services user to a Teams user isn't currently supported; see the current capabilities of [Teams Interop Chat](../../concepts/interop/guest/capabilities.md) for details.
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
communication-services Contact Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/contact-center.md
The following list presents the set of features that are currently available for
| Group of features | Capability | Public preview | General availability | |-|-|-|-|
-| DTMF Support in ACS UI SDK | Allows touch tone entry | ❌ | ✔️ |
+| DTMF Support in Azure Communication Services UI SDK | Allows touch tone entry | ❌ | ✔️ |
| Teams Capabilities | Audio and video | ✔️ | ✔️ | | | Screen sharing | ✔️ | ✔️ | | | Record the call | ✔️ | ✔️ |
communication-services End Of Call Survey Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md
In addition to using the End of Call Survey API you can create your own survey q
- Embed Azure AppInsights into your application [Click here to know more about App Insight initialization using plain JavaScript](../../azure-monitor/app/javascript-sdk.md). Alternatively, you can use NPM to get the App Insights dependencies. [Click here to know more about App Insight initialization using NPM](../../azure-monitor/app/javascript-sdk-configuration.md). - Build a UI in your application that will serve custom questions to the user and gather their input; let's assume that your application gathered responses as a string in the `improvementSuggestion` variable -- Submit survey results to ACS and send user response using App Insights:
+- Submit survey results to Azure Communication Services and send user response using App Insights:
``` javascript currentCall.feature(SDK.Features.CallSurvey).submitSurvey(survey).then(res => { // `improvementSuggestion` contains the custom user response
In addition to using the End of Call Survey API you can create your own survey q
appInsights.flush(); ``` User responses that were sent using AppInsights are available under your App Insights workspace. You can use [Workbooks](../../update-center/workbooks.md) to query between multiple resources, correlate call ratings and custom survey data. Steps to correlate the call ratings and custom survey data:-- Create new [Workbooks](../../update-center/workbooks.md) (Your ACS Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your ACS resource.
+- Create new [Workbooks](../../update-center/workbooks.md) (Your Azure Communication Services Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your Azure Communication Services resource.
- Add new query (+Add -> Add query) - Make sure `Data source` is `Logs` and `Resource type` is `Communication` - You can rename the query (Advanced Settings -> Step name [example: call-survey])
communication-services File Sharing Tutorial Acs Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-acs-chat.md
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-In an Azure Communication Service Chat ("ACS Chat"), we can enable file sharing between communication users. Note, ACS Chat is different from the Teams Interoperability Chat ("Interop Chat"). If you want to enable file sharing in an Interop Chat, refer to [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md).
+In an Azure Communication Services Chat ("ACS Chat"), we can enable file sharing between communication users. Note that Azure Communication Services Chat is different from Teams Interoperability Chat ("Interop Chat"). If you want to enable file sharing in an Interop Chat, refer to [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md).
In this tutorial, we're configuring the Azure Communication Services UI Library Chat Composite to enable file sharing. The UI Library Chat Composite provides a set of rich components and UI controls that can be used to enable file sharing. We're using Azure Blob Storage to enable the storage of the files that are shared through the chat thread.
communication-services File Sharing Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-interop-chat.md
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Service end users and Teams users. Note, Interop Chat is different from the Azure Communication Service Chat ("ACS Chat"). If you want to enable file sharing in an ACS Chat, refer to [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Service end user is only able to receive file attachments from the Teams user. Please refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
+In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Services end users and Teams users. Note that Interop Chat is different from Azure Communication Services Chat ("ACS Chat"). If you want to enable file sharing in an Azure Communication Services Chat, refer to [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Services end user is only able to receive file attachments from the Teams user. Refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
>[!IMPORTANT] >
Moreover, the Teams user's tenant admin might impose restrictions on file sharin
Let's run `npm run start` then you should be able to access our sample app via `localhost:3000` like the following screenshot:
-![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of a ACS UI library.")
+![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of an Azure Communication Services UI library.")
Simply click on the chat button located in the bottom to reveal the chat panel and now if Teams user sends some files, you should see something like the following screenshot: ![Teams sending a file](./media/file-sharing-tutorial-interop-chat-1.png "Screenshot of a Teams client sending one file.")
-![ACS getting a file](./media/file-sharing-tutorial-interop-chat-2.png "Screenshot of ACS UI library receiving one file.")
+![ACS getting a file](./media/file-sharing-tutorial-interop-chat-2.png "Screenshot of Azure Communication Services UI library receiving one file.")
And now if the user click on the file attachment card, a new tab would be opened like the following where the user can download the file:
communication-services Inline Image Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/inline-image-tutorial-interop-chat.md
And this is all you need! And there's no other setup needed to enable inline ima
Let's run `npm run start` then you should be able to access our sample app via `localhost:3000` like the following screenshot:
-![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of a ACS UI library.")
+![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of an Azure Communication Services UI library.")
Simply click on the chat button located in the bottom to reveal the chat panel and now if Teams user sends an image, you should see something like the following screenshot: ![Teams sending two images](./media/inline-image-tutorial-interop-chat-1.png "Screenshot of a Teams client sending 2 inline images.")
-![ACS getting two images](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of ACS UI library receiving 2 inline images.")
+![ACS getting two images](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of Azure Communication Services UI library receiving 2 inline images.")
Note that in a Teams Interop Chat, we currently only support the Azure Communication Services end user receiving inline images sent by the Teams user. To learn more about what features are supported, refer to the [UI Library use cases](../concepts/ui-library/ui-library-use-cases.md)
communication-services Integrate Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/integrate-azure-function.md
Before you get started, make sure to:
}; } ```
-**Explanation to code above**: The first line import the interface for the `CommunicationIdentityClient`. The connection string in the second line can be found in your Azure Communication Services resource in the Azure portal. The `ACSEndpoint` is the URL of the ACS resource that was created.
+**Explanation of the code above**: The first line imports the interface for the `CommunicationIdentityClient`. The connection string in the second line can be found in your Azure Communication Services resource in the Azure portal. The `ACSEndpoint` is the URL of the Azure Communication Services resource that was created.
5. Open the local Azure Function folder in Visual Studio Code. Open the `index.js` and run the local Azure Function. A local Azure Function endpoint will be created and printed in the terminal. The printed message looks similar to:
communication-services Proxy Calling Support Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md
Title: Tutorial - Proxy your ACS calling traffic across your own servers
+ Title: Tutorial - Proxy your Azure Communication Services calling traffic across your own servers
description: Learn how to have your media and signaling traffic be proxied to servers that you can control.
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
These three **implementation options** are columns in the table below, while eac
|--||--||| | *Manager* | Configure Business Availability | Bookings | Bookings | Custom | | *Provider* | Managing upcoming appointments | Outlook & Teams | Outlook & Teams | Custom |
-| *Provider* | Join the appointment | Teams | Teams | ACS Calling & Chat |
-| *Consumer* | Schedule an appointment | Bookings | Bookings | ACS Rooms |
-| *Consumer*| Be reminded of an appointment | Bookings | Bookings | ACS SMS |
-| *Consumer*| Join the appointment | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat |
+| *Provider* | Join the appointment | Teams | Teams | Azure Communication Services Calling & Chat |
+| *Consumer* | Schedule an appointment | Bookings | Bookings | Azure Communication Services Rooms |
+| *Consumer*| Be reminded of an appointment | Bookings | Bookings | Azure Communication Services SMS |
+| *Consumer*| Join the appointment | Teams or virtual appointments | Azure Communication Services Calling & Chat | Azure Communication Services Calling & Chat |
There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience: - **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs.
The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. T
2. Consumer gets an appointment reminder through SMS and Email. 3. Provider joins the appointment using Microsoft Teams. 4. Consumer uses a link from the Bookings reminders to launch the Contoso consumer app and join the underlying Teams meeting.
-5. The users communicate with each other using voice, video, and text chat in a meeting. Specifically, Teams chat interoperability enables Teams user to send inline images or file attachments directly to ACS users seamlessly.
+5. The users communicate with each other using voice, video, and text chat in a meeting. Specifically, Teams chat interoperability enables Teams users to send inline images or file attachments directly to Azure Communication Services users seamlessly.
## Building a virtual appointment sample In this section, we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop and mobile friendly browser experience, with code that you can use to explore and for production.
communications-gateway Plan And Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md
Previously updated : 10/09/2023 Last updated : 10/27/2023 # Plan and manage costs for Azure Communications Gateway
When you deploy Azure Communications Gateway, you're charged for how you use the
For example, if you have 28,000 users assigned to the deployment each month, you're charged for: * The service availability fee for each hour in the month * 24,001 users in the 1000-25000 tier
-* 3000 users in the 25001-100000 tier
-
-> [!TIP]
-> If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed.
-
-If you choose to deploy the Number Management Portal by selecting the API Bridge option, you'll also be charged for the Number Management Portal. Fees work in the same way as the other meters: a service fee meter and a per-user meter. The number of users charged for the Number Management Portal is always the same as the number of users charged on the other Azure Communications Gateway meters.
+* 3000 users in the 25000+ tier
> [!NOTE] > A Microsoft Teams Direct Routing user is any telephone number configured with Direct Routing on Azure Communications Gateway. Billing for the user starts as soon as you have configured the number.
If you choose to deploy the Number Management Portal by selecting the API Bridge
At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Azure Communications Gateway costs. There's a separate line item for each meter.
+> [!TIP]
+> If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed.
+ If you've arranged any custom work with Microsoft, you might be charged an extra fee for that work. That fee isn't included in these meters. If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Network Security Groups (NSGs) needed to configure virtual networks closely rese
You can lock down a network via NSGs with more restrictive rules than the default NSG rules to control all inbound and outbound traffic for the Container Apps environment at the subscription level.
-In the workload profiles environment, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. When using an external workload profiles environment, inbound traffic to Container Apps that use external ingress routes through the public IP that exists in the [managed resource group](./networking.md#workload-profiles-environment-1) rather than through your subnet. This means that locking down inbound traffic via NSG or Firewall on an external workload profiles environment is not supported. For more information, see [Networking in Azure Container Apps environments](./networking.md#user-defined-routes-udr).
+In the workload profiles environment, user-defined routes (UDRs) and [securing outbound traffic with a firewall](./networking.md#configuring-udr-with-azure-firewall) are supported. When using an external workload profiles environment, inbound traffic to Azure Container Apps is routed through the public IP that exists in the [managed resource group](./networking.md#workload-profiles-environment-1) rather than through your subnet. This means that locking down inbound traffic via NSG or Firewall on an external workload profiles environment isn't supported. For more information, see [Networking in Azure Container Apps environments](./networking.md#user-defined-routes-udr).
In the Consumption only environment, custom user-defined routes (UDRs) and ExpressRoutes aren't supported. ## NSG allow rules
-The following tables describe how to configure a collection of NSG allow rules.
->[!NOTE]
-> The subnet associated with a Container App Environment on the Consumption only environment requires a CIDR prefix of `/23` or larger. On the workload profiles environment (preview), a `/27` or larger is required.
+The following tables describe how to configure a collection of NSG allow rules. The specific rules required depend on your [environment type](./environment.md#types).
### Inbound
-| Protocol | Port | ServiceTag | Description |
-|--|--|--|--|
-| Any | \* | Infrastructure subnet address space | Allow communication between IPs in the infrastructure subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. |
-| Any | \* | AzureLoadBalancer | Allow the Azure infrastructure load balancer to communicate with your environment. |
+# [Workload profiles environment](#tab/workload-profiles-env)
-### Outbound with service tags
+>[!Note]
+> When using workload profiles, inbound NSG rules only apply for traffic going through your virtual network. If your container apps are set to accept traffic from the public internet, incoming traffic will go through the public endpoint instead of the virtual network.
-The following service tags are required when using NSGs on the Consumption only environment:
+| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+|--|--|--|--|--|--|
+| TCP | Your Client IPs | \* | Your container app's subnet<sup>1</sup> | `443`, `30,000-32,676`<sup>2</sup> | Allow your Client IPs to access Azure Container Apps. |
+| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
-| Protocol | Port | ServiceTag | Description
-|--|--|--|--|
-| UDP | `1194` | `AzureCloud.<REGION>` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
-| TCP | `9000` | `AzureCloud.<REGION>` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
-| TCP | `443` | `AzureMonitor` | Allows outbound calls to Azure Monitor. |
+# [Consumption only environment](#tab/consumption-only-env)
-The following service tags are required when using NSGs on the workload profiles environment:
+| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+|--|--|--|--|--|--|
+| TCP | Your Client IPs | \* | Your container app's subnet<sup>1</sup> | `443` | Allow your Client IPs to access Azure Container Apps. |
+| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
+++
+<sup>1</sup> This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`.
+<sup>2</sup> The full range is required when creating your Azure Container Apps, as a port within the range will be dynamically allocated. Once created, the required ports are two immutable, static values, and you can update your NSG rules.
->[!Note]
-> If you are using Azure Container Registry (ACR) with NSGs configured on your virtual network, create a private endpoint on your ACR to allow Container Apps to pull images through the virtual network.
-| Protocol | Port | Service Tag | Description
-|--|--|--|--|
-| TCP | `443` | `MicrosoftContainerRegistry` | This is the service tag for container registry for microsoft containers. |
-| TCP | `443` | `AzureFrontDoor.FirstParty` | This is a dependency of the `MicrosoftContainerRegistry` service tag. |
+### Outbound
-### Outbound with wild card IP rules
+# [Workload profiles environment](#tab/workload-profiles-env)
+
+| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+|--|--|--|--|--|--|
+| TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> |
+| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Allows outbound calls to Azure Monitor. |
+| TCP | Your container app's subnet | \* | `MicrosoftContainerRegistry` | `443` | This is the service tag for Microsoft container registry for system containers. |
+| TCP | Your container app's subnet | \* | `AzureFrontDoor.FirstParty` | `443` | This is a dependency of the `MicrosoftContainerRegistry` service tag. |
+| UDP | Your container app's subnet | \* | \* | `123` | NTP server. |
+| Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. |
+| TCP | Your container app's subnet | \* | `AzureActiveDirectory` | `443` | If you're using managed identity, this is required. |
+
+# [Consumption only environment](#tab/consumption-only-env)
+
+| Protocol | Source | Source Ports | Destination | Destination Ports | Description |
+|--|--|--|--|--|--|
+| TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> |
+| UDP | Your container app's subnet | \* | `AzureCloud.<REGION>` | `1194` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
+| TCP | Your container app's subnet | \* | `AzureCloud.<REGION>` | `9000` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. |
+| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Allows outbound calls to Azure Monitor. |
+| TCP | Your container app's subnet | \* | `AzureCloud` | `443` | Allowing all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. |
+| UDP | Your container app's subnet | \* | \* | `123` | NTP server. |
+| TCP | Your container app's subnet | \* | \* | `5671` | Container Apps control plane. |
+| TCP | Your container app's subnet | \* | \* | `5672` | Container Apps control plane. |
+| Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. |
++
-The following IP rules are required when using NSGs on both the Consumption only environment and the workload profiles environment:
+<sup>1</sup> This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`.
+<sup>2</sup> If you're using Azure Container Registry (ACR) with NSGs configured on your virtual network, create a private endpoint on your ACR to allow Azure Container Apps to pull images through the virtual network. You don't need to add an NSG rule for ACR when configured with private endpoints.
-| Protocol | Port | IP | Description |
-|--|--|--|--|
-| TCP | `443` | \* | Allowing all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. |
-| UDP | `123` | \* | NTP server. |
-| TCP | `5671` | \* | Container Apps control plane. |
-| TCP | `5672` | \* | Container Apps control plane. |
-| Any | \* | Infrastructure subnet address space | Allow communication between IPs in the infrastructure subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. |
#### Considerations - If you're running HTTP servers, you might need to add ports `80` and `443`.-- Adding deny rules for some ports and protocols with lower priority than `65000` may cause service interruption and unexpected behavior.
+- Adding deny rules for some ports and protocols with lower priority than `65000` might cause service interruption and unexpected behavior.
- Don't explicitly deny the Azure DNS address `168.63.128.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function.
container-apps User Defined Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md
Azure creates a default route table for your virtual networks on create. By impl
You can also use a NAT gateway or any other third party appliances instead of Azure Firewall.
-For more information on networking concepts in Container Apps, see [Networking Environment in Azure Container Apps](./networking.md).
+For more information, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewall) in [networking in Azure Container Apps](./networking.md).
## Prerequisites
A subnet called **AzureFirewallSubnet** is required in order to deploy a firewal
| **Virtual network** | Select the integrated virtual network. | | **Public IP address** | Select an existing address or create one by selecting **Add new**. |
-1. Select **Review + create**. After validation finishes, select **Create**. The validation step may take a few minutes to complete.
+1. Select **Review + create**. After validation finishes, select **Create**. The validation step might take a few minutes to complete.
1. Once the deployment completes, select **Go to Resource**.
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently the point in time restore functionality has the following limitations:
* Multi-regions write accounts aren't supported.
-* Currently Azure Synapse Link can be enabled, in preview, in continuous backup database accounts. The opposite situation isn't supported yet, it is not possible to turn on continuous backup in Synapse Link enabled database accounts. And analytical store isn't included in backups. For more information about backup and analytical store, see [analytical store backup](analytical-store-introduction.md#backup).
+* Currently, Azure Synapse Link can be enabled in continuous backup database accounts. However, the opposite isn't supported yet: it isn't possible to turn on continuous backup in Synapse Link enabled database accounts. Analytical store isn't included in backups. For more information about backup and analytical store, see [analytical store backup](analytical-store-introduction.md#backup).
* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account didn't exist.
cosmos-db Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-colocation.md
Previously updated : 05/06/2019 Last updated : 10/01/2023 # Table colocation in Azure Cosmos DB for PostgreSQL
Colocation means storing related information together on the same nodes. Queries
## Data colocation for hash-distributed tables
-In Azure Cosmos DB for PostgreSQL, a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables.
+In Azure Cosmos DB for PostgreSQL, a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables. The concept of hash-distributed tables is also known as [row-based sharding](concepts-sharding-models.md#row-based-sharding). In [schema-based sharding](concepts-sharding-models.md#schema-based-sharding), tables within a distributed schema are always colocated.
:::image type="content" source="media/concepts-colocation/colocation-shards.png" alt-text="Diagram shows shards with the same hash range placed on the same node for events shards and page shards." border="false"::: ## A practical example of colocation
-Consider the following tables that might be part of a multi-tenant web
+Consider the following tables that might be part of a multitenant web
analytics SaaS: ```sql
In some cases, queries and table schemas must be changed to include the tenant I
## Next steps -- See how tenant data is colocated in the [multi-tenant tutorial](tutorial-design-database-multi-tenant.md).
+- See how tenant data is colocated in the [multitenant tutorial](tutorial-design-database-multi-tenant.md).
cosmos-db Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-distributed-data.md
- Title: Distributed data – Azure Cosmos DB for PostgreSQL
-description: Learn about distributed tables, reference tables, local tables, and shards.
----- Previously updated : 05/06/2019--
-# Distributed data in Azure Cosmos DB for PostgreSQL
--
-This article outlines the three table types in Azure Cosmos DB for PostgreSQL.
-It shows how distributed tables are stored as shards, and the way that shards are placed on nodes.
-
-## Table types
-
-There are three types of tables in a cluster, each
-used for different purposes.
-
-### Type 1: Distributed tables
-
-The first type, and most common, is distributed tables. They
-appear to be normal tables to SQL statements, but they're horizontally
-partitioned across worker nodes. What this means is that the rows
-of the table are stored on different nodes, in fragment tables called
-shards.
-
-Azure Cosmos DB for PostgreSQL runs not only SQL but DDL statements throughout a cluster.
-Changing the schema of a distributed table cascades to update
-all the table's shards across workers.
-
-#### Distribution column
-
-Azure Cosmos DB for PostgreSQL uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
-of a table column called the distribution column. The cluster
-administrator must designate this column when distributing a table.
-Making the right choice is important for performance and functionality.
-
-### Type 2: Reference tables
-
-A reference table is a type of distributed table whose entire contents are
-concentrated into a single shard. The shard is replicated on every worker and
-the coordinator. Queries on any worker can access the reference information
-locally, without the network overhead of requesting rows from another node.
-Reference tables have no distribution column because there's no need to
-distinguish separate shards per row.
-
-Reference tables are typically small and are used to store data that's
-relevant to queries running on any worker node. An example is enumerated
-values like order statuses or product categories.
-
-### Type 3: Local tables
-
-When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
-
-A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
-
-## Shards
-
-The previous section described how distributed tables are stored as shards on
-worker nodes. This section discusses more technical details.
-
-The `pg_dist_shard` metadata table on the coordinator contains a
-row for each shard of each distributed table in the system. The row
-matches a shard ID with a range of integers in a hash space
-(shardminvalue, shardmaxvalue).
-
-```sql
-SELECT * from pg_dist_shard;
- logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
----------------+---------+--------------+---------------+---------------
- github_events | 102026 | t | 268435456 | 402653183
- github_events | 102027 | t | 402653184 | 536870911
- github_events | 102028 | t | 536870912 | 671088639
- github_events | 102029 | t | 671088640 | 805306367
- (4 rows)
-```
-
-If the coordinator node wants to determine which shard holds a row of
-`github_events`, it hashes the value of the distribution column in the
-row. Then the node checks which shard\'s range contains the hashed value. The
-ranges are defined so that the image of the hash function is their
-disjoint union.
-
-### Shard placements
-
-Suppose that shard 102027 is associated with the row in question. The row
-is read or written in a table called `github_events_102027` in one of
-the workers. Which worker? That's determined entirely by the metadata
-tables. The mapping of shard to worker is known as the shard placement.
-
-The coordinator node
-rewrites queries into fragments that refer to the specific tables
-like `github_events_102027` and runs those fragments on the
-appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
-
-```sql
-SELECT
- shardid,
- node.nodename,
- node.nodeport
-FROM pg_dist_placement placement
-JOIN pg_dist_node node
- ON placement.groupid = node.groupid
- AND node.noderole = 'primary'::noderole
-WHERE shardid = 102027;
-```
-
-```output
-┌─────────┬───────────┬──────────┐
-│ shardid │ nodename  │ nodeport │
-├─────────┼───────────┼──────────┤
-│  102027 │ localhost │     5433 │
-└─────────┴───────────┴──────────┘
-```
-
-## Next steps
--- Learn how to [choose a distribution column](howto-choose-distribution-column.md) for distributed tables.
cosmos-db Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-nodes.md
Previously updated : 10/26/2022 Last updated : 09/29/2023 # Nodes and tables in Azure Cosmos DB for PostgreSQL
allows the database to scale by adding more nodes to the cluster.
Every cluster has a coordinator node and multiple workers. Applications send their queries to the coordinator node, which relays them to the relevant
-workers and accumulates their results. Applications are not able to connect
-directly to workers.
+workers and accumulates their results.
-Azure Cosmos DB for PostgreSQL allows the database administrator to *distribute* tables,
-storing different rows on different worker nodes. Distributed tables are the
-key to Azure Cosmos DB for PostgreSQL performance. Failing to distribute tables leaves them entirely
-on the coordinator node and cannot take advantage of cross-machine parallelism.
+Azure Cosmos DB for PostgreSQL allows the database administrator to *distribute* tables and schemas,
+storing different rows on different worker nodes. Distributed tables and schemas are the
+key to Azure Cosmos DB for PostgreSQL performance. Tables and schemas that aren't distributed stay entirely
+on the coordinator node and can't take advantage of cross-machine parallelism.
For each query on distributed tables, the coordinator either routes it to a single worker node, or parallelizes it across several depending on whether the
-required data lives on a single node or multiple. The coordinator decides what
+required data lives on a single node or multiple. With [schema-based sharding](concepts-sharding-models.md#schema-based-sharding), the coordinator routes the queries directly to the node that hosts the schema. In both schema-based sharding and [row-based sharding](concepts-sharding-models.md#row-based-sharding), the coordinator decides what
to do by consulting metadata tables. These tables track the DNS names and health of worker nodes, and the distribution of data across nodes. ## Table types
-There are three types of tables in a cluster, each
+There are five types of tables in a cluster, each
stored differently on nodes and used for different purposes. ### Type 1: Distributed tables
values like order statuses or product categories.
When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
-A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
+A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a `users` table for application sign-in and authentication.
+
+### Type 4: Local managed tables
+
+Azure Cosmos DB for PostgreSQL might automatically add local tables to metadata if a foreign key reference exists between a local table and a reference table. Additionally, local managed tables can be created manually by executing the [citus_add_local_table_to_metadata](reference-functions.md#citus_add_local_table_to_metadata) function on regular local tables. Tables present in metadata are considered managed tables and can be queried from any node; Citus knows to route to the coordinator to obtain data from the local managed table. Such tables are displayed as local in the [citus_tables](reference-metadata.md#distributed-tables-view) view.
+
+### Type 5: Schema tables
+
+With [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) introduced in Citus 12.0, distributed schemas are automatically associated with individual colocation groups. Tables created in those schemas are automatically converted to colocated distributed tables without a shard key. Such tables are considered schema tables and are displayed as schema in [citus_tables](reference-metadata.md#distributed-tables-view) view.
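
As a quick check (a sketch that assumes a cluster where distributed schemas already exist), you can list the tables Citus manages as schema tables:

```postgresql
-- Tables that belong to distributed schemas are reported with the
-- 'schema' table type in the citus_tables view.
SELECT table_name, colocation_id, table_size
FROM citus_tables
WHERE citus_table_type = 'schema';
```
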
## Shards
cosmos-db Concepts Sharding Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-sharding-models.md
+
+ Title: Sharding models - Azure Cosmos DB for PostgreSQL
+description: What is sharding, and what sharding models are available in Azure Cosmos DB for PostgreSQL
+++++ Last updated : 09/08/2023++
+# Sharding models
++
+Sharding is a technique used in database systems and distributed computing to horizontally partition data across multiple servers or nodes. It involves breaking up a large database or dataset into smaller, more manageable parts called shards. A shard contains a subset of the data, and together the shards form the complete dataset.
+
+Azure Cosmos DB for PostgreSQL offers two types of data sharding, namely row-based and schema-based. Each option comes with its own [Sharding tradeoffs](#sharding-tradeoffs), allowing you to choose the approach that best aligns with your application's requirements.
+
+## Row-based sharding
+
+The traditional way in which Azure Cosmos DB for PostgreSQL shards tables is the single database, shared schema model, also known as row-based sharding, in which tenants coexist as rows within the same table. The tenant is determined by defining a [distribution column](./concepts-nodes.md#distribution-column), which allows splitting up a table horizontally.
+
+Row-based sharding is the most hardware-efficient way of sharding. Tenants are densely packed and distributed among the nodes in the cluster. This approach, however, requires making sure that all tables in the schema have the distribution column and that all queries in the application filter by it. Row-based sharding shines in IoT workloads and when you want to get the most out of your hardware.
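
A minimal sketch of row-based sharding follows (the table and tenant values are illustrative, not from the article): every table carries the tenant ID as its distribution column, and application queries filter on it.

```postgresql
-- Hypothetical orders table; tenant_id is the distribution column.
CREATE TABLE orders (
    tenant_id bigint NOT NULL,
    order_id  bigint NOT NULL,
    status    text,
    PRIMARY KEY (tenant_id, order_id)
);
SELECT create_distributed_table('orders', 'tenant_id');

-- Queries must filter by the distribution column so they can be routed
-- to the single worker node that holds that tenant's shard.
SELECT count(*) FROM orders
WHERE tenant_id = 42 AND status = 'open';
```
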
+
+Benefits:
+
+* Best performance
+* Best tenant density per node
+
+Drawbacks:
+
+* Requires schema modifications
+* Requires application query modifications
+* All tenants must share the same schema
+
+## Schema-based sharding
+
+Available with Citus 12.0 in Azure Cosmos DB for PostgreSQL, schema-based sharding is the shared database, separate schema model, where the schema becomes the logical shard within the database. Multitenant apps can use a schema per tenant to easily shard along the tenant dimension. Query changes aren't required, and the application only needs a small modification to set the proper search_path when switching tenants. Schema-based sharding is an ideal solution for microservices, and for ISVs deploying applications that can't undergo the changes required to onboard row-based sharding.
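
A minimal sketch of schema-based sharding (schema names are illustrative): each tenant gets its own schema, and the application switches tenants by setting `search_path`.

```postgresql
-- Make newly created schemas distributed for this session.
SET citus.enable_schema_based_sharding TO ON;

CREATE SCHEMA tenant_a;
CREATE SCHEMA tenant_b;

-- The application switches tenants by changing the search path;
-- no tenant ID column or query changes are needed.
SET search_path TO tenant_a;
CREATE TABLE orders (order_id bigint PRIMARY KEY, status text);
SELECT count(*) FROM orders;   -- reads tenant_a.orders
```
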
+
+Benefits:
+
+* Tenants can have heterogeneous schemas
+* No schema modifications required
+* No application query modifications required
+* Schema-based sharding SQL compatibility is better compared to row-based sharding
+
+Drawbacks:
+
+* Fewer tenants per node compared to row-based sharding
+
+## Sharding tradeoffs
+
+<br />
+
+|| Schema-based sharding | Row-based sharding|
+|---|---|---|
+|Multi-tenancy model|Separate schema per tenant|Shared tables with tenant ID columns|
+|Citus version|12.0+|All versions|
+|Extra steps compared to vanilla PostgreSQL|None, only a config change|Use create_distributed_table on each table to distribute & colocate tables by tenant ID|
+|Number of tenants|1-10k|1-1M+|
+|Data modeling requirement|No foreign keys across distributed schemas|Need to include a tenant ID column (a distribution column, also known as a sharding key) in each table, and in primary keys, foreign keys|
+|SQL requirement for single node queries|Use a single distributed schema per query|Joins and WHERE clauses should include tenant_id column|
+|Parallel cross-tenant queries|No|Yes|
+|Custom table definitions per tenant|Yes|No|
+|Access control|Schema permissions|Schema permissions|
+|Data sharing across tenants|Yes, using reference tables (in a separate schema)|Yes, using reference tables|
+|Tenant to shard isolation|Every tenant has its own shard group by definition|Can give specific tenant IDs their own shard group via isolate_tenant_to_new_shard|
cosmos-db Concepts Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-upgrade.md
Previously updated : 05/16/2023 Last updated : 10/01/2023 # Cluster upgrades in Azure Cosmos DB for PostgreSQL
Last updated 05/16/2023
The Azure Cosmos DB for PostgreSQL managed service can handle upgrades of both the PostgreSQL server, and the Citus extension. All clusters are created with [the latest Citus version](./reference-extensions.md#citus-extension) available for the major PostgreSQL version you select during cluster provisioning. When you select a PostgreSQL version such as PostgreSQL 15 for in-place cluster upgrade, the latest Citus version supported for selected PostgreSQL version is going to be installed.
-If you need to upgrade the Citus version only, you can do so by using an in-place upgrade. For instance, you may want to upgrade Citus 11.0 to Citus 11.3 on your PostgreSQL 14 cluster without upgrading Postgres version.
+If you need to upgrade the Citus version only, you can do so by using an in-place upgrade. For instance, you might want to upgrade Citus 11.0 to Citus 11.3 on your PostgreSQL 14 cluster without upgrading Postgres version.
## Upgrade precautions
Also, upgrading a major version of Citus can introduce changes in behavior.
It's best to familiarize yourself with new product features and changes to avoid surprises.
+Noteworthy Citus 12 changes:
+* The default rebalance strategy changed from `by_shard_count` to `by_disk_size`.
+* Support for PostgreSQL 13 has been dropped as of this version.
+ Noteworthy Citus 11 changes:
-* Table shards may disappear in your SQL client. Their visibility
- is now controlled by
+* Table shards might disappear in your SQL client. You can control their visibility
+ using
[citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text). * There are several [deprecated features](https://www.citusdata.com/updates/v11-0/#deprecated-features).
cosmos-db Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-grow.md
queries.
> [!NOTE] > To take advantage of newly added nodes you must [rebalance distributed table > shards](howto-scale-rebalance.md), which means moving some
-> [shards](concepts-distributed-data.md#shards) from existing nodes
+> [shards](concepts-nodes.md#shards) from existing nodes
> to the new ones. Rebalancing can work in the background, and requires no > downtime.
cosmos-db Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-rebalance.md
Last updated 01/30/2023
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] To take advantage of newly added nodes, rebalance distributed table
-[shards](concepts-distributed-data.md#shards). Rebalancing moves shards from existing nodes to the new ones. Azure Cosmos DB for PostgreSQL offers
+[shards](concepts-nodes.md#shards). Rebalancing moves shards from existing nodes to the new ones. Azure Cosmos DB for PostgreSQL offers
zero-downtime rebalancing, meaning queries continue without interruption during shard rebalancing.
cosmos-db Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-useful-diagnostic-queries.md
Previously updated : 01/30/2023 Last updated : 10/01/2023 # Useful diagnostic queries in Azure Cosmos DB for PostgreSQL
Last updated 01/30/2023
## Finding which node contains data for a specific tenant
-In the multi-tenant use case, we can determine which worker node contains the
+In the multitenant use case, we can determine which worker node contains the
rows for a specific tenant. Azure Cosmos DB for PostgreSQL groups the rows of distributed tables into shards, and places each shard on a worker node in the cluster.
The output contains the host and port of the worker database.
└─────────┴────────────┴─────────────┴───────────┴──────────┴─────────────┘ ```
+## Finding which node hosts a distributed schema
+
+Distributed schemas are automatically associated with individual colocation groups such that the tables created in those schemas are converted to colocated distributed tables without a shard key. You can find where a distributed schema resides by joining `citus_shards` with `citus_schemas`:
+
+```postgresql
+select schema_name, nodename, nodeport
+ from citus_shards
+ join citus_schemas cs
+ on cs.colocation_id = citus_shards.colocation_id
+ group by 1,2,3;
+```
+
+```
+ schema_name | nodename  | nodeport
+-------------+-----------+----------
+ a           | localhost |     9701
+ b           | localhost |     9702
+ with_data   | localhost |     9702
+```
+
+You can also query `citus_shards` directly, filtering down to the schema table type, to get a detailed listing of all tables.
+
+```postgresql
+select * from citus_shards where citus_table_type = 'schema';
+```
+
+```
+ table_name | shardid | shard_name | citus_table_type | colocation_id | nodename | nodeport | shard_size | schema_name | colocation_id | schema_size | schema_owner
+----------------+---------+-----------------------+------------------+---------------+-----------+----------+------------+-------------+---------------+-------------+--------------
+ a.cities | 102080 | a.cities_102080 | schema | 4 | localhost | 9701 | 8192 | a | 4 | 128 kB | citus
+ a.map_tags | 102145 | a.map_tags_102145 | schema | 4 | localhost | 9701 | 32768 | a | 4 | 128 kB | citus
+ a.measurement | 102047 | a.measurement_102047 | schema | 4 | localhost | 9701 | 0 | a | 4 | 128 kB | citus
+ a.my_table | 102179 | a.my_table_102179 | schema | 4 | localhost | 9701 | 16384 | a | 4 | 128 kB | citus
+ a.people | 102013 | a.people_102013 | schema | 4 | localhost | 9701 | 32768 | a | 4 | 128 kB | citus
+ a.test | 102008 | a.test_102008 | schema | 4 | localhost | 9701 | 8192 | a | 4 | 128 kB | citus
+ a.widgets | 102146 | a.widgets_102146 | schema | 4 | localhost | 9701 | 32768 | a | 4 | 128 kB | citus
+ b.test | 102009 | b.test_102009 | schema | 5 | localhost | 9702 | 8192 | b | 5 | 32 kB | citus
+ b.test_col | 102012 | b.test_col_102012 | schema | 5 | localhost | 9702 | 24576 | b | 5 | 32 kB | citus
+ with_data.test | 102180 | with_data.test_102180 | schema | 11 | localhost | 9702 | 647168 | with_data | 11 | 632 kB | citus
+```
+ ## Finding the distribution column for a table Each distributed table has a "distribution column." (For more information, see [Distributed Data Modeling](howto-choose-distribution-column.md).) It can be important to know which column it is. For instance, when joining or filtering
-tables, you may see error messages with hints like, "add a filter to the
+tables, you might see error messages with hints like, "add a filter to the
distribution column." The `pg_dist_*` tables on the coordinator node contain diverse metadata about
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/introduction.md
Previously updated : 02/28/2023 Last updated : 10/01/2023 recommendations: false
reviewed the following articles:
> - Connect and query with your [app stack](quickstart-app-stacks-overview.yml). > - See how the [Azure Cosmos DB for PostgreSQL API](reference-overview.md) extends PostgreSQL, and try [useful diagnostic queries](howto-useful-diagnostic-queries.md). > - Pick the best [cluster size](howto-scale-initial.md) for your workload.
+> - Learn how to use Azure Cosmos DB for PostgreSQL as the [storage backend for multiple microservices](tutorial-design-database-microservices.md).
> - [Monitor](howto-monitoring.md) cluster performance. > - Ingest data efficiently with [Azure Stream Analytics](howto-ingest-azure-stream-analytics.md) > and [Azure Data Factory](howto-ingest-azure-data-factory.md).
cosmos-db Quickstart Build Scalable Apps Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-concepts.md
recommendations: false Previously updated : 01/30/2023 Last updated : 10/01/2023 # Fundamental concepts for scaling in Azure Cosmos DB for PostgreSQL
quick overview of the terms and concepts involved.
## Architectural overview
-Azure Cosmos DB for PostgreSQL gives you the power to distribute tables across multiple
+Azure Cosmos DB for PostgreSQL gives you the power to distribute tables and/or schemas across multiple
machines in a cluster and transparently query them the same way you query plain PostgreSQL:
In the Azure Cosmos DB for PostgreSQL architecture, there are multiple kinds of
* The **coordinator** node stores distributed table metadata and is responsible for distributed planning.
-* By contrast, the **worker** nodes store the actual data and do the computation.
+* By contrast, the **worker** nodes store the actual data and metadata, and do the computation.
* Both the coordinator and workers are plain PostgreSQL databases, with the `citus` extension loaded.
run a command called `create_distributed_table()`. Once you run this
command, Azure Cosmos DB for PostgreSQL transparently creates shards for the table across worker nodes. In the diagram, shards are represented as blue boxes.
+To distribute a normal PostgreSQL schema, you run the `citus_schema_distribute()` command. Once you run this command, Azure Cosmos DB for PostgreSQL transparently turns the tables in that schema into single-shard colocated tables that can be moved as a unit between nodes of the cluster.
+ > [!NOTE] > > On a cluster with no worker nodes, shards of distributed tables are on the coordinator node.
Colocation helps optimize JOINs across these tables. If you join the two tables
on `site_id`, Azure Cosmos DB for PostgreSQL can perform the join locally on worker nodes without shuffling data between nodes.
+Tables within a distributed schema are always colocated with each other.
+ ## Next steps > [!div class="nextstepaction"]
cosmos-db Quickstart Build Scalable Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-overview.md
recommendations: false Previously updated : 01/30/2023 Last updated : 10/01/2023 # Build scalable apps in Azure Cosmos DB for PostgreSQL
Last updated 01/30/2023
There are three steps involved in building scalable apps with Azure Cosmos DB for PostgreSQL: 1. Classify your application workload. There are use cases where Azure Cosmos DB for PostgreSQL
- shines: multi-tenant SaaS, real-time operational analytics, and high
+ shines: multitenant SaaS, microservices, real-time operational analytics, and high
throughput OLTP. Determine whether your app falls into one of these categories.
-2. Based on the workload, identify the optimal shard key for the distributed
+2. Based on the workload, use [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) or identify the optimal shard key for the distributed
tables. Classify your tables as reference, distributed, or local.
-3. Update the database schema and application queries to make them go fast
+3. When using [row-based sharding](concepts-sharding-models.md#row-based-sharding), update the database schema and application queries to make them go fast
across nodes. **Next steps**
cosmos-db Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-functions.md
Previously updated : 11/01/2022 Last updated : 09/29/2023 # Azure Cosmos DB for PostgreSQL functions
distributed functionality to Azure Cosmos DB for PostgreSQL.
> [!NOTE] > > clusters running older versions of the Citus Engine may not
-> offer all the functions listed below.
+> offer all the functions listed on this page.
## Table and Shard DDL
+### citus\_schema\_distribute
+
+Converts existing regular schemas into distributed schemas. Distributed schemas are automatically associated with individual colocation groups. Tables created in those schemas are converted to colocated distributed tables without a shard key. The process of distributing the schema automatically assigns it to, and moves it onto, an existing node in the cluster.
+
+#### Arguments
+
+**schemaname:** Name of the schema that needs to be distributed.
+
+#### Return value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT citus_schema_distribute('tenant_a');
+SELECT citus_schema_distribute('tenant_b');
+SELECT citus_schema_distribute('tenant_c');
+```
+
+For more examples, see how-to [design for microservices](tutorial-design-database-microservices.md).
+
+### citus\_schema\_undistribute
+
+Converts an existing distributed schema back into a regular schema. The process results in the tables and data being moved from the current node back to the coordinator node in the cluster.
+
+#### Arguments
+
+**schemaname:** Name of the schema that needs to be undistributed.
+
+#### Return value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT citus_schema_undistribute('tenant_a');
+SELECT citus_schema_undistribute('tenant_b');
+SELECT citus_schema_undistribute('tenant_c');
+```
+
+For more examples, see how-to [design for microservices](tutorial-design-database-microservices.md).
+ ### create\_distributed\_table The create\_distributed\_table() function is used to define a distributed table
or colocation group, use the [alter_distributed_table](#alter_distributed_table)
Possible values for `shard_count` are between 1 and 64000. For guidance on choosing the optimal value, see [Shard Count](howto-shard-count.md).
-#### Return Value
+#### Return value
N/A
distribution.
**table_name:** Name of the distributed table whose local counterpart on the coordinator node should be truncated.
-#### Return Value
+#### Return value
N/A
worker node.
**table\_name:** Name of the small dimension or reference table that needs to be distributed.
-#### Return Value
+#### Return value
N/A
defined as a reference table
SELECT create_reference_table('nation'); ```
+### citus\_add\_local\_table\_to\_metadata
+
+Adds a local Postgres table into Citus metadata. A major use case for this function is to make local tables on the coordinator accessible from any node in the cluster. The data associated with the local table stays on the coordinator; only its schema and metadata are sent to the workers.
+
+Adding local tables to the metadata comes at a slight cost. When you add the table, Citus must track it in the [partition table](reference-metadata.md#partition-table). Local tables that are added to metadata inherit the same limitations as reference tables.
+
+When you undistribute the table, Citus removes the resulting local tables from metadata, which eliminates such limitations on those tables.
+
+#### Arguments
+
+**table\_name:** Name of the table on the coordinator to be added to Citus metadata.
+
+**cascade\_via\_foreign\_keys**: (Optional) When this argument is set to "true", citus_add_local_table_to_metadata automatically adds other tables that are in a foreign key relationship with the given table into metadata. Use caution with this parameter, because it can potentially affect many tables.
+
+#### Return value
+
+N/A
+
+#### Example
+
+This example informs the database that the nation table should be defined as a coordinator-local table, accessible from any node:
+
+```postgresql
+SELECT citus_add_local_table_to_metadata('nation');
+```
+ ### alter_distributed_table The alter_distributed_table() function can be used to change the distribution
tables that were previously colocated with the table, and the colocation will
be preserved. If it is "false", the current colocation of this table will be broken.
-#### Return Value
+#### Return value
N/A
This function doesn't move any data around physically.
If you want to break the colocation of a table, you should specify `colocate_with => 'none'`.
-#### Return Value
+#### Return value
N/A
undistribute_table also undistributes all tables that are related to table_name
through foreign keys. Use caution with this parameter, because it can potentially affect many tables.
-#### Return Value
+#### Return value
N/A
a distributed table (or, more generally, colocation group), be sure to name
that table using the `colocate_with` parameter. Then each invocation of the function will run on the worker node containing relevant shards.
-#### Return Value
+#### Return value
N/A
overridden with these GUCs:
**table_name:** Name of the columnar table. **chunk_row_count:** (Optional) The maximum number of rows per chunk for
-newly inserted data. Existing chunks of data won't be changed and may have
+newly inserted data. Existing chunks of data won't be changed and might have
more rows than this maximum value. The default value is 10000. **stripe_row_count:** (Optional) The maximum number of rows per stripe for
-newly inserted data. Existing stripes of data won't be changed and may have
+newly inserted data. Existing stripes of data won't be changed and might have
more rows than this maximum value. The default value is 150000. **compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type
The alter_table_set_access_method() function changes access method of a table
**access_method:** Name of the new access method.
-#### Return Value
+#### Return value
N/A
will contain the point end_at, and no later partitions will be created.
**start_from:** (timestamptz, optional) pick the first partition so that it contains the point start_from. The default value is `now()`.
-#### Return Value
+#### Return value
True if it needed to create new partitions, false if they all existed already.
be partitioned on one column, of type date, timestamp, or timestamptz.
**older_than:** (timestamptz) drop partitions whose upper range is less than or equal to older_than.
-#### Return Value
+#### Return value
N/A
or equal to older_than.
**new_access_method:** (name) either 'heap' for row-based storage, or 'columnar' for columnar storage.
-#### Return Value
+#### Return value
N/A
doesn't work for the append distribution.
**distribution\_value:** The value of the distribution column.
-#### Return Value
+#### Return value
The shard ID Azure Cosmos DB for PostgreSQL associates with the distribution column value for the given table.
column](howto-choose-distribution-column.md).
**column\_var\_text:** The value of `partkey` in the `pg_dist_partition` table.
-#### Return Value
+#### Return value
The name of `table_name`'s distribution column.
visibility map and free space map for the shards.
**logicalrelid:** the name of a distributed table.
-#### Return Value
+#### Return value
Size in bytes as a bigint.
excluding indexes (but including TOAST, free space map, and visibility map).
**logicalrelid:** the name of a distributed table.
-#### Return Value
+#### Return value
Size in bytes as a bigint.
distributed table, including all indexes and TOAST data.
**logicalrelid:** the name of a distributed table.
-#### Return Value
+#### Return value
Size in bytes as a bigint.
all stats, call both functions.
N/A
-#### Return Value
+#### Return value
None
host names and port numbers.
N/A
-#### Return Value
+#### Return value
List of tuples where each tuple contains the following information:
placement is present (\"target\" node).
**target\_node\_port:** The port on the target worker node on which the database server is listening.
-#### Return Value
+#### Return value
N/A
command. The possible values are:
> - `block_writes`: Use COPY (blocking writes) for tables lacking > primary key or replica identity.
-#### Return Value
+#### Return value
N/A
distributing to equalize the cost across workers is the same as equalizing the
number of shards on each. The constant cost strategy is called \"by\_shard\_count\".
-The default strategy is appropriate under these circumstances:
+The "by\_shard\_count" strategy is appropriate under these circumstances:
* The shards are roughly the same size
* The shards get roughly the same amount of traffic
* Worker nodes are all the same size/type
* Shards haven't been pinned to particular workers
-If any of these assumptions don't hold, then the default rebalancing
-can result in a bad plan. In this case you may customize the strategy,
-using the `rebalance_strategy` parameter.
+If any of these assumptions don't hold, then rebalancing "by_shard_count" can result in a bad plan.
+
+The default rebalancing strategy is "by_disk_size". You can always customize the strategy using the `rebalance_strategy` parameter.
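
For example (a sketch using the two built-in strategies mentioned in this section), you can run the rebalancer with the default strategy or name one explicitly:

```postgresql
-- Rebalance with the default strategy (by_disk_size as of Citus 12).
SELECT rebalance_table_shards();

-- Or pick a strategy explicitly, for example the shard-count based one.
SELECT rebalance_table_shards(rebalance_strategy => 'by_shard_count');
```
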
It's advisable to call [get_rebalance_table_shards_plan](#get_rebalance_table_shards_plan) before
other shards.
If this argument is omitted, the function chooses the default strategy, as indicated in the table.
-#### Return Value
+#### Return value
N/A
The same arguments as rebalance\_table\_shards: relation, threshold,
max\_shard\_moves, excluded\_shard\_list, and drain\_only. See documentation of that function for the arguments' meaning.
-#### Return Value
+#### Return value
Tuples containing these columns:
executed by `rebalance_table_shards()`.
N/A
-#### Return Value
+#### Return value
Tuples containing these columns:
precisely the cumulative shard cost should be balanced between nodes
minimum value allowed for the threshold argument of rebalance\_table\_shards(). Its default value is 0
-#### Return Value
+#### Return value
N/A
when rebalancing shards.
**name:** the name of the strategy in pg\_dist\_rebalance\_strategy
-#### Return Value
+#### Return value
N/A
SELECT * from citus_remote_connection_stats();
### isolate\_tenant\_to\_new\_shard This function creates a new shard to hold rows with a specific single value in
-the distribution column. It's especially handy for the multi-tenant
+the distribution column. It's especially handy for the multitenant
use case, where a large tenant can be placed alone on its own shard and ultimately its own physical node.
assigned to the new shard.
from all tables in the current table's [colocation group](concepts-colocation.md).
-#### Return Value
+#### Return value
**shard\_id:** The function returns the unique ID assigned to the newly created shard.
cosmos-db Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-metadata.md
Previously updated : 02/18/2022 Last updated : 10/01/2023 # Azure Cosmos DB for PostgreSQL system tables and views
distribution_argument_index |
colocationid | ```
+### Distributed schemas view
+
+Citus 12.0 introduced the concept of [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) and, with it, the `citus_schemas` view, which shows which schemas have been distributed in the system. The view only lists distributed schemas; local schemas aren't displayed.
+
+| Name | Type | Description |
+|--|--|--|
+| schema_name | regnamespace | Name of the distributed schema |
+| colocation_id | integer | Colocation ID of the distributed schema |
+| schema_size | text | Human readable size summary of all objects within the schema |
+| schema_owner | name | Role that owns the schema |
+
+Here's an example:
+
+```
+ schema_name | colocation_id | schema_size | schema_owner
+-------------+---------------+-------------+--------------
+ userservice |             1 | 0 bytes     | userservice
+ timeservice |             2 | 0 bytes     | timeservice
+ pingservice |             3 | 632 kB      | pingservice
+```
+ ### Distributed tables view The `citus_tables` view shows a summary of all tables managed by Azure Cosmos
the same distribution column values will be placed on the same worker nodes.
Colocation enables join optimizations, certain distributed rollups, and foreign key support. Shard colocation is inferred when the shard counts, replication factors, and partition column types all match between two tables; however, a
-custom colocation group may be specified when creating a distributed table, if
+custom colocation group can be specified when creating a distributed table, if
so desired. | Name | Type | Description |
can use to determine where to move shards.
| default_strategy | boolean | Whether rebalance_table_shards should choose this strategy by default. Use citus_set_default_rebalance_strategy to update this column | | shard_cost_function | regproc | Identifier for a cost function, which must take a shardid as bigint, and return its notion of a cost, as type real | | node_capacity_function | regproc | Identifier for a capacity function, which must take a nodeid as int, and return its notion of node capacity as type real |
-| shard_allowed_on_node_function | regproc | Identifier for a function that given shardid bigint, and nodeidarg int, returns boolean for whether Azure Cosmos DB for PostgreSQL may store the shard on the node |
+| shard_allowed_on_node_function | regproc | Identifier for a function that given shardid bigint, and nodeidarg int, returns boolean for whether Azure Cosmos DB for PostgreSQL can store the shard on the node |
| default_threshold | float4 | Threshold for deeming a node too full or too empty, which determines when the rebalance_table_shards should try to move shards | | minimum_threshold | float4 | A safeguard to prevent the threshold argument of rebalance_table_shards() from being set too low |
SELECT * FROM pg_dist_rebalance_strategy;
``` -[ RECORD 1 ]-+-- Name | by_shard_count
-default_strategy | true
+default_strategy | false
shard_cost_function | citus_shard_cost_1 node_capacity_function | citus_node_capacity_1 shard_allowed_on_node_function | citus_shard_allowed_on_node_true
default_threshold | 0
minimum_threshold | 0 -[ RECORD 2 ]-+-- Name | by_disk_size
-default_strategy | false
+default_strategy | true
shard_cost_function | citus_shard_cost_by_disk_size node_capacity_function | citus_node_capacity_1 shard_allowed_on_node_function | citus_shard_allowed_on_node_true
default_threshold | 0.1
minimum_threshold | 0.01 ```
-The default strategy, `by_shard_count`, assigns every shard the same
-cost. Its effect is to equalize the shard count across nodes. The other
-predefined strategy, `by_disk_size`, assigns a cost to each shard
-matching its disk size in bytes plus that of the shards that are
-colocated with it. The disk size is calculated using
-`pg_total_relation_size`, so it includes indices. This strategy attempts
-to achieve the same disk space on every node. Note the threshold of 0.1--it prevents unnecessary shard movement caused by insignificant
-differences in disk space.
+The `by_shard_count` strategy assigns every shard the same cost. Its effect is to equalize the shard count across nodes. The default strategy, `by_disk_size`, assigns a cost to each shard matching its disk size in bytes plus that of the shards that are colocated with it. The disk size is calculated using `pg_total_relation_size`, so it includes indices. This strategy attempts to achieve the same disk space on every node. Note the threshold of `0.1`; it prevents unnecessary shard movement caused by insignificant differences in disk space.
#### Creating custom rebalancer strategies
with) the
[pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) view in PostgreSQL, which tracks statistics about query speed.
-This view can trace queries to originating tenants in a multi-tenant
+This view can trace queries to originating tenants in a multitenant
application, which helps for deciding when to do tenant isolation. | Name | Type | Description |
cosmos-db Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-overview.md
Previously updated : 08/02/2022 Last updated : 09/29/2023 # Azure Cosmos DB for PostgreSQL distributed SQL API
configuration options for:
||-| | [alter_distributed_table](reference-functions.md#alter_distributed_table) | Change the distribution column, shard count or colocation properties of a distributed table | | [citus_copy_shard_placement](reference-functions.md#master_copy_shard_placement) | Repair an inactive shard placement using data from a healthy placement |
+| [citus_schema_distribute](reference-functions.md#citus_schema_distribute) | Turn a PostgreSQL schema into a distributed schema |
+| [citus_schema_undistribute](reference-functions.md#citus_schema_undistribute) | Undo the action of citus_schema_distribute |
| [create_distributed_table](reference-functions.md#create_distributed_table) | Turn a PostgreSQL table into a distributed (sharded) table | | [create_reference_table](reference-functions.md#create_reference_table) | Maintain full copies of a table in sync across all nodes |
+| [citus_add_local_table_to_metadata](reference-functions.md#citus_add_local_table_to_metadata) | Add a local table to metadata to enable querying it from any node |
| [isolate_tenant_to_new_shard](reference-functions.md#isolate_tenant_to_new_shard) | Create a new shard to hold rows with a specific single value in the distribution column | | [truncate_local_data_after_distributing_table](reference-functions.md#truncate_local_data_after_distributing_table) | Truncate all local rows after distributing a table | | [undistribute_table](reference-functions.md#undistribute_table) | Undo the action of create_distributed_table or create_reference_table |
cosmos-db Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-parameters.md
Previously updated : 06/06/2023 Last updated : 10/01/2023 # Azure Cosmos DB for PostgreSQL server parameters
all worker nodes, or just for the coordinator node.
> [!NOTE] >
-> Clusters running older versions of [the Citus extension](./reference-versions.md#citus-and-other-extension-versions) may not
+> Clusters running older versions of [the Citus extension](./reference-versions.md#citus-and-other-extension-versions) might not
> offer all the parameters listed below. ### General configuration
Azure Cosmos DB for PostgreSQL thus validates that the version of the code and that of the
extension match, and errors out if they don't. This value defaults to true, and is effective on the coordinator. In
-rare cases, complex upgrade processes may require setting this parameter
+rare cases, complex upgrade processes might require setting this parameter
to false, thus disabling the check. #### citus.log\_distributed\_deadlock\_detection (boolean)
Allow new [local tables](concepts-nodes.md#type-3-local-tables) to be accessed
by queries on worker nodes. Adds all newly created tables to Citus metadata when enabled. The default value is 'false'.
+#### citus.rebalancer\_by\_disk\_size\_base\_cost (integer)
+
+When using the by_disk_size rebalance strategy, each shard group gets this cost in bytes added to its actual disk size. This value is used to avoid creating a bad balance when there's little data in some of the shards. The assumption is that even empty shards have some cost, because of parallelism and because empty shard groups are likely to grow in the future.
+
+The default value is `100MB`.
+ ### Query Statistics #### citus.stat\_statements\_purge\_interval (integer)
runtime.
#### citus.stat_statements_max (integer) The maximum number of rows to store in `citus_stat_statements`. Defaults to
-50000, and may be changed to any value in the range 1000 - 10000000. Each row requires 140 bytes of storage, so setting `stat_statements_max` to its
+50000, and can be changed to any value in the range 1000 - 10000000. Each row requires 140 bytes of storage, so setting `stat_statements_max` to its
maximum value of 10M would consume 1.4 GB of memory. Changing this GUC doesn't take effect until PostgreSQL is restarted.
Changing this GUC doesn't take effect until PostgreSQL is restarted.
#### citus.stat_statements_track (enum) Recording statistics for `citus_stat_statements` requires extra CPU resources.
-When the database is experiencing load, the administrator may wish to disable
-statement tracking. The `citus.stat_statements_track` GUC can turn tracking on
-and off.
+When the database is experiencing load, the administrator can disable
+statement tracking by setting `citus.stat_statements_track` to `none`.
* **all:** (default) Track all statements. * **none:** Disable tracking.
+#### citus.stat\_tenants\_untracked\_sample\_rate
+
+Sampling rate for new tenants in `citus_stat_tenants`. The rate can range between `0.0` and `1.0`. The default is `1.0`, meaning 100% of untracked tenant queries are sampled. Setting it to a lower value means that already tracked tenants have 100% of queries sampled, but tenants that are currently untracked are sampled only at the provided rate.
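
For example (a sketch, assuming the parameter can be changed at the session level like other Citus settings), you could sample only 10% of queries from tenants that aren't tracked yet:

```postgresql
-- Already tracked tenants keep 100% sampling; untracked tenants are
-- sampled at 10%.
SET citus.stat_tenants_untracked_sample_rate = 0.1;
```
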
+ ### Data Loading #### citus.multi\_shard\_commit\_protocol (enum)
case by choosing between the following commit protocols:
should be increased on all the workers, typically to the same value as max\_connections. - **1pc:** The transactions in which COPY is performed on the shard
- placements is committed in a single round. Data may be lost if a
+ placements are committed in a single round. Data might be lost if a
commit fails after COPY succeeds on all placements (rare). #### citus.shard\_replication\_factor (integer)
case by choosing between the following commit protocols:
Sets the replication factor for shards that is, the number of nodes on which shards are placed, and defaults to 1. This parameter can be set at run-time and is effective on the coordinator. The ideal value for this parameter depends
-on the size of the cluster and rate of node failure. For example, you may want
-to increase this replication factor if you run large clusters and observe node
+on the size of the cluster and rate of node failure. For example, you can increase this replication factor if you run large clusters and observe node
failures on a more frequent basis. ### Planner Configuration
SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10
#### citus.limit\_clause\_row\_fetch\_count (integer) Sets the number of rows to fetch per task for limit clause optimization.
-In some cases, select queries with limit clauses may need to fetch all
+In some cases, select queries with limit clauses might need to fetch all
rows from each task to generate results. In those cases, and where an approximation would produce meaningful results, this configuration value sets the number of rows to fetch from each shard. Limit approximations
and is effective on the coordinator.
> This GUC is applicable only when > [shard_replication_factor](reference-parameters.md#citusshard_replication_factor-integer) > is greater than one, or for queries against
-> [reference_tables](concepts-distributed-data.md#type-2-reference-tables).
+> [reference_tables](concepts-nodes.md#type-2-reference-tables).
Sets the policy to use when assigning tasks to workers. The coordinator assigns tasks to workers based on shard locations. This configuration
be used.
This parameter can be set at run-time and is effective on the coordinator.
+#### citus.enable\_non\_colocated\_router\_query\_pushdown (boolean)
+
+Enables router planner for the queries that reference non-colocated distributed tables.
+
+The router planner is only enabled for queries that reference colocated distributed tables because otherwise shards might not be on the same node. Enabling this flag allows optimization for queries that reference such tables, but the query might not work after rebalancing the shards or altering the shard count of those tables.
+
+The default is `off`.
+ ### Intermediate Data Transfer #### citus.max\_intermediate\_result\_size (integer)
subqueries. The default is 1 GB, and a value of -1 means no limit.
Queries exceeding the limit are canceled and produce an error message.
+### DDL
+
+#### citus.enable\_schema\_based\_sharding
+
+With the parameter set to `ON`, all created schemas are distributed by default. Distributed schemas are automatically associated with individual colocation groups such that the tables created in those schemas are converted to colocated distributed tables without a shard key. This setting can be modified for individual sessions.
+
+For an example of using this GUC, see [how to design for microservices](tutorial-design-database-microservices.md).
+ ### Executor Configuration #### General
This parameter can be set at run-time and is effective on the coordinator.
##### citus.multi\_task\_query\_log\_level (enum) {#multi_task_logging} Sets a log-level for any query that generates more than one task (that is,
-which hits more than one shard). Logging is useful during a multi-tenant
+which hits more than one shard). Logging is useful during a multitenant
application migration, as you can choose to error or warn for such queries, to find them and add a tenant\_id filter to them. This parameter can be set at runtime and is effective on the coordinator. The default value for this
The supported values for this enum are:
- **warning:** Logs statement at WARNING severity level. - **error:** Logs statement at ERROR severity level.
-It may be useful to use `error` during development testing,
+It could be useful to use `error` during development testing,
and a lower log-level like `log` during actual production deployment. Choosing `log` will cause multi-task queries to appear in the database logs with the query itself shown after \"STATEMENT.\"
The supported values are:
* **immediate:** raises error in transactions where parallel operations like create\_distributed\_table happen before an attempted CREATE TYPE. * **automatic:** defer creation of types when sharing a transaction with a
- parallel operation on distributed tables. There may be some inconsistency
+ parallel operation on distributed tables. There might be some inconsistency
between which database objects exist on different nodes. * **deferred:** return to pre-11.0 behavior, which is like automatic but with other subtle corner cases. We recommend the automatic setting over deferred,
amounts of data. Examples are when many rows are requested, the rows have
many columns, or they use wide types such as `hll` from the postgresql-hll extension.
-The default value is true for Postgres versions 14 and higher. For Postgres
-versions 13 and lower the default is false, which means all results are encoded
-and transferred in text format.
+The default value is `true`. When set to `false`, all results are encoded and transferred in text format.
##### citus.max_adaptive_executor_pool_size (integer)
hence update their status regularly.
The task tracker executor on the coordinator synchronously assigns tasks in batches to the daemon on the workers. This parameter sets the maximum number of tasks to assign in a single batch. Choosing a larger batch size allows for
-faster task assignment. However, if the number of workers is large, then it may
+faster task assignment. However, if the number of workers is large, then it might
take longer for all workers to get tasks. This parameter can be set at runtime and is effective on the coordinator.
distributed query. In most cases, the explain output is similar across
tasks. Occasionally, some of the tasks are planned differently or have much higher execution times. In those cases, it can be useful to enable this parameter, after which the EXPLAIN output includes all tasks. Explaining
-all tasks may cause the EXPLAIN to take longer.
+all tasks might cause the EXPLAIN to take longer.
##### citus.explain_analyze_sort_method (enum)
The following [managed PgBouncer](./concepts-connection-pool.md) parameters can
* [min_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MIN-WAL-SIZE) - Sets the minimum size to shrink the WAL to * [operator_precedence_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-OPERATOR-PRECEDENCE-WARNING) - Emits a warning for constructs that changed meaning since PostgreSQL 9.4 * [parallel_setup_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-SETUP-COST) - Sets the planner's estimate of the cost of starting up worker processes for parallel query
-* [parallel_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-TUPLE-COST) - Sets the planner's estimate of the cost of passing each tuple (row) from worker to master backend
+* [parallel_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-TUPLE-COST) - Sets the planner's estimate of the cost of passing each tuple (row) from worker to main backend
* [pg_stat_statements.save](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Saves pg_stat_statements statistics across server shutdowns * [pg_stat_statements.track](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects which statements are tracked by pg_stat_statements * [pg_stat_statements.track_utility](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects whether utility commands are tracked by pg_stat_statements
cosmos-db Tutorial Design Database Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-design-database-microservices.md
+
+ Title: 'Tutorial: Design for microservices - Azure Cosmos DB for PostgreSQL'
+description: This tutorial shows how to design for microservices with Azure Cosmos DB for PostgreSQL.
+++++ Last updated : 09/30/2023++
+# Microservices
+
+In this tutorial, you use Azure Cosmos DB for PostgreSQL as the storage backend for multiple microservices, demonstrating a sample setup and basic operation of such a cluster. Learn how to:
+
+> [!div class="checklist"]
+> * Create a cluster
+> * Create roles for your microservices
+> * Use psql utility to create roles and distributed schemas
+> * Create tables for the sample services
+> * Configure services
+> * Run services
+> * Explore the database
++
+## Prerequisites
++
+## Create roles for your microservices
+
+Distributed schemas are relocatable within an Azure Cosmos DB for PostgreSQL cluster. The system can rebalance them as a whole unit across the available nodes, allowing you to share resources efficiently without manual allocation.
+
+By design, microservices own their storage layer; we don't make any assumptions about the type of tables and data that they create and store. We provide a schema for every service and assume that each service uses a distinct ROLE to connect to the database. When a user connects, their role name is put at the beginning of the search_path, so if the role matches the schema name, you don't need any application changes to set the correct search_path.
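
This behavior relies on PostgreSQL's default `search_path`, which begins with `"$user"`. The following sketch uses the `userservice` role and `users` table created later in this tutorial to show what a connected service sees:

```postgresql
-- Connected as role 'userservice' with the default search_path:
SHOW search_path;    -- "$user", public

-- Unqualified names resolve to the schema matching the role name first,
-- so this reads userservice.users without any application changes.
SELECT * FROM users;
```
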
+
+We use three services in our example:
+
+* user
+* time
+* ping
+
+Follow the steps describing [how to create user roles](howto-create-users.md#how-to-create-user-roles) and create the following roles for each service:
+
+* `userservice`
+* `timeservice`
+* `pingservice`
+
+## Use psql utility to create distributed schemas
+
+Once connected to the Azure Cosmos DB for PostgreSQL cluster using psql, you can complete some basic tasks.
+
+There are two ways in which a schema can be distributed in Azure Cosmos DB for PostgreSQL:
+
+Manually, by calling the `citus_schema_distribute(schema_name)` function:
+
+```postgresql
+CREATE SCHEMA AUTHORIZATION userservice;
+CREATE SCHEMA AUTHORIZATION timeservice;
+CREATE SCHEMA AUTHORIZATION pingservice;
+
+SELECT citus_schema_distribute('userservice');
+SELECT citus_schema_distribute('timeservice');
+SELECT citus_schema_distribute('pingservice');
+```
+
+This method also allows you to convert existing regular schemas into distributed schemas.
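+
+For example, the same function call distributes a schema that already exists (`legacyservice` is a hypothetical schema name used only for illustration):
+
+```postgresql
+SELECT citus_schema_distribute('legacyservice');
+```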
+
+> [!NOTE]
+>
+> You can only distribute schemas that do not contain distributed and reference tables.
+
+An alternative approach is to enable the `citus.enable_schema_based_sharding` configuration variable:
+
+```postgresql
+SET citus.enable_schema_based_sharding TO ON;
+
+CREATE SCHEMA AUTHORIZATION userservice;
+CREATE SCHEMA AUTHORIZATION timeservice;
+CREATE SCHEMA AUTHORIZATION pingservice;
+```
+
+The variable can be changed for the current session or permanently in coordinator node parameters. With the parameter set to ON, all created schemas are distributed by default.
+
+You can list the currently distributed schemas by running:
+
+```postgresql
+select * from citus_schemas;
+```
+
+```
+ schema_name | colocation_id | schema_size | schema_owner
+-------------+---------------+-------------+--------------
+ userservice | 5 | 0 bytes | userservice
+ timeservice | 6 | 0 bytes | timeservice
+ pingservice | 7 | 0 bytes | pingservice
+(3 rows)
+```
+
+## Create tables for the sample services
+
+You now need to connect to the Azure Cosmos DB for PostgreSQL for every microservice. You can use the \c command to swap the user within an existing psql instance.
+
+```
+\c citus userservice
+```
+
+```postgresql
+CREATE TABLE users (
+ id SERIAL PRIMARY KEY,
+ name VARCHAR(255) NOT NULL,
+ email VARCHAR(255) NOT NULL
+);
+```
+
+```
+\c citus timeservice
+```
+
+```postgresql
+CREATE TABLE query_details (
+ id SERIAL PRIMARY KEY,
+ ip_address INET NOT NULL,
+ query_time TIMESTAMP NOT NULL
+);
+```
+
+```
+\c citus pingservice
+```
+
+```postgresql
+CREATE TABLE ping_results (
+ id SERIAL PRIMARY KEY,
+ host VARCHAR(255) NOT NULL,
+ result TEXT NOT NULL
+);
+```
+
+## Configure services
+
+In this tutorial, we use a simple set of services. You can obtain them by cloning this public repository:
+
+```bash
+git clone https://github.com/citusdata/citus-example-microservices.git
+```
+
+```
+$ tree
+.
+├── LICENSE
+├── README.md
+├── ping
+│   ├── app.py
+│   ├── ping.sql
+│   └── requirements.txt
+├── time
+│   ├── app.py
+│   ├── requirements.txt
+│   └── time.sql
+└── user
+    ├── app.py
+    ├── requirements.txt
+    └── user.sql
+```
+
+Before you run the services, however, edit the `user/app.py`, `ping/app.py`, and `time/app.py` files to provide the [connection configuration](https://www.psycopg.org/docs/module.html#psycopg2.connect) for your Azure Cosmos DB for PostgreSQL cluster:
+
+```python
+# Database configuration
+db_config = {
+ 'host': 'c-EXAMPLE.EXAMPLE.postgres.cosmos.azure.com',
+ 'database': 'citus',
+ 'password': 'SECRET',
+ 'user': 'pingservice',
+ 'port': 5432
+}
+```
+
+After making the changes, save all modified files and move on to the next step of running the services.
+
+## Run services
+
+Change into each app directory and run it in its own Python environment.
+
+```bash
+cd user
+pipenv install
+pipenv shell
+python app.py
+```
+
+Repeat the commands for the time and ping services, as shown below, after which you can use the API.
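+
+For example, in a separate terminal for each of the remaining services (a sketch that mirrors the commands above and assumes the repository layout shown earlier):
+
+```bash
+# Second terminal, from the repository root
+cd time
+pipenv install
+pipenv shell
+python app.py
+
+# Third terminal, from the repository root
+cd ping
+pipenv install
+pipenv shell
+python app.py
+```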
+
+Create some users:
+
+```bash
+curl -X POST -H "Content-Type: application/json" -d '[
+ {"name": "John Doe", "email": "john@example.com"},
+ {"name": "Jane Smith", "email": "jane@example.com"},
+ {"name": "Mike Johnson", "email": "mike@example.com"},
+ {"name": "Emily Davis", "email": "emily@example.com"},
+ {"name": "David Wilson", "email": "david@example.com"},
+ {"name": "Sarah Thompson", "email": "sarah@example.com"},
+ {"name": "Alex Miller", "email": "alex@example.com"},
+ {"name": "Olivia Anderson", "email": "olivia@example.com"},
+ {"name": "Daniel Martin", "email": "daniel@example.com"},
+ {"name": "Sophia White", "email": "sophia@example.com"}
+]' http://localhost:5000/users
+```
+
+List the created users:
+
+```bash
+curl http://localhost:5000/users
+```
+
+Get current time:
+
+```bash
+# The exact port and route depend on the sample's time service (see time/app.py);
+# the values below are assumptions.
+curl http://localhost:5001/get_time
+```
+
+Run the ping against example.com:
+
+```bash
+curl -X POST -H "Content-Type: application/json" -d '{"host": "example.com"}' http://localhost:5002/ping
+```
+
+## Explore the database
+
+Now that you've called some API functions, data has been stored, and you can check whether `citus_schemas` reflects what's expected:
+
+```postgresql
+select * from citus_schemas;
+```
+
+```
+ schema_name | colocation_id | schema_size | schema_owner
+-------------+---------------+-------------+--------------
+ userservice | 1 | 112 kB | userservice
+ timeservice | 2 | 32 kB | timeservice
+ pingservice | 3 | 32 kB | pingservice
+(3 rows)
+```
+
+When you created the schemas, you didn't tell Azure Cosmos DB for PostgreSQL on which machines to create the schemas. It was done automatically. You can see where each schema resides with the following query:
+
+```postgresql
+ select nodename,nodeport, table_name, pg_size_pretty(sum(shard_size))
+ from citus_shards
+group by nodename,nodeport, table_name;
+```
+
+```
+nodename | nodeport | table_name | pg_size_pretty
+----------+----------+---------------------------+----------------
+ localhost | 9701 | timeservice.query_details | 32 kB
+ localhost | 9702 | userservice.users | 112 kB
+ localhost | 9702 | pingservice.ping_results | 32 kB
+```
+
+For brevity, the example output on this page replaces the actual `nodename` values displayed by Azure Cosmos DB for PostgreSQL with localhost. Assume that `localhost:9701` is worker one and `localhost:9702` is worker two. Node names on the managed service are longer and contain randomized elements.
++
+You can see that the time service landed on node `localhost:9701` while the user and ping services share space on the second worker `localhost:9702`. The example apps are simplistic, and the data sizes here are negligible, but let's assume that you're annoyed by the uneven storage space utilization between the nodes. It would make more sense to have the two smaller time and ping services reside on one machine while the large user service resides alone.
+
+You can easily rebalance the cluster by disk size:
+
+```postgresql
+select citus_rebalance_start();
+```
+
+```
+NOTICE: Scheduled 1 moves as job 1
+DETAIL: Rebalance scheduled as background job
+HINT: To monitor progress, run: SELECT * FROM citus_rebalance_status();
+ citus_rebalance_start
+-----------------------
+ 1
+(1 row)
+```
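+
+As the hint in the output suggests, you can monitor the progress of the background rebalance job:
+
+```postgresql
+SELECT * FROM citus_rebalance_status();
+```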
+
+When the rebalance is done, you can check how the new layout looks:
+
+```postgresql
+ select nodename,nodeport, table_name, pg_size_pretty(sum(shard_size))
+ from citus_shards
+group by nodename,nodeport, table_name;
+```
+
+```
+ nodename | nodeport | table_name | pg_size_pretty
+----------+----------+---------------------------+----------------
+ localhost | 9701 | timeservice.query_details | 32 kB
+ localhost | 9701 | pingservice.ping_results | 32 kB
+ localhost | 9702 | userservice.users | 112 kB
+(3 rows)
+```
+
+As expected, the schemas have been moved and the cluster is more balanced. The operation is transparent to the applications; you don't even need to restart them, and they continue serving queries.
+
+## Next steps
+
+In this tutorial, you learned how to create distributed schemas and run microservices that use them as storage. You also learned how to explore and manage a schema-based sharded Azure Cosmos DB for PostgreSQL cluster.
+
+- Learn about cluster [node types](./concepts-nodes.md)
cosmos-db Tutorial Shard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-shard.md
ms.devlang: azurecli Previously updated : 12/16/2020 Last updated : 10/01/2023 # Tutorial: Shard data on worker nodes in Azure Cosmos DB for PostgreSQL
to this one.
Distributing table rows across multiple PostgreSQL servers is a key technique for scalable queries in Azure Cosmos DB for PostgreSQL. Together, multiple nodes can hold more data than a traditional database, and in many cases can use worker CPUs in
-parallel to execute queries.
+parallel to execute queries. The concept of hash-distributed tables is also known as [row-based sharding](concepts-sharding-models.md#row-based-sharding).
In the prerequisites section, we created a cluster with two worker nodes.
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Azure Synapse Link isn't recommended if you're looking for traditional data ware
* Although analytical store data isn't backed up, and therefore can't be restored, you can rebuild your analytical store by reenabling Azure Synapse Link in the restored container. Check the [analytical store documentation](analytical-store-introduction.md) for more information.
-* The capability to turn on Synapse Link in database accounts with continuous backup enabled is in preview now. The opposite situation, to turn on continuous backup in Synapse Link enabled database accounts, is still not supported yet.
+* The capability to turn on Synapse Link in database accounts with continuous backup enabled is available now. However, the opposite situation, turning on continuous backup in Synapse Link enabled database accounts, isn't supported yet.
* Granular role-based access control isn't supported when querying from Synapse. Users that have access to your Synapse workspace and have access to the Azure Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers.
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
An in-progress status is returned as an `Accepted` state under `provisioningStat
### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
+To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet used in the following example, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/get-azsubscriptionalias) command, using the billing scope `"/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"`.
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
An in-progress status is returned as an `Accepted` state under `provisioningStat
### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
+To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet used in the following example, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscriptionalias) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
Pass the optional *resellerId* copied from the second step in the request body o
### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
+To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet used in the following example, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
Run the following New-AzSubscriptionAlias command, using the billing scope `"/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"`.
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
In the response, as part of the header `Location`, you get back a url that you c
### [PowerShell](#tab/azure-powershell)
-To install the latest version of the module that contains the `New-AzSubscription` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
+To install the version of the module that contains the `New-AzSubscription` cmdlet used in the following example, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget).
Run the [New-AzSubscription](/powershell/module/az.subscription) command below, replacing `<enrollmentAccountObjectId>` with the `ObjectId` collected in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId).
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
The following table shows feature support for Windows machines in Azure, Azure A
| Missing OS patches assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes | | Security misconfigurations assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes | | [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes |
-| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | - | No |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No |
| Third-party vulnerability assessment (BYOL) | Γ£ö | - | No | | [Network security assessment](protect-network-resources.md) | Γ£ö | - | No |
The following table shows feature support for Linux machines in Azure, Azure Arc
| Missing OS patches assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes | | Security misconfigurations assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes | | [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | - | - | No |
-| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | - | No |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No |
| Third-party vulnerability assessment (BYOL) | Γ£ö | - | No | | [Network security assessment](protect-network-resources.md) | Γ£ö | - | No |
The following table shows feature support for AWS and GCP machines.
| Missing OS patches assessment | Γ£ö | Γ£ö | | Security misconfigurations assessment | Γ£ö | Γ£ö | | [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | Γ£ö | Γ£ö |
-| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) |
| Third-party vulnerability assessment | - | - | | [Network security assessment](protect-network-resources.md) | - | - | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | Γ£ö | - |
deployment-environments Configure Environment Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-environment-definition.md
Title: Add and configure an environment definition
-description: Learn how to add and configure an environment definition to use in your dev center projects. Environment definitions contain an IaC template that defines the environment.
+description: Learn how to add and configure an environment definition to use in your Azure Deployment Environments projects. Environment definitions contain an IaC template that defines the environment.
# Add and configure an environment definition in Azure Deployment Environments
+In this article, you learn how to add or update an environment definition in an Azure Deployment Environments catalog.
+ In Azure Deployment Environments, you can use a [catalog](concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of predefined [infrastructure as code (IaC)](/devops/deliver/what-is-infrastructure-as-code) templates called [*environment definitions*](concept-environments-key-concepts.md#environment-definitions). An environment definition consists of at least two files:
In this article, you learn how to:
## Add an environment definition
+To add an environment definition to a catalog in Azure Deployment Environments, you first add the files to the repository. You then synchronize the dev center catalog with the updated repository.
+ To add an environment definition: 1. In your repository, create a subfolder in the repository folder path.
az devcenter dev environment create --environment-definition-name
Refer to the [Azure CLI devcenter extension](/cli/azure/devcenter/dev/environment) for full details of the `az devcenter dev environment create` command. ## Update an environment definition
-To modify the configuration of Azure resources in an existing environment definition, update the associated ARM template JSON file in the repository. The change is immediately reflected when you create a new environment by using the specific environment definition. The update also is applied when you redeploy an environment that's associated with that environment definition.
+To modify the configuration of Azure resources in an existing environment definition in Azure Deployment Environments, update the associated ARM template JSON file in the repository. The change is immediately reflected when you create a new environment by using the specific environment definition. The update also is applied when you redeploy an environment that's associated with that environment definition.
To update any metadata related to the ARM template, modify *manifest.yaml*, and then [update the catalog](how-to-configure-catalog.md#update-a-catalog). ## Delete an environment definition
-To delete an existing environment definition, in the repository, delete the subfolder that contains the ARM template JSON file and the associated manifest YAML file. Then, [update the catalog](how-to-configure-catalog.md#update-a-catalog).
+To delete an existing environment definition, in the repository, delete the subfolder that contains the ARM template JSON file and the associated manifest YAML file. Then, [update the catalog](how-to-configure-catalog.md#update-a-catalog) in Azure Deployment Environments.
After you delete an environment definition, development teams can no longer use the specific environment definition to deploy a new environment. Update the environment definition reference for any existing environments that were created by using the deleted environment definition. If the reference isn't updated and the environment is redeployed, the deployment fails.
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
Title: Add and configure a catalog
-description: Learn how to add a catalog in your dev center to provide environment templates for your developers. Catalogs are repositories stored in GitHub or Azure DevOps.
+description: Learn how to add a catalog in your Azure Deployment Environments dev center to provide environment templates for your developers. Catalogs are repositories stored in GitHub or Azure DevOps.
Last updated 04/25/2023
-# Add and configure a catalog from GitHub or Azure DevOps
+# Add and configure a catalog from GitHub or Azure DevOps in Azure Deployment Environments
-Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments dev center. You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [environment definitions](./concept-environments-key-concepts.md#environment-definitions). Your catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which are managed by Microsoft for Azure Services.
+Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments dev center.
+
+You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [environment definitions](./concept-environments-key-concepts.md#environment-definitions). Your catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which are managed by Microsoft for Azure Services.
For more information about environment definitions, see [Add and configure an environment definition](./configure-environment-definition.md).
In this article, you learn how to:
## Add a catalog
+In Azure Deployment Environments, catalogs help you provide a set of curated IaC templates for your development teams to create environments. You can attach either a GitHub repository or an Azure DevOps repository as a catalog.
+ To add a catalog, you complete these tasks: - Get the clone URL for your repository.
Get the path to the secret you created in the key vault.
If you update the Azure Resource Manager template (ARM template) contents or definition in the attached repository, you can provide the latest set of environment definitions to your development teams by syncing the catalog.
-To sync an updated catalog:
+To sync an updated catalog in Azure Deployment Environments:
1. On the left menu for your dev center, under **Environment configuration**, select **Catalogs**, 1. Select the specific catalog, and then select **Sync**. The service scans through the repository and makes the latest list of environment definitions available to all the associated projects in the dev center. ## Delete a catalog
-You can delete a catalog to remove it from the dev center. Templates in a deleted catalog aren't available to development teams when they deploy new environments. Update the environment definition reference for any existing environments that were created by using the environment definitions in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails.
+You can delete a catalog to remove it from the Azure Deployment Environments dev center. Templates in a deleted catalog aren't available to development teams when they deploy new environments. Update the environment definition reference for any existing environments that were created by using the environment definitions in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails.
To delete a catalog:
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
# Configure a managed identity for a dev center
-A [managed identity](../active-directory/managed-identities-azure-resources/overview.md) adds elevated-privileges capabilities and secure authentication to any service that supports Microsoft Entra authentication. Azure Deployment Environments uses identities to give development teams self-serve deployment capabilities without giving them access to the subscriptions in which Azure resources are created.
+In this article, you learn how to add and configure a managed identity for your Azure Deployment Environments dev center to enable secure deployment for development teams.
+
+Azure Deployment Environments uses managed identities to give development teams self-serve deployment capabilities without giving them access to the subscriptions in which Azure resources are created. A [managed identity](../active-directory/managed-identities-azure-resources/overview.md) adds elevated-privileges capabilities and secure authentication to any service that supports Microsoft Entra authentication.
The managed identity that's attached to a dev center should be [assigned both the Contributor role and the User Access Administrator in the deployment subscriptions](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) for each environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities that are set up for the environment type to deploy on behalf of the user. The managed identity that's attached to a dev center also is used to add to a [catalog](how-to-configure-catalog.md) and access [environment definitions](configure-environment-definition.md) in the catalog.
-In this article, you learn how to:
-
-> [!div class="checklist"]
->
-> - Add a managed identity to your dev center
-> - Assign a subscription role assignment to a managed identity
-> - Grant access to a key vault secret for a managed identity
- ## Add a managed identity In Azure Deployment Environments, you can choose between two types of managed identities:
As a security best practice, if you choose to use user-assigned identities, use
## Assign a subscription role assignment to the managed identity
-The identity that's attached to the dev center should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
+The identity that's attached to the dev center in Azure Deployment Environments should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
### Add a role assignment to a system-assigned managed identity
The identity that's attached to the dev center should be assigned the Owner role
## Grant the managed identity access to the key vault secret
-You can set up your key vault to use either a [key vault access policy'](../key-vault/general/assign-access-policy.md) or [Azure role-based access control](../key-vault/general/rbac-guide.md).
+You can set up your key vault to use either a [key vault access policy](../key-vault/general/assign-access-policy.md) or [Azure role-based access control](../key-vault/general/rbac-guide.md).
> [!NOTE] > Before you can add a repository as a catalog, you must grant the managed identity access to the key vault secret that contains the repository's personal access token.
deployment-environments How To Manage Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md
Last updated 10/06/2023
-# Manage your deployment environment
+# Manage environments in Azure Deployment Environments
-In Azure Deployment Environments, a platform engineer gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the preconfigured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.
+In this article, you learn how to manage environments in Azure Deployment Environments. As a developer, you can create and manage your environments from the developer portal or by using the Azure CLI.
-As a developer, you can create and manage your environments from the developer portal or by using the Azure CLI.
+In Azure Deployment Environments, a platform engineer gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the preconfigured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type.
## Prerequisites
As a developer, you can create and manage your environments from the developer p
## Manage an environment by using the developer portal
-The developer portal provides a graphical interface for development teams to create new environments and manage existing environments. You can create, redeploy, and delete your environments as needed in the developer portal.
+The developer portal provides a graphical interface for development teams to create new environments and manage existing environments in Azure Deployment Environments. You can create, redeploy, and delete your environments as needed in the developer portal.
### Create an environment by using the developer portal
When you need to update your environment, you can redeploy it. The redeployment
1. On the environment you want to redeploy, on the options menu, select **Redeploy**.
- :::image type="content" source="media/how-to-manage-environments/option-redeploy.png" alt-text="Screenshot showing an environment tile with the options menu expanded and the redeploy option selected.":::
+ :::image type="content" source="media/how-to-manage-environments/option-redeploy.png" alt-text="Screenshot showing an environment tile with the options menu expanded and the Redeploy option selected.":::
1. If parameters are defined on the environment definition, you're prompted to make any changes you want to make. When you've made your changes, select **Redeploy**.
You can delete your environment completely when you don't need it anymore.
## Manage an environment by using the Azure CLI
-The Azure CLI provides a command-line interface for speed and efficiency when you create multiple similar environments, or for platforms where resources like memory are limited. You can use the `devcenter` Azure CLI extension to create, list, deploy, or delete an environment.
+The Azure CLI provides a command-line interface for speed and efficiency when you create multiple similar environments, or for platforms where resources like memory are limited. You can use the `devcenter` Azure CLI extension to create, list, deploy, or delete an environment in Azure Deployment Environments.
To learn how to manage your environments by using the CLI, see [Create and access an environment by using the Azure CLI](how-to-create-access-environments.md).
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
In this quickstart, you learn how to:
- [Create and configure a project](quickstart-create-and-configure-projects.md). ## Create an environment
-You can create an environment from the developer portal.
+
+An environment in Azure Deployment Environments is a collection of Azure resources on which your application is deployed. You can create an environment from the developer portal.
> [!NOTE] > Only a user who has the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role, the [DevCenter Project Admin](how-to-configure-project-admin.md) role, or a [built-in role](../role-based-access-control/built-in-roles.md) that has appropriate permissions can create an environment.
You can create an environment from the developer portal.
:::image type="content" source="media/quickstart-create-access-environments/add-environment.png" alt-text="Screenshot showing add environment pane.":::
-If your environment is configured to accept parameters, you are able to enter them on a separate pane. In this example, you don't need to specify any parameters.
+If your environment is configured to accept parameters, you're able to enter them on a separate pane. In this example, you don't need to specify any parameters.
1. Select **Create**. You see your environment in the developer portal immediately, with an indicator that shows creation in progress. ## Access an environment
-You can access and manage your environments in the Microsoft Developer portal.
+
+You can access and manage your environments in the Azure Deployment Environments developer portal.
1. Sign in to the [developer portal](https://devportal.microsoft.com).
-1. You are able to view all of your existing environments. To access the specific resources created as part of an Environment, select the **Environment Resources** link.
+1. You're able to view all of your existing environments. To access the specific resources created as part of an environment, select the **Environment Resources** link.
:::image type="content" source="media/quickstart-create-access-environments/environment-resources.png" alt-text="Screenshot showing an environment card, with the environment resources link highlighted.":::
-1. You are able to view the resources in your environment listed in the Azure portal.
+1. You're able to view the resources in your environment listed in the Azure portal.
:::image type="content" source="media/quickstart-create-access-environments/azure-portal-view-of-environment.png" alt-text="Screenshot showing Azure portal list of environment resources."::: Creating an environment automatically creates a resource group that stores the environment's resources. The resource group name follows the pattern {projectName}-{environmentName}. You can view the resource group in the Azure portal.
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Title: Create and configure a dev center
-description: Learn how to configure a dev center in Deployment Environments. You create a dev center, attach an identity, attach a catalog, and create environment types.
+description: Learn how to configure a dev center in Azure Deployment Environments. You create a dev center, attach an identity, attach a catalog, and create environment types.
Last updated 09/06/2023
# Quickstart: Create and configure a dev center for Azure Deployment Environments
-This quickstart shows you how to create and configure a dev center in Azure Deployment Environments.
+In this quickstart, you'll set up all the resources in Azure Deployment Environments to enable development teams to self-serve deployment environments for their applications. Learn how to create and configure a dev center, add a catalog to the dev center, and define an environment type.
A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md).
To create and configure a Dev center in Azure Deployment Environments by using t
:::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created."::: ### Create a Key Vault
-When you are using a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository.
+When you're using a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository.
If you don't have an existing key vault, use the following steps to create one: [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Last updated 09/06/2023
-# Quickstart: Create and configure a project
+# Quickstart: Create and configure an Azure Deployment Environments project
-This quickstart shows you how to create a project in Azure Deployment Environments, and associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
+This quickstart shows you how to create a project in Azure Deployment Environments, and associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments to deploy their applications.
The following diagram shows the steps you perform in this quickstart to configure a project associated with a dev center for Deployment Environments in the Azure portal.
You need to perform the steps in both quickstarts before you can create a deploy
## Create a project
-To create a project in your dev center:
+In Azure Deployment Environments, a project represents a team or business function within the organization. When you associate a project with a dev center, all the settings for the dev center are automatically applied to the project. Each project can be associated with only one dev center.
+
+To create an Azure Deployment Environments project in your dev center:
1. In the [Azure portal](https://portal.azure.com/), go to Azure Deployment Environments.
To create a project in your dev center:
## Create a project environment type
+In Azure Deployment Environments, project environment types are a subset of the environment types that you configure for the dev center. They help you preconfigure the types of environments that specific development teams can create.
+ To configure a project, add a [project environment type](how-to-configure-project-environment-types.md): 1. In the Azure portal, go to your project.
To configure a project, add a [project environment type](how-to-configure-projec
|Name |Value | ||-| |**Type**| Select a dev center level environment type to enable for the specific project.|
- |**Deployment subscription**| Select the subscription in which the environment will be created.|
+ |**Deployment subscription**| Select the subscription in which the environment is created.|
|**Deployment identity** | Select either a system-assigned identity or a user-assigned managed identity that's used to perform deployments on behalf of the user.| |**Permissions on environment resources** > **Environment creator role(s)**| Select the roles to give access to the environment resources.| |**Permissions on environment resources** > **Additional access** | Select the users or Microsoft Entra groups to assign to specific roles on the environment resources.|
To configure a project, add a [project environment type](how-to-configure-projec
## Give access to the development team
+Before developers can create environments based on the environment types in a project, you must provide access for them through a role assignment at the level of the project. The Deployment Environments User role enables users to create, manage and delete their own environments. You must have sufficient permissions to a project before you can add users to it.
+ 1. In the Azure portal, go to your project. 1. In the left menu, select **Access control (IAM)**.
deployment-environments Tutorial Deploy Environments In Cicd Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md
Last updated 04/13/2023
-# Tutorial: Deploy environments in CI/CD with GitHub
-Continuous integration and continuous delivery (CI/CD) is a software development approach that helps teams to automate the process of building, testing, and deploying software changes. CI/CD enables you to release software changes more frequently and with greater confidence.
+# Tutorial: Deploy environments in CI/CD with GitHub and Azure Deployment Environments
In this tutorial, you'll learn how to integrate Azure Deployment Environments into your CI/CD pipeline by using GitHub Actions. You can use any GitOps provider that supports CI/CD, like GitHub Actions, Azure Arc, GitLab, or Jenkins.
+Continuous integration and continuous delivery (CI/CD) is a software development approach that helps teams to automate the process of building, testing, and deploying software changes. CI/CD enables you to release software changes more frequently and with greater confidence.
+ You use a workflow that features three branches: main, dev, and test. - The *main* branch is always considered production. - You create feature branches from the *main* branch. - You create pull requests to merge feature branches into *main*.
-This workflow is a small example for the purposes of this tutorial. Real world workflows may be more complex.
+This workflow is a small example for the purposes of this tutorial. Real-world workflows might be more complex.
Before beginning this tutorial, you can familiarize yourself with Deployment Environments resources and concepts by reviewing [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md).
In this tutorial, you learn how to:
## 1. Create and configure a dev center
-In this section, you create a dev center and project with three environment types; Dev, Test and Prod
+In this section, you create an Azure Deployment Environments dev center and project with three environment types: Dev, Test, and Prod.
- The Prod environment type contains the single production environment - A new environment is created in Dev for each feature branch - A new environment is created in Test for each pull request+ ### 1.1 Setup the Azure CLI To begin, sign in to Azure. Run the following command, and follow the prompts to complete the authentication process.
You can protect important branches by setting branch protection rules. Protectio
### 3.4 Create a GitHub personal access token
-Next, create a [fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#fine-grained-personal-access-tokens) to enable your dev center to connect to your repository and consume the environment catalog.
+Next, create a [fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#fine-grained-personal-access-tokens) to enable your Azure Deployment Environments dev center to connect to your repository and consume the environment catalog.
> [!NOTE] > Fine-grained personal access token are currently in beta and subject to change. To leave feedback, see the [feedback discussion](https://github.com/community/community/discussions/36441).
az keyvault secret set \
## 4. Connect the catalog to your dev center
-A catalog is a repository that contains a set of environment definitions. Catalog items consist of an IaC template and a manifest file. The template defines the environment, and the manifest provides metadata about the template. Development teams use environment definitions from the catalog to create environments.
+In Azure Deployment Environments, a catalog is a repository that contains a set of environment definitions. Catalog items consist of an IaC template and a manifest file. The template defines the environment, and the manifest provides metadata about the template. Development teams use environment definitions from the catalog to create environments.
The template you used to create your GitHub repository contains a catalog in the _Environments_ folder.
You can also authenticate a service principal directly using a secret, but that
With GitHub environments, you can configure environments with protection rules and secrets. A workflow job that references an environment must follow any protection rules for the environment before running or accessing the environment's secrets.
-Create three environments: Dev, Test, and Prod to map to the project's environment types.
+Create three environments: Dev, Test, and Prod to map to the environment types in the Azure Deployment Environments project.
> [!NOTE] > Environments, environment secrets, and environment protection rules are available in public repositories for all products. For access to environments, environment secrets, and deployment branches in **private** or **internal** repositories, you must use GitHub Pro, GitHub Team, or GitHub Enterprise. For access to other environment protection rules in **private** or **internal** repositories, you must use GitHub Enterprise. For more information, see "[GitHubΓÇÖs products.](https://docs.github.com/en/get-started/learning-about-github/githubs-products)"
For more information about environments and required approvals, see "[Using envi
1. Select **Required reviewers**.
-2. Search for and select your GitHub user. You may enter up to six people or teams. Only one of the required reviewers needs to approve the job for it to proceed.
+2. Search for and select your GitHub user. You can enter up to six people or teams. Only one of the required reviewers needs to approve the job for it to proceed.
3. Select **Save protection rules**.
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
Title: Microsoft Dev Box key concepts
-description: Learn key concepts and terminology for Microsoft Dev Box.
+description: Learn key concepts and terminology for Microsoft Dev Box. Get an understanding about dev center, dev box, dev box definitions, and dev box pools.
#Customer intent: As a platform engineer, I want to understand Dev Box concepts and terminology so that I can set up a Dev Box environment.
-# Key concepts for Microsoft Dev Box
+# Key concepts for Microsoft Dev Box
-This article describes the key concepts and components of Microsoft Dev Box.
+This article describes the key concepts and components of Microsoft Dev Box to help you set up the service successfully.
-As you learn about Microsoft Dev Box, you'll also encounter components of [Azure Deployment Environments](../deployment-environments/overview-what-is-azure-deployment-environments.md), a complementary service that shares certain architectural components. Deployment Environments provides developers with preconfigured cloud-based environments for developing applications.
+Microsoft Dev Box gives developers self-service access to preconfigured, ready-to-code cloud-based workstations. You can configure the service to match your development team and project structure, and manage security and network settings to access resources securely. Different components play a part in the configuration of Microsoft Dev Box.
+Microsoft Dev Box builds on the same foundations as [Azure Deployment Environments](/azure/deployment-environments/overview-what-is-azure-deployment-environments). Deployment Environments provides developers with preconfigured cloud-based environments for developing applications. Both services are complementary and share certain architectural components, such as a [dev center](#dev-center) or [project](#project).
+
## Dev center A dev center is a collection of [Projects](#project) that require similar settings. Dev centers enable platform engineers to:
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
Title: Configure Azure Compute Gallery
-description: Learn how to create an Azure Compute Gallery repository for managing and sharing Dev Box images.
+description: Learn how to create and attach an Azure compute gallery to a dev center in Microsoft Dev Box. Use a compute gallery to manage and share dev box images.
# Configure Azure Compute Gallery for Microsoft Dev Box
-Azure Compute Gallery is a service for managing and sharing images. A gallery is a repository that's stored in your Azure subscription and helps you build structure and organization around your image resources. You can use a gallery to provide custom images for your dev box users.
+In this article, you learn how to configure and attach an Azure compute gallery to a dev center in Microsoft Dev Box. With Azure Compute Gallery, you can give developers customized images for their dev box.
+
+Azure Compute Gallery is a service for managing and sharing images. A gallery is a repository that's stored in your Azure subscription and helps you build structure and organization around your image resources.
+
+After you attach a compute gallery to a dev center in Microsoft Dev Box, you can create dev box definitions based on images stored in the compute gallery.
Advantages of using a gallery include:
To learn more about Azure Compute Gallery and how to create galleries, see:
- A dev center. If you don't have one available, follow the steps in [Create a dev center](quickstart-configure-dev-box-service.md#1-create-a-dev-center). - A compute gallery. Images stored in a compute gallery can be used in a dev box definition, provided they meet the requirements listed in the [Compute gallery image requirements](#compute-gallery-image-requirements) section.
-
+ > [!NOTE] > Microsoft Dev Box doesn't support community galleries.
To learn more about Azure Compute Gallery and how to create galleries, see:
A gallery used to configure dev box definitions must have at least [one image definition and one image version](../virtual-machines/image-version.md).
-When creating a virtual machine image, select an image from the marketplace that is Dev Box compatible, like the following examples:
+When you create a virtual machine image, select an image from the Azure Marketplace that is compatible with Microsoft Dev Box. The following are examples of compatible images:
- [Visual Studio 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftvisualstudio.visualstudio2019plustools?tab=Overview) - [Visual Studio 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftvisualstudio.visualstudioplustools?tab=Overview)
The image version must meet the following requirements:
:::image type="content" source="media/how-to-configure-azure-compute-gallery/image-definition.png" alt-text="Screenshot that shows Windows 365 image requirement settings."::: > [!NOTE]
-> - Dev Box image requirements exceed [Windows 365 image requirements](/windows-365/enterprise/device-images) and include settings to optimize dev box creation time and performance.
+> - Microsoft Dev Box image requirements exceed [Windows 365 image requirements](/windows-365/enterprise/device-images) and include settings to optimize dev box creation time and performance.
> - Images that do not meet Windows 365 requirements will not be listed for creation. ## Provide permissions for services to access a gallery
-When you use an Azure Compute Gallery image to create a dev box definition, the Windows 365 service validates the image to ensure that it meets the requirements to be provisioned for a dev box. The Dev Box service replicates the image to the regions specified in the attached network connections, so the images are present in the region that's required for dev box creation.
+When you use an Azure Compute Gallery image to create a dev box definition, the Windows 365 service validates the image to ensure that it meets the requirements to be provisioned for a dev box. Microsoft Dev Box replicates the image to the regions specified in the attached network connections, so the images are present in the region that's required for dev box creation.
To allow the services to perform these actions, you must provide permissions to your gallery as follows.
To allow the services to perform these actions, you must provide permissions to
### Assign roles
-The Dev Box service behaves differently depending how you attach your gallery:
+Microsoft Dev Box behaves differently depending how you attach your gallery:
- When you use the Azure portal to attach the gallery to your dev center, the Dev Box service creates the necessary role assignments automatically after you attach the gallery. - When you use the Azure CLI to attach the gallery to your dev center, you must manually create the Windows 365 service principal and the dev center's managed identity role assignments before you attach the gallery.
You can use the same managed identity in multiple dev centers and compute galler
## Attach a gallery to a dev center
-To use the images from a gallery in dev box definitions, you must first associate the gallery with the dev center by attaching it:
+To use the images from a compute gallery in dev box definitions, you must first associate the gallery with the dev center by attaching it:
1. Sign in to the [Azure portal](https://portal.azure.com).
You can detach galleries from dev centers so that their images can no longer be
The gallery is detached from the dev center. The gallery and its images aren't deleted, and you can reattach it if necessary.
-## Next steps
+## Related content
- Learn more about [key concepts in Microsoft Dev Box](./concept-dev-box-concepts.md).
dev-box How To Configure Dev Box Hibernation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-hibernation.md
Title: Configure hibernation for Microsoft Dev Box
-description: Learn how to enable, disable and troubleshoot hibernation for your dev boxes. Configure hibernation settings for your image and dev box definition.
+description: Learn how to enable, disable, and troubleshoot hibernation in Microsoft Dev Box. Configure hibernation settings for your image and dev box definition.
#Customer intent: As a platform engineer, I want dev box users to be able to hibernate their dev boxes as part of my cost management strategy and so that dev box users can resume their work where they left off.
-# Configure Dev Box Hibernation (preview) for a dev box definition
+# Configure hibernation in Microsoft Dev Box
+
+In this article, you learn how to enable and disable hibernation in Microsoft Dev Box. You control hibernation at the dev box image and dev box definition level.
Hibernating dev boxes at the end of the workday can help you save a substantial portion of your VM costs. It eliminates the need for developers to shut down their dev box and lose their open windows and applications. With the introduction of Dev Box Hibernation (Preview), you can enable this capability on new dev boxes and hibernate and resume them. This feature provides a convenient way to manage your dev boxes while maintaining your work environment.
-There are two steps in enabling hibernation; you must enable hibernation on your dev box image and enable hibernation on your dev box definition.
+There are two steps to enable hibernation:
+
+1. Enable hibernation on your dev box image
+1. Enable hibernation on your dev box definition
> [!IMPORTANT]
> Dev Box Hibernation is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Key concepts for hibernation-enabled images
+## Considerations for hibernation-enabled images
- The 8 vCPU and 16 vCPU SKUs support hibernation. The 32 vCPU SKUs don't support hibernation.
These settings are known to be incompatible with hibernation, and aren't support
1. In the start menu, search for *Turn Windows features on or off*
1. In Turn Windows features on or off, select **Virtual Machine Platform**, and then select **OK**
-## Enable hibernation on your dev box image
+## Enable hibernation on your dev box image
-The Visual Studio and Microsoft 365 images that Dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images, they're ready to use.
+If you plan to use a custom image from an Azure compute gallery, you need to enable hibernation capabilities when you create the new image. You can't enable hibernation for existing images.
-If you plan to use a custom image from an Azure Compute Gallery, you need to enable hibernation capabilities as you create the new image. To enable hibernation capabilities, set the IsHibernateSupported flag to true. You must set the IsHibernateSupported flag when you create the image, existing images can't be modified.
+> [!NOTE]
+> The Visual Studio and Microsoft 365 images that Microsoft Dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images, they're ready to use.
-To enable hibernation capabilities, set the `IsHibernateSupported` flag to true:
+To enable hibernation capabilities, set the `IsHibernateSupported` flag to `true` when you create the image:
```azurecli az sig image-definition create
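# A fuller, illustrative form of the command above. The resource names are
# placeholders and the exact argument list should be checked against your CLI
# version; the essential piece is the IsHibernateSupported feature flag, which
# must be set when the image definition is created.
az sig image-definition create \
    --resource-group <resourceGroupName> \
    --gallery-name <galleryName> \
    --gallery-image-definition <imageDefinitionName> \
    --publisher <publisher> --offer <offer> --sku <skuName> \
    --os-type Windows --os-state Generalized \
    --hyper-v-generation V2 \
    --features "IsHibernateSupported=true"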
For more information about creating a custom image, see [Configure a dev box by
## Enable hibernation on a dev box definition
-You can enable hibernation as you create a dev box definition, providing that the dev box definition uses a hibernation-enabled custom or marketplace image. You can also update an existing dev box definition that uses a hibernation-enabled custom or marketplace image.
+In Microsoft Dev Box, you enable hibernation for a dev box definition, providing that the dev box definition uses a hibernation-enabled custom or marketplace image. You can also update an existing dev box definition that uses a hibernation-enabled custom or marketplace image.
All new dev boxes created in dev box pools that use a dev box definition with hibernation enabled can hibernate in addition to shutting down. If a pool has dev boxes that were created before hibernation was enabled, they continue to only support shutdown.
-Dev Box validates your image for hibernate support. Your dev box definition might fail validation if hibernation couldn't be successfully enabled using your image.
+Microsoft Dev Box validates your image for hibernate support. Your dev box definition might fail validation if hibernation couldn't be successfully enabled using your image.
You can enable hibernation on a dev box definition by using the Azure portal or the CLI.
-### Enable hibernation on an existing dev box definition by using the Azure portal
+### Enable hibernation for a dev box definition by using the Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com).
You can enable hibernation on a dev box definition by using the Azure portal or
1. Select **Save**.
-### Update an existing dev box definition by using the CLI
+### Enable hibernation for a dev box definition by using the Azure CLI
```azurecli az devcenter admin devbox-definition update
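# A hedged sketch of the full enable command, mirroring the disable example
# later in this article; replace the placeholder names with your own values.
az devcenter admin devbox-definition update \
    --dev-box-definition-name <DevBoxDefinitionName> \
    --dev-center-name <devcentername> \
    --resource-group <resourcegroupname> \
    --hibernateSupport enabled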
az devcenter admin devbox-definition update
## Disable hibernation on a dev box definition
- If you have issues provisioning new VMs after enabling hibernation on a pool or you want to revert to shut down only dev boxes, you can disable hibernation on the dev box definition.
+If you have issues provisioning new VMs after enabling hibernation on a pool or you want to revert to shut down only dev boxes, you can disable hibernation on the dev box definition.
You can disable hibernation on a dev box definition by using the Azure portal or the CLI.
-### Disable hibernation on an existing dev box definition by using the Azure portal
+### Disable hibernation for a dev box definition by using the Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com).
You can disable hibernation on a dev box definition by using the Azure portal or
1. Select **Save**.
-### Disable hibernation on an existing dev box definition by using the CLI
+### Disable hibernation for a dev box definition by using the CLI
```azurecli
az devcenter admin devbox-definition update --dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernateSupport disabled
```
-## Next steps
+## Related content
- [Create a dev box pool](how-to-manage-dev-box-pools.md) - [Configure a dev box by using Azure VM Image Builder](how-to-customize-devbox-azure-image-builder.md)
dev-box How To Configure Network Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md
Title: Configure network connections
-description: Learn how to create, delete, attach, and remove Microsoft Dev Box network connections.
+description: Learn how to manage network connections for a dev center in Microsoft Dev Box. Use network connections to connect to a virtual network or to enable connections to on-premises resources from a dev box.
# Connect dev boxes to resources by configuring network connections
-Network connections allow dev boxes to connect to existing virtual networks. They also determine the region into which dev boxes are deployed.
+In this article, you learn how to manage network connections for a dev center in Microsoft Dev Box. Network connections enable dev boxes to connect to existing virtual networks. In addition, you can configure the network settings to enable connecting to on-premises resources from your dev box. The location, or Azure region, of the network connection determines where associated dev boxes are hosted.
+
+You need to add at least one network connection to a dev center in Microsoft Dev Box.
When you're planning network connectivity for your dev boxes, you must:
To create a network connection, you need an existing virtual network and subnet.
| - | -- |
| **Subscription** | Select your subscription. |
| **Resource group** | Select an existing resource group. Or create a new one by selecting **Create new**, entering **rg-name**, and then selecting **OK**. |
- | **Name** | Enter **VNet-name**. |
+ | **Name** | Enter *VNet-name*. |
| **Region** | Select the region for the virtual network and dev boxes. |

:::image type="content" source="./media/how-to-manage-network-connection/example-basics-tab.png" alt-text="Screenshot of the Basics tab on the pane for creating a virtual network in the Azure portal." border="true":::
To create a network connection, you need an existing virtual network and subnet.
1. Select **Create**.
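If you prefer to script this setup, the following Azure CLI sketch creates a comparable virtual network and subnet and then, assuming the `devcenter` CLI extension is installed, a Microsoft Entra joined network connection. The names, region, and address ranges are illustrative placeholders.

```azurecli
# Create a virtual network with a default subnet (placeholder names and ranges).
az network vnet create \
    --name VNet-name \
    --resource-group rg-name \
    --location <region> \
    --address-prefixes 10.4.0.0/16 \
    --subnet-name default \
    --subnet-prefixes 10.4.0.0/24

# Create a Microsoft Entra joined network connection that uses the subnet.
az devcenter admin network-connection create \
    --name nc-example \
    --resource-group rg-name \
    --location <region> \
    --domain-join-type AzureADJoin \
    --subnet-id "/subscriptions/<subscriptionId>/resourceGroups/rg-name/providers/Microsoft.Network/virtualNetworks/VNet-name/subnets/default"
```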
-## Allow access to Dev Box endpoints from your network
+## Allow access to Microsoft Dev Box endpoints from your network
An organization can control network ingress and egress by using a firewall, network security groups, and even Microsoft Defender.
The following sections show you how to create and configure a network connection
### Types of Active Directory join
-The Dev Box service requires a configured and working Active Directory join, which defines how dev boxes join your domain and access resources. There are two choices:
+Microsoft Dev Box requires a configured and working Active Directory join, which defines how dev boxes join your domain and access resources. There are two choices:
- **Microsoft Entra join**: If your organization uses Microsoft Entra ID, you can use a Microsoft Entra join (sometimes called a native Microsoft Entra join). Dev box users sign in to Microsoft Entra joined dev boxes by using their Microsoft Entra account and access resources based on the permissions assigned to that account. Microsoft Entra join enables access to cloud-based and on-premises apps and resources.
You can remove a network connection from a dev center if you no longer want to u
The network connection is no longer available for use in the dev center.
-## Next steps
+## Related content
- [Manage a dev box definition](how-to-manage-dev-box-definitions.md) - [Manage a dev box pool](how-to-manage-dev-box-pools.md)
dev-box How To Create Dev Boxes Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-create-dev-boxes-developer-portal.md
Title: Create & configure a dev box by using the developer portal
+ Title: Manage a dev box in the developer portal
-description: Learn how to create, delete, and connect to Microsoft Dev Box dev boxes by using the developer portal.
+description: Learn how to create, delete, and connect to a dev box by using the Microsoft Dev Box developer portal.
Last updated 09/11/2023
-# Manage a dev box by using the developer portal
+# Manage a dev box by using the Microsoft Dev Box developer portal
-Developers can manage their dev boxes through the developer portal. As a developer, you can view information about your dev boxes. You can also connect to, start, stop, restart, and delete them.
+In this article, you learn how to manage a dev box by using the Microsoft Dev Box developer portal. Developers can access their dev boxes directly in the developer portal, instead of having to use the Azure portal.
+
+As a developer, you can view information about your dev boxes. You can also connect to, start, stop, restart, and delete them.
## Permissions
As a dev box developer, you can:
## Create a dev box
-You can create as many dev boxes as you need through the developer portal, but there are common ways to split up your workload.
-
-You could create a dev box for your front-end work and a separate dev box for your back-end work. You could also create multiple dev boxes for your back end.
+You can create as many dev boxes as you need through the Microsoft Dev Box developer portal. You might create a separate dev box for different scenarios, for example:
-For example, say you're working on a bug. You could use a separate dev box for the bug fix to work on the specific task and troubleshoot the issue without impacting your primary machine.
+- **Dev box per workload**: you could create a dev box for your front-end work and a separate dev box for your back-end work. You could also create multiple dev boxes for your back end.
+- **Dev box for bug fixing**: you could use a separate dev box for the bug fix to work on the specific task and troubleshoot the issue without impacting your primary machine.
You can create a dev box by using:
You can create a dev box by using:
## Connect to a dev box
-After you create your dev box, you can connect to it through a Remote Desktop app or through a browser.
+After you create your dev box, you can connect to it in two ways:
-Remote Desktop provides the highest performance and best user experience for heavy workloads. Remote Desktop also supports multi-monitor configuration. For more information, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md).
+- **Remote desktop client application**: remote desktop provides the highest performance and best user experience for heavy workloads. Remote Desktop also supports multi-monitor configuration. For more information, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md).
-Use the browser for lighter workloads. When you access your dev box via your phone or laptop, you can use the browser. The browser is useful for tasks such as a quick bug fix or a review of a GitHub pull request. For more information, see the [steps for using a browser to connect to a dev box](./quickstart-create-dev-box.md#connect-to-a-dev-box).
+- **Browser**: use the browser for lighter workloads. When you access your dev box via your phone or laptop, you can use the browser. The browser is useful for tasks such as a quick bug fix or a review of a GitHub pull request. For more information, see the [steps for using a browser to connect to a dev box](./quickstart-create-dev-box.md#connect-to-a-dev-box).
## Shut down, restart, or start a dev box
-You can perform many actions on a dev box through the developer portal by using the actions menu on the dev box tile. The options you see depend on the state of the dev box, and the configuration of the dev box pool it belongs to. For example, you can shut down or restart a running dev box, or start a stopped dev box.
+You can perform many actions on a dev box in the Microsoft Dev Box developer portal by using the actions menu on the dev box tile. The available options depend on the state of the dev box and the configuration of the dev box pool it belongs to. For example, you can shut down or restart a running dev box, or start a stopped dev box.
To shut down or restart a dev box.
To start a dev box:
## Get information about a dev box
-You can view information about a dev box, like the creation date, the dev center it belongs to, and the dev box pool it belongs to. You can also check the source image in use.
+You can use the Microsoft Dev Box developer portal to view information about a dev box, such as the creation date, and the dev center and dev box pool it belongs to. You can also check the source image in use.
To get more information about your dev box:
To get more information about your dev box:
## Delete a dev box
-When you no longer need a dev box, you can delete it.
+When you no longer need a dev box, you can delete it in the developer portal.
There are many reasons why you might not need a dev box anymore. Maybe you finished testing, or you finished working on a specific project within your product.
dev-box How To Customize Devbox Azure Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md
Title: Configure a dev box by using Azure VM Image Builder
-description: Learn how to create a custom image by using Azure VM Image Builder, and then create a dev box by using the image.
+description: Learn how to use Azure VM Image Builder to build a custom image for configuring dev boxes with Microsoft Dev Box.
Last updated 04/25/2023
-# Configure a dev box by using Azure VM Image Builder
+# Configure a dev box by using Azure VM Image Builder and Microsoft Dev Box
-When your organization uses standardized virtual machine (VM) images, it can more easily migrate to the cloud and help ensure consistency in your deployments.
+In this article, you use Azure VM Image Builder to create a customized dev box in Microsoft Dev Box by using a template. The template includes a customization step to install Visual Studio Code (VS Code).
-Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you can create a configuration that describes your image. The service then builds the image and submits it to a dev box project.
-
-In this article, you create a customized dev box by using a template. The template includes a customization step to install Visual Studio Code (VS Code).
+When your organization uses standardized virtual machine (VM) images, it can more easily migrate to the cloud and help ensure consistency in your deployments. Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you can create a configuration that describes your image. The service then builds the image and submits it to a dev box project.
Although it's possible to create custom VM images by hand or by using other tools, the process can be cumbersome and unreliable. VM Image Builder, which is built on HashiCorp Packer, gives you the benefits of a managed service.
To provision a custom image that you created by using VM Image Builder, you need
## Create a Windows image and distribute it to Azure Compute Gallery
-The next step is to use Azure VM Image Builder and Azure PowerShell to create an image version in Azure Compute Gallery and then distribute the image globally. You can also do this by using the Azure CLI.
+The first step is to use Azure VM Image Builder and Azure PowerShell to create an image version in Azure Compute Gallery and then distribute the image globally. You can also do this by using the Azure CLI.
1. To use VM Image Builder, you need to register the features.
$features = @($SecurityType)
New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2"
```
-1. Copy the following Azure Resource Manger template for VM Image Builder. This template indicates the source image and the customizations applied. This template installs Choco and VS Code. It also indicates where the image will be distributed.
+1. Copy the following Azure Resource Manager template for VM Image Builder. This template indicates the source image and the customizations applied. It installs Choco and VS Code, and it also indicates where the image is distributed.
```json
{
Alternatively, you can view the provisioning state of your image in the Azure po
After your custom image has been provisioned in the gallery, you can configure the gallery to use the images in the dev center. For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).
-## Set up the Dev Box service with a custom image
+## Set up Microsoft Dev Box with a custom image
-After the gallery images are available in the dev center, you can use the custom image with the Microsoft Dev Box service. For more information, see [Quickstart: Configure Microsoft Dev Box ](./quickstart-configure-dev-box-service.md).
+After the gallery images are available in the dev center, you can use the custom image with Microsoft Dev Box. For more information, see [Quickstart: Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md).
-## Next steps
+## Related content
- [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition)
dev-box How To Dev Box User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-dev-box-user.md
Title: Provide user access to dev box projects
+ Title: Grant user access to dev box projects
-description: Learn how to provide user-level access to projects for developers so that they can create and manage dev boxes.
+description: Learn how to grant user-level access to projects in Microsoft Dev Box to enable developers to create and manage dev boxes.
Last updated 04/25/2023
-# Provide user-level access to projects for developers
+# Grant user-level access to projects in Microsoft Dev Box
-Team members must have access to a specific Microsoft Dev Box project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory users or groups at the project level.
+In this article, you learn how to grant developers access to create and manage a dev box in the Microsoft Dev Box developer portal. Microsoft Dev Box uses Azure role-based access control (Azure RBAC) to grant access to functionality in the service.
+
+Team members must have access to a specific Microsoft Dev Box project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory users or groups. You assign the role at the project level in Microsoft Dev Box.
[!INCLUDE [supported accounts note](./includes/note-supported-accounts.md)]
A DevCenter Dev Box User can:
## Assign permissions to dev box users
+To grant a user access to create and manage a dev box in Microsoft Dev Box, you assign the DevCenter Dev Box User role at the project level.
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **projects**. In the list of results, select **Projects**.
The users can now view the project and all the pools within it. Dev box users ca
[!INCLUDE [dev box runs on creation note](./includes/note-dev-box-runs-on-creation.md)]
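As an alternative to the portal steps above, you can make the same assignment from the Azure CLI. This is a minimal sketch; the subscription ID, resource group, project name, and object ID are placeholders that you supply.

```azurecli
# Assign the DevCenter Dev Box User role at the project scope.
az role assignment create \
    --assignee <userOrGroupObjectId> \
    --role "DevCenter Dev Box User" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DevCenter/projects/<projectName>"
```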
-## Next steps
+## Related content
- [Quickstart: Create a dev box by using the developer portal](quickstart-create-dev-box.md)
dev-box How To Hibernate Your Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-hibernate-your-dev-box.md
Title: Hibernate your Microsoft Dev Box
+ Title: Hibernate a dev box
-description: Learn how to hibernate your dev boxes.
+description: Learn how to hibernate a dev box in Microsoft Dev Box. Use hibernation to shut down your VM, while preserving your active work.
#Customer intent: As a developer, I want to be able to hibernate my dev boxes so that I can resume work where I left off.
-# How to hibernate your dev box
+# Hibernate a dev box in Microsoft Dev Box
+
+In this article, you learn how to hibernate and resume a dev box in Microsoft Dev Box.
Hibernation is a power-saving state that saves your running applications to your hard disk and then shuts down the virtual machine (VM). When you resume the VM, all your previous work is restored.
-You can hibernate your dev box through the developer portal or the CLI. You can't hibernate your dev box from the dev box itself.
+You can hibernate your dev box through the Microsoft Dev Box developer portal or the CLI. You can't hibernate your dev box from within the virtual machine.
> [!IMPORTANT] > Dev Box Hibernation is currently in PREVIEW.
You can hibernate your dev box through the developer portal or the CLI. You can'
## Hibernate your dev box using the developer portal
-Hibernate your dev box through the developer portal:
+To hibernate your dev box through the Microsoft Dev Box developer portal:
1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
1. On the dev box you want to hibernate, on the more options menu, select **Hibernate**.
-Dev boxes that support hibernation will show the **Hibernate** option. Dev boxes that only support shutdown will show the **Shutdown** option.
+Dev boxes that support hibernation show the **Hibernate** option. Dev boxes that only support shutdown show the **Shutdown** option.
## Resume your dev box using the developer portal
-Resume your Dev box through the developer portal:
+To resume your dev box through the Microsoft Dev Box developer portal:
1. Sign in to the [developer portal](https://aka.ms/devbox-portal). 1. On the dev box you want to resume, on the more options menu, select **Resume**.
-In addition, you can also double select on your dev box in the list of VMs you see in the "Remote Desktop" app. Your dev box automatically starts up and resumes from a hibernating state.
+You can also open your dev box by double-clicking it in the list of VMs in the Remote Desktop app. Your dev box automatically starts up and resumes from a hibernating state.
-## Hibernate your dev box using the CLI
+## Hibernate your dev box using the Azure CLI
-You can use the CLI to hibernate your dev box:
+To hibernate your dev box by using the Azure CLI:
```azurecli-interactive az devcenter dev dev-box stop --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me" --hibernate true
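# To resume (start) a hibernated dev box from the command line, you can use the
# corresponding start command. This is a hedged sketch that mirrors the stop
# command above; verify the parameter names against your devcenter CLI extension.
az devcenter dev dev-box start --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me"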
To learn more about managing your dev box from the CLI, see: [devcenter referenc
**My dev box doesn't resume from hibernated state. Attempts to connect to it fail and I receive an error from the RDP app.**
-If your machine is unresponsive, it may have stalled either while going into hibernation or resuming from hibernation, you can manually reboot your dev box.
+If your machine is unresponsive, it might have stalled either while going into hibernation or while resuming from hibernation. You can manually reboot your dev box.
To shut down your dev box, either
To shut down your dev box, either
**When my dev box resumes from a hibernated state, all my open windows were gone.**
-Dev Box Hibernation is a preview feature, and you might run into reliability issues. Enable AutoSave on your applications to minimize the impact of session loss.
+Dev Box Hibernation is a preview feature, and you might run into reliability issues. Enable AutoSave on your applications to minimize the effects of session loss.
**I changed some settings on one of my dev boxes and it no longer hibernates. My other dev boxes hibernate without issues. What could be the problem?** Some settings aren't compatible with hibernation and prevent your dev box from hibernating. To learn about these settings, see: [Settings not compatible with hibernation](how-to-configure-dev-box-hibernation.md#settings-not-compatible-with-hibernation).
- ## Next steps
+ ## Related content
- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md) - [How to configure Dev Box Hibernation (preview)](how-to-configure-dev-box-hibernation.md)
dev-box How To Manage Dev Box Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md
Title: Create, update, delete dev box definitions
+ Title: Manage dev box definitions
-description: Microsoft Dev Box dev box definitions define a source image, compute size, and storage size for your dev boxes with. Learn how to manage dev box definitions.
+description: Microsoft Dev Box dev box definitions define a source image, compute size, and storage size for your dev boxes. Learn how to manage dev box definitions.
# Manage a dev box definition
-A dev box definition is a Microsoft Dev Box resource that specifies a source image, compute size, and storage size.
+In this article, you learn how to manage a dev box definition by using the Azure portal. A dev box definition is a Microsoft Dev Box resource that specifies the source image, compute size, and storage size for a dev box.
Depending on their task, development teams have different software, configuration, compute, and storage requirements. You can create a new dev box definition to fulfill each team's needs. There's no limit to the number of dev box definitions that you can create, and you can use dev box definitions across multiple projects in a dev center.
To manage a dev box definition, you need the following permissions:
## Sources of images
-When you create a dev box definition, you can choose a preconfigured image from Azure Marketplace or a custom image from Azure Compute Gallery.
+When you create a dev box definition, you need to select a virtual machine image. Microsoft Dev Box supports the following types of images:
+
+- Preconfigured images from the Azure Marketplace
+- Custom images stored in an Azure compute gallery
### Azure Marketplace
When you're selecting an Azure Marketplace image, consider using an image that h
### Azure Compute Gallery
-Azure Compute Gallery enables you to store and manage a collection of custom images. You can build an image to your dev team's exact requirements and store it in a gallery.
+Azure Compute Gallery enables you to store and manage a collection of custom images. You can build an image to your dev team's exact requirements and store it in a compute gallery.
-To use the custom image while creating a dev box definition, attach the gallery to your dev center. To learn how to attach a gallery, see [Configure Azure Compute Gallery](how-to-configure-azure-compute-gallery.md).
+To use the custom image while creating a dev box definition, attach the compute gallery to your dev center in Microsoft Dev Box. Follow these steps to [attach a compute gallery to a dev center](how-to-configure-azure-compute-gallery.md).
## Image versions
-When you select an image to use in your dev box definition, you must specify whether you'll use updated versions of the image:
+When you select an image to use in your dev box definition, you must specify which version of the image you want to use:
- **Numbered image versions**: If you want a consistent dev box definition in which the base image doesn't change, use a specific, numbered version of the image. Using a numbered version ensures that all the dev boxes in the pool always use the same version of the image.
- **Latest image versions**: If you want a flexible dev box definition in which you can update the base image as needs change, use the latest version of the image. This choice ensures that new dev boxes use the most recent version of the image. Existing dev boxes aren't modified when an image version is updated.

## Create a dev box definition
-You can create multiple dev box definitions to meet the needs of your developer teams.
+In Microsoft Dev Box, you can create multiple dev box definitions to meet the needs of your developer teams. You associate dev box definitions with a dev center.
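If you manage resources from the command line, you can also create a dev box definition with the Azure CLI. The following sketch assumes the `devcenter` CLI extension is installed; the image reference, SKU name, and storage type values are placeholders, so check them against your environment. The portal steps follow.

```azurecli
# Create a dev box definition in a dev center (placeholder values).
az devcenter admin devbox-definition create \
    --dev-center-name <devCenterName> \
    --resource-group <resourceGroupName> \
    --name <devBoxDefinitionName> \
    --image-reference id="<imageResourceId>" \
    --sku name="<skuName>" \
    --os-storage-type <osStorageType> \
    --location <region>
```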
The following steps show you how to create a dev box definition by using an existing dev center. If you don't have an available dev center, follow the steps in [Quickstart: Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md) to create one.
The following steps show you how to create a dev box definition by using an exis
## Update a dev box definition
-Over time, your needs for dev boxes will change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so that new dev boxes will use the new configuration.
+Over time, your needs for dev boxes can change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so that new dev boxes use the new configuration.
You can update the image, image version, compute, and storage settings for a dev box definition:
You can update the image, image version, compute, and storage settings for a dev
You can delete a dev box definition when you no longer want to use it. Deleting a dev box definition is permanent and can't be undone. Dev box definitions can't be deleted if one or more dev box pools are using them.
+To delete a dev box definition in the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **dev center**. In the list of results, select **Dev centers**.
You can delete a dev box definition when you no longer want to use it. Deleting
:::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-warning.png" alt-text="Screenshot of the warning message about deleting a dev box definition.":::
-## Next steps
+## Related content
- [Provide access to projects for project admins](./how-to-project-admin.md) - [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box How To Manage Dev Box Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md
Title: Manage a dev box pool
-description: Microsoft Dev Box dev box pools are collections of dev boxes that you manage together. Learn how to create, configure, and delete dev box pools.
+description: Microsoft Dev Box dev box pools are collections of dev boxes that you manage together. Learn how to create, configure, and delete dev box pools.
#Customer intent: As a platform engineer, I want to be able to manage dev box pools so that I can provide appropriate dev boxes to my users.
-# Manage a dev box pool
+# Manage a dev box pool in Microsoft Dev Box
-To allow developers to create their own dev boxes, you need to set up dev box pools that define the dev box specifications and network connections for new dev boxes. Developers can then create dev boxes from the dev box pools they have access to through their project memberships.
+In this article, you learn how to manage a dev box pool in Microsoft Dev Box by using the Azure portal.
+
+A dev box pool is the collection of dev boxes that have the same settings, such as the dev box definition and network connection. A dev box pool is associated with a Microsoft Dev Box project.
+
+Developers who have access to the project in the dev center can then choose to create a dev box from a dev box pool.
## Permissions
To manage a dev box pool, you need the following permissions:
## Create a dev box pool
-A dev box pool is a collection of dev boxes that you manage together. You must have a pool before users can create a dev box.
+In Microsoft Dev Box, a dev box pool is a collection of dev boxes that you manage together. You must have at least one dev box pool before users can create a dev box.
-The following steps show you how to create a dev box pool that's associated with a project. You'll use an existing dev box definition and network connection in the dev center to configure the pool.
+The following steps show you how to create a dev box pool that's associated with a project. You use an existing dev box definition and network connection in the dev center to configure the pool.
If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure Microsoft Dev Box ](quickstart-configure-dev-box-service.md) to create them.
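If you automate your setup, you can also create a dev box pool with the Azure CLI. The following sketch assumes the `devcenter` CLI extension is installed and uses placeholder names; verify the parameters against your CLI version.

```azurecli
# Create a dev box pool in a project (placeholder values).
az devcenter admin pool create \
    --project-name <projectName> \
    --resource-group <resourceGroupName> \
    --name <poolName> \
    --devbox-definition-name <devBoxDefinitionName> \
    --network-connection-name <networkConnectionName> \
    --local-administrator Enabled \
    --location <region>
```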
You can delete a dev box pool when you're no longer using it.
> [!CAUTION]
> When you delete a dev box pool, all existing dev boxes within the pool are permanently deleted.
+To delete a dev box pool in the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **projects**. In the list of results, select **Projects**.
You can delete a dev box pool when you're no longer using it.
:::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-delete-confirm.png" alt-text="Screenshot of the confirmation message for deleting a dev box pool.":::
-## Next steps
+## Related content
- [Provide access to projects for project admins](./how-to-project-admin.md) - [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition)
dev-box How To Manage Dev Box Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md
#Customer intent: As a platform engineer, I want to be able to manage dev box projects so that I can provide appropriate dev boxes to my users. -->
-# Manage a dev box project
+# Manage a Microsoft Dev Box project
+
+In this article, you learn how to manage a Microsoft Dev Box project by using the Azure portal.
+ A project is the point of access to Microsoft Dev Box for the development team members. A project contains dev box pools, which specify the dev box definitions and network connections used when dev boxes are created. Dev managers can configure the project with dev box pools that specify dev box definitions appropriate for their team's workloads. Dev box users create dev boxes from the dev box pools they have access to through their project memberships.
-Each project is associated with a single dev center. When you associate a project with a dev center, all the settings at the dev center level will be applied to the project automatically.
+Each project is associated with a single dev center. When you associate a project with a dev center, all the settings at the dev center level are applied to the project automatically.
## Project admins
-Microsoft Dev Box makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To provide users permissions to manage projects, add them to the DevCenter Project Admin role. The tasks in this quickstart can be performed by project admins.
+Microsoft Dev Box makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To provide users permissions to manage projects, add them to the DevCenter Project Admin role. The tasks in this article can be performed by project admins.
To learn how to add a user to the Project Admin role, refer to [Provide access to projects for project admins](how-to-project-admin.md).
To manage a dev box project, you need the following permissions:
|Manage a dev box within the project|Owner, Contributor, or DevCenter Project Admin.|
|Add a dev box user to the project|Owner permissions on the project.|
-## Create a dev box project
+## Create a Microsoft Dev Box project
-The following steps show you how to create and configure a project in dev box.
+The following steps show you how to create and configure a Microsoft Dev Box project.
1. In the [Azure portal](https://portal.azure.com), in the search box, type *Projects* and then select **Projects** from the list.
The following steps show you how to create and configure a project in dev box.
|-|-|
|**Subscription**|Select the subscription in which you want to create the project.|
|**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.|
- |**Dev center**|Select the dev center to which you want to associate this project. All the dev center level settings will be applied to the project.|
+ |**Dev center**|Select the dev center to which you want to associate this project. All the dev center level settings are applied to the project.|
|**Name**|Enter a name for your project. | |**Description**|Enter a brief description of the project. |
The following steps show you how to create and configure a project in dev box.
1. Confirm that the project is created successfully by checking the notifications. Select **Go to resource**.
1. Verify that you see the **Project** page.
-## Delete a dev box project
-You can delete a dev box project when you're no longer using it. Deleting a project is permanent and cannot be undone. You cannot delete a project that has dev box pools associated with it.
+
+## Delete a Microsoft Dev Box project
+
+You can delete a Microsoft Dev Box project when you're no longer using it. Deleting a project is permanent and can't be undone. You can't delete a project that has dev box pools associated with it.
1. Sign in to the [Azure portal](https://portal.azure.com).
You can delete a dev box project when you're no longer using it. Deleting a proj
:::image type="content" source="./media/how-to-manage-dev-box-projects/confirm-delete-project.png" alt-text="Screenshot of the Delete dev box pool confirmation message.":::
-## Provide access to a dev box project
+## Provide access to a Microsoft Dev Box project
+ Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it.

1. Sign in to the [Azure portal](https://portal.azure.com).
Before users can create dev boxes based on the dev box pools in a project, you m
:::image type="content" source="media/how-to-manage-dev-box-projects/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane.":::
-The user will now be able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal).
+The user is now able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal).
-To assign administrative access to a project, select the DevCenter Project Admin role. For more details on how to add a user to the Project Admin role, refer to [Provide access to projects for project admins](how-to-project-admin.md).
+To assign administrative access to a project, select the DevCenter Project Admin role. For more information on how to add a user to the Project Admin role, see [Provide access to projects for project admins](how-to-project-admin.md).
## Next steps
dev-box How To Manage Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md
# Manage a Microsoft Dev Box dev center
+In this article, you learn how to manage a dev center in Microsoft Dev Box by using the Azure portal.
+ Development teams vary in the way they function and might have different needs. A dev center helps you manage these scenarios by enabling you to group similar sets of projects together and apply similar settings. ## Permissions
To manage a dev center, you need the following permissions:
## Create a dev center
-Your development teams' requirements change over time. You can create a new dev center to support organizational changes like a new business requirement or a new regional center. You can create as many or as few dev centers as you need, depending on how you organize and manage your development teams.
+Your development teams' requirements change over time. You can create a new dev center in Microsoft Dev Box to support organizational changes like a new business requirement or a new regional center.
+
+You can create as many or as few dev centers as you need, depending on how you organize and manage your development teams.
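Besides the portal steps below, you can create a dev center from the Azure CLI. This is a minimal sketch that assumes the `devcenter` CLI extension is installed; the names and region are placeholders.

```azurecli
# Create a dev center (placeholder values).
az devcenter admin devcenter create \
    --name <devCenterName> \
    --resource-group <resourceGroupName> \
    --location <region>
```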
-To create a dev center:
+To create a dev center in the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com).
To create a dev center:
## Delete a dev center
-You might choose to delete a dev center to reflect organizational or workload changes. Deleting a dev center is irreversible, and you must prepare for the deletion carefully.
+You might choose to delete a dev center to reflect organizational or workload changes. Deleting a dev center in Microsoft Dev Box is irreversible, and you must prepare for the deletion carefully.
A dev center can't be deleted while any projects are associated with it. You must delete the projects before you can delete the dev center.
+
Attached network connections and their associated virtual networks are not deleted when you delete a dev center.

When you're ready to delete your dev center, follow these steps:
When you're ready to delete your dev center, follow these steps:
You can attach existing network connections to a dev center. You must attach a network connection to a dev center before you can use it in projects to create dev box pools.
+Network connections enable dev boxes to connect to existing virtual networks. The location, or Azure region, of the network connection determines where associated dev boxes are hosted.
+
+To attach a network connection to a dev center in Microsoft Dev Box:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **dev centers**. In the list of results, select **Dev centers**.
To make role assignments:
| **Assign access to** | Select **User, group, or service principal**. |
| **Members** | Select the users or groups that you want to be able to access the dev center. |
-## Next steps
+## Related content
- [Provide access to projects for project admins](./how-to-project-admin.md) - [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition)
dev-box How To Project Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md
Title: Provide administrative access to Microsoft Dev Box projects
-description: Learn how to manage multiple Dev Box projects by assigning admin permissions and delegating project administration.
+ Title: Grant admin access to dev box projects
+description: Learn how to manage multiple Microsoft Dev Box projects by granting admin permissions and delegating project administration.
Last updated 04/25/2023
-# Provide administrative access to Dev Box projects for project admins
+# Grant administrative access to Microsoft Dev Box projects
+
+In this article, you learn how to grant project administrators access to perform administrative tasks on Microsoft Dev Box projects. Microsoft Dev Box uses Azure role-based access control (Azure RBAC) to grant access to functionality in the service.
You can create multiple Microsoft Dev Box projects in the dev center to align with each team's specific requirements. By using the built-in DevCenter Project Admin role, you can delegate project administration to a member of a team. Project admins can use the network connections and dev box definitions configured at the dev center level to create and manage dev box pools within their project.
A DevCenter Project Admin can manage a project by:
## Assign permissions to project admins
+To grant a user project admin permission in Microsoft Dev Box, you assign the DevCenter Project Admin role at the project level.
Use the following steps to assign the DevCenter Project Admin role:

1. Sign in to the [Azure portal](https://portal.azure.com).
The users can now manage the project and create dev box pools within it.
[!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)]
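You can also make this assignment from the Azure CLI; the following sketch uses placeholder IDs and names.

```azurecli
# Assign the DevCenter Project Admin role at the project scope.
az role assignment create \
    --assignee <userOrGroupObjectId> \
    --role "DevCenter Project Admin" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DevCenter/projects/<projectName>"
```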
-## Next steps
+## Related content
- [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md)
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Title: 'Quickstart: Configure Microsoft Dev Box'
-description: In this quickstart, you learn how to configure the Microsoft Dev Box service to provide dev box workstations for users.
+description: In this quickstart, you set up the Microsoft Dev Box resources to enable developers to self-service a cloud-based dev box. Create a dev center, dev box definition, and dev box pool.
Last updated 04/25/2023
# Quickstart: Configure Microsoft Dev Box
-This quickstart describes how to set up Microsoft Dev Box to enable development teams to self-serve their dev boxes. A dev box is a virtual machine (VM) preconfigured with the tools and resources the developer needs for a project. A dev box acts as a day-to-day workstation for the developer.
+In this quickstart, you set up all the resources in Microsoft Dev Box that development teams need to self-serve their dev boxes. You learn how to create and configure a dev center, specify a dev box definition, and create a dev box pool. After you complete this quickstart, developers can use the developer portal to create and connect to a dev box.
+
+A dev box acts as a day-to-day cloud-based workstation for the developer. A dev box is a virtual machine (VM) preconfigured with the tools and resources the developer needs for a project.
The process of setting up Microsoft Dev Box involves two distinct phases. In the first phase, platform engineers configure the necessary Microsoft Dev Box resources through the Azure portal. After this phase is complete, users can proceed to the next phase, creating and managing their dev boxes through the developer portal. This quickstart shows you how to complete the first phase.
The following graphic shows the steps required to configure Microsoft Dev Box in
First, you create a dev center and a project to organize your dev box resources. Next, you configure network components to enable dev boxes to connect to your organizational resources. Then, you create a dev box definition that is used to create dev boxes. After that, you create a dev box pool to define the network connection and dev box definition that dev boxes to use. Users who have access to a project can create dev boxes from the pools associated with that project.
-After you complete this quickstart, you'll have Microsoft Dev Box set up ready for users to create and connect to dev boxes.
- If you already have a Microsoft Dev Box configured and you want to learn how to create and connect to dev boxes, refer to: [Quickstart: Create a dev box by using the developer portal](quickstart-create-dev-box.md). ## Prerequisites
To complete this quickstart, you need:
- If your organization routes egress traffic through a firewall, open the appropriate ports. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). ## 1. Create a dev center
+To get started with Microsoft Dev Box, you first create a dev center. A dev center in Microsoft Dev Box provides a centralized place to manage the collection of projects, the configuration of available dev box images and sizes, and the networking settings to enable access to organizational resources.
Use the following steps to create a dev center so that you can manage your dev box resources:

1. Sign in to the [Azure portal](https://portal.azure.com).
Because you're not configuring Deployment Environments, you can safely ignore th
## 2. Configure a network connection
-Network connections determine the region in which dev boxes are deployed. They also allow dev boxes to be connected to your existing virtual networks. The following steps show you how to create and configure a network connection in Microsoft Dev Box.
+To determine in which Azure region the developer workstations are hosted, you need to add at least one network connection to the dev center in Microsoft Dev Box. The dev boxes are hosted in the region that is associated with the network connection. The network connection also enables you to connect to existing virtual networks or resources hosted on-premises from within a dev box.
+
+The following steps show you how to create and configure a network connection in Microsoft Dev Box.
### Create a virtual network and subnet
After you attach a network connection, the Azure portal runs several health chec
To resolve any errors, see [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection).
-Dev boxes automatically register with Microsoft Intune when they're created. If your network connection displays a warning for the **Intune Enrollment Restrictions Allow Windows Enrollment** test, check the Intune Windows platform restriction policy, as it may block you from provisioning.
+Dev boxes automatically register with Microsoft Intune when they're created. If your network connection displays a warning for the **Intune Enrollment Restrictions Allow Windows Enrollment** test, check the Intune Windows platform restriction policy, as it might block you from provisioning.
:::image type="content" source="media/quickstart-configure-dev-box-service/network-connection-intune-warning.png" alt-text="Intune warning":::
To learn more, see [Step 5 ΓÇô Enroll devices in Microsoft Intune: Windows enrol
## 3. Create a dev box definition
-Dev box definitions define the image and SKU (compute + storage) that's used in the creation of the dev boxes. To create and configure a dev box definition:
+Next, you create a dev box definition in the dev center. A dev box definition defines the VM image and the VM SKU (compute size + storage) that's used in the creation of the dev boxes. Depending on the type of development project or developer profiles, you can create multiple dev box definitions. For example, some developers might need a specific developer tool set, whereas others need a cloud workstation that has more compute resources.
+
+The dev box definitions you create in a dev center are available for all projects associated with that dev center. You need to add at least one dev box definition to your dev center.
+
+To create and configure a dev box definition for your dev center:
1. Open the dev center in which you want to create the dev box definition.
Dev box definitions define the image and SKU (compute + storage) that's used in
|Name|Value|Note| |-|-|-|
- |**Name**|Enter a descriptive name for your dev box definition.|
+ |**Name**|Enter a descriptive name for your dev box definition.| |
|**Image**|Select the base operating system for the dev box. You can select an image from Azure Marketplace or from Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To access custom images when you create a dev box definition, you can use Azure Compute Gallery. For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).| |**Image version**|Select a specific, numbered version to ensure that all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure that new dev boxes use the latest image available.|Selecting the **Latest** image version enables the dev box pool to use the most recent version of your chosen image from the gallery. This way, the created dev boxes stay up to date with the latest tools and code for your image. Existing dev boxes aren't modified when an image version is updated.| |**Compute**|Select the compute combination for your dev box definition.||
Dev box definitions define the image and SKU (compute + storage) that's used in
## 4. Create a dev box pool
-A dev box pool is a collection of dev boxes that have similar settings. Dev box pools specify the dev box definitions and network connections that dev boxes use. You must associate at least one pool with your project before users can create a dev box.
+Now that you've defined a network connection and dev box definition in your dev center, you can create a dev box pool in the project. A dev box pool is the collection of dev boxes that have the same settings, such as the dev box definition and network connection. Developers who have access to the project in the dev center can then choose to create a dev box from a dev box pool.
+
+You must associate at least one dev box pool with your project before users can create a dev box.
To create a dev box pool that's associated with a project:
The Azure portal deploys the dev box pool and runs health checks to ensure that
## 5. Provide access to a dev box project
-Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it.
+Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You grant this access at the project level.
+
+You must have sufficient permissions to a project before you can add users to it.
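If you prefer scripting to the portal steps that follow, the same assignment can be made with the Azure CLI. The following is a hedged sketch: the built-in role name and the project resource ID format are assumptions to verify for your environment before use.

```azurecli
# Assumption: "DevCenter Dev Box User" is the built-in role for dev box users,
# and the scope is the resource ID of the project. Verify both before running.
az role assignment create \
    --assignee "<user-object-id>" \
    --role "DevCenter Dev Box User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DevCenter/projects/<project-name>"
```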
To assign roles:
You can assign the DevCenter Project Admin role by using the steps described ear
[!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)]
-## Next steps
+## Next step
In this quickstart, you configured the Microsoft Dev Box resources that are required to enable users to create their own dev boxes. To learn how to create and connect to a dev box, advance to the next quickstart:
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
Title: 'Quickstart: Create a dev box'
-description: In this quickstart, you learn how to create a dev box and connect to it through a browser.
+description: In this quickstart, learn how developers can create a dev box in the Microsoft Dev Box developer portal, and remotely connect to it through the browser.
Last updated 09/12/2023
#Customer intent: As a dev box user, I want to understand how to create and access a dev box so that I can start work.
-# Quickstart: Create a dev box by using the developer portal
+# Quickstart: Create and connect to a dev box by using the Microsoft Dev Box developer portal
In this quickstart, you get started with Microsoft Dev Box by creating a dev box through the developer portal. After you create the dev box, you can connect to it with a Remote Desktop session through a browser or through a Remote Desktop app.
-You can create and manage multiple dev boxes as a dev box user. Create a dev box for each task that you're working on, and create multiple dev boxes within a single project to help streamline your workflow.
+You can create and manage multiple dev boxes as a dev box user. Create a dev box for each task that you're working on, and create multiple dev boxes within a single project to help streamline your workflow. For example, you might switch to another dev box to fix a bug in a previous version, or if you need to work on a different part of the application.
## Prerequisites
To complete this quickstart, you need:
## Create a dev box
-1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
+Microsoft Dev Box enables you to create cloud-hosted developer workstations in a self-service way. You can create and manage dev boxes by using the developer portal.
+
+Depending on the project configuration and your permissions, you have access to different projects and associated dev box configurations.
+
+To create a dev box in the Microsoft Dev Box developer portal:
+
+1. Sign in to the [Microsoft Dev Box developer portal](https://aka.ms/devbox-portal).
2. Select **Get started**.
To complete this quickstart, you need:
## Connect to a dev box
-After you create a dev box, one way to access it quickly is through a browser:
+After you create a dev box, you can connect remotely to the developer VM. Microsoft Dev Box supports connecting to a dev box in the following ways:
+
+- Connect through the browser from within the developer portal
+- Connect by using a remote desktop client application
+
+To connect to a dev box by using the browser:
1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
After you create a dev box, one way to access it quickly is through a browser:
:::image type="content" source="./media/quickstart-create-dev-box/dev-portal-open-in-browser.png" alt-text="Screenshot of dev box card that shows the option for opening in a browser.":::
-A new tab opens with a Remote Desktop session through which you can use your dev box. Use a work or school account to log in to your dev box, not a personal Microsoft account.
+A new tab opens with a Remote Desktop session through which you can use your dev box. Use a work or school account to sign in to your dev box, not a personal Microsoft account.
## Clean up resources
When you no longer need your dev box, you can delete it:
[!INCLUDE [dev box runs on creation note](./includes/clean-up-resources.md)]
-## Next steps
+## Related content
In this quickstart, you created a dev box through the developer portal and connected to it by using a browser.
-To learn how to connect to a dev box by using a Remote Desktop app, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md).
+- Learn how to [connect to a dev box by using a Remote Desktop app](./tutorial-connect-to-dev-box-with-remote-desktop-app.md)
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
Title: 'Tutorial: Use a Remote Desktop client to connect to a dev box'
-description: In this tutorial, you learn how to download a Remote Desktop client and connect to your dev box. You also learn how to configure a dev box to use multiple monitors during a remote desktop session.
+description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box. Configure the RDP client for a multi-monitor setup.
Last updated 09/11/2023
-# Tutorial: Use a Remote Desktop client to connect to a dev box
+# Tutorial: Use a remote desktop client to connect to a dev box
-After you configure the Microsoft Dev Box service and create dev boxes, you can connect to them by using a browser or by using a Remote Desktop client.
+In this tutorial, you'll download and use a remote desktop client application to connect to a dev box in Microsoft Dev Box. Learn how to configure the application to take advantage of a multi-monitor setup.
Remote Desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a Remote Desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
+Alternatively, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal.
+ In this tutorial, you learn how to: > [!div class="checklist"]
To complete this tutorial, you must first:
- [Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md). - [Create a dev box](./quickstart-create-dev-box.md#create-a-dev-box) on the [developer portal](https://aka.ms/devbox-portal).
-## Download the client and connect to your dev box
+## Download the remote desktop client and connect to your dev box
+
+You can use a remote desktop client application to connect to your dev box in Microsoft Dev Box. Remote Desktop clients are available for many operating systems and devices.
-Remote Desktop clients are available for many operating systems and devices. In this tutorial, you can view the steps for Windows or the steps for a non-Windows operating system by selecting the relevant tab.
+Select the relevant tab to view the steps to download and use the remote desktop client application from Windows or non-Windows operating systems.
# [Windows](#tab/windows)
To use a non-Windows Remote Desktop client to connect to your dev box:
## Configure Remote Desktop to use multiple monitors
-Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors. Use the following steps to configure Remote Desktop to use multiple monitors.
+When you connect to your cloud-hosted developer machine in Microsoft Dev Box, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors.
+
+Use the following steps to configure Remote Desktop to use multiple monitors.
# [Windows](#tab/windows)
Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both s
|Single display |Remote desktop uses a single display. | |Select displays |Remote Desktop uses only the monitors you select. |
- :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-select-display.png" alt-text="Screenshot showing the remote desktop display settings. ":::
+ :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-select-display.png" alt-text="Screenshot showing the remote desktop display settings, highlighting the option to select the number of displays.":::
1. Close the settings pane, and then select your dev box to begin the remote desktop session.
The dev box might take a few moments to stop.
## Related content
-To learn about managing your dev box, see:
- - [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
+- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box)
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
within a policy rule, except the following functions and user-defined functions:
- deployment() - environment() - extensionResourceId()
+- [lambda()](../../../azure-resource-manager/templates/template-functions-lambda.md)
- listAccountSas() - listKeys() - listSecrets()
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 09/01/2023 Last updated : 10/27/2023
# Azure Resource Graph sample queries by category This page is a collection of Azure Resource Graph sample queries grouped by general and service
-categories. To jump to a specific **category**, use the menu on the right side of the page.
+categories. To jump to a specific **category**, use the links at the top of the page.
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature. ## Azure Advisor
Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature
[!INCLUDE [azure-resource-graph-samples-cat-azure-monitor](../../../../includes/resource-graph/samples/bycat/azure-monitor.md)] ++ ## Azure Policy [!INCLUDE [azure-resource-graph-samples-cat-azure-policy](../../../../includes/resource-graph/samples/bycat/azure-policy.md)]
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 09/01/2023 Last updated : 10/27/2023
# Azure Resource Graph sample queries by table This page is a collection of Azure Resource Graph sample queries grouped by table. To jump to a
-specific **table**, use the menu on the right side of the page. Otherwise, use
+specific **table**, use the links at the top of the page. Otherwise, use
<kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature. For a list of tables and related details, see [Resource Graph tables](../concepts/query-language.md#resource-graph-tables).
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
[!INCLUDE [Azure-resource-graph-samples-table-healthresourcechanges](../../../../includes/resource-graph/samples/bytable/healthresourcechanges.md)]
+## InsightResources
++ ## IoT Defender [!INCLUDE [azure-resource-graph-samples-table-iot-defender](../../../../includes/resource-graph/samples/bytable/iot-defender.md)]
details, see [Resource Graph tables](../concepts/query-language.md#resource-grap
[!INCLUDE [virtual-machine-basic-sku-public-ip](../../includes/resource-graph/query/virtual-machine-basic-sku-public-ip.md)] + ## SecurityResources [!INCLUDE [azure-resource-graph-samples-table-securityresources](../../../../includes/resource-graph/samples/bytable/securityresources.md)]
hdinsight-aks Trademarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trademarks.md
Product names, logos and other material used on this Azure HDInsight on AKS lear
- Apache, Apache Kafka, Kafka and the Kafka logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF). - Apache, Apache Flink, Flink and the Flink logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF). - Apache HBase, HBase and the HBase logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).-- Apache®, Apache Spark™, Apache HBase®, Apache Kafka®, and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. The use of these marks does not imply endorsement by The Apache Software Foundation.
+- Apache, Apache Cassandra, Cassandra and the Cassandra logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+- Apache®, Apache Spark™, Apache HBase®, Apache Kafka®, Apache Cassandra® and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. The use of these marks does not imply endorsement by The Apache Software Foundation.
hdinsight-aks Trino Add Catalogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-catalogs.md
Title: Configure catalogs in Azure HDInsight on AKS
description: Add catalogs to an existing Trino cluster in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure catalogs
Last updated 08/29/2023
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] Every Trino cluster comes by default with a few catalogs - system, tpcds, `tpch`. You can add your own catalogs the same way you would with OSS Trino.
-In addition, HDInsight on AKS Trino allows storing secrets in Key Vault so you don't have to specify them explicitly in ARM template.
+In addition, Trino with HDInsight on AKS allows storing secrets in Key Vault so you don't have to specify them explicitly in ARM template.
You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal.
This article demonstrates how you can add a new catalog to your cluster using AR
## Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Azure SQL database. * Azure SQL server login/password are stored in the Key Vault secrets, and the user-assigned MSI attached to your Trino cluster is granted permissions to read them. Refer to [store credentials in Key Vault and assign role to MSI](../prerequisites-resources.md#create-azure-key-vault). * Create an [ARM template](../create-cluster-using-arm-template-script.md) for your cluster.
This article demonstrates how you can add a new catalog to your cluster using AR
|files|List of Trino catalog files to be added to the cluster.| |filename|List of Trino catalog files to be added to the cluster.| |content|`json` escaped string to put into trino catalog file. This string should contain all trino-specific catalog properties, which depend on type of connector used. For more information, see OSS trino documentation.|
- |${SECRET_REF:\<referenceName\>}|Special tag to reference a secret from secretsProfile. HDInsight on AKS Trino at runtime fetch the secret from Key Vault and substitute it in catalog configuration.|
+ |${SECRET_REF:\<referenceName\>}|Special tag to reference a secret from secretsProfile. At runtime, Trino fetches the secret from Key Vault and substitutes it in the catalog configuration.|
|values|It's possible to specify catalog configuration using the content property as a single string, and using separate key-value pairs for each individual Trino catalog property as shown for the memory catalog.| Deploy the updated ARM template to reflect the changes in your cluster. Learn how to [deploy an ARM template](/azure/azure-resource-manager/templates/deploy-portal).
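For illustration, here's a hedged sketch of the kind of catalog properties that the `content` string (or the equivalent key-value pairs) might carry for the Azure SQL prerequisite. The connector properties follow the open-source Trino SQL Server connector; the secret reference names are assumptions and must match the secret names defined in your cluster's secrets profile.

```
# Illustrative only - verify property names against the Trino SQL Server connector documentation.
connector.name=sqlserver
connection-url=jdbc:sqlserver://<your-server>.database.windows.net:1433;database=<your-database>
connection-user=${SECRET_REF:sqlserver-user}
connection-password=${SECRET_REF:sqlserver-password}
```

When supplied through the `content` property of the ARM template, this text must be provided as a JSON-escaped string, as described in the table above.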
hdinsight-aks Trino Add Delta Lake Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-delta-lake-catalog.md
Title: Configure Delta Lake catalog
description: How to configure Delta Lake catalog in a Trino cluster. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure Delta Lake catalog [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article provides an overview of how to configure Delta Lake catalog in your HDInsight on AKS Trino cluster. You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal.
+This article provides an overview of how to configure Delta Lake catalog in your Trino cluster with HDInsight on AKS. You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal.
## Prerequisites
hdinsight-aks Trino Add Iceberg Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-iceberg-catalog.md
Title: Configure Iceberg catalog
description: How to configure iceberg catalog in a Trino cluster. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure Iceberg catalog [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article provides an overview of how to configure Iceberg catalog in HDInsight on AKS Trino cluster. You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal.
+This article provides an overview of how to configure Iceberg catalog in your Trino cluster with HDInsight on AKS. You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal.
## Prerequisites
hdinsight-aks Trino Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-airflow.md
Title: Use Airflow with Trino cluster
-description: How to create Airflow DAG connecting to Azure HDInsight on AKS Trino
+ Title: Use Apache Airflow with Trino cluster
+description: How to create Apache Airflow DAG to connect to Trino cluster with HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/19/2023
-# Use Airflow with Trino cluster
+# Use Apache Airflow™ with Trino cluster
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article demonstrates how to configure available open-source [Airflow Trino provider](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/https://docsupdatetracker.net/index.html) to connect to HDInsight on AKS Trino cluster.
-The objective is to show how you can connect Airflow to HDInsight on AKS Trino considering main steps as obtaining access token and running query.
+This article demonstrates how to configure the available open-source [Apache Airflow Trino provider](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/https://docsupdatetracker.net/index.html) to connect to your Trino cluster with HDInsight on AKS.
+The objective is to show how you can connect Airflow to Trino with HDInsight on AKS, covering the main steps of obtaining an access token and running a query.
## Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Airflow cluster. * Azure service principal client ID and secret to use for authentication. * [Allow access to the service principal to Trino cluster](../hdinsight-on-aks-manage-authorization-profile.md).
Now let's create simple DAG performing those steps. Complete code as follows
1. Copy the [following code](#example-code) and save it in $AIRFLOW_HOME/dags/example_trino.py, so Airflow can discover the DAG. 1. Update the script, entering your Trino cluster endpoint and authentication details.
-1. Trino endpoint (`trino_endpoint`) - HDInsight on AKS Trino cluster endpoint from Overview page in the Azure portal.
+1. Trino endpoint (`trino_endpoint`) - Trino cluster endpoint from Overview page in the Azure portal.
1. Azure Tenant ID (`azure_tenant_id`) - Identifier of your Azure Tenant, which can be found in the Azure portal. 1. Service Principal Client ID - Client ID of an application or service principal to use in Airflow for authentication to your Trino cluster. 1. Service Principal Secret - Secret for the service principal. A minimal token-acquisition sketch that uses these values appears after the note below.
-1. Pay attention to connection properties, which configure JWT authentication type, https and port. These values are required to connect to HDInsight on AKS Trino cluster.
+1. Pay attention to connection properties, which configure JWT authentication type, https and port. These values are required to connect to your Trino cluster.
> [!NOTE] > Give access to the service principal ID (object ID) to your Trino cluster. Follow the steps to [grant access](../hdinsight-on-aks-manage-authorization-profile.md).
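To illustrate the token-acquisition step on its own, here's a minimal sketch using [MSAL for Python](/entra/msal/python). The scope URI is a placeholder assumption; substitute the value your Trino cluster actually expects, along with your own tenant and service principal details.

```python
import msal

# Placeholder values - replace with your tenant, service principal, and the scope your cluster expects.
app = msal.ConfidentialClientApplication(
    client_id="<service-principal-client-id>",
    authority="https://login.microsoftonline.com/<azure-tenant-id>",
    client_credential="<service-principal-secret>",
)
# The client-credentials flow returns a dictionary containing the JWT access token.
result = app.acquire_token_for_client(scopes=["<trino-cluster-scope>/.default"])
access_token = result["access_token"]  # supplied to the Trino connection as the JWT
```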
After restarting Airflow, find and run example_trino DAG. Results of the sample
> For production scenarios, you should choose to handle connections and secrets differently, using Airflow secrets management. ## Next steps
-This example demonstrates basic steps required to connect Airflow to HDInsight on AKS Trino. Main steps are obtaining access token and running query.
+This example demonstrates the basic steps required to connect Airflow to Trino with HDInsight on AKS. The main steps are obtaining an access token and running a query.
## See also * [Getting started with Airflow](https://airflow.apache.org/docs/apache-airflow/stable/start.html) * [Airflow Trino provider](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/https://docsupdatetracker.net/index.html) * [Airflow secrets](https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/https://docsupdatetracker.net/index.html)
-* [HDInsight on AKS Trino authentication](./trino-authentication.md)
+* [Trino authentication for HDInsight on AKS](./trino-authentication.md)
* [MSAL for Python](/entra/msal/python)
hdinsight-aks Trino Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-authentication.md
Title: Client authentication
description: How to authenticate to Trino cluster Previously updated : 08/29/2023 Last updated : 10/19/2023 # Authentication mechanism [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-Azure HDInsight on AKS Trino provides tools such as CLI client, JDBC driver etc., to access the cluster, which is integrated with Microsoft Entra ID to simplify the authentication for users.
+Trino with HDInsight on AKS provides tools such as CLI client, JDBC driver etc., to access the cluster, which is integrated with Microsoft Entra ID to simplify the authentication for users.
Supported tools or clients need to authenticate using Microsoft Entra ID OAuth2 standards that are, a JWT access token issued by Microsoft Entra ID must be provided to the cluster endpoint. This section describes common authentication flows supported by the tools.
hdinsight-aks Trino Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-caching.md
Title: Configure caching
description: Learn how to configure caching in Trino Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure caching
Last updated 08/29/2023
Querying object storage using the Hive connector is a common use case for Trino. This process often involves sending large amounts of data. Objects are retrieved from HDFS or another supported object store by multiple workers and processed by those workers. Repeated queries with different parameters, or even different queries from different users, often access and transfer the same objects.
-HDInsight on AKS Trino has added **final result caching** capability, which provides the following benefits:
+HDInsight on AKS has added **final result caching** capability for Trino, which provides the following benefits:
* Reduce the load on object storage. * Improve the query performance.
Available configuration parameters are:
|`query.cache.max-result-data-size`|0|Max data size for a result. If this value exceeded, then result doesn't cache.| > [!NOTE]
-> Final result caching is using query plan and ttl as a cache key.
+> Final result caching uses query plan and ttl as a cache key.
-Final result caching can also be controlled through the following session parameters:
+### Final result caching can also be controlled through the following session parameters:
|Session parameter|Default|Description| ||||
Final result caching can also be controlled through the following session parame
#### Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
hdinsight-aks Trino Catalog Glue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-catalog-glue.md
Title: Query data from AWS S3 and with Glue
-description: How to configure HDInsight on AKS Trino catalogs with Glue as metastore
+ Title: Query data from AWS S3 and with AWS Glue
+description: How to configure Trino catalogs for HDInsight on AKS with AWS Glue as metastore
Previously updated : 08/29/2023 Last updated : 10/19/2023
-# Query data from AWS S3 and with Glue
+# Query data from AWS S3 using AWS Glue
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article provides examples of how you can add catalogs to an HDInsight on AKS Trino cluster where catalogs are using AWS Glue as metastore and S3 as storage.
+This article provides examples of how you can add catalogs to a Trino cluster with HDInsight on AKS where catalogs are using AWS Glue as metastore and AWS S3 as storage.
## Prerequisites
-* [Understanding of HDInsight on AKS Trino cluster configurations](./trino-service-configuration.md).
+* [Understanding of Trino cluster configurations for HDInsight on AKS](./trino-service-configuration.md).
* [How to add catalogs to an existing cluster](./trino-add-catalogs.md). * [AWS account with Glue and S3](./trino-catalog-glue.md#quickstart-with-aws-glue-and-s3).
-## Trino catalogs with S3 and Glue as metastore
+## Trino catalogs with AWS S3 and AWS Glue as metastore
Several Trino connectors support AWS Glue. More details on Glue configuration properties for catalogs can be found in the [Trino documentation](https://trino.io/docs/410/connector/hive.html#aws-glue-catalog-configuration-properties). Refer to [Quickstart with AWS Glue and S3](./trino-catalog-glue.md#quickstart-with-aws-glue-and-s3) for setting up AWS resources.
Refer to [Quickstart with AWS Glue and S3](./trino-catalog-glue.md#quickstart-wi
### Add Hive catalog
-You can add the following sample JSON in your HDInsight on AKS Trino cluster under `clusterProfile` section in the ARM template.
+You can add the following sample JSON in your Trino cluster under `clusterProfile` section in the ARM template.
<br>Update the values as per your requirement. ```json
You can add the following sample JSON in your HDInsight on AKS Trino cluster und
### Add Delta Lake catalog
-You can add the following sample JSON in your HDInsight on AKS Trino cluster under `clusterProfile` section in the ARM template.
+You can add the following sample JSON in your Trino cluster under `clusterProfile` section in the ARM template.
<br>Update the values as per your requirement. ```json
You can add the following sample JSON in your HDInsight on AKS Trino cluster und
``` ### Add Iceberg catalog
-You can add the following sample JSON in your HDInsight on AKS Trino cluster under `clusterProfile` section in the ARM template.
+You can add the following sample JSON in your Trino cluster under `clusterProfile` section in the ARM template.
<br>Update the values as per your requirement. ```json
Catalog examples in the previous code refer to access keys stored as secrets in
## Quickstart with AWS Glue and S3 ### 1. Create AWS user and save access keys to Azure Key Vault.
-Use existing or create new user in AWS IAM - this user is used by Trino connector to read data from Glue/S3. Create and retrieve access keys on Security Credentials tab and save them as secrets into [Azure Key Vault](/azure/key-vault/secrets/about-secrets) linked to your HDInsight on AKS Trino cluster. Refer to [Add catalogs to existing cluster](./trino-add-catalogs.md) for details on how to link Key Vault to your Trino cluster.
+Use an existing user or create a new one in AWS IAM - this user is used by the Trino connector to read data from Glue/S3. Create and retrieve access keys on the Security Credentials tab and save them as secrets into [Azure Key Vault](/azure/key-vault/secrets/about-secrets) linked to your Trino cluster. Refer to [Add catalogs to existing cluster](./trino-add-catalogs.md) for details on how to link Key Vault to your Trino cluster.
### 2. Create AWS S3 bucket Use an existing S3 bucket or create a new one; it's used in the Glue database as the location to store data.
Use existing or create new S3 bucket, it's used in Glue database as location to
In AWS Glue, create a new database, for example, "trinodb", and configure a location that points to your S3 bucket from the previous step, for example, `s3://trinoglues3/` ### 4. Configure Trino catalog
-Configure a Trino catalog using examples above [Trino catalogs with S3 and Glue as metastore](./trino-catalog-glue.md#trino-catalogs-with-s3-and-glue-as-metastore).
+Configure a Trino catalog using the examples above, [Trino catalogs with AWS S3 and AWS Glue as metastore](./trino-catalog-glue.md#trino-catalogs-with-aws-s3-and-aws-glue-as-metastore).
### 5. Create and query sample table Here are a few sample queries to test connectivity to AWS by reading and writing data. The schema name is the AWS Glue database name you created earlier.
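As a hedged illustration, queries along these lines exercise both the write and read paths; the catalog name `glue` is a placeholder for whatever you named your catalog, and `trinodb` stands in for the Glue database from the earlier step.

```sql
-- Illustrative only: replace "glue" with your catalog name and "trinodb" with your Glue database.
CREATE TABLE glue.trinodb.sample_table (id BIGINT, name VARCHAR);
INSERT INTO glue.trinodb.sample_table VALUES (1, 'example');
SELECT * FROM glue.trinodb.sample_table;
```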
hdinsight-aks Trino Connect To Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connect-to-metastore.md
Title: Add external Hive metastore database
description: Connecting to the HIVE metastore for Trino clusters in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Use external Hive metastore database [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-Hive metastore is used as a central repository for storing metadata about the data. This article describes how you can add a Hive metastore database to your HDInsight on AKS Trino cluster. There are two ways:
+Hive metastore is used as a central repository for storing metadata about the data. This article describes how you can add a Hive metastore database to your Trino cluster with HDInsight on AKS. There are two ways:
* You can add a Hive catalog and link it to an external Hive metastore database during [Trino cluster creation](./trino-create-cluster.md).
Hive metastore is used as a central repository for storing metadata about the da
The following example covers the addition of Hive catalog and metastore database to your cluster using ARM template. ## Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
hdinsight-aks Trino Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-cluster.md
Title: Create a Trino cluster - Azure portal
description: Creating a Trino cluster in HDInsight on AKS on the Azure portal. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Create a Trino cluster in the Azure portal (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article describes the steps to create an HDInsight on AKS Trino cluster by using the Azure portal.
+This article describes the steps to create a Trino cluster with HDInsight on AKS by using the Azure portal.
## Prerequisites
hdinsight-aks Trino Create Delta Lake Tables Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-delta-lake-tables-synapse.md
Title: Read Delta Lake tables (Synapse or External Location)
description: How to read external tables created in Synapse or other systems into a Trino cluster. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Read Delta Lake tables (Synapse or external location)
Last updated 08/29/2023
This article provides an overview of how to read a Delta Lake table without having any access to the metastore (Synapse or other metastores without public access).
-You can perform the following operations on the tables using HDInsight on AKS Trino.
+You can perform the following operations on the tables using Trino with HDInsight on AKS.
* DELETE * UPDATE
This section shows how to create a Delta table over a pre-existing location give
`abfss://container@storageaccount.dfs.core.windows.net/synapse/workspaces/workspace_name/warehouse/table_name/`
-1. Create a Delta Lake schema in HDInsight on AKS Trino.
+1. Create a Delta Lake schema in Trino.
```sql CREATE SCHEMA delta.default;
This section shows how to create a Delta table over a pre-existing location give
## Write Delta Lake tables in Synapse Spark
-Use `format("delta")` to save a dataframe as a Delta table, then you can use the path where you saved the dataframe as delta format to register the table in HDInsight on AKS Trino.
+Use `format("delta")` to save a dataframe as a Delta table. You can then use the path where you saved the dataframe in Delta format to register the table in Trino.
```python my_dataframe.write.format("delta").save("abfss://container@storageaccount.dfs.core.windows.net/synapse/workspaces/workspace_name/warehouse/table_name")
hdinsight-aks Trino Custom Plugins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-custom-plugins.md
Title: Add custom plugins in Azure HDInsight on AKS
description: Add custom plugins to an existing Trino cluster in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Custom plugins [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article provides details on how to deploy custom plugins to your HDInsight on AKS Trino cluster.
+This article provides details on how to deploy custom plugins to your Trino cluster with HDInsight on AKS.
Trino provides a rich interface allowing users to write their own plugins such as event listeners, custom SQL functions etc. You can add the configuration described in this article to make custom plugins available in your Trino cluster using ARM template. ## Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
hdinsight-aks Trino Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-fault-tolerance.md
Title: Configure fault-tolerance
-description: Learn how to configure fault-tolerance in HDInsight on AKS Trino.
+description: Learn how to configure fault-tolerance in Trino with HDInsight on AKS.
Previously updated : 08/29/2023 Last updated : 10/19/2023 # Fault-tolerant execution [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-HDInsight on AKS Trino supports [fault-tolerant execution](https://trino.io/docs/current/admin/fault-tolerant-execution.html) to mitigate query failures and increase resilience.
-This article describes how you can enable fault tolerance for your HDInsight on AKS Trino cluster.
+Trino supports [fault-tolerant execution](https://trino.io/docs/current/admin/fault-tolerant-execution.html) to mitigate query failures and increase resilience.
+This article describes how you can enable fault tolerance for your Trino cluster with HDInsight on AKS.
## Configuration
To enable fault-tolerant execution on queries/tasks with a larger result set, co
## Exchange manager Exchange manager is responsible for managing spooled data to back fault-tolerant execution. For more details, refer [Trino documentation]( https://trino.io/docs/current/admin/fault-tolerant-execution.html#fte-exchange-manager).
-<br>HDInsight on AKS Trino supports `filesystem` based exchange managers that can store the data in Azure Blob Storage (ADLS Gen 2). This section describes how to configure exchange manager with Azure Blob Storage.
+<br>Trino with HDInsight on AKS supports `filesystem` based exchange managers that can store the data in Azure Blob Storage (ADLS Gen 2). This section describes how to configure exchange manager with Azure Blob Storage.
To set up the exchange manager with Azure Blob Storage as the spooling destination, you need three required properties in the `exchange-manager.properties` file.
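As a point of reference, a minimal sketch of the file might look like the following. The property names come from the open-source Trino fault-tolerant execution documentation; the container, storage account, and connection string values are placeholders.

```
# Illustrative sketch - verify property names against the Trino fault-tolerant execution documentation.
exchange-manager.name=filesystem
exchange.base-directories=abfs://<container>@<storage-account>.dfs.core.windows.net
exchange.azure.connection-string=<storage-account-connection-string>
```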
You can find the connection string in *Security + Networking* -> *Access keys* s
:::image type="content" source="./media/trino-fault-tolerance/connection-string.png" alt-text="Screenshot showing storage account connection string." border="true" lightbox="./media/trino-fault-tolerance/connection-string.png"::: > [!NOTE]
-> HDInsight on AKS Trino currently does not support MSI authentication in exchange manager set up backed by Azure Blob Storage.
+> Trino with HDInsight on AKS currently does not support MSI authentication in exchange manager set up backed by Azure Blob Storage.
hdinsight-aks Trino Jvm Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-jvm-configuration.md
Title: Modifying JVM heap settings
description: How to modify initial and max heap size for Trino pods. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure JVM heap size [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article describes how to modify initial and max heap size for HDInsight on AKS Trino pods.
+This article describes how to modify initial and max heap size for Trino pods with HDInsight on AKS.
`-Xms` and `-Xmx` settings can be changed to control the initial and max heap size of Trino pods. You can modify the JVM heap settings using the ARM template.
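For context, these are standard JVM flags. In a plain `jvm.config` file they would look like the following sketch (the sizes are placeholders, and in HDInsight on AKS you set them through the ARM template rather than by editing the file directly):

```
# Placeholder sizes - actual values should be right-sized for your node SKU.
-Xms4G
-Xmx10G
```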
-> In HDInsight on AKS, Heap settings on Trino pods are already right-sized based on the selected SKU size. These settings should only be modified when a user wants to control JVM behavior on the pods and is aware of side-effects of changing these settings.
+> In HDInsight on AKS, heap settings on Trino pods are already right-sized based on the selected SKU size. These settings should only be modified when a user wants to control JVM behavior on the pods and is aware of side-effects of changing these settings.
## Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
hdinsight-aks Trino Miscellaneous Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-miscellaneous-files.md
Title: Using miscellaneous files
description: Using miscellaneous files with Trino clusters in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Using miscellaneous files
This article provides details on how to specify and use miscellaneous files conf
You can add the configurations for using miscellaneous files in your cluster using ARM template. For broader examples, refer to [Service configuration](./trino-service-configuration.md). ## Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
You can add the configurations for using miscellaneous files in your cluster usi
Each file specification in `miscfiles` component under `clusterProfile.serviceConfigsProfiles` in the ARM template requires:
-* `fileName`: Symbolic name of the file to use as a reference in other configurations. This name isn't a physical file name. To use given miscellaneous file in other configurations, specify `${MISC:\<fileName\>}` and HDInsight on AKS Trino substitutes this tag with actual file path at runtime provided value must satisfy the following conditions:
+* `fileName`: Symbolic name of the file to use as a reference in other configurations. This name isn't a physical file name. To use a given miscellaneous file in other configurations, specify `${MISC:\<fileName\>}` and HDInsight on AKS substitutes this tag with the actual file path at runtime. The provided value must satisfy the following conditions:
* Contain no more than 253 characters * Contain only lowercase alphanumeric characters, `-` or `.` * Start and end with an alphanumeric character
-* `path`: Relative file path including file name and extension if applicable. HDInsight on AKS Trino only guarantees location of each given miscellaneous file relative to other miscellaneous files that is, base directory may change. You can't assume anything about absolute path of miscellaneous files, except that it ends with value specified in ΓÇ£pathΓÇ¥ property.
+* `path`: Relative file path including file name and extension if applicable. Trino with HDInsight on AKS only guarantees the location of each given miscellaneous file relative to other miscellaneous files; that is, the base directory may change. You can't assume anything about the absolute path of miscellaneous files, except that it ends with the value specified in the "path" property.
* `content`: JSON escaped string with file content. The format of the content is specific to certain Trino functionality and may vary, for example, json for [resource groups](https://trino.io/docs/current/admin/resource-groups.html).
hdinsight-aks Trino Query Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-query-logging.md
Title: Query logging
-description: Log Query lifecycle events in Trino Cluster
+description: Log query lifecycle events in Trino cluster
Previously updated : 08/29/2023 Last updated : 10/19/2023 # Query logging [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-Trino supports custom [event listeners](https://trino.io/docs/current/develop/event-listener.html) that can be used to listen for Query lifecycle events. You can author your own event listeners or use a built-in plugin provided by HDInsight on AKS Trino that logs events to Azure Blob Storage.
+Trino supports custom [event listeners](https://trino.io/docs/current/develop/event-listener.html) that can be used to listen for Query lifecycle events. You can author your own event listeners or use a built-in plugin provided by HDInsight on AKS that logs events to Azure Blob Storage.
You can enable built-in query logging in two ways:
This article covers addition of query logging to your cluster using ARM template
## Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
hdinsight-aks Trino Scan Stats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-scan-stats.md
Title: Use scan statistics
description: How to enable, understand and query scan statistics using query log tables for Trino clusters for HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Enable scan statistics for queries
Last updated 08/29/2023
Often data teams are required to investigate performance or optimize queries to improve resource utilization or meet business requirements.
-A new capability has been added in HDInsight on AKS Trino that allows user to capture Scan statistics for any connector. This capability provides deeper insights into query performance profile beyond what is available in statistics produced by Trino.
+A new capability has been added in Trino for HDInsight on AKS that allows users to capture scan statistics for any connector. This capability provides deeper insights into the query performance profile beyond what is available in the statistics produced by Trino.
You can enable this feature using [session property](https://trino.io/docs/current/sql/set-session.html#session-properties.) `collect_raw_scan_statistics`, and by following Trino command: ```
hdinsight-aks Trino Service Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-service-configuration.md
Title: Trino cluster configuration
description: How to perform service configuration for Trino clusters for HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Trino configuration management [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-HDInsight on AKS Trino cluster comes with most of the default configurations of open-source Trino. This article describes how to update config files, and adds your own supplemental config files to the cluster.
+A Trino cluster with HDInsight on AKS comes with most of the default configurations of open-source Trino. This article describes how to update config files and how to add your own supplemental config files to the cluster.
You can add/update the configurations in two ways:
You can add/update the configurations in two ways:
* [Using ARM template](#using-arm-template) > [!NOTE]
-> HDInsight on AKS Trino enforces certain configurations and prohibits modification of some files and/or properties. This is done to ensure proper security/connectivity via configuration. Example of prohibited files/properties includes, but is not limited to:
+> Trino with HDInsight on AKS enforces certain configurations and prohibits modification of some files and/or properties. This is done to ensure proper security/connectivity via configuration. Examples of prohibited files/properties include, but are not limited to:
> * jvm.config file with the exception of Heap size settings. > * Node.properties: node.id, node.data-dir, log.path etc. > * `Config.properties: http-server.authentication.*, http-server.https.* etc.`
Follow the steps to modify the configurations:
### Prerequisites
-* An operational HDInsight on AKS Trino cluster.
+* An operational Trino cluster with HDInsight on AKS.
* Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview). ### Cluster management
-All HDInsight on AKS Trino configurations can be specified in `serviceConfigsProfiles.serviceName[ΓÇ£trinoΓÇ¥]` under `properties.clusterProfile`.
+All Trino configurations can be specified in `serviceConfigsProfiles.serviceName["trino"]` under `properties.clusterProfile`.
The following example focuses on `coordinator/worker/miscfiles`. For catalogs, see [Add catalogs to an existing cluster](trino-add-catalogs.md):
hdinsight-aks Trino Superset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-superset.md
Title: Use Apache Superset with HDInsight on AKS Trino
-description: Deploying Superset and connecting to HDInsight on AKS Trino
+ Title: Use Apache Superset with Trino on HDInsight on AKS
+description: Deploying Superset and connecting to Trino with HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/19/2023
-# Deploy Apache Superset
+# Deploy Apache Superset™
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] Visualization is essential to effectively explore, present, and share data. [Apache Superset](https://superset.apache.org/) allows you to run queries, visualize, and build dashboards over your data in a flexible Web UI.
-This article describes how to deploy an Apache Superset UI instance in Azure and connect it to HDInsight on AKS Trino cluster to query data and create dashboards.
+This article describes how to deploy an Apache Superset UI instance in Azure and connect it to a Trino cluster with HDInsight on AKS to query data and create dashboards.
Summary of the steps covered in this article: 1. [Prerequisites](#prerequisites).
Summary of the steps covered in this article:
*If using Windows, use [Ubuntu on WSL2](https://ubuntu.com/tutorials/install-ubuntu-on-wsl2-on-windows-11-with-gui-support#1-overview) to run these instructions in a bash shell Linux environment within Windows. Otherwise, you need to modify commands to work in Windows.*
-### Create a HDInsight on AKS Trino cluster and assign a Managed Identity
+### Create a Trino cluster and assign a Managed Identity
-1. If you haven't already, create an [HDInsight on AKS Trino cluster](trino-create-cluster.md).
+1. If you haven't already, create a [Trino cluster with HDInsight on AKS](trino-create-cluster.md).
2. For Apache Superset to call Trino, it needs to have a managed identity (MSI). Create or pick an existing [user assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
This step creates the Azure Kubernetes Service (AKS) cluster where you can insta
5. Select **Trino**.
- 6. Enter the SQL Alchemy URI of your HDInsight on AKS Trino cluster.
+ 6. Enter the SQL Alchemy URI of your Trino cluster.
You need to modify three parts of this connection string:
This step creates the Azure Kubernetes Service (AKS) cluster where you can insta
Now, you're ready to create datasets and charts.
-### Troubleshooting
+## Troubleshooting
* Verify your Trino cluster has been configured to allow the Superset cluster's user assigned managed identity to connect. You can verify this value by looking at the resource JSON of your Trino cluster (authorizationProfile/userIds). Make sure that you're using the identity's object ID, not the client ID.
hdinsight-aks Trino Ui Command Line Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-command-line-interface.md
Title: Trino CLI
description: Using Trino via CLI Previously updated : 08/29/2023 Last updated : 10/19/2023 # Trino CLI [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-The Trino CLI provides a terminal-based, interactive shell for running queries.
+The Trino CLI for HDInsight on AKS provides a terminal-based, interactive shell for running queries.
## Install on Windows
-For Windows, the Trino CLI is installed via an MSI, which gives you access to the CLI through the Windows Command Prompt (CMD) or PowerShell. When installing for Windows Subsystem for Linux (WSL), see [Install on Linux](#install-on-linux).
+For Windows, the Trino CLI for HDInsight on AKS is installed via an MSI, which gives you access to the CLI through the Windows Command Prompt (CMD) or PowerShell. When installing for Windows Subsystem for Linux (WSL), see [Install on Linux](#install-on-linux).
### Requirements
For Windows, the Trino CLI is installed via an MSI, which gives you access to th
### Install or update
-The MSI package is used for installing or updating the HDInsight on AKS Trino CLI on Windows.
+The MSI package is used for installing or updating the Trino CLI for HDInsight on AKS on Windows.
Download and install the latest release of the Trino CLI. When the installer asks if it can make changes to your computer, click the "Yes" box. After the installation is complete, you'll need to close and reopen any active Windows Command Prompt or PowerShell windows to use the Trino CLI.
hdinsight-aks Trino Ui Dbeaver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-dbeaver.md
Title: Trino with DBeaver
description: Using Trino in DBeaver. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Connect and query with DBeaver [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-It's possible to use JDBC driver with many available database tools. This article demonstrates how to configure one of the most popular tool **DBeaver** to connect to HDInsight on AKS Trino cluster in few simple steps.
+It's possible to use the JDBC driver with many available database tools. This article demonstrates how to configure one of the most popular tools, **DBeaver**, to connect to a Trino cluster with HDInsight on AKS in a few simple steps.
## Prerequisites * [Download and install DBeaver](https://dbeaver.io/download/).
-* [Install HDInsight on AKS Trino CLI with JDBC driver](./trino-ui-command-line-interface.md#install-on-windows).
+* [Install Trino CLI with JDBC driver for HDInsight on AKS](./trino-ui-command-line-interface.md#install-on-windows).
-## Configure DBeaver to use HDInsight on AKS Trino JDBC driver
+## Configure DBeaver to use Trino JDBC driver for HDInsight on AKS
Open DBeaver and from the main menu, select Database -> Driver Manager. > [!NOTE]
- > DBeaver comes with existing open-source Trino driver, create a copy of it and register HDInsight on AKS Trino JDBC driver.
+ > DBeaver comes with an existing open-source Trino driver; create a copy of it and register it as the Trino JDBC driver for HDInsight on AKS.
1. Select **Trino** driver from list and click **Copy**.
- * Update **Driver Name**, for example, "Azure Trino" or "Azure HDInsight on AKS Trino" or any other name.
+ * Update **Driver Name**, for example, "Azure Trino" or "Trino for HDInsight on AKS" or any other name.
* Make sure **Default Port** is 443.
- :::image type="content" source="./media/trino-ui-dbeaver/dbeaver-new-driver.png" alt-text="Screenshot showing Create new Azure Trino driver."
+ :::image type="content" source="./media/trino-ui-dbeaver/dbeaver-new-driver.png" alt-text="Screenshot showing Create new Trino driver for HDInsight on AKS."
1. Select **Libraries** tab. 1. Delete all libraries currently registered.
- 1. Click **Add File** and select [installed](./trino-ui-command-line-interface.md#install-on-windows) HDInsight on AKS Trino JDBC jar file from your local disk.
+ 1. Click **Add File** and select [installed](./trino-ui-command-line-interface.md#install-on-windows) Trino JDBC jar file for HDInsight on AKS from your local disk.
> [!NOTE]
- > HDInsight on AKS Trino CLI comes with Trino JDBC jar. You can find it in your local disk.
+ > Trino CLI for HDInsight on AKS comes with the Trino JDBC jar. You can find it on your local disk.
> <br> Reference location example: `C:\Program Files (x86)\Microsoft SDKs\Azure\TrinoCli-0.410.0\lib`. Location may differ if the installation directory or CLI version is different. 1. Click **Find Class** and select ```io.trino.jdbc.TrinoDriver```.
- :::image type="content" source="./media/trino-ui-dbeaver/dbeaver-new-driver-library.png" alt-text="Screenshot showing Select Azure Trino JDBC driver file."
+ :::image type="content" source="./media/trino-ui-dbeaver/dbeaver-new-driver-library.png" alt-text="Screenshot showing Select Trino JDBC driver file."
1. Click **OK** and close Driver Manager, the driver is configured to use.
-## Query and browse HDInsight on AKS Trino cluster with DBeaver
+## Query and browse Trino cluster with DBeaver
1. Connect to your Trino cluster by clicking **New Database Connection** in the toolbar.
hdinsight-aks Trino Ui Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-jdbc-driver.md
Title: Trino JDBC driver
description: Using the Trino JDBC driver. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Trino JDBC driver [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-HDInsight on AKS Trino provides JDBC driver, which supports Microsoft Entra authentication and adds few parameters for it.
+Trino with HDInsight on AKS provides a JDBC driver, which supports Microsoft Entra authentication and adds a few parameters for it.
## Install
-JDBC driver jar is included in the Trino CLI package, [Install HDInsight on AKS Trino CLI](./trino-ui-command-line-interface.md). If CLI is already installed, you can find it on your file system at following path:
+The JDBC driver jar is included in the Trino CLI package; see [Install Trino CLI for HDInsight on AKS](./trino-ui-command-line-interface.md). If the CLI is already installed, you can find the jar on your file system at the following path:
> Windows: `C:\Program Files (x86)\Microsoft SDKs\Azure\TrinoCli-<version>\lib` > > Linux: `~/lib/trino-cli`
hdinsight-aks Trino Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui.md
Title: Trino UI
description: Using Trino UI Previously updated : 08/29/2023 Last updated : 10/19/2023 # Trino UI [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article covers the details around the Trino UI provided for monitoring the cluster nodes and the queries submitted.
+This article covers the details around the Trino UI provided for monitoring the cluster nodes and queries submitted to Trino.
1. Sign in to [Azure portal](https://portal.azure.com).
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
This table lists the versions of HDInsight that are available in the Azure porta
| HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability |
| -- | -- | -- | -- | -- | -- | -- |
-| [HDInsight 5.1](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |Feb 27, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |
+| [HDInsight 5.1](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |November 1, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |
+| [HDInsight 5.0](./hdinsight-5x-component-versioning.md) |Ubuntu 18.0.4 LTS |March 11, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |
| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced | Not announced |Yes | **Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You might not be able to create clusters from the Azure portal.
Azure HDInsight supports the following Apache Spark versions.
| -- | -- |--|--|--|--|--|
| 4.0 | 2.4 | July 8, 2019 | End of life announced (EOLA)| February 10, 2023| August 10, 2023 | February 10, 2024 |
| 5.0 | 3.1 | March 11, 2022 | General availability |-|-|-|
-| 5.1 | 3.3 | October 26, 2023 | General availability |-|-|-|
+| 5.1 | 3.3 | November 1, 2023 | General availability |-|-|-|
## Support options for HDInsight versions
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
For workload specific versions, see
## What's new
-* HDInsight announces the General availability of HDInsight 5.1 starting October 26, 2023. This release brings in a full stack refresh to the [open source components](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x) and the integrations from Microsoft.
+* HDInsight announces the General availability of HDInsight 5.1 starting November 1, 2023. This release brings in a full stack refresh to the [open source components](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x) and the integrations from Microsoft.
* Latest Open Source Versions – [HDInsight 5.1](./hdinsight-5x-component-versioning.md) comes with the latest stable [open-source versions](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x) available. Customers can benefit from all the latest open-source features, Microsoft performance improvements, and bug fixes.
* Secure – The latest versions come with the most recent security fixes, both open-source security fixes and security improvements by Microsoft.
* Lower TCO – With performance enhancements, customers can lower the operating cost, along with [enhanced autoscale](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/enhanced-autoscale-capabilities-in-hdinsight-clusters/ba-p/3811271).
hdinsight Migrate 5 1 Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/migrate-5-1-versions.md
Azure HDInsight 5.1 offers the latest open-source components with significant en
### Apache Kafka 3.2.0
-If you migrate from Kafka to 3.2.0, you can take advantage of the following new features:
+If you migrate to Kafka 3.2.0 (HDI 5.1), you can take advantage of the following new features:
-- Support Automated consumer offsets sync across cluster in MM 2.0, making it easier to migrate or failover consumers across clusters. (KIP-545)
-- Hint to the partition leader to recover the partition: A new feature that allows the controller to communicate to a newly elected topic partition leader whether it needs to recover its state (KIP-704)
-- Supports TLS 1.2 by default for secure communication
-- Zookeeper Dependency Removal: Producers and consumers no longer need the zookeeper parameter. Use the `--bootstrap-server` option instead of `--zookeeper` with CLI commands. (KIP-500)
-- Configurable backlog size for creating Acceptor: A new configuration that allows setting the size of the SYN backlog for TCP's acceptor sockets on the brokers (KIP-764)
-- Top-level error code field to DescribeLogDirsResponse: A new error code that makes DescribeLogDirs API consistent with other APIs and allows returning other errors besides CLUSTER_AUTHORIZATION_FAILED (KIP-784)
+- Support for automated consumer offset sync across clusters in MM 2.0, making it easier to migrate or fail over consumers across clusters [KIP-545](https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0).
+- Hint to the partition leader to recover the partition: A new feature that allows the controller to communicate to a newly elected topic partition leader whether it needs to recover its state [KIP-704](https://cwiki.apache.org/confluence/display/KAFKA/KIP-704%3A+Send+a+hint+to+the+partition+leader+to+recover+the+partition).
+- Supports TLS 1.2 by default for secure communication.
+- Zookeeper Dependency Removal: Producers and consumers no longer need the zookeeper parameter. Use the `--bootstrap-server` option instead of `--zookeeper` with CLI commands [KIP-500](https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum). (See the client sketch after this list.)
+- Configurable backlog size for creating Acceptor: A new configuration that allows setting the size of the SYN backlog for TCP's acceptor sockets on the brokers [KIP-764](https://cwiki.apache.org/confluence/display/KAFKA/KIP-764%3A+Configurable+backlog+size+for+creating+Acceptor).
+- Top-level error code field to DescribeLogDirsResponse: A new error code that makes DescribeLogDirs API consistent with other APIs and allows returning other errors besides CLUSTER_AUTHORIZATION_FAILED [KIP-784](https://cwiki.apache.org/confluence/display/KAFKA/KIP-784%3A+Add+top-level+error+code+field+to+DescribeLogDirsResponse).
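Relating to the ZooKeeper dependency removal called out above, here's a minimal client sketch using the `kafka-python` package that connects with bootstrap servers only; the broker host names, topic, and consumer group are placeholders, not values from this article:

```python
# Minimal sketch: consume with bootstrap servers only (no ZooKeeper parameter),
# as expected after KIP-500. Hostnames, topic, and group ID are placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-topic",
    bootstrap_servers=["wn0-kafka.example.com:9092", "wn1-kafka.example.com:9092"],
    group_id="example-group",
    auto_offset_reset="earliest",
)

for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```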
For a complete list of updates, see [Apache Kafka 3.2.0 release notes](https://archive.apache.org/dist/kafka/3.2.0/RELEASE_NOTES.html). - ## Kafka client compatibility
-New Kafka brokers support older clients. [KIP-35 - Retrieving protocol version](https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version) introduced a mechanism for dynamically determining the functionality of a Kafka broker and [KIP-97: Improved Kafka Client RPC Compatibility Policy](https://cwiki.apache.org/confluence/display/KAFKA/KIP-97%3A+Improved+Kafka+Client+RPC+Compatibility+Policy) introduced a new compatibility policy and guarantees for the Java client. Previously, a Kafka client had to interact with a broker of the same version or a newer version. Now, newer versions of the Java clients and other clients that support KIP-35 such as `librdkafka` can fall back to older request types or throw appropriate errors if functionality isn't available.
+New Kafka brokers support older clients. [KIP-35 - Retrieving protocol version](https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version) introduced a mechanism for dynamically determining the functionality of a Kafka broker and [KIP-97: Improved Kafka Client RPC Compatibility Policy](https://cwiki.apache.org/confluence/display/KAFKA/KIP-97%3A+Improved+Kafka+Client+RPC+Compatibility+Policy) introduced a new compatibility policy and guarantees for the Java client. Previously, a Kafka client had to interact with a broker of the same version or a newer version. Now, newer versions of the Java clients and other clients that support [KIP-35](https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version) such as `librdkafka` can fall back to older request types or throw appropriate errors if functionality isn't available.
:::image type="content" source="./media/migrate-5-1-versions/client-compatibility.png" alt-text="Screenshot shows Upgrade Kafka client compatibility." lightbox="./media/migrate-5-1-versions/client-compatibility.png":::
healthcare-apis Deploy Dicom Services In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md
Title: Deploy DICOM service using the Azure portal - Azure Health Data Services
-description: This article describes how to deploy DICOM service in the Azure portal.
+ Title: Deploy the DICOM service by using the Azure portal - Azure Health Data Services
+description: This article describes how to deploy the DICOM service in the Azure portal.
# Deploy the DICOM service
-In this quickstart, you'll learn how to deploy the DICOM&reg; service using the Azure portal.
+In this quickstart, you learn how to deploy the DICOM&reg; service by using the Azure portal.
-Once deployment is complete, you can use the Azure portal to navigate to the newly created DICOM service to see the details including your service URL. The service URL to access your DICOM service will be: ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the version as part of the url when making requests. More information can be found in the [API Versioning for DICOM service documentation](api-versioning-dicom-service.md).
+After deployment is finished, you can use the Azure portal to go to the newly created DICOM service to see the details, including your service URL. The service URL to access your DICOM service is ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the version as part of the URL when you make requests. For more information, see [API versioning for the DICOM service](api-versioning-dicom-service.md).
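For example, a versioned request might look like the following sketch, which uses the Python `requests` package; the API version segment, the access token, and the studies search call are illustrative assumptions rather than steps from this quickstart:

```python
# Minimal sketch: include the API version in every request to the DICOM service.
# The service URL, API version, and access token are placeholders.
import requests

service_url = "https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com"
api_version = "v1"        # assumption: use the API version your service supports
access_token = "<Microsoft Entra access token>"

response = requests.get(
    f"{service_url}/{api_version}/studies",   # example QIDO-RS search request
    headers={
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/dicom+json",
    },
)
print(response.status_code)
```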
## Prerequisites To deploy the DICOM service, you need a workspace created in the Azure portal. For more information, see [Deploy a workspace in the Azure portal](../healthcare-apis-quickstart.md).
-## Deploying DICOM service
+## Deploy the DICOM service
1. On the **Resource group** page of the Azure portal, select the name of your **Azure Health Data Services workspace**.
- [ ![Screenshot of select workspace resource group.](media/select-workspace-resource-group.png) ](media/select-workspace-resource-group.png#lightbox)
+ [![Screenshot that shows selecting a workspace resource group.](media/select-workspace-resource-group.png) ](media/select-workspace-resource-group.png#lightbox)
-2. Select **Deploy DICOM service**.
+1. Select **Deploy DICOM service**.
- [ ![Screenshot of deploy DICOM service.](media/workspace-deploy-dicom-services.png) ](media/workspace-deploy-dicom-services.png#lightbox)
+ [![Screenshot that shows deploying the DICOM service.](media/workspace-deploy-dicom-services.png) ](media/workspace-deploy-dicom-services.png#lightbox)
+1. Select **Add DICOM service**.
-3. Select **Add DICOM service**.
+ [![Screenshot that shows adding the DICOM service.](media/add-dicom-service.png) ](media/add-dicom-service.png#lightbox)
- [ ![Screenshot of add DICOM service.](media/add-dicom-service.png) ](media/add-dicom-service.png#lightbox)
+1. Enter a name for the DICOM service, and then select **Review + create**.
+ [![Screenshot that shows the DICOM service name.](media/enter-dicom-service-name.png) ](media/enter-dicom-service-name.png#lightbox)
-4. Enter a name for DICOM service, and then select **Review + create**.
+1. (Optional) Select **Next: Tags**.
- [ ![Screenshot of DICOM service name.](media/enter-dicom-service-name.png) ](media/enter-dicom-service-name.png#lightbox)
+ Tags are name/value pairs used for categorizing resources. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
+1. When the green validation check mark appears, select **Create** to deploy the DICOM service.
- (**Optional**) Select **Next: Tags >**.
+1. After the deployment process is finished, select **Go to resource**.
- Tags are name/value pairs used for categorizing resources. For information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
+ [![Screenshot that shows Go to resource.](media/go-to-resource.png) ](media/go-to-resource.png#lightbox)
-5. When you notice the green validation check mark, select **Create** to deploy DICOM service.
-
-6. When the deployment process completes, select **Go to resource**.
-
- [ ![Screenshot of DICOM go to resource.](media/go-to-resource.png) ](media/go-to-resource.png#lightbox)
-
- The result of the newly deployed DICOM service is shown below.
-
- [ ![Screenshot of DICOM finished deployment.](media/results-deployed-dicom-service.png) ](media/results-deployed-dicom-service.png#lightbox)
+ The result of the newly deployed DICOM service is shown here.
+ [![Screenshot that shows the DICOM finished deployment.](media/results-deployed-dicom-service.png) ](media/results-deployed-dicom-service.png#lightbox)
## Next steps
-[Assign roles for the DICOM service](../configure-azure-rbac.md#assign-roles-for-the-dicom-service)
-
-[Use DICOMweb Standard APIs with DICOM services](dicomweb-standard-apis-with-dicom-services.md)
+* [Assign roles for the DICOM service](../configure-azure-rbac.md#assign-roles-for-the-dicom-service)
+* [Use DICOMweb Standard APIs with DICOM services](dicomweb-standard-apis-with-dicom-services.md)
[!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)]
healthcare-apis Dicom Digital Pathology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-digital-pathology.md
Here are some samples open source tools to build your own converter:
### Storage
-Each converted WSI results in a DICOM series with multiple instances. We recommend uploading each instance as a single part POST for better performance.
+Each converted WSI results in a DICOM series with multiple instances.
+
+#### Batch upload
+Considering the large size and number of instances that need to be uploaded, we recommend a batch upload of each series, or of a batch of converted WSIs, by using [Import](import-files.md).
+
+#### Streaming upload
+If you want to upload each file as it's converted, you can use a STOW single-part request, as shown below.
[Prerequisites](dicomweb-standard-apis-curl.md#prerequisites)
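As an illustration, a single-part upload from Python might look like this sketch; the service URL, API version, token, and file name are placeholders, and the single-part `application/dicom` request body is an assumption based on the DICOMweb Store (STOW-RS) APIs referenced above:

```python
# Minimal sketch: upload one converted DICOM instance with a single-part STOW request.
# The service URL, API version, access token, and file path are placeholders.
import requests

service_url = "https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com"
access_token = "<Microsoft Entra access token>"

with open("converted-wsi-instance.dcm", "rb") as dicom_file:
    response = requests.post(
        f"{service_url}/v1/studies",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/dicom",   # single-part body
            "Accept": "application/dicom+json",
        },
        data=dicom_file,
    )
print(response.status_code)
```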
Follow the [CORS guidelines](configure-cross-origin-resource-sharing.md) if the
## Find an ISV partner Reach out to dicom-support@microsoft.com if you want to work with our partner ISVs that provide end-to-end solutions and support.
healthcare-apis Export Dicom Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-dicom-files.md
Title: Export DICOM files using the export API of the DICOM service
-description: This how-to guide explains how to export DICOM files to an Azure Blob Storage account
+ Title: Export DICOM files by using the export API of the DICOM service
+description: This how-to guide explains how to export DICOM files to an Azure Blob Storage account.
Last updated 10/14/2022
-# Export DICOM Files
+# Export DICOM files
-The DICOM service provides the ability to easily export DICOM data in a file format, simplifying the process of using medical imaging in external workflows, such as AI and machine learning. DICOM studies, series, and instances can be exported in bulk to an [Azure Blob Storage account](../../storage/blobs/storage-blobs-introduction.md) using the export API. DICOM data that is exported to a storage account will be exported as a `.dcm` file in a folder structure that organizes instances by `StudyInstanceID` and `SeriesInstanceID`.
+The DICOM&reg; service provides the ability to easily export DICOM data in a file format. The service simplifies the process of using medical imaging in external workflows, such as AI and machine learning. You can use the export API to export DICOM studies, series, and instances in bulk to an [Azure Blob Storage account](../../storage/blobs/storage-blobs-introduction.md). DICOM data that's exported to a storage account is exported as a `.dcm` file in a folder structure that organizes instances by `StudyInstanceID` and `SeriesInstanceID`.
-There are three steps to exporting data from the DICOM service:
+There are three steps to exporting data from the DICOM service:
-- Enable a system assigned managed identity for the DICOM service.-- Configure a new or existing storage account and give permission to the system managed identity.
+- Enable a system-assigned managed identity for the DICOM service.
+- Configure a new or existing storage account and give permission to the system-assigned managed identity.
- Use the export API to create a new export job to export the data. ## Enable managed identity for the DICOM service
-The first step to export data from the DICOM service is to enable a system managed identity. This managed identity is used to authenticate the DICOM service and give permission to the storage account used as the destination for export. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+The first step to export data from the DICOM service is to enable a system-assigned managed identity. This managed identity is used to authenticate the DICOM service and give permission to the storage account used as the destination for export. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
1. In the Azure portal, browse to the DICOM service that you want to export from and select **Identity**.
+ :::image type="content" source="media/dicom-export-identity.png" alt-text="Screenshot that shows selection of Identity view." lightbox="media/dicom-export-identity.png":::
-2. Set the **Status** option to **On**, and then select **Save**.
+1. Set the **Status** option to **On**, and then select **Save**.
+ :::image type="content" source="media/dicom-export-enable-system-identity.png" alt-text="Screenshot that shows the system-assigned identity toggle." lightbox="media/dicom-export-enable-system-identity.png":::
-3. Select **Yes** in the confirmation dialog that appears.
+1. Select **Yes** in the confirmation dialog that appears.
+ :::image type="content" source="media/dicom-export-confirm-enable.png" alt-text="Screenshot that shows the dialog confirming enabling system identity." lightbox="media/dicom-export-confirm-enable.png":::
-It will take a few minutes to create the system managed identity. When the system identity has been enabled, an **Object (principal) ID** will be displayed.
+It takes a few minutes to create the system-assigned managed identity. After the system identity is enabled, an **Object (principal) ID** appears.
-## Give storage account permissions to the system managed identity
+## Give storage account permissions to the system-assigned managed identity
-The system managed identity will need **Storage Blob Data Contributor** permission to write data to the destination storage account.
+The system-assigned managed identity needs **Storage Blob Data Contributor** permission to write data to the destination storage account.
-1. Under **Permissions** select **Azure role assignments**.
+1. Under **Permissions**, select **Azure role assignments**.
+ :::image type="content" source="media/dicom-export-azure-role-assignments.png" alt-text="Screenshot that shows the Azure role assignments button on the Identity view." lightbox="media/dicom-export-azure-role-assignments.png":::
+1. Select **Add role assignment**. On the **Add role assignment** pane, make the following selections:
-2. Select **Add role assignment**. On the **Add role assignment** panel, make the following selections:
* Under **Scope**, select **Storage**.
- * Under **Resource**, select the destination storage account for the export operation.
- * Under **Role**, select **Storage Blob Data Contributor**.
+ * Under **Resource**, select the destination storage account for the export operation.
+ * Under **Role**, select **Storage Blob Data Contributor**.
+ :::image type="content" source="media/dicom-export-add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment pane." lightbox="media/dicom-export-add-role-assignment.png":::
-3. Select **Save** to add the permission to the system managed identity.
+1. Select **Save** to add the permission to the system-assigned managed identity.
## Use the export API
-The export API exposes one `POST` endpoint for exporting data.
+The export API exposes one `POST` endpoint for exporting data.
``` POST <dicom-service-url>/<version>/export ```
-Given a *source*, the set of data to be exported, and a *destination*, the location to which data will be exported, the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. See [Operation Status](#operation-status) below for more details about monitoring progress of export operations.
+Given a *source*, the set of data to be exported, and a *destination*, the location to which data will be exported, the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. For more information about monitoring progress of export operations, see the [Operation status](#operation-status) section.
-Any errors encountered while attempting to export will be recorded in an error log. See [Errors](#errors) below for details.
+Any errors encountered while you attempt to export are recorded in an error log. For more information, see the [Errors](#errors) section.
## Request
The only setting is the list of identifiers to export.
| Property | Required | Default | Description | | :- | :- | : | :- |
-| `Values` | Yes | | A list of one or more DICOM studies, series, and/or SOP instances identifiers in the format of `"<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"`. |
+| `Values` | Yes | | A list of one or more DICOM studies, series, and/or SOP instance identifiers in the format of `"<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"` |
### Destination settings
-The connection to the Azure Blob storage account is specified with a `BlobContainerUri`.
+The connection to the Blob Storage account is specified with `BlobContainerUri`.
| Property | Required | Default | Description | | :- | :- | : | :- |
-| `BlobContainerUri` | No | `""` | The complete URI for the blob container. |
-| `UseManagedIdentity` | Yes | `false` | A required flag indicating whether managed identity should be used to authenticate to the blob container. |
+| `BlobContainerUri` | No | `""` | The complete URI for the blob container |
+| `UseManagedIdentity` | Yes | `false` | A required flag that indicates whether managed identity should be used to authenticate to the blob container |
### Example
-The below example requests the export of the following DICOM resources to the blob container named `export` in the storage account named `dicomexport`:
-- All instances within the study whose `StudyInstanceUID` is `1.2.3`.-- All instances within the series whose `StudyInstanceUID` is `12.3` and `SeriesInstanceUID` is `4.5.678`.-- The instance whose `StudyInstanceUID` is `123.456`, `SeriesInstanceUID` is `7.8`, and `SOPInstanceUID` is `9.1011.12`.
+The following example requests the export of the following DICOM resources to the blob container named `export` in the storage account named `dicomexport`:
+
+- All instances within the study whose `StudyInstanceUID` is `1.2.3`
+- All instances within the series whose `StudyInstanceUID` is `12.3` and `SeriesInstanceUID` is `4.5.678`
+- The instance whose `StudyInstanceUID` is `123.456`, `SeriesInstanceUID` is `7.8`, and `SOPInstanceUID` is `9.1011.12`
```http POST /export HTTP/1.1
Content-Type: application/json
The export API returns a `202` status code when an export operation is started successfully. The body of the response contains a reference to the operation, while the value of the `Location` header is the URL for the export operation's status (the same as `href` in the body).
-Inside of the destination container, the DCM files can be found with the following path format: `<operation id>/results/<study>/<series>/<sop instance>.dcm`
+Inside the destination container, use the path format `<operation id>/results/<study>/<series>/<sop instance>.dcm` to find the DCM files.
```http HTTP/1.1 202 Accepted
Content-Type: application/json
``` ### Operation status
-The above `href` URL can be polled for the current status of the export operation until completion. Once the job has reached a terminal state, the API will return a 200 status code instead of 202, and the value of its status property will be updated accordingly.
+
+Poll the preceding `href` URL for the current status of the export operation until completion. After the job has reached a terminal state, the API returns a 200 status code instead of 202. The value of its status property is updated accordingly.
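A polling loop might look like the following sketch; it assumes the operation URL returned in the `Location` header, a bearer token, and the 202/200 status behavior described above:

```python
# Minimal sketch: poll the export operation URL until the API returns 200 (terminal state).
# The operation URL and access token are placeholders.
import time
import requests

operation_url = "<value of the Location header from the export response>"
access_token = "<Microsoft Entra access token>"

while True:
    response = requests.get(
        operation_url,
        headers={"Authorization": f"Bearer {access_token}"},
    )
    if response.status_code == 200:   # terminal state; body contains the final status
        print(response.json())
        break
    time.sleep(10)                    # 202: still running, wait and poll again
```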
```http HTTP/1.1 200 OK
Content-Type: application/json
## Errors
-If there are any user errors when exporting a DICOM file, then the file is skipped and its corresponding error is logged. This error log is also exported alongside the DICOM files and can be reviewed by the caller. The error log can be found at `<export blob container uri>/<operation ID>/errors.log`.
+If there are any user errors when you export a DICOM file, the file is skipped and its corresponding error is logged. This error log is also exported alongside the DICOM files and the caller can review it. You can find the error log at `<export blob container uri>/<operation ID>/errors.log`.
### Format
-Each line of the error log is a JSON object with the following properties. A given error identifier may appear multiple times in the log as each update to the log is processed *at least once*.
+Each line of the error log is a JSON object with the following properties. A given error identifier might appear multiple times in the log as each update to the log is processed *at least once*.
| Property | Description | | | -- |
-| `Timestamp` | The date and time when the error occurred. |
-| `Identifier` | The identifier for the DICOM study, series, or SOP instance in the format of `"<study instance UID>[/<series instance UID>[/<SOP instance UID>]]"`. |
-| `Error` | The detailed error message. |
+| `Timestamp` | The date and time when the error occurred |
+| `Identifier` | The identifier for the DICOM study, series, or SOP instance in the format of `"<study instance UID>[/<series instance UID>[/<SOP instance UID>]]"` |
+| `Error` | The detailed error message |
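Because each line of the error log is a standalone JSON object with the properties in the preceding table, it can be parsed line by line. Here's a minimal sketch that assumes you've already downloaded `errors.log` from the export container:

```python
# Minimal sketch: parse a locally downloaded copy of the export error log.
# Each nonempty line is a JSON object with Timestamp, Identifier, and Error properties.
import json

with open("errors.log", encoding="utf-8") as log_file:
    for line in log_file:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        print(entry["Timestamp"], entry["Identifier"], entry["Error"])
```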
## Next steps >[!div class="nextstepaction"] >[Overview of the DICOM service](dicom-services-overview.md)+
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
# Get started with the DICOM service
-This article outlines the basic steps to get started with the DICOM&reg; service in [Azure Health Data Services](../healthcare-apis-overview.md).
+This article outlines the basic steps to get started with the DICOM&reg; service in [Azure Health Data Services](../healthcare-apis-overview.md).
-As a prerequisite, you need an Azure subscription and permissions to create Azure resource groups and to deploy Azure resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts. You need a workspace to provision a DICOM service. A FHIR&reg; service is optional and is needed only if you connect imaging data with electronic health records of the patient via DICOMcast.
+As a prerequisite, you need an Azure subscription and permissions to create Azure resource groups and deploy Azure resources. You can follow all the steps or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts. You need a workspace to provision a DICOM service. A FHIR&reg; service is optional and is needed only if you connect imaging data with electronic health records of the patient via DICOMcast.
-[![Screenshot of Get Started with DICOM diagram.](media/get-started-with-dicom.png)](media/get-started-with-dicom.png#lightbox)
+[![Diagram that shows how to get started with DICOM.](media/get-started-with-dicom.png)](media/get-started-with-dicom.png#lightbox)
## Create a workspace in your Azure subscription
-You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or using PowerShell, Azure CLI, and REST API. You can find scripts from the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or by using PowerShell, the Azure CLI, or a REST API. You can find scripts from the [Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
> [!NOTE] > There are limits to the number of workspaces and the number of DICOM service instances you can create in each Azure subscription. ## Create a DICOM service in the workspace
-You can create a DICOM service instance from the [Azure portal](deploy-dicom-services-in-azure.md) or using PowerShell, Azure CLI, and REST API. You can find scripts from the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+You can create a DICOM service instance from the [Azure portal](deploy-dicom-services-in-azure.md) or by using PowerShell, the Azure CLI, or a REST API. You can find scripts from the [Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) and [MedTech service](../iot/deploy-iot-connector-in-azure.md) in the workspace.
The DICOM service is secured by a Microsoft Entra ID that can't be disabled. To
### Register a client application
-You can create or register a client application from the [Azure portal](dicom-register-application.md), or using PowerShell and Azure CLI scripts. This client application can be used for one or more DICOM service instances. It can also be used for other services in Azure Health Data Services.
+You can create or register a client application from the [Azure portal](dicom-register-application.md) or by using PowerShell and Azure CLI scripts. You can use this client application for one or more DICOM service instances. You can also use it for other services in Health Data Services.
If the client application is created with a certificate or client secret, ensure that you renew the certificate or client secret before expiration and replace the client credentials in your applications.
-You can delete a client application. Before doing that, ensure that it's not used in production, dev, test, or quality assurance (QA) environments.
+You can delete a client application. Before doing that, ensure that it's not used in production, dev, test, or quality assurance environments.
### Grant access permissions
-You can grant access permissions or assign roles from the [Azure portal](../configure-azure-rbac.md), or using PowerShell and Azure CLI scripts.
+You can grant access permissions or assign roles from the [Azure portal](../configure-azure-rbac.md) or by using PowerShell and Azure CLI scripts.
-### Perform create, read, update, and delete (CRUD) transactions
+### Perform CRUD transactions
-You can perform, create, read (search), update, or delete (CRUD) transactions against the DICOM service in your applications or by using tools such as Postman, REST Client, cURL, and Python. Because the DICOM service is secured by default, you must obtain an access token and include it in your transaction request.
+You can perform create, read (search), update, and delete (CRUD) transactions against the DICOM service in your applications or by using tools such as Postman, REST client, cURL, and Python. Because the DICOM service is secured by default, you must obtain an access token and include it in your transaction request.
#### Get an access token
-You can obtain a Microsoft Entra access token using PowerShell, Azure CLI, REST CLI, or .NET SDK. For more information, see [Get access token](../get-access-token.md).
+You can obtain a Microsoft Entra access token by using PowerShell, the Azure CLI, a REST CLI, or the .NET SDK. For more information, see [Get access token](../get-access-token.md).
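You can also obtain a token from Python with the `azure-identity` package. This is a sketch that assumes the DICOM service's default scope and that a credential (for example, an Azure CLI sign-in) is available locally:

```python
# Minimal sketch: get a Microsoft Entra access token for the DICOM service.
# The scope below is an assumption; verify it against the access token documentation.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://dicom.healthcareapis.azure.com/.default")
print(token.token[:20], "...")   # pass token.token as the bearer token in your requests
```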
-#### Access using existing tools
+#### Access by using existing tools
- [.NET C#](dicomweb-standard-apis-c-sharp.md) - [cURL](dicomweb-standard-apis-curl.md) - [Python](dicomweb-standard-apis-python.md) - Postman-- REST Client
+- REST client
### DICOMweb standard APIs and change feed
-You can find more details on DICOMweb standard APIs and change feed in the [DICOM service](dicom-services-overview.md) documentation.
+You can find more information on DICOMweb standard APIs and change feed in the [DICOM service](dicom-services-overview.md) documentation.
#### DICOMcast
-DICOMcast is currently available as an [open source](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) project.
+DICOMcast is currently available as an [open-source](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) project.
## Next steps
-[Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md)
-
+[Deploy the DICOM service by using the Azure portal](deploy-dicom-services-in-azure.md)
[!INCLUDE [FHIR and DICOM trademark statements](../includes/healthcare-apis-fhir-dicom-trademark.md)]
healthcare-apis Import Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/import-files.md
Title: Import DICOM files into the DICOM service
-description: Learn how to import DICOM files using bulk import in Azure Health Data Services
+description: Learn how to import DICOM files by using bulk import in Azure Health Data Services.
Last updated 10/05/2023
-# Import DICOM files (Preview)
+# Import DICOM files (preview)
Bulk import is a quick way to add data to the DICOM&reg; service. Importing DICOM files with the bulk import capability enables: -- **Back-up and migration**. For example, your organization might have many DICOM instances stored in local or on-premises systems that you want to back up or migrate to the cloud for better security, scalability, and availability. Rather than uploading the data one by one, use bulk import to transfer the data faster and more efficiently.
+- **Backup and migration**: For example, your organization might have many DICOM instances stored in local or on-premises systems that you want to back up or migrate to the cloud for better security, scalability, and availability. Rather than uploading the data one by one, use bulk import to transfer the data faster and more efficiently.
-- **Machine learning development**. For example, your organization might have a large dataset of DICOM instances that you want to use for training machine learning models. With bulk import, you can upload the data to the DICOM service and then access it from [Microsoft Fabric](get-started-with-analytics-dicom.md), [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md), or other tools.
+- **Machine learning development**: For example, your organization might have a large dataset of DICOM instances that you want to use for training machine learning models. With bulk import, you can upload the data to the DICOM service and then access it from [Microsoft Fabric](get-started-with-analytics-dicom.md), [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md), or other tools.
## Prerequisites -- **Deploy an instance of the DICOM service**. For more information, see [Deploy the DICOM service](deploy-dicom-services-in-azure.md).
-
-- **Deploy the events capability for the DICOM service**. For more information, see [Deploy events using the Azure portal](../events/events-deploy-portal.md).
+- **Deploy an instance of the DICOM service.** For more information, see [Deploy the DICOM service](deploy-dicom-services-in-azure.md).
+- **Deploy the events capability for the DICOM service.** For more information, see [Deploy events by using the Azure portal](../events/events-deploy-portal.md).
## Enable a system-assigned managed identity
Before you perform a bulk import, you need to enable a system-assigned managed i
1. In the Azure portal, go to the DICOM instance and then select **Identity** from the left pane.
-2. On the **Identity** page, select the **System assigned** tab, and then set the **Status** field to **On**. Choose **Save**.
+1. On the **Identity** page, select the **System assigned** tab, and then set the **Status** field to **On**. Select **Save**.
+ :::image type="content" source="media/system-assigned-managed-identity.png" alt-text="Screenshot that shows the system-assigned managed identity toggle on the Identity page." lightbox="media/system-assigned-managed-identity.png":::
## Enable bulk import
You need to enable bulk import before you import data.
1. In the Azure portal, go to the DICOM service and then select **Bulk Import** from the left pane.
-2. On the **Bulk Import** page, in the **Bulk Import** field, select **Enabled**. Choose **Save**.
+1. On the **Bulk Import** page, in the **Bulk Import** field, select **Enabled**. Select **Save**.
+ :::image type="content" source="media/import-enable.png" alt-text="Screenshot that shows the Bulk Import page with the toggle set to Enabled." lightbox="media/import-enable.png":::
-#### Use an Azure Resource Manager (ARM) template
+#### Use an Azure Resource Manager template
-When you use an ARM template, enable bulk import with the property named `bulkImportConfiguration`.
+When you use an Azure Resource Manager template (ARM template), enable bulk import with the property named `bulkImportConfiguration`.
Here's an example of how to configure bulk import in an ARM template:
Here's an example of how to configure bulk import in an ARM template:
## Import data
-After you enable bulk import, a resource group is provisioned in your Azure subscription. The name of the resource group begins with the prefix `AHDS_`, followed by the workspace and DICOM service name. For example, the DICOM service named `mydicom` in the workspace `contoso`, the resource group would be named `AHDS_contoso-mydicom`.
+After you enable bulk import, a resource group is provisioned in your Azure subscription. The name of the resource group begins with the prefix `AHDS_`, followed by the workspace and DICOM service name. For example, for the DICOM service named `mydicom` in the workspace `contoso`, the resource group is named `AHDS_contoso-mydicom`.
Within the new resource group, two resources are created: -- A randomly named storage account that has two precreated containers (`import-container` and `error-container`), and two queues (`import-queue` and `error-queue`).
+- A randomly named storage account that has two precreated containers (`import-container` and `error-container`) and two queues (`import-queue` and `error-queue`).
- An [Azure Event Grid system topic](/azure/event-grid/create-view-manage-system-topics) named `dicom-bulk-import`. DICOM images are added to the DICOM service by copying them into the `import-container`. Bulk import monitors this container for new images and adds them to the DICOM service. If there are errors that prevent a file from being added successfully, the errors are copied to the `error-container` and an error message is written to the `error-queue`. #### Grant write access to the import container
-The user or account that adds DICOM images to the import container needs write access to the container using the `Data Owner` role. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+The user or account that adds DICOM images to the import container needs write access to the container by using the `Data Owner` role. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
#### Upload DICOM images to the import container
-Data is uploaded to Azure storage containers in many ways:
+Data is uploaded to Azure Storage containers in many ways:
-- [Upload a blob with Azure Storage Explorer](../../storage/blobs/quickstart-storage-explorer.md#upload-blobs-to-the-container) -- [Upload a blob with AzCopy](../../storage/common/storage-use-azcopy-blobs-upload.md) -- [Upload a blob with Azure CLI](../../storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob)
+- [Upload a blob with Azure Storage Explorer](../../storage/blobs/quickstart-storage-explorer.md#upload-blobs-to-the-container)
+- [Upload a blob with AzCopy](../../storage/common/storage-use-azcopy-blobs-upload.md)
+- [Upload a blob with the Azure CLI](../../storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob)
+
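You can also upload with the Azure Storage SDK for Python. This sketch assumes the randomly named storage account that bulk import provisions, the precreated `import-container`, and an identity that already has write access to the container:

```python
# Minimal sketch: copy local .dcm files into the bulk import container.
# The storage account name and local folder are placeholders.
import os
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account_url = "https://<randomly-named-storage-account>.blob.core.windows.net"
service_client = BlobServiceClient(account_url, credential=DefaultAzureCredential())
container_client = service_client.get_container_client("import-container")

local_folder = "dicom-files"   # placeholder folder of DICOM files
for file_name in os.listdir(local_folder):
    if file_name.lower().endswith(".dcm"):
        with open(os.path.join(local_folder, file_name), "rb") as data:
            container_client.upload_blob(name=file_name, data=data, overwrite=True)
```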
machine-learning How To Use Openai Models In Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-openai-models-in-azure-ml.md
Title: How to use Azure OpenAI models in Azure Machine Learning
+ Title: Use Azure OpenAI models in Azure Machine Learning
-description: Use Azure OpenAI models in Azure Machine Learning
+description: Learn how to use Azure OpenAI models in Azure Machine Learning.
-# How to use Azure OpenAI models in Azure Machine Learning (Preview)
+# Use Azure OpenAI models in Azure Machine Learning (preview)
> [!IMPORTANT] > Items marked (preview) in this article are currently in public preview. > The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article, you learn how to discover, finetune and deploy Azure OpenAI models at scale, using Azure Machine Learning.
+In this article, you learn how to discover, fine-tune, and deploy Azure OpenAI models at scale by using Azure Machine Learning.
## Prerequisites-- [You must have access](../ai-services/openai/overview.md#how-do-i-get-access-to-azure-openai) to the Azure OpenAI Service-- You must be in an Azure OpenAI service [supported region](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability)
-## What is OpenAI Models in Azure Machine Learning?
-In recent years, advancements in AI have led to the rise of large foundation models that are trained on a vast quantity of data. These models can be easily adapted to a wide variety of applications across various industries. This emerging trend gives rise to a unique opportunity for enterprises to build and use these foundation models in their deep learning workloads.
+- [You must have access](../ai-services/openai/overview.md#how-do-i-get-access-to-azure-openai) to Azure OpenAI Service.
+- You must be in an Azure OpenAI [supported region](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
-**OpenAI Models in AzureML** provides Azure Machine Learning native capabilities that enable customers to build and operationalize OpenAI models at scale:
+## What are OpenAI models in Azure Machine Learning?
-- Accessing [Azure OpenAI](../ai-services/openai/overview.md) in Azure Machine Learning, made available in the Azure Machine Learning Model catalog-- Make connection with the Azure OpenAI service-- Finetuning Azure OpenAI Models with Azure Machine Learning-- Deploying Azure OpenAI Models with Azure Machine Learning to the Azure OpenAI service
+In recent years, advancements in AI have led to the rise of large foundation models that are trained on a vast quantity of data. These models can be easily adapted to many applications across various industries. This emerging trend gives rise to a unique opportunity for enterprises to build and use these foundation models in their deep learning workloads.
-## Access Azure OpenAI models in Azure Machine Learning
-The model catalog (preview) in Azure Machine Learning studio is your starting point to explore various collections of foundation models. The Azure OpenAI models collection is a collection of models, exclusively available on Azure. These models enable customers to access prompt engineering, finetuning, evaluation, and deployment capabilities for large language models available in Azure OpenAI Service. You can view the complete list of supported OpenAI models in the [model catalog](https://ml.azure.com/model/catalog), under the `Azure OpenAI Service` collection.
+OpenAI models in Machine Learning provide Machine Learning native capabilities that enable customers to build and use Azure OpenAI models at scale by:
-> [!TIP]
->Supported OpenAI models are published to the AzureML Model Catalog. View a complete list of [Azure OpenAI models](../ai-services/openai/concepts/models.md).
+- Accessing [Azure OpenAI](../ai-services/openai/overview.md) in Machine Learning, which is made available in the Machine Learning model catalog.
+- Making a connection with Azure OpenAI.
+- Fine-tuning Azure OpenAI models with Machine Learning.
+- Deploying Azure OpenAI models with Machine Learning to Azure OpenAI.
+## Access Azure OpenAI models in Machine Learning
-You can filter the list of models in the model catalog by inference task, or by finetuning task. Select a specific model name and see the model card for the selected model, which lists detailed information about the model. For example:
+The model catalog (preview) in Azure Machine Learning studio is your starting point to explore various collections of foundation models. The Azure OpenAI models collection consists of models that are exclusively available on Azure. These models enable customers to access prompt engineering, fine-tuning, evaluation, and deployment capabilities for large language models that are available in Azure OpenAI. You can view the complete list of supported Azure OpenAI models in the [model catalog](https://ml.azure.com/model/catalog) under the **Azure OpenAI Service** collection.
+> [!TIP]
+>Supported Azure OpenAI models are published to the Machine Learning model catalog. You can view a complete list of [Azure OpenAI models](../ai-services/openai/concepts/models.md).
+You can filter the list of models in the model catalog by inference task or by fine-tuning task. Select a specific model name and see the model card for the selected model, which lists detailed information about the model.
-### Connect to Azure OpenAI service
-In order to deploy an Azure OpenAI model, you need to have an [Azure OpenAI resource](https://azure.microsoft.com/products/cognitive-services/openai-service/). You can create an Azure OpenAI resource following the instructions [here](../ai-services/openai/how-to/create-resource.md).
-### Deploying Azure OpenAI models
-To deploy an Azure Open Model from Azure Machine Learning, in order to deploy an Azure OpenAI model:
+### Connect to Azure OpenAI
-1. Select on **Model Catalog** in the left pane.
-1. Select **View Models** under Azure OpenAI language models. Then select a model to deploy.
-1. Select `Deploy` to deploy the model to the Azure OpenAI service.
+To deploy an Azure OpenAI model, you need to have an [Azure OpenAI resource](https://azure.microsoft.com/products/cognitive-services/openai-service/). To create an Azure OpenAI resource, follow the instructions in [Create and deploy an Azure OpenAI Service resource](../ai-services/openai/how-to/create-resource.md).
- :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai-turbo.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai-turbo.png" alt-text="Screenshot showing the deploy to Azure OpenAI.":::
+### Deploy Azure OpenAI models
-1. Select on **Azure OpenAI resource** from the options
-1. Provide a name for your deployment in **Deployment Name** and select **Deploy**.
-1. The find the models deployed to Azure OpenAI service, go to the **Endpoint** section in your workspace.
-1. Select the **Azure OpenAI** tab and find the deployment you created. When you select the deployment, you'll be redirect to the OpenAI resource that is linked to the deployment.
+To deploy an Azure OpenAI model from Machine Learning:
-> [!NOTE]
-> Azure Machine Learning will automatically deploy [all base Azure OpenAI models](../ai-services/openai/concepts/models.md) for you so you can using interact with the models when getting started.
+1. Select **Model catalog** on the left pane.
+1. Select **View Models** under **Azure OpenAI language models**. Then select a model to deploy.
+1. Select **Deploy** to deploy the model to Azure OpenAI.
-## Finetune Azure OpenAI models using your own training data
+ :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai-turbo.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/deploy-to-azure-open-ai-turbo.png" alt-text="Screenshot that shows deploying to Azure OpenAI.":::
-In order to improve model performance in your workload, you might want to fine tune the model using your own training data. You can easily finetune these models by using either the finetune settings in the studio or by using the code based samples in this tutorial.
-
-### Finetune using the studio
-You can invoke the finetune settings form by selecting on the **Finetune** button on the model card for any foundation model.
+1. Select **Azure OpenAI resource** from the options.
+1. Enter a name for your deployment in **Deployment Name** and select **Deploy**.
+1. To find the models deployed to Azure OpenAI, go to the **Endpoint** section in your workspace.
+1. Select the **Azure OpenAI** tab and find the deployment you created. When you select the deployment, you're redirected to the OpenAI resource that's linked to the deployment.
-**Finetune Settings:**
+> [!NOTE]
+> Machine Learning automatically deploys [all base Azure OpenAI models](../ai-services/openai/concepts/models.md) so that you can interact with the models when you get started.
+## Fine-tune Azure OpenAI models by using your own training data
+To improve model performance in your workload, you might want to fine-tune the model by using your own training data. You can easily fine-tune these models by using either the fine-tune settings in the studio or the code-based samples in this tutorial.
-**Training Data**
-
-1. Pass in the training data you would like to use to finetune your model. You can choose to either upload a local file (in JSONL format) or select an existing registered dataset from your workspace.
-For models with completion task type, the training data you use must be formatted as a JSON Lines (JSONL) document in which each line represents a single prompt-completion pair.
+### Fine-tune by using the studio
- :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data.png" alt-text="Screenshot showing the training data in the finetune UI section.":::
+To invoke the **Finetune** settings form, select **Finetune** on the model card for any foundation model.
- For models with a chat task type, each row in the dataset should be a list of JSON objects. Each row corresponds to a conversation and each object in the row is a turn/utterance in the conversation.
+### Finetune settings
- :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data-chat.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data-chat.png" alt-text="Screenshot showing the training data after the data is uploaded into Azure.":::
- * Validation data: Pass in the data you would like to use to validate your model.
+#### Training data
-2. Select **Finish** in the finetune form to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then deploy this finetuned model to an endpoint for inferencing.
+1. Pass in the training data you want to use to fine-tune your model. You can choose to upload a local file in JSON Lines (JSONL) format. Or you can select an existing registered dataset from your workspace.
-**Customizing finetuning parameters:**
+   - **Models with a completion task type**: The training data you use must be formatted as a JSONL document in which each line represents a single prompt-completion pair. (A small sketch of this format appears after these steps.)
-If you would like to customize the finetuning parameters, you can select on the Customize button in the Finetune wizard to configure parameters such as batch size, number of epochs and learning rate multiplier. Each of these settings has default values, but can be customized via code based samples, if needed.
+ :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data.png" alt-text="Screenshot that shows the training data in the fine-tune UI section.":::
+ - **Models with a chat task type**: Each row in the dataset should be a list of JSON objects. Each row corresponds to a conversation. Each object in the row is a turn or utterance in the conversation.
-**Deploying finetuned models:**
-To run a deploy fine-tuned model job from Azure Machine Learning, in order to deploy finetuned an Azure OpenAI model:
+ :::image type="content" source="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data-chat.png" lightbox="./media/how-to-use-openai-models-in-azure-ml/finetune-training-data-chat.png" alt-text="Screenshot that shows the training data after the data is uploaded into Azure.":::
-1. After you have finished finetuning an Azure OpenAI model
-1. Find the registered model in **Models** list with the name provided during finetuning and select the model you want to deploy.
-1. Select the **Deploy** button and give the deployment name. The model is deployed to the default Azure OpenAI resource linked to your workspace.
+ - **Validation data**: Pass in the data you want to use to validate your model.
+
+1. Select **Finish** on the fine-tune form to submit your fine-tuning job. After the job finishes, you can view evaluation metrics for the fine-tuned model. You can then deploy this fine-tuned model to an endpoint for inferencing.
+
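For reference, here's a minimal sketch of what these training files can look like when created from a shell. The sample rows are illustrative only, and the field names used for the chat-task format are an assumption to verify against the sample data referenced on the model card.

```bash
# Sketch: a completion-task training file - one JSON object per line, each a prompt-completion pair.
cat > completion-training-data.jsonl <<'EOF'
{"prompt": "Classify the sentiment: I love this product ->", "completion": " positive"}
{"prompt": "Classify the sentiment: The delivery was late ->", "completion": " negative"}
EOF

# Sketch: a chat-task training file - one conversation per line, each turn as a JSON object.
# The role/content keys are an assumption; check the model card's sample data for the exact schema.
cat > chat-training-data.jsonl <<'EOF'
[{"role": "user", "content": "Where is the Eiffel Tower?"}, {"role": "assistant", "content": "It's in Paris, France."}]
EOF
```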
+#### Customize fine-tuning parameters
+
+If you want to customize the fine-tuning parameters, you can select **Customize** in the **Finetune** wizard to configure parameters such as batch size, number of epochs, and learning rate multiplier. Each of these settings has default values but can be customized via code-based samples, if needed.
++
+#### Deploy fine-tuned models
+
+To deploy a fine-tuned Azure OpenAI model from Machine Learning:
+
+1. After you've finished fine-tuning an Azure OpenAI model, find the registered model in the **Models** list with the name provided during fine-tuning and select the model you want to deploy.
+1. Select **Deploy** and name the deployment. The model is deployed to the default Azure OpenAI resource linked to your workspace. A scripted alternative is sketched after these steps.
+
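If you'd rather script this step against the linked Azure OpenAI resource, the Azure CLI command `az cognitiveservices account deployment create` can create the deployment. The following is only a sketch: the resource group, Azure OpenAI resource name, deployment name, and fine-tuned model ID are placeholders, and the SKU values are assumptions to check against your quota.

```azurecli
# Sketch: deploy a fine-tuned model to the Azure OpenAI resource linked to the workspace.
# Replace <fine-tuned-model-id> with the model name shown in the Models list after fine-tuning.
az cognitiveservices account deployment create \
  --resource-group my-resource-group \
  --name my-openai-resource \
  --deployment-name my-finetuned-deployment \
  --model-name <fine-tuned-model-id> \
  --model-version "1" \
  --model-format OpenAI \
  --sku-name "Standard" \
  --sku-capacity 1
```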
+### Fine-tune by using code-based samples
+
+To enable users to quickly get started with code-based fine-tuning, we've published samples (both Python notebooks and Azure CLI examples) to the *azureml-examples* GitHub repo:
-### Finetuning using code based samples
-To enable users to quickly get started with code based finetuning, we have published samples (both Python notebooks and CLI examples) to the azureml-examples git repo -
* [SDK example](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/azure_openai)
* [CLI example](https://github.com/Azure/azureml-examples/tree/main/cli/foundation-models/azure_openai)

### Troubleshooting
-Here are some steps to help you resolve any of the following issues with your Azure OpenAI in Azure Machine Learning experience.
-You might receive any of the following errors when you try to deploy an Azure OpenAI model.
+Here are some steps to help you resolve any of the following issues with Azure OpenAI in Machine Learning.
+
+You might receive any of the following errors when you try to deploy an Azure OpenAI model:
- **Only one deployment can be made per model name and version**
- - **Fix**: Go to the [Azure OpenAI Studio](https://oai.azure.com/portal) and delete the deployments of the model you're trying to deploy.
+ - **Fix:** Go to [Azure OpenAI Studio](https://oai.azure.com/portal) and delete the deployments of the model you're trying to deploy.
- **Failed to create deployment**
- - **Fix**: Azure OpenAI failed to create. This is due to quota issues, make sure you have enough quota for the deployment. The default quota for fine-tuned models is 2 deployment per customer.
+ - **Fix:** Azure OpenAI failed to create. This error occurs because of quota issues. Make sure you have enough quota for the deployment. The default quota for fine-tuned models is two deployments per customer.
- **Failed to get Azure OpenAI resource**
- - **Fix**: Unable to create the resource. You either aren't in correct region, or you have exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
+ - **Fix:** Unable to create the resource. You either aren't in the correct region or you've exceeded the maximum limit of three Azure OpenAI resources. You need to delete an existing Azure OpenAI resource, or you need to make sure you created a workspace in one of the [supported regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability).
-- **Model Not Deployable**
- - **Fix**: This usually happens while trying to deploy a GPT-4 model. Due to high demand you need to [apply for access to use GPT-4 models](/azure/ai-services/openai/concepts/models#gpt-4-models).
+- **Model not deployable**
+ - **Fix:** This error usually happens while trying to deploy a GPT-4 model. Because of high demand, you need to [apply for access to use GPT-4 models](/azure/ai-services/openai/concepts/models#gpt-4-models).
-- **Finetuning job Failed**
- - **Fix**: Currently, only a maximum of 10 workspaces can be designated for a particular subscription for new fine tunable models. If a user creates more workspaces, they will get access to the models, but their jobs will fail. Try to limit number of workspaces per subscription to 10.
+- **Fine-tuning job failed**
+ - **Fix:** Currently, only a maximum of 10 workspaces can be designated for a particular subscription for new fine-tunable models. If a user creates more workspaces, they get access to the models, but their jobs fail. Try to limit the number of workspaces per subscription to 10.
## Next steps
-[How to use foundation models](how-to-use-foundation-models.md)
+[Use foundation models](how-to-use-foundation-models.md)
managed-grafana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/high-availability.md
Previously updated : 3/23/2023 Last updated : 10/13/2023 # Azure Managed Grafana service reliability
+This article provides information on availability zone support, disaster recovery and availability of Azure Managed Grafana for instances in the Standard plan. The Essential plan (preview) doesn't offer the same reliability and isn't recommended for use in production.
+ An Azure Managed Grafana instance in the Standard tier is hosted on a dedicated set of virtual machines (VMs). By default, two VMs are deployed to provide redundancy. Each VM runs a Grafana server. A network load balancer distributes browser requests amongst the Grafana servers. On the backend, the Grafana servers are connected to a common database that stores the configuration and other persistent data for an entire Managed Grafana instance. :::image type="content" source="media/service-reliability/diagram.png" alt-text="Diagram of the Managed Grafana Standard tier instance setup.":::
Microsoft is not providing or setting up disaster recovery for this service. In
## Zone redundancy
-Normally the network load balancer, VMs and database that underpin a Managed Grafana instance are located in a region based on system resource availability, and could end up being in a same Azure datacenter.
+The network load balancer, VMs, and database that underpin a Managed Grafana instance are located in a region based on system resource availability and could end up being in the same Azure datacenter.
### With zone redundancy enabled
Zone redundancy support is enabled in the following regions:
| East US | West Europe | | Australia East | | South Central US | | | | - For a complete list of regions where Managed Grafana is available, see [Products available by region - Azure Managed Grafana](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=managed-grafana&regions=all) ## Next steps
managed-grafana How To Authentication Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-authentication-permissions.md
Previously updated : 12/13/2022 Last updated : 10/13/2023
Create a workspace with the Azure portal or the CLI.
### [Portal](#tab/azure-portal)
-#### Create a workspace: basic and advanced settings
+#### Configure basic settings
1. In the upper-left corner of the home page, select **Create a resource**. In the **Search resources, services, and docs (G+/)** box, enter *Azure Managed Grafana* and select **Azure Managed Grafana**.
Create a workspace with the Azure portal or the CLI.
| Resource group name | Create a resource group for your Azure Managed Grafana resources. | *my-resource-group* | | Location | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. | *(US) East US* | | Name | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. | *my-grafana* |
- | Zone redundancy | Zone redundancy is disabled by default. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. There's an [additional charge](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing) for this option. | *Disabled* |
+ | Pricing Plan | Choose between an Essential (preview) and a Standard plan. The Standard tier offers additional features. [More information about pricing plans](overview.md#service-tiers). | *Essential (preview)* |
- :::image type="content" source="media/authentication/create-form-basics.png" alt-text="Screenshot of the Azure portal. Create workspace form. Basics.":::
+1. Keep all other default values and select the **Permission** tab to control access rights for your Grafana instance and data sources.
-1. Select **Next : Advanced >** to access API key creation and statics IP address options. **Enable API key creation** and **Deterministic outbound IP** options are set to **Disable** by default. Optionally enable API key creation and enable a static IP address.
-
-1. Select **Next : Permission >** to control access rights for your Grafana instance and data sources:
-
-#### Create a workspace: permission settings
+#### Configure permission settings
Review the different methods below to manage permissions for accessing data sources within Azure Managed Grafana.
Azure Managed Grafana can also access data sources with managed identity disable
> [!NOTE] > Turning off system-assigned managed identity disables the Azure Monitor data source plugin for your Azure Managed Grafana instance. In this scenario, use a service principal instead of Azure Monitor to access data sources.
-#### Create a workspace: tags and review + create
-
-1. Select **Next : Tags** and optionally add tags to categorize resources.
+#### Review and create the new instance
-1. Select **Next : Review + create >**. After validation runs, select **Create**. Your Azure Managed Grafana resource is deploying.
+Select the **Review + create** tab. After validation runs, select **Create**. Your Azure Managed Grafana resource is deploying.
- :::image type="content" source="media/authentication/create-form-validation.png" alt-text="Screenshot of the Azure portal. Create workspace form. Validation.":::
### [Azure CLI](#tab/azure-cli)
managed-grafana How To Connect To Data Source Privately https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-to-data-source-privately.md
Previously updated : 5/18/2023 Last updated : 10/06/2023 # Connect to a data source privately (preview)
Managed private endpoints work with Azure services that support private link. Us
To follow the steps in this guide, you must have: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- An Azure Managed Grafana instance. If you don't have one yet, [create a new instance](quickstart-managed-grafana-portal.md).
+- An Azure Managed Grafana instance in the Standard tier. If you don't have one yet, [create a new instance](quickstart-managed-grafana-portal.md).
## Create a managed private endpoint for Azure Monitor workspace
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Previously updated : 1/12/2023 Last updated : 10/13/2023 # How to configure data sources for Azure Managed Grafana ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./how-to-permissions.md).-- A resource including monitoring data with Managed Grafana monitoring permissions. Read [how to configure permissions](how-to-permissions.md) for more information.-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+[An Azure Managed Grafana instance](./how-to-permissions.md)
## Supported Grafana data sources
-By design, Grafana can be configured with multiple data sources. A data source is an externalized storage backend that holds your telemetry information. Azure Managed Grafana supports many popular data sources.
-
-Azure-specific data sources available for all customers:
--- [Azure Data Explorer](https://github.com/grafana/azure-data-explorer-datasource?utm_source=grafana_add_ds)-- [Azure Monitor](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/). Is preloaded in all Grafana instances.-
-Data sources reserved for Grafana Enterprise customers - exclusively preloaded in instances with a Grafana Enterprise subscription:
--- [AppDynamics](https://grafana.com/grafana/plugins/dlopes7-appdynamics-datasource)-- [Azure Devops](https://grafana.com/grafana/plugins/grafana-azuredevops-datasource)-- [DataDog](https://grafana.com/grafana/plugins/grafana-datadog-datasource)-- [Dynatrace](https://grafana.com/grafana/plugins/grafana-dynatrace-datasource)-- [Gitlab](https://grafana.com/grafana/plugins/grafana-gitlab-datasource)-- [Honeycomb](https://grafana.com/grafana/plugins/grafana-honeycomb-datasource)-- [Jira](https://grafana.com/grafana/plugins/grafana-jira-datasource)-- [MongoDB](https://grafana.com/grafana/plugins/grafana-mongodb-datasource)-- [New Relic](https://grafana.com/grafana/plugins/grafana-newrelic-datasource)-- [Oracle Database](https://grafana.com/grafana/plugins/grafana-oracle-datasource)-- [Salesforce](https://grafana.com/grafana/plugins/grafana-salesforce-datasource)-- [SAP HANA®](https://grafana.com/grafana/plugins/grafana-saphana-datasource)-- [ServiceNow](https://grafana.com/grafana/plugins/grafana-servicenow-datasource)-- [Snowflake](https://grafana.com/grafana/plugins/grafana-snowflake-datasource)-- [Splunk](https://grafana.com/grafana/plugins/grafana-splunk-datasource)-- [Splunk Infrastructure monitoring (SignalFx)](https://grafana.com/grafana/plugins/grafana-splunk-monitoring-datasource)-- [Wavefront](https://grafana.com/grafana/plugins/grafana-wavefront-datasource)-
-Other data sources:
--- [Alertmanager](https://grafana.com/docs/grafana/latest/datasources/alertmanager/)-- [CloudWatch](https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/)-- Direct Input-- [Elasticsearch](https://grafana.com/docs/grafana/latest/datasources/elasticsearch/)-- [Google Cloud Monitoring](https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/)-- [Graphite](https://grafana.com/docs/grafana/latest/datasources/graphite/)-- [InfluxDB](https://grafana.com/docs/grafana/latest/datasources/influxdb/)-- [Jaeger](https://grafana.com/docs/grafana/latest/datasources/jaeger/)-- [Loki](https://grafana.com/docs/grafana/latest/datasources/loki/)-- [Microsoft SQL Server](https://grafana.com/docs/grafana/latest/datasources/mssql/)-- [MySQL](https://grafana.com/docs/grafana/latest/datasources/mysql/)-- [OpenTSDB](https://grafana.com/docs/grafana/latest/datasources/opentsdb/)-- [PostgreSQL](https://grafana.com/docs/grafana/latest/datasources/postgres/)-- [Prometheus](https://grafana.com/docs/grafana/latest/datasources/prometheus/)-- [Tempo](https://grafana.com/docs/grafana/latest/datasources/tempo/)-- [TestData DB](https://grafana.com/docs/grafana/latest/datasources/testdata/)-- [Zipkin](https://grafana.com/docs/grafana/latest/datasources/zipkin/)
+By design, Grafana can be configured with multiple *data sources*. A data source is an externalized storage backend that holds telemetry information.
+
+Azure Managed Grafana supports many popular data sources. The table below lists the data sources that can be added to Azure Managed Grafana for each service tier.
+
+| Data sources | Essential (preview) | Standard |
+|-|--|-|
+| [Alertmanager](https://grafana.com/docs/grafana/latest/datasources/alertmanager/) | - | ✔ |
+| [AWS CloudWatch](https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/) | - | ✔ |
+| [Azure Data Explorer](https://github.com/grafana/azure-data-explorer-datasource?utm_source=grafana_add_ds) | - | ✔ |
+| [Azure Monitor](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/) | ✔ | ✔ |
+| [Elasticsearch](https://grafana.com/docs/grafana/latest/datasources/elasticsearch/) | - | ✔ |
+| [GitHub](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/integrations/integration-reference/integration-github) | - | ✔ |
+| [Google Cloud Monitoring](https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/) | - | ✔ |
+| [Graphite](https://grafana.com/docs/grafana/latest/datasources/graphite/) | - | ✔ |
+| [InfluxDB](https://grafana.com/docs/grafana/latest/datasources/influxdb/) | - | ✔ |
+| [Jaeger](https://grafana.com/docs/grafana/latest/datasources/jaeger/) | - | ✔ |
+| [JSON API](https://grafana.com/grafana/plugins/grafana-jira-datasource) | - | ✔ |
+| [Loki](https://grafana.com/docs/grafana/latest/datasources/loki/) | - | ✔ |
+| [Microsoft SQL Server](https://grafana.com/docs/grafana/latest/datasources/mssql/) | - | ✔ |
+| [MySQL](https://grafana.com/docs/grafana/latest/datasources/mysql/) | - | ✔ |
+| [OpenTSDB](https://grafana.com/docs/grafana/latest/datasources/opentsdb/) | - | ✔ |
+| [PostgreSQL](https://grafana.com/docs/grafana/latest/datasources/postgres/) | - | ✔ |
+| [Prometheus](https://grafana.com/docs/grafana/latest/datasources/prometheus/) | ✔ | ✔ |
+| [Tempo](https://grafana.com/docs/grafana/latest/datasources/tempo/) | - | ✔ |
+| [TestData](https://grafana.com/docs/grafana/latest/datasources/testdata/) | ✔ | ✔ |
+| [Zipkin](https://grafana.com/docs/grafana/latest/datasources/zipkin/) | - | ✔ |
+
+Within the Standard service tier, users who have subscribed to the Grafana Enterprise option can also access the following data sources.
+
+* [AppDynamics](https://grafana.com/grafana/plugins/dlopes7-appdynamics-datasource)
+* [Azure Devops](https://grafana.com/grafana/plugins/grafana-azuredevops-datasource)
+* [DataDog](https://grafana.com/grafana/plugins/grafana-datadog-datasource)
+* [Dynatrace](https://grafana.com/grafana/plugins/grafana-dynatrace-datasource)
+* [GitLab](https://grafana.com/grafana/plugins/grafana-gitlab-datasource)
+* [Honeycomb](https://grafana.com/grafana/plugins/grafana-honeycomb-datasource)
+* [MongoDB](https://grafana.com/grafana/plugins/grafana-mongodb-datasource)
+* [New Relic](https://grafana.com/grafana/plugins/grafana-newrelic-datasource)
+* [Oracle Database](https://grafana.com/grafana/plugins/grafana-oracle-datasource)
+* [Salesforce](https://grafana.com/grafana/plugins/grafana-salesforce-datasource)
+* [SAP HANA®](https://grafana.com/grafana/plugins/grafana-saphana-datasource)
+* [ServiceNow](https://grafana.com/grafana/plugins/grafana-servicenow-datasource)
+* [Snowflake](https://grafana.com/grafana/plugins/grafana-snowflake-datasource)
+* [Splunk](https://grafana.com/grafana/plugins/grafana-splunk-datasource)
+* [Splunk Infrastructure monitoring (SignalFx)](https://grafana.com/grafana/plugins/grafana-splunk-monitoring-datasource)
+* [Wavefront](https://grafana.com/grafana/plugins/grafana-wavefront-datasource)
For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
-## Add a datasource
+## Add a data source
-A number of data sources are added by in your Grafana instance by default. To add more data sources, follow the steps below using the Azure portal or the Azure CLI.
+A number of data sources, such as Azure Monitor, are added to your Grafana instance by default. To add more data sources, follow the steps below using the Azure portal or the Azure CLI.
### [Portal](#tab/azure-portal)
-1. Open the Grafana UI of your Azure Managed Grafana instance and select **Configuration** > **Data sources** from the left menu.
-1. Select **Add data source**, search for the data source you need from the available list, and select it.
+1. Open your Azure Managed Grafana instance in the Azure portal.
+1. Select **Overview** from the left menu, then open the **Endpoint** URL.
+1. In the Grafana portal, expand the menu on the left and select **Connections**.
+1. Under **Connect data**, select a data source from the list and add it to your instance.
1. Fill out the form with the data source settings and select **Save and test** to validate the connection to your data source. :::image type="content" source="media/data-sources/add-data-source.png" alt-text="Screenshot of the Add data source page.":::
-> [!NOTE]
-> Installing Grafana plugins listed on the page **Configuration** > **Plugins** isn't currently supported.
- ### [Azure CLI](#tab/azure-cli) Run the [az grafana data-source create](/cli/azure/grafana/data-source#az-grafana-data-source-create) command to add and manage Azure Managed Grafana data sources with the Azure CLI.
az grafana data-source create --name <instance-name> --definition '{
+> [!TIP]
+> If you can't connect to a data source, you may need to [modify access permissions](how-to-permissions.md) to allow access from your Azure Managed Grafana instance.
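To double-check what ends up configured on the instance, the CLI can also list and inspect data sources. A quick sketch, assuming an instance named `my-grafana`:

```azurecli
# List the data sources configured on a Managed Grafana instance.
az grafana data-source list --name my-grafana --output table

# Show the full definition of a single data source.
az grafana data-source show --name my-grafana --data-source "Azure Monitor"
```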
+ ## Update a data source ### Azure Monitor configuration
-The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps below in the Grafana UI of your Azure Managed Grafana instance or in the Azure CLI.
+The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow the steps below in the Grafana portal of your Azure Managed Grafana instance or in the Azure CLI.
### [Portal](#tab/azure-portal)
-1. From the left menu, select **Configuration** > **Data sources**.
+1. Expand the menu on the left and select **Connections** > **Data sources**.
:::image type="content" source="media/data-sources/configuration.png" alt-text="Screenshot of the Add data sources page.":::
az grafana data-source update --data-source 'Azure Monitor' --name <instance-nam
> [!NOTE]
-> User-assigned managed identity isn't supported currently.
+> User-assigned managed identity isn't currently supported.
### Azure Data Explorer configuration
managed-grafana How To Deterministic Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-deterministic-ip.md
Previously updated : 03/23/2022 Last updated : 10/06/2023 # Use deterministic outbound IPs In this guide, learn how to activate deterministic outbound IP support used by Azure Managed Grafana to communicate with data sources, disable public access and set up a firewall rule to allow inbound requests from your Grafana instance.
+>[!NOTE]
+> The deterministic outbound IPs feature is only accessible for customers with a Standard plan. For more information about plans, go to [pricing plans](overview.md#service-tiers).
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
Deterministic outbound IP support is disabled by default in Azure Managed Grafan
#### [Portal](#tab/portal)
-When creating an instance, in the **Advanced** tab, set **Deterministic outbound IP** to **Enable**.
+When creating an instance, select the **Standard** pricing plan and then in the **Advanced** tab, set **Deterministic outbound IP** to **Enable**.
For more information about creating a new instance, go to [Quickstart: Create an Azure Managed Grafana instance](quickstart-managed-grafana-portal.md).
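If you create the instance with the Azure CLI instead, the same option maps to a flag on `az grafana create`. A sketch with placeholder names, assuming the flag exposed by the current `amg` extension:

```azurecli
# Sketch: create a Standard-plan workspace with deterministic outbound IPs enabled.
az grafana create \
  --resource-group my-resource-group \
  --name my-grafana \
  --deterministic-outbound-ip Enabled
```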
managed-grafana How To Enable Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-enable-zone-redundancy.md
Previously updated : 02/28/2023 Last updated : 10/06/2023
Azure Managed Grafana offers a zone-redundant option to protect your instance ag
In this how-to guide, learn how to enable zone redundancy for Azure Managed Grafana during the creation of your Managed Grafana instance. > [!NOTE]
-> Zone redundancy for Azure Managed Grafana is a billable option. [See prices](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing). Zone redundancy can only be enabled when creating the Managed Grafana instance, and can't be modified subsequently.
+> Zone redundancy for Azure Managed Grafana is a billable option. [See prices](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing). Zone redundancy can only be enabled when creating the Managed Grafana instance, and can't be modified subsequently.
-## Prerequisite
+## Prerequisites
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
## Sign in to Azure
This command will prompt your web browser to launch and load an Azure sign-in pa
-## Create a Managed Grafana workspace
+## Create an Azure Managed Grafana workspace
Create a workspace and enable zone redundancy with the Azure portal or the CLI.
1. In the **Basics** pane, enter the following settings.
- | Setting | Description | Example |
- ||-||
- | Subscription ID | Select the Azure subscription you want to use. | *my-subscription* |
- | Resource group name | Create a resource group for your Azure Managed Grafana resources. | *my-resource-group* |
- | Location | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. | *(US) East US* |
- | Name | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. | *my-grafana* |
- | Zone Redundancy | Set **Enable Zone Redundancy** to **Enable**. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. | *Enabled* |
+ | Setting | Description | Example |
+ ||--||
+ | Subscription ID | Select the Azure subscription you want to use. | *my-subscription* |
+ | Resource group name | Create a resource group for your Azure Managed Grafana resources. | *my-resource-group* |
+ | Location | Specify the geographic location in which to host your resource. Choose the location closest to you. | *(US) East US* |
+ | Name | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. | *my-grafana* |
+ | Pricing plan | Select the **Standard** plan to get access to the zone redundancy feature. This feature is only available for customers using a [Standard plan](overview.md#service-tiers). | *Standard* |
+ | Zone Redundancy | Set **Enable Zone Redundancy** to **Enable**. | *Enabled* |
-1. Set **Zone redundancy** to **Enable**. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. There's an [additional charge](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing) for this option.
-
- :::image type="content" source="media/zone-redundancy/create-form-basics.png" alt-text="Screenshot of the Azure portal. Create workspace form. Basics.":::
+ Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. There's an [additional charge](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing) for this option.
1. Keep all other options set to their default values and select **Review + create**.
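If you prefer the Azure CLI, zone redundancy maps to a single flag at creation time. A minimal sketch with placeholder names, assuming the `amg` extension is installed:

```azurecli
# Sketch: create a zone-redundant, Standard-plan workspace. Zone redundancy can't be changed later.
az grafana create \
  --resource-group my-resource-group \
  --name my-grafana \
  --location eastus \
  --zone-redundancy Enabled
```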
managed-grafana How To Grafana Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-grafana-enterprise.md
Previously updated : 01/09/2023 Last updated : 10/06/2023 # Enable Grafana Enterprise
In this guide, learn how to activate the Grafana Enterprise add-on in Azure Mana
The Grafana Enterprise plans offered through Azure Managed Grafana enable users to access Grafana Enterprise plugins to do more with Azure Managed Grafana.
-Grafana Enterprise plugins, as of January 2023:
+>[!NOTE]
+> To activate the Grafana Enterprise option, your Azure Managed Grafana instance must be using the Standard plan. For more information about plans, go to [pricing plans](overview.md#service-tiers).
+
+Grafana Enterprise plugins, as of October 2023:
- AppDynamics - Azure DevOps-- Datadog - Databricks
+- Datadog
- Dynatrace - GitLab - Honeycomb - Jira
+- k6 Cloud App
+- Looker
- MongoDB - New Relic - Oracle Database
You can enable access to Grafana Enterprise plugins by selecting a Grafana Enter
## Create a workspace with Grafana Enterprise enabled
-To activate Grafana Enterprise plugins when creating an Azure Managed Grafana Workspace, in **Create a Grafana Workspace**, go to the **Basics** tab and follow the steps below:
+When [creating a new Azure Managed Grafana workspace](quickstart-managed-grafana-portal.md) and filling out the **Basics** tab of the creation form, follow the steps below:
1. Under **Project Details**, select an Azure subscription and enter a resource group name or use the generated suggested resource group name 1. Under **Instance Details**, select an Azure region and enter a resource name.
+1. Under **Pricing Plans**, select the **Standard** plan.
1. Under **Grafana Enterprise**, check the box **Grafana Enterprise**, select **Free Trial - Azure Managed Grafana Enterprise Upgrade** and keep the option **Recurring billing** on **Disabled**. :::image type="content" source="media/grafana-enterprise/create-with-enterprise-plan.png" alt-text="Screenshot of the Grafana dashboard, instance creation basic details.":::
The Azure platform displays some useful links at the bottom of the page.
## Start using Grafana Enterprise plugins
-Grafana Enterprise gives you access to preinstalled plugins reserved for Grafana Enterprise customers. Once you've activated a Grafana Enterprise plan, go to the Grafana platform, and then select **Configuration > Data sources** from the left menu to set up a data source.
+Grafana Enterprise gives you access to preinstalled plugins reserved for Grafana Enterprise customers. Once you've activated a Grafana Enterprise plan, go to the Grafana platform and select **Connections > Connect data** from the left menu to create a new connection using the newly accessible data sources.
:::image type="content" source="media/grafana-enterprise/access-data-sources.png" alt-text="Screenshot of the Grafana dashboard. Access data sources.":::
managed-grafana How To Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-set-up-private-access.md
Previously updated : 02/16/2023 Last updated : 10/27/2023
In this guide, you'll learn how to disable public access to your Azure Managed G
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- An existing Managed Grafana workspace. [Create one if you haven't already](quickstart-managed-grafana-portal.md).
+- An existing Azure Managed Grafana instance in the Standard tier. [Create one if you haven't already](quickstart-managed-grafana-portal.md).
## Disable public access to a workspace
Public access is enabled by default when you create an Azure Grafana workspace.
> [!NOTE] > When private access (preview) is enabled, pinning charts using the [*Pin to Grafana*](../azure-monitor/visualize/grafana-plugin.md#pin-charts-from-the-azure-portal-to-azure-managed-grafana) feature will no longer work, as the Azure portal can't access a Managed Grafana workspace on a private IP address.
-To disable access to an Azure Managed Grafana workspace from public network, follow these steps:
- ### [Portal](#tab/azure-portal) 1. Navigate to your Azure Managed Grafana workspace in the Azure portal.
Once you have disabled public access, set up a [private endpoint](../private-lin
1. Select **Next : DNS >** to configure a DNS record. If you don't want to make changes to the default settings, you can move forward to the next tab.
- 1. For **Integrate with private DNS zone**, select **Yes** to integrate your private endpoint with a private DNS zone. You may also use your own DNS servers or create DNS records using the host files on your virtual machines.
+ 1. For **Integrate with private DNS zone**, select **Yes** to integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records using the host files on your virtual machines.
1. A subscription and resource group for your private DNS zone are preselected. You can change them optionally.
managed-grafana How To Smtp Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-smtp-settings.md
Previously updated : 02/01/2023 Last updated : 10/13/2023 # Configure SMTP settings In this guide, you learn how to configure SMTP settings to generate email alerts in Azure Managed Grafana. Notifications alert users when some given scenarios occur on a Grafana dashboard.
-SMTP settings can be enabled on an existing Azure Managed Grafana instance via the Azure Portal and the Azure CLI. Enabling SMTP settings while creating a new instance is currently not supported.
+SMTP settings can be enabled on an existing Azure Managed Grafana instance via the Azure portal and the Azure CLI. Enabling SMTP settings while creating a new instance is currently not supported.
## Prerequisites To follow the steps in this guide, you must have: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- An Azure Managed Grafana instance. If you don't have one yet, [create a new instance](quickstart-managed-grafana-portal.md).-- An SMTP server. If you don't have one yet, you may want to consider using [Twilio SendGrid's email API for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/sendgrid.tsg-saas-offer).
+- An Azure Managed Grafana instance in the Standard plan. If you don't have one yet, [create a new instance](quickstart-managed-grafana-portal.md).
+- An SMTP server. If you don't have one yet, you might want to consider using [Twilio SendGrid's email API for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/sendgrid.tsg-saas-offer).
## Enable and configure SMTP settings
Follow these steps to activate SMTP settings, enable email notifications and con
| Skip Verify | Disable |This setting controls whether a client verifies the server's certificate chain and host name. If **Skip Verify** is **Enable**, client accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to machine-in-the-middle attacks unless custom verification is used. Default is **Disable** (toggled off). [More information](https://pkg.go.dev/crypto/tls#Config). | | StartTLS Policy | OpportunisticStartTLS | There are three options. [More information](https://pkg.go.dev/github.com/go-mail/mail#StartTLSPolicy).<br><ul><li>**OpportunisticStartTLS** means that SMTP transactions are encrypted if STARTTLS is supported by the SMTP server. Otherwise, messages are sent in the clear. It's the default setting.</li><li>**MandatoryStartTLS** means that SMTP transactions must be encrypted. SMTP transactions are aborted unless STARTTLS is supported by the SMTP server.</li><li>**NoStartTLS** means encryption is disabled and messages are sent in the clear.</li></ul> |
- 1. Select **Save** to save the SMTP settings. Updating may take a couple of minutes.
+ 1. Select **Save** to save the SMTP settings. Updating might take a couple of minutes.
:::image type="content" source="media/smtp-settings/save-updated-settings.png" alt-text="Screenshot of the Azure platform. Email Settings tab with new data.":::
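The same settings can also be scripted against an existing instance with the Azure CLI. The sketch below is an assumption to validate against `az grafana update --help`: the parameter names, host, and credentials are illustrative placeholders rather than confirmed values.

```azurecli
# Sketch only: enable SMTP on an existing instance. Verify each parameter name with
# `az grafana update --help`; the host, user, and password values are placeholders.
az grafana update --resource-group my-resource-group --name my-grafana \
  --smtp enabled \
  --from-address grafana-alerts@contoso.com \
  --host "smtp.sendgrid.net:587" \
  --user apikey \
  --password "<smtp-password>" \
  --start-tls-policy OpportunisticStartTLS \
  --skip-verify false
```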
To disable SMTP settings, follow these steps.
Within the Grafana portal, you can find a list of all Grafana alerting error messages that occurred in **Alerting > Notifications**.
-The following are some common error messages you may encounter:
+The following are some common error messages you might encounter:
- "Authentication failed: The provided authorization grant is invalid, expired, or revoked". Grafana couldn't connect to the SMTP server. Check if the password entered in the SMTP settings in the Azure portal is correct. - "Failed to sent test alert: SMTP not configured". SMTP is disabled. Open the Azure Managed Grafana instance in the Azure portal and enable SMTP settings.
managed-grafana How To Use Azure Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-use-azure-monitor-alerts.md
Previously updated : 6/8/2023 Last updated : 10/06/2023 # Use Azure Monitor alerts with Grafana
-In this guide, you learn how to set up Azure Monitor alerts and use them with Azure Managed Grafana. Both Azure Monitor and Grafana provide alerting functions. Grafana alerts work with any supported data source. Alert rules are processed in your Managed Grafana workspace. Because of that, Grafana alerts need to share the same compute resources and query throttling limits with dashboard rendering. Azure Monitor has its own [alert system](../azure-monitor/alerts/alerts-overview.md). It offers many advantages:
+In this guide, you learn how to set up Azure Monitor alerts and use them with Azure Managed Grafana.
+
+> [!NOTE]
+> Grafana alerts are only available for instances in the Standard plan.
+
+Both Azure Monitor and Grafana provide alerting functions. Grafana alerts work with any supported data source. Alert rules are processed in your Managed Grafana workspace. Because of that, Grafana alerts need to share the same compute resources and query throttling limits with dashboard rendering. Azure Monitor has its own [alert system](../azure-monitor/alerts/alerts-overview.md). It offers many advantages:
* Scalability - Azure Monitor alerts are evaluated in the Azure Monitor platform that's been architected to autoscale to your needs. * Compliance - Azure Monitor alerts and [action groups](../azure-monitor/alerts/action-groups.md) are governed by Azure's compliance standards on privacy, including unsubscribe support. * Customized notifications and actions - Azure Monitor alerts use action groups to send notifications through email, text, voice, and Azure app. These events can be configured to trigger further actions implemented in Functions, Logic apps, webhook, and other supported action types. * Consistent resource management - Azure Monitor alerts are managed as Azure resources. They can be created, updated and viewed using Azure APIs and tools, such as ARM templates, Azure CLI or SDKs.
-For any Azure Monitor service, including Managed Service for Prometheus, you should define and manage your alert rules in Azure Monitor. You can view fired and resolved alerts in the [Azure Alert Consumption dashboard](https://grafana.com/grafana/dashboards/15128-azure-alert-consumption/) included in Managed Grafana.
+For any Azure Monitor service, including Managed Service for Prometheus, you should define and manage your alert rules in Azure Monitor. You can view fired and resolved alerts in the [Azure Alert Consumption dashboard](https://grafana.com/grafana/dashboards/15128-azure-alert-consumption/) included in Managed Grafana.
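For example, a metric alert rule can be defined directly in Azure Monitor with the Azure CLI. The following sketch uses placeholder resource IDs and an arbitrary CPU threshold:

```azurecli
# Sketch: create an Azure Monitor metric alert rule evaluated by Azure Monitor (not by Grafana).
az monitor metrics alert create \
  --name cpu-over-80 \
  --resource-group my-resource-group \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm" \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Insights/actionGroups/my-action-group"
```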
> [!IMPORTANT] > Using Grafana alerts with an Azure Monitor service isn't officially supported by Microsoft.
Define alert rules in Azure Monitor based on the type of alerts:
| Managed service for Prometheus | Use [Prometheus rule groups](../azure-monitor/essentials/prometheus-rule-groups.md). A set of [predefined Prometheus alert rules](../azure-monitor/containers/container-insights-metric-alerts.md) and [recording rules](../azure-monitor/essentials/prometheus-metrics-scrape-default.md#recording-rules) for AKS is available. | | Other metrics, logs, health | Create new [alert rules](../azure-monitor/alerts/alerts-create-new-alert-rule.md). | - You can view alert state and conditions using the Azure Alert Consumption dashboard in your Managed Grafana workspace. ## Next steps
managed-grafana How To Use Reporting And Image Rendering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-use-reporting-and-image-rendering.md
Previously updated : 5/6/2023 Last updated : 10/06/2023 # Use reporting and image rendering (preview)
Generating reports in the PDF format requires Grafana's image rendering capabili
Image rendering is a CPU-intensive operation. An Azure Managed Grafana instance needs about 10 seconds to render one panel, assuming data query is completed in less than 1 second. The Grafana software allows a maximum of 200 seconds to generate an entire report. Dashboards should contain no more than 20 panels each if they're used in PDF reports. You may have to reduce the panel number further if you plan to include other artifacts (for example, CSV) in the reports. > [!NOTE]
-> You'll see a "Image Rendering Timeout" error if a rendering request has exceeded the 200 second limit.
+> You'll see an "Image Rendering Timeout" error if a rendering request has exceeded the 200 second limit.
For screen-capturing in alerts, the Grafana software only allows 30 seconds to snapshot panel images before timing out. At most three screenshots can be taken within this time frame. If there's a sudden surge in alert volume, some alerts may not have screenshots even if screen-capturing has been enabled.
For screen-capturing in alerts, the Grafana software only allows 30 seconds to s
To follow the steps in this guide, you must have: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- An Azure Managed Grafana instance. If you don't have one yet, [create a new instance](quickstart-managed-grafana-portal.md).
+- An Azure Managed Grafana instance in the Standard plan. If you don't have one yet, [create a new instance](quickstart-managed-grafana-portal.md).
- An SMTP server. If you don't have one yet, you may want to consider using [Twilio SendGrid's email API for Azure](https://azuremarketplace.microsoft.com/marketplace/apps/sendgrid.tsg-saas-offer). - Email set up for your Azure Managed Grafana instance. [Configure SMTP settings](how-to-smtp-settings.md).
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
# Limitations of Azure Managed Grafana
-Azure Managed Grafana delivers the native Grafana functionality in the highest possible fidelity. There are some differences between what it provides and what you can get by self-hosting Grafana. As a general rule, Azure Managed Grafana disables features and settings that may affect the security or reliability of the service and individual Grafana instances it manages.
+Azure Managed Grafana delivers the native Grafana functionality in the highest possible fidelity. There are some differences between what it provides and what you can get by self-hosting Grafana. As a general rule, Azure Managed Grafana disables features and settings that might affect the security or reliability of the service and individual Grafana instances it manages.
## Current limitations
Azure Managed Grafana has the following known limitations:
* Installing, uninstalling and upgrading plugins from the Grafana Catalog isn't possible.
-* Data source query results are capped at 80 MB. To mitigate this constraint, reduce the size of the query, for example, by shortening the time duration.
-
-* Querying Azure Data Explorer may take a long time or return 50x errors. To resolve these issues, use a table format instead of a time series, shorten the time duration, or avoid having many panels querying the same data cluster that can trigger throttling.
+* Querying Azure Data Explorer might take a long time or return 50x errors. To resolve these issues, use a table format instead of a time series, shorten the time duration, or avoid having many panels querying the same data cluster that can trigger throttling.
* Users can be assigned the following Grafana Organization level roles: Admin, Editor, or Viewer. The Grafana Server Admin role isn't available to customers.
Azure Managed Grafana has the following known limitations:
* Azure Managed Grafana currently doesn't support the Grafana Role Based Access Control (RBAC) feature and the [RBAC API](https://grafana.com/docs/grafana/latest/developers/http_api/access_control/) is therefore disabled.
-* Unified alerting is enabled by default for all instances created after December 2022. For instances created before this date, unified alerting must be enabled manually by the Azure Managed Grafana team. For activation, [contact us](mailto:ad4g@microsoft.com)
+* Unified alerting is enabled by default for all instances created after December 2022. For instances created before this date, unified alerting must be enabled manually by the Azure Managed Grafana team. For activation, [contact us](mailto:ad4g@microsoft.com)
* Some Azure Managed Grafana features aren't available in Azure Government and Microsoft Azure operated by 21Vianet due to limitations in these specific environments. The following table lists the feature differences.
Azure Managed Grafana has the following known limitations:
| Team sync with Microsoft Entra ID | &#x274C; | &#x274C; |
| Enterprise plugins | &#x274C; | &#x274C; |
+## Quotas
+
+The following quotas apply to the Essential (preview) and Standard plans.
+
+| Limit | Description | Essential | Standard |
+|--|--|--||
+| Alert rules | Maximum number of alert rules that can be created | Not supported | 100 per instance |
+| Dashboards | Maximum number of dashboards that can be created | 20 per instance | Unlimited |
+| Data sources | Maximum number of data sources that can be created | 5 per instance | Unlimited |
+| API keys | Maximum number of API keys that can be created | 2 per instance | 100 per instance |
+| Data query timeout | Maximum wait duration for the reception of data query response headers, before Grafana times out | 200 seconds | 200 seconds |
+| Data source query size | Maximum number of bytes that are read/accepted from responses of outgoing HTTP requests | 80 MB | 80 MB |
+| Render image or PDF report wait time | Maximum duration for an image or report PDF rendering request to complete before Grafana times out. | Not supported | 220 seconds |
+| Instance count | Maximum number of instances in a single subscription per Azure region | 1 | 20 |
+ ## Next steps > [!div class="nextstepaction"] > [Troubleshooting](./troubleshoot-managed-grafana.md)
+> [Support](./find-help-open-support-ticket.md)
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Previously updated : 3/23/2023 Last updated : 10/27/2023 # What is Azure Managed Grafana?
Managed Grafana uses Microsoft Entra ID's centralized identity management, whi
You can create dashboards instantaneously by importing existing charts directly from the Azure portal or by using prebuilt dashboards.
+## Service tiers
+
+Azure Managed Grafana is available in the two service tiers presented below.
+
+| Tier | Description |
+|--|-|
+| Essential (preview) | Provides the core Grafana functionality for use with Azure data sources. Because it doesn't provide an SLA guarantee, this tier should be used only for non-production environments. |
+| Standard | The default tier, offering better performance, more features, and an SLA. It's recommended for most situations. |
+
+> [!NOTE]
+> The Essential plan (preview) is currently being rolled out and will be available in all cloud regions on October 30, 2023.
+
+The [Azure Managed Grafana pricing page](https://azure.microsoft.com/pricing/details/managed-grafana/) gives more information on these tiers. The following table lists the main features supported in each tier:
+
+| Feature | Essential (preview) | Standard |
+||-|--|
+| [Zone redundancy](how-to-enable-zone-redundancy.md) | - | ✔ |
+| [Deterministic outbound IPs](how-to-deterministic-ip.md) | - | ✔ |
+| [Private endpoints](how-to-set-up-private-access.md) | - | ✔ |
+| [Alerting](https://grafana.com/docs/grafana/latest/alerting/) | - | ✔ |
+| [Emails, via SMTP](how-to-smtp-settings.md) | - | ✔ |
+| [Reporting/image rendering](how-to-use-reporting-and-image-rendering.md) | - | ✔ |
+| [API keys](how-to-create-api-keys.md) and [service accounts](how-to-service-accounts.md) | ✔ | ✔ |
+| [Data source plugins](how-to-data-source-plugins-managed-identity.md) | Azure Monitor, Prometheus, TestData | All core plugins, including Azure Monitor and Prometheus, as well as Azure Data Explorer, GitHub, and JSON API. |
+| [Grafana Enterprise](how-to-grafana-enterprise.md) | - | Optional, with licensing costs |
+
+> [!NOTE]
+> Users can upgrade an instance from Essential (preview) to Standard by going to **Settings** > **Configuration** > **Pricing Plans**. Downgrading from Standard to Essential (preview), however, isn't supported.
+
+## Quotas
+
+Different quotas apply to Azure Managed Grafana service instances depending on their service tiers. For a list of the quotas that apply to the Essential (preview) and Standard pricing plans, see [quotas](known-limitations.md#quotas).
+ ## Next steps > [!div class="nextstepaction"]
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
Previously updated : 12/13/2022 Last updated : 10/06/2023 ms.devlang: azurecli
Get started by creating an Azure Managed Grafana workspace using the Azure CLI. Creating a workspace will generate an Azure Managed Grafana instance.
+> [!NOTE]
+> Azure Managed Grafana now has [two pricing plans](overview.md#service-tiers). This guide takes you through creating a new workspace in the Standard plan. To generate a workspace in the newly released Essential (preview) plan, [use the Azure portal](quickstart-managed-grafana-portal.md). We're working on enabling the creation of a workspace in the Essential (preview) plan using the Azure CLI.
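At a glance, the Standard-plan workflow in this quickstart boils down to a few commands. A sketch with placeholder names (the Managed Grafana commands ship in the `amg` CLI extension):

```azurecli
# Sketch: create a resource group and a Standard-plan Managed Grafana workspace.
az extension add --name amg
az group create --name my-resource-group --location eastus
az grafana create --resource-group my-resource-group --name my-grafana
```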
+ ## Prerequisites - An Azure account for work or school and an active subscription. [Create an account for free](https://azure.microsoft.com/free).
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
Previously updated : 03/23/2022 Last updated : 10/27/2023 - # Quickstart: Create an Azure Managed Grafana instance using the Azure portal
Get started by creating an Azure Managed Grafana workspace using the Azure porta
1. In the **Basics** pane, enter the following settings.
- | Setting | Sample value | Description |
- ||||
- | Subscription ID | *my-subscription* | Select the Azure subscription you want to use. |
- | Resource group name | *my-resource-group* | Create a resource group for your Azure Managed Grafana resources. |
- | Location | *(US) East US* | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. |
- | Name | *my-grafana* | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. |
- | Zone redundancy | *Disabled* | Zone redundancy is disabled by default. Zone redundancy automatically provisions and manages a standby replica of the Managed Grafana instance in a different availability zone within one region. There's an [additional charge](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing) for this option. |
-
- :::image type="content" source="media/quickstart-portal/create-form-basics.png" alt-text="Screenshot of the Azure portal. Create workspace form. Basics.":::
-
-1. Select **Next : Advanced >** to access API key creation and statics IP address options. **Enable API key creation** and **Deterministic outbound IP** options are set to **Disable** by default. Optionally enable API key creation and enable a static IP address.
+ | Setting | Sample value | Description |
+ |||--|
+ | Subscription ID | *my-subscription* | Select the Azure subscription you want to use. |
+ | Resource group name | *my-resource-group* | Create a resource group for your Azure Managed Grafana resources. |
+ | Location | *(US) East US* | Use Location to specify the geographic location in which to host your resource. Choose the location closest to you. |
+ | Name | *my-grafana* | Enter a unique resource name. It will be used as the domain name in your Managed Grafana instance URL. |
+ | Pricing Plan | *Essential (preview)* | Choose between the Essential (preview) and the Standard plan. The Essential plan is the cheapest option you can use to evaluate the service. This plan isn't recommended for production use. For more information about Azure Managed Grafana plans, go to [pricing plans](overview.md#service-tiers). |
- :::image type="content" source="media/quickstart-portal/create-form-advanced.png" alt-text="Screenshot of the Azure portal. Create workspace form. Advanced.":::
+1. If you've chosen the Standard plan, optionally enable zone redundancy for your new instance.
+1. Select **Next : Advanced >** to access additional options:
+ - **Enable API key creation** is set to **Disable** by default.
+ - If you've opted for the Standard plan, optionally enable the **Deterministic outbound IP** feature, which is set to **Disable** by default.
1. Select **Next : Permission >** to control access rights for your Grafana instance and data sources: 1. **System assigned managed identity** is set to **On**.
Get started by creating an Azure Managed Grafana workspace using the Azure porta
1. The box **Include myself** under **Grafana administrator role** is checked. This option grants you the Grafana administrator role, and lets you manage access rights. You can give this right to more members by selecting **Add**. If this option grays out for you, ask someone with the Owner role on the subscription to assign you the Grafana Admin role.
- :::image type="content" source="media/quickstart-portal/create-form-permission.png" alt-text="Screenshot of the Azure portal. Create workspace form. Permission.":::
- 1. Optionally select **Next : Tags** and add tags to categorize resources.
- :::image type="content" source="media/quickstart-portal/create-form-tags.png" alt-text="Screenshot of the Azure portal. Create workspace form. Tags.":::
- 1. Select **Next : Review + create >**. After validation runs, select **Create**. Your Azure Managed Grafana resource is deploying. :::image type="content" source="media/quickstart-portal/create-form-validation.png" alt-text="Screenshot of the Azure portal. Create workspace form. Validation.":::
Get started by creating an Azure Managed Grafana workspace using the Azure porta
:::image type="content" source="media/quickstart-portal/grafana-ui.png" alt-text="Screenshot of a Managed Grafana instance.":::
- > [!NOTE]
- > Azure Managed Grafana doesn't support connecting with personal Microsoft accounts currently.
+> [!NOTE]
+> Azure Managed Grafana doesn't currently support connecting with personal Microsoft accounts.
You can now start interacting with the Grafana application to configure data sources, create dashboards, reports and alerts. Suggested read: [Monitor Azure services and applications using Grafana](../azure-monitor/visualize/grafana-plugin.md).
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
The tool uses a replication appliance to replicate your servers to Azure. Follow
>[!Note] > Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
-After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
+After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service-agent) on the machines you want to migrate.
## Replicate servers
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
If you just created a free Azure account, you're the owner of your subscription.
1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Screenshot of search box to search for the Azure subscription.](./media/tutorial-discover-physical/search-subscription.png)
+ :::image type="content" source="./media/tutorial-discover-physical/search-subscription.png" alt-text="Screenshot of search box to search for the Azure subscription.":::
1. Select **Access control (IAM)**.
If you just created a free Azure account, you're the owner of your subscription.
| Assign access to | User | | Members | azmigrateuser |
- ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of Add role assignment page in Azure portal.":::
1. To register the appliance, your Azure account needs **permissions to register Microsoft Entra apps.**
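As an illustration, a role assignment like the one shown above can also be made from the Azure CLI. This is a hedged sketch; the role name, user principal name, and subscription ID are placeholders:

```azurecli
# Assign a role to the migration user at the subscription scope (placeholder values)
az role assignment create \
    --assignee "azmigrateuser@contoso.com" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>"
```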
For Windows servers, use a domain account for domain-joined servers, and a local
- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users. - If Remote management Users group isn't present, then add the user account to the group: **WinRMRemoteWMIUsers_**. - The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here.-- In some cases, adding the account to these groups may not return the required data from WMI classes as the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, user account needs to have necessary permissions on CIMV2 Namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions.
+- In some cases, adding the account to these groups might not return the required data from WMI classes because the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, the user account needs the necessary permissions on the CIMV2 namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions.
> [!Note] > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
Set up a new project.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Screenshot of project name and region.](./media/tutorial-discover-physical/new-project.png)
- > [!Note] > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](discover-and-assess-using-private-endpoints.md#create-a-project-with-private-endpoint-connectivity). 7. Select **Create**. 8. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
- ![Page showing Server Assessment tool added by default.](./media/tutorial-discover-physical/added-tool.png)
+ :::image type="content" source="./media/tutorial-discover-physical/added-tool.png" alt-text="Screenshot of Discovery and assessment tool added by default.":::
> [!NOTE] > If you have already created a project, you can use the same project to register additional appliances to discover and assess more no of servers. [Learn more](create-manage-projects.md#find-a-project).
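For the resource group mentioned in step 5 above, one way to create it ahead of time is from the Azure CLI. This is a minimal sketch; the resource group name and region are placeholders:

```azurecli
# Create a resource group to hold the Azure Migrate project (placeholder name and region)
az group create --name my-migrate-rg --location eastus
```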
To set up the appliance, you:
1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**. 2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
-3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you'll set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer.
+3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer.
1. Select **Generate key** to start the creation of the required Azure resources. Don't close the Discover servers page during the creation of resources. 1. After the successful creation of the Azure resources, a **project key** is generated.
-1. Copy the key as you'll need it to complete the registration of the appliance during its configuration.
-
- [ ![Selections for Generate Key.](./media/tutorial-assess-physical/generate-key-physical-inline-1.png)](./media/tutorial-assess-physical/generate-key-physical-expanded-1.png#lightbox)
+1. Copy the key as you need it to complete the registration of the appliance during its configuration.
### 2. Download the installer script
In the configuration manager, select **Set up prerequisites**, and then complete
:::image type="content" source="./media/tutorial-discover-vmware/prerequisites.png" alt-text="Screenshot that shows setting up the prerequisites in the appliance configuration manager."::: > [!NOTE]
- > This is a new user experience in Azure Migrate appliance which is available only if you have set up an appliance using the latest OVA/Installer script downloaded from the portal. The appliances which have already been registered will continue seeing the older version of the user experience and will continue to work without any issues.
+ > This is a new user experience in Azure Migrate appliance which is available only if you have set up an appliance using the latest OVA/Installer script downloaded from the portal. The appliances which have already been registered continue seeing the older version of the user experience and continue to work without any issues.
1. For the appliance to run auto-update, paste the project key that you copied from the portal. If you don't have the key, go to **Azure Migrate: Discovery and assessment** > **Overview** > **Manage existing appliances**. Select the appliance name you provided when you generated the project key, and then copy the key that's shown.
- 2. The appliance will verify the key and start the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server.
- 3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure Login prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.
+ 2. The appliance verifies the key and starts the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server.
+ 3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure sign in prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.
:::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and sign in.":::+ 4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported. > [!NOTE]
- > If you close the login tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
+ > If you close the sign in tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
5. After you successfully sign in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to sign in has the required permissions for the Azure resources that were created during key generation, appliance registration starts. After the appliance is successfully registered, to see the registration details, select **View details**.
Now, connect from the appliance to the physical servers to be discovered, and st
- If your Linux servers support the older version of RSA key, you can generate the key using the `$ ssh-keygen -m PEM -t rsa -b 4096` command. - Azure Migrate supports OpenSSH format of the SSH private key file as shown below:
- ![Screenshot of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
+ :::image type="content" source="./media/tutorial-discover-physical/key-format.png" alt-text="Screenshot of SSH private key supported format.":::
1. If you want to add multiple credentials at once, select **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery. > [!Note]
- > By default, the credentials will be used to gather data about the installed applications, roles, and features, and also to collect dependency data from Windows and Linux servers, unless you disable the slider to not perform these features (as instructed in the last step).
+ > By default, the credentials are used to gather data about the installed applications, roles, and features, and also to collect dependency data from Windows and Linux servers, unless you disable the slider to not perform these features (as instructed in the last step).
1. In **Step 2:Provide physical or virtual server detailsΓÇï**, select **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server. 1. You can either **Add single item** at a time or **Add multiple items** in one go. There's also an option to provide server details through **Import CSV**.
Now, connect from the appliance to the physical servers to be discovered, and st
- If you choose **Add multiple items**, you can add multiple records at once by specifying server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and select **Save**. - If you choose **Import CSV** _(selected by default)_, you can download a CSV template file, populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file and select **Save**.
-1. Select Save. The appliance will try validating the connection to the servers added and show the **Validation status** in the table against each server.
+1. Select Save. The appliance tries validating the connection to the servers added and shows the **Validation status** in the table against each server.
- If validation fails for a server, review the error by selecting on **Validation failed** in the Status column of the table. Fix the issue, and validate again. - To remove a server, select **Delete**. 1. You can **revalidate** the connectivity to servers anytime before starting the discovery. 1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time. :::image type="content" source="./media/tutorial-discover-physical/disable-slider.png" alt-text="Screenshot that shows where to disable the slider.":::
-1. To perform discovery of SQL Server instances and databases, you can add additional credentials (Windows domain/non-domain, SQL authentication credentials) and the appliance will attempt to automatically map the credentials to the SQL servers. If you add domain credentials, the appliance will authenticate the credentials against Active Directory of the domain to prevent any user accounts from locking out. To check validation of the domain credentials, follow these steps:
+
+1. To perform discovery of SQL Server instances and databases, you can add additional credentials (Windows domain/non-domain, SQL authentication credentials), and the appliance attempts to automatically map the credentials to the SQL servers. If you add domain credentials, the appliance authenticates the credentials against the domain's Active Directory to prevent any user accounts from being locked out. To check validation of the domain credentials, follow these steps:
- In the configuration manager credentials table, see **Validation status** for domain credentials. Only the domain credentials are validated. - If validation fails, you can select a Failed status to see the validation error. Fix the issue, and then select **Revalidate credentials** to reattempt validation of the credentials.
select **Start discovery**, to kick off discovery of the successfully validated
* It takes approximately 2 minutes to complete discovery of 100 servers and their metadata to appear in the Azure portal. * [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished. * [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
-* Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+* The appliance can connect only to those SQL Server instances to which it has network line of sight, whereas software inventory by itself might not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
+* [Software inventory](how-to-discover-applications.md) identifies the web server role on discovered servers. If a server is found to have the web server role enabled, Azure Migrate performs web apps discovery on the server. Web apps configuration data is updated once every 24 hours.
+* [Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server is found to have web server role enabled, Azure Migrate performs web apps discovery on the server. Web apps configuration data is updated once every 24 hours.
* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md). * SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
-* By default, Azure Migrate uses the most secure way of connecting to SQL instances that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings, by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+* By default, Azure Migrate uses the most secure way of connecting to SQL Server instances; that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the `TrustServerCertificate` property to `true`. Additionally, the transport layer uses TLS/SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
:::image type="content" source="./media/tutorial-discover-vmware/sql-connection-properties.png" alt-text="Screenshot that shows how to edit SQL Server connection properties.":::
To view the remaining duration until end of support, that is, the number of mont
After the discovery has been initiated, you can delete any of the added servers from the appliance configuration manager by searching for the server name in the **Add discovery source** table and by selecting **Delete**. >[!NOTE]
-> If you choose to delete a server where discovery has been initiated, it will stop the ongoing discovery and assessment which may impact the confidence rating of the assessment that includes this server. [Learn more](./common-questions-discovery-assessment.md#why-is-the-confidence-rating-of-my-assessment-low)
+> If you choose to delete a server where discovery has been initiated, the ongoing discovery and assessment stops, which might impact the confidence rating of the assessment that includes this server. [Learn more](./common-questions-discovery-assessment.md#why-is-the-confidence-rating-of-my-assessment-low)
## Next steps
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
The first step of migration is to set up the replication appliance. To set up th
![Finalize registration](./media/tutorial-migrate-physical-virtual-machines/finalize-registration.png)
-## Install the Mobility service
-
-A Mobility service agent must be installed on the source AWS VMs to be migrated. The agent installers are available on the replication appliance. You find the right installer, and install the agent on each machine you want to migrate. Do as follows:
-
-1. Sign in to the replication appliance.
-2. Navigate to **%ProgramData%\ASR\home\svsystems\pushinstallsvc\repository**.
-3. Find the installer for the source AWS VMs operating system and version. Review [supported operating systems](../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines).
-4. Copy the installer file to the source AWS VM you want to migrate.
-5. Make sure that you have the saved passphrase text file that was created when you installed the replication appliance.
- - If you forgot to save the passphrase, you can view the passphrase on the replication appliance with this step. From the command line, run **C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v** to view the current passphrase.
- - Now, copy this passphrase to your clipboard and save it in a temporary text file on the source VMs.
-
-### Installation guide for Windows AWS VMs
-
-1. Extract the contents of installer file to a local folder (for example C:\Temp) on the AWS VM, as follows:
-
- ```
- ren Microsoft-ASR_UA*Windows*release.exe MobilityServiceInstaller.exe
- MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
- cd C:\Temp\Extracted
- ```
-
-2. Run the Mobility Service Installer:
- ```
- UnifiedAgent.exe /Role "MS" /Silent
- ```
-
-3. Register the agent with the replication appliance:
- ```
- cd C:\Program Files (x86)\Microsoft Azure Site Recovery\agent
- UnifiedAgentConfigurator.exe /CSEndPoint <replication appliance IP address> /PassphraseFilePath <Passphrase File Path>
- ```
-
-### Installation guide for Linux AWS VMs
-
-1. Extract the contents of the installer tarball to a local folder (for example /tmp/MobSvcInstaller) on the AWS VM, as follows:
- ```
- mkdir /tmp/MobSvcInstaller
- tar -C /tmp/MobSvcInstaller -xvf <Installer tarball>
- cd /tmp/MobSvcInstaller
- ```
-
-2. Run the installer script:
- ```
- sudo ./install -r MS -v VmWare -d <Install Location> -q -c CSLegacy
- ```
-
-3. Register the agent with the replication appliance:
- ```
- /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <replication appliance IP address> -P <Passphrase File Path> -c CSLegacy
- ```
+## Install the Mobility service agent
+
+A Mobility service agent must be pre-installed on the source AWS VMs to be migrated before you can initiate replication. The approach you choose to install the Mobility service agent may depend on your organization's preferences and existing tools, but be aware that the "push" installation method built into Azure Site Recovery is not currently supported. Approaches you may want to consider:
+
+- [AWS System Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html)
+- [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md)
+- [Arc for Servers and Custom Script Extensions](../azure-arc/servers/overview.md)
+- [Manual installation](../site-recovery/vmware-physical-mobility-service-overview.md)
## Enable replication for AWS VMs
migrate Tutorial Migrate Gcp Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md
The first step of migration is to set up the replication appliance. To set up th
![Finalize registration](./media/tutorial-migrate-physical-virtual-machines/finalize-registration.png)
-## Install the Mobility service
+## Install the Mobility service agent
-A Mobility service agent must be installed on the source GCP VMs to be migrated. The agent installers are available on the replication appliance. You find the right installer, and install the agent on each machine you want to migrate. Do as follows:
+A Mobility service agent must be pre-installed on the source GCP VMs to be migrated before you can initiate replication. The approach you choose to install the Mobility service agent may depend on your organization's preferences and existing tools, but be aware that the "push" installation method built into Azure Site Recovery is not currently supported. Approaches you may want to consider:
-1. Sign in to the replication appliance.
-2. Navigate to **%ProgramData%\ASR\home\svsystems\pushinstallsvc\repository**.
-3. Find the installer for the source GCP VMs operating system and version. Review [supported operating systems](../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines).
-4. Copy the installer file to the source GCP VM you want to migrate.
-5. Make sure that you have the saved passphrase text file that was created when you installed the replication appliance.
- - If you forgot to save the passphrase, you can view the passphrase on the replication appliance with this step. From the command line, run **C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v** to view the current passphrase.
- - Now, copy this passphrase to your clipboard and save it in a temporary text file on the source VMs.
-
-### Installation guide for Windows GCP VMs
-
-1. Extract the contents of installer file to a local folder (for example C:\Temp) on the GCP VM, as follows:
-
- ```
- ren Microsoft-ASR_UA*Windows*release.exe MobilityServiceInstaller.exe
- MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
- cd C:\Temp\Extracted
- ```
-
-2. Run the Mobility Service Installer:
- ```
- UnifiedAgent.exe /Role "MS" /Silent
- ```
-
-3. Register the agent with the replication appliance:
- ```
- cd C:\Program Files (x86)\Microsoft Azure Site Recovery\agent
- UnifiedAgentConfigurator.exe /CSEndPoint <replication appliance IP address> /PassphraseFilePath <Passphrase File Path>
- ```
-
-### Installation guide for Linux GCP VMs
+- [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md)
+- [Arc for Servers and Custom Script Extensions](../azure-arc/servers/overview.md)
+- [Manual installation](../site-recovery/vmware-physical-mobility-service-overview.md)
1. Extract the contents of the installer tarball to a local folder (for example /tmp/MobSvcInstaller) on the GCP VM, as follows: ```
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Mobility service agent needs to be installed on the servers to get them discover
> It is recommended to perform discovery and asessment prior to the migration using the Azure Migrate: Discovery and assessment tool, a separate lightweight Azure Migrate appliance. You can deploy the appliance as a physical server to continuously discover servers and performance metadata. For detailed steps, see [Discover physical servers](tutorial-discover-physical.md).
-## Install the Mobility service
+## Install the Mobility service agent
-On machines you want to migrate, you need to install the Mobility service agent. The agent installers are available on the replication appliance. You find the right installer, and install the agent on each machine you want to migrate. Do this as follows:
+A Mobility service agent must be pre-installed on the source physical machines to be migrated before you can initiate replication. The approach you choose to install the Mobility service agent may depend on your organization's preferences and existing tools, but be aware that the "push" installation method built into Azure Site Recovery is not currently supported. Approaches you may want to consider:
-1. Sign in to the replication appliance.
-2. Navigate to **%ProgramData%\ASR\home\svsystems\pushinstallsvc\repository**.
-3. Find the installer for the machine operating system and version. Review [supported operating systems](../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines).
-4. Copy the installer file to the machine you want to migrate.
-5. Make sure that you have the passphrase that was generated when you deployed the appliance.
- - Store the file in a temporary text file on the machine.
- - You can obtain the passphrase on the replication appliance. From the command line, run **C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v** to view the current passphrase.
- - Don't regenerate the passphrase. This will break connectivity and you will have to reregister the replication appliance.
-
-> [!NOTE]
-> In the */Platform* parameter, you specify *VMware* if you migrate VMware VMs, or physical machines.
-
-### Install on Windows
-
-1. Extract the contents of installer file to a local folder (for example C:\Temp) on the machine, as follows:
-
- ```
- ren Microsoft-ASR_UA*Windows*release.exe MobilityServiceInstaller.exe
- MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
- cd C:\Temp\Extracted
- ```
-2. Run the Mobility Service Installer:
- ```
- UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent /CSType CSLegacy
- ```
-3. Register the agent with the replication appliance:
- ```
- cd C:\Program Files (x86)\Microsoft Azure Site Recovery\agent
- UnifiedAgentConfigurator.exe /CSEndPoint <replication appliance IP address> /PassphraseFilePath <Passphrase File Path>
- ```
-
-### Install on Linux
+- [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md)
+- [Arc for Servers and Custom Script Extensions](../azure-arc/servers/overview.md)
+- [Manual installation](../site-recovery/vmware-physical-mobility-service-overview.md)
1. Extract the contents of the installer tarball to a local folder (for example /tmp/MobSvcInstaller) on the machine, as follows: ```
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md
Please note that management operations, such as adding new users, are only suppo
## Additional considerations - Microsoft Entra authentication is only available for MySQL 5.7 and newer.-- Only one Microsoft Entra administrator can be configured for a Azure Database for MySQL server at any time.
+- Only one Microsoft Entra administrator can be configured for an Azure Database for MySQL server at any time.
- Only a Microsoft Entra administrator for MySQL can initially connect to the Azure Database for MySQL using a Microsoft Entra account. The Active Directory administrator can configure subsequent Microsoft Entra database users. - If a user is deleted from Microsoft Entra ID, that user will no longer be able to authenticate with Microsoft Entra ID, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it will not be possible to connect to the server with that user. > [!NOTE]
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-security-private-link.md
Clients can connect to the private endpoint from the same VNet, [peered VNet](..
Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for MySQL from an Azure VM in a peered VNet. ### Connecting from an Azure VM in VNet-to-VNet environment
-Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to a Azure Database for MySQL from an Azure VM in a different region or subscription.
+Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for MySQL from an Azure VM in a different region or subscription.
### Connecting from an on-premises environment over VPN To establish connectivity from an on-premises environment to the Azure Database for MySQL, choose and implement one of the options:
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Previously updated : 08/25/2023 Last updated : 10/27/2023+ #CustomerIntent: As a administrator, I want learn about traffic analytics schema so I can easily use the queries and understand their output.
The following table lists the fields in the schema and what they signify for NSG
| **NSGRule_s** | NSG_RULENAME | Network security group rule that allowed or denied this flow. | | **NSGRuleType_s** | - User Defined <br> - Default | The type of network security group rule used by the flow. | | **MACAddress_s** | MAC Address | MAC address of the NIC at which the flow was captured. |
-| **Subscription_s** | Subscription of the Azure virtual network / network interface / virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
-| **Subscription1_s** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
-| **Subscription2_s** | Subscription ID | Subscription ID of virtual network/ network interface / virtual machine that the destination IP in the flow belongs to. |
+| **Subscription_g** | Subscription of the Azure virtual network / network interface / virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| **Subscription1_g** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **Subscription2_g** | Subscription ID | Subscription ID of virtual network/ network interface / virtual machine that the destination IP in the flow belongs to. |
| **Region_s** | Azure region of virtual network / network interface / virtual machine that the IP in the flow belongs to. | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). | | **Region1_s** | Azure Region | Azure region of virtual network / network interface / virtual machine that the source IP in the flow belongs to. | | **Region2_s** | Azure Region | Azure region of virtual network that the destination IP in the flow belongs to. |
The following table lists the fields in the schema and what they signify for NSG
> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing the need to parse the `FlowDirection` field so that queries are simpler. The updated schema had the following changes: > > - `FASchemaVersion_s` updated from 1 to 2.
-> - Deprecated fields: `VMIP_s`, `Subscription_s`, `Region_s`, `NSGRules_s`, `Subnet_s`, `VM_s`, `NIC_s`, `PublicIPs_s`, `FlowCount_d`
+> - Deprecated fields: `VMIP_s`, `Subscription_g`, `Region_s`, `NSGRules_s`, `Subnet_s`, `VM_s`, `NIC_s`, `PublicIPs_s`, `FlowCount_d`
> - New fields: `SrcPublicIPs_s`, `DestPublicIPs_s`, `NSGRule_s` ### VNet flow logs (preview)
List of threat types:
- `UnknownPrivate`: One of the IP addresses belong to an Azure virtual network, while the other IP address belongs to the private IP range defined in RFC 1918 and couldn't be mapped by traffic analytics to a customer owned site or Azure virtual network. - `Unknown`: Unable to map either of the IP addresses in the flow with the customer topology in Azure and on-premises (site).
-## Next Steps
+## Related content
- To learn more about traffic analytics, see [Traffic analytics overview](traffic-analytics.md). - See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to traffic analytics most frequently asked questions.
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Previously updated : 10/06/2023 Last updated : 10/27/2023 #CustomerIntent: As an Azure administrator, I want to use Traffic analytics to analyze Network Watcher flow logs so that I can view network activity, secure my networks, and optimize performance.
To use traffic analytics, you need the following components:
- Information about the flow, such as the source and destination IP addresses, the source and destination ports, and the protocol. - The status of the traffic, such as allowed or denied.
- For more information about NSG flow logs, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+ For more information about VNet flow logs, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+ > [!NOTE]
+ > For information about the differences between NSG flow logs and VNet flow logs, see [VNet flow logs compared to NSG flow logs](vnet-flow-logs-overview.md#vnet-flow-logs-compared-to-nsg-flow-logs)
## How traffic analytics works
An example might involve Host 1 at IP address 10.10.10.10 and Host 2 at IP addre
Reduced logs are enhanced with geography, security, and topology information and then stored in a Log Analytics workspace. The following diagram shows the data flow: ## Prerequisites
To get answers to the most frequently asked questions about traffic analytics, s
## Related content - To learn how to use traffic analytics, see [Usage scenarios](usage-scenarios-traffic-analytics.md).-- To understand the schema and processing details of traffic analytics, see [Schema and data aggregation in Traffic Analytics](traffic-analytics-schema.md).
+- To understand the schema and processing details of traffic analytics, see [Schema and data aggregation in Traffic Analytics](traffic-analytics-schema.md).
operator-nexus Howto Configure Network Packet Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-packet-broker.md
+
+ Title: "Azure Operator Nexus: Configure the network packet broker"
+description: Learn the commands to create and view the network packet broker's TAP rules.
++++ Last updated : 10/20/2023+++
+# Network Packet Broker
+The Network Packet Broker in Azure Operator Nexus is a specialized offering from Microsoft Azure tailored for telecommunication service providers. With it, telecom operators can efficiently capture, aggregate, filter, and monitor traffic across their Azure Operator Nexus (AON) infrastructure, allowing for deep packet inspection, traffic analysis, and enhanced network monitoring. This capability is particularly crucial in the telecommunications industry, where maintaining high-quality service, ensuring security, and complying with regulatory requirements are paramount. By using this solution, operators can gain better visibility into their network traffic, troubleshoot issues more effectively, and ultimately deliver improved services to their customers while maintaining high standards of network security and performance.
+
+The NPB is designed and modeled as a separate top-level Azure Resource Manager (ARM) resource under Microsoft.ManagedNetworkFabric. Operators can create, read, update, and delete Network TAP, Network TAP rule, and Neighbor Group resources. Each network packet broker has multiple resources, such as Network TAPs, Neighbor Groups, and Network TAP rules, to manage, filter, and forward designated traffic.
+
+## Steps to Enable Network Packet Broker
+
+**Prerequisites**
+
+- NPB devices are correctly racked, stacked, and provisioned. For the procedure to provision the network fabric, see [Network Fabric Provisioning](./howto-configure-network-fabric.md).
+- The respective vProbes should be set up with dedicated IP addresses.
+- For internal vProbes, Layer 3 isolation domains with internal networks should be created. The required connected subnets should be configured, and the extension flag should be set to NPB on the internal networks. For the procedure to create internal and external networks on an isolation domain and set the extension flag for NPB, see [Isolation Domains](./howto-configure-isolation-domain.md).
+- For the network-to-network interconnect (NNI) use case, the NNI should be created as type `NPB`, and the appropriate layer 2 and layer 3 properties should be defined during its creation. For the procedure to create the NNI, see [Network Fabric Provisioning](./howto-configure-network-fabric.md).
+
+**Steps**
+1. Create a Network TAP rule providing the match configuration (only the inline input method is supported).
+1. Create a Neighbor Group resource defining destinations.
+1. Create a Network TAP resource referencing the Tap rules and Neighbor Groups.
+1. Enable the Network TAP resource.
+### NPB
+This resource is auto-created by NNF during bootstrap.
+### Show NPB
+This command shows the details of the NPB logical resource.
+```azurecli
+ az networkfabric npb show --resource-group "example-rg" --resource-name "NPB1"
+```
+Expected output:
+```output
+{
+ "properties": {
+ "networkFabricId": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkFabrics/example-networkFabric",
+ "networkDeviceIds": [
+ "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkDevices/example-networkDevice"
+ ],
+ "sourceInterfaceIds": [
+ "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkDevices/example-networkDevice/networkInterfaces/example-networkInterface"
+ ],
+ "networkTapIds": [
+ "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkTaps/example-networkTap"
+ ],
+ "neighborGroupIds": [
+ "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/neighborGroups/example-neighborGroup"
+ ],
+ "provisioningState": "Succeeded"
+ },
+ "tags": {
+ "key2806": "key"
+ },
+ "location": "eastuseuap",
+ "id": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkPacketBrokers/example-networkPacketBroker",
+ "name": "example-networkPacketBroker",
+ "type": "microsoft.managednetworkfabric/networkPacketBrokers",
+ "systemData": {
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "createdAt": "2023-05-17T11:56:12.100Z",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-05-17T11:56:12.100Z"
+ }
+}
+```
+## Network TAP Rules
+The NetworkTapRule resource provides the ability to define filtering and forwarding combinations of conditions and actions.
+### Parameters for Network TAP Rules
+| Parameter | Description | Example | Required |
+|--|--|--|--|
+| resource-group | Use an appropriate resource group name specifically for your NetworkTapRule | ResourceGroupName | True |
+| resource-name | Resource name of the Network TAP rule | InternetTAPrule1 | True |
+| location | AzON Azure region used during NFC creation | eastus | True |
+| configuration-type | Input method to configure the Network TAP rule | Inline or File | True |
+| match-configurations | List of match configurations | | |
+| match-configurations/matchConfigurationName | Name of the match configuration block | | |
+| match-configurations/sequenceNumber | Sequence number of the match configuration | | |
+| match-configurations/ipAddressType | IP address family | | |
+| match-configurations/matchConditions | List of match conditions based on port, protocol, VLAN, and IP conditions | | |
+| match-configurations/actions | Action details. Actions can be Drop, Count, Log, Goto, Redirect, or Mirror | | |
+| dynamic-match-configurations | List of dynamic match configurations based on port, VLAN, and IP | | |
+> [!NOTE]
+> Network TAP rules and Neighbor Groups must be created prior to referencing them in a Network TAP.
+### Create Network Tap Rule
+This command creates a Network Tap rule:
+```azurecli
+az networkfabric taprule create --resource-group "example-rg" --location "westus3" --resource-name "example-networktaprule" \
+ --configuration-type "Inline" \
+ --match-configurations "[{matchConfigurationName:config1,sequenceNumber:10,ipAddressType:IPv4,matchConditions:[{encapsulationType:None,portCondition:{portType:SourcePort,layer4Protocol:TCP,ports:[100],portGroupNames:['example-portGroup1']},protocolTypes:[TCP],vlanMatchCondition:{vlans:['10'],innerVlans:['11-20']},ipCondition:{type:SourceIP,prefixType:Prefix,ipPrefixValues:['10.10.10.10/20']}}],\
+ actions:[{type:Drop,truncate:100,isTimestampEnabled:True,destinationId:'/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/neighborGroups/example-neighborGroup',matchConfigurationName:match1}]}]"\
+ --dynamic-match-configurations "[{ipGroups:[{name:'example-ipGroup1',ipAddressType:IPv4,ipPrefixes:['10.10.10.10/30']}],vlanGroups:[{name:'exmaple-vlanGroup',vlans:['10']}],portGroups:[{name:'example-portGroup1',ports:['100-200']}]}]"
+```
+Expected output:
+```output
+{
+ "properties": {
+ "networkTapId": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkTaps/example-taprule",
+ "pollingIntervalInSeconds": 30,
+ "lastSyncedTime": "2023-06-12T07:11:22.485Z",
+ "configurationState": "Succeeded",
+ "provisioningState": "Accepted",
+ "administrativeState": "Enabled",
+ "annotation": "annotation",
+ "configurationType": "Inline",
+ "tapRulesUrl": "",
+ "matchConfigurations": [
+ {
+ "matchConfigurationName": "config1",
+ "sequenceNumber": 10,
+ "ipAddressType": "IPv4",
+ "matchConditions": [
+ {
+ "encapsulationType": "None",
+ "portCondition": {
+ "portType": "SourcePort",
+ "l4Protocol": "TCP",
+ "ports": [
+ "100"
+ ],
+ "portGroupNames": [
+ "example-portGroup1"
+ ]
+ },
+ "protocolTypes": [
+ "TCP"
+ ],
+ "vlanMatchCondition": {
+ "vlans": [
+ "10"
+ ],
+ "innerVlans": [
+ "11-20"
+ ],
+ "vlanGroupNames": [
+ "exmaple-vlanGroup"
+ ]
+ },
+ "ipCondition": {
+ "type": "SourceIP",
+ "prefixType": "Prefix",
+ "ipPrefixValues": [
+ "10.10.10.10/20"
+ ],
+ "ipGroupNames": [
+ "example-ipGroup"
+ ]
+ }
+ }
+ ],
+ "actions": [
+ {
+ "type": "Drop",
+ "truncate": "100",
+ "isTimestampEnabled": "True",
+ "destinationId": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/neighborGroups/example-neighborGroup",
+ "matchConfigurationName": "match1"
+ }
+ ]
+ }
+ ],
+ "dynamicMatchConfigurations": [
+ {
+ "ipGroups": [
+ {
+ "name": "example-ipGroup1",
+ "ipPrefixes": [
+ "10.10.10.10/30"
+ ]
+ }
+ ],
+ "vlanGroups": [
+ {
+ "name": "exmaple-vlanGroup",
+ "vlans": [
+ "10",
+ "100-200"
+ ]
+ }
+ ],
+ "portGroups": [
+ {
+ "name": "example-portGroup1",
+ "ports": [
+ "100-200"
+ ]
+ },
+ {
+ "name": "example-portGroup2",
+ "ports": [
+ "900",
+ "1000-2000"
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "tags": {
+ "keyID": "keyValue"
+ },
+ "location": "eastuseuap",
+ "id": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkTapRules/example-tapRule",
+ "name": "example-tapRule",
+ "type": "microsoft.managednetworkfabric/networkTapRules",
+ "systemData": {
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "createdAt": "2023-06-12T07:11:22.488Z",
+ "lastModifiedBy": "user@mail.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-06-12T07:11:22.488Z"
+ }
+}
+```
+### Show Network Tap Rule
+This command displays a Network TAP rule resource:
+```azurecli
+az networkfabric taprule show --resource-group "example-rg" --resource-name "example-networktaprule"
+```
+Expected output:
+```output
+{
+ "properties": {
+ "networkTapId": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkTaps/example-taprule",
+ "pollingIntervalInSeconds": 30,
+ "lastSyncedTime": "2023-06-12T07:11:22.485Z",
+ "configurationState": "Succeeded",
+ "provisioningState": "Accepted",
+ "administrativeState": "Enabled",
+ "annotation": "annotation",
+ "configurationType": "Inline",
+ "tapRulesUrl": "",
+ "matchConfigurations": [
+ {
+ "matchConfigurationName": "config1",
+ "sequenceNumber": 10,
+ "ipAddressType": "IPv4",
+ "matchConditions": [
+ {
+ "encapsulationType": "None",
+ "portCondition": {
+ "portType": "SourcePort",
+ "l4Protocol": "TCP",
+ "ports": [
+ "100"
+ ],
+ "portGroupNames": [
+ "example-portGroup1"
+ ]
+ },
+ "protocolTypes": [
+ "TCP"
+ ],
+ "vlanMatchCondition": {
+ "vlans": [
+ "10"
+ ],
+ "innerVlans": [
+ "11-20"
+ ],
+ "vlanGroupNames": [
+ "exmaple-vlanGroup"
+ ]
+ },
+ "ipCondition": {
+ "type": "SourceIP",
+ "prefixType": "Prefix",
+ "ipPrefixValues": [
+ "10.10.10.10/20"
+ ],
+ "ipGroupNames": [
+ "example-ipGroup"
+ ]
+ }
+ }
+ ],
+ "actions": [
+ {
+ "type": "Drop",
+ "truncate": "100",
+ "isTimestampEnabled": "True",
+ "destinationId": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/neighborGroups/example-neighborGroup",
+ "matchConfigurationName": "match1"
+ }
+ ]
+ }
+ ],
+ "dynamicMatchConfigurations": [
+ {
+ "ipGroups": [
+ {
+ "name": "example-ipGroup1",
+ "ipPrefixes": [
+ "10.10.10.10/30"
+ ]
+ }
+ ],
+ "vlanGroups": [
+ {
+ "name": "exmaple-vlanGroup",
+ "vlans": [
+ "10",
+ "100-200"
+ ]
+ }
+ ],
+ "portGroups": [
+ {
+ "name": "example-portGroup1",
+ "ports": [
+ "100-200"
+ ]
+ },
+ {
+ "name": "example-portGroup2",
+ "ports": [
+ "900",
+ "1000-2000"
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "tags": {
+ "keyID": "keyValue"
+ },
+ "location": "eastuseuap",
+ "id": "/subscriptions/1234ABCD-0A1B-1234-5678-123456ABCDEF/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkTapRules/example-tapRule",
+ "name": "example-tapRule",
+ "type": "microsoft.managednetworkfabric/networkTapRules",
+ "systemData": {
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "createdAt": "2023-06-12T07:11:22.488Z",
+ "lastModifiedBy": "user@mail.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-06-12T07:11:22.488Z"
+ }
+}
+```
+## Neighbor group
+The Neighbor Group resource groups destinations for forwarding the filtered traffic.
+### Parameters for Neighbor Group
+| Parameter | Description | Example | Required |
+|--|--|--|--|
+| resource-group | Use an appropriate resource group name specifically for your NeighborGroup | ResourceGroupName | True |
+| resource-name | Resource name of the NeighborGroup | example-Neighbor | True |
+| location | AzON Azure region used during NFC creation | eastus | True |
+| destination | List of IPv4 or IPv6 destinations to forward traffic to | 10.10.10.10 | True |
+### Create Neighbor group
+This command creates a Neighbor Group resource:
+```azurecli
+ az networkfabric neighborgroup create --resource-group "example-rg" --location "westus3" \
+--resource-name "example-neighborgroup" --destination "{ipv4Addresses:['10.10.10.10']}"
+```
+Expected output:
+```output
+{
+ "properties": {
+ "networkTapIds": [
+ ],
+ "networkTapRuleIds": [
+ ],
+ "destination": {
+ "ipv4Addresses": [
+ "10.10.10.10",
+ ]
+ },
+ "provisioningState": "Succeeded",
+ "annotation": "annotation"
+ },
+ "tags": {
+ "keyID": "KeyValue"
+ },
+ "location": "eastus",
+ "id": "/subscriptions/subscriptionId/resourceGroups/example-rg/providers/Microsoft.ManagedNetworkFabric/neighborGroups/example-neighborGroup",
+ "name": "example-neighborGroup",
+ "type": "microsoft.managednetworkfabric/neighborGroups",
+ "systemData": {
+ "createdBy": "user@mail.com",
+ "createdByType": "User",
+ "createdAt": "2023-05-23T05:49:59.193Z",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-05-23T05:49:59.194Z"
+ }
+}
+```
+### Show Neighbor group resource
+This command displays a Neighbor Group resource:
+```azurecli
+ az networkfabric neighborgroup show --resource-group "example-rg" --resource-name "example-neighborgroup"
+```
+Expected output:
+```output
+{
+ "properties": {
+ "networkTapIds": [
+ ],
+ "networkTapRuleIds": [
+ ],
+ "destination": {
+ "ipv4Addresses": [
+ "10.10.10.10",
+ ]
+ },
+ "provisioningState": "Succeeded",
+ "annotation": "annotation"
+ },
+ "tags": {
+ "keyID": "KeyValue"
+ },
+ "location": "eastus",
+ "id": "/subscriptions/subscriptionId/resourceGroups/example-rg/providers/Microsoft.ManagedNetworkFabric/neighborGroups/example-neighborGroup",
+ "name": "example-neighborGroup",
+ "type": "microsoft.managednetworkfabric/neighborGroups",
+ "systemData": {
+ "createdBy": "user@mail.com",
+ "createdByType": "User",
+ "createdAt": "2023-05-23T05:49:59.193Z",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-05-23T05:49:59.194Z"
+ }
+}
+```
+## Network TAP
+A Network TAP allows operators to define destinations and an encapsulation mechanism to forward filtered traffic based on the Network TAP rules.
+### Parameters for Network TAP
+| Parameter | Description | Example | Required |
+|--|--|--|--|
+| resource-group | Use an appropriate resource group name specifically for your Network TAP | ResourceGroupName | True |
+| resource-name | Resource name of the Network TAP | NetworkTAP-Austin | True |
+| location | AzON Azure region used during NFC creation | eastus | True |
+| network-packet-broker-id | ARM ID of the Network Packet Broker resource | | True |
+| polling-type | Polling method for Network TAP rules (Push or Pull) | Pull | True |
+| destinations | Destination definitions | | True |
+| destinations/name | Name of the destination | | |
+| destinations/destinationType | Type of the destination: IsolationDomain or NNI | | |
+| destinations/isolationDomainProperties | Details of the isolation domain: encapsulation and neighbor group IDs | Azure Resource Manager (ARM) ID of the internal network or NNI | False |
+| destinationTapRuleId | ARM ID of the TAP rule to be applied | | True |
+### Create Network TAP
+This command creates network Tap resource:
+```azurecli
+az networkfabric tap create --resource-group "example-rg" --location "westus3" \
+--resource-name "example-networktap" \
+--network-packet-broker-id "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkPacketBrokers/example-networkPacketBroker" \
+--polling-type "Pull" \
+--destinations "[{name:'example-destinationName',destinationType:IsolationDomain,destinationId:'/subscriptions/xxxxx/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3Domain/internalNetworks/example-internalNetwork',\
+isolationDomainProperties:{encapsulation:None,neighborGroupIds:['/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourcegroups/example-rg/providers/Microsoft.ManagedNetworkFabric/neighborGroups/example-neighborGroup']},\
+```
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/contact-profile.md
Title: Configure a contact profile on Azure Orbital Ground Station service
+ Title: Azure Orbital Ground Station - Configure a contact profile
description: Learn how to configure a contact profile
# Configure a contact profile
-Configure a contact profile with Azure Orbital Ground Station to save and reuse contact configurations. This is required before scheduling a contact to ingest data from a satellite into Azure.
+Learn how to configure a [contact profile](concepts-contact-profile.md) with Azure Orbital Ground Station to save and reuse contact configurations. To schedule a contact, you must have a contact profile resource and satellite resource.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Contributor permissions at the subscription level.-- To collect telemetry during the contact, create an event hub. [Learn more about Azure Event Hubs](../event-hubs/event-hubs-about.md)-- An IP address (private or public) for data retrieval/delivery. [Create a VM and use its private IP](../virtual-machines/windows/quick-create-portal.md)
+- To collect telemetry during the contact, [create an event hub](receive-real-time-telemetry.md). [Learn more about Azure Event Hubs](../event-hubs/event-hubs-about.md).
+- An IP address (private or public) for data retrieval/delivery. Learn how to [create a VM and use its private IP](../virtual-machines/windows/quick-create-portal.md).
## Sign in to Azure
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Create a contact profile resource
-1. In the Azure portal search box, enter **Contact Profiles**. Select **Contact Profiles** in the search results. Alternatively, navigate to the Azure Orbital service and select **Contact profiles** in the left column.
-2. In the **Contact Profiles** page, select **Create**.
-3. In **Create Contact Profile Resource**, enter or select this information in the **Basics** tab:
+1. In the Azure portal search box, enter **Contact Profiles**. Select **Contact Profiles** in the search results. Alternatively, navigate to the Azure Orbital service and click **Contact profiles** in the left column.
+2. In the **Contact Profiles** page, click **Create**.
+3. In **Create Contact Profile Resource**, enter or select the following information in the **Basics** tab:
| **Field** | **Value** |
| --- | --- |
- | Subscription | Select a subscription |
- | Resource group | Select a resource group |
- | Name | Enter the contact profile name. Specify the antenna provider and mission information here. Like *Microsoft_Aqua_Uplink_Downlink_1* |
- | Region | Select a region |
- | Minimum viable contact duration | Define the minimum duration of the contact as a prerequisite to show you available time slots to communicate with your spacecraft. If an available time window is less than this time, it won't show in the list of available options. Provide minimum contact duration in ISO 8601 format. Like *PT1M* |
- | Minimum elevation | Define minimum elevation of the contact, after acquisition of signal (AOS), as a prerequisite to show you available time slots to communicate with your spacecraft. Using higher value can reduce the duration of the contact. Provide minimum viable elevation in decimal degrees. |
- | Auto track configuration | Select the frequency band to be used for autotracking during the contact. X band, S band, or Disabled. |
- | Event Hubs Namespace | Select an Event Hubs Namespace to which you'll send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
- | Event Hubs Instance | Select an Event Hubs Instance that belongs to the previously selected Namespace. *This field will only appear if an Event Hubs Namespace is selected first*. |
- | Virtual Network | Select a Virtual Network according to the instructions on the page. |
-
- :::image type="content" source="media/orbital-eos-contact-profile.png" alt-text="Contact Profile Resource Page" lightbox="media/orbital-eos-contact-profile.png":::
-
-4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-5. In the **Links** page, select **Add new Link**.
-6. In the **Add Link** page, enter or select this information per link direction:
+ | **Subscription** | Select a **subscription**. |
+ | **Resource group** | Select a **resource group**. |
+ | **Name** | Enter a contact profile **name**. Specify the antenna provider and mission information here, e.g. Microsoft_Aqua_Uplink_Downlink_1. |
+ | **Region** | Select a **region**. |
+ | **Minimum viable contact duration** | Define the **minimum duration** of the contact as a prerequisite to show available time slots to communicate with your spacecraft. _If an available time window is less than this time, it won't be in the list of available options. Provide minimum contact duration in ISO 8601 format, e.g. PT1M._ |
+ | **Minimum elevation** | Define **minimum elevation** of the contact, after acquisition of signal (AOS), as a prerequisite to show available time slots to communicate with your spacecraft. _Using a higher value might reduce the duration of the contact. Provide minimum viable elevation in decimal degrees._ |
+ | **Auto track configuration** | Select the frequency band to be used for autotracking during the contact: **X band**, **S band**, or **Disabled**. |
+ | **Event Hubs Namespace** | Select an **Event Hubs Namespace** to which you'll send telemetry data of your contacts. Learn how to [configure Event Hubs](receive-real-time-telemetry.md#configure-event-hubs). _You must select a subscription before you can select an Event Hubs Namespace._ |
+ | **Event Hubs Instance** | Select an **Event Hubs Instance** that belongs to the previously selected Namespace. _This field only appears if an Event Hubs Namespace is selected first_. |
+ | **Virtual Network** | Select a **virtual network**. *This VNET must be in the same region as the contact profile.* |
+ | **Subnet** | Select a **subnet**. *The subnet must be within the previously chosen VNET, be delegated to the Microsoft.Orbital service, and have a minimum address prefix of size /24.* |
+
+ :::image type="content" source="media/orbital-eos-contact-profile.png" alt-text="Screenshot of the contact profile basics page." lightbox="media/orbital-eos-contact-profile.png":::
+
+4. Click **Next**. In the **Links** pane, click **Add new Link**.
+5. In the **Add Link** page, enter or select the following information per link direction:
| **Field** | **Value** |
| --- | --- |
- | Name | Provide a name for the link |
- | Direction | Select the link direction |
- | Gain/Temperature (Downlink only) | Enter the gain to noise temperature in db/K |
- | EIRP in dBW (Uplink only) | Enter the effective isotropic radiated power in dBW |
- | Polarization | Select RHCP, LHCP, Dual, or Linear Vertical |
+ | **Name** | Provide a **name** for the link. |
+ | **Direction** | Select the link **direction**. |
+ | **Gain/Temperature** (_downlink only_) | Enter the **gain to noise temperature** in dB/K. |
+ | **EIRP in dBW** (_uplink only_) | Enter the **effective isotropic radiated power** in dBW. |
+ | **Polarization** | Select **RHCP**, **LHCP**, **Dual**, or **Linear Vertical**. |
-7. Select the **Add Channel** button.
-8. In the **Add Channel** page, enter or select this information per channel:
+ :::image type="content" source="media/orbital-eos-contact-link.png" alt-text="Screenshot of the contact profile links pane." lightbox="media/orbital-eos-contact-link.png":::
+
+6. Click **Add Channel**. In the **Add Channel** pane, enter or select the following information per channel:
| **Field** | **Value** |
| --- | --- |
- | Name | Enter the name for the channel |
- | Center Frequency | Enter the center frequency in MHz |
- | Bandwidth MHz | Enter the bandwidth in MHz |
- | Endpoint name | Enter the name of the data delivery endpoint |
- | IP Address | Specify the IP Address for data retrieval/delivery |
- | Port | Specify the Port for data retrieval/delivery |
- | Protocol | Select TCP or UDP protocol for data retrieval/delivery |
- | Demodulation Configuration Type (Downlink only) | Select type |
- | Demodulation Configuration (Downlink only) | Refer to [configure the modem chain](modem-chain.md) for options. |
- | Decoding Configuration (Downlink only)| If applicable, paste your decoding configuration |
- | Modulation Configuration (Uplink only) | Refer to [configure the modem chain](modem-chain.md) for options. |
- | Encoding Configuration (Uplink only)| If applicable, paste your encoding configuration |
-
- :::image type="content" source="media/orbital-eos-contact-link.png" alt-text="Contact Profile Links Page" lightbox="media/orbital-eos-contact-link.png":::
-
-7. Select the **Submit** button to add the channel.
-8. After adding all channels, select the **Submit** button to add the link.
-
-9. If a mission requires third-party providers, select the **Third-Party Configuration** tab, or select the **Next: Third-Party Configurations** button at the bottom of the page.
-
+ | **Name** | Enter a **name** for the channel. |
+ | **Center Frequency** (MHz) | Enter the **center frequency** in MHz. |
+ | **Bandwidth** (MHz) | Enter the **bandwidth** in MHz. |
+ | **Endpoint name** | Enter the **name** of the data delivery endpoint, e.g. the name of a virtual machine in your resource group. |
+ | **IP Address** | Specify the **IP Address** for data retrieval/delivery in TCP/UDP **server mode**. Leave the IP Address field **blank** for TCP/UDP **client mode**. |
+ | **Port** | Specify the **port** for data retrieval/delivery. *The port must be within 49152 and 65535 and must be unique across all links in the contact profile.* |
+ | **Protocol** | Select **TCP** or **UDP** protocol for data retrieval/delivery. |
+ | **Demodulation Configuration Type** (_downlink only_) | Select **Preset Named Modem Configuration** or **Raw XML**. |
+ | **Demodulation Configuration** (_downlink only_) | Refer to [configure the RF chain](modem-chain.md) for options. |
+ | **Decoding Configuration** (_downlink only_)| If applicable, paste your decoding configuration. |
+ | **Modulation Configuration** (_uplink only_) | Refer to [configure the RF chain](modem-chain.md) for options. |
+ | **Encoding Configuration** (_uplink only_)| If applicable, paste your encoding configuration. |
+
+7. Click **Submit** to add the channel. After adding all channels, click **Submit** to add the link.
+8. If a mission requires third-party providers, click the **Third-Party Configuration** tab.
+
> [!NOTE]
- > Mission configurations are agreed upon with third-party providers. Contacts can only be successfully scheduled with the partners if the contact profile contains the discussed mission configuration.
+ > Mission configurations are agreed upon with partner network providers. Contacts can only be successfully scheduled with the partners if the contact profile contains the appropriate mission configuration.
-10. In the **Third-Party Configurations** page, select **Add new Configuration**.
-11. In the **Mission Configuration** page, enter this information:
+11. In the **Third-Party Configurations** tab, click **Add new Configuration**.
+12. In the **Mission Configuration** page, enter the following information:
| **Field** | **Value** |
| --- | --- |
- | Provider Name | Enter the name of the provider |
- | Mission Configuration | Enter the mission configuration from the provider |
+ | **Provider Name** | Enter the **name** of the provider. |
+ | **Mission Configuration** | Enter the **mission configuration** from the provider. |
+
+13. Click **Submit** to add the mission configuration.
+14. Click **Review + create**. After the validation is complete, click **Create**.
-13. Select the **Submit** button to add the mission configuration.
-14. Select the **Review + create** tab or select the **Review + create** button at the bottom of the page.
-15. Select the **Create** button.
+After a successful deployment, the contact profile is added to your resource group.
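As an optional check after deployment, you can confirm the contact profile resource from the CLI. The resource group and profile names below are placeholders, and the query path assumes the resource exposes `properties.provisioningState`.

```azurecli
# Sketch: confirm the contact profile was created (names are placeholders).
az resource show \
  --resource-group "<your-resource-group>" \
  --name "<your-contact-profile-name>" \
  --resource-type "Microsoft.Orbital/contactProfiles" \
  --query "properties.provisioningState" \
  --output tsv
```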
## Next steps
-- [How-to Receive real-time telemetry](receive-real-time-telemetry.md)
+- [Receive real-time antenna telemetry](receive-real-time-telemetry.md)
- [Configure the RF chain](modem-chain.md)
- [Schedule a contact](schedule-contact.md)
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
Title: Downlink data from NASA's Aqua satellite by using Azure Orbital Ground Station
-description: Learn how to schedule a contact with NASA's Aqua public satellite by using the Azure Orbital Ground Station service.
+ Title: Azure Orbital Ground Station - Downlink data from public satellites
+description: Learn how to schedule a contact with public satellites by using the Azure Orbital Ground Station service.
Last updated 07/12/2022
-# Customer intent: As a satellite operator, I want to ingest data from NASA's Aqua public satellite into Azure.
+# Customer intent: As a satellite operator, I want to ingest data from NASA's public satellites into Azure.
-# Tutorial: Downlink data from NASA's Aqua public satellite
+# Tutorial: Downlink data from public satellites
-You can communicate with satellites directly from Azure by using the Azure Orbital Ground Station service. After you downlink data, you can process and analyze it in Azure. In this guide, you'll learn how to:
+You can communicate with satellites directly from Azure by using the Azure Orbital Ground Station service. After you downlink data, you can process and analyze it in Azure.
+
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
-> * Create and authorize a spacecraft for the Aqua public satellite.
-> * Prepare a virtual machine (VM) to receive downlinked Aqua data.
-> * Configure a contact profile for an Aqua downlink mission.
-> * Schedule a contact with Aqua by using Azure Orbital and save the downlinked data.
+> * Create and authorize a spacecraft for select public satellites.
+> * Prepare a virtual machine (VM) to receive downlinked data.
+> * Configure a contact profile for a downlink mission.
+> * Schedule a contact with a supported public satellite using Azure Orbital Ground Station and save the downlinked data.
+
+Azure Orbital Ground Station supports several public satellites including [Aqua](https://aqua.nasa.gov/content/about-aqua), [Suomi NPP](https://eospso.nasa.gov/missions/suomi-national-polar-orbiting-partnership), [JPSS-1/NOAA-20](https://eospso.nasa.gov/missions/joint-polar-satellite-system-1), and [Terra](https://terra.nasa.gov/about).
## Prerequisites
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Contributor permissions at the subscription level.
+- [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher to submit a spacecraft authorization request.
## Sign in to Azure
-Sign in to the [Azure portal - Azure Orbital Preview](https://aka.ms/orbital/portal).
+Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
-> [!NOTE]
-> For all the procedures in this tutorial, follow the steps exactly as shown, or you won't be able to find the resources. Use the preceding link to sign in directly to the Azure Orbital Preview page.
+## Create a spacecraft resource
-## Create a spacecraft resource for Aqua
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. On the **Spacecrafts** page, click **Create**.
+3. Choose which public satellite to contact: Aqua, Suomi NPP, JPSS-1/NOAA-20, or Terra. The table below outlines the NORAD ID, center frequency, bandwidth, and link direction and polarization for each satellite. Refer to this information in the following steps and throughout the tutorial.
-1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. On the **Spacecraft** page, select **Create**.
-3. Get an up-to-date Two-Line Element (TLE) for Aqua by checking [CelesTrak](https://celestrak.com/NORAD/elements/active.txt).
+ | **Spacecraft** | **NORAD ID** | **Center Frequency (MHz)** | **Bandwidth (MHz)** | **Direction** | **Polarization** |
+ | :- | :-: | :-: | :-: | :-: | :-: |
+ | Aqua | 27424 | 8160 | 15 | Downlink | RHCP |
+ | Suomi NPP | 37849 | 7812 | 30 | Downlink | RHCP |
+ | JPSS-1/NOAA-20 | 43013 | 7812 | 30 | Downlink | RHCP |
+ | Terra | 25994 | 8212.5 | 45 | Downlink | RHCP |
+
+4. Search for your desired public satellite in [CelesTrak](https://celestrak.com/NORAD/elements/active.txt) and identify its current Two-Line Element (TLE).
> [!NOTE]
- > Be sure to update this TLE value before you schedule a contact. A TLE that's more than two weeks old might result in an unsuccessful downlink.
+ > Be sure to update this TLE to the most current value before you schedule a contact. A TLE that's more than two weeks old might result in an unsuccessful downlink.
+ >
+ > [Read more about TLE values](spacecraft-object.md#ephemeris).
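One quick way to pull the current TLE for step 4 is to filter the CelesTrak active-satellite list from a shell. This is a sketch: the `AQUA` search string is an example and should be replaced with your satellite's name as it appears in the list.

```console
# Sketch: print the name line plus the two TLE lines for Aqua (adjust the search string for your satellite).
curl -s https://celestrak.com/NORAD/elements/active.txt | grep -A 2 "^AQUA "
```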
-4. In **Create spacecraft resource**, on the **Basics** tab, enter or select this information:
+5. In **Create spacecraft resource**, on the **Basics** tab, enter or select the following information:
| **Field** | **Value** |
| --- | --- |
| **Subscription** | Select your subscription. |
| **Resource Group** | Select your resource group. |
- | **Name** | Enter **AQUA**. |
+ | **Name** | Enter the **name** of the public spacecraft. |
| **Region** | Select **West US 2**. |
- | **NORAD ID** | Enter **27424**. |
- | **TLE title line** | Enter **AQUA**. |
+ | **NORAD ID** | Enter the **NORAD ID** from the table above. |
+ | **TLE title line** | Enter **AQUA**, **SUOMI NPP**, **NOAA 20**, or **TERRA**. |
| **TLE line 1** | Enter TLE line 1 from CelesTrak. | | **TLE line 2** | Enter TLE line 2 from CelesTrak. |
-5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page. Then, enter or select this information:
+6. Click **Next**. In the **Links** pane, select **Add new Link**.
+7. In the **Add Link** page, enter or select the following information:
| **Field** | **Value** |
| --- | --- |
+ | **Name** | Enter **Downlink**. |
| **Direction** | Select **Downlink**. |
- | **Center Frequency** | Enter **8160**. |
- | **Bandwidth** | Enter **15**. |
+ | **Center Frequency** | Enter the **center frequency** in MHz from the table above. |
+ | **Bandwidth** | Enter the **bandwidth** in MHz from the table above. |
| **Polarization** | Select **RHCP**. |
-7. Select the **Review + create** tab, or select the **Next: Review + create** button.
-8. Select **Create**.
+8. Click **Review + create**. After the validation is complete, click **Create**.
-## Request authorization of the new Aqua spacecraft resource
+## Request authorization of the new public spacecraft resource
-1. Go to the overview page for the newly created spacecraft resource.
-2. On the left pane, in the **Support + troubleshooting** section, select **New support request**.
+1. Navigate to the overview page for the newly created spacecraft resource within your resource group.
+2. On the left pane, navigate to **Support + troubleshooting** then click **Diagnose and solve problems**. Under Spacecraft Management and Setup, click **Troubleshoot**, then click **Create a support request**.
> [!NOTE] > A [Basic support plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.
-3. On the **New support request** page, on the **Basics** tab, enter or select this information:
+3. On the **New support request** page, under the **Problem description** tab, enter or select the following information:
| **Field** | **Value** | | | |
- | **Summary** | Enter **Request authorization for AQUA**. |
| **Issue type** | Select **Technical**. | | **Subscription** | Select the subscription in which you created the spacecraft resource. | | **Service** | Select **My services**. | | **Service type** | Search for and select **Azure Orbital**. |
+ | **Resource** | Select the spacecraft resource you created. |
+ | **Summary** | Enter **Request authorization for [insert name of public satellite]**. |
| **Problem type** | Select **Spacecraft Management and Setup**. | | **Problem subtype** | Select **Spacecraft Registration**. |
-4. Select the **Details** tab at the top of the page. In the **Problem details** section, enter this information:
+4. Click **Next**. If a Solutions page pops up, click **Return to support request**. Click **Next** to move to the **Additional details** tab.
+5. Under the **Additional details** tab, enter the following information:
| **Field** | **Value** | | | |
- | **When did the problem start?** | Select the current date and time. |
- | **Description** | List Aqua's center frequency (**8160**) and the desired ground stations. |
- | **File upload** | Upload any pertinent licensing material, if applicable. |
+ | **When did the problem start?** | Select the **current date and time**. |
+ | **Select Ground Stations** | Select the desired **ground stations**. |
+ | **Supplemental Terms** | Select **Yes** to accept and acknowledge the Azure Orbital [supplemental terms](https://azure.microsoft.com/products/orbital/#overview). |
+ | **Description** | Enter the satellite's **center frequency** from the table above. |
+ | **File upload** | No additional files are required. |
-6. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
-7. Select the **Review + create** tab, or select the **Next: Review + create** button.
-8. Select **Create**.
+6. Complete the **Advanced diagnostic information** and **Support method** sections of the **Additional details** tab according to your preferences.
+7. Click **Review + create**. After the validation is complete, click **Create**.
+
+After submission, the Azure Orbital Ground Station team reviews your satellite authorization request. Requests for supported public satellites shouldn't take long to approve.
> [!NOTE]
- > You can confirm that your spacecraft resource for Aqua is authorized by checking that the **Authorization status** shows **Allowed** on the spacecraft's overview page.
+ > You can confirm that your spacecraft resource is authorized by checking that the **Authorization status** shows **Allowed** on the spacecraft's overview page.
+
+## Prepare your virtual machine and network to receive public satellite data
-## Prepare your virtual machine and network to receive Aqua data
+1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint virtual machine (VM) using the same subscription and resource group where your spacecraft resource is located.
-1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint VM.
-2. [Create a virtual machine](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network that you created. Ensure that this VM has the following specifications:
- - The operating system is Linux (Ubuntu 20.04 or later).
- - The size is at least 32 GiB of RAM.
- - The VM has internet access for downloading tools by having one standard public IP address.
+2. [Create a virtual machine](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network that you created using the same subscription and resource group where your spacecraft resource is located. Ensure that this VM has the following specifications:
+ - Under the Basics tab:
+ - **Image**: the operating system is Linux (**Ubuntu 20.04** or later).
+ - **Size**: the VM has at least **32 GiB** of RAM.
+ - Under the Networking tab:
+ - **Public IP**: the VM has internet access for downloading tools by having one standard public IP address.
> [!TIP] > The public IP address here is only for internet connectivity, not contact data. For more information, see [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md).
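If you prefer the CLI over the portal, the following is a minimal sketch of an equivalent VM deployment. The resource names, the `Ubuntu2204` image alias, and the `Standard_D8s_v3` size (8 vCPUs, 32 GiB RAM) are example values, not requirements; any size that meets the RAM guidance above works.

```azurecli
# Sketch: create a receiver VM in the virtual network from step 1 (all names are placeholders).
az vm create \
  --resource-group "<your-resource-group>" \
  --name "receiver-vm" \
  --image "Ubuntu2204" \
  --size "Standard_D8s_v3" \
  --vnet-name "<your-vnet>" \
  --subnet "<your-vm-subnet>" \
  --public-ip-sku "Standard" \
  --admin-username "azureuser" \
  --generate-ssh-keys
```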
-3. Enter the following commands to create a temporary file system (*tmpfs*) on the virtual machine. This virtual machine is where the data will be written to avoid slow writes to disk.
+3. Navigate to the newly created VM. Follow the instructions linked in Step 2 to connect to the VM. At the bash prompt for your VM, enter the following commands to create a temporary file system (*tmpfs*) on the VM. The downlinked data is written to this *tmpfs* to avoid slow writes to disk.
+
+ > [!NOTE]
+ > This command references Aqua. Edit the command to reflect the public spacecraft you're using.
   ```console
   sudo mkdir /media/aqua
   sudo mount -t tmpfs -o size=28G tmpfs /media/aqua
   ```
-4. Enter the following command to ensure that the Socat tool is installed on the machine:
+4. Enter the following command in your VM to ensure that the Socat tool is installed on the machine:
   ```console
   sudo apt install socat
   ```
-5. [Prepare the network for Azure Orbital Ground Station integration](prepare-network.md) to configure your network.
+5. Follow instructions to [delegate a subnet](prepare-network.md#create-and-prepare-subnet-for-vnet-injection) to Azure Orbital Ground Station.
+
+6. Follow instructions to [prepare your VM endpoint](prepare-network.md#prepare-endpoints). Enter the following command in your VM to set the MTU level to 3650:
+
+ ```console
+ sudo ifconfig eth0 mtu 3650
+ ```
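If `ifconfig` isn't available on your image, the same MTU change can be made with `iproute2`, and you can confirm the value afterwards. `eth0` is assumed to be the interface that receives contact data.

```console
# Sketch: equivalent MTU change using iproute2, then confirm the interface reports mtu 3650.
sudo ip link set dev eth0 mtu 3650
ip link show eth0
```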
+
+## Configure Event Hubs for antenna telemetry
-## Configure a contact profile for an Aqua downlink mission
+To receive antenna telemetry during contacts with your selected public satellite, follow instructions to [create and configure an Azure event hub](receive-real-time-telemetry.md#configure-event-hubs) in your subscription.
-1. In the Azure portal's search box, enter **Contact profile**. Select **Contact profile** in the search results.
-2. On the **Contact profile** page, select **Create**.
-3. In **Create contact profile resource**, on the **Basics** tab, enter or select this information:
+## Configure a contact profile to downlink from a public satellite
+
+1. In the Azure portal's search box, enter **Contact Profiles**. Select **Contact Profiles** in the search results.
+2. On the **Contact Profiles** page, click **Create**.
+3. In **Create Contact Profile resource**, on the **Basics** tab, enter or select the following information:
| **Field** | **Value** |
| --- | --- |
- | **Subscription** | Select your subscription. |
- | **Resource group** | Select your resource group. |
- | **Name** | Enter **AQUA_Downlink**. |
+ | **Subscription** | Select your **subscription**. |
+ | **Resource group** | Select your **resource group**. |
+ | **Name** | Enter **[Satellite_Name]_Downlink**, e.g., Aqua_Downlink. |
| **Region** | Select **West US 2**. |
| **Minimum viable contact duration** | Enter **PT1M**. |
| **Minimum elevation** | Enter **5.0**. |
| **Auto track configuration** | Select **X-band**. |
- | **Event Hubs Namespace** | Select an Azure Event Hubs namespace to which you'll send telemetry data for your contacts. You must select a subscription before you can select an Event Hubs namespace. |
- | **Event Hubs Instance** | Select an Event Hubs instance that belongs to the previously selected namespace. This field appears only if you select an Event Hubs namespace first. |
+ | **Send telemetry to Event Hub?** | Select **Yes**. |
+ | **Event Hubs Namespace** | Select an Azure Event Hubs **namespace** to which you'll send telemetry data for your contacts. You must select a subscription before you can select an Event Hubs namespace. |
+ | **Event Hubs Instance** | Select an Event Hubs **instance** that belongs to the previously selected namespace. _This field appears only if you select an Event Hubs namespace first_. |
+ | **Virtual Network** | Select the **virtual network** that you created earlier. |
+ | **Subnet** | Select the delegated subnet that you created earlier. _This field appears only if you select a virtual network first_. |
+
+4. Click **Next**. In the **Links** page, click **Add new Link**.
+5. On the **Add Link** page, enter or select the following information:
+
+ | **Field** | **Value** |
+ | --- | --- |
+ | **Name** | Enter a name for the link, e.g. Aqua_Downlink. |
+ | **Direction** | Select **Downlink**. |
+ | **Gain/Temperature** | Enter **0**. |
+ | **EIRP in dBW** | Only applicable to uplink. Leave blank. |
+ | **Polarization** | Select **RHCP**. |
-4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page. Then, select **Add new Link**.
-6. On the **Add Link** pane, enter or select this information:
+6. Click **Add Channel**. In the **Add Channel** pane, add or select the following information:
| **Field** | **Value** |
| --- | --- |
- | **Direction** | Enter **Downlink**. |
- | **Gain/Temperature in db/K** | Enter **0**. |
- | **Center Frequency** | Enter **8160.0**. |
- | **Bandwidth MHz** | Enter **15.0**. |
- | **Polarization** | Enter **RHCP**. |
- | **Endpoint name** | Enter the name of the virtual machine that you created earlier. |
- | **IP Address** | Enter the private IP address of the virtual machine that you created earlier. |
+ | **Name** | Enter a name for the channel, e.g. Aqua_Downlink_Channel. |
+ | **Center Frequency (MHz)** | Enter the **center frequency** in MHz. Refer to the table above for the value for your selected spacecraft. |
+ | **Bandwidth (MHz)** | Enter the **bandwidth** in MHz. Refer to the table above for the value for your selected spacecraft. |
+ | **Endpoint name** | Enter the **name of the virtual machine** that you created earlier. |
+ | **IP Address** | Enter the **private IP address** of the virtual machine that you created earlier. |
| **Port** | Enter **56001**. |
| **Protocol** | Enter **TCP**. |
- | **Demodulation Configuration** | Select the **Preset Named Modem Configuration** option, and then select **Aqua Direct Broadcast**.|
+ | **Demodulation Configuration Type** | Select **Preset Named Modem Configuration**. |
+ | **Demodulation Configuration** | Select the **demodulation configuration** for your selected public satellite. Refer to [configure the modem chain](modem-chain.md#named-modem-configuration) for details. |
| **Decoding Configuration** | Leave this field blank. |
-7. Select the **Submit** button.
-8. Select the **Review + create** tab, or select the **Next: Review + create** button.
-9. Select **Create**.
+7. Click **Submit** to add the channel. Click **Submit** again to add the link.
+8. Click **Review + create**. After the validation is complete, click **Create**.
## Schedule a contact with Aqua and save the downlinked data
Sign in to the [Azure portal - Azure Orbital Preview](https://aka.ms/orbital/por
> Check [public satellite schedules](https://directreadout.sci.gsfc.nasa.gov/?id=dspContent&cid=14) to understand if there may be public broadcast outages. Azure Orbital Ground Station does not control the public satellites and cannot guarantee availability of data during the pass.

1. In the Azure portal's search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. On the **Spacecraft** page, select **AQUA**.
-3. Select **Schedule contact** on the top bar of the spacecraft's overview.
-4. On the **Schedule contact** page, specify this information:
+2. On the **Spacecraft** page, select your public spacecraft resource.
+3. Click **Schedule contact** on the top bar of the spacecraft's overview.
+4. On the **Schedule contact** page, specify the following information:
| **Field** | **Value** |
| --- | --- |
- | **Contact profile** | Select **AQUA_Downlink**. |
- | **Ground station** | Select **Quincy**. |
+ | **Contact profile** | Select the **contact profile** you previously created. |
+ | **Ground station** | Select **Microsoft_Quincy**. |
| **Start time** | Identify a start time for the contact availability window. |
| **End time** | Identify an end time for the contact availability window. |
-5. Select **Search** to view available contact times.
-6. Select one or more contact windows, and then select **Schedule**.
-7. View the scheduled contact by selecting the **AQUA** spacecraft and going to **Contacts**.
+5. Click **Search** to view available contact times.
+6. Select one or more contact windows, and then click **Schedule**.
+7. View the scheduled contact by selecting the spacecraft resource, navigating to Configurations on the left panel, and clicking **Contacts**.
8. Shortly before you start running the contact, start listening on port 56001 and output the data received in the file:
+ > [!NOTE]
+ > This command references Aqua. Edit the command to reflect the public spacecraft you're using.
   ```console
   socat -u tcp-listen:56001,fork create:/media/aqua/out.bin
   ```

9. After you run your contact, copy the output file from *tmpfs* into your home directory, to avoid overwriting the file when you run another contact:
+ > [!NOTE]
+ > This command references Aqua. Edit the command to reflect the public spacecraft you're using.
   ```console
   mkdir ~/aquadata
   cp /media/aqua/out.bin ~/aquadata/raw-$(date +"%FT%H%M%z").bin
   ```
orbital Receive Real Time Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/receive-real-time-telemetry.md
Last updated 07/12/2022
-# Receive real-time telemetry
+# Receive real-time antenna telemetry
-An Azure Orbital Ground station emits telemetry events that can be used to analyze the ground station operation during a contact. You can configure your contact profile to send telemetry events to Azure Event Hubs. The steps in this article describe how to create and sent events to Event Hubs.
+Azure Orbital Ground Station emits antenna telemetry events that can be used to analyze ground station operation during a contact. You can configure your contact profile to send telemetry events to [Azure Event Hubs](../event-hubs/event-hubs-about.md).
+
+In this guide, you'll learn how to:
+
+> [!div class="checklist"]
+> * Configure Azure Event Hubs for Azure Orbital Ground Station.
+> * Enable telemetry in your contact profile.
+> * Verify the content of telemetry data.
+> * Understand telemetry points.
## Configure Event Hubs
-1. In your subscription, go to Resource Provider settings and register Microsoft.Orbital as a provider
-2. Create an Azure Event Hubs in your subscription.
+1. In your subscription, go to **resource providers** in settings. Search for **Microsoft.Orbital** and register it as a provider.
+2. [Create an Azure Event Hubs namespace](../../articles/event-hubs/event-hubs-create.md#create-an-event-hubs-namespace) and an [event hub](../../articles/event-hubs/event-hubs-create.md#create-an-event-hub) in your subscription.
> [!Note] > Choose public access for network connectivity to the event hub. Private access and service endpoints aren't supported.
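If you'd rather script step 2, the equivalent Azure CLI calls look roughly like this; the resource group, namespace, event hub name, and region are placeholders.

```azurecli
# Sketch: create an Event Hubs namespace and an event hub for antenna telemetry (names are placeholders).
az eventhubs namespace create \
  --resource-group "<your-resource-group>" \
  --name "<your-eventhubs-namespace>" \
  --location "westus2" \
  --sku "Standard"

az eventhubs eventhub create \
  --resource-group "<your-resource-group>" \
  --namespace-name "<your-eventhubs-namespace>" \
  --name "<your-event-hub>"
```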
-3. From the left menu, select Access Control (IAM). Under Grant Access to this Resource, select Add Role Assignment
-4. Select Azure Event Hubs Data Sender.
-5. Assign access to 'User, group, or service principal'
-6. Click '+ Select members'
-7. Search for 'Azure Orbital Resource Provider' and press Select
-8. Press Review + Assign. This action will grant Azure Orbital the rights to send telemetry into your event hub.
-9. To confirm the newly added role assignment, go back to the Access Control (IAM) page and select View access to this resource.
-Congrats! Orbital can now communicate with your hub.
+3. From the left menu, select **Access control (IAM)**. Under 'Grant access to this resource,' select **Add role assignment**.
+
+> [!Note]
+> To assign Azure roles, you must have:
+> `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner)
+
+4. Under the **Role** tab, search for and select **Azure Event Hubs Data Sender**. Click **Next**.
+5. Under the **Members** tab, assign access to **User, group, or service principal**.
+6. Click **+ Select members**.
+7. Search for **Azure Orbital Resource Provider** and click **Select**.
+8. Click **Review + assign**. This action grants Azure Orbital Ground Station the rights to send telemetry into your event hub.
+9. To confirm the newly added role assignment, go back to the Access Control (IAM) page and select **View access to this resource**. Azure Orbital Resource Provider should be under **Azure Event Hubs Data Sender**.
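The role assignment in steps 3-9 can also be scripted. This is a sketch under assumptions: the service principal display name used to look up the Azure Orbital Resource Provider is an assumption, so confirm the principal's object ID in your tenant before assigning.

```azurecli
# Sketch: grant the Azure Orbital Resource Provider service principal the Data Sender role on your event hub.
# The display name below is an assumption; verify the object ID in your tenant first.
ORBITAL_SP_ID=$(az ad sp list --display-name "Azure Orbital Resource Provider" --query "[0].id" --output tsv)
EVENTHUB_ID=$(az eventhubs eventhub show \
  --resource-group "<your-resource-group>" \
  --namespace-name "<your-eventhubs-namespace>" \
  --name "<your-event-hub>" \
  --query "id" --output tsv)

az role assignment create \
  --role "Azure Event Hubs Data Sender" \
  --assignee-object-id "$ORBITAL_SP_ID" \
  --assignee-principal-type "ServicePrincipal" \
  --scope "$EVENTHUB_ID"
```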
-## Enable telemetry for a contact profile in the Azure portal
+## Enable Event Hubs telemetry for a contact profile
-Ensure the contact profile is configured as follows:
+Configure a [contact profile](contact-profile.md) as follows:
1. Choose a namespace using the Event Hubs Namespace dropdown.
1. Choose an instance using the Event Hubs Instance dropdown that appears after namespace selection.
-## Schedule a contact
+You can also update the settings of an existing contact profile to add or change these Event Hubs selections.
-Schedule a contact using the Contact Profile that you previously configured for Telemetry.
+## Verify antenna telemetry data from a contact
-Once the contact begins, you should begin seeing data in your Event Hubs soon after.
-
-## Verifying telemetry data
+[Schedule contacts](schedule-contact.md) using the contact profile that you previously configured for Event Hubs telemetry. Once a contact begins, you should begin seeing data in your Event Hubs soon after.
You can verify both the presence and content of incoming telemetry data multiple ways.
-### Portal: Event Hubs Capture
+### Event Hubs Namespace
-To verify that events are being received in your Event Hubs, you can check the graphs present on the Event Hubs namespace Overview page. This view shows data across all Event Hubs instances within a namespace. You can navigate to the Overview page of a specific instance to see the graphs for that instance.
+To verify that events are being received in your Event Hubs, you can check the graphs present on the overview page of your Event Hubs namespace within your resource group. This view shows data across all Event Hubs instances within a namespace. You can navigate to the overview page of a specific Event Hub instance in your resource group to see the graphs for that instance.
-### Verify content of telemetry data
+### Deliver antenna telemetry data to a storage account
-You can enable Event Hubs Capture feature that will automatically deliver the telemetry data to an Azure Blob storage account of your choosing.
-Follow the [instructions to enable Capture](../event-hubs/event-hubs-capture-enable-through-portal.md). Once enabled, you can check your container and view/download the data.
+You can enable the Event Hubs Capture feature to automatically deliver the telemetry data to an Azure Blob storage account of your choosing.
+Follow the [instructions to enable Capture](../../articles/event-hubs/event-hubs-capture-enable-through-portal.md#enable-capture-when-you-create-an-event-hub) and [capture data to Azure storage](../../articles/event-hubs/event-hubs-capture-enable-through-portal.md#capture-data-to-azure-storage). Once enabled, you can check your container and view/download the data.
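Once Capture has produced Avro files, a quick way to confirm they're landing in the container is to list the blobs. The account and container names below are placeholders, and your signed-in identity needs data-plane read access to the container.

```azurecli
# Sketch: list captured telemetry Avro blobs (account and container names are placeholders).
az storage blob list \
  --account-name "<your-storage-account>" \
  --container-name "<your-capture-container>" \
  --auth-mode login \
  --output table
```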
-## Event Hubs consumer
-
-Code: Event Hubs Consumer.
-Event Hubs documentation provides guidance on how to write simple consumer apps to receive events from your Event Hubs:
-- [Python](../event-hubs/event-hubs-python-get-started-send.md)-- [.NET](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)-- [Java](../event-hubs/event-hubs-java-get-started-send.md)-- [JavaScript](../event-hubs/event-hubs-node-get-started-send.md)-
-## Understanding telemetry points
+## Understand telemetry points
### Current Telemetry Schema Version: 4.0

The ground station provides telemetry using Avro as a schema. The schema is below:
The ground station provides telemetry using Avro as a schema. The schema is belo
] } ```
-| **Telemetry Point** | **Source Device / Point** | **Possible Values** | **Definition** |
-| : | : | : | :- |
+The following table provides the source device/point, possible values, and definition of each telemetry point.
+
+| **Telemetry Point** | **Source Device/Point** | **Possible Values** | **Definition** |
+| :- | :- | :- | :- |
| version | Manually set internally | | Release version of the telemetry |
| contactID | Contact resource | | Identification number of the contact |
| contactPlatformIdentifier | Contact resource | | |
| groundStationName | Contact resource | | Name of groundstation |
-| antennaType | Respective 1P/3P telemetry builders set this value | MICROSOFT, KSAT, VIASAT | Antenna network used for the contact. |
+| antennaType | Respective Microsoft / partner telemetry builders set this value | MICROSOFT, KSAT, VIASAT | Antenna network used for the contact. |
| antennaId | Contact resource | | Human-readable name of antenna ID |
| spacecraftName | Parsed from Contact Platform Identifier | | Name of spacecraft |
| gpsTime | Conversion of utcTime | | Time in GPS time that the customer telemetry message was generated. |
The ground station provides telemetry using Avro as a schema. The schema is belo
| digitizerName | Digitizer | | Name of digitizer device |
| endpointName | Contact profile link channel | | Name of the endpoint used for the contact. |
| inputEbN0InDb | Modem: measuredEbN0 | • NULL (Modem model other than QRadio or QRx) <br> • Double: Input EbN0 | Input energy per bit to noise power spectral density in dB. |
-| inputEsN0InDb | Not used in 1P telemetry | NULL (Not used in 1P telemetry) | Input energy per symbol to noise power spectral density in dB. |
+| inputEsN0InDb | Not used in Microsoft antenna telemetry | NULL (Not used in Microsoft antenna telemetry) | Input energy per symbol to noise power spectral density in dB. |
| inputRfPowerDbm | Digitizer: inputRfPower | • NULL (Uplink or Digitizer driver other than SNNB or SNWB) <br> • Double: Input Rf Power | Input RF power in dBm. |
| outputRfPowerDbm | Digitizer: outputRfPower | • NULL (Downlink or Digitizer driver other than SNNB or SNWB) <br> • Double: Output Rf Power | Output RF power in dBm. |
| outputPacketRate | Digitizer: rfOutputStream[0].measuredPacketRate | • NULL (Downlink or Digitizer driver other than SNNB or SNWB) <br> • Double: Output Packet Rate | Measured packet rate for Uplink |
The ground station provides telemetry using Avro as a schema. The schema is belo
| modemLockStatus | Modem: carrierLockState | • NULL (Modem model other than QRadio or QRx; couldn't parse lock status Enum) <br> • Empty string (if metric reading was null) <br> • String: Lock status | Confirmation that the modem was locked. |
| commandsSent | Modem: commandsSent | • NULL (if not Uplink and QRadio) <br> • Double: # of commands sent | Confirmation that commands were sent during the contact. |
+## Event Consumers
+
+You can write simple consumer apps to receive events from your Event Hubs using [event consumers](../../articles/event-hubs/event-hubs-features.md#event-consumers). Refer to the following documentation to learn how to send and receive events with Event Hubs in various languages:
+- [Python](../event-hubs/event-hubs-python-get-started-send.md)
+- [.NET](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
+- [Java](../event-hubs/event-hubs-java-get-started-send.md)
+- [JavaScript](../event-hubs/event-hubs-node-get-started-send.md)
+ ## Changelog
-2023-10-03 - Introduce version 4.0. Updated schema to include uplink packet metrics and names of infrastructure in use (groundstation, antenna, spacecraft, modem, digitizer, link, channel) <br>
-2023-06-05 - Updatd schema to show metrics under channels instead of links.
+2023-10-03 - Introduce version 4.0. Updated schema to include uplink packet metrics and names of infrastructure in use (ground station, antenna, spacecraft, modem, digitizer, link, channel) <br>
+2023-06-05 - Updated schema to show metrics under channels instead of links.
## Next steps
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
Execute steps listed in [Tutorial: Downlink data from NASA's Aqua public satelli
The above tutorial provides a walkthrough for scheduling a contact with Aqua and collecting the direct broadcast data on an Azure VM. > [!NOTE]
-> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-your-virtual-machine-and-network-to-receive-aqua-data), use the following values:
+> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-your-virtual-machine-and-network-to-receive-public-satellite-data), use the following values:
> > - **Name:** receiver-vm > - **Operating System:** Linux (CentOS Linux 7 or higher)
orbital Virtual Rf Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/virtual-rf-tutorial.md
Title: Understand virtual RF (vRF) through demodulation of Aqua using GNU Radio - Azure Orbital
+ Title: Azure Orbital Ground Station - Understand virtual RF through demodulation of Aqua using GNU Radio
description: Learn how to use virtual RF (vRF) instead of a managed modem. Receive a raw RF signal from NASA's Aqua public satellite and process it in GNU Radio.
Last updated 04/21/2023
-# Customer intent: As an Azure Orbital customer I want easy to understand documentation for virtual RF so I don't have to bug the product team to understand how to build my applications.
+# Customer intent: As an Azure Orbital customer I want to understand documentation for virtual RF.
# Tutorial: Understand virtual RF (vRF) through demodulation of Aqua using GNU Radio
-In [Tutorial: Downlink data from NASA's Aqua public satellite](downlink-aqua.md), data from NASA's Aqua satellite is downlinked using a **managed modem**, meaning the raw RF signal received from the Aqua satellite by the ground station is passed through a modem managed by Azure Orbital. The output of this modem, which is in the form of bytes, is then streamed to the user's VM. As part of the step [Configure a contact profile for an Aqua downlink mission](downlink-aqua.md#configure-a-contact-profile-for-an-aqua-downlink-mission) the **Demodulation Configuration** was set to **Aqua Direct Broadcast**, which is what enabled and configured the managed modem to demodulate/decode the RF signal received from Aqua. Using the vRF concept, no managed modem is used, and instead the raw RF signal is sent to the user's VM for processing. This concept can apply to both the downlink and uplink, but in this tutorial we examine the downlink process. We create a vRF, based on GNU Radio, which processes the raw RF signal and act as the modem.
+In [Tutorial: Downlink data from a public satellite](downlink-aqua.md), data from NASA's Aqua satellite is downlinked using a **managed modem**, meaning the raw RF signal received from the Aqua satellite by the ground station is passed through a modem managed by Azure Orbital. The output of this modem, which is in the form of bytes, is then streamed to the user's VM. As part of the step [Configure a contact profile for a public satellite downlink mission](downlink-aqua.md#configure-a-contact-profile-to-downlink-from-a-public-satellite), the **Demodulation Configuration** was set to **Aqua Direct Broadcast**, which is what enabled and configured the managed modem to demodulate/decode the RF signal received from Aqua. Using the vRF concept, no managed modem is used, and instead the raw RF signal is sent to the user's VM for processing. This concept can apply to both the downlink and uplink, but in this tutorial we examine the downlink process. We create a vRF, based on GNU Radio, which processes the raw RF signal and acts as the modem.
In this guide, you learn how to:
In this guide, you learn how to:
Before we dive into the tutorial, it's important to understand how vRF works and how it compares to using a managed modem. With a managed modem, the entire physical (PHY) layer occurs within Azure Orbital, meaning the RF signal is immediately processed within Azure Orbital's resources and the user only receives the information bytes produced by the modem. Using vRF, there's no managed modem, and the raw RF signal is streamed to the user from the ground station digitizer. This approach allows the user to run their own modem, or capture the RF signal for later processing.
-Advantages of vRF include the ability to use modems that aren't supported by Azure Orbital or can't be shared with Azure Orbital. vRF also allows running the same RF signal through a modem while trying different parameters to optimize performance. This approach can be used to reduce the number of satellite passes needed during testing and speed up development. Due to the nature of raw RF signals, the packet/file size is typically greater than the bytes contained within that RF signal; usually between 2-10x larger. More data means the network throughput between the VM and Azure Orbital may be a limiting factor for vRF when it may not have been for a managed modem.
+Advantages of vRF include the ability to use modems that aren't supported by Azure Orbital or can't be shared with Azure Orbital. vRF also allows running the same RF signal through a modem while trying different parameters to optimize performance. This approach can be used to reduce the number of satellite passes needed during testing and speed up development. Due to the nature of raw RF signals, the packet/file size is typically greater than the bytes contained within that RF signal; usually between 2-10x larger. More data means the network throughput between the VM and Azure Orbital could be a limiting factor for vRF when it might not have been for a managed modem.
-Throughout this tutorial, you learn first hand how vRF works. At the end of this tutorial, you can find several RF and digitizer-specific details that may be of interest to a vRF user.
+Throughout this tutorial, you learn first hand how vRF works. At the end of this tutorial, you can find several RF and digitizer-specific details that could be of interest to a vRF user.
## Role of DIFI within vRF
The DIFI Packet Protocol contains two primary message types: data packets and co
## Step 1: Use AOGS to schedule a contact and collect Aqua data
-First we remove the managed modem, and capture the raw RF data into a pcap file. Execute the steps listed in [Tutorial: Downlink data from NASA's Aqua public satellite](downlink-aqua.md) but during step [Configure a contact profile for an Aqua downlink mission](downlink-aqua.md#configure-a-contact-profile-for-an-aqua-downlink-mission) leave the **Demodulation Configuration** blank and choose UDP for **Protocol**. Lastly, towards the end, instead of the `socat` command (which captures TCP packets), run `sudo tcpdump -i eth0 port 56001 -vvv -p -w /tmp/aqua.pcap` to capture the UDP packets to a pcap file.
+First we remove the managed modem, and capture the raw RF data into a pcap file. Execute the steps listed in [Tutorial: Downlink data from NASA's Aqua public satellite](downlink-aqua.md) but during step [Configure a contact profile for an Aqua downlink mission](downlink-aqua.md#configure-a-contact-profile-to-downlink-from-a-public-satellite) leave the **Demodulation Configuration** blank and choose UDP for **Protocol**. Lastly, towards the end, instead of the `socat` command (which captures TCP packets), run `sudo tcpdump -i eth0 port 56001 -vvv -p -w /tmp/aqua.pcap` to capture the UDP packets to a pcap file.
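After the pass, a quick sanity check that the capture worked is to look at the file size and read back a few packets. The path matches the `tcpdump` command above; the packet count is arbitrary.

```console
# Sketch: confirm the capture file is non-trivial in size and contains readable UDP packets.
ls -lh /tmp/aqua.pcap
sudo tcpdump -r /tmp/aqua.pcap -c 5
```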
> [!NOTE] > The following three modifications are needed to [Tutorial: Downlink data from NASA's Aqua public satellite](downlink-aqua.md):
export SAVE_IQ=true
python drx.py ```
-You should see activity in the terminal if it worked, and there should be a new file `/tmp/samples.iq` growing larger as the script runs (it may take several minutes to finish). This new file is a binary IQ file, containing the raw RF signal. This drx.py script is essentially stripping off the DIFI header and concatenating many packets worth of IQ samples into one file. Processing the entire pcap will take a while, but you can feel free to stop it after ~10 seconds, which should save more than enough IQ samples for use in the next step.
+You should see activity in the terminal if it worked, and there should be a new file `/tmp/samples.iq` growing larger as the script runs (it can take several minutes to finish). This new file is a binary IQ file, containing the raw RF signal. This drx.py script is essentially stripping off the DIFI header and concatenating many packets worth of IQ samples into one file. Processing the entire pcap will take a while, but you can feel free to stop it after ~10 seconds, which should save more than enough IQ samples for use in the next step.
## Step 3: Demodulate the Aqua signal in GNU Radio
Before running the flowgraph, verify that your `/tmp/samples.iq` exists (or if y
:::image type="content" source="media/aqua-constellation.png" alt-text="Screenshot of the IQ plot of the Aqua signal." lightbox="media/aqua-constellation.png":::
-Yours may vary, based on the strength the signal was received. If no GUI showed up, then check GNU Radio's output in the bottom left for errors. If the GUI shows up but resembles a horizontal noisy line (with no hump), it means the contact didn't actually receive the Aqua signal. In this case, double check that autotrack is enabled in your Contact Profile and that the center frequency was entered correctly.
+Yours might vary, based on the strength the signal was received. If no GUI showed up, then check GNU Radio's output in the bottom left for errors. If the GUI shows up but resembles a horizontal noisy line (with no hump), it means the contact didn't actually receive the Aqua signal. In this case, double check that autotrack is enabled in your Contact Profile and that the center frequency was entered correctly.
The time it takes GNU Radio to finish is based on how long you let `drx.py` run, combined with your computer/VM CPU power. As the flowgraph runs, it's demodulating the RF signal in `/tmp/samples.iq` and creating the file `/tmp/aqua_out.bin`, which contains the output of the modem.
If the above steps worked, you have successfully created and deployed a downlink
## vRF within AOGS reference
-In this section, we provide several RF/digitizer-specific details that may be of interest to a vRF user or designer.
+In this section, we provide several RF/digitizer-specific details that could be of interest to a vRF user or designer.
On the downlink side, a vRF receives a signal from Azure Orbital. A DIFI stream is sent to the user's VM by Azure Orbital during a pass, and is expected to be captured by the user in real-time. Examples include using tcpdump, socat, or directly ingested into a modem. Next are some specifications related to how Azure Orbital's ground station receives and processes the signal:
On the downlink side, a vRF receives a signal from Azure Orbital. A DIFI stream
* No frequency offset is used * The user VM MTU size should be set to 3650 for X-Band and 1500 for S-Band, which is the max packet size coming from Azure Orbital
-On the uplink side, the user must provide a DIFI stream to Azure Orbital throughout the pass, for Azure Orbital to transmit. The following notes may be of interest to an uplink vRF designer:
+On the uplink side, the user must provide a DIFI stream to Azure Orbital throughout the pass, for Azure Orbital to transmit. The following notes could be of interest to an uplink vRF designer:
* The center frequency is specified in Contact Profile * The signal sample rate is set through the DIFI stream (even though a bandwidth is provided as part of the Contact Profile, it's purely for network configuration under the hood)
postgresql How To Cost Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-cost-optimization.md
Consolidating databases can be a cost-saving strategy for Azure Database for Pos
Consolidating databases can help you save costs by reducing the number of Flexible Server instances you need to run and by enabling you to use larger instances that are more cost-effective than smaller instances. It is important to evaluate the impact of consolidation on your databases' performance and ensure that the consolidated Flexible Server instance is appropriately sized to meet all database needs.
-To learn more, refer [Improve the performance of Azure applications by using Azure Advisor](../../advisor/advisor-reference-performance-recommendations.md#postgresql)
+To learn more, refer to [Improve the performance of Azure applications by using Azure Advisor](../../advisor/advisor-reference-performance-recommendations.md#databases).
## 6. Place test servers in cost-efficient geo-regions
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| SignalR (Microsoft.SignalRService/SignalR) | signalR | privatelink.service.signalr.net | service.signalr.net | | Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net | | Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.com <br/> privatelink.openai.azure.com | cognitiveservices.azure.com <br/> openai.azure.com |
-| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | {regionName}.privatelink.afs.azure.net | {regionName}.afs.azure.net |
+| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | privatelink.afs.azure.net | afs.azure.net |
| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.com | adf.azure.com |
| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
sap Sap On Azure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/sap-on-azure-overview.md
Azure Monitor for SAP solutions is an Azure-native monitoring product for SAP la
For more information, see the [Azure Monitor for SAP solutions](monitor/about-azure-monitor-sap-solutions.md) documentation. ## Next steps
search Resource Partners Knowledge Mining https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-partners-knowledge-mining.md
Previously updated : 07/13/2023 Last updated : 10/26/2023 # Partner spotlight
Get expert help from Microsoft partners who build comprehensive solutions that i
| Partner | Description | Product link |
||-|-|
-| ![Agolo](media/resource-partners/agolo-logo.png "Agolo company logo") | [**Agolo**](https://www.agolo.com) is the leading summarization engine for enterprise use. Agolo's AI platform analyzes hundreds of thousands of media articles, research documents and proprietary information to give each customer a summary of key points specific to their areas of interest. </br></br>Our partnership with Microsoft combines the power and adaptability of the Azure Cognitive Search platform, integrated with Agolo summarization. Rather than typical search engine snippets, the results page displays contextually relevant Agolo summaries, instantly enabling the user to determine the relevance of that document to their specific needs. The impact of summarization-powered search is that users find more relevant content faster, enabling them to do their job more effectively and gaining a competitive advantage. | [Product page](https://www.agolo.com/microsoft-azure-cognitive-search) |
-| ![BA Insight](media/resource-partners/ba-insight-logo.png "BA Insights company logo") | [**BA Insight Search for Workplace**](https://www.bainsight.com/azure-search/) is a complete enterprise search solution powered by Azure Cognitive Search. It is the first of its kind solution, bringing the internet to enterprises for secure, "askable", powerful search to help organizations get a return on information. It delivers a web-like search experience, connects to 80+ enterprise systems and provides automated and intelligent meta tagging. | [Product page](https://www.bainsight.com/azure-search/) |
+| ![BA Insight](media/resource-partners/ba-insight-logo.png "BA Insights company logo") | [**BA Insight Search for Workplace**](https://www.bainsight.com/azure-search/) is a complete enterprise search solution powered by Azure Cognitive Search. It's the first of its kind solution, bringing the internet to enterprises for secure, "askable", powerful search to help organizations get a return on information. It delivers a web-like search experience, connects to 80+ enterprise systems and provides automated and intelligent meta tagging. | [Product page](https://www.bainsight.com/azure-search/) |
| ![BlueGranite](media/resource-partners/blue-granite-full-color.png "Blue Granite company logo") | [**BlueGranite**](https://www.bluegranite.com/) offers 25 years of experience in Modern Business Intelligence, Data Platforms, and AI solutions across multiple industries. Their Knowledge Mining services enable organizations to obtain unique insights from structured and unstructured data sources. Modular AI capabilities perform searches on numerous file types to index data and associate that data with more traditional data sources. Analytics tools extract patterns and trends from the enriched data and showcase results to users at all levels. | [Product page](https://www.bluegranite.com/knowledge-mining) |
-| ![Enlighten Designs](media/resource-partners/enlighten-ver2.png "Enlighten Designs company logo") | [**Enlighten Designs**](https://www.enlighten.co.nz) is an award-winning innovation studio that has been enabling client value and delivering digitally transformative experiences for over 22 years. We are pushing the boundaries of the Microsoft technology toolbox, harnessing Cognitive Search, application development, and advanced Azure services that have the potential to transform our world. As experts in Power BI and data visualization, we hold the titles for the most viewed, and the most downloaded Power BI visuals in the world and are Microsoft's Data Journalism agency of record when it comes to data storytelling. | [Product page](https://www.enlighten.co.nz/Services/Data-Visualisation/Azure-Cognitive-Search) |
+| ![Enlighten Designs](media/resource-partners/enlighten-ver2.png "Enlighten Designs company logo") | [**Enlighten Designs**](https://www.enlighten.co.nz) is an award-winning innovation studio that has been enabling client value and delivering digitally transformative experiences for over 22 years. We're pushing the boundaries of the Microsoft technology toolbox, harnessing Cognitive Search, application development, and advanced Azure services that have the potential to transform our world. As experts in Power BI and data visualization, we hold the titles for the most viewed, and the most downloaded Power BI visuals in the world and are Microsoft's Data Journalism agency of record when it comes to data storytelling. | [Product page](https://www.enlighten.co.nz/Services/Data-Visualisation/Azure-Cognitive-Search) |
| ![Neudesic](media/resource-partners/neudesic-logo.png "Neudesic company logo") | [**Neudesic**](https://www.neudesic.com/) is the trusted technology partner in business innovation, delivering impactful business results to clients through digital modernization and evolution. Our consultants bring business and technology expertise together, offering a wide range of cloud and data-driven solutions, including custom application development, data and artificial intelligence, comprehensive managed services, and business software products. Founded in 2002, Neudesic is a privately held company headquartered in Irvine, California. | [Product page](https://www.neudesic.com/services/modern-workplace/document-intelligence-platform-schedule-demo/)| | ![OrangeNXT](media/resource-partners/orangenxt-beldmerk-boven-160px.png "OrangeNXT company logo") | [**OrangeNXT**](https://orangenxt.com/) offers expertise in data consolidation, data modeling, and building skillsets that include custom logic developed for specific use-cases.</br></br>digitalNXT Search is an OrangeNXT solution that combines AI, optical character recognition (OCR), and natural language processing in Azure Cognitive Search pipeline to help you extract search results from multiple structured and unstructured data sources. Integral to digitalNXT Search is advanced custom cognitive skills for interpreting and correlating selected data.</br></br>| [Product page](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/)| | ![Plain Concepts](media/resource-partners/plain-concepts-logo.png "Plain Concepts company logo") | [**Plain Concepts**](https://www.plainconcepts.com/contact/) is a Microsoft Partner with over 15 years of cloud, data, and AI expertise on Azure, and more than 12 Microsoft MVP awards. We specialize in the creation of new data relationships among heterogeneous information sources, which combined with our experience with Artificial Intelligence, Machine Learning, and Azure AI services, exponentially increases the productivity of both machines and human teams. We help customers to face the digital revolution with the AI-based solutions that best suits their company requirements.| [Product page](https://www.plainconcepts.com/artificial-intelligence/) |
-| ![Raytion](media/resource-partners/raytion-logo-blue.png "Raytion company logo") | [**Raytion**](https://www.raytion.com/) is an internationally operating IT business consultancy with a strategic focus on collaboration, search and cloud. Raytion offers intelligent and fully featured search solutions based on Microsoft Azure Cognitive Search and the Raytion product suite. Raytion's solutions enable an easy indexation of a broad range of enterprise content systems and provide a sophisticated search experience, which can be tailored to individual requirements. They are the foundation of enterprise search, knowledge searches, service desk agent support and many more applications. | [Product page](https://www.raytion.com/connectors) |
+| ![Raytion](media/resource-partners/raytion-logo-blue.png "Raytion company logo") | [**Raytion**](https://www.raytion.com/) is an internationally operating IT business consultancy with a strategic focus on collaboration, search and cloud. Raytion offers intelligent and fully featured search solutions based on Microsoft Azure Cognitive Search and the Raytion product suite. Raytion's solutions enable an easy indexation of a broad range of enterprise content systems and provide a sophisticated search experience, which can be tailored to individual requirements. They're the foundation of enterprise search, knowledge searches, service desk agent support and many more applications. | [Product page](https://www.raytion.com/connectors) |
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
Each organization will have different metrics of success and internal migration
6. Uninstall the legacy agent. For more information, see [Manage the Azure Log Analytics agent ](../azure-monitor/agents/agent-manage.md#uninstall-agent). ## FAQs
-The following FAQs address issues specific to AMA migration with Microsoft Sentinel. For more information, see also the [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent) in the Azure Monitor documentation.
+The following FAQs address issues specific to AMA migration with Microsoft Sentinel. For more information, see also the [Frequently asked questions for AMA migration](../azure-monitor/agents/azure-monitor-agent-migration.md#frequently-asked-questions) and [Frequently asked questions for Azure Monitor Agent](../azure-monitor/agents/agents-overview.md#frequently-asked-questions) in the Azure Monitor documentation.
## What happens if I run both MMA/OMS and AMA in parallel in my Microsoft Sentinel deployment?
Both the AMA and MMA/OMS agents can co-exist on the same machine. If both agents send data from the same data source to a Microsoft Sentinel workspace at the same time from a single host, duplicate events and double ingestion charges will occur.
-For your production rollout, we recommend that you configure either an MMA/OMS agent or the AMA for each data source. To address any issues for duplication, see the relevant FAQs in the [Azure Monitor documentation](/azure/azure-monitor/faq#azure-monitor-agent).
+For your production rollout, we recommend that you configure either an MMA/OMS agent or the AMA for each data source. To address any issues for duplication, see the relevant FAQs in the [Azure Monitor Agent migration documentation](../azure-monitor/agents/azure-monitor-agent-migration.md#frequently-asked-questions).
## The AMA doesn't yet have the features my Microsoft Sentinel deployment needs to work. Should I migrate yet?
The legacy Log Analytics agent will be retired on 31 August 2024.
While you can run the MMA and AMA simultaneously, you may want to migrate each c
For more information, see: -- [Frequently asked questions for AMA migration](/azure/azure-monitor/faq#azure-monitor-agent)
+- [Frequently asked questions for AMA migration](../azure-monitor/agents/azure-monitor-agent-migration.md#frequently-asked-questions)
- [Overview of the Azure Monitor agents](../azure-monitor/agents/agents-overview.md)
- [Migrate from Log Analytics agents](../azure-monitor/agents/azure-monitor-agent-migration.md)
- [Windows Security Events via AMA](data-connectors/windows-security-events-via-ama.md)
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
Previously updated : 08/11/2022 Last updated : 10/26/2023 # Integrate Azure App Configuration with Service Connector
-This page shows the supported authentication types and client types of Azure App Configuration using Service Connector. You might still be able to connect to App Configuration in other programming languages without using Service Connector. You can learn more about the [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure App Configuration to other cloud services using Service Connector. You might still be able to connect to App Configuration using other methods. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
## Supported compute services
This page shows the supported authentication types and client types of Azure App
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|-|::|::|::|::|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
- | Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal | |-|::|::|::|::| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|-|::|::|::|::|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-## Default environment variable names or application properties
+## Default environment variable names or application properties and sample code
-Use the connection details below to connect compute services to Azure App Configuration stores instances. For each example below, replace the placeholder texts
-`<App-Configuration-name>`, `<ID>`, `<secret>`, `<client-ID>`, `<client-secret>`, and `<tenant-ID>` with your App Configuration store name, ID, secret, client ID, client secret and tenant ID.
-
-### Secret / connection string
-
-> [!div class="mx-tdBreakAll"]
-> | Default environment variable name | Description | Sample value |
-> | | | |
-> | AZURE_APPCONFIGURATION_CONNECTIONSTRING | Your App Configuration Connection String | `Endpoint=https://<App-Configuration-name>.azconfig.io;Id=<ID>;Secret=<secret>` |
+Use the connection details below to connect compute services to Azure App Configuration stores. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
### System-assigned managed identity
Use the connection details below to connect compute services to Azure App Config
|--|||
| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://<App-Configuration-name>.azconfig.io` |
+Refer to the steps and code below to connect to Azure App Configuration using a system-assigned managed identity.
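As a sketch of this connection from the Azure CLI, the following creates a Service Connector connection from an App Service app to an App Configuration store with a system-assigned managed identity. All resource names are placeholders, and the parameter names are assumptions based on the Service Connector CLI pattern, so verify them with `az webapp connection create appconfig --help`:

```azurecli-interactive
# Placeholder names; parameter names assumed from the Service Connector CLI pattern.
az webapp connection create appconfig \
    --resource-group <source-resource-group> \
    --name <app-service-name> \
    --target-resource-group <target-resource-group> \
    --app-config <app-configuration-store-name> \
    --connection <connection-name> \
    --system-identity \
    --client-type dotnet
```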
+ ### User-assigned managed identity
| Default environment variable name | Description | Sample value |
Use the connection details below to connect compute services to Azure App Config
| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration Endpoint | `https://<App-Configuration-name>.azconfig.io` |
| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `<client-ID>` |
+Refer to the steps and code below to connect to Azure App Configuration using a user-assigned managed identity.
+
+### Connection string
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | AZURE_APPCONFIGURATION_CONNECTIONSTRING | Your App Configuration Connection String | `Endpoint=https://<App-Configuration-name>.azconfig.io;Id=<ID>;Secret=<secret>` |
+
+#### Sample Code
+Refer to the steps and code below to connect to Azure App Configuration using a connection string.
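After the connection is created, one way to confirm that the connection string was injected is to list the generated app settings from the Azure CLI; the names below are placeholders:

```azurecli-interactive
# List Service Connector connections on the app (placeholder names).
az webapp connection list \
    --resource-group <source-resource-group> \
    --name <app-service-name> \
    --output table

# Check that the AZURE_APPCONFIGURATION_CONNECTIONSTRING app setting exists.
az webapp config appsettings list \
    --resource-group <source-resource-group> \
    --name <app-service-name> \
    --query "[?name=='AZURE_APPCONFIGURATION_CONNECTIONSTRING'].name" \
    --output tsv
```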
++
### Service principal
| Default environment variable name | Description | Sample value |
Use the connection details below to connect compute services to Azure App Config
| AZURE_APPCONFIGURATION_CLIENTSECRET | Your client secret | `<client-secret>` |
| AZURE_APPCONFIGURATION_TENANTID | Your tenant ID | `<tenant-ID>` |
+Refer to the steps and code below to connect to Azure App Configuration using a service principal.
+ ## Next steps Follow the tutorial listed below to learn more about Service Connector.
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Azure Spring Apps intelligently schedules your applications on the underlying Ku
### In which regions is the Azure Spring Apps Basic/Standard plan available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, UK West, Sweden Central, Southeast Asia, Australia East, Canada Central, Canada East, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2, China North 2, and China North 3. [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
+See [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-apps).
### In which regions is the Azure Spring Apps Enterprise plan available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, UK West, Sweden Central, Southeast Asia, Australia East, Canada Central, Canada East, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, and Switzerland North.
+While the Azure Spring Apps Basic/Standard plan is available in Azure China regions, the Enterprise plan isn't available in all Azure China regions.
### Is any customer data stored outside of the specified region?
spring-apps How To Launch From Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-launch-from-source.md
Azure Spring Apps enables Spring Boot applications on Azure.
You can launch applications directly from Java source code or from a pre-built JAR. This article explains the deployment procedures.
-This quickstart explains how to:
-
-> [!div class="checklist"]
-> * Provision a service instance
-> * Set a configuration server for an instance
-> * Build an application locally
-> * Deploy each application
-> * Assign a public endpoint for an application
- ## Prerequisites Before you begin, ensure that your Azure subscription has the required dependencies:
az spring app show-deploy-log --name <app-name>
1. Open the **Apps** pane to view apps for your service instance.
2. Select an application to view its **Overview** page.
3. Select **Assign endpoint** to assign a public endpoint to the application. This process can take a few minutes.
-4. Copy the URL from the **Overview** page and then paste it into your browser to view your running application.
-
-> [!div class="nextstepaction"]
-> [I ran into an issue](https://www.research.net/r/javae2e?tutorial=asc-source-quickstart&step=public-endpoint)
+4. Copy the URL from the **Overview** page and paste it into your browser to view the running application. You can also assign the endpoint and retrieve the URL from the Azure CLI, as sketched below.
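If you prefer the Azure CLI for this step, a sketch of the equivalent commands follows; the resource group, service instance, and app names are placeholders:

```azurecli-interactive
# Assign a public endpoint to the app (placeholder names).
az spring app update \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <app-name> \
    --assign-endpoint true

# Print the app's public URL once the endpoint is assigned.
az spring app show \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <app-name> \
    --query properties.url \
    --output tsv
```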
## Next steps
-In this quickstart, you learned how to:
-
-> [!div class="checklist"]
-> * Provision a service instance
-> * Set a configuration server for an instance
-> * Build an application locally
-> * Deploy each application
-> * Edit environment variables for applications
-> * Assign a public endpoint to an application
-
-> [!div class="nextstepaction"]
-> [Quickstart: Monitoring Azure Spring Apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
+* [Quickstart: Monitoring Azure Spring Apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
-More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/service-binding-cosmosdb-sql).
+More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
spring-apps How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-managed-identities.md
We highly recommend that you use system-assigned and user-assigned managed ident
For the maximum number of user-assigned managed identities per application, see [Quotas and Service Plans for Azure Spring Apps](./quotas.md).
-### Azure services that aren't supported
-
-The following services do not currently support managed identity-based access:
--- Azure Redis Cache-- Azure Flexible MySQL-- Azure Flexible PostgreSQL-- Azure Database for MariaDB-- Azure Cosmos DB for MongoDB-- Azure Cosmos DB for Apache Cassandra-- Azure Databricks- ## Concept mapping
spring-apps Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-service-instance.md
zone_pivot_groups: programming-languages-spring-apps
-# Quickstart: Provision an Azure Spring Apps service instance
+# Provision an Azure Spring Apps service instance
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌ Enterprise
-This quickstart shows you how to provision a Basic or Standard plan Azure Spring Apps service instance.
+This article shows you how to provision a Basic or Standard plan Azure Spring Apps service instance.
Azure Spring Apps supports multiple plans. For more information, see [Quotas and service plans for Azure Spring Apps](./quotas.md). To learn how to create service instances for other plans, see the following articles:
Use the following steps to create an instance of Azure Spring Apps:
1. Select **Review and create**.
-> [!div class="nextstepaction"]
-> [I ran into an issue](https://www.research.net/r/javae2e?tutorial=asc-cli-quickstart&step=public-endpoint)
- #### [Azure CLI](#tab/Azure-CLI) 1. Use the following command to add or update the Azure Spring Apps extension for the Azure CLI:
Use the following steps to create an instance of Azure Spring Apps:
--name <service-instance-name> ```
- For more information, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md)
- ## Clean up resources
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+If you plan to continue working with subsequent tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
```azurecli az group delete --name <resource-group-name>
storage Static Website Content Delivery Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/static-website-content-delivery-network.md
- Title: Integrate a static website with Azure CDN-
-description: Learn how to cache static website content from an Azure Storage account by using Azure Content Delivery Network (CDN).
----- Previously updated : 04/07/2020--
-# Integrate a static website with Azure CDN
-
-You can enable [Azure Content Delivery Network (CDN)](../../cdn/cdn-overview.md) to cache content from a [static website](storage-blob-static-website.md) that is hosted in an Azure storage account. You can use Azure CDN to configure the custom domain endpoint for your static website, provision custom TLS/SSL certificates, and configure custom rewrite rules. Configuring Azure CDN results in additional charges, but provides consistent low latencies to your website from anywhere in the world. Azure CDN also provides TLS encryption with your own certificate.
-
-For information on Azure CDN pricing, see [Azure CDN pricing](https://azure.microsoft.com/pricing/details/cdn/).
-
-## Enable Azure CDN for your static website
-
-You can enable Azure CDN for your static website directly from your storage account. If you want to specify advanced configuration settings for your CDN endpoint, such as [large file download optimization](../../cdn/cdn-optimization-overview.md#large-file-download), you can instead use the [Azure CDN extension](../../cdn/cdn-create-new-endpoint.md) to create a CDN profile and endpoint.
-
-1. Locate your storage account in the Azure portal and display the account overview.
-
-1. Under the **Security + Networking** menu, select **Azure CDN** to open the **Azure CDN** page:
-
- ![Create CDN endpoint](media/storage-blob-static-website-custom-domain/cdn-storage-new.png)
-
-1. In the **CDN profile** section, specify whether to create a new CDN profile or use an existing one. A CDN profile is a collection of CDN endpoints that share a pricing tier and provider. Then enter a name for the CDN that's unique within your subscription.
-
-1. Specify a pricing tier for the CDN endpoint. To learn more about pricing, see [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/cdn/). For more information about the features available with each tier, see [Compare Azure CDN product features](../../cdn/cdn-features.md).
-
-1. In the **CDN endpoint name** field, specify a name for your CDN endpoint. The CDN endpoint must be unique across Azure and provides the first part of the endpoint URL. The form validates that the endpoint name is unique.
-
-1. Specify your static website endpoint in the **Origin hostname** field.
-
- To find your static website endpoint, navigate to the **Static website** settings for your storage account. Copy the primary endpoint and paste it into the CDN configuration.
-
- > [!IMPORTANT]
- > Make sure to remove the protocol identifier (*e.g.*, HTTPS) and the trailing slash in the URL. For example, if the static website endpoint is
- > `https://mystorageaccount.z5.web.core.windows.net/`, then you would specify `mystorageaccount.z5.web.core.windows.net` in the **Origin hostname** field.
-
- The following image shows an example endpoint configuration:
-
- ![Screenshot showing sample CDN endpoint configuration](media/storage-blob-static-website-custom-domain/add-cdn-endpoint.png)
-
-1. Select **Create**, and then wait for the CDN to provision. After the endpoint is created, it appears in the endpoint list. (If you have any errors in the form, an exclamation mark appears next to that field.)
-
-1. To verify that the CDN endpoint is configured correctly, click on the endpoint to navigate to its settings. From the CDN overview for your storage account, locate the endpoint hostname, and navigate to the endpoint, as shown in the following image. The format of your CDN endpoint will be similar to `https://staticwebsitesamples.azureedge.net`.
-
- ![Screenshot showing overview of CDN endpoint](media/storage-blob-static-website-custom-domain/verify-cdn-endpoint.png)
-
-1. Once the CDN endpoint is provisioned, navigating to the CDN endpoint displays the contents of the https://docsupdatetracker.net/index.html file that you previously uploaded to your static website.
-
-1. To review the origin settings for your CDN endpoint, navigate to **Origin** under the **Settings** section for your CDN endpoint. You will see that the **Origin type** field is set to *Custom Origin* and that the **Origin hostname** field displays your static website endpoint.
-
- ![Screenshot showing Origin settings for CDN endpoint](media/storage-blob-static-website-custom-domain/verify-cdn-origin.png)
-
-## Remove content from Azure CDN
-
-If you no longer want to cache an object in Azure CDN, you can take one of the following steps:
--- Make the container private instead of public. For more information, see [Remediate anonymous read access to blob data (Azure Resource Manager deployments)](anonymous-read-access-prevent.md).-- Disable or delete the CDN endpoint by using the Azure portal.-- Modify your hosted service to no longer respond to requests for the object.-
-An object that's already cached in Azure CDN remains cached until the time-to-live period for the object expires or until the endpoint is [purged](../../cdn/cdn-purge-endpoint.md). When the time-to-live period expires, Azure CDN determines whether the CDN endpoint is still valid and the object is still anonymously accessible. If they are not, the object will no longer be cached.
-
-## Next steps
-
-(Optional) Add a custom domain to your Azure CDN endpoint. See [Tutorial: Add a custom domain to your Azure CDN endpoint](../../cdn/cdn-map-content-to-custom-domain.md).
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-custom-domain-name.md
The approach involves using [Azure Front Door (preferred)](../../frontdoor/front
### Using Azure CDN
-1. Enable [Azure CDN](../../cdn/cdn-overview.md) on your blob or web endpoint.
-
- For a Blob Storage endpoint, see [Integrate an Azure storage account with Azure CDN](../../cdn/cdn-create-a-storage-account-with-cdn.md).
-
- For a static website endpoint, see [Integrate a static website with Azure CDN](static-website-content-delivery-network.md).
+1. Enable [Azure CDN](../../cdn/cdn-overview.md) on your blob or web endpoint. For step-by-step guidance, see [Integrate an Azure storage account with Azure CDN](../../cdn/cdn-create-a-storage-account-with-cdn.md).
2. [Map Azure CDN content to a custom domain](../../cdn/cdn-map-content-to-custom-domain.md).
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
* <a id="ad-sid-to-upn"></a> **Is it possible to view the userPrincipalName (UPN) of a file/directory owner in File Explorer instead of the security identifier (SID)?**
- In File Explorer, the SID of a file/directory owner is displayed instead of the UPN for files and directories hosted on Azure Files. However, you can use the following PowerShell command to view all items in a directory and their owner, including UPN:
+ Windows Explorer calls an RPC API directly on the server (Azure Files) to translate the SID to a UPN, and Azure Files doesn't support this API. In File Explorer, the SID of a file/directory owner is displayed instead of the UPN for files and directories hosted on Azure Files. However, you can use the following PowerShell command to view all items in a directory and their owner, including UPN:
```PowerShell Get-ChildItem <Path> | Get-ACL | Select Path, Owner
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Previously updated : 10/23/2023 Last updated : 10/27/2023 # Kafka output from Azure Stream Analytics (Preview)
-Azure Stream Analytics allows you to connect directly to Kafka clusters as a producer to output data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka Adapters are backward compatible and support all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions.
+Azure Stream Analytics allows you to connect directly to Kafka clusters as a producer to output data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The ASA Kafka output is backward compatible and supports all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions.
Supported compression types are None, Gzip, Snappy, LZ4, and Zstd. ## Configuration
The following table lists the property names and their description for creating
| Property name | Description |
||-|
-| Input/Output Alias | A friendly name used in queries to reference your input or output |
+| Output Alias | A friendly name used in queries to reference your output |
| Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. |
| Kafka topic | A unit of your Kafka cluster you want to write events to. |
| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. |
You can use four types of security protocols to connect to your Kafka clusters:
### Connect to Confluent Cloud using API key
-The ASA Kafka adapter is a librdkafka-based client, and to connect to confluent cloud, you will need TLS certificates that confluent cloud uses for server auth.
-Confluent uses TLS certificates from LetΓÇÖs Encrypt, an open certificate authority (CA)
+The ASA Kafka output is a librdkafka-based client, and to connect to Confluent Cloud, you need the TLS certificates that Confluent Cloud uses for server authentication.
+Confluent uses TLS certificates from Let's Encrypt, an open certificate authority (CA). You can download the ISRG Root X1 certificate in PEM format from the [Let's Encrypt](https://letsencrypt.org/certificates/) site.
To authenticate using the API Key confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows:
To authenticate using the API Key confluent offers, you must use the SASL_SSL pr
| Username | Key/ Username from API Key |
| Password | Secret/ Password from API key |
| KeyVault | Name of Azure Key vault with Uploaded certificate from Let's Encrypt |
- | Certificate | Certificate uploaded to KeyVault downloaded from LetΓÇÖs Encrypt (You can download the ISRG Root X1 Self-sign cert in PEM format) |
+ | Certificate | Certificate uploaded to KeyVault downloaded from Let's Encrypt (Download the ISRG Root X1 certificate in PEM format) |
## Key vault integration
To authenticate using the API Key confluent offers, you must use the SASL_SSL pr
> Azure Stream Analytics integrates seamlessly with Azure Key vault to access stored secrets needed for authentication and encryption when using mTLS or SASL_SSL security protocols. Your Azure Stream Analytics job connects to your Azure Key vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets.- Certificates are stored as secrets in the key vault and must be in PEM format.
-The following command can upload the certificate as a secret to your key vault. You need "Administrator" access to your Key vault for this command to work properly.
+### Configure Key vault with permissions
+
+You can create a key vault resource by following the documentation [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
+To be able to upload certificates, you must have "**Key Vault Administrator**" access to your Key vault. Use the following steps to grant admin access in the Azure portal, or use the Azure CLI as sketched after the table below.
+
+> [!NOTE]
+> You must have "**Owner**" permissions to grant other key vault permissions.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+1. Assign the role using the following configuration:
+
+ | Setting | Value |
+ | | |
+ | Role | Key Vault Administrator |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Your account information or email> |
++
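As an alternative to the portal steps above, the same role assignment can be sketched with the Azure CLI; the identity and key vault names below are placeholders:

```azurecli-interactive
# Grant yourself the Key Vault Administrator role on the vault (placeholder names).
az role assignment create \
    --role "Key Vault Administrator" \
    --assignee "<your-user-or-service-principal-id>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
```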
+### Upload Certificate to Key vault
+
+You can use Azure CLI to upload certificates as secrets to your key vault or use the Azure portal to upload the certificate as a secret.
+> [!IMPORTANT]
+> You must have "**Key Vault Administrator**" permissions on your Key vault for this command to work properly.
+> You must upload the certificate as a secret.
+> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job.
+
+#### Option One - Upload certificate via Azure CLI
+
+The following command can upload the certificate as a secret to your key vault.
```azurecli-interactive az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret> ```
+#### Option Two - Upload certificate via the Azure portal
+Use the following steps to upload a certificate as a secret using the Azure portal in your key vault:
+1. Select **Secrets**.
+
+1. Select **Generate/Import** to open the **Create a secret** page.
+
+1. Complete the following configuration for creating a secret:
+
+ | Setting | Value |
+ | | |
+ | Upload Options | Certificate |
+ | Upload certificate | \<select the certificate to upload> |
+ | Name | \<Name you want to give your secret> |
+ | activation date | (optional) |
+ | expiration date | (optional) |
+
+### Configure Managed identity
+Azure Stream Analytics requires you to configure managed identity to access key vault.
+You can configure your ASA job to use managed identity by navigating to the **Managed Identity** tab on the left side under **Configure**.
+
+ ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity-new.png)
+
+1. Click on the **managed identity tab** under **configure**.
+2. Select **Switch Identity** and select the identity to use with the job: system-assigned identity or user-assigned identity.
+3. For user-assigned identity, select the subscription where your user-assigned identity is located and select the name of your identity.
+4. Review and **save**.
+ ### Grant the Stream Analytics job permissions to access the certificate in the key vault For your Azure Stream Analytics job to access the certificate in your key vault and read the secret for authentication using managed identity, the service principal you created when you configured managed identity for your Azure Stream Analytics job must have special permissions to the key vault.
For your Azure Stream Analytics job to access the certificate in your key vault
| Setting | Value | | | |
- | Role | Key vault secret reader |
- | Assign access to | User, group, or service principal |
- | Members | \<Name of your Stream Analytics job> |
-
+ | Role | Key vault secrets user |
+ | Managed identity | Stream Analytics job for System-assigned managed identity or User-assigned managed identity |
+ | Members | \<Name of your Stream Analytics job> or \<name of user-assigned identity> |
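A sketch of the same assignment with the Azure CLI, assuming you already know the principal ID of the job's managed identity; all names are placeholders:

```azurecli-interactive
# Grant the job's managed identity read access to secrets in the vault.
az role assignment create \
    --role "Key Vault Secrets User" \
    --assignee-object-id "<principal-id-of-the-job-managed-identity>" \
    --assignee-principal-type ServicePrincipal \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
```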
+
### VNET integration
-When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you might have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall.
+
+If your Kafka is inside a virtual network (VNET) or behind a firewall, you must configure your Azure Stream Analytics job to access your Kafka topic.
+Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network documentation](../stream-analytics/run-job-in-virtual-network.md) for more information.
### Limitations
-* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units.
+* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units or one (1) V2 streaming unit.
* When using mTLS or SASL_SSL with Azure Key vault, you must convert your Java Key Store to PEM format. * The minimum version of Kafka you can configure Azure Stream Analytics to connect to is version 0.10. > [!NOTE]
-> For direct help with using the Azure Stream Analytics Kafka adapter, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com).
+> For direct help with using the Azure Stream Analytics Kafka output, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com).
>
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Previously updated : 10/23/2023 Last updated : 10/27/2023 # Stream data from Kafka into Azure Stream Analytics (Preview)
The following are the major use cases:
Azure Stream Analytics lets you connect directly to Kafka clusters to ingest data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka Adapters are backward compatible and support all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd.
-### Configuration
+## Configuration
The following table lists the property names and their description for creating a Kafka Input:
+> [!IMPORTANT]
+> To configure your Kafka cluster as an input, the timestamp type of the input topic should be **LogAppendTime**. The only timestamp type Azure Stream Analytics supports is **LogAppendTime**.
+>
+ | Property name | Description |
||-|
| Input/Output Alias | A friendly name used in queries to reference your input or output |
You can use four types of security protocols to connect to your Kafka clusters:
### Connect to Confluent Cloud using API key
-The ASA Kafka adapter is a librdkafka-based client, and to connect to confluent cloud, you will need TLS certificates that confluent cloud uses for server auth.
-Confluent uses TLS certificates from LetΓÇÖs Encrypt, an open certificate authority (CA)
+The ASA Kafka adapter is a librdkafka-based client, and to connect to Confluent Cloud, you need the TLS certificates that Confluent Cloud uses for server authentication.
+Confluent uses TLS certificates from Let's Encrypt, an open certificate authority (CA). You can download the ISRG Root X1 certificate in PEM format from the [Let's Encrypt](https://letsencrypt.org/certificates/) site.
To authenticate using the API Key confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows:
To authenticate using the API Key confluent offers, you must use the SASL_SSL pr
| Username | Key/ Username from API Key |
| Password | Secret/ Password from API key |
| KeyVault | Name of Azure Key vault with Uploaded certificate from Let's Encrypt |
- | Certificate | Certificate uploaded to KeyVault downloaded from LetΓÇÖs Encrypt (You can download the ISRG Root X1 Self-sign cert in PEM format) |
+ | Certificate | Certificate uploaded to KeyVault downloaded from Let's Encrypt (Download the ISRG Root X1 certificate in PEM format) |
## Key vault integration
To authenticate using the API Key confluent offers, you must use the SASL_SSL pr
> Azure Stream Analytics integrates seamlessly with Azure Key vault to access stored secrets needed for authentication and encryption when using mTLS or SASL_SSL security protocols. Your Azure Stream Analytics job connects to your Azure Key vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets.- Certificates are stored as secrets in the key vault and must be in PEM format.
-The following command can upload the certificate as a secret to your key vault. You need "Administrator" access to your Key vault for this command to work properly.
+### Configure Key vault with permissions
+
+You can create a key vault resource by following the documentation [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
+To be able to upload certificates, you must have "**Key Vault Administrator**" access to your Key vault. Use the following steps to grant admin access.
+
+> [!NOTE]
+> You must have "**Owner**" permissions to grant other key vault permissions.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+1. Assign the role using the following configuration:
+
+ | Setting | Value |
+ | | |
+ | Role | Key Vault Administrator |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Your account information or email> |
++
+### Upload Certificate to Key vault
+
+You can use Azure CLI to upload certificates as secrets to your key vault or use the Azure portal to upload the certificate as a secret.
+> [!IMPORTANT]
+> You must upload the certificate as a secret.
+
+#### Option One - Upload certificate via Azure CLI
+
+The following command can upload the certificate as a secret to your key vault. You must have "**Key Vault Administrator**" permissions on your Key vault for this command to work properly.
```azurecli-interactive az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret> ```
+#### Option Two - Upload certificate via the Azure portal
+Use the following steps to upload a certificate as a secret using the Azure portal in your key vault:
+1. Select **Secrets**.
+
+1. Select **Generate/Import** to open the **Create a secret** page.
+
+1. Complete the following configuration for creating a secret:
+
+ | Setting | Value |
+ | | |
+ | Upload Options | Certificate |
+ | Upload certificate | \<select the certificate to upload> |
+ | Name | \<Name you want to give your secret> |
++
+### Configure Managed identity
+Azure Stream Analytics requires you to configure managed identity to access key vault.
+You can configure your ASA job to use managed identity by navigating to the **Managed Identity** tab on the left side under **Configure**.
+
+ ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity-new.png)
+
+1. Click on the **managed identity tab** under **configure**.
+2. Select **Switch Identity** and select the identity to use with the job: system-assigned identity or user-assigned identity.
+3. For user-assigned identity, select the subscription where your user-assigned identity is located and select the name of your identity.
+4. Review and **save**.
+ ### Grant the Stream Analytics job permissions to access the certificate in the key vault For your Azure Stream Analytics job to access the certificate in your key vault and read the secret for authentication using managed identity, the service principal you created when you configured managed identity for your Azure Stream Analytics job must have special permissions to the key vault.
For your Azure Stream Analytics job to access the certificate in your key vault
| Setting | Value | | | |
- | Role | Key vault secret reader |
- | Assign access to | User, group, or service principal |
- | Members | \<Name of your Stream Analytics job> |
+ | Role | Key vault secrets user |
+ | Managed identity | Stream Analytics job for System-assigned managed identity or User-assigned managed identity |
+ | Members | \<Name of your Stream Analytics job> or \<name of user-assigned identity> |
### VNET integration
-When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you might have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall.
+
+If your Kafka is inside a virtual network (VNET) or behind a firewall, you must configure your Azure Stream Analytics job to access your Kafka topic.
+Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network documentation](../stream-analytics/run-job-in-virtual-network.md) for more information.
+ ### Limitations
update-manager Guidance Migration Automation Update Management Azure Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-automation-update-management-azure-update-manager.md
description: Guidance overview on migration from Automation Update Management to
Previously updated : 09/14/2023 Last updated : 10/27/2023
Guidance to move various capabilities is provided in table below:
1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../app-service/quickstart-php.md#1get-the-sample-repository) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 2 | Enable periodic assessment to check for latest updates automatically every few hours. | Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, the Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. [At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. [For Azure VM](../virtual-machines/automatic-vm-guest-patching.md#azure-powershell-when-updating-a-windows-vm) </br> 2.[For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine?view=azps-10.2.0) | 3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy) | [Create a static scope](manage-vms-programmatically.md) |
-4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. which is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) |
-5 | Deboard from Azure Automation Update management. | After you complete the steps 1, 2, and 3, you need to clean up Azure Update management objects. | | 1. [Remove machines from solution](../automation/update-management/remove-feature.md#remove-management-of-vms) </br> 2. [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> 3. [Unlink workspace from Automation account](../automation/update-management/remove-feature.md#unlink-workspace-from-automation-account) </br> 4. [Cleanup Automation account](../automation/update-management/remove-feature.md#cleanup-automation-account) | NA |
+4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. which is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) |
+5 | Deboard from Azure Automation Update management. | After you complete the steps 1, 2, and 3, you need to clean up Azure Update management objects. | | [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> | NA |
6 | Reporting | Custom update reports using Log Analytics queries. | Update data is stored in Azure Resource Graph (ARG). Customers can query ARG data to build custom dashboards, workbooks etc. | The old Automation Update Management data stored in Log analytics can be accessed, but there's no provision to move data to ARG. You can write ARG queries to access data that will be stored to ARG after virtual machines are patched via Azure Update Manager. With ARG queries you can, build dashboards and workbooks using following instructions: </br> 1. [Log structure of Azure Resource graph updates data](query-logs.md) </br> 2. [Sample ARG queries](sample-query-logs.md) </br> 3. [Create workbooks](manage-workbooks.md) | NA | 7 | Customize workflows using pre and post scripts. | Available as Automation runbooks. | We recommend that you use Automation runbooks once they are available. | | | 8 | Create alerts based on updates data for your environment | Alerts can be set up on updates data stored in Log Analytics. |We recommend that you use alerts once they are available. | | |
virtual-machines Image Builder Api Update Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-api-update-release-notes.md
Previously updated : 06/05/2023 Last updated : 10/05/2023
This article contains all major API changes and feature updates for the Azure VM
## Updates

### April 2023

New portal functionality has been added for Azure Image Builder. Search "Image Templates" in the Azure portal, then click "Create". You can also [get started here](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal.

## API releases
+### Version 2023-07-01
+
+**Coming Soon**
+
+Support for updating Azure Compute Gallery distribution targets.
+**Changes**
+
+New `errorHandling` property. This property provides users with more control over how errors are handled during the image building process. For more information, see [errorHandling](../virtual-machines/linux/image-builder-json.md#properties-errorhandling).
### Version 2022-07-01
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
Azure Disk Encryption for Linux virtual machines (VMs) uses the DM-Crypt feature
Azure Disk Encryption is [integrated with Azure Key Vault](disk-encryption-key-vault.md) to help you control and manage the disk encryption keys and secrets. For an overview of the service, see [Azure Disk Encryption for Linux VMs](disk-encryption-overview.md).
+## Prerequisites
You can only apply disk encryption to virtual machines of [supported VM sizes and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems). You must also meet the following prerequisites:

- [Additional requirements for VMs](disk-encryption-overview.md#supported-vms-and-operating-systems)
- [Networking requirements](disk-encryption-overview.md#networking-requirements)
- [Encryption key storage requirements](disk-encryption-overview.md#encryption-key-storage-requirements)
-In all cases, you should [take a snapshot](snapshot-copy-managed-disk.md) and/or create a backup before disks are encrypted. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. Once a backup is made, you can use the [Set-AzVMDiskEncryptionExtension cmdlet](/powershell/module/az.compute/set-azvmdiskencryptionextension) to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see the [Azure Backup](../../backup/backup-azure-vms-encryption.md) article.
+In all cases, you should [take a snapshot](snapshot-copy-managed-disk.md) and/or create a backup before disks are encrypted. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. Once a backup is made, you can use the [Set-AzVMDiskEncryptionExtension cmdlet](/powershell/module/az.compute/set-azvmdiskencryptionextension) to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see the [Azure Backup](../../backup/backup-azure-vms-encryption.md) article.
->[!WARNING]
-> - If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details.
->
-> - When encrypting Linux OS volumes, the VM should be considered unavailable. We strongly recommend to avoid SSH logins while the encryption is in progress to avoid issues blocking any open files that will need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) PowerShell cmdlet or the [vm encryption show](/cli/azure/vm/encryption#az-vm-encryption-show) CLI command. This process can be expected to take a few hours for a 30GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time will be proportional to the size and quantity of the data volumes unless the encrypt format all option is used.
-> - Disabling encryption on Linux VMs is only supported for data volumes. It is not supported on data or OS volumes if the OS volume has been encrypted.
+## Restrictions
+
+If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details.
+
+When encrypting Linux OS volumes, the VM should be considered unavailable. We strongly recommend that you avoid SSH logins while encryption is in progress, to avoid issues with open files that need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) PowerShell cmdlet or the [vm encryption show](/cli/azure/vm/encryption#az-vm-encryption-show) CLI command. This process can be expected to take a few hours for a 30 GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time is proportional to the size and quantity of the data volumes unless the encrypt format all option is used.
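For example, a quick status check with the CLI command referenced above might look like the following sketch; the resource group and VM names are placeholders.

```azurecli
# Sketch: check Azure Disk Encryption progress on a Linux VM.
# "MyVirtualMachineResourceGroup" and "MySecureVM" are placeholder names.
az vm encryption show --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM"
```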
+
+Disabling encryption on Linux VMs is only supported for data volumes. It is not supported on data or OS volumes if the OS volume has been encrypted.
+
+Azure Disk Encryption does not work for the following Linux scenarios, features, and technology:
+
+- Encrypting basic tier VMs or VMs created through the classic VM creation method.
+- Disabling encryption on an OS drive or data drive of a Linux VM when the OS drive is encrypted.
+- Encrypting the OS drive for Linux Virtual Machine Scale Sets.
+- Encrypting custom images on Linux VMs.
+- Integration with an on-premises key management system.
+- Azure Files (shared file system).
+- Network File System (NFS).
+- Dynamic volumes.
+- Ephemeral OS disks.
+- Encryption of shared/distributed file systems like (but not limited to): DFS, GFS, DRBD, and CephFS.
+- Moving an encrypted VM to another subscription or region.
+- Creating an image or snapshot of an encrypted VM and using it to deploy additional VMs.
+- Kernel Crash Dump (kdump).
+- Oracle ACFS (ASM Cluster File System).
+- NVMe disks such as those on [High performance computing VM sizes](../sizes-hpc.md) or [Storage optimized VM sizes](../sizes-storage.md).
+- A VM with "nested mount points"; that is, multiple mount points in a single path (such as "/1stmountpoint/data/2stmountpoint").
+- A VM with a data drive mounted on top of an OS folder.
+- A VM on which a root (OS disk) logical volume has been extended using a data disk.
+- M-series VMs with Write Accelerator disks.
+- Applying ADE to a VM that has disks encrypted with [Encryption at Host](../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) or [server-side encryption with customer-managed keys](../disk-encryption.md) (SSE + CMK). Applying SSE + CMK to a data disk or adding a data disk with SSE + CMK configured to a VM encrypted with ADE is an unsupported scenario as well.
+- Migrating a VM that is encrypted with ADE, or has **ever** been encrypted with ADE, to [Encryption at Host](../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) or [server-side encryption with customer-managed keys](../disk-encryption.md).
+- Encrypting VMs in failover clusters.
+- Encryption of [Azure ultra disks](../disks-enable-ultra-ssd.md).
+- Encryption of [Premium SSD v2 disks](../disks-types.md#premium-ssd-v2-limitations).
+- Encryption of VMs in subscriptions that have the [Secrets should have the specified maximum validity period](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F342e8053-e12e-4c44-be01-c3c2f318400f) policy enabled with the [DENY effect](../../governance/policy/concepts/effects.md).
## Install tools and connect to Azure
You can remove the encryption extension using Azure PowerShell or the Azure CLI.
```azurecli
az vm extension delete -g "MyVirtualMachineResourceGroup" --vm-name "MySecureVM" -n "AzureDiskEncryptionForLinux"
```
-## Unsupported scenarios
-
-Azure Disk Encryption does not work for the following Linux scenarios, features, and technology:
## Next steps

- [Azure Disk Encryption overview](disk-encryption-overview.md)
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Azure Disk Encryption is also not available on [Basic, A-series VMs](https://azu
| Virtual machine | Minimum memory requirement |
|--|--|
| Linux VMs when only encrypting data volumes | 2 GB |
-| Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is 4GB or less | 8 GB |
-| Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is greater than 4GB | The root file system usage * 2. For instance, a 16 GB of root file system usage requires at least 32GB of RAM |
+| Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is 4 GB or less | 8 GB |
+| Linux VMs when encrypting both data and OS volumes, and where the root (/) file system usage is greater than 4 GB | The root file system usage * 2. For instance, 16 GB of root file system usage requires at least 32 GB of RAM |
Once the OS disk encryption process is complete on Linux virtual machines, the VM can be configured to run with less memory.
-For more exceptions, see [Azure Disk Encryption: Unsupported scenarios](disk-encryption-linux.md#unsupported-scenarios).
+For more exceptions, see [Azure Disk Encryption: Restrictions](disk-encryption-linux.md#restrictions).
### Supported operating systems
Linux server distributions that are not endorsed by Azure do not support Azure D
| Oracle | Oracle Linux 8.5 Gen 2 | 8.5 | Oracle:Oracle-Linux:ol85-lvm-gen2:latest | OS and data disk (see note below) |
| RedHat | RHEL 9.2 | 9.2 | RedHat:RHEL:9_2:latest | OS and data disk (see note below) |
| RedHat | RHEL 9.2 Gen 2 | 9.2 | RedHat:RHEL:92-gen2:latest | OS and data disk (see note below) |
-| RedHat | RHEL 9.1 | 9.1 | RedHat:RHEL:9_1:latest | OS and data disk (see note below) |
-| RedHat | RHEL 9.1 Gen 2 | 9.1 | RedHat:RHEL:91-gen2:latest | OS and data disk (see note below) |
| RedHat | RHEL 9.0 | 9.0 | RedHat:RHEL:9_0:latest | OS and data disk (see note below) |
| RedHat | RHEL 9.0 Gen 2 | 9.0 | RedHat:RHEL:90-gen2:latest | OS and data disk (see note below) |
| RedHat | RHEL 9-lvm | 9-lvm | RedHat:RHEL:9-lvm:latest | OS and data disk (see note below) |
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
description: Learn how to create a Bicep file or ARM template JSON template to u
Previously updated : 09/18/2023 Last updated : 10/03/2023
The basic format is:
"properties": { "buildTimeoutInMinutes": <minutes>, "customize": [],
+ "errorHandling":[],
"distribute": [], "optimize": [], "source": {},
To override the commands, use the PowerShell or Shell script provisioners to cre
Image Builder reads these commands and writes them out to the AIB log, `customization.log`. See [troubleshooting](image-builder-troubleshoot.md#customization-log) for how to collect logs.
+## Properties: errorHandling
+
+The `errorHandling` property allows you to configure how errors are handled during image creation.
+
+# [JSON](#tab/json)
+
+```json
+{
+ "errorHandling": {
+ "onCustomizerError": "abort",
+ "onValidationError": "cleanup"
+ }
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+errorHandling: {
+ onCustomizerError: 'abort'
+ onValidationError: 'cleanup'
+}
+```
+`errorHandling` has two properties:
+
+- **onCustomizerError** - Specifies the action to take when an error occurs during the customizer phase of image creation.
+- **onValidationError** - Specifies the action to take when an error occurs during validation of the image template.
+
+Each of these properties accepts one of two possible values:
+
+- **cleanup** - Ensures that temporary resources created by Packer are cleaned up even if Packer or one of the customizations/validations encounters an error. This maintains backwards compatibility with existing behavior.
+- **abort** - If Packer encounters an error, the Azure Image Builder (AIB) service skips the cleanup of temporary resources. As the owner of the AIB template, you are responsible for cleaning up these resources from your subscription. These resources may contain useful information, such as logs and files left behind in a temporary VM, which can aid in investigating the error that Packer encountered.
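As a rough sketch of how a template containing the `errorHandling` block might be deployed, assuming the full image template JSON is saved locally as `imageTemplate.json` (a hypothetical file name), a standard ARM deployment works:

```azurecli
# Sketch: deploy an image template whose properties include the errorHandling block above.
# "MyImageBuilderRG" and "imageTemplate.json" are placeholder assumptions.
az deployment group create \
  --resource-group MyImageBuilderRG \
  --template-file imageTemplate.json
```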
## Properties: distribute

Azure Image Builder supports three distribution targets:
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview.md
You can learn the fundamentals of Azure Disk Encryption for Windows in just a fe
Windows VMs are available in a [range of sizes](../sizes-general.md). Azure Disk Encryption is supported on Generation 1 and Generation 2 VMs. Azure Disk Encryption is also available for VMs with premium storage.
-Azure Disk Encryption is not available on [Basic, A-series VMs](https://azure.microsoft.com/pricing/details/virtual-machines/series/), or on virtual machines with a less than 2 GB of memory. For more exceptions, see [Azure Disk Encryption: Unsupported scenarios](disk-encryption-windows.md#unsupported-scenarios).
+Azure Disk Encryption is not available on [Basic, A-series VMs](https://azure.microsoft.com/pricing/details/virtual-machines/series/), or on virtual machines with a less than 2 GB of memory. For more exceptions, see [Azure Disk Encryption: Restrictions](disk-encryption-windows.md#restrictions).
### Supported operating systems
Azure Disk Encryption requires an Azure Key Vault to control and manage disk enc
For details, see [Creating and configuring a key vault for Azure Disk Encryption](disk-encryption-key-vault.md).

## Terminology

The following table defines some of the common terms used in Azure disk encryption documentation:

| Terminology | Definition |
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-troubleshooting.md
When encrypting a VM fails with the error message "Failed to send DiskEncryption
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in: https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
-- Ensure you are not following any [unsupported scenario](disk-encryption-windows.md#unsupported-scenarios)
+- Ensure you are not violating any [restrictions](disk-encryption-windows.md#restrictions)
- Ensure you are meeting [network requirements](disk-encryption-overview.md#networking-requirements) and try again

## Troubleshooting Azure Disk Encryption behind a firewall
virtual-machines Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows.md
Azure Disk Encryption for Windows virtual machines (VMs) uses the BitLocker feat
Azure Disk Encryption is [integrated with Azure Key Vault](disk-encryption-key-vault.md) to help you control and manage the disk encryption keys and secrets. For an overview of the service, see [Azure Disk Encryption for Windows VMs](disk-encryption-overview.md).
+## Prerequisites
You can only apply disk encryption to virtual machines of [supported VM sizes and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems). You must also meet the following prerequisites:

- [Networking requirements](disk-encryption-overview.md#networking-requirements)
- [Group Policy requirements](disk-encryption-overview.md#group-policy-requirements)
- [Encryption key storage requirements](disk-encryption-overview.md#encryption-key-storage-requirements)
->[!IMPORTANT]
-> - If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details.
->
-> - You should [take a snapshot](snapshot-copy-managed-disk.md) and/or create a backup before disks are encrypted. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. Once a backup is made, you can use the [Set-AzVMDiskEncryptionExtension cmdlet](/powershell/module/az.compute/set-azvmdiskencryptionextension) to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see [Back up and restore encrypted Azure VM](../../backup/backup-azure-vms-encryption.md).
->
-> - Encrypting or disabling encryption may cause a VM to reboot.
+## Restrictions
+
+If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details.
+
+You should [take a snapshot](snapshot-copy-managed-disk.md) and/or create a backup before disks are encrypted. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. Once a backup is made, you can use the [Set-AzVMDiskEncryptionExtension cmdlet](/powershell/module/az.compute/set-azvmdiskencryptionextension) to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see [Back up and restore encrypted Azure VM](../../backup/backup-azure-vms-encryption.md).
+
+Encrypting or disabling encryption may cause a VM to reboot.
+
+Azure Disk Encryption does not work for the following scenarios, features, and technology:
+
+- Encrypting basic tier VMs or VMs created through the classic VM creation method.
+- Encrypting VMs configured with software-based RAID systems.
+- Encrypting VMs configured with Storage Spaces Direct (S2D), or Windows Server versions before 2016 configured with Windows Storage Spaces.
+- Integration with an on-premises key management system.
+- Azure Files (shared file system).
+- Network File System (NFS).
+- Dynamic volumes.
+- Windows Server containers, which create dynamic volumes for each container.
+- Ephemeral OS disks.
+- iSCSI disks.
+- Encryption of shared/distributed file systems like (but not limited to) DFS, GFS, DRBD, and CephFS.
+- Moving an encrypted VM to another subscription or region.
+- Creating an image or snapshot of an encrypted VM and using it to deploy additional VMs.
+- M-series VMs with Write Accelerator disks.
+- Applying ADE to a VM that has disks encrypted with [Encryption at Host](../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) or [server-side encryption with customer-managed keys](../disk-encryption.md) (SSE + CMK). Applying SSE + CMK to a data disk or adding a data disk with SSE + CMK configured to a VM encrypted with ADE is an unsupported scenario as well.
+- Migrating a VM that is encrypted with ADE, or has **ever** been encrypted with ADE, to [Encryption at Host](../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) or [server-side encryption with customer-managed keys](../disk-encryption.md).
+- Encrypting VMs in failover clusters.
+- Encryption of [Azure ultra disks](../disks-enable-ultra-ssd.md).
+- Encryption of [Premium SSD v2 disks](../disks-types.md#premium-ssd-v2-limitations).
+- Encryption of VMs in subscriptions that have the [Secrets should have the specified maximum validity period](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F342e8053-e12e-4c44-be01-c3c2f318400f) policy enabled with the [DENY effect](../../governance/policy/concepts/effects.md).
## Install tools and connect to Azure
NVMe disks will be uninitialized in the following scenarios:
In these scenarios, the NVMe disks need to be initialized after the VM starts. To enable encryption on the NVMe disks, run the command to enable Azure Disk Encryption again after the NVMe disks are initialized.
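As an illustration, re-running encryption from the CLI could look like the following sketch. All names are placeholders; reuse the key vault and volume type from the VM's original encryption.

```azurecli
# Sketch: re-run Azure Disk Encryption after the NVMe disks are initialized.
# "MyVirtualMachineResourceGroup", "MySecureVM", and "MyKeyVault" are placeholder names.
az vm encryption enable \
  --resource-group "MyVirtualMachineResourceGroup" \
  --name "MySecureVM" \
  --disk-encryption-keyvault "MyKeyVault" \
  --volume-type All
```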
-In addition to the scenarios listed in the [Unsupported Scenarios](#unsupported-scenarios) section, encryption of NVMe disks is not supported for:
+In addition to the scenarios listed in the [Restrictions](#restrictions) section, encryption of NVMe disks is not supported for:
- VMs encrypted with Azure Disk Encryption with Microsoft Entra ID (previous release)
- NVMe disks with storage spaces
You can remove the encryption extension using Azure PowerShell or the Azure CLI.
```azurecli
az vm extension delete -g "MyVirtualMachineResourceGroup" --vm-name "MySecureVM" -n "AzureDiskEncryption"
```
-## Unsupported scenarios
-
-Azure Disk Encryption does not work for the following scenarios, features, and technology:
## Next steps
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
Previously updated : 03/15/2023 Last updated : 10/27/2023
A network security group contains as many rules as desired, within Azure subscri
|Property |Explanation |
|---|---|
|Name|A unique name within the network security group. The name can be up to 80 characters long. It must begin with a word character, and it must end with a word character or with '_'. The name may contain word characters or '.', '-', '\_'.|
-|Priority | A number between 100 and 4096. Rules are processed in priority order, with lower numbers processed before higher numbers, because lower numbers have higher priority. Once traffic matches a rule, processing stops. As a result, any rules that exist with lower priorities (higher numbers) that have the same attributes as rules with higher priorities aren't processed.|
+|Priority | A number between 100 and 4096. Rules are processed in priority order, with lower numbers processed before higher numbers, because lower numbers have higher priority. Once traffic matches a rule, processing stops. As a result, any rules that exist with lower priorities (higher numbers) that have the same attributes as rules with higher priorities aren't processed. </br> **Azure default security rules are given the highest number with the lowest priority to ensure that custom rules are always processed first.** |
|Source or destination| Any, or an individual IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), service tag, or application security group. If you specify an address for an Azure resource, specify the private IP address assigned to the resource. Network security groups are processed after Azure translates a public IP address to a private IP address for inbound traffic, and before Azure translates a private IP address to a public IP address for outbound traffic. Fewer security rules are needed when you specify a range, a service tag, or application security group. The ability to specify multiple individual IP addresses and ranges (you can't specify multiple service tags or application groups) in a rule is referred to as [augmented security rules](#augmented-security-rules). Augmented security rules can only be created in network security groups created through the Resource Manager deployment model. You can't specify multiple IP addresses and IP address ranges in network security groups created through the classic deployment model.|
|Protocol | TCP, UDP, ICMP, ESP, AH, or Any. The ESP and AH protocols aren't currently available via the Azure portal but can be used via ARM templates. |
|Direction| Whether the rule applies to inbound or outbound traffic.|
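To make the priority behavior in the table concrete, here is a minimal sketch that creates one inbound rule at priority 100, the lowest (and therefore highest-priority) number allowed for a custom rule. The resource group, NSG, and rule names are placeholders.

```azurecli
# Sketch: create an inbound allow rule at priority 100 so it is evaluated
# before any other custom rule. All resource names are placeholders.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name Allow-HTTPS-Inbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443
```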
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md
Previously updated : 04/20/2023 Last updated : 10/27/2023
Virtual Network (VNet) service endpoints provide secure and direct connectivity to Azure services over an optimized route on the Azure backbone network. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Service endpoints enable private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.

>[!NOTE]
- > Microsoft recommends use of Azure Private Link for secure and private access to services hosted on Azure platform. For more information, see [Azure Private Link](../private-link/private-link-overview.md).
+ > Microsoft recommends use of Azure Private Link and private endpoints for secure and private access to services hosted on the Azure platform. Azure Private Link provisions a network interface into a virtual network of your choosing for Azure services such as Azure Storage or Azure SQL. For more information, see [Azure Private Link](../private-link/private-link-overview.md) and [What is a private endpoint?](../private-link/private-endpoint-overview.md).
Service endpoints are available for the following Azure services and regions. The *Microsoft.\** resource is in parentheses. Enable this resource from the subnet side while configuring service endpoints for your service:
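As a sketch of the subnet-side configuration described above, the following enables a storage service endpoint on an existing subnet; the resource group, virtual network, and subnet names are placeholders.

```azurecli
# Sketch: enable the Microsoft.Storage service endpoint on an existing subnet.
# "MyResourceGroup", "MyVNet", and "MySubnet" are placeholder names.
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name MySubnet \
  --service-endpoints Microsoft.Storage
```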
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Azure Standard SKU public IP resources must use a static allocation method. Ther
### Can I request a static public IP address for my VPN gateway?
-We recommend that you use a Standard SKU public IP address for your VPN gateway. Standard SKU public IP address resources use a static allocation method. While we do support dynamic IP address assignment for certain gateway SKUs (gateway SKUs that don't have an *AZ* in the name), we recommend that you use a Standard SKU public IP address going forward for all virtual network gateways.
+We recommend that you use a Standard SKU public IP address for your VPN gateway. Standard SKU public IP address resources use a static allocation method. While we do support dynamic IP address assignment for certain gateway SKUs (gateway SKUs that don't have an *AZ* in the name), we recommend that you use a Standard SKU public IP address going forward for all virtual network gateways except gateways using the Basic gateway SKU. The Basic gateway SKU currently supports only Basic SKU public IP addresses. We'll soon be adding support for Standard SKU public IP addresses for Basic gateway SKUs.
For non-zone-redundant and non-zonal gateways (gateway SKUs that do *not* have *AZ* in the name), dynamic IP address assignment is supported, but is being phased out. When you use a dynamic IP address, the IP address doesn't change after it has been assigned to your VPN gateway. The only time the VPN gateway IP address changes is when the gateway is deleted and then re-created. The VPN gateway public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
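As an illustration of the recommendation above, a Standard SKU public IP address (which always uses static allocation) could be created with a sketch like this; the names are placeholders.

```azurecli
# Sketch: create a Standard SKU public IP for a VPN gateway.
# Standard SKU public IPs are always statically allocated. Names are placeholders.
az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyGatewayPublicIP \
  --sku Standard \
  --allocation-method Static
```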