Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md | description: Learn how to enable custom domains in your redirect URLs for Azure - Copy the URL, change the domain name manually, and then paste it back to your browser. Azure Front Door passes the user's original IP address. It's the IP address that you'll see in the audit reporting or your custom policy. +> [!IMPORTANT] +> If the client sends an `x-forwarded-for` header to Azure Front Door, Azure AD B2C will use the originator's `x-forwarded-for` as the user's IP address for [Conditional Access Evaluation](./conditional-access-identity-protection-overview.md) and the `{Context:IPAddress}` [claims resolver](./claim-resolver-overview.md). + ### Can I use a third-party Web Application Firewall (WAF) with B2C? Yes, Azure AD B2C supports BYO-WAF (Bring Your Own Web Application Firewall). However, you must test the WAF to ensure that it doesn't block or alert on legitimate requests to Azure AD B2C user flows or custom policies. Learn how to configure [Akamai WAF](partner-akamai.md) and [Cloudflare WAF](partner-cloudflare.md) with Azure AD B2C. |
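For context, a hedged sketch of the behavior the note above describes: a client-supplied `x-forwarded-for` header sent through Azure Front Door is what Azure AD B2C reports as the user's IP address. The custom domain, tenant, and policy names below are hypothetical placeholders.

```bash
# Hypothetical authorization request through a custom domain fronted by Azure Front Door.
# If the client sets x-forwarded-for, Azure AD B2C uses that value as the user's IP address
# for Conditional Access evaluation and the {Context:IPAddress} claims resolver.
curl -v \
  -H "x-forwarded-for: 203.0.113.25" \
  "https://login.contoso.com/contoso.onmicrosoft.com/B2C_1_signupsignin/oauth2/v2.0/authorize?client_id=<app-id>&response_type=code&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&nonce=defaultNonce"
```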
active-directory-b2c | Javascript And Page Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md | zone_pivot_groups: b2c-policy-type [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] -With Azure Active Directory B2C (Azure AD B2C) [HTML templates](customize-ui-with-html.md), you can craft your users' identity experiences. Your HTML templates can contain only certain HTML tags and attributes. Basic HTML tags, such as <b>, <i>, <u>, <h1>, and <hr> are allowed. More advanced tags such as <script>, and <iframe> are removed for security reasons. +With Azure Active Directory B2C (Azure AD B2C) [HTML templates](customize-ui-with-html.md), you can craft your users' identity experiences. Your HTML templates can contain only certain HTML tags and attributes. Basic HTML tags, such as <b>, <i>, <u>, <h1>, and <hr> are allowed. More advanced tags, such as <script> and <iframe>, are removed for security reasons; to use JavaScript, reference it with a `<script>` tag added in the `<head>` tag instead. ++You can add the `<script>` tag in the `<head>` tag in one of two ways: ++1. Adding the `defer` attribute, which specifies that the script is downloaded in parallel with parsing the page and executed after the page has finished parsing: ++ ```html + <script src="my-script.js" defer></script> + ``` +++2. Adding the `async` attribute, which specifies that the script is downloaded in parallel with parsing the page and executed as soon as it's available (before parsing completes): ++ ```html + <script src="my-script.js" async></script> + ``` To enable JavaScript and advanced HTML tags and attributes: |
advisor | Advisor Reference Operational Excellence Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md | Title: Operational excellence recommendations description: Operational excellence recommendations ++ Previously updated : 02/02/2022 Last updated : 10/05/2023 # Operational excellence recommendations You can get these recommendations on the **Operational Excellence** tab of the Advisor dashboard. 1. On the **Advisor** dashboard, select the **Operational Excellence** tab. -## Azure Spring Apps +## AI + machine learning -### Update your outdated Azure Spring Apps SDK to the latest version +### Upgrade to the latest version of the Immersive Reader SDK -We have identified API calls from an outdated Azure Spring Apps SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities. +We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. The latest version of the Immersive Reader SDK provides you with updated security, performance, and an expanded set of features for customizing and enhancing your integration experience. -Learn more about the [Azure Spring Apps service](../spring-apps/index.yml). +Learn more about [Azure AI Immersive Reader](/azure/ai-services/immersive-reader/). -### Update Azure Spring Apps API Version +### Upgrade to the latest version of the Immersive Reader SDK -We have identified API calls from outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Azure Spring Apps API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements. +We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. The latest version of the Immersive Reader SDK provides you with updated security, performance, and an expanded set of features for customizing and enhancing your integration experience. -Learn more about the [Azure Spring Apps service](../spring-apps/index.yml). +Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](https://aka.ms/ImmersiveReaderAzureAdvisorSDKLearnMore). -## Automation -### Upgrade to Start/Stop VMs v2 -This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but it is designed to take advantage of newer technology in Azure. +## Analytics -Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs). +### Reduce the cache policy on your Data Explorer tables ++Reduce the table cache policy to match the usage patterns (query lookback period). ++Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy). +++++## Compute ++### Update your outdated Azure Spring Apps SDK to the latest version -We have identified API calls from an outdated Azure Spring Apps SDK.
We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities. ++Learn more about the [Azure Spring Apps service](../spring-apps/index.yml). ++### Update Azure Spring Apps API Version ++We have identified API calls from outdated Azure Spring Apps API for resources under this subscription. We recommend switching to the latest Azure Spring Apps API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version, which ensures you receive the latest features and performance improvements. ++Learn more about the [Azure Spring Apps service](../spring-apps/index.yml). ### New HCX version is available for upgrade -Your HCX version is not latest. New HCX version is available for upgrade. Updating a VMware HCX system installs the latest features, problem fixes, and security patches. +Your HCX version isn't the latest. A new HCX version is available for upgrade. Updating a VMware HCX system installs the latest features, problem fixes, and security patches. Learn more about [AVS Private cloud - HCXVersion (New HCX version is available for upgrade)](https://aka.ms/vmware/hcxdoc). -## Batch - ### Recreate your pool to get the latest node agent features and fixes Your pool has an old node agent. Consider recreating your pool to get the latest node agent updates and bug fixes. Learn more about [Batch account - OldPool (Recreate your pool to get the latest ### Delete and recreate your pool to remove a deprecated internal component -Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance. +Your pool is using a deprecated internal component. Delete and recreate your pool for improved stability and performance. Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](https://aka.ms/batch_deprecatedcomponent_learnmore). -### Upgrade to the latest API version to ensure your Batch account remains operational. +### Upgrade to the latest API version to ensure your Batch account remains operational In the past 14 days, you have invoked a Batch management or service API version that is scheduled for deprecation. Upgrade to the latest API version to ensure your Batch account remains operational. Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](https://aka.ms/batch_deprecatedapi_learnmore). -### Delete and recreate your pool using a VM size that will soon be retired +### Delete and recreate your pool using a different VM size -Your pool is using A8-A11 VMs, which are set to be retired in March 2021. Please delete your pool and recreate it with a different VM size. +Your pool is using A8-A11 VMs, which are set to be retired in March 2021. Delete your pool and recreate it with a different VM size. -Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your pool using a VM size that will soon be retired)](https://aka.ms/batch_a8_a11_retirement_learnmore). +Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your pool using a different VM size)](https://aka.ms/batch_a8_a11_retirement_learnmore). ### Recreate your pool with a new image -Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions.
A list of newer images is available via the ListSupportedImages API. +Your pool is using an image with an imminent expiration date. Recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API. Learn more about [Batch account - EolImage (Recreate your pool with a new image)](https://aka.ms/batch_expiring_image_learn_more).
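As a hedged illustration of that lookup, the images Batch still supports can be listed from the Azure CLI; the account and group names are placeholders, and the query shape assumes the ImageInformation schema the service returns.

```bash
# Log in to the Batch account, then list the VM images Batch still supports.
# Names are hypothetical; the query filters for verified (non-deprecated) images.
az batch account login --name mybatchaccount --resource-group my-rg
az batch pool supported-images list \
  --query "[?verificationType=='verified'].{offer:imageReference.offer, sku:imageReference.sku}" \
  --output table
```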
-## Cache for Redis +### Increase the number of compute resources you can deploy by 10 vCPU -### Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications +If quota limits are exceeded, new VM deployments are blocked until quota is increased. Increase your quota now to enable deployment of more resources. Learn More -Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. It's difficult to configure the network accurately and avoid affecting cache functionality. It's easy to break the cache accidentally while making configuration changes for other network resources. This is a common source of incidents affecting customer applications +Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits). -Learn more about [Redis Cache Server - PrivateLink (Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications)](https://aka.ms/VnetToPrivateLink). +### Add Azure Monitor to your virtual machine (VM) labeled as production -### TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses. +Azure Monitor for VMs monitors your Azure virtual machines (VM) and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider. -TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses. We highly recommend that you configure your cache to use TLS 1.2 only and your application should use TLS 1.2 or later. See https://aka.ms/TLSVersions for more information. +Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview). -Learn more about [Redis Cache Server - TLSVersion (TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.)](https://aka.ms/TLSVersions). -## Azure AI services +### Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers -### Upgrade to the latest version of the Immersive Reader SDK +Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.
Frequent DNS lookups and NTP sync can be viewed as malicious traffic and blocked by the DDOS service in the Azure environment -### Upgrade to the latest version of the Immersive Reader SDK +Learn more about [Virtual machine - GetVmlistFortigateNtpIssue (Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.)](https://docs.fortinet.com/document/fortigate/6.2.3/fortios-release-notes/236526/known-issues). -We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience. +### An Azure environment update has been rolled out that might affect your Checkpoint Firewall -Learn more about [Azure AI Immersive Reader](/azure/ai-services/immersive-reader/). +The image version of the Checkpoint firewall installed might have been affected by the recent Azure environment update. A kernel panic resulting in a reboot to factory defaults can occur in certain circumstances. -## Compute +Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that might affect your Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal). -### Increase the number of compute resources you can deploy by 10 vCPU +### The iControl REST interface has an unauthenticated remote command execution vulnerability -If quota limits are exceeded, new VM deployments will be blocked until quota is increased. Increase your quota now to enable deployment of more resources. Learn More +An unauthenticated remote command execution vulnerability allows for unauthenticated attackers with network access to the iControl REST interface, through the BIG-IP management interface and self IP addresses, to execute arbitrary system commands, create or delete files, and disable services. This vulnerability can only be exploited through the control plane and can't be exploited through the data plane. Exploitation can lead to complete system compromise. The BIG-IP system in Appliance mode is also vulnerable. -Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits). +### NVA Accelerated Networking enabled but potentially not working -Desired state for Accelerated Networking is set to 'true' for one or more interfaces on your VM, but actual state for accelerated networking isn't enabled. -Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview).
+Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](../virtual-network/create-vm-accelerated-networking-cli.md). -### Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers. -Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers. This can be viewed as malicious traffic and blocked by the DDOS service in the Azure environment +### Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled might disconnect during maintenance operation -We have identified that you are running a Network virtual Appliance (NVA) called Citrix Application Delivery Controller (ADC), and the NVA has accelerated networking enabled. The Virtual machine that this NVA is deployed on may experience connectivity issues during a platform maintenance operation. It is recommended that you follow the article provided by the vendor: https://aka.ms/Citrix_CTX331516 +We have identified that you're running a Network virtual Appliance (NVA) called Citrix Application Delivery Controller (ADC), and the NVA has accelerated networking enabled. The Virtual machine that this NVA is deployed on might experience connectivity issues during a platform maintenance operation. It is recommended that you follow the article provided by the vendor: https://aka.ms/Citrix_CTX331516 -Learn more about [Virtual machine - GetVmlistFortigateNtpIssue (Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.)](https://docs.fortinet.com/document/fortigate/6.2.3/fortios-release-notes/236526/known-issues). +Learn more about [Virtual machine - GetCitrixVFRevokeError (Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled might disconnect during maintenance operation)](https://aka.ms/Citrix_CTX331516). -### An Azure environment update has been rolled out that may affect your Checkpoint Firewall. +### Update your outdated Azure Spring Cloud SDK to the latest version -The image version of the Checkpoint firewall installed may have been affected by the recent Azure environment update. A kernel panic resulting in a reboot to factory defaults can occur in certain circumstances. +We have identified API calls from an outdated Azure Spring Cloud SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities. -Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that may affect your Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal). +Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Cloud SDK to the latest version)](/azure/spring-cloud). -### The iControl REST interface has an unauthenticated remote command execution vulnerability. +### Update Azure Spring Cloud API Version -This vulnerability allows for unauthenticated attackers with network access to the iControl REST interface, through the BIG-IP management interface and self IP addresses, to execute arbitrary system commands, create or delete files, and disable services. This vulnerability can only be exploited through the control plane and cannot be exploited through the data plane. Exploitation can lead to complete system compromise. The BIG-IP system in Appliance mode is also vulnerable +We have identified API calls from outdated Azure Spring Cloud API for resources under this subscription. We recommend switching to the latest Spring Cloud API version. You need to update your existing code to use the latest API version.
Also, you need to upgrade your Azure SDK and Azure CLI to the latest version, which ensures you receive the latest features and performance improvements. ++Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Cloud API Version)](/azure/spring-cloud). -Learn more about [Virtual machine - GetF5vulnK03009991 (The iControl REST interface has an unauthenticated remote command execution vulnerability.)](https://support.f5.com/csp/article/K03009991). -### NVA Accelerated Networking enabled but potentially not working. -Desired state for Accelerated Networking is set to 'true' for one or more interfaces on this VM, but actual state for accelerated networking is not enabled. -Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](../virtual-network/create-vm-accelerated-networking-cli.md). -### Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled may disconnect during maintenance operation +## Containers -We have identified that you are running a Network virtual Appliance (NVA) called Citrix Application Delivery Controller (ADC), and the NVA has accelerated networking enabled. The Virtual machine that this NVA is deployed on may experience connectivity issues during a platform maintenance operation. It is recommended that you follow the article provided by the vendor: https://aka.ms/Citrix_CTX331516 +### The api version you use for Microsoft.App is deprecated, use latest api version -Learn more about [Virtual machine - GetCitrixVFRevokeError (Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled may disconnect during maintenance operation)](https://aka.ms/Citrix_CTX331516). +The api version you use for Microsoft.App is deprecated, use latest api version -## Kubernetes +Learn more about [Microsoft App Container App - UseLatestApiVersion (The api version you use for Microsoft.App is deprecated, use latest api version)](https://aka.ms/containerappsapiversion). ### Update cluster's service principal -This cluster's service principal is expired and the cluster will not be healthy until the service principal is updated +This cluster's service principal is expired and the cluster isn't healthy until the service principal is updated Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's service principal)](../aks/update-credentials.md). Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn116IsFound (Depr ### Enable the Cluster Autoscaler -This cluster has not enabled AKS Cluster Autoscaler, and it will not adapt to changing load conditions unless you have other ways to autoscale your cluster +This cluster has not enabled AKS Cluster Autoscaler, and it can't adapt to changing load conditions unless you have other ways to autoscale your cluster Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](/azure/aks/cluster-autoscaler).
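A hedged sketch of enabling the cluster autoscaler on an existing cluster with the Azure CLI; the resource names and node-count bounds are hypothetical placeholders.

```bash
# Enable the AKS cluster autoscaler with explicit scaling bounds; names are hypothetical.
az aks update \
  --resource-group my-rg \
  --name my-aks-cluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```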
### The AKS node pool subnet is full -Some of the subnets for this cluster's node pools are full and cannot take any more worker nodes. Using the Azure CNI plugin requires to reserve IP addresses for each node and all the pods for the node at node provisioning time. If there is not enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster cannot be upgraded if the node subnet is full.
+Some of the subnets for this cluster's node pools are full and can't take any more worker nodes. Using the Azure CNI plugin requires reserving IP addresses for each node and all the pods for the node at node provisioning time. If there isn't enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster can't be upgraded if the node subnet is full. Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/create-node-pools.md#add-a-node-pool-with-a-unique-subnet). +### Expired ETCD cert ++The cluster's ETCD certificate has expired; update it. ++Learn more about [Kubernetes service - ExpiredETCDCertPre03012022 (Expired ETCD cert)](https://aka.ms/AKSUpdateCredentials). + ### Disable the Application Routing Addon This cluster has Pod Security Policies enabled, which are going to be deprecated in favor of Azure Policy for AKS Learn more about [Kubernetes service - UseAzurePolicyForKubernetes (Disable the ### Use Ephemeral OS disk -This cluster is not using ephemeral OS disks which can provide lower read/write latency, along with faster node scaling and cluster upgrades +This cluster isn't using ephemeral OS disks, which can provide lower read/write latency, along with faster node scaling and cluster upgrades Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](../aks/concepts-storage.md#ephemeral-os-disk). +### Outdated Azure Linux (Mariner) OS SKUs Found ++Found outdated Azure Linux (Mariner) OS SKUs. The 'CBL-Mariner' SKU isn't supported. The 'Mariner' SKU is equivalent to 'AzureLinux', but it's advisable to switch to the 'AzureLinux' SKU for future updates and support, as 'AzureLinux' is the Generally Available version. ++Learn more about [Kubernetes service - ClustersWithDeprecatedMarinerSKU (Outdated Azure Linux (Mariner) OS SKUs Found)](https://aka.ms/AzureLinuxOSSKU). + ### Free and Standard tiers for AKS control plane management -This cluster has not enabled the Standard tier which includes the Uptime SLA by default, and is limited to an SLO of 99.5%. +This cluster has not enabled the Standard tier that includes the Uptime SLA by default, and is limited to an SLO of 99.5%. Learn more about [Kubernetes service - Free and Standard Tier](../aks/free-standard-pricing-tiers.md). Deprecated Kubernetes API in 1.22 has been found. Avoid using deprecated APIs. Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn122IsFound (Deprecated Kubernetes API in 1.22 has been found)](https://aka.ms/aks-deprecated-k8s-api-1.22). -## MySQL +++## Databases ++### Azure SQL IaaS Agent must be installed in full mode ++Full mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. There is no cost associated with using the full manageability mode. System administrator permissions are required. Note that installing or upgrading to full mode is an online operation; no restart is required. ++Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent must be installed in full mode)](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management). ++### Install SQL best practices assessment on your SQL VM ++SQL best practices assessment provides a mechanism to evaluate the configuration of your Azure SQL VM for best practices like indexes, deprecated features, trace flag usage, statistics, etc. Assessment results are uploaded to your Log Analytics workspace using Azure Monitoring Agent (AMA).
++Learn more about [SQL virtual machine - SqlAssessmentAdvisorRec (Install SQL best practices assessment on your SQL VM)](/azure/azure-sql/virtual-machines/windows/sql-assessment-for-sql-vm). ++### Migrate Azure Cosmos DB attachments to Azure Blob Storage ++We noticed that your Azure Cosmos DB collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data. ++Learn more about [Azure Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage). ++### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup ++Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup might also be more cost-effective as a single copy of your data is retained. ++Learn more about [Azure Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md). ++### Enable partition merge to configure an optimal database partition layout ++Your account has collections that could benefit from enabling partition merge. Minimizing the number of partitions reduces rate limiting and resolves storage fragmentation problems. Containers are likely to benefit from this if the RU/s per physical partition is < 3000 RUs and storage is < 20 GB. ++Learn more about [Cosmos DB account - CosmosDBPartitionMerge (Enable partition merge to configure an optimal database partition layout)](/azure/cosmos-db/merge?tabs=azure-powershell). + ### Your Azure Database for MySQL - Flexible Server is vulnerable using weak, deprecated TLSv1 or TLSv1.1 protocols -To support modern security standards, MySQL community edition discontinued the support for communication over Transport Layer Security (TLS) 1.0 and 1.1 protocols. Microsoft will also stop supporting connection over TLSv1 and TLSv1.1 to Azure Database for MySQL - Flexible server soon to comply with the modern security standards. We recommend you upgrade your client driver to support TLSv1.2. +To support modern security standards, MySQL community edition discontinued the support for communication over Transport Layer Security (TLS) 1.0 and 1.1 protocols. Microsoft also stopped supporting connections over TLSv1 and TLSv1.1 to Azure Database for MySQL - Flexible server to comply with the modern security standards. We recommend you upgrade your client driver to support TLSv1.2. Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlTlsDeprecation (Your Azure Database for MySQL - Flexible Server is vulnerable using weak, deprecated TLSv1 or TLSv1.1 protocols)](https://aka.ms/encrypted_connection_deprecated_protocols). ### Optimize or partition tables in your database which has huge tablespace size -The maximum supported tablespace size in Azure Database for MySQL -Flexible server is 4TB. To effectively manage large tables, we recommend that you optimize the table or implement partitioning, which helps distribute the data across multiple files and prevent reaching the hard limit of 4TB in the tablespace.
-We have determined you enabled start VM on connect but didn't grant the Azure Virtual Desktop the rights to power manage VMs in your subscription. As a result your users connecting to host pools won't receive a remote desktop session. Review feature documentation for requirements. +Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerSingleTablespace4TBLimit2bf9 (Optimize or partition tables in your database which has huge tablespace size)](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/how-to-reclaim-storage-space-with-azure-database-for-mysql/ba-p/3615876). -Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start VM on connect)](https://aka.ms/AVDStartVMRequirement). +### Enable storage autogrow for MySQL Flexible Server -### No validation environment enabled +Storage auto-growth prevents a server from running out of storage and becoming read-only. -We have determined that you do not have a validation environment enabled in current subscription. When creating your host pools, you have selected "No" for "Validation environment" in the properties tab. Having at least one host pool with a validation environment enabled ensures the business continuity through Azure Virtual Desktop service deployments with early detection of potential issues. +Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerStorageAutogrow43b64 (Enable storage autogrow for MySQL Flexible Server)](/azure/mysql/flexible-server/concepts-service-tiers-storage#storage-auto-grow). -Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](../virtual-desktop/create-validation-host-pool.md). +### Apply resource delete lock -### Not enough production environments enabled +Lock your MySQL Flexible Server to protect it from accidental user deletions and modifications. -We have determined that too many of your host pools have Validation Environment enabled. In order for Validation Environments to best serve their purpose, you should have at least one, but never more than half of your host pools in Validation Environment. By having a healthy balance between your host pools with Validation Environment enabled and those with it disabled, you will best be able to utilize the benefits of the multistage deployments that Azure Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select "No" next to the "Validation Environment" setting. +Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerResourceLockbe19e (Apply resource delete lock)](/azure/azure-resource-manager/management/lock-resources). -Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](../virtual-desktop/create-host-pools-powershell.md). +### Add firewall rules for MySQL Flexible Server -## Azure Cosmos DB +Add firewall rules to protect your server from unauthorized access. -### Migrate Azure Cosmos DB attachments to Azure Blob Storage +Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerNoFirewallRule6e523 (Add firewall rules for MySQL Flexible Server)](/azure/mysql/flexible-server/how-to-manage-firewall-portal).
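A hedged Azure CLI sketch of the MySQL Flexible Server recommendations above, including the TLSv1.2 client upgrade; every server, group, rule, and address value is a hypothetical placeholder.

```bash
# All names and addresses are hypothetical placeholders.
# Connect with a client pinned to TLS 1.2 (the mysql client supports --tls-version).
mysql -h myserver.mysql.database.azure.com -u myadmin -p --tls-version=TLSv1.2

# Enable storage autogrow so the server doesn't become read-only when storage fills up.
az mysql flexible-server update --resource-group my-rg --name myserver --storage-auto-grow Enabled

# Add a firewall rule for a known client address range.
az mysql flexible-server firewall-rule create --resource-group my-rg --name myserver \
  --rule-name allow-app-range --start-ip-address 203.0.113.0 --end-ip-address 203.0.113.255

# Apply a delete lock so the server can't be removed accidentally.
az lock create --name mysql-delete-lock --lock-type CanNotDelete --resource-group my-rg \
  --resource-name myserver --resource-type Microsoft.DBforMySQL/flexibleServers
```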
-We noticed that your Azure Cosmos DB collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data. -Learn more about [Azure Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage). +### Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration, which is a common source of incidents affecting customer applications -### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup +Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. It's difficult to configure the network accurately and avoid affecting cache functionality. It's easy to break the cache accidentally while making configuration changes for other network resources, which is a common source of incidents affecting customer applications. -Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup may also be more cost-effective as a single copy of your data is retained. +Learn more about [Redis Cache Server - PrivateLink (Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications)](https://aka.ms/VnetToPrivateLink). -Learn more about [Azure Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md). +### Support for TLS versions 1.0 and 1.1 is retiring on September 30, 2024 ++Support for TLS 1.0/1.1 is retiring on September 30, 2024. Configure your cache to use TLS 1.2 only and your application to use TLS 1.2 or later. See https://aka.ms/TLSVersions for more information. ++Learn more about [Redis Cache Server - TLSVersion (Support for TLS versions 1.0 and 1.1 is retiring on September 30, 2024.)](https://aka.ms/TLSVersions). ++### TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses ++TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses. We highly recommend that you configure your cache to use TLS 1.2 only and your application to use TLS 1.2 or later. See https://aka.ms/TLSVersions for more information. ++Learn more about [Redis Cache Server - TLSVersion (TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.)](https://aka.ms/TLSVersions).
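A hedged sketch of enforcing that TLS recommendation on an existing cache from the Azure CLI; the cache and group names are placeholders, and the property path assumes the cache resource's `minimumTlsVersion` setting.

```bash
# Require TLS 1.2 or later on an existing cache; names are hypothetical.
az redis update --name my-cache --resource-group my-rg --set minimumTlsVersion="1.2"
```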
++### Cloud service caches are being retired in August 2024, migrate before then to avoid any problems ++This instance of Azure Cache for Redis has a dependency on Cloud Services (classic) which is being retired in August 2024. Follow the instructions found in the following link to migrate to an instance without this dependency. If you need to upgrade your cache to Redis 6, note that upgrading a cache with a dependency on cloud services isn't supported. You must migrate your cache instance to Virtual Machine Scale Set before upgrading. For more information, see the following link.
Note: If you have completed your migration away from Cloud Services, allow up to 24 hours for this recommendation to be removed. ++Learn more about [Redis Cache Server - MigrateFromCloudService (Cloud service caches are being retired in August 2024, migrate before then to avoid any problems)](/azure/azure-cache-for-redis/cache-faq#caches-with-a-dependency-on-cloud-services-%28classic%29). ++### Redis persistence allows you to persist data stored in a cache so you can reload data from an event that caused data loss. ++Redis persistence allows you to persist data stored in Redis. You can also take snapshots and back up the data. If there's a hardware failure, the persisted data is automatically loaded in your cache instance. Data loss is possible if a failure occurs where Cache nodes are down. ++Learn more about [Redis Cache Server - Persistence (Redis persistence allows you to persist data stored in a cache so you can reload data from an event that caused data loss.)](https://aka.ms/redis/persistence). ++### Using persistence with soft delete enabled can increase storage costs. ++Check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see the following link. ++Learn more about [Redis Cache Server - PersistenceSoftEnable (Using persistence with soft delete enabled can increase storage costs.)](https://aka.ms/redis/persistence). ++### You might benefit from using an Enterprise tier cache instance ++This instance of Azure Cache for Redis is using one or more advanced features from the list: more than 6 shards, geo-replication, zone-redundancy, or persistence. Consider switching to an Enterprise tier cache to get the most out of your Redis experience. Enterprise tier caches offer higher availability, better performance, and more powerful features like active geo-replication. ++Learn more about [Redis Cache Server - ConsiderUsingRedisEnterprise (You might benefit from using an Enterprise tier cache instance)](https://aka.ms/redisenterpriseupgrade). ++++++## Integration ++### Use Azure AD-based authentication for more fine-grained control and simplified management ++You can use Azure AD-based authentication, instead of gateway tokens, which allows you to use standard procedures to create, assign, and manage permissions and control expiry times. Additionally, you gain fine-grained control across gateway deployments and can easily revoke access in case of a breach. ++Learn more about [Api Management - ShgwUseAdAuth (Use Azure AD-based authentication for more fine-grained control and simplified management)](https://aka.ms/apim/shgw/how-to/use-ad-auth). ++### Validate JWT policy is being used with security keys that have insecure key size for validating Json Web Token (JWT). ++The validate-jwt policy is being used with security keys that have an insecure key size for validating JSON Web Tokens (JWT). We recommend using longer key sizes to improve security for JWT-based authentication & authorization. ++Learn more about [Api Management - validate-jwt-with-insecure-key-size (Validate JWT policy is being used with security keys that have insecure key size for validating Json Web Token (JWT).)](). ++### Use self-hosted gateway v2 ++We have identified one or more instances of your self-hosted gateway(s) that are using a deprecated version of the self-hosted gateway (v0.x and/or v1.x).
++Learn more about [Api Management - shgw-legacy-image-usage (Use self-hosted gateway v2)](https://aka.ms/apim/shgw/migration/v2). -## Monitor +### Use Configuration API v2 for self-hosted gateways ++We have identified one or more instances of your self-hosted gateway(s) that are using the deprecated Configuration API v1. ++Learn more about [Api Management - shgw-config-api-v1-usage (Use Configuration API v2 for self-hosted gateways)](https://aka.ms/apim/shgw/migration/v2). ++### Only allow tracing on subscriptions intended for debugging purposes. Sharing subscription keys with tracing allowed with unauthorized users could lead to disclosure of sensitive information contained in tracing logs such as keys, access tokens, passwords, internal hostnames, and IP addresses. ++Traces generated by Azure API Management service might contain sensitive information that is intended for the service owner and must not be exposed to clients using the service. Using tracing-enabled subscription keys in production or automated scenarios creates a risk of sensitive information exposure if a client calling the service requests a trace. ++Learn more about [Api Management - heavy-tracing-usage (Only allow tracing on subscriptions intended for debugging purposes. Sharing subscription keys with tracing allowed with unauthorized users could lead to disclosure of sensitive information contained in tracing logs such as keys, access tokens, passwords, internal hostnames, and IP addresses.)](/azure/api-management/api-management-howto-api-inspector). ++### Self-hosted gateway instances were identified that use gateway tokens that expire soon ++At least one deployed self-hosted gateway instance was identified that uses a gateway token that expires in the next seven days. To ensure that it can connect to the control plane, generate a new gateway token and update your deployed self-hosted gateways (this does not impact data-plane traffic). ++Learn more about [Api Management - ShgwGatewayTokenNearExpiry (Self-hosted gateway instance(s) were identified that use gateway tokens that expire soon)](). +++## Internet of Things ++### IoT Hub Fallback Route Disabled ++We have detected that the Fallback Route on your IoT Hub has been disabled. When the Fallback Route is disabled, messages stop flowing to the default endpoint. If you're no longer able to ingest telemetry downstream, consider re-enabling the Fallback Route. ++Learn more about [IoT hub - IoTHubFallbackDisabledAdvisor (IoT Hub Fallback Route Disabled)](/azure/iot-hub/iot-hub-devguide-messages-d2c#fallback-route).
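A minimal sketch of re-enabling the fallback route from the Azure CLI; the hub name is a placeholder, and the property path is an assumption based on the ARM routing schema (`properties.routing.fallbackRoute`) rather than a documented flag.

```bash
# Re-enable the fallback route to the default built-in endpoint; the hub name is
# hypothetical and the property path assumes the ARM routing schema.
az iot hub update --name my-iot-hub \
  --set properties.routing.fallbackRoute.isEnabled=true
```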
+++++## Management and governance ++### Upgrade to Start/Stop VMs v2 ++The new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but it is designed to take advantage of newer technology in Azure. ++Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs). ### Repair your log alert rule -We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid overtime due to changes in referenced resources, tables, or commands.
We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure. +We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries might become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure. Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](https://aka.ms/aa_logalerts_queryrepair). The alert rule was disabled by Azure Monitor as it was causing service issues. T Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](https://aka.ms/aa_logalerts_queryrepair). -## Key Vault +### Update Azure Managed Grafana SDK Version -### Create a backup of HSM +We have identified that an older SDK version has been used to manage or access your Grafana workspace. To get access to all the latest functionality, it is recommended that you switch to the latest SDK version. -Create a periodic HSM backup to prevent data loss and have ability to recover the HSM in case of a disaster. +Learn more about [Grafana Dashboard - UpdateAzureManagedGrafanaSDK (Update Azure Managed Grafana SDK Version)](https://aka.ms/GrafanaPortalLearnMore). -Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](../key-vault/managed-hsm/best-practices.md#create-backups). +### Switch to Azure Monitor based alerts for backup -## Data Explorer +Switch to Azure Monitor based alerts for backup to leverage various benefits, such as standardized, at-scale alert management experiences offered by Azure, the ability to route alerts to different notification channels of choice, and greater flexibility in alert configuration. ++Learn more about [Recovery Services vault - SwitchToAzureMonitorAlerts (Switch to Azure Monitor based alerts for backup)](https://aka.ms/AzMonAlertsBackup). -### Reduce the cache policy on your Data Explorer tables -Reduce the table cache policy to match the usage patterns (query lookback period) -Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy). ## Networking +### Resolve Certificate Update issue for your Application Gateway ++We have detected that one or more of your Application Gateways is unable to fetch the latest version of the certificate present in your Key Vault. If you intend to use a particular version of the certificate, ignore this message. ++Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdateErrors (Resolve Certificate Update issue for your Application Gateway)](). + ### Resolve Azure Key Vault issue for your Application Gateway -We've detected that one or more of your Application Gateways is unable to obtain a certificate due to misconfigured Key Vault.
You must fix this configuration immediately to avoid operational issues with your gateway. Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](https://aka.ms/agkverror). Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](https://aka.ms/aa_enableta_learnmore). -## SQL Virtual Machine --### SQL IaaS Agent should be installed in full mode --Full mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. There is no cost associated with using the full manageability mode. System administrator permissions are required. Note that installing or upgrading to full mode is an online operation, there is no restart required. --Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should be installed in full mode)](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management?tabs=azure-powershell). --## Storage --### Prevent hitting subscription limit for maximum storage accounts --A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Please evaluate the recommended action below to avoid hitting the limit. --Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit). --### Update to newer releases of the Storage Java v12 SDK for better reliability. --We noticed that one or more of your applications use an older version of the Azure Storage Java v12 SDK to write data to Azure Storage. Unfortunately, the version of the SDK being used has a critical issue that uploads incorrect data during retries (for example, in case of HTTP 500 errors), resulting in an invalid object being written. The issue is fixed in newer releases of the Java v12 SDK. --Learn more about [Storage Account - UpdateStorageJavaSDK (Update to newer releases of the Storage Java v12 SDK for better reliability.)](/azure/developer/java/sdk/?view=azure-java-stable&preserve-view=true). --## Subscription - ### Set up staging environments in Azure App Service -Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, no requests are dropped because of swap operations. +Deploy an app to a slot first and then swap it into production to ensure that all instances of the slot are warmed up before being swapped and eliminate downtime. The traffic redirection is seamless; no requests are dropped because of swap operations. Learn more about [Subscription - AzureApplicationService (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md). ### Enforce 'Add or replace a tag on resources' using Azure Policy -Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources.
This policy adds or replaces the specified tag and value when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. Does not modify tags on resource groups. +Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that adds or replaces the specified tag and value when any resource is created or updated. Existing resources can be remediated by triggering a remediation task, which does not modify tags on resource groups. Learn more about [Subscription - AddTagPolicy (Enforce 'Add or replace a tag on resources' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Allowed locations' using Azure Policy -Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements. +Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that enables you to restrict the locations your organization can specify when deploying resources. Use the policy to enforce your geo-compliance requirements. Learn more about [Subscription - AllowedLocationsPolicy (Enforce 'Allowed locations' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Audit VMs that do not use managed disks' using Azure Policy -Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy audits VMs that do not use managed disks. +Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that audits VMs that do not use managed disks. Learn more about [Subscription - AuditForManagedDisksPolicy (Enforce 'Audit VMs that do not use managed disks' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Allowed virtual machine SKUs' using Azure Policy -Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to specify a set of virtual machine SKUs that your organization can deploy. +Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that enables you to specify a set of virtual machine SKUs that your organization can deploy. Learn more about [Subscription - AllowedVirtualMachineSkuPolicy (Enforce 'Allowed virtual machine SKUs' using Azure Policy)](../governance/policy/overview.md). ### Enforce 'Inherit a tag from the resource group' using Azure Policy -Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. +Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. Learn more about [Subscription - InheritTagPolicy (Enforce 'Inherit a tag from the resource group' using Azure Policy)](../governance/policy/overview.md).
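A hedged sketch of assigning one of these built-in policies from the Azure CLI; the display name, tag name, and scope are placeholders, and the managed identity is needed only because 'modify' policies remediate existing resources through a remediation task.

```bash
# Values are hypothetical. Look up the built-in definition by display name, then assign it.
defId=$(az policy definition list \
  --query "[?displayName=='Inherit a tag from the resource group'].id" --output tsv)

# A system-assigned identity (with a location) lets the assignment run remediation tasks.
az policy assignment create \
  --name inherit-costcenter-tag \
  --policy "$defId" \
  --params '{"tagName": {"value": "CostCenter"}}' \
  --mi-system-assigned --location eastus \
  --scope "/subscriptions/<subscription-id>"
```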
+Azure Policy is a service in Azure that you use to create, assign, and manage policies that enforce different rules and effects over your resources. Enforce a policy that adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. Learn more about [Subscription - InheritTagPolicy (Enforce 'Inherit a tag from the resource group' using Azure Policy)](../governance/policy/overview.md). Using Azure Lighthouse improves security and reduces unnecessary access to your Learn more about [Subscription - OnboardCSPSubscriptionsToLighthouse (Use Azure Lighthouse to simply and securely manage customer subscriptions at scale)](../lighthouse/concepts/cloud-solution-provider.md). -## Web +### Subscription with more than 10 VNets must be managed using AVNM -### Set up staging environments in Azure App Service +Subscription with more than 10 VNets must be managed using AVNM. Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. -Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, no requests are dropped because of swap operations. +Learn more about [Subscription - ManageVNetsUsingAVNM (Subscription with more than 10 VNets must be managed using AVNM)](/azure/virtual-network-manager/). -Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md). +### VNet with more than 5 peerings must be managed using AVNM connectivity configuration -### Update Service Connector API Version +VNet with more than 5 peerings must be managed using AVNM connectivity configuration. Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. -We have identified API calls from outdated Service Connector API for resources under this subscription. We recommend switching to the latest Service Connector API version. You need to update your existing code or tools to use the latest API version. +Learn more about [Virtual network - ManagePeeringsUsingAVNM (VNet with more than 5 peerings must be managed using AVNM connectivity configuration)](). -Learn more about [App service - UpgradeServiceConnectorAPI (Update Service Connector API Version)](/azure/service-connector). +### Upgrade NSG flow logs to VNet flow logs -### Update Service Connector SDK to the latest version +Virtual Network flow log allows you to record IP traffic flowing in a virtual network. It provides several benefits over Network Security Group flow log like simplified enablement, enhanced coverage, accuracy, performance and observability of Virtual Network Manager rules and encryption status. -We have identified API calls from an outdated Service Connector SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities. +Learn more about [Resource - UpgradeNSGToVnetFlowLog (Upgrade NSG flow logs to VNet flow logs)](https://aka.ms/vnetflowlogspreviewdocs). -Learn more about [App service - UpgradeServiceConnectorSDK (Update Service Connector SDK to the latest version)](/azure/service-connector). 
-## Azure Center for SAP -### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP -Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP. -Learn more about [App Server Instance - VM_0001 (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533). +## SAP for Azure ++### Ensure the HANA DB VM type supports the HANA scenario in your SAP workload ++The correct VM type needs to be selected for the specific HANA scenario. The HANA scenarios can be 'OLAP', 'OLTP', 'OLAP: Scaleup' and 'OLTP: Scaleup'. See SAP note 1928533 for the correct VM type for your SAP workload. The correct VM type helps ensure better performance and support for your SAP systems. ++Learn more about [Database Instance - HanaDBSupport (Ensure the HANA DB VM type supports the HANA scenario in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533). ++### Ensure the Operating system in App VM is supported in combination with DB type in your SAP workload ++The operating system in the VMs in your SAP workload needs to be supported for the DB type selected. See SAP note 1928533 for the correct OS-DB combinations for the ASCS, Database and Application VMs to ensure better performance and support for your SAP systems. ++Learn more about [App Server Instance - AppOSDBSupport (Ensure the Operating system in App VM is supported in combination with DB type in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533). ++### Set the parameter net.ipv4.tcp_keepalive_time to '300' in the Application VM OS in SAP workloads ++In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_keepalive_time = 300 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads. ++Learn more about [App Server Instance - AppIPV4TCPKeepAlive (Set the parameter net.ipv4.tcp_keepalive_time to '300' in the Application VM OS in SAP workloads)](https://launchpad.support.sap.com/#/notes/1410736). ++### Ensure the Operating system in DB VM is supported for the DB type in your SAP workload ++The operating system in the VMs in your SAP workload needs to be supported for the DB type selected. See SAP note 1928533 for the correct OS-DB combinations for the ASCS, Database and Application VMs to ensure better performance and support for your SAP systems. ++Learn more about [Database Instance - DBOSDBSupport (Ensure the Operating system in DB VM is supported for the DB type in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533). ++### Set the parameter net.ipv4.tcp_retries2 to '15' in the Application VM OS in SAP workloads ++In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_retries2 = 15 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads. ++Learn more about [App Server Instance - AppIpv4Retries2 (Set the parameter net.ipv4.tcp_retries2 to '15' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning). ++### Set the parameter net.ipv4.tcp_keepalive_probes to '9' in the Application VM OS in SAP workloads ++In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_keepalive_probes = 9 to enable faster reconnection after an ASCS failover. 
This setting is recommended for all Application VM OS in SAP workloads. ++Learn more about [App Server Instance - AppIPV4Probes (Set the parameter net.ipv4.tcp_keepalive_probes to '9' in the Application VM OS in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide). ++### Set the parameter net.ipv4.tcp_tw_recycle to '0' in the Application VM OS in SAP workloads ++In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_tw_recycle = 0 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads. ++Learn more about [App Server Instance - AppIpv4Recycle (Set the parameter net.ipv4.tcp_tw_recycle to '0' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning). ++### Ensure the Operating system in ASCS VM is supported in combination with DB type in your SAP workload ++The operating system in the VMs in your SAP workload needs to be supported for the DB type selected. See SAP note 1928533 for the correct OS-DB combinations for the ASCS, Database and Application VMs. The correct OS-DB combinations help ensure better performance and support for your SAP systems. ++Learn more about [Central Server Instance - ASCSOSDBSupport (Ensure the Operating system in ASCS VM is supported in combination with DB type in your SAP workload)](https://launchpad.support.sap.com/#/notes/1928533). ++### Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP ++Azure Center for SAP solutions recommendation: All VMs in SAP system must be certified for SAP. ++Learn more about [App Server Instance - VM_0001 (Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533). ++### Set the parameter net.ipv4.tcp_retries1 to '3' in the Application VM OS in SAP workloads ++In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_retries1 = 3 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads. ++Learn more about [App Server Instance - AppIpv4Retries1 (Set the parameter net.ipv4.tcp_retries1 to '3' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning). ++### Set the parameter net.ipv4.tcp_tw_reuse to '0' in the Application VM OS in SAP workloads ++In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_tw_reuse = 0 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads. ++Learn more about [App Server Instance - AppIpv4TcpReuse (Set the parameter net.ipv4.tcp_tw_reuse to '0' in the Application VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019722#:~:text=To%20check%20for%20current%20values%20of%20certain%20TCP%20tuning). ++### Set the parameter net.ipv4.tcp_keepalive_intvl to '75' in the Application VM OS in SAP workloads ++In the Application VM OS, edit the /etc/sysctl.conf file and add net.ipv4.tcp_keepalive_intvl = 75 to enable faster reconnection after an ASCS failover. This setting is recommended for all Application VM OS in SAP workloads. 
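Taken together, the net.ipv4.* recommendations above reduce to a handful of /etc/sysctl.conf entries on the Application VM. As a small sketch (standard library only; the target values are copied from the recommendations above), a check of the running kernel against those values:

```python
# Minimal sketch: compare live kernel TCP settings on an SAP Application VM
# against the values recommended above. /proc/sys mirrors what sysctl reports.
from pathlib import Path

RECOMMENDED = {
    "net.ipv4.tcp_keepalive_time": "300",
    "net.ipv4.tcp_retries1": "3",
    "net.ipv4.tcp_retries2": "15",
    "net.ipv4.tcp_keepalive_probes": "9",
    "net.ipv4.tcp_keepalive_intvl": "75",
    "net.ipv4.tcp_tw_recycle": "0",  # removed entirely in newer kernels
    "net.ipv4.tcp_tw_reuse": "0",
}

for key, want in RECOMMENDED.items():
    path = Path("/proc/sys") / key.replace(".", "/")
    if not path.exists():
        print(f"{key}: not present on this kernel")
        continue
    have = path.read_text().strip()
    status = "OK" if have == want else f"set to {have}, recommended {want}"
    print(f"{key}: {status}")
```

After adding the lines to /etc/sysctl.conf, they can be applied without a reboot with `sysctl -p`.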
++Learn more about [App Server Instance - AppIPV4intvl (Set the parameter net.ipv4.tcp_keepalive_intvl to '75' in the Application VM OS in SAP workloads)](/azure/virtual-machines/workloads/sap/high-availability-guide). +++++### Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads ++Network latency between App VMs and DB VMs for SAP workloads is required to be 0.7 ms or less. If accelerated networking isn't enabled, network latency can increase beyond the threshold of 0.7 ms. ++Learn more about [Database Instance - NIC_0001_DB (Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads)](https://launchpad.support.sap.com/#/notes/1928533). ++### Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads ++Network latency between App VMs and DB VMs for SAP workloads is required to be 0.7 ms or less. If accelerated networking isn't enabled, network latency can increase beyond the threshold of 0.7 ms. ++Learn more about [App Server Instance - NIC_0001 (Ensure Accelerated Networking is enabled on all NICs for improved performance of SAP workloads)](https://launchpad.support.sap.com/#/notes/1928533). +++ ### Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces Azure Center for SAP solutions recommendation: Ensure Accelerated networking is Learn more about [Central Server Instance - NIC_0001_ASCS (Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces)](https://launchpad.support.sap.com/#/notes/1928533). -### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP +### Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP -Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP. +Azure Center for SAP solutions recommendation: All VMs in SAP system must be certified for SAP. -Learn more about [Central Server Instance - VM_0001_ASCS (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533). +Learn more about [Central Server Instance - VM_0001_ASCS (Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533). -### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP +### Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP -Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP. +Azure Center for SAP solutions recommendation: All VMs in SAP system must be certified for SAP. -Learn more about [Database Instance - VM_0001_DB (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533). +Learn more about [Database Instance - VM_0001_DB (Azure Center for SAP recommendation: All VMs in SAP system must be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533). ++### Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads ++fstrim scans the filesystem and sends 'UNMAP' commands for each unused block it finds; this is useful in a thin-provisioned system if the system is over-provisioned. Running SAP HANA on an over-provisioned storage array isn't recommended. 
Active fstrim can cause XFS metadata corruption. See SAP note 2205917. ++Learn more about [App Server Instance - GetFsTrimForApp (Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019447). ++### Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads ++fstrim scans the filesystem and sends 'UNMAP' commands for each unused block it finds; this is useful in a thin-provisioned system if the system is over-provisioned. Running SAP HANA on an over-provisioned storage array isn't recommended. Active fstrim can cause XFS metadata corruption. See SAP note 2205917. ++Learn more about [Central Server Instance - GetFsTrimForAscs (Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019447). ++### Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads ++fstrim scans the filesystem and sends 'UNMAP' commands for each unused block it finds; this is useful in a thin-provisioned system if the system is over-provisioned. Running SAP HANA on an over-provisioned storage array isn't recommended. Active fstrim can cause XFS metadata corruption. See SAP note 2205917. ++Learn more about [Database Instance - GetFsTrimForDb (Disable fstrim in SLES OS to avoid XFS metadata corruption in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000019447). ++### For better performance and support, ensure HANA data filesystem type is supported for HANA DB ++For different volumes of SAP HANA, where asynchronous I/O is used, SAP only supports filesystems validated as part of an SAP HANA appliance certification. Using an unsupported filesystem might lead to various operational issues, for example, hanging recovery and index server crashes. See SAP note 2972496. ++Learn more about [Database Instance - HanaDataFileSystemSupported (For better performance and support, ensure HANA data filesystem type is supported for HANA DB)](https://launchpad.support.sap.com/#/notes/2972496). ++### For better performance and support, ensure HANA shared filesystem type is supported for HANA DB ++For different volumes of SAP HANA, where asynchronous I/O is used, SAP only supports filesystems validated as part of an SAP HANA appliance certification. Using an unsupported filesystem might lead to various operational issues, for example, hanging recovery and index server crashes. See SAP note 2972496. ++Learn more about [Database Instance - HanaSharedFileSystem (For better performance and support, ensure HANA shared filesystem type is supported for HANA DB)](https://launchpad.support.sap.com/#/notes/2972496). +++### For better performance and support, ensure HANA log filesystem type is supported for HANA DB ++For different volumes of SAP HANA, where asynchronous I/O is used, SAP only supports filesystems validated as part of an SAP HANA appliance certification. Using an unsupported filesystem might lead to various operational issues, for example, hanging recovery and index server crashes. See SAP note 2972496. ++Learn more about [Database Instance - HanaLogFileSystemSupported (For better performance and support, ensure HANA log filesystem type is supported for HANA DB)](https://launchpad.support.sap.com/#/notes/2972496). ### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET -Azure Center for SAP recommendation: Ensure all NICs for a system should be attached to the same VNET. 
+Azure Center for SAP recommendation: Ensure that all NICs for a system are attached to the same VNET. Learn more about [App Server Instance - AllVmsHaveSameVnetApp (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.). -### Azure Center for SAP recommendation: Swap space on HANA systems should be 2GB +### Azure Center for SAP recommendation: Swap space on HANA systems must be 2 GB -Azure Center for SAP solutions recommendation: Swap space on HANA systems should be 2GB. +Azure Center for SAP solutions recommendation: Swap space on HANA systems must be 2 GB. -Learn more about [Database Instance - SwapSpaceForSap (Azure Center for SAP recommendation: Swap space on HANA systems should be 2GB)](https://launchpad.support.sap.com/#/notes/1999997). +Learn more about [Database Instance - SwapSpaceForSap (Azure Center for SAP recommendation: Swap space on HANA systems must be 2 GB)](https://launchpad.support.sap.com/#/notes/1999997). ### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET -Azure Center for SAP recommendation: Ensure all NICs for a system should be attached to the same VNET. +Azure Center for SAP recommendation: Ensure that all NICs for a system are attached to the same VNET. Learn more about [Central Server Instance - AllVmsHaveSameVnetAscs (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.). ### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET -Azure Center for SAP recommendation: Ensure all NICs for a system should be attached to the same VNET. +Azure Center for SAP recommendation: Ensure that all NICs for a system are attached to the same VNET. Learn more about [Database Instance - AllVmsHaveSameVnetDb (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.). Azure Center for SAP solutions recommendation: Ensure network configuration is Learn more about [Database Instance - NetworkConfigForSap (Azure Center for SAP recommendation: Ensure network configuration is optimized for HANA and OS)](https://launchpad.support.sap.com/#/notes/2382421). ++++## Storage ++### Create a backup of HSM ++Create a periodic HSM backup to prevent data loss and have the ability to recover the HSM in case of a disaster. ++Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](../key-vault/managed-hsm/best-practices.md#create-backups). ++### Application Volume Group SDK Recommendation ++The minimum API version for the Azure NetApp Files application volume group feature must be 2022-01-01. We recommend using 2022-03-01 when possible to fully leverage the API. ++Learn more about [Volume - Application Volume Group SDK version recommendation (Application Volume Group SDK Recommendation)](https://aka.ms/anf-sdkversion). ++### Availability Zone Volumes SDK Recommendation ++The minimum SDK version of 2022-05-01 is recommended for the Azure NetApp Files Availability zone volume placement feature, to enable deployment of new Azure NetApp Files volumes in the Azure availability zone (AZ) that you specify. 
++Learn more about [Volume - Azure NetApp Files AZ Volume SDK version recommendation (Availability Zone Volumes SDK Recommendation)](https://aka.ms/anf-sdkversion). ++### Cross Zone Replication SDK recommendation ++The minimum SDK version of 2022-05-01 is recommended for the Azure NetApp Files Cross Zone Replication feature, to enable you to replicate volumes across availability zones within the same region. ++Learn more about [Volume - Azure NetApp Files Cross Zone Replication SDK recommendation (Cross Zone Replication SDK recommendation)](https://aka.ms/anf-sdkversion). ++### Volume Encryption using Customer Managed Keys with Azure Key Vault SDK Recommendation ++The minimum API version for the Azure NetApp Files Customer Managed Keys with Azure Key Vault feature is 2022-05-01. ++Learn more about [Volume - CMK with AKV SDK Recommendation (Volume Encryption using Customer Managed Keys with Azure Key Vault SDK Recommendation)](). ++### Cool Access SDK Recommendation ++The minimum SDK version of 2022-03-01 is recommended for the Standard service level with cool access feature, to enable moving inactive data to an Azure storage account (the cool tier) and free up storage that resides within Azure NetApp Files volumes, resulting in overall cost savings. ++Learn more about [Capacity Pool - Azure NetApp Files Cool Access SDK version recommendation (Cool Access SDK Recommendation)](https://aka.ms/anf-sdkversion). ++### Large Volumes SDK Recommendation ++The minimum SDK version of 2022-xx-xx is recommended for automation of large volume creation, resizing, and deletion. ++Learn more about [Volume - Large Volumes SDK Recommendation (Large Volumes SDK Recommendation)](/azure/azure-netapp-files/azure-netapp-files-resource-limits). ++### Prevent hitting subscription limit for maximum storage accounts ++A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you're unable to create any more storage accounts in that subscription/region combination. Evaluate the recommended action below to avoid hitting the limit. ++Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit). ++### Update to newer releases of the Storage Java v12 SDK for better reliability. ++We noticed that one or more of your applications use an older version of the Azure Storage Java v12 SDK to write data to Azure Storage. Unfortunately, the version of the SDK being used has a critical issue that uploads incorrect data during retries (for example, in case of HTTP 500 errors), resulting in an invalid object being written. The issue is fixed in newer releases of the Java v12 SDK. ++Learn more about [Storage Account - UpdateStorageJavaSDK (Update to newer releases of the Storage Java v12 SDK for better reliability.)](/azure/developer/java/sdk/?view=azure-java-stable&preserve-view=true). ++++++## Virtual desktop infrastructure ++### Permissions missing for start VM on connect ++We have determined you enabled start VM on connect but didn't grant Azure Virtual Desktop the rights to power manage VMs in your subscription. As a result, your users connecting to host pools won't receive a remote desktop session. Review the feature documentation for requirements. ++Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start VM on connect)](https://aka.ms/AVDStartVMRequirement). 
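For the storage account limit recommendation above, a quick way to see how close a subscription is to the 250-accounts-per-region ceiling is to group existing accounts by location. A sketch with the azure-mgmt-storage Python SDK (the subscription ID is a placeholder):

```python
# Count storage accounts per region in one subscription and compare against
# the default 250-per-region limit discussed above.
from collections import Counter

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

REGION_LIMIT = 250

client = StorageManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"  # placeholder
)
per_region = Counter(account.location for account in client.storage_accounts.list())

for region, count in per_region.most_common():
    print(f"{region}: {count}/{REGION_LIMIT} storage accounts")
```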
++### No validation environment enabled ++We have determined that you do not have a validation environment enabled in the current subscription. When creating your host pools, you have selected "No" for "Validation environment" in the properties tab. Having at least one host pool with a validation environment enabled helps ensure business continuity through Azure Virtual Desktop service deployments, with early detection of potential issues. ++Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](../virtual-desktop/create-validation-host-pool.md). ++### Not enough production environments enabled ++We have determined that too many of your host pools have Validation Environment enabled. In order for Validation Environments to best serve their purpose, you must have at least one, but never more than half, of your host pools in Validation Environment. By having a healthy balance between your host pools with Validation Environment enabled and those with it disabled, you're best able to utilize the benefits of the multistage deployments that Azure Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select "No" next to the "Validation Environment" setting. ++Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](../virtual-desktop/create-host-pools-powershell.md). +++++## Web ++### Set up staging environments in Azure App Service ++Deploy an app to a slot first and then swap it into production to ensure that all instances of the slot are warmed up before the swap, eliminating downtime. The traffic redirection is seamless; no requests are dropped because of swap operations. ++Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md). ++### Update Service Connector API Version ++We have identified API calls from an outdated Service Connector API for resources under this subscription. We recommend switching to the latest Service Connector API version. You need to update your existing code or tools to use the latest API version. ++Learn more about [App service - UpgradeServiceConnectorAPI (Update Service Connector API Version)](/azure/service-connector). ++### Update Service Connector SDK to the latest version ++We have identified API calls from an outdated Service Connector SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities. ++Learn more about [App service - UpgradeServiceConnectorSDK (Update Service Connector SDK to the latest version)](/azure/service-connector). ++++++ ## Next steps Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview) |
advisor | Advisor Reference Performance Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md | Title: Performance recommendations description: Full list of available performance recommendations in Advisor. Previously updated : 02/03/2022++ Last updated : 10/15/2023 # Performance recommendations The performance recommendations in Azure Advisor can help improve the speed and 1. On the **Advisor** dashboard, select the **Performance** tab. -## Attestation +## AI + machine learning -### Update Attestation API Version +### 429 Throttling Detected on this resource -We have identified API calls from outdated Attestation API for resources under this subscription. We recommend switching to the latest Attestation API versions. You need to update your existing code to use the latest API version. This ensures you receive the latest features and performance improvements. +We observed that there have been 1,000 or more 429 throttling errors on this resource in a one-day timeframe. Consider enabling autoscale to better handle higher call volumes and reduce the number of 429 errors. -Learn more about [Attestation provider - UpgradeAttestationAPI (Update Attestation API Version)](/rest/api/attestation). +Learn more about [Azure AI services autoscale](/azure/ai-services/autoscale?tabs=portal). -## Azure VMware Solution +### Text Analytics Model Version Deprecation -### vSAN capacity utilization has crossed critical threshold +Upgrade the model version to a newer model version or the latest to utilize the latest and highest quality models. -Your vSAN capacity utilization has reached 75%. The cluster utilization is required to remain below the 75% critical threshold for SLA compliance. Add new nodes to VSphere cluster to increase capacity or delete VMs to reduce consumption or adjust VM workloads +Learn more about [Cognitive Service - TAUpgradeToLatestModelVersion (Text Analytics Model Version Deprecation)](https://aka.ms/language-model-lifecycle). -Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](../azure-vmware/concepts-private-clouds-clusters.md). -## Azure Cache for Redis +### Text Analytics Model Version Deprecation -### Improve your Cache and application performance when running with high network bandwidth +Upgrade the model version to a newer model version or the latest to utilize the latest and highest quality models. -Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity. +Learn more about [Cognitive Service - TAUpgradeModelVersiontoLatest (Text Analytics Model Version Deprecation)](https://aka.ms/language-model-lifecycle). ### Upgrade to the latest Cognitive Service Text Analytics API version -Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth). +Upgrade to the latest API version to get the best results in terms of model quality, performance, and service availability. Also, there are new features available as new endpoints starting from V3.0, such as personal data recognition, entity recognition, and entity linking, available as separate endpoints. 
In the preview endpoints, Opinion Mining is available in the Sentiment Analysis endpoint, and a redacted text property is available in the personal data endpoint. -### Improve your Cache and application performance when running with many connected clients +Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest Cognitive Service Text Analytics API version)](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api). -Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity. +### Upgrade to the latest API version of Azure Cognitive Service for Language -Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections). +Upgrade to the latest API version to get the best results in terms of model quality, performance, and service availability. -### Improve your Cache and application performance when running with high server load +Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](https://aka.ms/language-api). -Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity. +### Upgrade to the latest Cognitive Service Text Analytics SDK version -Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu). +Upgrade to the latest SDK version to get the best results in terms of model quality, performance, and service availability. Also, there are new features available as new endpoints starting from V3.0, such as personal data recognition, entity recognition, and entity linking, available as separate endpoints. In the preview endpoints, Opinion Mining is available in the Sentiment Analysis endpoint, and a redacted text property is available in the personal data endpoint. -### Improve your Cache and application performance when running with high memory pressure +Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest Cognitive Service Text Analytics SDK version)](/azure/cognitive-services/text-analytics/quickstarts/text-analytics-sdk?tabs=version-3-1&pivots=programming-language-csharp). -Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity. +### Upgrade to the latest Cognitive Service Language SDK version -Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory). +Upgrade to the latest SDK version to get the best results in terms of model quality, performance, and service availability. -### Improve your Cache and application performance when memory rss usage is high. 
+Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](https://aka.ms/language-api). -Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity. +### Upgrade to the latest Azure AI Language SDK version -Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSS (Improve your Cache and application performance when memory rss usage is high.)](https://aka.ms/redis/recommendations/memory). +Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as personal data recognition, entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints, we have Opinion Mining in SA endpoint, redacted text property in personal data endpoint. -### Improve your Cache and application performance when memory rss usage is high. +Learn more about [Azure AI Language](/azure/ai-services/language-service/language-detection/overview). -Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity. -Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSSHigh (Improve your Cache and application performance when memory rss usage is high.)](https://aka.ms/redis/recommendations/memory). -### Improve your Cache and application performance when running with high network bandwidth -Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity. +## Analytics -Learn more about [Redis Cache Server - RedisCacheNetworkBandwidthHigh (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth). +### Right-size Data Explorer resources for optimal performance. -### Improve your Cache and application performance when running with high memory pressure +This recommendation surfaces all Data Explorer resources that exceed the recommended data capacity (80%). The recommended action to improve the performance is to scale to the recommended configuration shown. -Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity. +Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance). -Learn more about [Redis Cache Server - RedisCacheUsedMemoryHigh (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory). 
+### Review table cache policies for Data Explorer tables -### Improve your Cache and application performance when running with many connected clients +This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy) - you see the top 10 tables by query percentage that access out-of-cache data. The recommended action to improve the performance: Limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value. -Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity. +Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy). -Learn more about [Redis Cache Server - RedisCacheConnectedClientsHigh (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections). +### Reduce Data Explorer table cache policy for better performance -### Improve your Cache and application performance when running with high server load +Reducing the table cache policy frees up unused data from the resource's cache and improves performance. -Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity. +Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy). -Learn more about [Redis Cache Server - RedisCacheServerLoadHigh (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu). +### Increase the cache in the cache policy -### Cache instances perform best when the host machines where client application runs is able to keep up with responses from the cache. +Based on your actual usage during the last month, update the cache policy to increase the hot cache for the table. The retention period must always be larger than the cache period. If you increase the cache and the retention period is lower than the cache period, update the retention policy. The analysis is based only on user queries that scanned data. -Cache instances perform best when the host machines where client application runs is able to keep up with responses from the cache. If client host machine is running hot on memory, CPU or network bandwidth, the cache responses will not reach your application fast enough and could result in higher latency. +Learn more about [Data explorer resource - IncreaseCacheForAzureDataExplorerTablesToImprovePerformance (Increase the cache in the cache policy)](https://aka.ms/adxcachepolicy). -Learn more about [Redis Cache Server - UnresponsiveClient (Cache instances perform best when the host machines where client application runs is able to keep up with responses from the cache.)](/azure/azure-cache-for-redis/cache-troubleshoot-client). 
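The Data Explorer cache-policy recommendations above ('Review table cache policies' and 'Increase the cache in the cache policy') come down to adjusting a table's caching policy with a management command. A hedged sketch with the azure-kusto-data Python client (the cluster URI, database, table, and the seven-day hot window are placeholder assumptions; use the value Advisor recommends for your table):

```python
# Minimal sketch: inspect and then adjust a Data Explorer table's hot-cache
# window. Cluster URI, database, and table names are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net"
)
client = KustoClient(kcsb)

# Show the current caching policy for the table.
current = client.execute_mgmt("MyDatabase", ".show table MyTable policy caching")
print(current.primary_results[0])

# Set the hot cache to seven days (substitute the recommended value).
client.execute_mgmt("MyDatabase", ".alter table MyTable policy caching hot = 7d")
```

Remember the constraint called out above: the retention period must stay larger than the cache period, so raise retention first if needed.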
+### Enable Optimized Autoscale for Data Explorer resources -## CDN +Looks like your resource could have automatically scaled to improve performance (based on your actual usage during the last week, cache utilization, ingestion utilization, CPU, and streaming ingests utilization). To optimize costs and performance, we recommend enabling Optimized Autoscale. -### Upgrade SDK version recommendation +Learn more about [Data explorer resource - PerformanceEnableOptimizedAutoscaleAzureDataExplorer (Enable Optimized Autoscale for Data Explorer resources)](https://aka.ms/adxoptimizedautoscale). -The latest version of Azure Front Door Standard and Premium Client Library or SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Front Door Standard and Premium. +### Reads happen on most recent data -Learn more about [Front Door Profile - UpgradeCDNToLatestSDKLanguage (Upgrade SDK version recommendation)](https://aka.ms/afd/tiercomparison). +More than 75% of your read requests are landing on the memstore, indicating that the reads are primarily on recent data. Recent data reads suggest that even if a flush happens on the memstore, the recent file needs to be accessed and put in the cache. -## Azure AI services +Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](../hdinsight/hbase/apache-hbase-advisor.md). -### 429 Throttling Detected on this resource +### Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance. -We observed that there have been 1,000 or more 429 throttling errors on this resource in a one day timeframe. Consider enabling autoscale to better handle higher call volumes and reduce the number of 429 errors. +You're seeing this advisor recommendation because HDInsight team's system log shows that in the past seven days, your cluster has encountered the following scenarios: -Learn more about [Azure AI services autoscale](/azure/ai-services/autoscale?tabs=portal). +1. High WAL sync time latency -### Upgrade to the latest Azure AI Language SDK version +2. High write request count (at least 3 one hour windows of over 1000 avg_write_requests/second/node) -Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as personally identifiable information recognition, Entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints we have Opinion Mining in SA endpoint, redacted text property in personally identifiable information endpoint. +These conditions are indicators that your cluster is suffering from high write latencies, which can be due to heavy workload on your cluster. -Learn more about [Azure AI Language](/azure/ai-services/language-service/language-detection/overview). +To improve the performance of your cluster, consider utilizing the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD-managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, it provides low write-latency and better resiliency for your applications. 
-## Communication services -To read more on this feature, visit link: -### Use recommended version of Chat SDK +Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](../hdinsight/hbase/apache-hbase-accelerated-writes.md). -Azure Communication Services Chat SDK can be used to add rich, real-time chat to your applications. Update to the recommended version of Chat SDK to ensure the latest fixes and features. +### More than 75% of your queries are full scan queries -Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](../communication-services/concepts/chat/sdk-features.md). +More than 75% of the scan queries on your cluster are doing a full region/table scan. Modify your scan queries to avoid full region or table scans. -### Use recommended version of Resource Manager SDK +Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](../hdinsight/hbase/apache-hbase-advisor.md). -Resource Manager SDK can be used to provision and manage Azure Communication Services resources. Update to the recommended version of Resource Manager SDK to ensure the latest fixes and features. +### Check your region counts as you have blocking updates -Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](../communication-services/quickstarts/create-communication-resource.md?pivots=platform-net&tabs=windows). +Region counts need to be adjusted to avoid updates getting blocked. It might require scaling up the cluster by adding new nodes. -### Use recommended version of Identity SDK +Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](../hdinsight/hbase/apache-hbase-advisor.md). -Azure Communication Services Identity SDK can be used to manage identities, users, and access tokens. Update to the recommended version of Identity SDK to ensure the latest fixes and features. +### Consider increasing the flusher threads -Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](../communication-services/concepts/sdk-options.md). +The flush queue size in your region servers is more than 100 or there are updates getting blocked frequently. Tuning of the flush handler is recommended. -### Use recommended version of SMS SDK +Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](../hdinsight/hbase/apache-hbase-advisor.md). -Azure Communication Services SMS SDK can be used to send and receive SMS messages. Update to the recommended version of SMS SDK to ensure the latest fixes and features. +### Consider increasing your compaction threads for compactions to complete faster -Learn more about [Communication service - UpgradeSmsSdk (Use recommended version of SMS SDK)](/azure/communication-services/concepts/telephony-sms/sdk-features). +The compaction queue in your region servers is more than 2000, suggesting that more data requires compaction. Slower compactions can affect read performance because more files must be read. More files without compaction can also affect the heap usage related to how files interact with the Azure file system. 
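For the full-scan recommendation above, the usual remedy is to bound every scan with a start and stop row so it touches one region rather than the whole table. A hedged sketch using the happybase Thrift client (the host, table name, and date-prefixed row-key layout are assumptions about your cluster):

```python
# Minimal sketch: a bounded HBase scan instead of a full table scan.
# Assumes a reachable Thrift endpoint and row keys prefixed with a date.
import happybase

connection = happybase.Connection("my-hbase-thrift-host")  # placeholder host
table = connection.table("events")  # placeholder table

# Only rows for a single day are read; a scan with no bounds would sweep
# every region in the table.
for key, data in table.scan(row_start=b"2023-10-05", row_stop=b"2023-10-06"):
    print(key)
```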
-### Use recommended version of Phone Numbers SDK +Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor). -Azure Communication Services Phone Numbers SDK can be used to acquire and manage phone numbers. Update to the recommended version of Phone Numbers SDK to ensure the latest fixes and features. +### Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows -Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](../communication-services/concepts/sdk-options.md). +Clustered columnstore tables organize data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. You can measure segment quality by the number of rows in a compressed row group. -### Use recommended version of Calling SDK +Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance). -Azure Communication Services Calling SDK can be used to enable voice, video, screen-sharing, and other real-time communication. Update to the recommended version of Calling SDK to ensure the latest fixes and features. +### Update SynapseManagementClient SDK Version -Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](../communication-services/concepts/voice-video-calling/calling-sdk-features.md). +The new SynapseManagementClient uses .NET SDK 4.0 or above. -### Use recommended version of Call Automation SDK +Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK). -Azure Communication Services Call Automation SDK can be used to make and manage calls, play audio, and configure recording. Update to the recommended version of Call Automation SDK to ensure the latest fixes and features. -Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](../communication-services/concepts/voice-video-calling/call-automation-apis.md). -### Use recommended version of Network Traversal SDK +## Compute -Azure Communication Services Network Traversal SDK can be used to access TURN servers for low-level data transport. Update to the recommended version of Network Traversal SDK to ensure the latest fixes and features. +### vSAN capacity utilization has crossed critical threshold -Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](../communication-services/concepts/sdk-options.md). +Your vSAN capacity utilization has reached 75%. The cluster utilization is required to remain below the 75% critical threshold for SLA compliance. Add new nodes to the vSphere cluster to increase capacity, delete VMs to reduce consumption, or adjust VM workloads. -## Compute +Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](../azure-vmware/concepts-private-clouds-clusters.md). ### Update Automanage to the latest API Version -We have identified sdk calls from outdated API for resources under this subscription. We recommend switching to the latest sdk versions. This ensures you receive the latest features and performance improvements. 
+We have identified SDK calls from an outdated API for resources under this subscription. We recommend switching to the latest SDK versions to ensure you receive the latest features and performance improvements. Learn more about [Virtual machine - UpdateToLatestApi (Update Automanage to the latest API Version)](/azure/automanage/reference-sdk). ### Improve user experience and connectivity by deploying VMs closer to user's location. -We have determined that your VMs are located in a region different or far from where your users are connecting from, using Azure Virtual Desktop. This may lead to prolonged connection response times and will impact overall user experience on Azure Virtual Desktop. +We have determined that your VMs are located in a region different from, or far from, where your users connect with Azure Virtual Desktop. Distant user regions might lead to prolonged connection response times and affect the overall user experience. Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md). Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disk ### Convert Managed Disks from Standard HDD to Premium SSD for performance -We have noticed your Standard HDD disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which will take three to five minutes. +We have noticed your Standard HDD disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to a Premium SSD disk. Upgrading requires a VM reboot, which takes three to five minutes. Learn more about [Disk - MDHDDtoPremiumForPerformance (Convert Managed Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd). ### Enable Accelerated Networking to improve network performance and latency -We have detected that Accelerated Networking is not enabled on VM resources in your existing deployment that may be capable of supporting this feature. If your VM OS image supports Accelerated Networking as detailed in the documentation, make sure to enable this free feature on these VMs to maximize the performance and latency of your networking workloads in cloud +We have detected that Accelerated Networking isn't enabled on VM resources in your existing deployment that might be capable of supporting this feature. If your VM OS image supports Accelerated Networking as detailed in the documentation, make sure to enable this free feature on these VMs to maximize the performance and latency of your networking workloads in the cloud. Learn more about [Virtual machine - AccelNetConfiguration (Enable Accelerated Networking to improve network performance and latency)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms). ### Use SSD Disks for your production workloads -We noticed that you are using SSD disks while also using Standard HDD disks on the same VM. Standard HDD managed disks are generally recommended for dev-test and backup; we recommend you use Premium SSDs or Standard SSDs for production. 
Premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Standard SSDs provide consistent and lower latency. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which will take three to five minutes. +We noticed that you're using SSD disks while also using Standard HDD disks on the same VM. Standard HDD managed disks are recommended for dev-test and backup; we recommend you use Premium SSDs or Standard SSDs for production. Premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Standard SSDs provide consistent and lower latency. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which takes three to five minutes. Learn more about [Virtual machine - MixedDiskTypeToSSDPublic (Use SSD Disks for your production workloads)](/azure/virtual-machines/windows/disks-types#disk-comparison). ### Match production Virtual Machines with Production Disk for consistent performance and better latency -Production virtual machines need production disks if you want to get the best performance. We see that you are running a production level virtual machine, however, you are using a low performing disk with standard HDD. Upgrading your disks that are attached to your production disks, either Standard SSD or Premium SSD, will benefit you with a more consistent experience and improvements in latency. +Production virtual machines need production disks if you want to get the best performance. We see that you're running a production-level virtual machine; however, you're using a low-performing Standard HDD disk. Upgrading the disks attached to your production VMs to either Standard SSD or Premium SSD benefits you with a more consistent experience and improvements in latency. Learn more about [Virtual machine - MatchProdVMProdDisks (Match production Virtual Machines with Production Disk for consistent performance and better latency)](/azure/virtual-machines/windows/disks-types#disk-comparison). -### Accelerated Networking may require stopping and starting the VM +### Accelerated Networking might require stopping and starting the VM -We have detected that Accelerated Networking is not engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it may be necessary to stop and start your VM, at your convenience, to re-engage AccelNet. +We have detected that Accelerated Networking isn't engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it might be necessary to stop and start your VM, at your convenience, to re-engage AccelNet. -Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking may require stopping and starting the VM)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms). +Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking might require stopping and starting the VM)](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms). -### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance. 
+### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance -Ultra disk is available in the same region as your database workload. Ultra disk offers high throughput, high IOPS, and consistent low latency disk storage for your database workloads: For Oracle DBs, you can now use either 4k or 512E sector sizes with Ultra disk depending on your Oracle DB version. For SQL server, leveraging Ultra disk for your log disk might offer more performance for your database. See instructions here for migrating your log disk to Ultra disk. +Ultra disk is available in the same region as your database workload. Ultra disk offers high throughput, high IOPS, and consistent low latency disk storage for your database workloads: For Oracle DBs, you can now use either 4k or 512E sector sizes with Ultra disk depending on your Oracle DB version. For SQL server, using Ultra disk for your log disk might offer more performance for your database. See instructions here for migrating your log disk to Ultra disk. Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal). -### Upgrade the size of your virtual machines close to resource exhaustion +### Upgrade the size of your most active virtual machines to prevent resource exhaustion and improve performance ++We analyzed data for the past seven days and identified virtual machines (VMs) with high utilization across different metrics (that is, CPU, Memory, and VM IO). Those VMs might experience performance issues since they're nearing or at their SKU's limits. Consider upgrading their SKU to improve performance. ++Learn more about [Virtual machine - UpgradeSizeHighVMUtilV0 (Upgrade the size of your most active virtual machines to prevent resource exhaustion and improve performance)](https://aka.ms/aa_resizehighusagevmrec_learnmore). + -We analyzed data for the past 7 days and identified virtual machines (VMs) with high utilization across different metrics (i.e., CPU, Memory, and VM IO). Those VMs may experience performance issues since they are nearing/at their SKU's limits. Consider upgrading their SKU to improve performance. -Learn more about [Virtual machine - Improve the performance of highly used VMs using Azure Advisor](https://aka.ms/aa_resizehighusagevmrec_learnmore) -## Kubernetes +## Containers ### Unsupported Kubernetes version is detected Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions). -## DataFactory +### Unsupported Kubernetes version is detected ++Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with a supported version. ++Learn more about [HDInsight Cluster Pool - UnsupportedHiloAKSVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions). ++### Clusters with a single node pool ++We recommended that you add one or more node pools instead of using a single node pool. Multiple pools help to isolate critical system pods from your application to prevent misconfigured or rogue application pods from accidentally killing system pods. 
++Learn more about [Kubernetes service - ClustersWithASingleNodePool (Clusters with a Single Node Pool)](/azure/aks/use-system-pools?tabs=azure-cli#system-and-user-node-pools). ++### Update Fleet API to the latest version ++We have identified SDK calls from an outdated Fleet API for resources under your subscription. We recommend switching to the latest SDK version, which ensures you receive the latest features and performance improvements. ++Learn more about [Kubernetes fleet manager | PREVIEW - UpdateToLatestFleetApi (Update Fleet API to the latest Version)](/azure/kubernetes-fleet/update-orchestration). +++++## Databases ++### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1 ++You're using the query page size of 100 for queries for your Azure Cosmos DB container. We recommend using a page size of -1 for faster scans. ++Learn more about [Azure Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count). ++### Add composite indexes to your Azure Cosmos DB container ++Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It's recommended to add composite indexes to your containers' indexing policy to reduce the RU consumption and decrease the latency of these queries. ++Learn more about [Azure Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes). ++### Optimize your Azure Cosmos DB indexing policy to only index what's needed ++Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries. ++Learn more about [Azure Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md). ++### Use hierarchical partition keys for optimal data distribution ++Your account has a custom setting that allows the logical partition size in a container to exceed the limit of 20 GB. The Azure Cosmos DB team applied this setting as a temporary measure to give you time to rearchitect your application with a different partition key. It isn't recommended as a long-term solution, as SLA guarantees aren't honored when the limit is increased. You can now use hierarchical partition keys (preview) to rearchitect your application. The feature allows you to exceed the 20-GB limit by setting up to three partition keys, ideal for multitenant scenarios or workloads that use synthetic keys. -### Review your throttled Data Factory Triggers +Learn more about [Azure Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/). ++### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK ++We noticed that your Azure Cosmos DB applications are using Gateway mode via the Azure Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
++Learn more about [Azure Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking). ++### Enhance Performance by Scaling Up for Optimal Resource Utilization ++Maximizing the efficiency of your system's resources is crucial for maintaining top-notch performance. Our system closely monitors CPU usage, and when it crosses the 90% threshold over a 12-hour period, a proactive alert is triggered. This alert not only informs Azure Cosmos DB for MongoDB vCore users of the elevated CPU consumption but also provides valuable guidance on scaling up to a higher tier. By upgrading to a more robust tier, you can unlock improved performance and ensure your system operates at its peak potential. ++Learn more about [Scaling and configuring Your Azure Cosmos DB for MongoDB vCore cluster](/azure/cosmos-db/mongodb/vcore/how-to-scale-cluster). -### Review your throttled Data Factory Triggers +### PerformanceBoostervCore -A high volume of throttling has been detected in an event-based trigger that runs in your Data Factory resource. This is causing your pipeline runs to drop from the run queue. Review the trigger definition to resolve issues and increase performance. +When CPU usage surpasses 90% within a 12-hour timeframe, users are notified about the high usage. Additionally, it advises them to scale up to a higher tier to get better performance. -Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](https://aka.ms/adf-create-event-trigger). ++Learn more about [Cosmos DB account - ScaleUpvCoreRecommendation (PerformanceBoostervCore)](/azure/cosmos-db/mongodb/vcore/how-to-scale-cluster). -## MariaDB ### Scale the storage limit for MariaDB server -Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases +Our system shows that the server might be constrained because it's approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases. Learn more about [MariaDB server - OrcasMariaDbStorageLimit (Scale the storage limit for MariaDB server)](https://aka.ms/mariadbstoragelimits). ### Increase the MariaDB server vCores -Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size. +Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size. Learn more about [MariaDB server - OrcasMariaDbCpuOverload (Increase the MariaDB server vCores)](https://aka.ms/mariadbpricing).
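As a hedged illustration of moving a server to a larger compute size from code, the sketch below updates a MariaDB server's SKU with the `azure-mgmt-rdbms` Python package. The resource names and the `GP_Gen5_8` target SKU are placeholders, and older package versions expose `update` rather than `begin_update`, so verify against your installed SDK:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.mariadb import MariaDBManagementClient
from azure.mgmt.rdbms.mariadb.models import ServerUpdateParameters, Sku

client = MariaDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Hypothetical names; GP_Gen5_8 = General Purpose tier, Gen5 hardware, 8 vCores.
poller = client.servers.begin_update(
    "myResourceGroup",
    "mymariadbserver",
    ServerUpdateParameters(sku=Sku(name="GP_Gen5_8", tier="GeneralPurpose", family="Gen5", capacity=8)),
)
print(poller.result().sku.name)
```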
### Scale the MariaDB server to higher SKU -Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connections requests which adversely affect the performance. To improve performance, we recommend moving to higher memory SKU by increasing vCore or switching to Memory-Optimized SKUs. +Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connection requests that adversely affect performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory-Optimized SKUs. Learn more about [MariaDB server - OrcasMariaDbConcurrentConnection (Scale the MariaDB server to higher SKU)](https://aka.ms/mariadbconnectionlimits). ### Move your MariaDB server to Memory Optimized SKU -Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS. +Our system shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS. Learn more about [MariaDB server - OrcasMariaDbMemoryCache (Move your MariaDB server to Memory Optimized SKU)](https://aka.ms/mariadbpricing). ### Increase the reliability of audit logs -Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance. +Our system shows that the server's audit logs might have been lost over the past day. Lost audit logs can occur when your server is experiencing a CPU-heavy workload, or a server generates a large number of audit logs over a short time period. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance. Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](https://aka.ms/mariadb-audit-logs). ### Scale the storage limit for MySQL server -Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode.
To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases +Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases. Learn more about [MySQL server - OrcasMySQLStorageLimit (Scale the storage limit for MySQL server)](https://aka.ms/mysqlstoragelimits). ### Scale the MySQL server to higher SKU -Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connections requests which adversely affect the performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCore or switching to Memory-Optimized SKUs. +Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connection requests that adversely affect performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory-Optimized SKUs. Learn more about [MySQL server - OrcasMySQLConcurrentConnection (Scale the MySQL server to higher SKU)](https://aka.ms/mysqlconnectionlimits). ### Increase the MySQL server vCores -Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size. +Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size. Learn more about [MySQL server - OrcasMySQLCpuOverload (Increase the MySQL server vCores)](https://aka.ms/mysqlpricing). ### Move your MySQL server to Memory Optimized SKU -Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS. +Our system shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS. Learn more about [MySQL server - OrcasMySQLMemoryCache (Move your MySQL server to Memory Optimized SKU)](https://aka.ms/mysqlpricing). ### Add a MySQL Read Replica server -Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server.
To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica. +Our system shows that you might have a read-intensive workload running, which results in resource contention for this server. Resource contention might lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica. Learn more about [MySQL server - OrcasMySQLReadReplica (Add a MySQL Read Replica server)](https://aka.ms/mysqlreadreplica). ### Improve MySQL connection management -Our internal telemetry indicates that your application connecting to MySQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. This can be done by configuring a server side connection-pooler, such as ProxySQL. +Our system shows that your application connecting to your MySQL server might be managing connections poorly, which might result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. You can do this by configuring a server-side connection pooler, such as ProxySQL. Learn more about [MySQL server - OrcasMySQLConnectionPooling (Improve MySQL connection management)](https://aka.ms/azure_mysql_connection_pooling). ### Increase the reliability of audit logs -Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance. +Our system shows that the server's audit logs might have been lost over the past day. This can occur when your server is experiencing a CPU-heavy workload or a server generates a large number of audit logs over a short time period. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance. Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](https://aka.ms/mysql-audit-logs). ### Improve performance by optimizing MySQL temporary-table sizing -Our internal telemetry indicates that your MySQL server may be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This may result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions. +Our system shows that your MySQL server might be incurring unnecessary I/O overhead due to low temporary-table parameter settings.
This might result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions. Learn more about [MySQL server - OrcasMySqlTmpTables (Improve performance by optimizing MySQL temporary-table sizing)](https://aka.ms/azure_mysql_tmp_table). ### Improve MySQL connection latency -Our internal telemetry indicates that your application connecting to MySQL server may not be managing connections efficiently. This may result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver. +Our system shows that your application connecting to your MySQL server might be managing connections poorly. This might result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver. Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](https://aka.ms/azure_mysql_connection_redirection). ### Increase the storage limit for MySQL Flexible Server -Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount. +Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount. Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlStorageUpsell (Increase the storage limit for MySQL Flexible Server)](https://aka.ms/azure_mysql_flexible_server_storage). ### Scale the MySQL Flexible Server to a higher SKU -Our telemetry indicates that your Flexible Server is exceeding the connection limits associated with your current SKU. A large number of failed connection requests may adversely affect server performance. To improve performance, we recommend increasing the number of vCores or switching to a higher SKU. +Our system shows that your Flexible Server is exceeding the connection limits associated with your current SKU. A large number of failed connection requests might adversely affect server performance. To improve performance, we recommend increasing the number of vCores or switching to a higher SKU. Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlConnectionUpsell (Scale the MySQL Flexible Server to a higher SKU)](https://aka.ms/azure_mysql_flexible_server_storage). ### Increase the MySQL Flexible Server vCores. -Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size. +Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance.
To improve performance, we recommend moving to a larger compute size. Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlCpuUpcell (Increase the MySQL Flexible Server vCores.)](https://aka.ms/azure_mysql_flexible_server_pricing). ### Improve performance by optimizing MySQL temporary-table sizing. -Our internal telemetry indicates that your MySQL server may be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This may result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions. +Our system shows that your MySQL server might be incurring unnecessary I/O overhead due to low temporary-table parameter settings. Unnecessary I/O overhead might result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions. Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlTmpTable (Improve performance by optimizing MySQL temporary-table sizing.)](https://dev.mysql.com/doc/refman/8.0/en/internal-temporary-tables.html#internal-temporary-tables-engines). ### Move your MySQL server to Memory Optimized SKU -Our internal telemetry shows that there is high memory usage for this server which can result in slower query performance and increased IOPS. To improve performance, please review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS. +Our system shows that there is high memory usage for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS. Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlMemoryUpsell (Move your MySQL server to Memory Optimized SKU)](https://aka.ms/azure_mysql_flexible_server_storage). ### Add a MySQL Read Replica server -Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica. +Our system shows that you might have a read-intensive workload running, which results in resource contention for this server. This might lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica. Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlReadReplicaUpsell (Add a MySQL Read Replica server)](https://aka.ms/flexible-server-mysql-read-replicas). ### Increase the work_mem to avoid excessive disk spilling from sort and hash -Our internal telemetry shows that the configuration work_mem is too small for your PostgreSQL server which is resulting in disk spilling and degraded query performance.
To improve this, we recommend increasing the work_mem limit for the server which will help to reduce the scenarios when the sort or hash happens on disk, thereby improving the overall query performance. +Our system shows that the configuration work_mem is too small for your PostgreSQL server, which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server, which helps to reduce the scenarios when the sort or hash happens on disk and improves the overall query performance. Learn more about [PostgreSQL server - OrcasPostgreSqlWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration). ### Boost your workload performance by 30% with the new Ev5 compute hardware +With the new Ev5 compute hardware, you can boost workload performance by 30% with higher concurrency and better throughput. Navigate to the Compute+Storage option on the Azure portal and switch to Ev5 compute at no extra cost. Ev5 compute provides the best performance among other VM series in terms of QPS and latency. +Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlComputeSeriesUpgradeEv5 (Boost your workload performance by 30% with the new Ev5 compute hardware)](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698). ### Scale the storage limit for PostgreSQL server -Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. +Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode.
To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases. Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits). ### Scale the PostgreSQL server to higher SKU -Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connections requests which adversely affect the performance. To improve performance, we recommend moving to higher memory SKU by increasing vCore or switching to Memory-Optimized SKUs. +Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connection requests adversely affecting performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory-Optimized SKUs. Learn more about [PostgreSQL server - OrcasPostgreSqlConcurrentConnection (Scale the PostgreSQL server to higher SKU)](https://aka.ms/postgresqlconnectionlimits). ### Move your PostgreSQL server to Memory Optimized SKU -Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS. +Our system shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS. Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your PostgreSQL server to Memory Optimized SKU)](https://aka.ms/postgresqlpricing). ### Add a PostgreSQL Read Replica server -Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica. +Our system shows that you might have a read-intensive workload running, which results in resource contention for this server. Resource contention can lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica. Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica). ### Increase the PostgreSQL server vCores -Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days.
High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size. +Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size. Learn more about [PostgreSQL server - OrcasPostgreSqlCpuOverload (Increase the PostgreSQL server vCores)](https://aka.ms/postgresqlpricing). ### Improve PostgreSQL connection management -Our internal telemetry indicates that your PostgreSQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. This can be done by configuring a server side connection-pooler, such as PgBouncer. +Our system shows that your PostgreSQL server might not be managing connections efficiently, which can result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections by configuring a server-side connection pooler, such as PgBouncer. Learn more about [PostgreSQL server - OrcasPostgreSqlConnectionPooling (Improve PostgreSQL connection management)](https://aka.ms/azure_postgresql_connection_pooling). ### Improve PostgreSQL log performance -Our internal telemetry indicates that your PostgreSQL server has been configured to output VERBOSE error logs. This can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting. +Our system shows that your PostgreSQL server has been configured to output VERBOSE error logs. This setting can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting. Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve PostgreSQL log performance)](https://aka.ms/azure_postgresql_log_settings). ### Optimize query statistics collection on an Azure Database for PostgreSQL -Our internal telemetry indicates that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE. +Our system shows that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE. Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).
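To make the `pg_stat_statements.track` change concrete, here is a minimal sketch that sets the server parameter through the `azure-mgmt-rdbms` Python package. The resource group and server name are placeholders, and the `user-override` source value follows the common single-server convention, so double-check both against the SDK version you install:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.postgresql import PostgreSQLManagementClient
from azure.mgmt.rdbms.postgresql.models import Configuration

client = PostgreSQLManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Hypothetical names; stop collecting per-statement statistics to reduce overhead.
poller = client.configurations.begin_create_or_update(
    "myResourceGroup",
    "mypgserver",
    "pg_stat_statements.track",
    Configuration(value="NONE", source="user-override"),
)
print(poller.result().value)
```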
### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting -Our internal telemetry indicates that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE. +Our system shows that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE. Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store). ### Increase the storage limit for PostgreSQL Flexible Server -Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount. +Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount. Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits). -### Optimize logging settings by setting LoggingCollector to -1 +#### Optimize logging settings by setting LoggingCollector to -1 Optimize logging settings by setting LoggingCollector to -1 -### Optimize logging settings by setting LogDuration to OFF +Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging). ++#### Optimize logging settings by setting LogDuration to OFF Optimize logging settings by setting LogDuration to OFF -### Optimize logging settings by setting LogStatement to NONE +Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging). ++#### Optimize logging settings by setting LogStatement to NONE Optimize logging settings by setting LogStatement to NONE -### Optimize logging settings by setting ReplaceParameter to OFF +Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging). ++#### Optimize logging settings by setting ReplaceParameter to OFF Optimize logging settings by setting ReplaceParameter to OFF -### Optimize logging settings by setting LoggingCollector to OFF +Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging). ++#### Optimize logging settings by setting LoggingCollector to OFF Optimize logging settings by setting LoggingCollector to OFF +Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
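Before switching the logging parameters above off, it can help to confirm what the server is currently set to. This hedged snippet reads the relevant settings over an ordinary `psycopg2` connection; the host, user, and password are placeholders for your own server:

```python
import psycopg2  # assumes the psycopg2-binary package is installed

# Hypothetical connection details for an Azure Database for PostgreSQL server.
conn = psycopg2.connect(
    host="<server>.postgres.database.azure.com",
    dbname="postgres",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)
settings = ["logging_collector", "log_duration", "log_statement", "log_error_verbosity"]
with conn, conn.cursor() as cur:
    for name in settings:  # fixed identifier list, so plain concatenation is safe here
        cur.execute("SHOW " + name)
        print(f"{name} = {cur.fetchone()[0]}")
conn.close()
```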
+ ### Increase the storage limit for Hyperscale (Citus) server group -Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space. +Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space. Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes). ### Optimize log_statement settings for PostgreSQL on Azure Database -Our internal telemetry indicates that you have log_statement enabled, for better performance, set it to NONE +Our system shows that you have log_statement enabled; for better performance, set it to NONE. Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md). ### Increase the work_mem to avoid excessive disk spilling from sort and hash -Our internal telemetry shows that the configuration work_mem is too small for your PostgreSQL server which is resulting in disk spilling and degraded query performance. To improve this, we recommend increasing the work_mem limit for the server which will help to reduce the scenarios when the sort or hash happens on disk, thereby improving the overall query performance. +Our system shows that the configuration work_mem is too small for your PostgreSQL server, resulting in disk spilling and degraded query performance. We recommend increasing the work_mem limit for the server, which helps to reduce the scenarios when the sort or hash happens on disk and improves the overall query performance. Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration). ### Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning -Our internal telemetry suggests that you can improve storage performance by enabling Intelligent tuning +Our system suggests that you can improve storage performance by enabling Intelligent tuning. Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](../postgresql/flexible-server/concepts-intelligent-tuning.md). ### Optimize log_duration settings for PostgreSQL on Azure Database -Our internal telemetry indicates that you have log_duration enabled, for better performance, set it to OFF +Our system shows that you have log_duration enabled; for better performance, set it to OFF.
### Optimize log_min_duration settings for PostgreSQL on Azure Database -Our internal telemetry indicates that you have log_min_duration enabled, for better performance, set it to -1 +Our system shows that you have log_min_duration enabled, for better performance, set it to -1 Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md). ### Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database -Our internal telemetry indicates that you have pg_qs.query_capture_mode enabled, for better performance, set it to NONE +Our system shows that you have pg_qs.query_capture_mode enabled, for better performance, set it to NONE Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-query-store-best-practices.md). ### Optimize PostgreSQL performance by enabling PGBouncer -Our Internal telemetry indicates that you can improve PostgreSQL performance by enabling PGBouncer +Our system shows that you can improve PostgreSQL performance by enabling PGBouncer Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](../postgresql/flexible-server/concepts-pgbouncer.md). ### Optimize log_error_verbosity settings for PostgreSQL on Azure Database -Our internal telemetry indicates that you have log_error_verbosity enabled, for better performance, set it to DEFAULT +Our system shows that you have log_error_verbosity enabled, for better performance, set it to DEFAULT Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md). ### Increase the storage limit for Hyperscale (Citus) server group -Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space. +Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space. Learn more about [Hyperscale (Citus) server group - MarlinStorageLimitRecommendation (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes). ### Migrate your database from SSPG to FSPG -Consider our new offering Azure Database for PostgreSQL Flexible Server that provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls and simplified developer experience. Learn more. 
+Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience. Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](../postgresql/how-to-upgrade-using-dump-and-restore.md). ### Move your PostgreSQL Flexible Server to Memory Optimized SKU -Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, please review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS. +Our system shows that there is high churn in the buffer pool for this server, resulting in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS. Learn more about [PostgreSQL server - OrcasMeruMemoryUpsell (Move your PostgreSQL Flexible Server to Memory Optimized SKU)](https://aka.ms/azure_postgresql_flexible_server_pricing). -## DesktopVirtualization --### Improve user experience and connectivity by deploying VMs closer to user's location. +### Improve your Cache and application performance when running with high network bandwidth -We have determined that your VMs are located in a region different or far from where your users are connecting from, using Azure Virtual Desktop. This may lead to prolonged connection response times and will impact overall user experience on Azure Virtual Desktop. When creating VMs for your host pools, you should attempt to use a region closer to the user. Having close proximity ensures continuing satisfaction with the Azure Virtual Desktop service and a better overall quality of experience. +Cache instances perform best when not running under high network bandwidth that might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce network bandwidth or scale to a different size or SKU with more capacity. -Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md). +Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth). ### Change the max session limit for your depth first load balanced host pool to improve VM performance -Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions will be directed to the same session host and this may cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, you should also set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs.
To fix this, open your host pool's properties and change the value next to the "Max session limit" setting. +Cache instances perform best when not running with many connected clients, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity. -Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance )](../virtual-desktop/configure-host-pool-load-balancing.md). +Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections). -## Azure Cosmos DB +### Improve your Cache and application performance when running with many connected clients -### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1 +Cache instances perform best when not running with many connected clients, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity. -You are using the query page size of 100 for queries for your Azure Cosmos DB container. We recommend using a page size of -1 for faster scans. +Learn more about [Redis Cache Server - RedisCacheConnectedClientsHigh (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections). -Learn more about [Azure Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count). +### Improve your Cache and application performance when running with high server load -### Add composite indexes to your Azure Cosmos DB container +Cache instances perform best when not running under high server load, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity. -Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It is recommended to add composite indexes to your containers' indexing policy to improve the RU consumption and decrease the latency of these queries. +Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu). -Learn more about [Azure Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes). +### Improve your Cache and application performance when running with high server load -### Optimize your Azure Cosmos DB indexing policy to only index what's needed +Cache instances perform best when not running under high server load, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce the server load or scale to a different size or SKU with more capacity. -Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency.
To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries. +Learn more about [Redis Cache Server - RedisCacheServerLoadHigh (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu). -Learn more about [Azure Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md). +### Improve your Cache and application performance when running with high memory pressure -### Use hierarchical partition keys for optimal data distribution +Cache instances perform best when not running under high memory pressure, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce used memory or scale to a different size or SKU with more capacity. -This account has a custom setting that allows the logical partition size in a container to exceed the limit of 20 GB. This setting was applied by the Azure Cosmos DB team as a temporary measure to give you time to re-architect your application with a different partition key. It is not recommended as a long-term solution, as SLA guarantees are not honored when the limit is increased. You can now use hierarchical partition keys (preview) to re-architect your application. The feature allows you to exceed the 20 GB limit by setting up to three partition keys, ideal for multi-tenant scenarios or workloads that use synthetic keys. +Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory). -### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK +### Improve your Cache and application performance when memory RSS usage is high -We noticed that your Azure Cosmos DB applications are using Gateway mode via the Azure Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability. +Cache instances perform best when not running under high memory pressure, which might cause unresponsiveness, data loss, or unavailability. Apply best practices to reduce used memory or scale to a different size or SKU with more capacity. -Learn more about [Azure Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking). +Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSS (Improve your Cache and application performance when memory RSS usage is high.)](https://aka.ms/redis/recommendations/memory). -### Enhance Performance by Scaling Up for Optimal Resource Utilization +### Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache +Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache. If the client host machine is running hot on memory, CPU, or network bandwidth, the cache responses don't reach your application fast enough, which can result in higher latency.
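One way to sanity-check the client side of this recommendation is to measure round trips from the application host itself: if a simple `PING` is slow while the cache's own metrics look healthy, the bottleneck is likely the client machine or the network path. A hedged `redis-py` sketch, with the cache host name and access key as placeholders:

```python
import time
import redis  # assumes the redis-py package is installed

# Hypothetical cache name and access key; Azure Cache for Redis listens for TLS on 6380.
r = redis.Redis(host="<cache-name>.redis.cache.windows.net", port=6380, password="<access-key>", ssl=True)

samples = []
for _ in range(10):
    start = time.perf_counter()
    r.ping()
    samples.append((time.perf_counter() - start) * 1000)
print(f"median round-trip: {sorted(samples)[len(samples) // 2]:.2f} ms")

info = r.info()  # server-side counters for comparison with client-side latency
print("connected_clients:", info["connected_clients"])
print("used_memory_human:", info["used_memory_human"])
```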
-Maximizing the efficiency of your system's resources is crucial for maintaining top-notch performance. Our system closely monitors CPU usage, and when it crosses the 90% threshold over a 12-hour period, a proactive alert is triggered. This alert not only informs Azure Cosmos DB for MongoDB vCore users of the elevated CPU consumption but also provides valuable guidance on scaling up to a higher tier. By upgrading to a more robust tier, you can unlock improved performance and ensure your system operates at its peak potential. +Learn more about [Redis Cache Server - UnresponsiveClient (Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache.)](/azure/azure-cache-for-redis/cache-troubleshoot-client). -Learn more about [Scaling and configuring Your Azure Cosmos DB for MongoDB vCore cluster](../cosmos-db/mongodb/vcore/how-to-scale-cluster.md) -## HDInsight ## DevOps -### Unsupported Kubernetes version is detected +### Update to the latest AMS API Version -Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with a supported version. +We have identified calls to an Azure Media Services (AMS) API version that is not recommended. We recommend switching to the latest AMS API version to ensure uninterrupted access to AMS, the latest features, and performance improvements. -Learn more about [HDInsight Cluster Pool - UnsupportedHiloAKSVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions). +Learn more about [Monitor - UpdateToLatestAMSApiVersion (Update to the latest AMS API Version)](https://aka.ms/AMSAdvisor). -### Reads happen on most recent data +### Upgrade to the latest Workloads SDK version -More than 75% of your read requests are landing on the memstore. That indicates that the reads are primarily on recent data. This suggests that even if a flush happens on the memstore, the recent file needs to be accessed and that file needs to be in the cache. +Upgrade to the latest Workloads SDK version to get the best results in terms of model quality, performance, and service availability. -Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](../hdinsight/hbase/apache-hbase-advisor.md). +Learn more about [Monitor - UpgradeToLatestAMSSdkVersion (Upgrade to the latest Workloads SDK version)](https://aka.ms/AMSAdvisor). -### Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance. -You are seeing this advisor recommendation because HDInsight team's system log shows that in the past 7 days, your cluster has encountered the following scenarios: - 1. High WAL sync time latency - 2. High write request count (at least 3 one hour windows of over 1000 avg_write_requests/second/node) -These conditions are indicators that your cluster is suffering from high write latencies. This could be due to heavy workload performed on your cluster. -To improve the performance of your cluster, you may want to consider utilizing the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD-managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, provides low write-latency and better resiliency for your applications.
-To read more on this feature, please visit link:
+## Integration

-Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](../hdinsight/hbase/apache-hbase-accelerated-writes.md).
+### Upgrade your API Management resource to an alternative version

-### More than 75% of your queries are full scan queries.
+Your subscription is running on versions that have been scheduled for deprecation. On 30 September 2023, all API versions for the Azure API Management service prior to 2021-08-01 retire and API calls fail. Upgrade to a newer version to prevent disruption to your services.

-More than 75% of the scan queries on your cluster are doing a full region/table scan. Modify your scan queries to avoid full region or table scans.
+Learn more about [Api Management - apimgmtdeprecation (Upgrade your API Management resource to an alternative version)](https://azure.microsoft.com/updates/api-versions-being-retired-for-azure-api-management/).

-Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](../hdinsight/hbase/apache-hbase-advisor.md).

-### Check your region counts as you have blocking updates.

-Region counts needs to be adjusted to avoid updates getting blocked. It might require a scale up of the cluster by adding new nodes.

-Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](../hdinsight/hbase/apache-hbase-advisor.md).

-### Consider increasing the flusher threads
+## Mobile

-The flush queue size in your region servers is more than 100 or there are updates getting blocked frequently. Tuning of the flush handler is recommended.
+### Use recommended version of Chat SDK

-Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](../hdinsight/hbase/apache-hbase-advisor.md).
+Azure Communication Services Chat SDK can be used to add rich, real-time chat to your applications. Update to the recommended version of Chat SDK to ensure the latest fixes and features.

-### Consider increasing your compaction threads for compactions to complete faster
+Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](../communication-services/concepts/chat/sdk-features.md).

-The compaction queue in your region servers is more than 2000 suggesting that more data requires compaction. Slower compactions can impact read performance as the number of files to read are more. More files without compaction can also impact the heap usage related to how files interact with Azure file system.
+### Use recommended version of Resource Manager SDK

-Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor).
+Resource Manager SDK can be used to create and manage Azure Communication Services resources. Update to the recommended version of Resource Manager SDK to ensure the latest fixes and features.

-## Automanage
+Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](../communication-services/quickstarts/create-communication-resource.md?pivots=platform-net&tabs=windows).
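If you manage the JavaScript SDKs with npm, a version check against the registry is usually the quickest way to spot an outdated Communication Services package. The package names below are illustrative examples; substitute the SDKs your application actually uses.

```bash
# Compare installed Azure Communication Services packages against the latest published versions.
npm outdated @azure/communication-chat @azure/arm-communication

# Update to the latest releases (review the changelogs for breaking changes first).
npm install @azure/communication-chat@latest @azure/arm-communication@latest
```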
-### Update Automanage to the latest API Version +### Use recommended version of Identity SDK -We have identified sdk calls from outdated API for resources under this subscription. We recommend switching to the latest sdk versions. This ensures you receive the latest features and performance improvements. +Azure Communication Services Identity SDK can be used to manage identities, users, and access tokens. Update to the recommended version of Identity SDK to ensure the latest fixes and features. -Learn more about [Machine - Azure Arc - UpdateToLatestApiHci (Update Automanage to the latest API Version)](/azure/automanage/reference-sdk). +Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](../communication-services/concepts/sdk-options.md). -## KeyVault +### Use recommended version of SMS SDK -### Update Key Vault SDK Version +Azure Communication Services SMS SDK can be used to send and receive SMS messages. Update to the recommended version of SMS SDK to ensure the latest fixes and features. -New Key Vault Client Libraries are split to keys, secrets, and certificates SDKs, which are integrated with recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. It also contains several performance fixes to issues reported by customers and proactively identified through our QA process.<br><br>**PLEASE DISMISS:**<br>If Key Vault is integrated with Azure Storage, Disk or other Azure services which can use old Key Vault SDK and when all your current custom applications are using .NET SDK 4.0 or above. +Learn more about [Communication service - UpgradeSmsSdk (Use recommended version of SMS SDK)](/azure/communication-services/concepts/telephony-sms/sdk-features). -Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md). +### Use recommended version of Phone Numbers SDK -### Update Key Vault SDK Version +Azure Communication Services Phone Numbers SDK can be used to acquire and manage phone numbers. Update to the recommended version of Phone Numbers SDK to ensure the latest fixes and features. -New Key Vault Client Libraries are split to keys, secrets, and certificates SDKs, which are integrated with recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. It also contains several performance fixes to issues reported by customers and proactively identified through our QA process. +Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](../communication-services/concepts/sdk-options.md). -> [!IMPORTANT] -> Please be aware that you can only remediate recommendation for custom applications you have access to. Recommendations can be shown due to integration with other Azure services like Storage, Disk encryption, which are in process to update to new version of our SDK. If you use .NET 4.0 in all your applications please dismiss. +### Use recommended version of Calling SDK -Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md). +Azure Communication Services Calling SDK can be used to enable voice, video, screen-sharing, and other real-time communication. Update to the recommended version of Calling SDK to ensure the latest fixes and features. 
-## Data Exporer +Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](../communication-services/concepts/voice-video-calling/calling-sdk-features.md). -### Right-size Data Explorer resources for optimal performance. +### Use recommended version of Call Automation SDK -This recommendation surfaces all Data Explorer resources which exceed the recommended data capacity (80%). The recommended action to improve the performance is to scale to the recommended configuration shown. +Azure Communication Services Call Automation SDK can be used to make and manage calls, play audio, and configure recording. Update to the recommended version of Call Automation SDK to ensure the latest fixes and features. -Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance). +Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](../communication-services/concepts/voice-video-calling/call-automation-apis.md). -### Review table cache policies for Data Explorer tables +### Use recommended version of Network Traversal SDK -This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy). (You'll see the top 10 tables by query percentage that access out-of-cache data). The recommended action to improve the performance: Limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value. +Azure Communication Services Network Traversal SDK can be used to access TURN servers for low-level data transport. Update to the recommended version of Network Traversal SDK to ensure the latest fixes and features. -Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy). +Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](../communication-services/concepts/sdk-options.md). -### Reduce Data Explorer table cache policy for better performance +### Use recommended version of Rooms SDK -Reducing the table cache policy will free up unused data from the resource's cache and improve performance. +Azure Communication Services Rooms SDK can be used to control who can join a call, when they can meet, and how they can collaborate. Update to the recommended version of Rooms SDK to ensure the latest fixes and features. A non-recommended version was detected in the last 48-60 hours. -Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy). +Learn more about [Communication service - UpgradeRoomsSdk (Use recommended version of Rooms SDK)](/azure/communication-services/concepts/rooms/room-concept). -## Networking -### Configure DNS Time to Live to 60 seconds -Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a health endpoint as quickly as possible. 
-Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
+## Networking

-### Configure DNS Time to Live to 20 seconds
+### Upgrade SDK version recommendation

-Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a health endpoint as quickly as possible.
+The latest version of Azure Front Door Standard and Premium Client Library or SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Front Door Standard and Premium.

-Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).
+Learn more about [Front Door Profile - UpgradeCDNToLatestSDKLanguage (Upgrade SDK version recommendation)](https://aka.ms/afd/tiercomparison).

-### Configure DNS Time to Live to 60 seconds
+### Upgrade SDK version recommendation

-Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a health endpoint as quickly as possible.
+The latest version of Azure Traffic Collector SDK contains fixes to issues proactively identified through our QA process, supports the latest resource model, and has reliability and performance optimizations that can improve your overall experience of using ATC.

-Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
+Learn more about [Azure Traffic Collector - UpgradeATCToLatestSDKLanguage (Upgrade SDK version recommendation)](/azure/expressroute/traffic-collector).

### Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs

-You have been using over 90% of your procured circuit bandwidth recently. If you exceed your allocated bandwidth, you will experience an increase in dropped packets sent over ExpressRoute. Upgrade your circuit bandwidth to maintain performance if your bandwidth needs remain this high.
+You have been using over 90% of your procured circuit bandwidth recently. If you exceed your allocated bandwidth, you experience an increase in dropped packets sent over ExpressRoute. Upgrade your circuit bandwidth to maintain performance if your bandwidth needs remain this high.

Learn more about [ExpressRoute circuit - UpgradeERCircuitBandwidth (Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs)](../expressroute/about-upgrade-circuit-bandwidth.md).

-### Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use
-
-Under high traffic load, the VPN gateway may drop packets due to high CPU.
-
-Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).
--### Consider increasing the size of your VNet Gateway SKU to address high P2S use --Each gateway SKU can only support a specified count of concurrent P2S connections. Your connection count is close to your gateway limit, so additional connection attempts may fail. --Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consider increasing the size of your VNet Gateway SKU to address high P2S use)](https://aka.ms/HighP2SConnectionsVNetGateway). --### Make sure you have enough instances in your Application Gateway to support your traffic --Your Application Gateway has been running on high utilization recently and under heavy load, you may experience traffic loss or increase in latency. It is important that you scale your Application Gateway according to your traffic and with a bit of a buffer so that you are prepared for any traffic surges or spikes and minimizing the impact that it may have in your QoS. Application Gateway v1 SKU (Standard/WAF) supports manual scaling and v2 SKU (Standard_v2/WAF_v2) support manual and autoscaling. In case of manual scaling, increase your instance count and if autoscaling is enabled, make sure your maximum instance count is set to a higher value so Application Gateway can scale out as the traffic increases --Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw). --## SQL --### Create statistics on table columns --We have detected that you are missing table statistics which may be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan. --Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics). --### Remove data skew to increase query performance --We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks. --Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew). --### Update statistics on table columns --We have detected that you do not have up-to-date table statistics which may be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan. --Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics). --### Scale up to optimize cache utilization with SQL Data Warehouse --We have detected that you had high cache used percentage with a low hit percentage. This indicates high cache eviction which can impact the performance of your workload. --Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache). --### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse --We have detected that you had high tempdb utilization which can impact the performance of your workload. --Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb). 
-
-### Convert tables to replicated tables with SQL Data Warehouse
-
-We have detected that you may benefit from using replicated tables. When using replicated tables, this will avoid costly data movement operations and significantly increase the performance of your workload.
-
-Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables).
+### Experience more predictable, consistent latency with a private connection to Azure

-### Split staged files in the storage account to increase load performance
+Improve the performance, privacy, and reliability of your business-critical apps by extending your on-premises networks to Azure with Azure ExpressRoute. Establish private ExpressRoute connections directly from your WAN, through a cloud exchange facility, or through POP and IPVPN connections.

-We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more to maximize the parallelism of your load.
+Learn more about [Subscription - AzureExpressRoute (Experience more predictable, consistent latency with a private connection to Azure)](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager).

-Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit).
+### Upgrade Workloads API to the latest version (Azure Center for SAP solutions API)

-### Increase batch size when loading to maximize load throughput, data compression, and query performance
+We have identified calls to an outdated Workloads API version for resources under this resource group. We recommend switching to the latest Workloads API version to ensure uninterrupted access to the latest features and performance improvements in Azure Center for SAP solutions. If there are multiple Virtual Instances for SAP solutions (VIS) shown in the recommendation, ensure you update the API version for all VIS resources.

-We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP - a good rule of thumb is a batch size between 100K to 1M rows.
+Learn more about [Subscription - UpdateToLatestWaasApiVersionAtSub (Upgrade Workloads API to the latest version (Azure Center for SAP solutions API))](https://go.microsoft.com/fwlink/?linkid=2228001).

-Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize).
+### Upgrade Workloads SDK to the latest version (Azure Center for SAP solutions SDK)

-### Co-locate the storage account within the same region to minimize latency when loading
+We have identified calls to an outdated Workloads SDK version from resources in this Resource Group. Upgrade to the latest Workloads SDK version to get the latest features and the best results in terms of model quality, performance and service availability for Azure Center for SAP solutions.
If there are multiple Virtual Instances for SAP solutions (VIS) shown in the recommendation, ensure you update the SDK version for all VIS resources.

-We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
+Learn more about [Subscription - UpgradeToLatestWaasSdkVersionAtSub (Upgrade Workloads SDK to the latest version (Azure Center for SAP solutions SDK))](https://go.microsoft.com/fwlink/?linkid=2228000).

-Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation).
+### Configure DNS Time to Live to 60 seconds

-## Storage
+Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint more quickly in the case of a failover. Configure your TTL to 60 seconds to route traffic to a health endpoint as quickly as possible.

-### Use "Put Blob" for blobs smaller than 256 MB
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).

-When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized.
+### Configure DNS Time to Live to 20 seconds

-Learn more about [Storage Account - StorageCallPutBlob (Use \""Put Blob\"" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs).
+Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint more quickly in the case of a failover. Configure your TTL to 20 seconds to route traffic to a health endpoint as quickly as possible.

-### Upgrade your Storage Client Library to the latest version for better reliability and performance
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).

-The latest version of Storage Client Library/ SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage.
+### Configure DNS Time to Live to 60 seconds

-Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca).
+Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint more quickly in the case of a failover. Configure your TTL to 60 seconds to route traffic to a health endpoint as quickly as possible.

-### Upgrade to Standard SSD Disks for consistent and improved performance
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
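As a hedged sketch of how this might look with the Azure CLI, the following updates the TTL on an existing profile; the resource group and profile names are placeholders.

```bash
# Lower the DNS TTL on a Traffic Manager profile to 60 seconds for faster failover.
az network traffic-manager profile update \
  --resource-group MyResourceGroup \
  --name MyTrafficManagerProfile \
  --ttl 60
```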
-
-Because you are running IaaS virtual machine workloads on Standard HDD managed disks, we wanted to let you know that a Standard SSD disk option is now available for all Azure VM types. Standard SSD disks are a cost-effective storage option optimized for enterprise workloads that need consistent performance. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which will take three to five minutes.
+### Consider increasing the size of your virtual network Gateway SKU to address consistently high CPU use

-Learn more about [Storage Account - StandardSSDForNonPremVM (Upgrade to Standard SSD Disks for consistent and improved performance)](/azure/virtual-machines/windows/disks-types#standard-ssd).
+Under high traffic load, the VPN gateway might drop packets due to high CPU.

-### Use premium performance block blob storage
+Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your virtual network (VNet) Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).

-One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
+### Consider increasing the size of your virtual network Gateway SKU to address high P2S use

-Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob).
+Each gateway SKU can only support a specified count of concurrent P2S connections. Your connection count is close to your gateway limit, so more connection attempts might fail.

-### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance
+Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consider increasing the size of your VNet Gateway SKU to address high P2S use)](https://aka.ms/HighP2SConnectionsVNetGateway).

-We have noticed your Unmanaged HDD Disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which will take three to five minutes.
+### Make sure you have enough instances in your Application Gateway to support your traffic

-Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unmanaged Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
+Your Application Gateway has been running on high utilization recently and under heavy load you might experience traffic loss or an increase in latency. It is important that you scale your Application Gateway accordingly and add a buffer so that you're prepared for any traffic surges or spikes and minimize the effect that they might have on your QoS. Application Gateway v1 SKU (Standard/WAF) supports manual scaling and v2 SKU (Standard_v2/WAF_v2) supports manual and autoscaling. With manual scaling, increase your instance count. If autoscaling is enabled, make sure your maximum instance count is set to a higher value so Application Gateway can scale out as the traffic increases.
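One possible way to apply either scaling change is the generic `--set` syntax of the Azure CLI, which writes directly to the resource's ARM properties; the names below are placeholders, and the property paths assume the standard Application Gateway resource schema.

```bash
# v1 SKU (Standard/WAF): raise the manual instance count.
az network application-gateway update \
  --resource-group MyResourceGroup --name MyAppGateway \
  --set sku.capacity=4

# v2 SKU (Standard_v2/WAF_v2): raise the autoscale ceiling so the gateway can scale out under load.
az network application-gateway update \
  --resource-group MyResourceGroup --name MyAppGateway \
  --set autoscaleConfiguration.maxCapacity=10
```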
+
Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw).

-## Subscription

-### Experience more predictable, consistent latency with a private connection to Azure

-Improve the performance, privacy, and reliability of your business-critical apps by extending your on-premises networks to Azure with Azure ExpressRoute. Establish private ExpressRoute connections directly from your WAN, through a cloud exchange facility, or through POP and IPVPN connections.

-Learn more about [Subscription - AzureExpressRoute (Experience more predictable, consistent latency with a private connection to Azure)](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager).

-## Synapse

-### Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows
+## SAP for Azure

-Clustered columnstore tables are organized in data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. Segment quality can be measured by the number of rows in a compressed row group.
+### To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the App VM OS in SAP workloads

-Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
+To avoid sporadic soft-lockup in Mellanox driver, reduce the can_queue value in the OS. The value cannot be set directly. Add the following kernel boot line options to achieve the same effect: 'hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024'

-### Update SynapseManagementClient SDK Version
+Learn more about [App Server Instance - AppSoftLockup (To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the App VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000020248).

-New SynapseManagementClient is using .NET SDK 4.0 or above.
+### To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the ASCS VM OS in SAP workloads

-Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK).
+To avoid sporadic soft-lockup in Mellanox driver, reduce the can_queue value in the OS. The value cannot be set directly. Add the following kernel boot line options to achieve the same effect: 'hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024'

-## Web
+Learn more about [Central Server Instance - AscsoftLockup (To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the ASCS VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000020248).

-### Move your App Service Plan to PremiumV2 for better performance
-
-Your app served more than 1000 requests per day for the past 3 days. Your app may benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and doubled memory-to-core ratio when compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation.
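The kernel options quoted in these soft-lockup recommendations are applied through the boot loader rather than sysctl. A minimal sketch, assuming a GRUB 2 based image such as the SLES and RHEL images commonly used for SAP (file paths vary by distribution):

```bash
# Append the recommended hv_storvsc options to the kernel command line (run as root).
sed -i 's|^GRUB_CMDLINE_LINUX="\(.*\)"|GRUB_CMDLINE_LINUX="\1 hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024"|' /etc/default/grub

# Regenerate the GRUB configuration, then reboot for the options to take effect.
grub2-mkconfig -o /boot/grub2/grub.cfg
```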
+### To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the DB VM OS in SAP workloads

-Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2).

+To avoid sporadic soft-lockup in Mellanox driver, reduce the can_queue value in the OS. The value cannot be set directly. Add the following kernel boot line options to achieve the same effect: 'hv_storvsc.storvsc_ringbuffer_size=131072 hv_storvsc.storvsc_vcpus_per_sub_channel=1024'

-### Check outbound connections from your App Service resource
+Learn more about [Database Instance - DBSoftLockup (To avoid soft-lockup in Mellanox driver, reduce the can_queue value in the DB VM OS in SAP workloads)](https://www.suse.com/support/kb/doc/?id=000020248).

-Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
+### For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter

-Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
+The parameter net.ipv4.tcp_wmem specifies minimum, default, and maximum send buffer sizes that are used for a TCP socket. Set the parameter as per SAP note: 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value must not exceed net.core.wmem_max parameter.

-## SAP on Azure Workloads
+Learn more about [Database Instance - WriteBuffersAllocated (For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

-### For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter
+### For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter

-The parameter net.ipv4.tcp_wmem specifies minimum, default, and maximum send buffer sizes that are used for a TCP socket. Set the parameter as per SAP note: 302436 to certify HANA DB to run with ANF and improve file system performance. The maximum value should not exceed net.core.wmem_max parameter
+The parameter net.ipv4.tcp_rmem specifies minimum, default, and maximum receive buffer sizes used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value must not exceed net.core.rmem_max parameter.

-Learn more about [Database Instance - WriteBuffersAllocated (For improved file system performance in HANA DB with ANF, optimize tcp_wmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).
+Learn more about [Database Instance - OptimiseReadTcp (For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### For improved file system performance in HANA DB with ANF, optimize wmem_max OS parameter

-In HANA DB with ANF storage type, the maximum write socket buffer, defined by the parameter, net.core.wmem_max must be set large enough to handle outgoing network packets. This configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note: 3024346
+In HANA DB with ANF storage type, the maximum write socket buffer, defined by the parameter net.core.wmem_max, must be set large enough to handle outgoing network packets.
The net.core.wmem_max configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note: 3024346.

Learn more about [Database Instance - MaxWriteBuffer (For improved file system performance in HANA DB with ANF, optimize wmem_max OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter

-The parameter net.ipv4.tcp_rmem specifies minimum, default, and maximum receive buffer sizes used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value should not exceed net.core.rmem_max parameter
+The parameter net.ipv4.tcp_rmem specifies minimum, default, and maximum receive buffer sizes used for a TCP socket. Set the parameter as per SAP note 3024346 to certify HANA DB to run with ANF and improve file system performance. The maximum value must not exceed net.core.rmem_max parameter.

Learn more about [Database Instance - OptimizeReadTcp (For improved file system performance in HANA DB with ANF, optimize tcp_rmem OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### For improved file system performance in HANA DB with ANF, optimize rmem_max OS parameter

-In HANA DB with ANF storage type, the maximum read socket buffer, defined by the parameter, net.core.rmem_max must be set large enough to handle incoming network packets. This configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note: 3024346.
+In HANA DB with ANF storage type, the maximum read socket buffer, defined by the parameter net.core.rmem_max, must be set large enough to handle incoming network packets. The net.core.rmem_max configuration certifies HANA DB to run with ANF and improves file system performance. See SAP note: 3024346.

Learn more about [Database Instance - MaxReadBuffer (For improved file system performance in HANA DB with ANF, optimize rmem_max OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### For improved file system performance in HANA DB with ANF, set receiver backlog queue size to 300000

-The parameter net.core.netdev_max_backlog specifies the size of the receiver backlog queue, used if a Network interface receives packets faster than the kernel can process. Set the parameter as per SAP note: 3024346. This configuration certifies HANA DB to run with ANF and improves file system performance.
+The parameter net.core.netdev_max_backlog specifies the size of the receiver backlog queue, used if a network interface receives packets faster than the kernel can process. Set the parameter as per SAP note: 3024346. The net.core.netdev_max_backlog configuration certifies HANA DB to run with ANF and improves file system performance.

Learn more about [Database Instance - BacklogQueueSize (For improved file system performance in HANA DB with ANF, set receiver backlog queue size to 300000)](https://launchpad.support.sap.com/#/notes/3024346).

### To improve file system performance in HANA DB with ANF, enable the TCP window scaling OS parameter

-Enable the TCP window scaling parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads
+Enable the TCP window scaling parameter as per SAP note: 3024346. The TCP window scaling configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.
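For the backlog queue and window scaling settings described in this section, a minimal sketch of the persistent configuration might look as follows; the file name follows the `/etc/sysctl.d/ms-az.conf` convention referenced by the linked SAP-on-Azure guidance, and the commands assume root privileges.

```bash
# Persist the receiver backlog queue size and TCP window scaling settings.
cat >> /etc/sysctl.d/ms-az.conf <<'EOF'
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_window_scaling = 1
EOF

# Reload all sysctl configuration files so the values take effect immediately.
sysctl --system
```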
Learn more about [Database Instance - EnableTCPWindowScaling (To improve file system performance in HANA DB with ANF, enable the TCP window scaling OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS

-Disable IPv6 as per recommendation for SAP on Azure for HANA DB with ANF to improve file system performance
+Disable IPv6 as per recommendation for SAP on Azure for HANA DB with ANF to improve file system performance.

-Learn more about [Database Instance - DisableIPv6Protocol (For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - DisableIPv6Protocol (For improved file system performance in HANA DB with ANF, disable IPv6 protocol in OS)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).

### To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle

-The parameter net.ipv4.tcp_slow_start_after_idle disables the need to scale-up incrementally the TCP window size for TCP connections which were idle for some time. By setting this parameter to zero as per SAP note: 302436, the maximum speed is used from beginning for previously idle TCP connections
+The parameter net.ipv4.tcp_slow_start_after_idle controls whether the TCP window size must be scaled up incrementally for TCP connections that were idle for some time. By setting this parameter to zero as per SAP note: 3024346, the maximum speed is used from the beginning for previously idle TCP connections.

Learn more about [Database Instance - ParameterSlowStart (To improve file system performance in HANA DB with ANF, disable parameter for slow start after idle)](https://launchpad.support.sap.com/#/notes/3024346).

### For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter

-To prevent the kernel from using SYN cookies in a situation where lots of connection requests are sent in a short timeframe and to prevent a warning about a potential SYN flooding attack in the system log, the size of the SYN backlog should be set to a reasonably high value. See SAP note 2382421
+To prevent the kernel from using SYN cookies in a situation where lots of connection requests are sent in a short timeframe and to prevent a warning about a potential SYN flooding attack in the system log, the size of the SYN backlog must be set to a reasonably high value. See SAP note 2382421.

-Learn more about [Database Instance - TCPMaxSynBacklog (For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - TCPMaxSynBacklog (For improved file system performance in HANA DB with ANF optimize tcp_max_syn_backlog OS parameter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).

### For improved file system performance in HANA DB with ANF, enable the tcp_sack OS parameter

-Enable the tcp_sack parameter as per SAP note: 302436.
This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads
+Enable the tcp_sack parameter as per SAP note: 3024346. The tcp_sack configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.

Learn more about [Database Instance - TCPSackParameter (For improved file system performance in HANA DB with ANF, enable the tcp_sack OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### In high-availability scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter

-Disable the tcp_timestamps parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in high-availability scenarios for HANA DB with ANF in SAP workloads
+Disable the tcp_timestamps parameter as per SAP note: 3024346. The tcp_timestamps configuration certifies HANA DB to run with ANF and improves file system performance in high-availability scenarios for HANA DB with ANF in SAP workloads.

Learn more about [Database Instance - DisableTCPTimestamps (In high-availability scenario for HANA DB with ANF, disable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### For improved file system performance in HANA DB with ANF, enable the tcp_timestamps OS parameter

-Enable the tcp_timestamps parameter as per SAP note: 302436. This configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads
+Enable the tcp_timestamps parameter as per SAP note: 3024346. The tcp_timestamps configuration certifies HANA DB to run with ANF and improves file system performance in HANA DB with ANF in SAP workloads.

Learn more about [Database Instance - EnableTCPTimestamps (For improved file system performance in HANA DB with ANF, enable the tcp_timestamps OS parameter)](https://launchpad.support.sap.com/#/notes/3024346).

### To improve file system performance in HANA DB with ANF, enable auto-tuning TCP receive buffer size

-The parameter net.ipv4.tcp_moderate_rcvbuf enables TCP to perform receive buffer auto-tuning, to automatically size the buffer (no greater than tcp_rmem to match the size required by the path for full throughput. Enable this parameter as per SAP note: 302436 for improved file system performance
+The parameter net.ipv4.tcp_moderate_rcvbuf enables TCP to perform receive buffer auto-tuning, automatically sizing the buffer (no greater than tcp_rmem) to match the size required by the path for full throughput. Enable this parameter as per SAP note: 3024346 for improved file system performance.

Learn more about [Database Instance - EnableAutoTuning (To improve file system performance in HANA DB with ANF, enable auto-tuning TCP receive buffer size)](https://launchpad.support.sap.com/#/notes/3024346).

Learn more about [Database Instance - IPV4LocalPortRange (For improved file syst
### To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries

-Set the parameter sunrpc.tcp_slot_table_entries to 128 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads
+Set the parameter sunrpc.tcp_slot_table_entries to 128 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
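`sunrpc.tcp_slot_table_entries` is a kernel module parameter, so a common way to make the value persistent is a `modprobe` options file; the following is a sketch under that assumption (run as root), and it also covers the related `tcp_max_slot_table_entries` setting recommended later in this section.

```bash
# Persist both sunrpc slot table settings across reboots.
echo "options sunrpc tcp_slot_table_entries=128 tcp_max_slot_table_entries=128" \
  > /etc/modprobe.d/sunrpc.conf

# If the sunrpc module is already loaded, apply the value to the running system as well.
sysctl -w sunrpc.tcp_slot_table_entries=128
```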
-
Learn more about [Database Instance - TCPSlotTableEntries (To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - TCPSlotTableEntries (To improve file system performance in HANA DB with ANF, optimize sunrpc.tcp_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).

-### All disks in LVM for /hana/data volume should be of the same type to ensure high performance in HANA DB
+### All disks in LVM for /hana/data volume must be of the same type to ensure high performance in HANA DB

-If multiple disk types are selected in the /hana/data volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Data volume disks are of the same type and are configured as per recommendation for SAP on Azure
+If multiple disk types are selected in the /hana/data volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Data volume disks are of the same type and are configured as per recommendation for SAP on Azure.

-Learn more about [Database Instance - HanaDataDiskTypeSame (All disks in LVM for /hana/data volume should be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Configuration%20for%20SAP%20/hana/data%20volume).
+Learn more about [Database Instance - HanaDataDiskTypeSame (All disks in LVM for /hana/data volume must be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage).

-### Stripe size for /hana/data should be 256 kb for improved performance of HANA DB in SAP workloads
+### Stripe size for /hana/data must be 256 kb for improved performance of HANA DB in SAP workloads

-If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. Based on experience with recentLinux versions, Azure recommends using stripe size of 256 kb for /hana/data filesystem for better performance of HANA DB
+If you're using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. Based on experience with recent Linux versions, Azure recommends using a stripe size of 256 kb for the /hana/data filesystem for better performance of HANA DB.

-Learn more about [Database Instance - HanaDataStripeSize (Stripe size for /hana/data should be 256 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use).
+Learn more about [Database Instance - HanaDataStripeSize (Stripe size for /hana/data must be 256 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage).

### To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness

-Set the OS parameter vm.swappiness to 10 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads
+Set the OS parameter vm.swappiness to 10 as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads.
-
Learn more about [Database Instance - VmSwappiness (To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - VmSwappiness (To improve file system performance in HANA DB with ANF, optimize the parameter vm.swappiness)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).

### To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter

-Disable the reverse path filter linux OS parameter, net.ipv4.conf.all.rp_filter as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads
+Disable the reverse path filter Linux OS parameter, net.ipv4.conf.all.rp_filter, as per recommendation for improved file system performance in HANA DB with ANF in SAP workloads.

-Learn more about [Database Instance - DisableIPV4Conf (To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings).
+Learn more about [Database Instance - DisableIPV4Conf (To improve file system performance in HANA DB with ANF, disable net.ipv4.conf.all.rp_filter)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse).

-### If using Ultradisk, the IOPS for /hana/data volume should be >=7000 for better HANA DB performance
+### If using Ultradisk, the IOPS for /hana/data volume must be >=7000 for better HANA DB performance

-IOPS of at least 7000 in /hana/data volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/data volume as per this requirement to ensure high performance of the DB
+IOPS of at least 7000 in /hana/data volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/data volume as per this requirement to ensure high performance of the DB.

-Learn more about [Database Instance - HanaDataIOPS (If using Ultradisk, the IOPS for /hana/data volume should be >=7000 for better HANA DB performance)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).
+Learn more about [Database Instance - HanaDataIOPS (If using Ultradisk, the IOPS for /hana/data volume must be >=7000 for better HANA DB performance)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana).

### To improve file system performance in HANA DB with ANF, change parameter tcp_max_slot_table_entries

-Set the OS parameter tcp_max_slot_table_entries to 128 as per SAP note: 302436 for improved file transfer performance in HANA DB with ANF in SAP workloads
+Set the OS parameter tcp_max_slot_table_entries to 128 as per SAP note: 3024346 for improved file transfer performance in HANA DB with ANF in SAP workloads.
Learn more about [Database Instance - OptimizeTCPMaxSlotTableEntries (To improve file system performance in HANA DB with ANF, change parameter tcp_max_slot_table_entries)](/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse#:~:text=Create%20configuration%20file%20/etc/sysctl.d/ms%2Daz.conf%20with%20Microsoft%20for%20Azure%20configuration%20settings). ### Ensure the read performance of /hana/data volume is >=400 MB/sec for better performance in HANA DB -Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes is recommended for SAP workloads on Azure. Select the disk type for /hana/data as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA +Read activity of at least 400 MB/sec for /hana/data for 16 MB and 64 MB I/O sizes is recommended for SAP workloads on Azure. Select the disk type for /hana/data as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA. Learn more about [Database Instance - HanaDataVolumePerformance (Ensure the read performance of /hana/data volume is >=400 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read%20activity%20of%20at%20least%20400%20MB/sec%20for%20/hana/data). -### Read/write performance of /hana/log volume should be >=250 MB/sec for better performance in HANA DB +### Read/write performance of /hana/log volume must be >=250 MB/sec for better performance in HANA DB -Read/Write activity of at least 250 MB/sec for /hana/log for 1 MB I/O size is recommended for SAP workloads on Azure. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA +Read/Write activity of at least 250 MB/sec for /hana/log for 1 MB I/O size is recommended for SAP workloads on Azure. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB and to meet minimum storage requirements for SAP HANA. -Learn more about [Database Instance - HanaLogReadWriteVolume (Read/write performance of /hana/log volume should be >=250 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read/write%20on%20/hana/log%20of%20250%20MB/sec%20with%201%20MB%20I/O%20sizes). +Learn more about [Database Instance - HanaLogReadWriteVolume (Read/write performance of /hana/log volume must be >=250 MB/sec for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=Read/write%20on%20/hana/log%20of%20250%20MB/sec%20with%201%20MB%20I/O%20sizes). -### If using Ultradisk, the IOPS for /hana/log volume should be >=2000 for better performance in HANA DB +### If using Ultradisk, the IOPS for /hana/log volume must be >=2000 for better performance in HANA DB -IOPS of at least 2000 in /hana/log volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB +IOPS of at least 2000 in /hana/log volume is recommended for SAP workloads when using Ultradisk. Select the disk type for /hana/log volume as per this requirement to ensure high performance of the DB. 
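Because Ultra disks let you adjust performance independently of size, meeting the IOPS and throughput floors from these recommendations can be a disk update rather than a resize; the disk and resource group names below are placeholders, while the 2000 IOPS and 250 MB/sec values mirror the /hana/log guidance above.

```bash
# Raise the provisioned performance of an existing Ultra disk used for /hana/log.
az disk update \
  --resource-group MyResourceGroup --name hana-log-ultra-disk \
  --disk-iops-read-write 2000 --disk-mbps-read-write 250
```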
-
Learn more about [Database Instance - HanaLogIOPS (If using Ultradisk, the IOPS for /hana/log volume should be >=2000 for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).
+Learn more about [Database Instance - HanaLogIOPS (If using Ultradisk, the IOPS for /hana/log volume must be >=2000 for better performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#azure-ultra-disk-storage-configuration-for-sap-hana:~:text=1%20x%20P6-,Azure%20Ultra%20disk%20storage%20configuration%20for%20SAP%20HANA,-Another%20Azure%20storage).

-### All disks in LVM for /hana/log volume should be of the same type to ensure high performance in HANA DB
+### All disks in LVM for /hana/log volume must be of the same type to ensure high performance in HANA DB

-If multiple disk types are selected in the /hana/log volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Data volume disks are of the same type and are configured as per recommendation for SAP on Azure
+If multiple disk types are selected in the /hana/log volume, performance of HANA DB in SAP workloads might get restricted. Ensure all HANA Log volume disks are of the same type and are configured as per recommendation for SAP on Azure.

-Learn more about [Database Instance - HanaDiskLogVolumeSameType (All disks in LVM for /hana/log volume should be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=For%20the%20/hana/log%20volume.%20the%20configuration%20would%20look%20like).
+Learn more about [Database Instance - HanaDiskLogVolumeSameType (All disks in LVM for /hana/log volume must be of the same type to ensure high performance in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=For%20the%20/hana/log%20volume.%20the%20configuration%20would%20look%20like).

### Enable Write Accelerator on /hana/log volume with Premium disk for improved write latency in HANA DB

Azure Write Accelerator is a functionality for Azure M-Series VMs. It improves I
Learn more about [Database Instance - WriteAcceleratorEnabled (Enable Write Accelerator on /hana/log volume with Premium disk for improved write latency in HANA DB)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=different%20SAP%20applications.-,Solutions%20with%20premium%20storage%20and%20Azure%20Write%20Accelerator%20for%20Azure%20M%2DSeries%20virtual%20machines,-Azure%20Write%20Accelerator).

-### Stripe size for /hana/log should be 64 kb for improved performance of HANA DB in SAP workloads
+### Stripe size for /hana/log must be 64 kb for improved performance of HANA DB in SAP workloads
+
+If you're using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. To get enough throughput with larger I/O sizes, Azure recommends using a stripe size of 64 kb for the /hana/log filesystem for better performance of HANA DB.
+
+Learn more about [Database Instance - HanaLogStripeSize (Stripe size for /hana/log must be 64 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use).
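Tying the LVM recommendations together, a hedged sketch of striped volume creation might look as follows; the volume group name, disk counts, and sizes are placeholders, while the 64 kb and 256 kb stripe sizes come from the recommendations above.

```bash
# Stripe /hana/log across its Premium disks with a 64 KB stripe size.
lvcreate --stripes 2 --stripesize 64k --size 512G --name lv_hanalog vg_hana

# Stripe /hana/data across its Premium disks with a 256 KB stripe size.
lvcreate --stripes 4 --stripesize 256k --size 2T --name lv_hanadata vg_hana
```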
++++++## Security ++### Update Attestation API Version ++We have identified API calls from an outdated Attestation API for resources under this subscription. We recommend switching to the latest Attestation API versions. You need to update your existing code to use the latest API version. Using the latest API version ensures you receive the latest features and performance improvements. ++Learn more about [Attestation provider - UpgradeAttestationAPI (Update Attestation API Version)](/rest/api/attestation). ++### Update Key Vault SDK Version ++New Key Vault Client Libraries are split to keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes to issues reported by customers and proactively identified through our QA process. If Key Vault is integrated with Azure Storage, Disk, or other Azure services that can use the old Key Vault SDK, and all your current custom applications use .NET SDK 4.0 or above, dismiss the recommendation. ++Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md). ++### Update Key Vault SDK Version ++New Key Vault Client Libraries are split to keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes to issues reported by customers and proactively identified through our QA process. ++> [!IMPORTANT] +> Be aware that you can only remediate the recommendation for custom applications you have access to. Recommendations can be shown due to integration with other Azure services like Storage and Disk encryption, which are in the process of updating to the new version of our SDK. If you use .NET 4.0 in all your applications, dismiss the recommendation. ++Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](../key-vault/general/client-libraries.md). +++++## Storage ++### Use "Put Blob" for blobs smaller than 256 MB ++When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized. ++Learn more about [Storage Account - StorageCallPutBlob (Use "Put Blob" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs). ++### Increase provisioned size of premium file share to avoid throttling of requests ++Your requests for premium file share are throttled because the I/O operations per second (IOPS) or throughput limits for the file share have been reached. To protect your requests from being throttled, increase the size of the premium file share. ++Learn more about [Storage Account - AzureStorageAdvisorAvoidThrottlingPremiumFiles (Increase provisioned size of premium file share to avoid throttling of requests)](). ++### Create statistics on table columns ++We have detected that you're missing table statistics that might be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result, which enables the query optimizer to create a high quality query plan.
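As an illustration, single-column statistics are created with a short T-SQL statement. A minimal sketch using sqlcmd, where the server, database, credentials, and object names are all hypothetical:

```bash
# Create single-column statistics on a dedicated SQL pool table.
# Server, database, credentials, and object names are placeholders.
sqlcmd -S myworkspace.sql.azuresynapse.net -d mydw -U sqladmin -P '<password>' \
  -Q "CREATE STATISTICS stats_CustomerKey ON dbo.FactSales (CustomerKey);"
```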
++Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics). ++### Remove data skew to increase query performance ++We have detected distribution data skew greater than 15%, which can cause costly performance bottlenecks. ++Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew). ++### Update statistics on table columns ++We have detected that you don't have up-to-date table statistics, which might be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result, which enables the query optimizer to create a high quality query plan. ++Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics). ++### Scale up to optimize cache utilization with SQL Data Warehouse ++We have detected that you had a high cache used percentage with a low hit percentage, indicating a high cache eviction rate that can affect the performance of your workload. ++Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache). ++### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse ++We have detected that you had high tempdb utilization that can affect the performance of your workload. ++Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb). ++### Convert tables to replicated tables with SQL Data Warehouse ++We have detected that you might benefit from using replicated tables. Replicated tables avoid costly data movement operations and significantly increase the performance of your workload. ++Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables). ++### Split staged files in the storage account to increase load performance ++We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more files to maximize the parallelism of your load. ++Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit). ++### Increase batch size when loading to maximize load throughput, data compression, and query performance ++We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. Consider using the COPY statement. If you're unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP - a good rule of thumb is a batch size between 100K and 1M rows. ++Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize). ++### Co-locate the storage account within the same region to minimize latency when loading ++We have detected that you're loading from a region that is different from your SQL pool.
Consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data. ++Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation). ++### Upgrade your Storage Client Library to the latest version for better reliability and performance ++The latest version of Storage Client Library/SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage. ++Learn more about [Storage Account - UpdateStorageSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/learnmorestoragecolocation). ++### Upgrade your Storage Client Library to the latest version for better reliability and performance ++The latest version of Storage Client Library/SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Storage. ++Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca). ++### Upgrade to Standard SSD Disks for consistent and improved performance ++Because you're running IaaS virtual machine workloads on Standard HDD managed disks, be aware that a Standard SSD disk option is now available for all Azure VM types. Standard SSD disks are a cost-effective storage option optimized for enterprise workloads that need consistent performance. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which takes three to five minutes. ++Learn more about [Storage Account - StandardSSDForNonPremVM (Upgrade to Standard SSD Disks for consistent and improved performance)](/azure/virtual-machines/windows/disks-types#standard-ssd). ++### Use premium performance block blob storage ++One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs. ++Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob). ++### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance ++We have noticed your Unmanaged HDD Disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which takes three to five minutes. ++Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unmanaged Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
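A minimal Azure CLI sketch of that upgrade path, assuming hypothetical resource names; `az vm convert` is only needed while the disks are still unmanaged, and the VM must be deallocated first:

```bash
# Deallocate the VM, convert unmanaged disks to managed disks (if needed),
# then move the OS disk to Premium SSD. Names are placeholders, and the VM
# size must support premium storage (for example, an "s" size such as Dsv3).
az vm deallocate --resource-group my-rg --name my-vm
az vm convert --resource-group my-rg --name my-vm   # unmanaged -> managed disks
az disk update --resource-group my-rg --name my-vm_OsDisk_1 --sku Premium_LRS
az vm start --resource-group my-rg --name my-vm
```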
++### Distribute data in server group to distribute workload among nodes ++It looks like the data is not distributed in this server group but stays on the coordinator. For full Hyperscale (Citus) benefits, distribute data on worker nodes in the server group. ++Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusDistributeData (Distribute data in server group to distribute workload among nodes)](https://go.microsoft.com/fwlink/?linkid=2135201). ++### Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly ++It looks like the data is not well balanced between worker nodes in this Hyperscale (Citus) server group. In order to use each worker node of the Hyperscale (Citus) server group effectively, rebalance data in the server group. ++Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusRebalanceData (Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly)](https://go.microsoft.com/fwlink/?linkid=2148869). +++++## Virtual desktop infrastructure ++### Improve user experience and connectivity by deploying VMs closer to user's location ++We have determined that your VMs are located in a region that is different from or far from where your users are connecting with Azure Virtual Desktop, which might lead to prolonged connection response times and affect overall user experience. When you create VMs for your host pools, try to use a region closer to the user. Having close proximity ensures continuing satisfaction with the Azure Virtual Desktop service and a better overall quality of experience. ++Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](../virtual-desktop/connection-latency.md). ++### Change the max session limit for your depth first load balanced host pool to improve VM performance ++Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions are directed to the same session host and this might cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, also set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs. To fix this, open your host pool's properties and change the value next to the "Max session limit" setting. ++Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance)](../virtual-desktop/configure-host-pool-load-balancing.md). +++++## Web ++### Move your App Service Plan to PremiumV2 for better performance ++Your app served more than 1000 requests per day for the past 3 days. Your app might benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and doubled memory-to-core ratio when compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation. ++Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2). ++### Check outbound connections from your App Service resource ++Your app has opened too many TCP/IP socket connections.
Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps. ++Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket). + -If you are using LVM or mdadm to build stripe sets across several Azure premium disks, you need to define stripe sizes. To get enough throughput with larger I/O sizes, Azure recommends using stripe size of 64 kb for /hana/log filesystem for better performance of HANA DB -Learn more about [Database Instance - HanaLogStripeSize (Stripe size for /hana/log should be 64 kb for improved performance of HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#:~:text=As%20stripe%20sizes%20the%20recommendation%20is%20to%20use). ## Next steps |
advisor | Advisor Reference Reliability Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md | Azure Advisor helps you ensure and improve the continuity of your business-criti ## AI Services -### You are close to exceeding storage quota of 2GB. Create a Standard search service +### You're close to exceeding storage quota of 2GB. Create a Standard search service You're close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded. Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity). -### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service +### You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded. Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity). -### You are close to exceeding your available storage quota. Add more partitions if you need more storage +### You're close to exceeding your available storage quota. Add more partitions if you need more storage You're close to exceeding your available storage quota. Add extra partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work. Learn more about [HDInsight cluster - clusterOlderThanAYear (Your cluster was cr ### Your Kafka cluster disks are almost full -The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails because of the disk full error. To mitigate, find the retention time for every topic, back up the files that are older and restart the brokers. +The data disks used by Kafka brokers in your HDInsight cluster are almost full. When that happens, the Apache Kafka broker process can't start and fails because of the disk full error. To mitigate, find the retention time for every Kafka Topic, back up the files that are older and restart the brokers. Learn more about [HDInsight cluster - KafkaDiskSpaceFull (Your Kafka Cluster Disks are almost full)](https://aka.ms/kafka-troubleshoot-full-disk). -### Creation of clusters under custom VNet requires more permission +### Creation of clusters under custom virtual network requires more permission -Your clusters with custom VNet were created without VNet joining permission. Ensure that the users who perform create operations have permissions to the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023. +Your clusters with custom virtual network were created without virtual network joining permission. Ensure that the users who perform create operations have permissions to the Microsoft.Network/virtualNetworks/subnets/join action before September 30, 2023. Learn more about [HDInsight cluster - EnforceVNetJoinPermissionCheck (Creation of clusters under custom VNet requires more permission)](https://aka.ms/hdinsightEnforceVnet). Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and re ### Apply critical updates to your HDInsight clusters -The HDInsight service has attempted to apply a critical certificate update on all your running clusters. 
However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources such as load balancer, network interface and public IP address, associated with your clusters. Do this before January 21, 2021 05:00 PM UTC when the HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update might result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters. +The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources associated with your clusters and applying the update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources associated with your clusters. Change your policy assignment before January 21, 2021 05:00 PM UTC when the HDInsight team is performing updates between January 21, 2021 05:00 PM UTC and January 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources in the same resource group and subnet where your cluster is. Failure to apply this update might result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before January 25, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service sends another notification if we failed to apply the update to your clusters. Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md). Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgr ### Enable virtual machine replication to protect your applications from regional outage -Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduce any adverse business impact during the time of an Azure region outage. We highly recommend enabling replication of all the business critical virtual machines from the following list so that in an event of an outage, you can quickly bring up your machines in remote Azure region. +Virtual machines that don't have replication enabled to another region aren't resilient to regional outages. Replicating the machines drastically reduces any adverse business effect during the time of an Azure region outage. We highly recommend enabling replication of all the business critical virtual machines from the following list so that in the event of an outage, you can quickly bring up your machines in a remote Azure region. Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms).
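To inventory the virtual machines that might still need replication, one option is an Azure Resource Graph query. A minimal sketch, assuming the resource-graph CLI extension is available; the projection is illustrative:

```bash
# List VMs with their region so you can decide which business-critical
# machines still need cross-region replication.
az extension add --name resource-graph --only-show-errors
az graph query -q "Resources
  | where type =~ 'microsoft.compute/virtualmachines'
  | project name, resourceGroup, location" --output table
```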
### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost Learn more about [Virtual machine - VMRunningDeprecatedPlanLevelImage (Virtual m Virtual machines in your subscription are running on images that have been scheduled for deprecation. Once the image is deprecated, new VMs can't be created from the deprecated image. Upgrade to newer version of the image to prevent disruption to your workloads. + Learn more about [Virtual machine - VMRunningDeprecatedImage (Virtual machines in your subscription are running on images that have been scheduled for deprecation)](https://aka.ms/DeprecatedImagesFAQ). ### Use Availability zones for better resiliency and availability Availability Zones (AZ) in Azure help protect your applications and data from da Learn more about [Virtual machine - AvailabilityZoneVM (Use Availability zones for better resiliency and availability)](/azure/reliability/availability-zones-overview). +### Use Managed Disks to improve data reliability ++Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure. ++Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore). + ### Access to mandatory URLs missing for your Azure Virtual Desktop environment -In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list, in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you might also search your application event log for event 3702. +In order for a session host to deploy and register to Azure Virtual Desktop properly, you need to add a set of URLs to the allowed list, in case your virtual machine runs in a restricted environment. After visiting the "Learn More" link, you see the minimum list of URLs you need to unblock to have a successful deployment and functional session host. For specific URL(s) missing from allowed list, you might also search your Application event log for event 3702. Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Azure Virtual Desktop environment)](../virtual-desktop/safe-url-list.md). Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade ## Databases -### Replication - Add a primary key to the table that currently does not have one +### Replication - Add a primary key to the table that currently doesn't have one Based on our internal monitoring, we have observed significant replication lag on your replica server. This lag is occurring because the replica server is replaying relay logs on a table that lacks a primary key. To ensure that the replica can synchronize with the primary and keep up with changes, add primary keys to the tables in the primary server. Once the primary keys are added, recreate the replica server. 
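For example, a missing primary key can be added on the primary server with a plain ALTER TABLE statement. A minimal sketch, where the host, schema, table, and key column are hypothetical:

```bash
# Add a primary key to a table on the primary Azure Database for MySQL server.
# Connection details and object names are placeholders.
mysql -h my-mysql-server.mysql.database.azure.com -u myadmin -p \
  -e "ALTER TABLE mydb.orders ADD PRIMARY KEY (order_id);"

# If the table has no suitable natural key, add a surrogate key instead:
mysql -h my-mysql-server.mysql.database.azure.com -u myadmin -p \
  -e "ALTER TABLE mydb.orders ADD COLUMN id BIGINT AUTO_INCREMENT PRIMARY KEY;"
```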
Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerReplicaMissingPKfb41 (Replication - Add a primary key to the table that currently doesn't have one)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table). -### High Availability - Add primary key to the table that currently does not have one +### High Availability - Add primary key to the table that currently doesn't have one Our internal monitoring system has identified significant replication lag on the High Availability standby server. The standby server replaying relay logs on a table that lacks a primary key is the main cause of the lag. To address this issue and adhere to best practices, we recommend you add primary keys to all tables. Once you add the primary keys, proceed to disable and then re-enable High Availability to mitigate the problem. Learn more about [Azure Database for MySQL flexible server - MySqlFlexibleServerHAMissingPKcf38 (High Availability - Add primary key to the table that currently doesn't have one.)](/azure/mysql/how-to-troubleshoot-replication-latency#no-primary-key-or-unique-key-on-a-table). -### Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact +### Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential effect -Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing reservation of memory for fragmentation helps in reducing the cache failures when running under high memory pressure. Memory for fragmentation can be increased via maxfragmentationmemory-reserved setting available in advanced settings blade. +Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing reservation of memory for fragmentation helps in reducing the cache failures when running under high memory pressure. Memory for fragmentation can be increased with the maxfragmentationmemory-reserved setting available in the advanced settings option area. -Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies). +Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability might be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential effect.)](https://aka.ms/redis/recommendations/memory-policies). ### Enable Azure backup for SQL on your virtual machines Learn more about [SQL virtual machine - EnableAzBackupForSQL (Enable Azure backu ### Improve PostgreSQL availability by removing inactive logical replication slots -Our internal telemetry indicates that your PostgreSQL server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
+Our internal system indicates that your PostgreSQL server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server. Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding). ### Improve PostgreSQL availability by removing inactive logical replication slots -Our internal telemetry indicates that your PostgreSQL flexible server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server. +Our internal system indicates that your PostgreSQL flexible server might have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. Inactive logical replication slots can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY take action. Either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server. Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding). ### Configure Consistent indexing mode on your Azure Cosmos DB container -We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which might impact the freshness of query results. We recommend switching to Consistent mode. +We noticed that your Azure Cosmos DB container is configured with the Lazy indexing mode, which might affect the freshness of query results. We recommend switching to Consistent mode. Learn more about [Azure Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos DB container)](/azure/cosmos-db/how-to-manage-indexing-policy). Some or all of your devices are using outdated SDK and we recommend you upgrade Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk). -### Upgrade Edge Device Runtime to a supported version for Iot Hub +### Upgrade Microsoft Edge Device Runtime to a supported version for Iot Hub -Some or all of your Edge devices are using outdated versions and we recommend you upgrade to the latest supported version of the runtime. See the details in the link given. 
+Some or all of your Microsoft Edge devices are using outdated versions and we recommend you upgrade to the latest supported version of the runtime. See the details in the link given. -Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade Edge Device Runtime to a supported version for Iot Hub)](https://aka.ms/IOTEdgeSDKCheck). +Learn more about [IoT hub - UpgradeEdgeSdk (Upgrade Microsoft Edge Device Runtime to a supported version for Iot Hub)](https://aka.ms/IOTEdgeSDKCheck). Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WA ### Extra protection to mitigate Log4j 2 vulnerability (CVE-2021-44228) -To mitigate the impact of Log4j 2 vulnerability, we recommend these steps: +To mitigate the effect of Log4j 2 vulnerability, we recommend these steps: 1) Upgrade Log4j 2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link provided. 2) Take advantage of WAF Core rule sets (CRS) by upgrading to WAF SKU. -Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve). +Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (More protection to mitigate Log4j 2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve). -### Update VNet permission of Application Gateway users +### Update virtual network permission of Application Gateway users To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must include at least Microsoft.Network/virtualNetworks/subnets/join/action permission. All endpoints associated to this proximity profile are in the same region. Users Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb). - ### Move to production gateway SKUs from Basic gateways The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you're using the VPN gateway for production purposes. The production SKUs offer higher number of tunnels, BGP support, active-active, custom IPsec/IKE policy in addition to higher stability and availability. Prevent risk of connectivity failures due to SNAT port exhaustion by using NAT g Learn more about [Virtual network - natGateway (Use NAT gateway for outbound connectivity)](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet). +### Update virtual network permission of Application Gateway users ++To improve security and provide a more consistent experience across Azure, all users must pass a permission check before creating or updating an Application Gateway in a Virtual Network. The users or service principals must include at least Microsoft.Network/virtualNetworks/subnets/join/action permission. ++Learn more about [Application gateway - AppGwLinkedAccessFailureRecmmendation (Update VNet permission of Application Gateway users)](https://aka.ms/agsubnetjoin). ++### Use version-less Key Vault secret identifier to reference the certificates ++We strongly recommend that you use a version-less secret identifier to allow your application gateway resource to automatically retrieve the new certificate version, whenever available. 
Example: https://myvault.vault.azure.net/secrets/mysecret/ ++Learn more about [Application gateway - AppGwAdvisorRecommendationForCertificateUpdate (Use version-less Key Vault secret identifier to reference the certificates)](https://aka.ms/agkvversion). + ### Enable Active-Active gateways for redundancy In active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic is switched over to the other active IPsec tunnel automatically. Learn more about [Central Server Instance - ExpectedVotesHAASCSRH (Set the expec ### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads -The corosync token_retransmits_before_loss_const determines how many token retransmits the system attempts before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup. +The corosync token_retransmits_before_loss_const determines the number of times that tokens can be retransmitted before the system times out in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for ASCS HA setup. Learn more about [Central Server Instance - TokenRestransmitsHAASCSSLE (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). The softdog timer is loaded as a kernel module in Linux OS. This timer triggers Learn more about [Central Server Instance - softdogmoduleloadedHAASCSSLE (Ensure the softdog module is loaded in for Pacemaker in ASCS HA setup in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). -### Ensure that there is one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup -The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the pacemaker configuration for ASCS HA setup. The fence_azure_arm requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal. +### Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for ASCS HA setup +The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in your Pacemaker configuration for ASCS HA setup. The fence_azure_arm requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal. -Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (Ensure that there's one instance of fence_azure_arm in Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +Learn more about [Central Server Instance - FenceAzureArmHAASCSSLE (Ensure that there's one instance of a fence_azure_arm in your Pacemaker configuration for ASCS HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). ### Enable HA ports in the Azure Load Balancer for ASCS HA setup in SAP workloads Learn more about [Central Server Instance - ASCSHASetIdleTimeOutLB (Set the Idle ### Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads -Disable TCP timestamps on VMs placed behind Azure Load Balancer.
Enabled TCP timestamps cause the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down. +Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down. Learn more about [Central Server Instance - ASCSLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in ASCS HA setup in SAP workloads)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-general-update-november-2021/ba-p/2807619#network-settings-and-tuning-for-sap-on-azure). Learn more about [Database Instance - PreferSiteTakeoverHDB (Set parameter PREFE ### Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads -The corosync token_retransmits_before_loss_const determines how many token retransmits are attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup. +The corosync token_retransmits_before_loss_const determines the number of token retransmits that are attempted before timeout in HA clusters. Set the totem.token_retransmits_before_loss_const to 10 as per recommendation for HANA DB HA setup. Learn more about [Database Instance - TokenRetransmitsHDB (Set 'token_retransmits_before_loss_const' to 10 in Pacemaker cluster in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). -### Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads --Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads. --Learn more about [Database Instance - ExpectedVotesSuseHDB (Set the expected votes parameter to 2 in the cluster configuration in HA enabled SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). - ### Set the two_node parameter to 1 in the cluster configuration in HA enabled SAP workloads In a two node HA cluster, set the quorum parameter 'two_node' to 1 as per recommendation for SAP on Azure. The softdog timer is loaded as a kernel module in Linux OS. This timer triggers Learn more about [Database Instance - SoftdogConfigSuseHDB (Create the softdog config file in Pacemaker configuration for HA enabled HANA DB in SAP workloads)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). -### Ensure that there is one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup +### Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for HANA DB HA setup -The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure that there's one instance of fence_azure_arm in the pacemaker configuration for HANA DB HA setup. The fence_azure_arm instance requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal. +The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in your Pacemaker configuration for HANA DB HA setup. The fence_azure_arm instance requirement is applicable if you're using Azure fence agent for fencing with either managed identity or service principal.
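A quick way to check the single-instance requirement on a SLES cluster is to count the configured fence_azure_arm primitives. A minimal sketch, assuming the crmsh tooling that SUSE Pacemaker deployments typically use:

```bash
# Count fence_azure_arm STONITH primitives in the cluster configuration.
# Exactly one instance is expected for the HANA DB HA setup.
count=$(sudo crm configure show | grep -c "stonith:fence_azure_arm")
if [ "$count" -ne 1 ]; then
  echo "Expected exactly 1 fence_azure_arm primitive, found $count" >&2
fi
```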
-Learn more about [Database Instance - FenceAzureArmSuseHDB (Ensure that there's one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). +Learn more about [Database Instance - FenceAzureArmSuseHDB (Ensure there's one instance of a fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). ### Ensure the softdog module is loaded in for Pacemaker in HA enabled HANA DB in SAP workloads Learn more about [Database Instance - DBHAEnableLBPorts (Enable HA ports in the ### Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads -Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets being dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down. +Disable TCP timestamps on VMs placed behind Azure Load Balancer. Enabled TCP timestamps cause the health probes to fail due to TCP packets dropped by the VM's guest OS TCP stack. Dropped TCP packets cause the load balancer to mark the endpoint as down. Learn more about [Database Instance - DBLBHADisableTCP (Disable TCP timestamps on VMs placed behind Azure Load Balancer in HANA DB HA setup in SAP workloads)](/azure/load-balancer/load-balancer-custom-probe-overview). +### There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup ++The fence_azure_arm is an I/O fencing agent for Azure Resource Manager. Ensure there's one instance of a fence_azure_arm in the Pacemaker configuration for HANA DB HA setup. The fence_azure_arm is needed if you're using Azure fence agent for fencing with either managed identity or service principal. ++Learn more about [Database Instance - FenceAzureArmSuseHDB (There should be one instance of fence_azure_arm in Pacemaker configuration for HANA DB HA setup)](/azure/virtual-machines/workloads/sap/sap-hana-high-availability). + ## Storage Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete ### Enable Cross Region Restore for your recovery Services Vault -Enabling cross region restore for your geo-redundant vaults +Enabling cross region restore for your geo-redundant vaults. Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your Recovery Services Vault)](../backup/backup-azure-arm-restore-vms.md#cross-region-restore). ### Enable Backups on your virtual machines -Enable backups for your virtual machines and secure your data +Enable backups for your virtual machines and secure your data. Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your virtual machines)](../backup/backup-overview.md). ### Configure blob backup -Configure blob backup +Configure blob backup. Learn more about [Storage Account - ConfigureBlobBackup (Configure blob backup)](/azure/backup/blob-backup-overview). ### Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data -Keep your information and applications safe with robust, one click backup from Azure. Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares. +Keep your information and applications safe with robust, one select backup from Azure. 
Activate Azure Backup to get cost-effective protection for a wide range of workloads including VMs, SQL databases, applications, and file shares. Learn more about [Subscription - AzureBackupService (Turn on Azure Backup to get simple, reliable, and cost-effective protection for your data)](/azure/backup/). As previously announced, Azure Data Lake Storage Gen1 will be retired on Februar Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/). +### You have ADLS Gen1 Accounts Which Need to be Migrated to ADLS Gen2 ++As previously announced, Azure Data Lake Storage Gen1 will be retired on February 29, 2024. We highly recommend that you migrate your data lake to Azure Data Lake Storage Gen2, which offers advanced capabilities designed for big data analytics. Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage. ++Learn more about [Data lake store account - ADLSGen1_Deprecation (You have ADLS Gen1 Accounts Which Needs to be Migrated to ADLS Gen2)](https://azure.microsoft.com/updates/action-required-switch-to-azure-data-lake-storage-gen2-by-29-february-2024/). + ### Enable Soft Delete to protect your blob data -After enabling the soft delete option, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires. +After you enable the Soft Delete option, deleted data transitions to a "soft" deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires. Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete). Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to imp ### Implement disaster recovery strategies for your Azure NetApp Files Resources -To avoid data or functionality loss in the event of a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes +To avoid data or functionality loss if there's a regional or zonal disaster, implement common disaster recovery techniques such as cross region replication or cross zone replication for your Azure NetApp Files volumes. Learn more about [Volume - ANFCRRCZRRecommendation (Implement disaster recovery strategies for your Azure NetApp Files Resources)](https://aka.ms/anfcrr). Learn more about [Volume - SAPTimeoutsANF (Review SAP configuration for timeout ### Consider scaling out your App Service Plan to avoid CPU exhaustion -Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps, to solve this you could scale out your app. +Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps. To solve this problem, you could scale out your app. Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
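Scaling out can also be done from the Azure CLI. A minimal sketch, with a hypothetical resource group, plan name, and instance count:

```bash
# Scale the App Service plan out to three instances.
# Resource group, plan name, and worker count are placeholders.
az appservice plan update \
  --resource-group my-rg \
  --name my-plan \
  --number-of-workers 3
```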
Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service ### Use deployment slots for your App Service resource -You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app. +You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce the effect of deployments on your production web app. Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging). Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the back ### Move your App Service resource to Standard or higher and use deployment slots -You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app. +You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce the effect of deployments on your production web app. Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging). Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scali ### Application code needs fixing when the worker process crashes due to Unhandled Exception -We identified the following thread that resulted in an unhandled exception for your App and the application code must be fixed to prevent impact to application availability. A crash happens when an exception in your code terminates the process. +We identified the following thread that resulted in an unhandled exception for your App and the application code must be fixed to prevent affecting application availability. A crash happens when an exception in your code terminates the process. Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code must be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html). ### Consider changing your App Service configuration to 64-bit -We identified your application is running in 32-bit and the memory is reaching the 2GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly. +We identified your application is running in 32-bit and the memory is reaching the 2-GB limit. Consider switching to 64-bit processes so you can take advantage of the extra memory available in your Web Worker role. This action triggers a web app restart, so schedule accordingly. Learn more about [App service 32-bit limitations](/troubleshoot/azure/app-service/web-apps-performance-faqs#i-see-the-message-worker-process-requested-recycle-due-to-percent-memory-limit-how-do-i-address-this-issue).
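The switch to 64-bit worker processes can likewise be made from the Azure CLI. A minimal sketch with hypothetical names; remember that this restarts the app:

```bash
# Run the app's worker process as 64-bit instead of 32-bit.
# Resource group and app name are placeholders; this triggers a restart.
az webapp config set \
  --resource-group my-rg \
  --name my-webapp \
  --use-32bit-worker-process false
```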
+The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100 GB. Consider upgrading these apps to Standard SKU to avoid throttling. Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.)](https://azure.microsoft.com/pricing/details/app-service/static/). - ## Next steps Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview) |
ai-services | Video Retrieval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/video-retrieval.md | Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and e ## Input requirements -### Supported file formats +### Supported formats + | File format | Description | | -- | -- | | `asf` | ASF (Advanced / Active Streaming Format) |+| `avi` | AVI (Audio Video Interleaved) | | `flv` | FLV (Flash Video) | | `matroskamm`, `webm` | Matroska / WebM |-| `mov`, `mp4`, `m4a`, `3gp`, `3g2`, `mj2` | QuickTime / MOV | -| `mpegts` | MPEG-TS (MPEG-2 Transport Stream) | -| `rawvideo` | raw video | -| `rm` | RealMedia | -| `rtsp` | RTSP input | +| `mov`,`mp4`,`m4a`,`3gp`,`3g2`,`mj2` | QuickTime / MOV | + +### Supported video codecs -### Supported codecs | Codec | Format | | -- | -- | | `h264` | H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 |-| `rawvideo` | raw video | -| `h265` | HEVC +| `h265` | H.265/HEVC | | `libvpx-vp9` | libvpx VP9 (codec vp9) |+| `mpeg4` | MPEG-4 part 2 | + +### Supported audio codecs ++| Codec | Format | +| -- | -- | +| `aac` | AAC (Advanced Audio Coding) | +| `mp3` | MP3 (MPEG audio layer 3) | +| `pcm` | PCM (uncompressed) | +| `vorbis` | Vorbis | +| `wmav2` | Windows Media Audio 2 | ## Call the Video Retrieval APIs The Spatial Analysis Video Retrieval APIs allows a user to add metadata to video ### Step 1: Create an Index -To begin, you need to create an index to store and organize the video files and their metadata. The example below demonstrates how to create an index named "my-video-index." +To begin, you need to create an index to store and organize the video files and their metadata. The example below demonstrates how to create an index named "my-video-index" using the **[Create Index](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc4779b)** API. ```bash curl.exe -v -X PUT "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii " Connection: close ### Step 2: Add video files to the index -Next, you can add video files to the index with their associated metadata. The example below demonstrates how to add two video files to the index using SAS URLs to provide access. +Next, you can add video files to the index with their associated metadata. The example below demonstrates how to add two video files to the index using SAS URLs with the **[Create Ingestion](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc4779f)** API. ```bash Connection: close ### Step 3: Wait for ingestion to complete -After you add video files to the index, the ingestion process starts. It might take some time depending on the size and number of files. To ensure the ingestion is complete before performing searches, you can use the **Get Ingestion** call to check the status. Wait for this call to return `"state" = "Completed"` before proceeding to the next step. +After you add video files to the index, the ingestion process starts. It might take some time depending on the size and number of files. 
To ensure the ingestion is complete before performing searches, you can use the **[Get Ingestion](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc477a0)** API to check the status. Wait for this call to return `"state" = "Completed"` before proceeding to the next step. ```bash curl.exe -v -X GET "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions?api-version=2023-05-01-preview&$top=20" -H "ocp-apim-subscription-key: <YOUR_SUBSCRIPTION_KEY>" After you add video files to the index, you can search for specific videos using #### Search with "vision" feature -To perform a search using the "vision" feature, specify the query text and any desired filters. +To perform a search using the "vision" feature, use the [Search By Text](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc477a2) API with the `vision` filter, specifying the query text and any other desired filters. ```bash curl.exe -v -X POST "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index:queryByText?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii " Connection: close #### Search with "speech" feature -To perform a search using the "speech" feature, provide the query text and any desired filters. +To perform a search using the "speech" feature, use the **[Search By Text](https://eastus.dev.cognitive.microsoft.com/docs/services/ingestion-api-private-preview-2023-05-01-preview/operations/645db36646346106fcc477a2)** API with the `speech` filter, providing the query text and any other desired filters. ```bash curl.exe -v -X POST "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index:queryByText?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii " |
ai-services | Fine Tuning Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/fine-tuning-considerations.md | + + Title: Azure OpenAI Service fine-tuning considerations +description: Learn more about what you should take into consideration before fine-tuning with Azure OpenAI Service ++++ Last updated : 10/23/2023+++recommendations: false ++++# When to use Azure OpenAI fine-tuning ++When deciding whether or not fine-tuning is the right solution to explore for a given use case, there are some key terms that it's helpful to be familiar with: ++- [Prompt Engineering](/azure/ai-services/openai/concepts/prompt-engineering) is a technique that involves designing prompts for natural language processing models. This process improves accuracy and relevancy in responses, optimizing the performance of the model. +- [Retrieval Augmented Generation (RAG)](/azure/machine-learning/concept-retrieval-augmented-generation?view=azureml-api-2&preserve-view=true) improves Large Language Model (LLM) performance by retrieving data from external sources and incorporating it into a prompt. RAG allows businesses to achieve customized solutions while maintaining data relevance and optimizing costs. +- [Fine-tuning](/azure/ai-services/openai/how-to/fine-tuning?pivots=programming-language-studio) retrains an existing Large Language Model using example data, resulting in a new "custom" Large Language Model that has been optimized using the provided examples. ++## What is fine-tuning with Azure OpenAI? ++When we talk about fine-tuning, we really mean *supervised fine-tuning*, not continuous pre-training or Reinforcement Learning through Human Feedback (RLHF). Supervised fine-tuning refers to the process of retraining pre-trained models on specific datasets, typically to improve model performance on specific tasks or introduce information that wasn't well represented when the base model was originally trained. ++Fine-tuning is an advanced technique that requires expertise to use appropriately. The questions below will help you evaluate whether you're ready for fine-tuning, and how well you've thought through the process. You can use these to guide your next steps or identify other approaches that might be more appropriate. ++## Why do you want to fine-tune a model? ++- You should be able to clearly articulate a specific use case for fine-tuning and identify the [model](models.md#fine-tuning-models-preview) you hope to fine-tune. +- Good use cases for fine-tuning include steering the model to output content in a specific and customized style, tone, or format, or scenarios where the information needed to steer the model is too long or complex to fit into the prompt window. ++**Common signs you might not be ready for fine-tuning yet:** ++- No clear use case for fine-tuning, or an inability to articulate much more than "I want to make a model better". +- If you identify cost as your primary motivator, proceed with caution. Fine-tuning might reduce costs for certain use cases by shortening prompts or allowing you to use a smaller model, but there's a higher upfront cost to training, and you have to pay for hosting your own custom model. Refer to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for more information on Azure OpenAI fine-tuning costs.
+- If you want to add out-of-domain knowledge to the model, you should start with retrieval augmented generation (RAG) with features like Azure OpenAI's [on your data](./use-your-data.md) or [embeddings](../tutorials/embeddings.md). Often, this is a cheaper, more adaptable, and potentially more effective option depending on the use case and data. ++## What have you tried so far? ++Fine-tuning is an advanced capability, not the starting point for your generative AI journey. You should already be familiar with the basics of using Large Language Models (LLMs). You should start by evaluating the performance of a base model with prompt engineering and/or Retrieval Augmented Generation (RAG) to get a baseline for performance. ++Having a baseline for performance without fine-tuning is essential for knowing whether or not fine-tuning has improved model performance. Fine-tuning with bad data makes the base model worse, but without a baseline, it's hard to detect regressions. ++**If you're ready for fine-tuning, you:** ++- Should be able to demonstrate evidence and knowledge of prompt engineering and RAG-based approaches. +- Can share specific experiences and challenges with techniques other than fine-tuning that you already tried for your use case. +- Have quantitative assessments of baseline performance, whenever possible. ++**Common signs you might not be ready for fine-tuning yet:** ++- Starting with fine-tuning without having tested any other techniques. +- Insufficient knowledge or understanding of how fine-tuning applies specifically to Large Language Models (LLMs). +- No benchmark measurements to assess fine-tuning against. ++## What isn't working with alternate approaches? ++Understanding where prompt engineering falls short should provide guidance for your fine-tuning. Is the base model failing on edge cases or exceptions? Is the base model not consistently providing output in the right format, and you can't fit enough examples in the context window to fix it? ++Examples of failure with the base model and prompt engineering will help you identify the data you need to collect for fine-tuning, and how you should be evaluating your fine-tuned model. ++Here's an example: A customer wanted to use GPT-3.5-Turbo to turn natural language questions into queries in a specific, non-standard query language. They provided guidance in the prompt ("Always return GQL") and used RAG to retrieve the database schema. However, the syntax wasn't always correct and often failed for edge cases. They collected thousands of examples of natural language questions and the equivalent queries for their database, including cases where the model had failed before, and used that data to fine-tune the model. Combining their new fine-tuned model with their engineered prompt and retrieval brought the accuracy of the model outputs up to acceptable standards for use. ++**If you're ready for fine-tuning, you:** ++- Have clear examples of how you approached the challenges in alternate approaches and what was tested as possible resolutions to improve performance. +- Have identified shortcomings using a base model, such as inconsistent performance on edge cases, an inability to fit enough few-shot prompts in the context window to steer the model, or high latency. ++**Common signs you might not be ready for fine-tuning yet:** ++- Insufficient knowledge from the model or data source. +- Inability to find the right data to serve the model.
++## What data are you going to use for fine-tuning? ++Even with a great use case, fine-tuning is only as good as the quality of the data that you're able to provide. You need to be willing to invest the time and effort to make fine-tuning work. Different models require different data volumes, but you often need to be able to provide fairly large quantities of high-quality, curated data. ++Another important point: even with high-quality data, if your data isn't in the necessary format for fine-tuning, you need to commit engineering resources to format it properly. ++| Data | Babbage-002 & Davinci-002 | GPT-3.5-Turbo | +|||| +| Volume | Thousands of Examples | Thousands of Examples | +| Format | Prompt/Completion | Conversational Chat | ++**If you're ready for fine-tuning, you:** ++- Have identified a dataset for fine-tuning. +- Have the dataset in the appropriate format for training. +- Have employed some level of curation to ensure dataset quality. ++**Common signs you might not be ready for fine-tuning yet:** ++- Dataset hasn't been identified yet. +- Dataset format doesn't match the model you wish to fine-tune. ++## How will you measure the quality of your fine-tuned model? ++There isn't a single right answer to this question, but you should have clearly defined goals for what success with fine-tuning looks like. Ideally, this shouldn't just be qualitative but should include quantitative measures of success, like utilizing a holdout set of data for validation, as well as user acceptance testing or A/B testing the fine-tuned model against a base model. ++## Next steps ++- Watch the [Azure AI Show episode: "To fine-tune or not to fine-tune, that is the question"](https://www.youtube.com/watch?v=0Jo-z-MFxJs) +- Learn more about [Azure OpenAI fine-tuning](../how-to/fine-tuning.md) +- Explore our [fine-tuning tutorial](../tutorials/fine-tune.md) |
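To make the format row in the table above concrete, here is an illustrative sketch of one training example per format. The field values are invented, and the exact schema accepted by your target model may differ; validate against the fine-tuning documentation before training.

```bash
# Illustrative training examples only; the content below is invented.

# Prompt/completion format (babbage-002 and davinci-002):
cat > prompt-completion.jsonl <<'EOF'
{"prompt": "What is the capital of France?", "completion": "Paris."}
EOF

# Conversational chat format (GPT-3.5-Turbo):
cat > chat.jsonl <<'EOF'
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is the capital of France?"}, {"role": "assistant", "content": "Paris."}]}
EOF
```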
ai-services | How To Custom Speech Test And Train | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md | Audio files can have silence at the beginning and end of the recording. If possi Custom Speech projects require audio files with these properties: +> [!IMPORTANT] +> These are requirements for Audio + human-labeled transcript training and testing. They differ from the ones for Audio only training and testing. If you want to use Audio only training and testing, [see this section](#audio-data-for-training-or-testing). + | Property | Value | |--|-| | File format | RIFF (WAV) | Audio data is optimal for testing the accuracy of Microsoft's baseline speech to Custom Speech projects require audio files with these properties: +> [!IMPORTANT] +> These are requirements for Audio only training and testing. They differ from the ones for Audio + human-labeled transcript training and testing. If you want to use Audio + human-labeled transcript training and testing, [see this section](#audio--human-labeled-transcript-data-for-training-or-testing). + | Property | Value | |--|--| | File format | RIFF (WAV) | |
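If your recordings aren't already RIFF (WAV) files, a conversion step can bring them into line before upload. The 16 kHz sample rate, mono channel, and 16-bit PCM values below are assumptions; verify them against the property tables above for your training or testing scenario.

```bash
# Hypothetical conversion to RIFF (WAV) with ffmpeg. Sample rate, channel
# count, and bit depth are assumed values; check the property tables for
# the exact requirements of your scenario.
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```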
ai-services | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md | Title: Regions - Speech service description: A list of available regions and endpoints for the Speech service, including speech to text, text to speech, and speech translation.- Previously updated : 09/16/2022 Last updated : 10/27/2023 -+ # Speech service supported regions The following regions are supported for Speech service features such as speech t | Europe | France Central | `francecentral` | | Europe | Germany West Central | `germanywestcentral` | | Europe | Norway East | `norwayeast` |+| Europe | Sweden Central | `swedencentral` | | Europe | Switzerland North | `switzerlandnorth` <sup>6</sup>| | Europe | Switzerland West | `switzerlandwest` | | Europe | UK South | `uksouth` <sup>1,2,3,4,7</sup>| |
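As a sketch, you could target the newly listed region when creating a Speech resource; the resource and group names here are placeholders.

```bash
# Hypothetical names; creates a Speech resource in Sweden Central.
az cognitiveservices account create \
  --name my-speech-resource \
  --resource-group my-resource-group \
  --kind SpeechServices \
  --sku S0 \
  --location swedencentral
```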
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Net In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Extra nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space. -A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pod, which provides connectivity performance between pods on par with VMs in a VNet. +A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods, which provides connectivity performance between pods on par with VMs in a VNet. Workloads running within the pods are not even aware that network address manipulation is happening. :::image type="content" source="media/azure-cni-Overlay/azure-cni-overlay.png" alt-text="A diagram showing two nodes with three pods each running in an Overlay network. Pod traffic to endpoints outside the cluster is routed via NAT."::: Communication with endpoints outside the cluster, such as on-premises and peered You can provide outbound (egress) connectivity to the internet for Overlay pods using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting). -You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). +You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You can't configure ingress connectivity using Azure Application Gateway. For details, see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay). ## Differences between Kubenet and Azure CNI Overlay |
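As a minimal sketch (cluster and group names are placeholders), an Overlay cluster with a private pod CIDR can be created like this:

```bash
# Create an AKS cluster that uses Azure CNI Overlay; the pod CIDR is a
# private range reused across the cluster, as described above.
az aks create \
  --resource-group myResourceGroup \
  --name myOverlayCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```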
aks | Cluster Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md | Last updated 09/26/2023 # Automatically scale a cluster to meet application demands on Azure Kubernetes Service (AKS) -To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed. +To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed. This article shows you how to enable and manage the cluster autoscaler in an AKS cluster, which is based on the open source [Kubernetes][kubernetes-cluster-autoscaler] version. To adjust to changing application demands, such as between workdays and evenings * The **[Horizontal Pod Autoscaler][horizontal-pod-autoscaler]** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand. * **[Vertical Pod Autoscaler][vertical-pod-autoscaler]** (preview) automatically sets resource requests and limits on containers per workload based on past usage to ensure pods are scheduled onto nodes that have the required CPU and memory resources. The Horizontal Pod Autoscaler scales the number of pod replicas as needed, and the cluster autoscaler scales the number of nodes in a node pool as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity after a period of time. Any pods on a node removed by the cluster autoscaler are safely scheduled elsewhere in the cluster. The cluster autoscaler and Horizontal Pod Autoscaler can work together and are o > [!NOTE] > Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler-on-a-cluster). -With cluster autoscaler enabled, when the node pool size is lower than the minimum or greater than the maximum it applies the scaling rules. Next, the autoscaler waits to take effect until a new node is needed in the node pool or until a node may be safely deleted from the current node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work) +With the cluster autoscaler enabled, scaling rules are applied when the node pool size is lower than the minimum or greater than the maximum. The autoscaler then waits until a new node is needed in the node pool or until a node might be safely deleted from the current node pool.
For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work) -The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations: +The cluster autoscaler might be unable to scale down if pods can't move, such as in the following situations: * A directly created pod not backed by a controller object, such as a deployment or replica set. * A pod disruption budget (PDB) is too restrictive and doesn't allow the number of pods to fall below a certain threshold. The cluster autoscaler uses startup parameters for things like time intervals be > [!IMPORTANT] > The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings. For more information, see [Can I modify the AKS resources in the node resource group?][aks-faq-node-resource-group] +#### [Azure CLI](#tab/azure-cli) + * Update an existing cluster using the [`az aks update`][az-aks-update] command and enable and configure the cluster autoscaler on the node pool using the `--enable-cluster-autoscaler` parameter and specifying a node `--min-count` and `--max-count`. The following example command updates an existing AKS cluster to enable the cluster autoscaler on the node pool for the cluster and sets a minimum of one and maximum of three nodes: ```azurecli-interactive The cluster autoscaler uses startup parameters for things like time intervals be It takes a few minutes to update the cluster and configure the cluster autoscaler settings. +#### [Portal](#tab/azure-portal) ++1. To enable cluster autoscaler on your existing cluster's node pools, navigate to *Node pools* from your cluster's overview page in the Azure portal. Select the *scale method* for the node pool you'd like to adjust scaling settings for. ++ :::image type="content" source="./media/cluster-autoscaler/main-blade-column-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools. The column for 'Scale method' is highlighted." lightbox="./media/cluster-autoscaler/main-blade-column.png"::: ++1. From here, you can enable or disable autoscaling, adjust minimum and maximum node count, and learn more about your node pool's size, capacity, and usage. Select *Apply* to save your changes. ++ :::image type="content" source="./media/cluster-autoscaler/menu-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools is shown with the 'Scale node pool' menu expanded. The 'Apply' button is highlighted." lightbox="./media/cluster-autoscaler/menu.png"::: +++ ### Disable the cluster autoscaler on a cluster * Disable the cluster autoscaler using the [`az aks update`][az-aks-update-preview] command and the `--disable-cluster-autoscaler` parameter. The cluster autoscaler uses startup parameters for things like time intervals be Nodes aren't removed when the cluster autoscaler is disabled. > [!NOTE]-> You can manually scale your cluster after disabling the cluster autoscaler using the [`az aks scale`][az-aks-scale] command. If you use the horizontal pod autoscaler, that feature continues to run with the cluster autoscaler disabled, but pods may end up unable to be scheduled if all node resources are in use.
+> You can manually scale your cluster after disabling the cluster autoscaler using the [`az aks scale`][az-aks-scale] command. If you use the horizontal pod autoscaler, that feature continues to run with the cluster autoscaler disabled, but pods might end up unable to be scheduled if all node resources are in use. ### Re-enable a disabled cluster autoscaler Monitor the performance of your applications and services, and adjust the cluste ## Use the cluster autoscaler profile -You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale down event happens after nodes are under-utilized after 10 minutes. If you have workloads that run every 15 minutes, you may want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings you can update: +You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale-down event happens after nodes have been under-utilized for 10 minutes. If you have workloads that run every 15 minutes, you might want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings you can update: * Example profile update that scales after 15 minutes and changes after 10 minutes of idle use. You can also configure more granular details of the cluster autoscaler by changi | scale-down-delay-after-failure | How long after scale down failure that scale down evaluation resumes | 3 minutes | | scale-down-unneeded-time | How long a node should be unneeded before it's eligible for scale down | 10 minutes | | scale-down-unready-time | How long an unready node should be unneeded before it's eligible for scale down | 20 minutes |+| ignore-daemonsets-utilization (Preview) | Whether DaemonSet pods will be ignored when calculating resource utilization for scaling down | false | +| daemonset-eviction-for-empty-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from empty nodes | false | +| daemonset-eviction-for-occupied-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from non-empty nodes | true | | scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, in which a node can be considered for scale down | 0.5 | | max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds | | balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false | | expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up.
Possible values: `most-pods`, `random`, `least-waste`, `priority` | random | | skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | true | You can also configure more granular details of the cluster autoscaler by changi > > * The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you set the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile. > * The cluster autoscaler profile requires Azure CLI version *2.11.1* or later. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].+> * To access preview features, use the aks-preview extension version 0.5.126 or later. + ### Set the cluster autoscaler profile on a new cluster You can also configure more granular details of the cluster autoscaler by changi You can retrieve logs and status updates from the cluster autoscaler to help diagnose and debug autoscaler events. AKS manages the cluster autoscaler on your behalf and runs it in the managed control plane. You can enable control plane logs to view the logs and operations from the cluster autoscaler. +### [Azure CLI](#tab/azure-cli) + Use the following steps to configure logs to be pushed from the cluster autoscaler into Log Analytics: 1. Set up a rule for resource logs to push cluster autoscaler logs to Log Analytics using the [instructions here][aks-view-master-logs]. Make sure you check the box for `cluster-autoscaler` when selecting options for **Logs**. Use the following steps to configure logs to be pushed from the cluster autoscal As long as there are logs to retrieve, you should see output similar to the following: - :::image type="content" source="media/autoscaler/autoscaler-logs.png" alt-text="Screenshot of Log Analytics logs."::: + :::image type="content" source="media/cluster-autoscaler/autoscaler-logs.png" alt-text="Screenshot of Log Analytics logs."::: The cluster autoscaler also writes out the health status to a `configmap` named `cluster-autoscaler-status`. You can retrieve these logs using the following `kubectl` command: Use the following steps to configure logs to be pushed from the cluster autoscal kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml ``` +### [Portal](#tab/azure-portal) ++1. Navigate to *Node pools* from your cluster's overview page in the Azure portal. Select any of the tiles for autoscale events, autoscale warnings, or scale-ups not triggered to get more details. ++ :::image type="content" source="./media/cluster-autoscaler/main-blade-tiles-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools. The section displaying autoscaler events, warning, and scale-ups not triggered is highlighted." lightbox="./media/cluster-autoscaler/main-blade-tiles.png"::: ++1. You'll see a list of Kubernetes events filtered to `source: cluster-autoscaler` that have occurred within the last hour. With this information, you'll be able to troubleshoot and diagnose any issues that might arise while scaling your nodes. ++ :::image type="content" source="./media/cluster-autoscaler/events-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's events. The filter for source is highlighted, showing 'source: cluster-autoscaler'."
lightbox="./media/cluster-autoscaler/events.png"::: +++ To learn more about the autoscaler logs, see the [Kubernetes/autoscaler GitHub project FAQ][kubernetes-faq]. ## Use the cluster autoscaler with node pools |
aks | Use Oidc Issuer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md | Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS) description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS) Previously updated : 07/26/2023 Last updated : 10/27/2023 # Create an OpenID Connect provider on Azure Kubernetes Service (AKS) az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup ``` > [!IMPORTANT]-> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid. +> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice and restart the pods using projected service account tokens. Then key2 and key3 are valid, and key1 is invalid. ## Check the OIDC keys The output should resemble the following: https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/ ``` -By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key. +By default, the Issuer is set to use the base URL `https://{region}.oic.prod-aks.azure.com/{uuid}`, where the value for `{region}` matches the location the AKS cluster is deployed in. The value `{uuid}` represents the OIDC key, which is an immutable, randomly generated GUID for each cluster. ### Get the discovery document |
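A short sketch of reading the issuer URL and fetching the discovery document it serves (cluster and group names are placeholders):

```bash
# Read the OIDC issuer URL from the cluster, then fetch its discovery
# document. The issuer URL ends with a trailing slash, so the well-known
# path is appended directly.
ISSUER_URL=$(az aks show --name myAKSCluster --resource-group myResourceGroup \
  --query "oidcIssuerProfile.issuerUrl" --output tsv)
curl -s "${ISSUER_URL}.well-known/openid-configuration"
```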
aks | Windows Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-best-practices.md | + + Title: Best practices for Windows containers on Azure Kubernetes Service (AKS) +description: Learn about best practices for running Windows containers in Azure Kubernetes Service (AKS). +++ Last updated : 10/27/2023+++# Best practices for Windows containers on Azure Kubernetes Service (AKS) ++In AKS, you can create node pools that run Linux or Windows Server as the operating system (OS) on the nodes. Windows Server nodes can run native Windows container applications, such as .NET Framework. The Linux OS and Windows OS have different container support and configuration considerations. For more information, see [Windows container considerations in Kubernetes][windows-vs-linux]. ++This article outlines best practices for running Windows containers on AKS. ++## Create an AKS cluster with Linux and Windows node pools ++When you create a new AKS cluster, the Azure platform creates a Linux node pool by default. This node pool contains system services needed for the cluster to function. Azure also creates and manages a control plane abstracted from the user, which means you aren't exposed to the underlying OS of the nodes hosting the main control plane components. We recommend that you run at least *two nodes* on the default Linux node pool to ensure the reliability and performance of your cluster. You can't delete the default Linux node pool unless you delete the entire cluster. ++There are some cases where you should consider deploying a Linux node pool when planning to run Windows-based workloads on your AKS cluster, such as: ++* If you want to run Linux and Windows workloads, you can deploy a Linux node pool and a Windows node pool in the same cluster. +* If you want to deploy infrastructure-related components based on Linux, such as NGINX, you need a Linux node pool alongside your Windows node pool. You can use control plane nodes for development and testing scenarios. For production workloads, we recommend that you deploy separate Linux node pools to ensure reliability and performance. ++## Modernize existing applications with Windows on AKS ++You might want to containerize existing applications and run them using Windows on AKS. Before starting the containerization process, it's important to understand the application architecture and dependencies. For more information, see [Containerize existing applications using Windows containers](/virtualization/windowscontainers/quick-start/lift-shift-to-containers). ++## Windows OS version ++> **Best practice guidance** +> +> Windows Server 2022 provides the latest security and performance improvements and is the recommended OS for Windows node pools on AKS. ++AKS uses Windows Server 2019 and Windows Server 2022 as the host OS versions and only supports process isolation. AKS doesn't support container images built by other versions of Windows Server. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility). ++Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information, see the [AKS release notes][aks-release-notes]. 
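A hedged sketch of the recommended layout follows (all names are placeholders): a cluster with a two-node Linux system pool, plus a Windows Server 2022 pool for Windows workloads.

```bash
# Create the cluster with a two-node Linux system node pool. Windows node
# pools require the Azure CNI network plugin.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --network-plugin azure

# Add a Windows Server 2022 node pool for Windows workloads. Windows node
# pool names are limited to six characters.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npwin \
  --os-type Windows \
  --os-sku Windows2022 \
  --node-count 1
```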
++## Networking ++### Networking modes ++> **Best practice guidance** +> +> AKS clusters with Windows node pools only support Azure Container Networking Interface (Azure CNI) and use it by default. ++Windows doesn't support kubenet networking. AKS clusters with Windows node pools must use Azure CNI. For more information, see [Network concepts for applications in AKS][network-concepts-for-aks-applications]. ++Azure CNI offers two networking modes based on your workload requirements: ++* [**Azure CNI Overlay**][azure-cni-overlay] is an overlay network similar to kubenet. The overlay network allows you to use virtual network (VNet) IPs for nodes and private address spaces for pods within those nodes that you can reuse across the cluster. Azure CNI Overlay is the **recommended networking mode**. It provides simplified network configuration and management and the best scalability in AKS networking. +* [**Azure CNI with Dynamic IP Allocation**][azure-cni-dynamic-ip-allocation] requires extra planning and consideration for IP address management. This mode provides VNet IPs for nodes *and* pods. This configuration allows you direct access to pod IPs. However, it comes with increased complexity and reduced scalability. ++To help you decide which networking mode to use, see [Choosing a network model][azure-cni-choose-network-model]. ++### Network policies ++> **Best practice guidance** +> +> Use network policies to secure traffic between pods. Windows supports Azure Network Policy Manager and Calico Network Policy. For more information, see [Differences between Azure Network Policy Manager and Calico Network Policy][azurenpm-vs-calico]. ++When managing traffic between pods, you should apply the principle of least privilege. The Network Policy feature in Kubernetes allows you to define and enforce ingress and egress traffic rules between the pods in your cluster. For more information, see [Secure traffic between pods using network policies in AKS][network-policies-aks]. ++Windows pods on AKS clusters that use the Calico Network Policy enable [Floating IP][dsr] by default. ++## Upgrades and updates ++It's important to keep your Windows environment up to date to ensure your systems have the latest security updates, feature sets, and compliance requirements. In a Kubernetes environment like AKS, you need to maintain the Kubernetes version, Windows nodes, and Windows container images and pods. ++### Kubernetes version upgrades ++As a managed Kubernetes service, AKS provides the necessary tools to upgrade your cluster to the latest Kubernetes version. For more information, see [Upgrade an AKS cluster][upgrade-aks-cluster]. ++### Windows node monthly updates ++Windows nodes on AKS follow a monthly update schedule. Every month, AKS creates a new VHD with the latest available updates for Windows node pools. The VHD includes the host image and the latest Nano Server and Server Core container images. We recommend performing monthly updates to your Windows node pools to ensure your nodes have the latest security patches. For more information, see [Upgrade AKS node images][upgrade-aks-node-images]. ++> [!NOTE] +> Upgrades on Windows systems include both OS version upgrades and monthly node OS updates. ++You can stay up to date with the availability of new monthly releases using the [AKS release tracker][aks-release-tracker] and [AKS release notes][aks-release-notes].
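As a sketch of the monthly update flow described above (pool and cluster names are placeholders), a node image upgrade picks up the latest VHD without changing the Kubernetes version:

```bash
# Upgrade only the node image of the Windows node pool to the latest
# monthly VHD, leaving the Kubernetes version unchanged.
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npwin \
  --node-image-only
```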
++### Windows node OS version upgrades ++Windows has a release cadence for new versions of the OS, including Windows Server 2019 and Windows Server 2022. When upgrading your Windows node OS version, ensure the Windows container image version matches the Windows container host version and the node pools have only one version of Windows Server. ++To upgrade the Windows node OS version, you need to complete the following steps: ++1. Create a new node pool with the new Windows Server version. +2. Deploy your workloads with the new Windows container images to the new node pool. +3. Decommission the old node pool. ++For more information, see [Upgrade Windows Server workloads on AKS][upgrade-windows-workloads-aks]. ++> [!NOTE] +> Windows announced a new [Windows Server Annual Channel for Containers](https://techcommunity.microsoft.com/t5/windows-server-news-and-best/windows-server-annual-channel-for-containers/ba-p/3866248) that supports portability and mixed versions of Windows nodes and containers. This feature isn't yet supported in AKS. +> +> To track AKS feature plans, see the [Public AKS roadmap](https://github.com/Azure/AKS/projects/1#card-90806240). ++## Next steps ++To learn more about Windows containers on AKS, see the following resources: ++* [Learn how to deploy, manage, and monitor Windows containers on AKS](/training/paths/deploy-manage-monitor-wincontainers-aks). +* Open an issue or provide feedback in the [Windows containers GitHub repository](https://github.com/microsoft/Windows-Containers/issues). +* Review the [third-party partner solutions for Windows on AKS][windows-on-aks-partner-solutions]. ++<!-- LINKS - internal --> +[azure-cni-overlay]: ./azure-cni-overlay.md +[azure-cni-dynamic-ip-allocation]: ./configure-azure-cni-dynamic-ip-allocation.md +[azure-cni-choose-network-model]: ./azure-cni-overlay.md#choosing-a-network-model-to-use +[network-concepts-for-aks-applications]: ./concepts-network.md +[windows-vs-linux]: ./windows-vs-linux-containers.md +[azurenpm-vs-calico]: ./use-network-policies.md#differences-between-azure-network-policy-manager-and-calico-network-policy-and-their-capabilities +[network-policies-aks]: ./use-network-policies.md +[dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip +[upgrade-aks-cluster]: ./upgrade-cluster.md +[upgrade-aks-node-images]: ./node-image-upgrade.md +[upgrade-windows-workloads-aks]: ./upgrade-windows-2019-2022.md +[windows-on-aks-partner-solutions]: ./windows-aks-partner-solutions.md ++<!-- LINKS - external --> +[aks-release-notes]: https://github.com/Azure/AKS/releases +[aks-release-tracker]: https://releases.aks.azure.com/ |
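The three OS upgrade steps described in this article might look like the following sketch, with hypothetical pool names throughout:

```bash
# Step 1: add a node pool running the new Windows Server version.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npw22 \
  --os-type Windows \
  --os-sku Windows2022

# Step 2: redeploy workloads with new Windows container images to the new
# pool (for example, by using node selectors; not shown here).

# Step 3: decommission the old Windows Server 2019 node pool.
az aks nodepool delete \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npw19
```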
aks | Windows Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md | In Azure Kubernetes Service (AKS), you can create a node pool that runs Windows This article outlines some of the frequently asked questions and OS concepts for Windows Server nodes in AKS. -## Which Windows operating systems are supported? --AKS uses Windows Server 2019 and Windows Server 2022 as the host OS version and only supports process isolation. Container images built by using other Windows Server versions are not supported. For more information, see [Windows container version compatibility][windows-container-compat]. For Kubernetes version 1.25 and higher, Windows Server 2022 is the default operating system. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes]. - ## What kind of disks are supported for Windows? Azure Disks and Azure Files are the supported volume types, and are accessed as NTFS volumes in the Windows Server container. Azure Disks and Azure Files are the supported volume types, and are accessed as Generation 2 VMs are supported on Linux and Windows for WS2022 only. For more information, see [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md). -## Can I run Windows only clusters in AKS? --The master nodes (the control plane) in an AKS cluster are hosted by the AKS service. You won't be exposed to the operating system of the nodes hosting the master components. All AKS clusters are created with a default first node pool, which is Linux-based. This node pool contains system services that are needed for the cluster to function. We recommend that you run at least two nodes in the first node pool to ensure the reliability of your cluster and the ability to do cluster operations. The first Linux-based node pool can't be deleted unless the AKS cluster itself is deleted. --In some cases, if you are planning to run Windows-based workloads on an AKS cluster, you should consider deploying a Linux node pool for the following reasons: -- If you are planning to run Windows and Linux workloads, you can deploy a Windows and Linux node pool on the same AKS cluster to run the workloads side by side.-- When deploying infrastructure-related components based on Linux, such as Ngix and others, these workloads require a Linux node pool alongside your Windows node pools. For development and test scenarios, you can use control plane nodes. For production workloads, we recommend deploying separate Linux node pools for performance and reliability.- ## How do I patch my Windows nodes? To get the latest patches for Windows nodes, you can either [upgrade the node pool][nodepool-upgrade] or [upgrade the node image][upgrade-node-image]. Windows Updates are not enabled on nodes in AKS. AKS releases new node pool images as soon as patches are available, and it's the user's responsibility to upgrade node pools to stay current on patches and hotfixes. This patch process is also true for the Kubernetes version being used. [AKS release notes][aks-release-notes] indicate when new versions are available. For more information on upgrading the Windows Server node pool, see [Upgrade a node pool in AKS][nodepool-upgrade]. If you're only interested in updating the node image, see [AKS node image upgrades][upgrade-node-image]. 
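To verify which OS image your Windows nodes are running before and after patching, a quick check such as the following can help; the label and field paths follow standard Kubernetes conventions, and output may vary by version.

```bash
# List Windows nodes with the OS image reported by the kubelet.
kubectl get nodes -l kubernetes.io/os=windows \
  -o custom-columns=NAME:.metadata.name,OSIMAGE:.status.nodeInfo.osImage
```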
To get the latest patches for Windows nodes, you can either [upgrade the node po > [!NOTE] > The updated Windows Server image will only be used if a cluster upgrade (control plane upgrade) has been performed prior to upgrading the node pool. -## What network plug-ins are supported? --AKS clusters with Windows node pools must use the Azure Container Networking Interface (Azure CNI) (advanced) networking model. Kubenet (basic) networking is not supported. For more information on the differences in network models, see [Network concepts for applications in AKS][azure-network-models]. The Azure CNI network model requires extra planning and consideration for IP address management. For more information on how to plan and implement Azure CNI, see [Configure Azure CNI networking in AKS][configure-azure-cni]. --Windows nodes on AKS clusters also have [Direct Server Return (DSR)][dsr] enabled by default when Calico is enabled. - ## Is preserving the client source IP supported? At this time, [client source IP preservation][client-source-ip] is not supported with Windows nodes. |
api-management | Developer Portal Extend Custom Functionality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md | Title: Add custom functionality to the Azure API Management developer portal + Title: Add custom functionality to developer portal - Azure API Management description: How to customize the managed API Management developer portal with custom functionality such as custom widgets. Previously updated : 11/01/2022 Last updated : 10/27/2023 -# Extend the developer portal with custom features +# Extend the developer portal with custom widgets The API Management [developer portal](api-management-howto-developer-portal.md) features a visual editor and built-in widgets so that you can customize and style the portal's appearance. However, you may need to customize the developer portal further with custom functionality. For example, you might want to integrate your developer portal with a support system that involves adding a custom interface. This article explains ways to add custom functionality such as custom widgets to your API Management developer portal. The following table summarizes three options, with links to more detail. |Method |Description | ||| |[Custom HTML code widget](#use-custom-html-code-widget) | - Lightweight solution for API publishers to add custom logic for basic use cases<br/><br/>- Copy and paste custom HTML code into a form, and developer portal renders it in an iframe |-|[Create and upload custom widget](#create-and-upload-custom-widget) | - Developer solution for more advanced widget use cases<br/><br/>- Requires local implementation in React, Vue, or plain TypeScript<br/><br/>- Widget scaffold and tools provided to help developers create widget and upload to developer portal<br/><br/>- Supports workflows for source control, versioning, and code reuse<br/><br/> | +|[Create and upload custom widget](#create-and-upload-custom-widget) | - Developer solution for more advanced widget use cases<br/><br/>- Requires local implementation in React, Vue, or plain TypeScript<br/><br/>- Widget scaffold and tools provided to help developers create widget and upload to developer portal<br/><br/>- Widget creation, testing, and deployment can be scripted through open source [React Component Toolkit](#create-custom-widgets-using-open-source-react-component-toolkit)<br/><br/>- Supports workflows for source control, versioning, and code reuse | |[Self-host developer portal](developer-portal-self-host.md) | - Legacy extensibility option for customers who need to customize source code of the entire portal core<br/><br/> - Gives complete flexibility for customizing portal experience<br/><br/>- Requires advanced configuration<br/><br/>- Customer responsible for managing complete code lifecycle: fork code base, develop, deploy, host, patch, and upgrade |-- ## Use Custom HTML code widget The managed developer portal includes a **Custom HTML code** widget where you can insert HTML code for small portal customizations. For example, use custom HTML to embed a video or to add a form. The portal renders the custom widget in an inline frame (iframe). The managed developer portal includes a **Custom HTML code** widget where you ca ## Create and upload custom widget -For more advanced widget use cases, API Management provides a scaffold and tools to help developers create a widget and upload it to the developer portal. 
--### Prerequisites +For more advanced use cases, you can create and upload a custom widget to the developer portal. API Management provides a code scaffold for developers to create custom widgets in React, Vue, or plain TypeScript. The scaffold includes tools to help you develop and deploy your widget to the developer portal. +### Prerequisites + * Install [Node.JS runtime](https://nodejs.org/en/) locally * Basic knowledge of programming and web development To implement your widget using another JavaScript UI framework and libraries, yo * If your framework of choice isn't compatible with [Vite build tool](https://vitejs.dev/), configure it so that it outputs compiled files to the `./dist` folder. Optionally, redefine where the compiled files are located by providing a relative path as the fourth argument for the [`deployNodeJs`](#azureapi-management-custom-widgets-toolsdeploynodejs) function. * For local development, the `config.msapim.json` file must be accessible at the URL `localhost:<port>/config.msapim.json` when the server is running. +## Create custom widgets using open source React Component Toolkit ++The open source [React Component Toolkit](https://github.com/microsoft/react-component-toolkit) provides a suite of npm package scripts to help you convert a React application to the custom widget framework, test it, and deploy the custom widget to the developer portal. If you have access to an Azure OpenAI service, the toolkit can also create a widget from a text description that you provide. ++Currently, you can use the toolkit in two ways to deploy a custom widget: ++* Manually, by installing the toolkit and running the npm package scripts locally. You run the scripts sequentially to create, test, and deploy a React component as a custom widget to the developer portal. +* Using an [Azure Developer CLI (azd) template](https://github.com/Azure-Samples/react-component-toolkit-openai-demo) for an end-to-end deployment. The `azd` template deploys an Azure API Management instance and an Azure OpenAI instance. After resources are provisioned, an interactive script helps you create, test, and deploy a custom widget to the developer portal from a description that you provide. ++> [!NOTE] +> The React Component Toolkit and Azure Developer CLI sample template are open source projects. Support is provided only through GitHub issues in the respective repositories. -## Next steps +## Related content Learn more about the developer portal: |
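A hedged sketch of the scaffold-based flow follows. The scaffolder package name is inferred from the `@azure/api-management-custom-widgets-*` family referenced above, and the npm script names are assumptions; check the generated `package.json` for the exact names.

```bash
# Generate the widget scaffold, then develop and deploy it. Package and
# script names here are assumptions based on the toolkit described above.
npx @azure/api-management-custom-widgets-scaffolder
cd <your-widget-folder>
npm install
npm start          # run the widget locally against your developer portal
npm run deploy     # upload the compiled widget to the developer portal
```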
app-service | Configure Common | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md | Set-AzWebApp $webapp By default, App Service starts your app from the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. Such an app would be accessible at `http://contoso.com/public`, for example, but you typically want to direct `http://contoso.com` to the `public` directory instead. If your app's startup file is in a different folder, or if your repository has more than one application, you can edit or add virtual applications and directories. +> [!IMPORTANT] +> The ability to map a virtual directory to a physical path is available only on Windows apps. + # [Azure portal](#tab/portal) 1. In the [Azure portal], search for and select **App Services**, and then select your app. |
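For the Laravel case described above, a hedged sketch using the generic ARM update command could point the root virtual application at the `public` folder. The resource-addressing pattern and property path are assumptions based on the `Microsoft.Web/sites/config` ARM schema, and the app name is a placeholder.

```bash
# Map the root virtual application of a Windows app to site\wwwroot\public.
# The resource path and property name below are assumptions; verify them
# against the Microsoft.Web/sites/config schema before running.
az resource update \
  --resource-group myResourceGroup \
  --resource-type "Microsoft.Web/sites/config" \
  --name "<app-name>/web" \
  --set "properties.virtualApplications[0].physicalPath=site\\wwwroot\\public"
```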
application-gateway | Ingress Controller Autoscale Pods | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md | Use following two components: * [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller. * [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) - We use HPA to use Application Gateway metrics and target a deployment for scaling. +> [!NOTE] +> The Azure Kubernetes Metrics Adapter is no longer maintained. Kubernetes Event-driven Autoscaling (KEDA) is an alternative.<br> +> Also see [Application Gateway for Containers](for-containers/overview.md). + ## Setting up Azure Kubernetes Metric Adapter 1. First, create a Microsoft Entra service principal and assign it `Monitoring Reader` access over Application Gateway's resource group. |
azure-app-configuration | Monitor App Configuration Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md | -This article is a reference for the monitoring data collected by App Configuration. See [Monitoring App Configuration](monitor-app-configuration.md) for a walk through on to collect and analyze monitoring data for App Configuration. +This article is a reference for the monitoring data collected by App Configuration. See [Monitoring App Configuration](monitor-app-configuration.md) for how to collect and analyze monitoring data for App Configuration. ## Metrics Resource Provider and Type: [App Configuration Platform Metrics](../azure-monitor/essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) Resource Provider and Type: [App Configuration Platform Metrics](../azure-monito | Http Incoming Request Duration | Milliseconds | Server side duration of an Http Request | | Throttled Http Request Count | Count | Throttled requests are Http requests that receive a response with a status code of 429 | | Daily Storage Usage | Percent | Represents the amount of storage in use as a percentage of the maximum allowance. This metric is updated at least once daily. |+| Request Quota Usage | Percent | Represents the current total request usage in percentage. | | Replication Latency | Milliseconds | Represents the average time it takes for a replica to be consistent with current state. | For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). App Configuration has the following dimensions associated with its metr | Metric Name | Dimension description | |-|--|-| Http Incoming Request Count | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. | -| Http Incoming Request Duration | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. | +| Http Incoming Request Count | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by "AAD" or "HMAC" authentication. | +| Http Incoming Request Duration | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by "AAD" or "HMAC" authentication. | | Throttled Http Request Count | The **Endpoint** of each request is included as a dimension. | | Daily Storage Usage | This metric does not have any dimensions. |+| Request Quota Usage | The supported dimensions are the **OperationType** ("Read" or "Write") and **Endpoint** of each request. | | Replication Latency | The **Endpoint** of the replica that data was replicated to is included as a dimension. | For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics). |
azure-app-configuration | Monitor App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md | You can analyze metrics for App Configuration with metrics from other Azure serv * Http Incoming Request Duration * Throttled Http Request Count (Http status code 429 Responses) * Daily Storage Usage+* Request Quota Usage * Replication Latency In the portal, navigate to the **Metrics** section and select the **Metric Namespaces** and **Metrics** you want to analyze. This screenshot shows you the metrics view when selecting **Http Incoming Request Count** for your configuration store. |
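Outside the portal, a sketch like the following can pull the same metrics programmatically. The metric's internal name (`ThrottledHttpRequestCount`) and the resource ID shape are assumptions; list the store's metric definitions first if the name doesn't match.

```bash
# Query the throttled-request metric for a store over one-minute intervals.
# The metric name below is assumed from its display name.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.AppConfiguration/configurationStores/<store-name>" \
  --metric "ThrottledHttpRequestCount" \
  --interval PT1M
```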
azure-app-configuration | Rest Api Throttling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-throttling.md | Last updated 08/17/2020 # Throttling -Configuration stores have limits on the requests that they may serve. Any requests that exceed an allotted quota for a configuration store will receive an HTTP 429 (Too Many Requests) response. +Configuration stores have limits on the requests that they can serve. Any requests that exceed an allotted quota for a configuration store will receive an HTTP 429 (Too Many Requests) response. Throttling is divided into different quota policies: In the above example, the client has exceeded its allowed quota and is advised t ## Other retry -The service may identify situations other than throttling that need a client retry (ex: 503 Service Unavailable). In all such cases, the `retry-after-ms` response header will be provided. To increase robustness, the client is advised to follow the suggested interval and perform a retry. +The service might identify situations other than throttling that need a client retry (for example, 503 Service Unavailable). In all such cases, the `retry-after-ms` response header will be provided. To increase robustness, the client is advised to follow the suggested interval and perform a retry. ```http HTTP/1.1 503 Service Unavailable retry-after-ms: 787 ```++## Monitoring ++To help you view the **Total Requests** quota usage, App Configuration provides a metric named **Request Quota Usage**. The request quota usage metric shows the current quota usage as a percentage. ++For more information on the request quota usage metric and other App Configuration metrics, see [Monitoring App Configuration data reference](./monitor-app-configuration-reference.md). |
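A minimal client-side retry sketch that honors `retry-after-ms` on 429 and 503 responses follows; the endpoint and credentials are placeholders, and GNU `sleep` (which accepts fractional seconds) is assumed.

```bash
# Retry on 429/503, sleeping for the server-suggested retry-after-ms.
while :; do
  headers=$(curl -s -D - -o /dev/null \
    "https://<store-name>.azconfig.io/kv?api-version=1.0" \
    -H "Authorization: <credentials>")
  status=$(printf '%s\n' "$headers" | head -n 1 | awk '{print $2}')
  if [ "$status" != "429" ] && [ "$status" != "503" ]; then
    break
  fi
  # Extract the retry-after-ms header; default to 1000 ms if absent.
  ms=$(printf '%s\n' "$headers" | tr -d '\r' \
    | awk -F': ' 'tolower($1) == "retry-after-ms" { print $2 }')
  sleep "$(awk "BEGIN { printf \"%.3f\", ${ms:-1000} / 1000 }")"
done
```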
azure-arc | Cluster Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md | Title: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters." Previously updated : 10/12/2023 Last updated : 10/27/2023 description: "With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters from anywhere without requiring any inbound port to be enabled on the firewall." Before you begin, review the [conceptual overview of the cluster connect feature ## Prerequisites +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- An existing Azure Arc-enabled Kubernetes connected cluster. + - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). + - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. + ### [Azure CLI](#tab/azure-cli) -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to the latest version. Before you begin, review the [conceptual overview of the cluster connect feature az extension update --name connectedk8s ``` -- An existing Azure Arc-enabled Kubernetes connected cluster.- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. --- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access:-- | Endpoint | Port | - |-|-| - |`*.servicebus.windows.net` | 443 | - |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 | -- > [!NOTE] - > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder. - - Replace the placeholders and run the below command to set the environment variables used in this document: ```azurecli Before you begin, review the [conceptual overview of the cluster connect feature ### [Azure PowerShell](#tab/azure-powershell) -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).- - Install [Azure PowerShell version 6.6.0 or later](/powershell/azure/install-azure-powershell). -- An existing Azure Arc-enabled Kubernetes connected cluster.- - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). - - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. --- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access:-- | Endpoint | Port | - |-|-| - |`*.servicebus.windows.net` | 443 | - |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 | - - > [!NOTE] - > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. 
Within this command, the region must be specified for the `<location>` placeholder. - - Replace the placeholders and run the following command to set the environment variables used in this document: ```azurepowershell Before you begin, review the [conceptual overview of the cluster connect feature +- In addition to meeting the [network requirements for Arc-enabled Kubernetes](network-requirements.md), enable these endpoints for outbound access: ++ | Endpoint | Port | + |-|-| + |`*.servicebus.windows.net` | 443 | + |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 | ++ > [!NOTE] + > To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder. + [!INCLUDE [arc-region-note](../includes/arc-region-note.md)] ## Set up authentication On the existing Arc-enabled cluster, create the ClusterRoleBinding with either M 1. Authorize the entity with appropriate permissions. - - If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. Example: + - If you're using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. For example: ```console kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID ``` - - If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Microsoft Entra entity. Example: + - If you're using Azure RBAC for authorization checks on the cluster, you can create an applicable [Azure role assignment](azure-rbac.md#built-in-roles) mapped to the Microsoft Entra entity. For example: ```azurecli az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER On the existing Arc-enabled cluster, create the ClusterRoleBinding with either M 1. Authorize the entity with appropriate permissions. - - If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. Example: + - If you're using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, with the `kubeconfig` file pointing to the `apiserver` of your cluster for direct access, you can create one mapped to the Microsoft Entra entity (service principal or user) that needs to access this cluster. For example: ```console kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID ``` - - If you are using [Azure RBAC for authorization checks](azure-rbac.md) on the cluster, you can create an Azure role assignment mapped to the Microsoft Entra entity.
Example: + - If you're using [Azure RBAC for authorization checks](azure-rbac.md) on the cluster, you can create an applicable [Azure role assignment](azure-rbac.md#built-in-roles) mapped to the Microsoft Entra entity. For example: - ```azurecli + ```azurepowershell + az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER az role assignment create --role "Azure Arc Enabled Kubernetes Cluster User Role" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER ``` Use `az connectedk8s show` to check your Arc-enabled Kubernetes agent version. ### [Agent version < 1.11.7](#tab/agent-version) -When making requests to the Kubernetes cluster, if the Microsoft Entra entity used is a part of more than 200 groups, you may see the following error: +When making requests to the Kubernetes cluster, if the Microsoft Entra entity used is a part of more than 200 groups, you might see the following error: `You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.` This is a known limitation. To get past this error: ### [Agent version >= 1.11.7](#tab/agent-version-latest) -When making requests to the Kubernetes cluster, if the Microsoft Entra service principal used is a part of more than 200 groups, you may see the following error: +When making requests to the Kubernetes cluster, if the Microsoft Entra service principal used is a part of more than 200 groups, you might see the following error: `Overage claim (users with more than 200 group membership) for SPN is currently not supported. For troubleshooting, please refer to aka.ms/overageclaimtroubleshoot` |
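After the Microsoft Entra entity is authorized as shown above, a cluster connect session is typically opened with the `az connectedk8s proxy` command from the same **connectedk8s** extension updated earlier. A minimal sketch, assuming the `$CLUSTER_NAME` and `$RESOURCE_GROUP` environment variables set in the earlier step:

```azurecli
# Start a cluster connect proxy session; the command blocks while the tunnel is open.
az connectedk8s proxy --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP

# In a second shell, kubectl requests now flow through the proxied kubeconfig.
kubectl get pods --all-namespaces
```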
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommend that you always update to the latest version, or opt in to the | Release Date | Release notes | Windows | Linux | |:|:|:|:| | October 2023| **Linux** <ul><li>Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |None|1.28.0|-| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when AMA vm-extension is provisioned involving disable command</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | +| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when the vm-extension provisioning agent (aka GuestAgent) issues a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | | August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li> Coming soon</li></ul>|1.19.0| Coming Soon | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None| | June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4| |
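Before comparing against this table, you can check which agent version is installed on a given VM by listing its extensions. A minimal sketch with hypothetical resource names; the JMESPath filter assumes the extension name contains `AzureMonitor`, as it does for `AzureMonitorWindowsAgent` and `AzureMonitorLinuxAgent`:

```azurecli
# List Azure Monitor agent extensions on a VM and show the installed versions.
# "MyResourceGroup" and "MyVm" are placeholders.
az vm extension list \
  --resource-group MyResourceGroup \
  --vm-name MyVm \
  --query "[?contains(name, 'AzureMonitor')].{name:name, version:typeHandlerVersion, autoUpgrade:enableAutomaticUpgrade}" \
  --output table
```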
azure-monitor | Azure Web Apps Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md | Monitoring of your Node.js web applications running on [Azure App Services](../. The easiest way to enable application monitoring for Node.js applications running on Azure App Services is through the Azure portal. Turning on application monitoring in the Azure portal automatically instruments your application with Application Insights and doesn't require any code changes. +>[!NOTE] +> You can configure the automatically attached agent by using the APPLICATIONINSIGHTS_CONFIGURATION_CONTENT environment variable on the App Service environment variables blade. For details on the configuration options that can be passed via this environment variable, see [Node.js Configuration](https://github.com/microsoft/ApplicationInsights-node.js#Configuration). + > [!NOTE]-> If both autoinstrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) in this article. +> If both automatic instrumentation and manual SDK-based instrumentation are detected, only the manual instrumentation settings are honored. This is to prevent duplicate data from being sent. For more information, see the [troubleshooting section](#troubleshooting) in this article. ### Autoinstrumentation through Azure portal Below is our step-by-step troubleshooting guide for extension/agent based monito If `SDKPresent` is true, this indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. - # [Linux](#tab/linux) 1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3". Below is our step-by-step troubleshooting guide for extension/agent based monito ``` If `SDKPresent` is true, this indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off.++ [!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)] For the latest updates and bug fixes, [consult the release notes](web-app-extens * [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold. * Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page. * [Availability overview](availability-overview.md)+ |
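Because `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT` is an ordinary app setting, it can also be set from the command line. A minimal sketch; the app and resource group names are placeholders, and `samplingPercentage` is just one example option from the Node.js SDK configuration reference linked in the note:

```azurecli
# Set the agent configuration as an app setting. The JSON keys must match
# the Node.js SDK configuration options; samplingPercentage is one example.
az webapp config appsettings set \
  --name MyWebApp \
  --resource-group MyResourceGroup \
  --settings APPLICATIONINSIGHTS_CONFIGURATION_CONTENT='{"samplingPercentage": 50}'
```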
azure-monitor | Best Practices Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md | -This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes planning that you should consider before starting your implementation. This ensures that the configuration options you choose meet your particular business requirements. +This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes planning that you should consider before starting your implementation. This planning ensures that the configuration options you choose meet your particular business requirements. -If you're not already familiar with monitoring concepts, start with the [Cloud monitoring guide](/azure/cloud-adoption-framework/manage/monitor) which is part of the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/). That guide defines high-level concepts of monitoring and provides guidance for defining requirements for your monitoring environment and supporting processes. This article will refer to sections of that guide that are relevant to particular planning steps. +If you're not already familiar with monitoring concepts, start with the [Cloud monitoring guide](/azure/cloud-adoption-framework/manage/monitor), which is part of the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/). That guide defines high-level concepts of monitoring and provides guidance for defining requirements for your monitoring environment and supporting processes. This article refers to sections of that guide that are relevant to particular planning steps. ## Understand Azure Monitor costs-A core goal of your monitoring strategy will be minimizing costs. Some data collection and features in Azure Monitor have no cost while other have costs based on their particular configuration, amount of data collected, or frequency that they're run. The articles in this scenario will identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following for details and guidance on Azure Monitor pricing: +Minimizing costs is a core goal of your monitoring strategy. Some data collection and features in Azure Monitor have no cost. However, others have costs based on their particular configuration, amount of data collected, or frequency that they're run. The articles in this scenario identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following pages for details and guidance on Azure Monitor pricing: - [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) - [Monitor usage and estimated costs in Azure Monitor](usage-estimated-costs.md) ## Define strategy-Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to leverage the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy. 
+Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to use the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy. -See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for a number of factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) which will assist in comparing completely cloud based monitoring with a hybrid model. +See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for many factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for assistance with comparing completely cloud based monitoring with a hybrid model. ## Gather required information Before you determine the details of your implementation, you should gather information required to define those details. The following sections describe the information typically required for a complete implementation of Azure Monitor. ### What needs to be monitored?- You won't necessarily configure complete monitoring for all of your cloud resources but instead focus on your critical applications and the components they depend on. This will not only reduce your monitoring costs but also reduce the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require. + You don't necessarily need to configure complete monitoring for all of your cloud resources but instead focus on your critical applications and the components they depend on. This focus will not only reduce your monitoring costs but also reduce the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require. ### Who needs to have access and be notified-As you configure your monitoring environment, you need to determine which users should have access to monitoring data and which users need to be notified when an issue is detected. These may be application and resource owners, or you may have a centralized monitoring team. This information will determine how you configure permissions for data access and notifications for alerts. You may also require custom workbooks to present particular sets of information to different users. +As you configure your monitoring environment, you need to determine the following: ++- Which users should have access to monitoring data +- Which users need to be notified when an issue is detected ++These users may be application and resource owners, or you may have a centralized monitoring team. This information determines how you configure permissions for data access and notifications for alerts.
You may also require custom workbooks to present particular sets of information to different users. ### Service level agreements -Your organization may have SLAs that define your commitments for performance and uptime of your applications. These SLAs may determine how you need to configure time sensitive features of Azure Monitor such as alerts. You will also need to understand [data latency in Azure Monitor](logs/data-ingestion-time.md) since this will affect the responsiveness of monitoring scenarios and your ability to meet SLAs. +Your organization may have SLAs that define your commitments for performance and uptime of your applications. These SLAs may determine how you need to configure time sensitive features of Azure Monitor such as alerts. You also need to understand [data latency in Azure Monitor](logs/data-ingestion-time.md) since this affects the responsiveness of monitoring scenarios and your ability to meet SLAs. ## Identify monitoring services and products-Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution will typically involve multiple Azure services and potentially other products. Other monitoring objectives, which may require additional solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements). +Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution typically involves multiple Azure services and potentially other products. Other monitoring objectives, which may require more solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements). -The following sections describe other services and products that you may use in conjunction with Azure Monitor. This scenario currently doesn't include guidance on implementing these solutions so you should refer to their documentation. +The following sections describe other services and products that you may use with Azure Monitor. This scenario currently doesn't include guidance on implementing these solutions so you should refer to their documentation. ### Security monitoring While the operational data stored in Azure Monitor might be useful for investigating security incidents, other services in Azure were designed to monitor security. Security monitoring in Azure is performed by Microsoft Defender for Cloud and Microsoft Sentinel. While the operational data stored in Azure Monitor might be useful for investiga ### System Center Operations Manager-You may have an existing investment in System Center Operations Manager for monitoring on-premises resources and workloads running on your virtual machines. You may choose to [migrate this monitoring to Azure Monitor](azure-monitor-operations-manager.md) or continue to use both products together in a hybrid configuration. See [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview) for a comparison of the two products. See [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for guidance on using the two in a hybrid configuration and on determining the most appropriate model for your environment. 
+You may have an existing investment in System Center Operations Manager for monitoring on-premises resources and workloads running on your virtual machines. You may choose to [migrate this monitoring to Azure Monitor](azure-monitor-operations-manager.md) or continue to use both products together in a hybrid configuration. See [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview) for a comparison of the two products. See [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for how to use the two in a hybrid configuration and determine the most appropriate model for your environment. ++## Frequently asked questions ++This section provides answers to common questions. +### What IP addresses does Azure Monitor use? +See [IP addresses used by Application Insights and Log Analytics](app/ip-addresses.md) for the IP addresses and ports required for agents and other external resources to access Azure Monitor. ## Next steps |
azure-monitor | Change Analysis Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md | foreach ($webapp in $webapp_list) } ``` +## Frequently asked questions ++This section provides answers to common questions. ++### How can I enable Change Analysis for a web application? ++Enable Change Analysis for web application in-guest changes by using the [Diagnose and solve problems tool](./change-analysis-visualizations.md#diagnose-and-solve-problems-tool). + ## Next steps - Learn about [visualizations in Change Analysis](change-analysis-visualizations.md) |
azure-monitor | Change Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md | Currently the following dependencies are supported in **Web App Diagnose and sol - **Web app deployment and configuration changes**: Since these changes are collected by a site extension and stored on disk space owned by your application, data collection and storage is subject to your application's behavior. Check to see if a misbehaving application is affecting the results. - **Snapshot retention for all changes**: The Change Analysis data for resources is tracked by Azure Resource Graph (ARG). ARG keeps snapshot history of tracked resources only for 14 days. +## Frequently asked questions ++This section provides answers to common questions. ++### Does using Change Analysis incur a cost? ++You can use Change Analysis at no extra cost. Enable the `Microsoft.ChangeAnalysis` resource provider, and anything supported by Change Analysis is available to you. ## Next steps |
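Enabling the resource provider mentioned in the FAQ can be scripted. A minimal sketch:

```azurecli
# Register the Change Analysis resource provider on the current subscription.
az provider register --namespace Microsoft.ChangeAnalysis

# Registration is asynchronous; check until the state reads "Registered".
az provider show --namespace Microsoft.ChangeAnalysis --query registrationState
```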
azure-monitor | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md | These articles provide detailed information about each of the main steps you'll | [Configure alerts and automated responses](best-practices-alerts.md) |Configure notifications and processes that are automatically triggered when an alert is fired. | | [Optimize costs](best-practices-cost.md) | Reduce your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. | +## Frequently asked questions ++This section provides answers to common questions. ++### How do I enable Azure Monitor? ++Azure Monitor is enabled the moment that you create a new Azure subscription, and [activity log](./essentials/platform-logs-overview.md) and platform [metrics](essentials/data-platform-metrics.md) are automatically collected. Create [diagnostic settings](essentials/diagnostic-settings.md) to collect more detailed information about the operation of your Azure resources, and add [monitoring solutions](/previous-versions/azure/azure-monitor/insights/solutions) and [insights](./monitor-reference.md) to provide extra analysis on collected data for particular services. ++### How do I access Azure Monitor? ++Access all Azure Monitor features and data from the **Monitor** menu in the Azure portal. The **Monitoring** section of the menu for different Azure services provides access to the same tools with data filtered to a particular resource. Azure Monitor data is also accessible for various scenarios by using the Azure CLI, PowerShell, and a REST API. + ## Next steps |
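The diagnostic settings mentioned in the FAQ can be created from the command line as well as the portal. A minimal sketch with placeholder IDs; note that the available log categories and category groups vary by resource type:

```azurecli
# Placeholders: the resource to monitor and the destination Log Analytics workspace.
RESOURCE_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/<provider>/<type>/<name>"
WORKSPACE_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Send all resource logs and platform metrics to the workspace.
az monitor diagnostic-settings create \
  --name send-to-workspace \
  --resource $RESOURCE_ID \
  --workspace $WORKSPACE_ID \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```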
azure-monitor | Data Platform Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md | For a list of where log queries are used and references to tutorials and other d ![Screenshot that shows queries in Log Analytics.](media/data-platform-logs/log-analytics.png) ## Relationship to Azure Data Explorer-Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL. +Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL. For information on KQL, see [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/). The experience of using Log Analytics to work with Azure Monitor queries in the Azure portal is similar to the experience of using the Azure Data Explorer Web UI. You can even [include data from a Log Analytics workspace in an Azure Data Explorer query](/azure/data-explorer/query-monitor-data). |
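Because both services use KQL, the same query text runs in either. The following is a minimal sketch of running a query against a Log Analytics workspace from the command line; the workspace GUID is a placeholder, and the query assumes the built-in `AzureActivity` table is receiving data:

```azurecli
# Run a KQL query against a workspace, identified by its workspace (customer) GUID.
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "AzureActivity | summarize count() by OperationNameValue | top 5 by count_" \
  --timespan P1D
```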
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | -The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme). +The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azingest), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme). > [!NOTE] > This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal UI to configure these components. The steps required to configure the Logs ingestion API are as follows: 3. [Create a data collection endpoint (DCE)](#create-data-collection-endpoint) to receive data. 2. [Create a custom table in a Log Analytics workspace](#create-new-table-in-log-analytics-workspace). This is the table you'll be sending data to. 4. [Create a data collection rule (DCR)](#create-data-collection-rule) to direct the data to the target table. -5. [Give the AD application access to the DCR](#assign-permissions-to-a-dcr). +5. [Give the Microsoft Entra application access to the DCR](#assign-permissions-to-a-dcr). 6. See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md) for sample code to send data using the Logs ingestion API. ## Prerequisites |
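Besides the listed client libraries, the REST endpoint can be exercised directly, for example with `az rest`, which handles token acquisition. The following is a minimal sketch only: the DCE URI, DCR immutable ID, stream name, and body fields are placeholders that must match your own table and DCR, and the API version shown is an assumption to verify against the current REST reference:

```azurecli
# Placeholders throughout: DCE ingestion URI, DCR immutable ID, stream (table) name,
# and a body whose fields match the target table's columns.
az rest --method post \
  --url "https://<dce-name>.<region>-1.ingest.monitor.azure.com/dataCollectionRules/<dcr-immutable-id>/streams/Custom-MyTable_CL?api-version=2023-01-01" \
  --resource "https://monitor.azure.com" \
  --headers "Content-Type=application/json" \
  --body '[{"TimeGenerated": "2023-11-14T15:00:00Z", "RawData": "sample record"}]'
```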
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | You may need to integrate Azure Monitor with other systems or to build custom so |[Azure Functions](../azure-functions/functions-overview.md)| Similar to Azure Logic Apps, Azure Functions give you the ability to preprocess and postprocess monitoring data, as well as perform complex actions beyond the scope of typical Azure Monitor alerts. Because Azure Functions uses code, it provides additional flexibility over Logic Apps. |Azure DevOps and GitHub | Azure Monitor Application Insights gives you the ability to create [Work Item Integration](app/release-and-work-item-insights.md?tabs=work-item-integration) with monitoring data embedded in it. Additional options include [release annotations](app/release-and-work-item-insights.md?tabs=release-annotations) and [continuous monitoring](app/release-and-work-item-insights.md?tabs=continuous-monitoring). | +## Frequently asked questions ++This section provides answers to common questions. ++### What's the difference between Azure Monitor, Log Analytics, and Application Insights? ++In September 2018, Microsoft combined Azure Monitor, Log Analytics, and Application Insights into a single service to provide powerful end-to-end monitoring of your applications and the components they rely on. Features in Log Analytics and Application Insights haven't changed, although some features have been rebranded to Azure Monitor to better reflect their new scope. The log data engine and query language of Log Analytics is now referred to as Azure Monitor Logs. ++### How much does Azure Monitor cost? ++The cost of Azure Monitor is based on your usage of different features and is primarily determined by the amount of data you collect. See [Azure Monitor cost and usage](./usage-estimated-costs.md) for details on how costs are determined and [Cost optimization in Azure Monitor](./best-practices-cost.md) for recommendations on reducing your overall spend. ++### Is there an on-premises version of Azure Monitor? ++No. Azure Monitor is a scalable cloud service that processes and stores large amounts of data, although Azure Monitor can monitor resources that are on-premises and in other clouds. ++### Does Azure Monitor integrate with System Center Operations Manager? ++You can connect your existing System Center Operations Manager management group to Azure Monitor to collect data from agents into Azure Monitor Logs. This capability allows you to use log queries and solutions to analyze data collected from agents. You can also configure existing System Center Operations Manager agents to send data directly to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md). + ## Next steps - [Getting started with Azure Monitor](getting-started.md) |
azure-resource-manager | Bicep Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md | You can enable preview features by adding: The preceding sample enables `userDefinedTypes` and `extensibility`. The available experimental features include: - **assertions**: Should be enabled in tandem with `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions using the `assert` keyword comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference. For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967).-- **compileTimeImports**: Allows you to use symbols defined in another template. See [Import user-defined data types](./bicep-import.md#import-user-defined-data-types-preview).+- **compileTimeImports**: Allows you to use symbols defined in another Bicep file. See [Import types, variables and functions](./bicep-import.md#import-types-variables-and-functions-preview). - **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). - **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file. - **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245). |
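Experimental features generally require a recent Bicep release, so it's worth confirming your tooling before editing `bicepconfig.json`. A quick check using the Bicep CLI that ships with the Azure CLI:

```azurecli
# Check the installed Bicep version, then upgrade to the latest release.
az bicep version
az bicep upgrade
```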
azure-resource-manager | Bicep Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-import.md | Title: Import Bicep namespaces -description: Describes how to import Bicep namespaces. + Title: Imports in Bicep +description: Describes how to import shared functionality and namespaces in Bicep. Last updated 09/21/2023 -# Import Bicep namespaces +# Imports in Bicep -This article describes the syntax you use to import user-defined data types and the Bicep namespaces including the Bicep extensibility providers. +This article describes the syntax you use to export and import shared functionality, as well as namespaces for Bicep extensibility providers. -## Import user-defined data types (Preview) +## Exporting types, variables and functions (Preview) -[Bicep version 0.21.1 or newer](./install.md) is required to use this feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). +> [!NOTE] +> [Bicep version 0.23 or newer](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled. +The `@export()` decorator is used to indicate that a given statement can be imported by another file. This decorator is only valid on type, variable and function statements. Variable statements marked with `@export()` must be compile-time constants. -The syntax for importing [user-defined data type](./user-defined-data-types.md) is: +The syntax for exporting functionality for use in other Bicep files is: ```bicep-import {<user-defined-data-type-name>, <user-defined-data-type-name>, ...} from '<bicep-file-name>' +@export() +<statement_to_export> +``` ++## Import types, variables and functions (Preview) ++> [!NOTE] +> [Bicep version 0.23.X or newer](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled. ++The syntax for importing functionality from another Bicep file is: ++```bicep +import {<symbol_name>, <symbol_name>, ...} from '<bicep_file_name>' +``` ++With optional aliasing to rename symbols: ++```bicep +import {<symbol_name> as <alias_name>, ...} from '<bicep_file_name>' ``` -or with wildcard syntax: +Using the wildcard import syntax: ```bicep-import * as <namespace> from '<bicep-file-name>' +import * as <alias_name> from '<bicep_file_name>' ``` -You can mix and match the two preceding syntaxes. +You can mix and match the preceding syntaxes. To access imported symbols using the wildcard syntax, you must use the `.` operator: `<alias_name>.<exported_symbol>`. -Only user-defined data types that bear the [@export() decorator](./user-defined-data-types.md#import-types-between-bicep-files-preview) can be imported. Currently, this decorator can only be used on [`type`](./user-defined-data-types.md) statements. +Only statements that have been [exported](#exporting-types-variables-and-functions-preview) in the file being referenced are available to be imported. -Imported types can be used anywhere a user-defined type might be, for example, within the type clauses of type, param, and output statements. 
+Functionality that has been imported from another file can be used without restrictions. For example, imported variables can be used anywhere a variable declared in-file would normally be valid. ### Example -myTypes.bicep +exports.bicep ```bicep @export()-type myString = string +type myObjectType = { + foo: string + bar: int +} @export()-type myInt = int +var myConstant = 'This is a constant value' ++@export() +func sayHello(name string) string => 'Hello ${name}!' ``` main.bicep ```bicep-import * as myImports from 'myTypes.bicep' -import {myInt} from 'myTypes.bicep' +import * as myImports from 'exports.bicep' +import {myObjectType, sayHello} from 'exports.bicep' -param exampleString myImports.myString = 'Bicep' -param exampleInt myInt = 3 +param exampleObject myObjectType = { + foo: myImports.myConstant + bar: 0 +} -output outString myImports.myString = exampleString -output outInt myInt = exampleInt +output greeting string = sayHello('Bicep user') +output exampleObject myImports.myObjectType = exampleObject ``` -## Import namespaces and extensibility providers +## Import namespaces and extensibility providers (Preview) -The syntax for importing the namespaces is: +> [!NOTE] +> The experimental feature `extensibility` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features) to use this feature. ++The syntax for importing namespaces is: ```bicep import 'az@1.0.0' Both `az` and `sys` are Bicep built-in namespaces. They are imported by default. The syntax for importing Bicep extensibility providers is: +```bicep +import '<provider-name>@<provider-version>' +``` ++The syntax for importing Bicep extensibility providers that require configuration is: + ```bicep import '<provider-name>@<provider-version>' with { <provider-properties> |
azure-resource-manager | User Defined Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-functions.md | When defining a user function, there are some restrictions: * The function can't access variables. * The function can only use parameters that are defined in the function.-* The function can't call other user-defined functions. * The function can't use the [reference](bicep-functions-resource.md#reference) function or any of the [list](bicep-functions-resource.md#list) functions. * Parameters for the function can't have default values. |
azure-vmware | Azure Vmware Solution Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md | description: This article provides details about the known issues of Azure VMwar Previously updated : 4/20/2023 Last updated : 10/27/2023 # Known issues: Azure VMware Solution This article describes the currently known issues with Azure VMware Solution. -Refer to the table below to find details about resolution dates or possible workarounds. For more information about the different feature enhancements and bug fixes in Azure VMware Solution, see [What's New](azure-vmware-solution-platform-updates.md). +Refer to the table to find details about resolution dates or possible workarounds. For more information about the different feature enhancements and bug fixes in Azure VMware Solution, see [What's New](azure-vmware-solution-platform-updates.md). |Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |-| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | +| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This alert should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | | When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |+| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | Alarm raised because there are more than 4 clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. Microsoft will detect this issue and complete the scale-up; however, you can also open a support request.
| 2023 | +| When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option is not available. | 2023 | The default VMware HCX Compute Profile does not have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 | +| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | Microsoft is currently working with its security teams and partners to evaluate the risk to Azure VMware Solution and its customers. Initial investigations have shown that controls in place within Azure VMware Solution reduce the risk of CVE-2023-34048. However, Microsoft is working on a plan to roll out security fixes in the near future to completely remediate the security vulnerability. | October 2023 | In this article, you learned about the current known issues with the Azure VMware Solution. |
azure-web-pubsub | Howto Enable Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md | Update the **webpubsub** extension to the latest version, then run: ```azurecli az webpubsub replica create --sku Premium_P1 -l eastus --replica-name MyReplica --name MyWebPubSub -g MyResourceGroup ``` ## Pricing and resource unit Each replica has its **own** `unit` and `autoscale settings`. |
baremetal-infrastructure | Solution Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md | The following table describes the network topologies supported by each network f |Topology |Supported | | :- |::|-|Connectivity to BareMetal (BM) in a local VNet| Yes | -|Connectivity to BM in a peered VNet (Same region)|Yes | -|Connectivity to BM in a peered VNet\* (Cross region or global peering)\*|No | +|Connectivity to BareMetal Infrastructure (BMI) in a local VNet| Yes | +|Connectivity to BMI in a peered VNet (Same region)|Yes | +|Connectivity to BMI in a peered VNet\* (Cross region or global peering) with VWAN\*|Yes | +|Connectivity to BMI in a peered VNet\* (Cross region or global peering)\* without VWAN| No| |On-premises connectivity to Delegated Subnet via Global and Local ExpressRoute |Yes| |ExpressRoute (ER) FastPath |No | -|Connectivity from on-premises to a BM in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes | +|Connectivity from on-premises to BMI in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes | |On-premises connectivity to Delegated Subnet via VPN GW| Yes |-|Connectivity from on-premises to a BM in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes | +|Connectivity from on-premises to BMI in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes | |Connectivity over Active/Passive VPN gateways| Yes | |Connectivity over Active/Active VPN gateways| No | |Connectivity over Active/Active Zone Redundant gateways| No | The following table describes what's supported for each network features confi | :- | -: | |Delegated subnet per VNet |1| |[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No|-|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets|No| +|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets with VWAN|Yes| +|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets without VWAN| No| |Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in the same Vnet on Azure-delegated subnets|No| |Connectivity from BareMetal to [private endpoints](../../../private-link/private-endpoint-overview.md) in a different spoke Vnet connected to vWAN|Yes| |Load balancers for NC2 on Azure traffic|No| |
chaos-studio | Chaos Studio Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md | During the public preview of Azure Chaos Studio, there are a few limitations and - **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers: * **Windows:** Microsoft Edge, Google Chrome, and Firefox * **MacOS:** Safari, Google Chrome, and Firefox-- **Terraform** - At present Chaos Studio does not support terraform.+- **Terraform** - Chaos Studio does not support Terraform at this time. +- **PowerShell modules** - Chaos Studio does not have dedicated PowerShell modules at this time. For PowerShell, use our REST API. +- **Azure CLI** - Chaos Studio does not have dedicated Azure CLI modules at this time. Use our REST API from the Azure CLI. +- **Azure Policy** - Chaos Studio does not support the applicable built-in policies for our service (audit policy for customer-managed keys and Private Link) at this time. +- **Private Link** - To use Private Link for Agent Service, you need to have your subscription allowlisted and use our preview API version. We do not support Azure portal UI experiments for Agent-based experiments using Private Link. These restrictions do NOT apply to our Service-direct faults. +- **Customer-Managed Keys** - You will need to use our 2023-10-27-preview REST API via a CLI to create CMK-enabled experiments. We do not support portal UI experiments using CMK at this time. +- **Lockbox** - At present, we do not have integration with Customer Lockbox. +- **Java SDK** - At present, we do not have a dedicated Java SDK. If this is something you would use, reach out to us with your feature request. - **Built-in roles** - Chaos Studio does not currently have its own built-in roles. Permissions may be attained to run a chaos experiment by either assigning an [Azure built-in role](chaos-studio-fault-providers.md) or a created custom role to the experiment's identity. ## Known issues |
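For the PowerShell and Azure CLI gaps called out above, `az rest` can call the Chaos Studio REST API directly. The following is a minimal sketch that lists the experiments in a subscription; the URL follows the standard ARM pattern for the `Microsoft.Chaos` provider and uses the preview API version mentioned above, so verify both against the REST reference before relying on them:

```azurecli
# List Chaos Studio experiments in a subscription via the REST API.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Chaos/experiments?api-version=2023-10-27-preview"
```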
cloud-shell | Private Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md | description: This article describes a scenario for using Azure Cloud Shell in a ms.contributor: jahelmic Last updated 06/21/2023 Title: Using Cloud Shell in an Azure virtual network+ Title: Use Cloud Shell in an Azure virtual network -# Using Cloud Shell in an Azure virtual network +# Use Cloud Shell in an Azure virtual network -By default, Cloud Shell sessions run in a container in a Microsoft network that's separate from your -resources. Commands run inside the container can't access resources in a private virtual network. -For example, you can't use SSH to connect from Cloud Shell to a virtual machine that only has a -private IP address, or use `kubectl` to connect to a Kubernetes cluster that has locked down access. +By default, Azure Cloud Shell sessions run in a container in a Microsoft network that's separate +from your resources. Commands that run inside the container can't access resources in a private +virtual network. For example, you can't use Secure Shell (SSH) to connect from Cloud Shell to a +virtual machine that has only a private IP address, or use `kubectl` to connect to a Kubernetes +cluster that has locked down access. -To provide access to your private resources, you can deploy Cloud Shell into an Azure Virtual -Network that you control. This is referred to as _VNET isolation_. +To provide access to your private resources, you can deploy Cloud Shell into an Azure virtual +network that you control. This technique is called _virtual network isolation_. -## Benefits to VNET isolation with Azure Cloud Shell +## Benefits of virtual network isolation with Cloud Shell -Deploying Azure Cloud Shell in a private VNET offers several benefits: +Deploying Cloud Shell in a private virtual network offers these benefits: -- The resources you want to manage don't have to have public IP addresses.-- You can use command line tools, SSH, and PowerShell remoting from the Cloud Shell container to+- The resources that you want to manage don't need to have public IP addresses. +- You can use command-line tools, SSH, and PowerShell remoting from the Cloud Shell container to manage your resources. - The storage account that Cloud Shell uses doesn't have to be publicly accessible. -## Things to consider before deploying Azure Cloud Shell in a VNET +## Things to consider before deploying Azure Cloud Shell in a virtual network - Starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.-- VNET isolation requires you to use [Azure Relay][01], which is a paid service. In the Cloud Shell- scenario, one hybrid connection is used for each administrator while they're using Cloud Shell. - The connection is automatically closed when the Cloud Shell session ends. +- Virtual network isolation requires you to use [Azure Relay][01], which is a paid service. In the + Cloud Shell scenario, one hybrid connection is used for each administrator while they're using + Cloud Shell. The connection is automatically closed when the Cloud Shell session ends. ## Architecture The following diagram shows the resource architecture that you must build to enable this scenario. 
-![Illustration of Cloud Shell isolated VNET architecture.][03] +![Illustration of a Cloud Shell isolated virtual network architecture.][03] -- **Customer Client Network** - Client users can be located anywhere on the Internet to securely+- **Customer client network**: Client users can be located anywhere on the internet to securely access and authenticate to the Azure portal and use Cloud Shell to manage resources contained in- the customers subscription. For stricter security, you can allow users to launch Cloud Shell only + the customer's subscription. For stricter security, you can allow users to open Cloud Shell only from the virtual network contained in your subscription.-- **Microsoft Network** - Customers connect to the Azure portal on Microsoft's network to- authenticate and launch Cloud Shell. -- **Customer Virtual Network** - This is the network that contains the subnets to support VNET- isolation. Resources such as virtual machines and services are directly accessible from Cloud - Shell without the need to assign a public IP address. -- **Azure Relay** - An [Azure Relay][01] allows two endpoints that aren't directly reachable to+- **Microsoft network**: Customers connect to the Azure portal on Microsoft's network to + authenticate and open Cloud Shell. +- **Customer virtual network**: This is the network that contains the subnets to support virtual + network isolation. Resources such as virtual machines and services are directly accessible from + Cloud Shell without the need to assign a public IP address. +- **Azure Relay**: [Azure Relay][01] allows two endpoints that aren't directly reachable to communicate. In this case, it's used to allow the administrator's browser to communicate with the container in the private network.-- **File share** - Cloud Shell requires a storage account that is accessible from the virtual- network. The storage account provides the file share used by Cloud Shell users. +- **File share**: Cloud Shell requires a storage account that's accessible from the virtual network. + The storage account provides the file share used by Cloud Shell users. ## Related links |
cloud-shell | Quickstart Deploy Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-deploy-vnet.md | -# Deploy Azure Cloud Shell in a virtual network with quickstart templates +# Deploy Cloud Shell in a virtual network by using quickstart templates -Before you can deploy Azure Cloud Shell in a virtual network (VNet) configuration using the -quickstart templates, there are several prerequisites to complete before running the templates. +Before you run quickstart templates to deploy Azure Cloud Shell in a virtual network (VNet), there +are several prerequisites to complete. -This document guides you through the process to complete the configuration. +This article walks you through the following steps to configure and deploy Cloud Shell in a virtual +network: -## Steps to deploy Azure Cloud Shell in a virtual network --This article walks you through the following steps to deploy Azure Cloud Shell in a virtual network: --1. Register resource providers -1. Collect the required information -1. Create the virtual networks using the **Azure Cloud Shell - VNet** ARM template -1. Create the virtual network storage account using the **Azure Cloud Shell - VNet storage** ARM - template -1. Configure and use Azure Cloud Shell in a virtual network +1. Register resource providers. +1. Collect the required information. +1. Create the virtual networks by using the **Azure Cloud Shell - VNet** Azure Resource Manager + template (ARM template). +1. Create the virtual network storage account by using the **Azure Cloud Shell - VNet storage** ARM + template. +1. Configure and use Cloud Shell in a virtual network. ## 1. Register resource providers -Azure Cloud Shell needs access to certain Azure resources. That access is made available through +Cloud Shell needs access to certain Azure resources. You make that access available through resource providers. The following resource providers must be registered in your subscription: - **Microsoft.CloudShell** - **Microsoft.ContainerInstances** - **Microsoft.Relay** -Depending when your tenant was created, some of these providers might already be registered. +Depending on when your tenant was created, some of these providers might already be registered. -To see all resource providers, and the registration status for your subscription: +To see all resource providers and the registration status for your subscription: 1. Sign in to the [Azure portal][04]. 1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options.-1. Select the subscription you want to view. +1. Select the subscription that you want to view. 1. On the left menu, under **Settings**, select **Resource providers**. 1. In the search box, enter `cloudshell` to search for the resource provider.-1. Select the **Microsoft.CloudShell** resource provider register from the provider list. -1. Select **Register** to change the status from **unregistered** to **Registered**. +1. Select the **Microsoft.CloudShell** resource provider from the provider list. +1. Select **Register** to change the status from **unregistered** to **registered**. 1. Repeat the previous steps for the **Microsoft.ContainerInstances** and **Microsoft.Relay** resource providers. - [![Screenshot of selecting resource providers in the Azure portal.][98]][98a] +[![Screenshot of selecting resource providers in the Azure portal.][98]][98a] ## 2. 
Collect the required information -There are several pieces of information that you need to collect before you can deploy Azure Cloud. -You can use the default Azure Cloud Shell instance to gather the required information and create the -necessary resources. You should create dedicated resources for the Azure Cloud Shell VNet -deployment. All resources must be in the same Azure region and contained in the same resource group. +You need to collect several pieces of information before you can deploy Cloud Shell. ++You can use the default Cloud Shell instance to gather the required information and create the +necessary resources. You should create dedicated resources for the Cloud Shell virtual network +deployment. All resources must be in the same Azure region and in the same resource group. ++Fill in the following values: -- **Subscription** - The name of your subscription containing the resource group used for the Azure- Cloud Shell VNet deployment -- **Resource Group** - The name of the resource group used for the Azure Cloud Shell VNet deployment-- **Region** - The location of the resource group-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNet-- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group-- **Azure Relay Namespace** - The name that you want to assign to the Relay resource created by the- template +- **Subscription**: The name of your subscription that contains the resource group for the Cloud + Shell virtual network deployment. +- **Resource Group**: The name of the resource group for the Cloud Shell virtual network deployment. +- **Region**: The location of the resource group. +- **Virtual Network**: The name of the Cloud Shell virtual network. +- **Azure Container Instance OID**: The ID of the Azure container instance for your resource group. +- **Azure Relay Namespace**: The name that you want to assign to the Azure Relay resource that the + template creates. ### Create a resource group -You can create the resource group using the Azure portal, Azure CLI, or Azure PowerShell. For more -information, see the following articles: +You can create the resource group by using the Azure portal, the Azure CLI, or Azure PowerShell. For +more information, see the following articles: - [Manage Azure resource groups by using the Azure portal][02] - [Manage Azure resource groups by using Azure CLI][01] information, see the following articles: ### Create a virtual network -You can create the virtual network using the Azure portal, Azure CLI, or Azure PowerShell. For more -information, see the following articles: +You can create the virtual network by using the Azure portal, the Azure CLI, or Azure PowerShell. +For more information, see the following articles: - [Use the Azure portal to create a virtual network][05] - [Use Azure PowerShell to create a virtual network][06] - [Use Azure CLI to create a virtual network][04] > [!NOTE]-> When setting the Container subnet address prefix for the Cloud Shell subnet it's important to -> consider the number of Cloud Shell sessions you need to run concurrently. If the number of Cloud -> Shell sessions exceeds the available IP addresses in the container subnet, users of those sessions -> can't connect to Cloud Shell. Increase the container subnet range to accommodate your specific -> needs. 
For more information, see the _Change Network Settings_ section of -> [Add, change, or delete a virtual network subnet][07] +> When you're setting the container subnet address prefix for the Cloud Shell subnet, it's important +> to consider the number of Cloud Shell sessions that you need to run concurrently. If the number of +> Cloud Shell sessions exceeds the available IP addresses in the container subnet, users of those +> sessions can't connect to Cloud Shell. Increase the container subnet range to accommodate your +> specific needs. For more information, see the "Change subnet settings" section of +> [Add, change, or delete a virtual network subnet][07]. -### Azure Container Instance ID +### Get the Azure container instance ID -The **Azure Container Instance ID** is a unique value for every tenant. You use this identifier in -the [quickstart templates][07] to configure virtual network for Cloud Shell. +The Azure container instance ID is a unique value for every tenant. You use this identifier in +the [quickstart templates][07] to configure a virtual network for Cloud Shell. -1. Sign in to the [Azure portal][09]. From the **Home** screen, select **Microsoft Entra ID**. If - the icon isn't displayed, enter `Microsoft Entra ID` in the top search bar. -1. In the left menu, select **Overview** and enter `azure container instance service` into the +1. Sign in to the [Azure portal][09]. From the home page, select **Microsoft Entra ID**. If the icon + isn't displayed, enter `Microsoft Entra ID` in the top search bar. +1. On the left menu, select **Overview**. Then enter `azure container instance service` in the search bar. [![Screenshot of searching for Azure Container Instance Service.][95]][95a] -1. In the results under **Enterprise applications**, select the **Azure Container Instance Service**. -1. Find **ObjectID** listed as a property on the **Overview** page for **Azure Container Instance - Service**. -1. You use this ID in the quickstart template for virtual network. +1. In the results, under **Enterprise applications**, select **Azure Container Instance Service**. +1. On the **Overview** page for **Azure Container Instance Service**, find the **Object ID** value + that's listed as a property. ++ You use this ID in the quickstart template for the virtual network. [![Screenshot of Azure Container Instance Service details.][96]][96a] -## 3. Create the virtual network using the ARM template +## 3. Create the virtual network by using the ARM template Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual-network. The template creates three subnets under the virtual network created earlier. You might -choose to change the supplied names of the subnets or use the defaults. The virtual network, along -with the subnets, require valid IP address assignments. You need at least one IP address for the -Relay subnet and enough IP addresses in the container subnet to support the number of concurrent -sessions you expect to use. --The ARM template requires specific information about the resources you created earlier, along with -naming information for new resources. This information is filled out along with the prefilled +network. The template creates three subnets under the virtual network that you created earlier. You +might choose to change the supplied names of the subnets or use the defaults. ++The virtual network, along with the subnets, requires valid IP address assignments. 
You need at +least one IP address for the Relay subnet and enough IP addresses in the container subnet to support +the number of concurrent sessions that you expect to use. ++The ARM template requires specific information about the resources that you created earlier, along +with naming information for new resources. This information is filled out along with the prefilled information in the form. -Information needed for the template: +Information that you need for the template includes: -- **Subscription** - The name of your subscription containing the resource group for Azure Cloud- Shell VNet -- **Resource Group** - The resource group name of either an existing or newly created resource group-- **Region** - Location of the resource group-- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell virtual network-- **Network Security Group** - The name that you want to assign to the Network Security Group- created by the template -- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group+- **Subscription**: The name of your subscription that contains the resource group for the Cloud + Shell virtual network. +- **Resource Group**: The name of an existing or newly created resource group. +- **Region**: The location of the resource group. +- **Virtual Network**: The name of the Cloud Shell virtual network. +- **Network Security Group**: The name that you want to assign to the network security group (NSG) + that the template creates. +- **Azure Container Instance OID**: The ID of the Azure container instance for your resource group. Fill out the form with the following information: | Project details | Value | | | -- |-| Subscription | Defaults to the current subscription context.<br>For this example, we're using `Contoso (carolb)` | -| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. | +| **Subscription** | Defaults to the current subscription context.<br>The example in this article uses `Contoso (carolb)`. | +| **Resource group** | Enter the name of the resource group from the prerequisite information.<br>The example in this article uses `rg-cloudshell-eastus`. | | Instance details | Value | | - | - |-| Region | Prefilled with your default region.<br>For this example, we're using `East US`. | -| Existing VNET Name | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `vnet-cloudshell-eastus`. | -| Relay Namespace Name | Create a name that you want to assign to the Relay resource created by the template.<br>For this example, we're using `arn-cloudshell-eastus`. | -| Nsg Name | Enter the name of the Network Security Group (NSG). The deployment creates this NSG and assigns an access rule to it. | -| Azure Container Instance OID | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. | -| Container Subnet Name | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. | -| Container Subnet Address Prefix | For this example, we use `10.1.0.0/16`, which provides 65,543 IP addresses for Cloud Shell instances. | -| Relay Subnet Name | Defaults to `relaysubnet`. Enter the name of the subnet containing your relay. | -| Relay Subnet Address Prefix | For this example, we use `10.0.2.0/24`. | -| Storage Subnet Name | Defaults to `storagesubnet`. 
Enter the name of the subnet containing your storage. | -| Storage Subnet Address Prefix | For this example, we use `10.0.3.0/24`. | -| Private Endpoint Name | Defaults to `cloudshellRelayEndpoint`. Enter the name of the subnet containing your container. | -| Tag Name | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. | -| Location | Defaults to `[resourceGroup().location]`. Leave unchanged. | --Once the form is complete, select **Review + Create** and deploy the network ARM template to your +| **Region** | Prefilled with your default region.<br>The example in this article uses `East US`. | +| **Existing VNET Name** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `vnet-cloudshell-eastus`. | +| **Relay Namespace Name** | Create a name that you want to assign to the Relay resource that the template creates.<br>The example in this article uses `arn-cloudshell-eastus`. | +| **Nsg Name** | Enter the name of the NSG. The deployment creates this NSG and assigns an access rule to it. | +| **Azure Container Instance OID** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. | +| **Container Subnet Name** | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. | +| **Container Subnet Address Prefix** | The example in this article uses `10.1.0.0/16`, which provides 65,536 IP addresses for Cloud Shell instances. | +| **Relay Subnet Name** | Defaults to `relaysubnet`. Enter the name of the subnet that contains your relay. | +| **Relay Subnet Address Prefix** | The example in this article uses `10.0.2.0/24`. | +| **Storage Subnet Name** | Defaults to `storagesubnet`. Enter the name of the subnet that contains your storage. | +| **Storage Subnet Address Prefix** | The example in this article uses `10.0.3.0/24`. | +| **Private Endpoint Name** | Defaults to `cloudshellRelayEndpoint`. Enter the name of the private endpoint that connects to your relay. | +| **Tag Name** | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. | +| **Location** | Defaults to `[resourceGroup().location]`. Leave unchanged. | ++After the form is complete, select **Review + Create** and deploy the network ARM template to your subscription. -## 4. Create the virtual network storage using the ARM template +## 4. Create the virtual network storage by using the ARM template Use the [Azure Cloud Shell - VNet storage][09] template to create Cloud Shell resources in a virtual network. The template creates the storage account and assigns it to the private virtual network. -The ARM template requires specific information about the resources you created earlier, along +The ARM template requires specific information about the resources that you created earlier, along with naming information for new resources. 
-Information needed for the template: +Information that you need for the template includes: -- **Subscription** - The name of the subscription containing the resource group for Azure Cloud+- **Subscription**: The name of the subscription that contains the resource group for the Cloud Shell virtual network.-- **Resource Group** - The resource group name of either an existing or newly created resource group-- **Region** - Location of the resource group-- **Existing virtual network name** - The name of the virtual network created earlier-- **Existing Storage Subnet Name** - The name of the storage subnet created with the Network- quickstart template -- **Existing Container Subnet Name** - The name of the container subnet created with the Network- quickstart template +- **Resource Group**: The name of an existing or newly created resource group. +- **Region**: The location of the resource group. +- **Existing virtual network name**: The name of the virtual network that you created earlier. +- **Existing Storage Subnet Name**: The name of the storage subnet that you created by using the + network quickstart template. +- **Existing Container Subnet Name**: The name of the container subnet that you created by using the + network quickstart template. Fill out the form with the following information: | Project details | Value | | | -- |-| Subscription | Defaults to the current subscription context.<br>For this example, we're using `Contoso (carolb)` | -| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. | +| **Subscription** | Defaults to the current subscription context.<br>The example in this article uses `Contoso (carolb)`. | +| **Resource group** | Enter the name of the resource group from the prerequisite information.<br>The example in this article uses `rg-cloudshell-eastus`. | | Instance details | Value | | | |-| Region | Prefilled with your default region.<br>For this example, we're using `East US`. | -| Existing VNET Name | For this example, we're using `vnet-cloudshell-eastus`. | -| Existing Storage Subnet Name | Fill in the name of the resource created by the network template. | -| Existing Container Subnet Name | Fill in the name of the resource created by the network template. | -| Storage Account Name | Create a name for the new storage account.<br>For this example, we're using `myvnetstorage1138`. | -| File Share Name | Defaults to `acsshare`. Enter the name of the file share want to create. | -| Resource Tags | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. | -| Location | Defaults to `[resourceGroup().location]`. Leave unchanged. | --Once the form is complete, select **Review + Create** and deploy the network ARM template to your +| **Region** | Prefilled with your default region.<br>The example in this article uses `East US`. | +| **Existing VNET Name** | The example in this article uses `vnet-cloudshell-eastus`. | +| **Existing Storage Subnet Name** | Fill in the name of the resource that the network template creates. | +| **Existing Container Subnet Name** | Fill in the name of the resource that the network template creates. | +| **Storage Account Name** | Create a name for the new storage account.<br>The example in this article uses `myvnetstorage1138`. | +| **File Share Name** | Defaults to `acsshare`. Enter the name of the file share that you want to create. | +| **Resource Tags** | Defaults to `{"Environment":"cloudshell"}`. 
Leave unchanged or add more tags. | +| **Location** | Defaults to `[resourceGroup().location]`. Leave unchanged. | ++After the form is complete, select **Review + Create** and deploy the network ARM template to your subscription. -## 5. Configuring Cloud Shell to use a virtual network +## 5. Configure Cloud Shell to use a virtual network -After you have deployed your private Cloud Shell instance, each Cloud Shell user must change their +After you deploy your private Cloud Shell instance, each Cloud Shell user must change their configuration to use the new private instance. -If you have used the default Cloud Shell before deploying the private instance, you must reset your -user settings. +If you used the default Cloud Shell instance before you deployed the private instance, you must +reset your user settings: -1. Open Cloud Shell +1. Open Cloud Shell. 1. Select **Cloud Shell settings** from the menu bar (gear icon).-1. Select **Reset user settings** then select **Reset** +1. Select **Reset user settings**, and then select **Reset**. Resetting the user settings triggers the first-time user experience the next time you start Cloud Shell. -[![Screenshot of Cloud Shell storage dialog box.][97]][97a] +[![Screenshot of the Cloud Shell storage dialog.][97]][97a] -1. Choose your preferred shell experience (Bash or PowerShell) -1. Select **Show advanced settings** +1. Choose your preferred shell experience (Bash or PowerShell). +1. Select **Show advanced settings**. 1. Select the **Show VNET isolation settings** checkbox.-1. Choose the **Subscription** containing your private Cloud Shell instance. -1. Choose the **Region** containing your private Cloud Shell instance. -1. Select the **Resource group** name containing your private Cloud Shell instance. If you have - selected the correct resource group, the **Virtual network**, **Network profile**, and **Relay - namespace** should be automatically populated with the correct values. -1. Enter the name for the **File share** you created with the storage template. +1. Choose the subscription that contains your private Cloud Shell instance. +1. Choose the region that contains your private Cloud Shell instance. +1. For **Resource group**, select the resource group that contains your private Cloud Shell + instance. ++ If you select the correct resource group, **Virtual network**, **Network profile**, and **Relay + namespace** are automatically populated with the correct values. +1. For **File share**, enter the name of the file share that you created by using the storage + template. 1. Select **Create storage**. ## Next steps -You must complete the Cloud Shell configuration steps for each user that needs to use the new -private Cloud Shell instance. +You must complete the Cloud Shell configuration steps for each user who needs to use the new private +Cloud Shell instance. <!-- link references --> [01]: /azure/azure-resource-manager/management/manage-resource-groups-cli |
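For anyone scripting the quickstart flow above rather than clicking through the portal, here's a minimal sketch using the Azure SDK for Python (`azure-identity`, `azure-mgmt-resource`). The template URI is a placeholder standing in for the quickstart template link, and the parameter names are assumptions modeled on the form fields, so confirm them against the template's definition before running:

```python
# Register the Cloud Shell resource providers, then deploy the VNet quickstart
# ARM template. Assumes: pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Step 1: register the required providers. Note that the ARM provider
# namespace for container instances is the singular Microsoft.ContainerInstance.
for namespace in ("Microsoft.CloudShell", "Microsoft.ContainerInstance", "Microsoft.Relay"):
    client.providers.register(namespace)

# Step 3: deploy the network template with the example values from the tables above.
deployment = client.deployments.begin_create_or_update(
    "rg-cloudshell-eastus",
    "cloudshell-vnet",
    {
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": "https://example.com/azuredeploy.json"},  # placeholder
            "parameters": {
                # Assumed parameter names; check the template for the exact names.
                "existingVNETName": {"value": "vnet-cloudshell-eastus"},
                "relayNamespaceName": {"value": "arn-cloudshell-eastus"},
                "nsgName": {"value": "nsg-cloudshell-eastus"},
                "azureContainerInstanceOID": {"value": "8fe7fd25-33fe-4f89-ade3-0e705fcf4370"},
            },
        }
    },
).result()
print(deployment.properties.provisioning_state)
```

The storage template in step 4 deploys the same way; only the template link and parameters change.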
cloud-shell | Vnet Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet-troubleshooting.md | + +description: > + This article provides instructions for troubleshooting a private virtual network deployment of + Azure Cloud Shell. +ms.contributor: jahelmic Last updated : 10/26/2023++ Title: Troubleshoot Azure Cloud Shell in a private virtual network ++# Troubleshoot Azure Cloud Shell in a private virtual network ++This article provides instructions for troubleshooting a private virtual network deployment of Azure +Cloud Shell. For best results, and to keep your deployment supportable, follow the deployment instructions in the +[Deploy Azure Cloud Shell in a virtual network using quickstart templates][03] article. ++## Verify that you have the correct permissions ++To configure Azure Cloud Shell in a virtual network, you must have the **Owner** role assignment on +the subscription. To view and assign roles, see [List Owners of a Subscription][01]. ++Unless otherwise noted, all the troubleshooting steps start in the **Subscriptions** section of the +Azure portal. ++1. Sign in to the [Azure portal][02]. +1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options. +1. Select the subscription you want to view. ++## Verify resource provider registrations ++Azure Cloud Shell needs access to certain Azure resources. That access is made available through +resource providers. The following resource providers must be registered in your subscription: ++- **Microsoft.CloudShell** +- **Microsoft.ContainerInstances** +- **Microsoft.Relay** ++To see all resource providers and the registration status for your subscription: ++1. Go to the **Settings** section of the left menu of your subscription page. +1. Select **Resource providers**. +1. In the search box, enter `cloudshell` to search for the resource provider. +1. Select the **Microsoft.CloudShell** resource provider from the provider list. +1. Select **Register** to change the status from **unregistered** to **Registered**. +1. Repeat the previous steps for the **Microsoft.ContainerInstances** and **Microsoft.Relay** + resource providers. ++ [![Screenshot of selecting resource providers in the Azure portal.][ss01]][ss01x] ++## Verify Azure Container Instance Service role assignments ++The **Azure Container Instance Service** application needs specific permissions for the **Relay** +and **Network Profile** resources. Use the following steps to see the resources and the role +permissions for your subscription: ++1. Go to the **Settings** section of the left menu of your subscription page. +1. Select **Resource groups**. +1. Select the resource group you provided in the prerequisites for the deployment. +1. In the **Essentials** section of the **Overview** page, select the **Show hidden types** checkbox. + This checkbox allows you to see all the resources created by the deployment. ++ [![Screenshot showing all the resources in your resource group.][ss02]][ss02x] ++1. Select the network profile resource with the type of `microsoft.network/networkprofile`. The name + should be `aci-networkProfile-<location>` where `<location>` is the location of the resource + group. +1. On the network profile page, select **Access control (IAM)** in the left menu. +1. Select **Role assignments** from the top menu bar. +1. In the search box, enter `container`. +1. Verify that **Azure Container Instance Service** has the `Network Contributor` role. 
++ [![Screenshot showing the network profiles role assignments.][ss03]][ss03x] ++1. From the Resources page, select the relay namespace resource with the type of `Relay`. The name + should be the name of the relay namespace you provided in the deployment template. +1. On the relay page, select **Access control (IAM)**, then select **Role assignments** from the top + menu bar. +1. In the search box, enter `container`. +1. Verify that **Azure Container Instance Service** has the `Contributor` role. ++ [![Screenshot showing the network relay role assignments.][ss04]][ss04x] ++## Redeploy Cloud Shell for a private virtual network ++Verify the configurations described in this article. If you continue to receive an error message when +you try to use your deployment of Cloud Shell, you have two options: ++1. Open a support ticket. +1. Redeploy Cloud Shell for a private virtual network. ++### Open a support ticket ++If you want to open a support ticket, you can do so from the Azure portal. Be sure to capture any +error messages, including the **Correlation Id** and **Activity Id** values. Don't change any +settings or delete any resources until instructed to by a support technician. ++Follow these steps to open a support ticket: ++1. Select the **Support & Troubleshooting** icon on the top navigation bar in the Azure portal. +1. From the **Support & Troubleshooting** pane, select **Help + support**. +1. Select **Create a support request** at the top of the center pane. +1. Follow the instructions to create a support ticket. ++ [![Screenshot of creating a support ticket in the Azure portal.][ss05]][ss05x] ++### Redeploy Cloud Shell for a private virtual network ++Before you redeploy Cloud Shell, you must delete the existing deployment. In the prerequisites for +the deployment, you provided a resource group and a virtual network. If you created these resources +specifically for this deployment, then it should be safe to delete them. If you used existing +resources, then you shouldn't delete them. ++The following list provides a description of the resources created by the deployment: ++- A **microsoft.network/networkprofiles** resource named `aci-networkProfile-<location>` where + `<location>` is the location of the resource group. +- A **Private endpoint** resource named `cloudshellRelayEndpoint`. +- A **Network Interface** resource named `cloudshellRelayEndpoint.nic.<UUID>` where `<UUID>` is a + unique identifier added to the name. +- A **Virtual Network** resource that you provided from the prerequisites. +- A **Private DNS zone** named `privatelink.servicebus.windows.net`. +- A **Network security group** resource with the name you provided in the deployment template. +- A **microsoft.network/privatednszones/virtualnetworklinks** resource with a name that starts with the name + of the relay namespace you provided in the deployment template. +- A **Relay** resource with the name of the relay namespace you provided in the deployment template. +- A **Storage account** resource with the name you provided in the deployment template. ++Once you have removed the resources, you can redeploy Cloud Shell by following the steps in the +[Deploy Azure Cloud Shell in a virtual network using quickstart templates][03] article. ++You can find these resources by viewing the resource group in the Azure portal. 
++[![Screenshot of resources created by the deployment.][ss02]][ss02x] ++<!-- link references --> +[01]: /azure/role-based-access-control/role-assignments-list-portal#list-owners-of-a-subscription +[02]: https://portal.azure.com/ +[03]: quickstart-deploy-vnet.md ++[ss01]: ./media/quickstart-deploy-vnet/resource-provider.png +[ss01x]: ./media/quickstart-deploy-vnet/resource-provider.png#lightbox +[ss02]: ./media/vnet-troubleshooting/show-resource-group.png +[ss02x]: ./media/vnet-troubleshooting/show-resource-group.png#lightbox +[ss03]: ./media/vnet-troubleshooting/network-profile-role.png +[ss03x]: ./media/vnet-troubleshooting/network-profile-role.png#lightbox +[ss04]: ./media/vnet-troubleshooting/relay-namespace-role.png +[ss04x]: ./media/vnet-troubleshooting/relay-namespace-role.png#lightbox +[ss05]: ./media/vnet-troubleshooting/create-support-ticket.png +[ss05x]: ./media/vnet-troubleshooting/create-support-ticket.png#lightbox |
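As a scripted companion to the role-assignment checks in the troubleshooting article above, here's a sketch using `azure-mgmt-authorization`. The subscription ID and scope are placeholders pointing at the example relay namespace; the same pattern applies to the network profile resource:

```python
# List role assignments at the scope of a resource to confirm the
# Azure Container Instance Service principal has the expected role.
# Assumes: pip install azure-identity azure-mgmt-authorization
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Placeholder scope: the relay namespace created by the deployment.
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/rg-cloudshell-eastus"
    "/providers/Microsoft.Relay/namespaces/arn-cloudshell-eastus"
)

for assignment in client.role_assignments.list_for_scope(scope, filter="atScope()"):
    # role_definition_id is a resource ID; resolve it to a friendly role name.
    role = client.role_definitions.get_by_id(assignment.role_definition_id)
    print(assignment.principal_id, role.role_name)
```

If the expected principal doesn't appear with the `Contributor` (relay) or `Network Contributor` (network profile) role, that points to the redeployment path described above.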
communication-services | End Of Call Survey Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/end-of-call-survey-logs.md | The following are instructions for configuring your Azure Monitor resource to st ### Overview -The implementation of end-of-call survey logs represents an augmented functionality within ACS (Azure Communication Services), enabling Contoso to submit surveys to gather customers' subjective feedback on their calling experience. This approach aims to supplement the assessment of call quality beyond objective metrics such as audio and video bitrate, jitter, and latency, which may not fully capture whether a customer had a satisfactory or unsatisfactory experience. By leveraging Azure logs to publish and examine survey data, Contoso gains insights for analysis and identification of areas that require improvement. These survey results serve as a valuable resource for Azure Communication Services to continuously monitor and enhance quality and reliability. For more details about [End of call survey](../../../concepts/voice-video-calling/end-of-call-survey-concept.md) +The implementation of end-of-call survey logs represents an augmented functionality within Azure Communication Services, enabling Contoso to submit surveys to gather customers' subjective feedback on their calling experience. This approach aims to supplement the assessment of call quality beyond objective metrics such as audio and video bitrate, jitter, and latency, which may not fully capture whether a customer had a satisfactory or unsatisfactory experience. By leveraging Azure logs to publish and examine survey data, Contoso gains insights for analysis and identification of areas that require improvement. These survey results serve as a valuable resource for Azure Communication Services to continuously monitor and enhance quality and reliability. For more information, see [End of call survey](../../../concepts/voice-video-calling/end-of-call-survey-concept.md). The End of Call Survey is a valuable tool that allows you to gather insights into how end-users perceive the quality and reliability of your JavaScript/Web SDK calling solution. The accompanying logs contain crucial data that helps assess end-users' experience, including: |
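Once these survey logs flow into a Log Analytics workspace, they can be examined programmatically. Here's a sketch using the `azure-monitor-query` Python package; the workspace ID is a placeholder, and the table name `ACSCallSurvey` and rating column are assumptions based on the diagnostic-log naming convention, so verify them against your workspace schema:

```python
# Query end-of-call survey logs from the Log Analytics workspace that the
# Communication Services resource streams its diagnostic logs to.
# Assumes: pip install azure-identity azure-monitor-query
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = LogsQueryClient(DefaultAzureCredential())

# Assumed table and column names; adjust to match your workspace.
query = """
ACSCallSurvey
| summarize avg(todouble(OverallCallRating)) by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(row)
```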
communication-services | Rooms Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/rooms-logs.md | Communication Services offers the following types of logs that you can enable: | `UpsertedRoomParticipantsCount` | The count of participants upserted in a Room. | | `RemovedRoomParticipantsCount` | The count of participants removed from a Room. | | `TimeGenerated` | The timestamp (UTC) of when the log was generated. |+| `PstnDialOutEnabled` | Indicates whether a room has the ability to make PSTN calls to invite people to a meeting. | #### Example CreateRoom log Communication Services offers the following types of logs that you can enable: "CorrelationId": "Y4x6ZabFE0+E8ERwMpd68w", "Level": "Informational", "OperationName": "CreateRoom",- "OperationVersion": "2022-03-31-preview", + "OperationVersion": "2023-10-30-preview", "ResultType": "Succeeded", "ResultSignature": 201, "RoomId": "99466898241024408", "RoomLifespan": 61, "AddedRoomParticipantsCount": 4, "TimeGenerated": "5/25/2023, 4:32:49.469 AM",+ "PstnDialOutEnabled": false } ] ``` Communication Services offers the following types of logs that you can enable: "CorrelationId": "CNiZIX7fvkumtBSpFq7fxg", "Level": "Informational", "OperationName": "GetRoom",- "OperationVersion": "2022-03-31-preview", + "OperationVersion": "2023-10-30-preview", "ResultType": "Succeeded", "ResultSignature": "200", "RoomId": "99466387192310000", Communication Services offers the following types of logs that you can enable: "CorrelationId": "Bwqzh0pdnkGPDwNcMnBkng", "Level": "Informational", "OperationName": "UpdateRoom",- "OperationVersion": "2022-03-31-preview", + "OperationVersion": "2023-10-30-preview", "ResultType": "Succeeded", "ResultSignature": "200", "RoomId": "99466387192310000", "RoomLifespan": 121, "TimeGenerated": "2022-08-19T17:07:30.3543160Z",+ "PstnDialOutEnabled": false } ] ``` Communication Services offers the following types of logs that you can enable: "CorrelationId": "x7rMXmihYEe3GFho9T/H2w", "Level": "Informational", "OperationName": "DeleteRoom",- "OperationVersion": "2022-02-01", + "OperationVersion": "2023-10-30-preview", "ResultType": "Succeeded", "ResultSignature": "204", "RoomId": "99466387192310000", Communication Services offers the following types of logs that you can enable: "CorrelationId": "KibM39CaXkK+HTInfsiY2w", "Level": "Informational", "OperationName": "ListRooms",- "OperationVersion": "2022-03-31-preview", + "OperationVersion": "2023-10-30-preview", "ResultType": "Succeeded", "ResultSignature": "200", "TimeGenerated": "2022-08-19T17:07:30.5393800Z", Communication Services offers the following types of logs that you can enable: "CorrelationId": "zHT8snnUMkaXCRDFfjQDJw", "Level": "Informational", "OperationName": "UpdateParticipants",- "OperationVersion": "2022-03-31-preview", + "OperationVersion": "2023-10-30-preview", "ResultType": "Succeeded", "ResultSignature": "200", "RoomId": "99466387192310000", |
communication-services | Voice And Video Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md | The call summary log contains data to help you identify key properties of all ca | `endpointType` | This value describes the properties of each endpoint that's connected to the call. It can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. | | `sdkVersion` | The version string for the Communication Services Calling SDK version that each relevant endpoint uses (for example, `"1.1.00.20212500"`). | | `osVersion` | A string that represents the operating system and version of each endpoint device. |-| `participantTenantId` | The ID of the Microsoft tenant associated with the identity of the participant. The tenant can either be the Azure tenant that owns the ACS resource or the Microsoft tenant of an M365 identity. This field is used to guide cross-tenant redaction. -|`participantType` | Description of the participant as a combination of its client (Azure Communication Services (ACS) or Teams), and its identity, (ACS or Microsoft 365). Possible values include: ACS (ACS identity and ACS SDK), Teams (Teams identity and Teams client), ACS as Teams external user (ACS identity and ACS SDK in Teams call or meeting), and ACS as Microsoft 365 user (M365 identity and ACS client). +| `participantTenantId` | The ID of the Microsoft tenant associated with the identity of the participant. The tenant can either be the Azure tenant that owns the Azure Communication Services resource or the Microsoft tenant of an M365 identity. This field is used to guide cross-tenant redaction. +| `participantType` | Description of the participant as a combination of its client (Azure Communication Services (ACS) or Teams) and its identity (ACS or Microsoft 365). Possible values include: Azure Communication Services (ACS identity and Azure Communication Services SDK), Teams (Teams identity and Teams client), Azure Communication Services as Teams external user (ACS identity and Azure Communication Services SDK in a Teams call or meeting), and Azure Communication Services as Microsoft 365 user (M365 identity and Azure Communication Services client). | `pstnPartcipantCallType` | Represents the type and direction of PSTN participants, including emergency calling, direct routing, transfer, and forwarding. | ### Call diagnostic log schema |
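To show how fields like `participantType` are typically consumed, here's a sketch querying the call summary log with `azure-monitor-query`; the workspace ID is a placeholder, and the exact table and column names (`ACSCallSummary`, `ParticipantType`) should be verified against your workspace schema:

```python
# Count call participants by participant type over the last week, from the
# call summary log table. Assumes: pip install azure-identity azure-monitor-query
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = LogsQueryClient(DefaultAzureCredential())

query = """
ACSCallSummary
| summarize participants = count() by ParticipantType
| order by participants desc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(row)
```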
communication-services | Azure Communication Services Azure Cognitive Services Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md | Title: Connect Azure Communication Services to Azure AI services -description: Provides a how-to guide for connecting ACS to Azure AI services. +description: Provides a how-to guide for connecting Azure Communication Services to Azure AI services. |
communication-services | Call Automation Teams Interop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md | This interoperability with Microsoft Teams over VoIP makes it easy for developer ## Scenario Showcase – Expert Consultation A customer service agent, who is using a Contact Center Agent experience, now wants to add a subject matter expert, who is a knowledge worker (regular employee) at Contoso and uses Microsoft Teams, into a support call with a customer to provide expert advice to resolve a customer issue. -The dataflow diagram depicts a canonical scenario where a Teams user is added to an ongoing ACS call for expert consultation. +The dataflow diagram depicts a canonical scenario where a Teams user is added to an ongoing Azure Communication Services call for expert consultation. [ ![Diagram of calling flow for a customer service with Microsoft Teams and Call Automation.](./media/call-automation-teams-interop.png)](./media/call-automation-teams-interop.png#lightbox) 1. The customer is on an ongoing call with a Contact Center customer service agent. 1. During the call, the customer service agent needs expert help from one of the domain experts on an engineering team. The agent is able to identify a knowledge worker who is available on Teams (presence via Graph APIs) and tries to add them to the call. -1. Contoso Contact Center's SBC is already configured with ACS Direct Routing where this add participant request is processed. -1. Contoso Contact Center provider has implemented a web service, using ACS Call Automation that receives the "add Participant" request. -1. With Teams interop built into ACS Call Automation, ACS then uses the Teams user's ObjectId to add them to the call. The Teams user receives the incoming call notification. They accept and join the call. +1. The Contoso Contact Center's SBC is already configured with Azure Communication Services Direct Routing, where this add-participant request is processed. +1. The Contoso Contact Center provider has implemented a web service, using Azure Communication Services Call Automation, that receives the "add participant" request. +1. With Teams interop built into Azure Communication Services Call Automation, Azure Communication Services then uses the Teams user's ObjectId to add them to the call (see the sketch below). The Teams user receives the incoming call notification. They accept and join the call. 1. Once the Teams user has provided their expertise, they leave the call. The customer service agent and customer wrap up their conversation. ## Capabilities |
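To ground step 5 of the dataflow above, here's a minimal sketch of adding a Teams user to an ongoing call with the Call Automation SDK for Python. The connection string, call connection ID, and Teams user object ID are placeholders, and parameter shapes can vary across SDK versions:

```python
# Add a Teams user (by Microsoft Entra object ID) to an ongoing call.
# Assumes: pip install azure-communication-callautomation
from azure.communication.callautomation import (
    CallAutomationClient,
    MicrosoftTeamsUserIdentifier,
)

# Placeholder connection string for the Azure Communication Services resource.
client = CallAutomationClient.from_connection_string(
    "endpoint=https://contoso.communication.azure.com/;accesskey=<key>"
)

# The call connection ID comes from the CallConnected event of the ongoing call.
call_connection = client.get_call_connection("<call-connection-id>")

# Invite the Teams knowledge worker by object ID; they receive the incoming
# call notification in their Teams client.
call_connection.add_participant(
    MicrosoftTeamsUserIdentifier("<teams-user-object-id>")
)
```

The outcome of the invitation (accepted, declined, or timed out) arrives as events on the callback URI registered when the call was created or answered.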
communication-services | Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md | Some of the common use cases that can be built using Call Automation include: - Increase engagement by building automated customer outreach programs for marketing and customer service. - Analyze your unmixed audio recordings in a post-call process for quality assurance purposes. -Azure Communication Services Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message, connecting the customer to a live agent on an ACS Calling SDK client app to answer the incoming call request. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center. +Azure Communication Services Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture. You can answer inbound calls (see the sketch below) or make outbound calls, and execute actions like playing a welcome message or connecting the customer to a live agent on an Azure Communication Services Calling SDK client app. With support for Azure Communication Services PSTN or Direct Routing, you can then connect this workflow back to your contact center. ![Diagram of calling flow for a customer service scenario.](./media/call-automation-architecture.png) |
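As a concrete sketch of the answer-inbound-call workflow described above, assuming the Call Automation SDK for Python, a placeholder connection string, and a placeholder callback webhook (the incoming call context is delivered in the `IncomingCall` Event Grid event):

```python
# Answer an incoming call, then drive the workflow from callback events.
# Assumes: pip install azure-communication-callautomation
from azure.communication.callautomation import CallAutomationClient

client = CallAutomationClient.from_connection_string(
    "endpoint=https://contoso.communication.azure.com/;accesskey=<key>"  # placeholder
)

def handle_incoming_call(incoming_call_context: str) -> None:
    """incoming_call_context comes from the IncomingCall Event Grid event payload."""
    answer_result = client.answer_call(
        incoming_call_context=incoming_call_context,
        callback_url="https://contoso.example.com/api/callbacks",  # placeholder webhook
    )
    # Mid-call actions (play, recognize, add participant) are issued against the
    # call connection once the CallConnected event arrives at the callback URL.
    print(answer_result.call_connection_id)
```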
communication-services | Play Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md | +- Providing Azure Communication Services access to prerecorded audio files in WAV format, with support for authentication - Regular text that can be converted into speech output through the integration with Azure AI services. You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). (Supported in public preview) |
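Here's a sketch of both play sources listed above using the Call Automation SDK for Python: a WAV file fetched by URL, and text rendered through the Azure AI services integration. The endpoint, file URL, and call connection ID are placeholders, and the text-to-speech path assumes an Azure AI services resource is connected as described in the linked integration article:

```python
# Play a WAV file, then a text-to-speech prompt, to all call participants.
# Assumes: pip install azure-communication-callautomation
from azure.communication.callautomation import (
    CallAutomationClient,
    FileSource,
    TextSource,
)

client = CallAutomationClient.from_connection_string(
    "endpoint=https://contoso.communication.azure.com/;accesskey=<key>"  # placeholder
)
call_connection = client.get_call_connection("<call-connection-id>")

# Prerecorded WAV file that the service can reach (placeholder URL).
call_connection.play_media_to_all(
    FileSource(url="https://contoso.example.com/prompts/welcome.wav")
)

# Text rendered by a prebuilt neural voice via the Azure AI services integration.
call_connection.play_media_to_all(
    TextSource(
        text="Thanks for calling. An agent will be with you shortly.",
        voice_name="en-US-JennyNeural",
    )
)
```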
communication-services | Play Ai Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-ai-action.md | +- Providing Azure Communication Services access to pre-recorded audio files in WAV format, with support for authentication - Regular text that can be converted into speech output through the integration with Azure AI services. You can leverage the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). |
communication-services | Recognize Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md | -With the release of ACS Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common scenarios of recognition is playing a message for the user, which prompts them to provide a response that then gets recognized by the application, once recognized the application then carries out a corresponding action. Input from callers can be received in several ways, which include DTMF (user input via the digits on their calling device), speech or a combination of both DTMF and speech. +With the release of the Azure Communication Services Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is playing a message that prompts the user to provide a response, which the application then recognizes; once the response is recognized, the application carries out a corresponding action (see the sketch below). Input from callers can be received in several ways, which include DTMF (user input via the digits on their calling device), speech, or a combination of both DTMF and speech. **Voice recognition with speech-to-text (Public Preview)** |
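Here's a minimal sketch of the prompt-and-recognize pattern described above, collecting DTMF digits with the Call Automation SDK for Python; the connection string, call connection ID, phone number, and prompt URL are placeholders, and keyword names can differ slightly between SDK versions:

```python
# Prompt a participant and collect up to four DTMF digits.
# Assumes: pip install azure-communication-callautomation
from azure.communication.callautomation import (
    CallAutomationClient,
    FileSource,
    PhoneNumberIdentifier,
    RecognizeInputType,
)

client = CallAutomationClient.from_connection_string(
    "endpoint=https://contoso.communication.azure.com/;accesskey=<key>"  # placeholder
)
call_connection = client.get_call_connection("<call-connection-id>")

call_connection.start_recognizing_media(
    input_type=RecognizeInputType.DTMF,
    target_participant=PhoneNumberIdentifier("+14255550123"),  # placeholder number
    play_prompt=FileSource(url="https://contoso.example.com/prompts/menu.wav"),
    dtmf_max_tones_to_collect=4,
    interrupt_prompt=True,
)
# The collected digits arrive in a RecognizeCompleted event at your callback URL.
```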
communication-services | Recognize Ai Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-ai-action.md | -With the release of ACS Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common scenarios of recognition is playing a message for the user which prompts them to provide a response that then gets recognized by the application, once recognized the application then carries out a corresponding action. Input from callers can be received in several ways which include DTMF (user input via the digits on their calling device), speech or a combination of both DTMF and speech. +With the release of the Azure Communication Services Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is playing a message that prompts the user to provide a response, which the application then recognizes; once the response is recognized, the application carries out a corresponding action. Input from callers can be received in several ways, which include DTMF (user input via the digits on their calling device), speech, or a combination of both DTMF and speech. **Voice recognition with speech-to-text** |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md | For customers that use Virtual appointments, refer to our Teams Interoperability - The maximum number of participants allowed in a chat thread is 250. - The maximum message size allowed is approximately 28 KB. - For chat threads with more than 20 participants, read receipts and typing indicator features are not supported.-- For Teams Interop scenarios, it is the number of ACS users, not Teams users that must be below 20 for read receipts and typing indicator features to be supported.+- For Teams Interop scenarios, it is the number of Azure Communication Services users, not Teams users, that must be 20 or fewer for read receipts and typing indicator features to be supported. ## Chat architecture |
communication-services | Detailed Call Flows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/detailed-call-flows.md | Communication Services is built primarily on two types of traffic: **real-time media** Users of your Communication Services solution will be connecting to your services from their client devices. Communication between these devices and your servers is handled with **signaling**. For example: call initiation and real-time chat are supported by signaling between devices and your service. Most signaling traffic uses HTTPS REST, though in some scenarios, SIP can be used as a signaling traffic protocol. While this type of traffic is less sensitive to latency, low-latency signaling will give the users of your solution a pleasant end-user experience. -Call flows in ACS are based on the Session Description Protocol (SDP) RFC 4566 offer and answer model over HTTPS. Once the callee accepts an incoming call, the caller and callee agree on the session parameters. +Call flows in Azure Communication Services are based on the Session Description Protocol (SDP) RFC 4566 offer and answer model over HTTPS. Once the callee accepts an incoming call, the caller and callee agree on the session parameters. Media traffic is encrypted by, and flows between, the caller and callee using Secure RTP (SRTP), a profile of Real-time Transport Protocol (RTP) that provides confidentiality, authentication, and replay attack protection to RTP traffic. SRTP uses a session key generated by a secure random number generator and exchanged using the signaling TLS channel. -ACS media traffic between two endpoints participating in ACS audio, video, and application sharing, utilizes SRTP to encrypt the media stream. Cryptographic keys are negotiated between the two endpoints over a signaling protocol which uses TLS 1.2 and AES-256 (in GCM mode) encrypted UDP/TCP channel. +Azure Communication Services media traffic between two endpoints participating in Azure Communication Services audio, video, and application sharing utilizes SRTP to encrypt the media stream. Cryptographic keys are negotiated between the two endpoints over a signaling channel that's encrypted with TLS 1.2 and AES-256 (in GCM mode) over UDP/TCP. |
communication-services | Email Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email-metrics.md | Title: Email metric definitions for Azure Communication Services -description: This document covers definitions of acs email metrics available in the Azure portal. +description: This document covers definitions of Azure Communication Services email metrics available in the Azure portal. |
communication-services | Enable Closed Captions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/enable-closed-captions.md | In this document, we're going to be looking at specifically Teams interoperability *Usage of translations through Teams generated captions requires the organizer to have assigned a Teams Premium license, or in the case of Microsoft 365 users they must have a Teams Premium license. More information about Teams Premium can be found [here](https://www.microsoft.com/microsoft-teams/premium#tabx93f55452286a4264a2778ef8902fb81a).* -In scenarios where there's a Teams user on a Teams client or a Microsoft 365 user with Azure Communication Services SDKs in the call, the developer can use Teams caption. This allows developers to work with the Teams captioning technology that may already be familiar with today. With Teams captions developers are limited to what their Teams license allows. Basic captions allow only one spoken and one caption language for the call. With Teams premium license developers can use the translation functionality offered by Teams to provide one spoken language for the call and translated caption languages on a per user basis. In a Teams interop scenario, captions enabled through ACS follows the same policies that are defined in Teams for [meetings](/powershell/module/skype/set-csteamsmeetingpolicy) and [calls](/powershell/module/skype/set-csteamscallingpolicy). +In scenarios where there's a Teams user on a Teams client or a Microsoft 365 user with Azure Communication Services SDKs in the call, the developer can use Teams captions. This allows developers to work with the Teams captioning technology that they may already be familiar with today. With Teams captions, developers are limited to what their Teams license allows. Basic captions allow only one spoken and one caption language for the call. With a Teams Premium license, developers can use the translation functionality offered by Teams to provide one spoken language for the call and translated caption languages on a per-user basis. In a Teams interop scenario, captions enabled through Azure Communication Services follow the same policies that are defined in Teams for [meetings](/powershell/module/skype/set-csteamsmeetingpolicy) and [calls](/powershell/module/skype/set-csteamscallingpolicy). ## Common use cases In scenarios where there's a Teams user on a Teams client or a Microsoft 365 use Accessibility – For people with hearing impairments or who are new to the language to participate in calls and meetings. A key feature requirement in the telemedical industry is to help patients communicate effectively with their health care providers. ### Teams interoperability -Use Teams – Organizations using ACS and Teams can use Teams closed captions to improve their applications by providing closed captions capabilities to users. Those organizations can keep using Microsoft Teams for all calls and meetings without third party applications providing this capability. +Use Teams – Organizations using Azure Communication Services and Teams can use Teams closed captions to improve their applications by providing closed captioning capabilities to users. Those organizations can keep using Microsoft Teams for all calls and meetings without third-party applications providing this capability. 
### Global inclusivity Provide translation – Use the translation functions to provide translated captions for users who may be new to the language, or for companies that operate at a global scale with offices around the world; their teams can have conversations even if some people aren't familiar with the spoken language. -## Sample architecture of ACS user using captions in a Teams meeting +## Sample architecture of an Azure Communication Services user using captions in a Teams meeting ![Diagram of Teams meeting interop](./media/acs-teams-interop-captions.png) -## Sample architecture of an ACS user using captions in a meeting with a Microsoft 365 user on ACS SDK +## Sample architecture of an Azure Communication Services user using captions in a meeting with a Microsoft 365 user on the Azure Communication Services SDK ![Diagram of CTE user](./media/m365-captions-interop.png) |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | The following table shows supported server-side capabilities available in Azure |Capability | Supported | | | |-| [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ | +| [Manage Azure Communication Services call recording](../../voice-video-calling/call-recording.md) | ❌ | | [Azure Metrics](../../metrics.md) | ✔️ | | [Azure Monitor](../../analytics/logs/voice-and-video-logs.md) | ✔️ | | [Azure Communication Services Insights](../../analytics/insights/voice-and-video-insights.md) | ✔️ | |
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/virtual-visits/overview.md | These three **implementation options** are columns in the table below, while eac |--||--||| | *Manager* | Configure Business Availability | Bookings | Bookings | Custom | | *Provider* | Managing upcoming appointments | Outlook & Teams | Outlook & Teams | Custom |-| *Provider* | Join the appointment | Teams | Teams | ACS Calling & Chat | -| *Consumer* | Schedule an appointment | Bookings | Bookings | ACS Rooms | -| *Consumer*| Be reminded of an appointment | Bookings | Bookings | ACS SMS | -| *Consumer*| Join the appointment | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat | +| *Provider* | Join the appointment | Teams | Teams | Azure Communication Services Calling & Chat | +| *Consumer* | Schedule an appointment | Bookings | Bookings | Azure Communication Services Rooms | +| *Consumer*| Be reminded of an appointment | Bookings | Bookings | Azure Communication Services SMS | +| *Consumer*| Join the appointment | Teams or virtual appointments | Azure Communication Services Calling & Chat | Azure Communication Services Calling & Chat | There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience: - **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs. |
communication-services | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md | The following sections provide information about known issues associated with th ### Chrome M115 - regression -Chrome version 115 for Android introduced a regression when making video calls - the result of this bug is a user making a call on ACS with this version of Chrome will have no outgoing video in Group and ACS-MS Teams calls. +Chrome version 115 for Android introduced a regression when making video calls - the result of this bug is that a user making a call on Azure Communication Services with this version of Chrome will have no outgoing video in Group and ACS-MS Teams calls. - This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1469318) - As a short-term mitigation, please instruct users to use Microsoft Edge or Firefox on Android, or avoid using Google Chrome 115/116 on Android Firefox desktop browser support is now available in public preview. Known issues ### iOS Chrome Known Issues iOS Chrome browser support is now available in public preview. Known issues are: - No outgoing and incoming audio when switching browser to background or locking the device-- No incoming/outgoing audio coming from bluetooth headset. When a user connects bluetooth headset in the middle of ACS call, the audio still comes out from the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it is not reproducible on iOS 16.+- No incoming/outgoing audio coming from a Bluetooth headset. When a user connects a Bluetooth headset in the middle of an Azure Communication Services call, the audio still comes out from the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it is not reproducible on iOS 16. ### iOS 16 introduced bugs when putting browser in the background during a call-The iOS 16 release has introduced a bug that can stop the ACS audio\video call when using Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact could be that an ACS call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone. +The iOS 16 release has introduced a bug that can stop the Azure Communication Services audio/video call when using the Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact is that an Azure Communication Services call might stop working mid-call, and the only resolution to get it working again is to have the end customer restart their phone. To reproduce this bug: - Have a user using an iPhone running iOS 16-- Join ACS call (with audio only or with audio and video) using Safari iOS mobile browser+- Join an Azure Communication Services call (with audio only or with audio and video) using the Safari iOS mobile browser - If during a call someone puts the Safari browser in the background and views YouTube OR receives a FaceTime/phone call while connected via a Bluetooth device Results: - After a few minutes of this situation, the incoming and outgoing video may stop working.-- The only way to get ACS calling to work again is to have the end user restart their phone.+- The only way to get Azure Communication Services calling to work again is to have the end user restart their phone.
### Chrome M98 - regression Chrome version 98 introduced a regression with abnormal generation of video keyf ### No incoming audio during a call -Occasionally, a user in an ACS call may not be able to hear the audio from remote participants. +Occasionally, a user in an Azure Communication Services call may not be able to hear the audio from remote participants. There is a related [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1402250) bug that causes this issue; it can be mitigated by reconnecting the PeerConnection. We've added this workaround since SDK 1.9.1 (stable) and SDK 1.10.0 (beta) -On Android Chrome, if a user joins ACS call several times, the incoming audio can also disappear. The user is not able to hear the audio from other participants until the page is refreshed. We've fixed this issue in SDK 1.10.1-beta.1, and improved the audio resource usage. +On Android Chrome, if a user joins an Azure Communication Services call several times, the incoming audio can also disappear. The user is not able to hear the audio from other participants until the page is refreshed. We've fixed this issue in SDK 1.10.1-beta.1, and improved the audio resource usage. ### Some Android devices failing call scenarios except for group calls. A number of specific Android devices fail to start or accept calls and meetings. ### Android Chrome mutes the call after browser goes to background for one minute -On Android Chrome, if a user is on an ACS call and puts the browser into background for one minute. The microphone will lose access and the other participants in the call won't hear the audio from the user. Once the user brings the browser to foreground, microphone is available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940) +On Android Chrome, if a user is on an Azure Communication Services call and puts the browser into the background for one minute, the microphone loses access and the other participants in the call won't hear audio from the user. Once the user brings the browser to the foreground, the microphone is available again. Related chromium bugs [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940) ### A mobile (iOS and Android) user has dropped the call but is still showing up on the participant list. -The problem can occur if a mobile user leaves the ACS group call without using the Call.hangUp() API. When a mobile user closes the browser or refreshes the webpage without hang up, other participants in the group call will still see this mobile user on the participant list for about 60 seconds. +The problem can occur if a mobile user leaves the Azure Communication Services group call without using the Call.hangUp() API. When a mobile user closes the browser or refreshes the webpage without hanging up, other participants in the group call will still see this mobile user on the participant list for about 60 seconds. ### iOS Safari refreshes the page if the user goes to another app and returns back to the browser -The problem can occur if a user in an ACS call with iOS Safari, and switches to other app for a while. After the user returns back to the browser, +The problem can occur if a user in an Azure Communication Services call with iOS Safari switches to another app for a while. After the user returns to the browser, the browser page may refresh.
This is because the OS kills the browser. One way to mitigate this issue is to keep some state and recover it after the page refreshes. This problem can occur if another application or the operating system takes over - A user plays a YouTube video, for example, or starts a FaceTime call. Switching to another native application can capture access to the microphone or camera. - A user enables Siri, which will capture access to the microphone. -On iOS, for example, while on an ACS call, if a PSTN call comes in, then a microphoneMutedUnexepectedly bad UFD will be raised and audio will stop flowing in the ACS call and the call will be marked as muted. Once the PSTN call is over, the user will have to go and unmute the ACS call for audio to start flowing again in the ACS call. In the case of Android Chrome when a PSTN call comes in, audio will stop flowing in the ACS call and the ACS call will not be marked as muted. In this case, there is no microphoneMutedUnexepectedly UFD event. Once the PSTN call is finished, Android Chrome will regain audio automatically and audio will start flowing normally again in the ACS call. +On iOS, for example, while on an Azure Communication Services call, if a PSTN call comes in, then a microphoneMutedUnexepectedly bad UFD will be raised and audio will stop flowing in the Azure Communication Services call, and the call will be marked as muted. Once the PSTN call is over, the user will have to unmute the Azure Communication Services call for audio to start flowing again. In the case of Android Chrome, when a PSTN call comes in, audio will stop flowing in the Azure Communication Services call, but the call will not be marked as muted. In this case, there is no microphoneMutedUnexepectedly UFD event. Once the PSTN call is finished, Android Chrome will regain audio automatically and audio will start flowing normally again in the Azure Communication Services call. -In case camera is on and an interruption occurs, ACS call may or may not lose the camera. If lost then camera will be marked as off and user will have to go turn it back on after the interruption has released the camera. +If the camera is on and an interruption occurs, the Azure Communication Services call may or may not lose the camera. If it's lost, the camera is marked as off, and the user has to turn it back on after the interruption has released the camera. Occasionally, microphone or camera devices won't be released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously. The environment in which this problem occurs is the following: The cause of this problem might be that acquiring your own stream from the same device will have a side effect of running into race conditions. Acquiring streams from other devices might lead the user into insufficient USB/IO bandwidth, and the `sourceUnavailableError` rate will skyrocket. -### Excessive use of certain APIs like mute/unmute will result in throttling on ACS infrastructure -As a result of the mute/unmute API call, ACS infrastructure informs other participants in the call about the state of audio of a local participant who invoked mute/unmute, so that participants in the call know who is muted/unmuted.
-Excessive use of mute/unmute will be blocked in ACS infrastructure. That will happen if the participant (or application on behalf of participant) will attempt to mute/unmute continuously, every second, more than 15 times in a 30-second rolling window. +As a result of the mute/unmute API call, Azure Communication Services infrastructure informs other participants in the call about the state of audio of a local participant who invoked mute/unmute, so that participants in the call know who is muted/unmuted. +Excessive use of mute/unmute will be blocked in Azure Communication Services infrastructure. That happens if the participant (or an application on behalf of the participant) attempts to mute/unmute continuously, every second, more than 15 times in a 30-second rolling window. ## Communication Services Call Automation APIs |
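Two client-side mitigations follow from the known issues above: hang up explicitly when the page is torn down (so mobile users don't linger on the participant list), and rate-limit mute/unmute locally to stay under the documented 15-operations-per-30-second window. A minimal JavaScript sketch with the Web Calling SDK, assuming `call` is an active `Call` object; the event choice and the local budget are illustrative, not an official API:

```javascript
// Best-effort hang-up on page close/refresh, so other participants don't
// see a ghost entry on the participant list for ~60 seconds.
window.addEventListener("pagehide", () => {
  call.hangUp().catch(() => { /* best effort during teardown */ });
});

// Local rate limiting for mute/unmute: the service throttles more than
// 15 operations in a 30-second rolling window, so keep a little headroom.
const muteOps = [];
async function toggleMute(call) {
  const now = Date.now();
  while (muteOps.length > 0 && now - muteOps[0] > 30000) muteOps.shift();
  if (muteOps.length >= 14) return; // drop the request instead of getting blocked
  muteOps.push(now);
  if (call.isMuted) {
    await call.unmute();
  } else {
    await call.mute();
  }
}
```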
communication-services | Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md | The following operations are available on Rooms API request metrics: | DeleteRoom | Deletes a Room. | | GetRoom | Gets a Room by Room ID. | | PatchRoom | Updates a Room by Room ID. |-| ListRooms | Lists all the Rooms for an ACS Resource. | +| ListRooms | Lists all the Rooms for an Azure Communication Services Resource. | | AddParticipants | Adds participants to a Room.| | RemoveParticipants | Removes participants from a Room. | | GetParticipants | Gets list of participants for a Room. | |
communication-services | Number Lookup Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-concept.md | Key features of Azure Communication Services Number Lookup include: ## Value Proposition -The main benefits the solution will provide to ACS customers can be summarized on the below: +The main benefits the solution provides to Azure Communication Services customers can be summarized as follows: - **Reduce cost:** Optimize your communication expenses by sending messages only to phone numbers that are SMS-ready - **Increase efficiency:** Better target customers based on subscribers' data (name, type, location, etc.). You can also decide on the best communication channel based on status (e.g., SMS or email while roaming instead of calls). The main benefits the solution will provide to ACS customers can be summarized o - **Validate the number can receive the SMS before you send it:** Check whether a number has SMS capabilities and, if needed, use different communication channels. *Contoso bank collected the phone numbers of people who are interested in its services on its site. Contoso wants to send an invite to register for the promotional offer. Before sending the link to the offer, Contoso checks whether SMS is a possible channel for the number the customer provided on the site, and doesn't waste money sending SMS to non-mobile numbers.* - **Estimate the total cost of an SMS campaign before you launch it:** Get the current carrier of the target number and compare that with the list of known carrier surcharges.-*Contoso, a marketing company, wants to launch a large SMS campaign to promote a product. Contoso checks the current carrier details for the different numbers he is targeting with this campaign to estimate the cost based on what ACS is charging him.* +*Contoso, a marketing company, wants to launch a large SMS campaign to promote a product. Contoso checks the current carrier details for the different numbers it's targeting with this campaign to estimate the cost based on what Azure Communication Services is charging it.* ![Diagram showing call recording architecture using calling client sdk.](../numbers/mvp-use-case.png) |
communication-services | Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md | Alice is a Dynamics 365 contact center agent, who makes an outbound call from Om - One participant on the VoIP leg (Alice) from Omnichannel for Customer Service client application x 10 minutes x $0.004 per participant leg per minute = $0.04 - One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04-- Omnichannel for Customer Service bot doesn't introduce extra ACS charges.+- Omnichannel for Customer Service bot doesn't introduce extra Azure Communication Services charges. **Total cost for the call**: $0.04 + $0.04 = $0.08 Note that the service application that uses Call Automation SDK isn't charged to ### Pricing example: Inbound PSTN call redirected to another external telephone number using Call Automation SDK -Vlad dials your toll-free number (that you acquired from Communication Service) from his mobile phone. Your service application (built with Call Automation SDK) receives the call, and invokes the logic to redirect the call to a mobile phone number of Abraham using ACS direct routing. Abraham picks up the call and they talk with Vlad for 5 minutes. +Vlad dials your toll-free number (that you acquired from Communication Services) from his mobile phone. Your service application (built with the Call Automation SDK) receives the call and invokes logic to redirect the call to Abraham's mobile phone number using Azure Communication Services direct routing. Abraham picks up the call and talks with Vlad for 5 minutes. - Vlad was on the call as a PSTN endpoint for a total of 5 minutes. - Your service application was on the call for the entire 5 minutes of the call. Vlad dials your toll-free number (that you acquired from Communication Service) **Cost calculations** - Inbound PSTN leg by Vlad to toll-free number acquired from Communication Services x 5 minutes x $0.0220 per minute for receiving the call = $0.11-- One participant on the ACS direct routing outbound leg (Abraham) from the service application to an SBC x 5 minutes x $0.004 per participant leg per minute = $0.02+- One participant on the Azure Communication Services direct routing outbound leg (Abraham) from the service application to an SBC x 5 minutes x $0.004 per participant leg per minute = $0.02 The service application that uses the Call Automation SDK isn't charged to be part of the call. The additional monthly cost of leasing a US toll-free number isn't included in this calculation. |
communication-services | Raw Id Use Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/raw-id-use-cases.md | public void CommunicationIdentifierFromGetRawId() You can find more platform-specific examples in the following article: [Understand identifier types](./identifiers.md) ## Storing CommunicationIdentifier in a database-One of the typical jobs that may be required from you is mapping ACS users to users coming from Contoso user database or identity provider. This is usually achieved by adding an extra column or field in Contoso user DB or Identity Provider. However, given the characteristics of the Raw ID (stable, globally unique, and deterministic), you may as well choose it as a primary key for the user storage. +One of the typical jobs that may be required from you is mapping Azure Communication Services users to users coming from a Contoso user database or identity provider. This is usually achieved by adding an extra column or field in the Contoso user DB or identity provider. However, given the characteristics of the Raw ID (stable, globally unique, and deterministic), you may as well choose it as a primary key for the user storage. Assume `ContosoUser` is a class that represents a user of your application, and you want to save it along with a corresponding CommunicationIdentifier to the database. The original value for a `CommunicationIdentifier` can come from the Communication Identity, Calling, or Chat APIs, or from a custom Contoso API, but it can be represented as a `string` data type in your programming language no matter what the underlying type is: |
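The excerpt above cuts off just before the article's sample. As a rough JavaScript illustration of the same idea — the `db` handle and the stored record shape are hypothetical, while the identifier helpers come from the common package:

```javascript
const {
  getIdentifierRawId,
  createIdentifierFromRawId,
} = require("@azure/communication-common");

async function saveContosoUser(db, identifier, displayName) {
  // Raw IDs are stable, globally unique, and deterministic, so they can
  // serve directly as the primary key for the user record.
  const rawId = getIdentifierRawId(identifier);
  await db.users.upsert({ id: rawId, displayName }); // hypothetical storage call
  return rawId;
}

function loadIdentifier(userRecord) {
  // Reconstruct the typed identifier from the stored string when needed.
  return createIdentifierFromRawId(userRecord.id);
}
```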
communication-services | Room Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md | Here are the main scenarios where rooms are useful: - **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services. - **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This will allow only a subset of users with assigned Communication Services identities to join a room call. - **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference.+- **Rooms enable PSTN calls.** Rooms enable users to invite participants to a meeting by making phone calls through the public switched telephone network (PSTN). ## When to use rooms The tables below provide detailed capabilities mapped to the roles. At a high le | - Render a video in multiple places (local camera or remote stream) | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Set/Update video scaling mode | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Render remote video stream | ✔️ | ✔️ | ✔️ |+| **PSTN calls** | | | +| - Call participants using phone calls | ✔️ | ❌ | ❌ | *) Only available on the web calling SDK. Not available on the iOS and Android calling SDKs. |
communication-services | Sms Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md | Alphanumeric sender ID is not capable of receiving inbound messages or STOP mess Short Code availability is currently restricted to paid Azure subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md). ### Can you text to a toll-free number from a short code?-ACS toll-free numbers are enabled to receive messages from short codes. However, short codes are not typically enabled to send messages to toll-free numbers. If your messages from short codes to ACS toll-free numbers are failing, check with your short code provider if the short code is enabled to send messages to toll-free numbers. +Azure Communication Services toll-free numbers are enabled to receive messages from short codes. However, short codes are not typically enabled to send messages to toll-free numbers. If your messages from short codes to Azure Communication Services toll-free numbers are failing, check with your short code provider whether the short code is enabled to send messages to toll-free numbers. ### How should a short code be formatted? Short codes do not fall under E.164 formatting guidelines and do not have a country code or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see on your short codes page, without any prefix. |
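To make the formatting rule concrete, a hedged JavaScript sketch of sending from a short code with the SMS SDK — the short code, recipient, and connection string are placeholders:

```javascript
const { SmsClient } = require("@azure/communication-sms");

const smsClient = new SmsClient(process.env.COMMUNICATION_SERVICES_CONNECTION_STRING);

async function sendFromShortCode() {
  const results = await smsClient.send({
    from: "12345",        // short code: bare 5-6 digits, no country code or "+" prefix
    to: ["+14255550123"], // recipients stay in E.164 format
    message: "Your Contoso verification code is 123456",
  });
  for (const result of results) {
    console.log(result.messageId, result.successful);
  }
}
```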
communication-services | Direct Routing Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md | If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added > In all the examples, if the dialed number does not match the pattern, the call will be dropped unless a purchased number exists for the communication resource and that number was used as `alternateCallerId` in the application. ## Managing inbound calls-For general inbound call management use [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listens for and manage inbound calls placed to a phone number or received via ACS direct routing. +For general inbound call management, use [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listens for and manages inbound calls placed to a phone number or received via Azure Communication Services direct routing. Omnichannel for Customer Service customers, refer to [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling). ## Next steps |
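To see what the example route pattern above actually matches, a purely illustrative check in JavaScript:

```javascript
// The voice route from the example: +1, then a 425 or 206 area code,
// then exactly seven more digits.
const route = /^\+1(425|206)(\d{7})$/;

console.log(route.test("+14255550123")); // true  — 425 matches the alternation
console.log(route.test("+12065550123")); // true  — 206 matches as well
console.log(route.test("+13125550123")); // false — 312 isn't listed; the call is dropped
console.log(route.test("+1425555012"));  // false — only six digits after the area code
```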
communication-services | Call Recording | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md | An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` ```typescript {- "resourceId": <string>, // stable resource id of the ACS resource recording + "resourceId": <string>, // stable resource id of the Azure Communication Services resource recording "callId": <string>, // id of the call "chunkDocumentId": <string>, // object identifier for the chunk this metadata corresponds to "chunkIndex": <number>, // index of this chunk with respect to all chunks in the recording |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | The Azure Communication Services Calling SDK supports the following streaming co | **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing | | **Maximum # of incoming remote streams that can be rendered simultaneously** | 9 videos + 1 screen sharing on desktop browsers*, 4 videos + 1 screen sharing on web mobile browsers | 9 videos + 1 screen sharing | -\* Starting from ACS Web Calling SDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24) +\* Starting from Azure Communication Services Web Calling SDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24) While the Calling SDK doesn't enforce these limits, your users might experience performance degradation if they're exceeded. Use the [Optimal Video Count](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#remote-video-quality) API to determine how many incoming video streams your web environment can currently support. ## Calling SDK timeouts |
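A hedged sketch of that Optimal Video Count check with the Web Calling SDK, assuming an in-progress `call` and the `Features.OptimalVideoCount` feature shape from recent SDK versions:

```javascript
const { Features } = require("@azure/communication-calling");

const ovcFeature = call.feature(Features.OptimalVideoCount);

function renderRemoteVideos() {
  // How many incoming video streams this environment can handle right now.
  const max = ovcFeature.optimalVideoCount;
  console.log(`Rendering up to ${max} remote video streams`);
  // Placeholder: subscribe to at most `max` remote participants' videos here.
}

renderRemoteVideos();
ovcFeature.on("optimalVideoCountChanged", renderRemoteVideos);
```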
communication-services | Data Channel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/data-channel.md | -> This document delves into the Data Channel feature present in the ACS Calling SDK. +> This document delves into the Data Channel feature present in the Azure Communication Services Calling SDK. > While the Data Channel in this context bears some resemblance to the Data Channel in WebRTC, it's crucial to recognize subtle differences in their specifics. > Throughout this document, we use the terms *Data Channel API* or *API* to denote the Data Channel API within the SDK. > When referring to the Data Channel API in WebRTC, we explicitly use the term *WebRTC Data Channel API* for clarity and precision. |
communication-services | Manage Call Quality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/manage-call-quality.md | The following sections detail the tools to implement at different phases of a ca - **After a call** ## Before a call-**Pre-call readiness** – By using the pre-call checks ACS provides, +**Pre-call readiness** – By using the pre-call checks Azure Communication Services provides, you can learn a user's connection status before the call and take proactive action on their behalf. For example, if you learn a user's connection is poor, you can suggest they turn off their video before Because Azure Communication Services Voice and Video calls run on web and mobile behavior on the call they're trying to participate in, referred to as the target call. You should make sure there aren't multiple browser tabs open before a call starts, and also monitor during the whole call lifecycle. You can proactively notify customers to close their excess tabs, or help them join a call correctly with useful messaging if they're unable to join a call initially. - To check if a user has multiple instances- of ACS running in a browser, see: [How to detect if an application using Azure Communication Services' SDK is active in multiple tabs of a browser](../../how-tos/calling-sdk/is-sdk-active-in-multiple-tabs.md). + of Azure Communication Services running in a browser, see: [How to detect if an application using Azure Communication Services' SDK is active in multiple tabs of a browser](../../how-tos/calling-sdk/is-sdk-active-in-multiple-tabs.md). ## During a call Sometimes users can't hear each other, maybe the speaker is too quiet, the liste Since network conditions can change during a call, users can report poor audio and video quality even if they started the call without issue. Our Media statistics give you detailed quality metrics on each inbound and outbound audio, video, and screen share stream. These detailed insights help you monitor calls in progress, show users their network quality status throughout a call, and debug individual calls. -- These metrics help indicate issues on the ACS client SDK send and receive media streams. As an example, you can actively monitor the outgoing video stream's `availableBitrate`, notice a persistent drop below the recommended 1.5 Mbps and notify the user their video quality is degraded. +- These metrics help indicate issues on the Azure Communication Services client SDK's send and receive media streams. As an example, you can actively monitor the outgoing video stream's `availableBitrate`, notice a persistent drop below the recommended 1.5 Mbps, and notify the user that their video quality is degraded. - It's important to note that our Server Log data only gives you an overall summary of the call after it ends. Our detailed Media Statistics provide low-level metrics throughout the call duration, for use during the call and afterwards for deeper analysis. - To learn more, see: [Media quality statistics](media-quality-sdk.md) |
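As a hedged sketch of that `availableBitrate` check with the Web Calling SDK's media statistics feature — collector options and the sample shape vary by SDK version, so treat the names below as assumptions to verify against your version's docs:

```javascript
const { Features } = require("@azure/communication-calling");

const mediaStatsFeature = call.feature(Features.MediaStats);
const collector = mediaStatsFeature.createCollector({
  aggregationInterval: 10,     // seconds between reports (assumed option name)
  dataPointsPerAggregation: 1,
});

collector.on("sampleReported", (sample) => {
  for (const videoSend of sample.video?.send ?? []) {
    // Warn when the outgoing video bandwidth estimate drops below 1.5 Mbps.
    if (videoSend.availableBitrate !== undefined && videoSend.availableBitrate < 1500000) {
      console.warn("Outgoing video bandwidth is low; video quality may be degraded.");
    }
  }
});
```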
communication-services | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md | Communication Services connections require internet connectivity to specific por | Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- |-| Media traffic | Range of Azure public cloud IP addresses 20.202.0.0/16 The range provided above is the range of IP addresses on either Media processor or ACS TURN service. | UDP 3478 through 3481, TCP ports 443 | +| Media traffic | Range of Azure public cloud IP addresses 20.202.0.0/16. This range covers the IP addresses of either the Media Processor or the Azure Communication Services TURN service. | UDP 3478 through 3481, TCP port 443 | | Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, *.office.com| TCP 443, 80 |
communication-services | Simulcast | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md | The lack of simulcast support leads to a degraded video experience in calls with Simulcast is supported on the Azure Communication Services SDK for WebJS (1.9.1-beta.1+) and the native SDK for Android, iOS, and Windows. Currently, simulcast on the sender side is supported on the following desktop browsers: Chrome and Edge. Simulcast on the receiver side is supported on all platforms that Azure Communication Services Calling supports. Support for sender-side simulcast from mobile browsers will be added in the future. ## How Simulcast works -Simulcast is a feature that allows a publisher, in this case the ACS calling SDK, to send different qualities of the same video to the SFU. The SFU then forwards the most suitable quality to each other endpoint on a call, based on their bandwidth, CPU, and resolution preferences. This way, the publisher can save resources and the subscribers can receive the best possible quality. The SFU doesn't change the video quality, it only selects which one to forward. +Simulcast is a feature that allows a publisher, in this case the Azure Communication Services calling SDK, to send different qualities of the same video to the SFU. The SFU then forwards the most suitable quality to each other endpoint on a call, based on their bandwidth, CPU, and resolution preferences. This way, the publisher can save resources and the subscribers can receive the best possible quality. The SFU doesn't change the video quality; it only selects which one to forward. ## Supported number of video qualities available with Simulcast.-Simulcast streaming from a web endpoint supports a maximum two video qualities. There aren't API controls needed to enable Simulcast for ACS. Simulcast is enabled and available for all video calls. +Simulcast streaming from a web endpoint supports a maximum of two video qualities. No API controls are needed to enable Simulcast for Azure Communication Services; Simulcast is enabled and available for all video calls. ## Available video resolutions When streaming with simulcast, there are no set resolutions for high or low quality simulcast video streams. Instead, based on many different variables, either a single or multiple video streams are delivered. If every subscriber to video requests and is capable of receiving the maximum resolution the publisher can provide, only that maximum resolution is sent. The following resolutions are supported: |
communication-services | Video Constraints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-constraints.md | -The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set maximum video resolutions, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The ACS video engine is optimized to allow the video quality to change dynamically based on devices ability and network quality. But there might be certain scenarios where you would want to have tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality is not a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality. +The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set the maximum video resolution, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The Azure Communication Services video engine is optimized to allow the video quality to change dynamically based on device ability and network quality. But there might be certain scenarios where you want tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality is not a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality. Another benefit of the Video Constraints API is that it enables developers to optimize the video call for different devices. For example, if a user is using an older device with limited processing power, developers can set constraints on the video resolution to ensure that the video call runs smoothly on that device. -ACS Web Calling SDK supports setting the maximum video resolution, framerate, or bitrate that a client sends. The sender video constraints are supported on Desktop browsers (Chrome, Edge, Firefox) and when using iOS Safari mobile browser or Android Chrome mobile browser. +The Azure Communication Services Web Calling SDK supports setting the maximum video resolution, framerate, or bitrate that a client sends. The sender video constraints are supported on desktop browsers (Chrome, Edge, Firefox) and when using the iOS Safari or Android Chrome mobile browsers. The native Calling SDK (Android, iOS, Windows) supports setting the maximum values of video resolution and framerate for outgoing video streams and setting the maximum resolution for incoming video streams. These constraints can be set at the start of the call and during the call. |
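A hedged sketch of both cases with the Web Calling SDK — the option and method names follow the video-constraints shape in recent WebJS versions, and the target user is a placeholder:

```javascript
// Cap outgoing video at call start via the video options' constraints.
const call = callAgent.startCall([{ communicationUserId: "<USER_ID>" }], {
  videoOptions: {
    localVideoStreams: [localVideoStream],
    constraints: {
      send: {
        frameHeight: { max: 720 },
        frameRate: { max: 30 },
        bitrate: { max: 1500000 }, // bps
      },
    },
  },
});

// Tighten the cap mid-call, e.g. when the app detects a weak network.
await call.setConstraints({
  video: { send: { frameHeight: { max: 240 } } },
});
```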
communication-services | Video Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md | -The Azure Communication Calling SDK allows you to create video effects that other users on a call are able to see. For example, for a user doing ACS calling using the WebJS SDK you can now enable that the user can turn on background blur. When the background blur is enabled, a user can feel more comfortable in doing a video call that the output video just shows a user, and all other content is blurred. +The Azure Communication Calling SDK allows you to create video effects that other users on a call are able to see. For example, for a user doing Azure Communication Services calling using the WebJS SDK, you can now enable the user to turn on background blur. When background blur is enabled, a user can feel more comfortable on a video call because the output video shows just the user, and all other content is blurred. ## Prerequisites ### Install the Azure Communication Services Calling SDK |
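A hedged sketch of enabling background blur with the WebJS SDK — it assumes the `@azure/communication-calling-effects` package and an existing `localVideoStream`, and effect support should be checked per platform:

```javascript
const { Features } = require("@azure/communication-calling");
const { BackgroundBlurEffect } = require("@azure/communication-calling-effects");

async function enableBackgroundBlur(localVideoStream) {
  // Video effects hang off a feature of the local video stream.
  const videoEffectsFeature = localVideoStream.feature(Features.VideoEffects);
  const blurEffect = new BackgroundBlurEffect();
  await videoEffectsFeature.startEffects(blurEffect); // remote users now see a blurred background
}
```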
communication-services | Actions For Call Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md | To place a call to a Communication Services user, you need to provide a Communic ```csharp Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events -var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller +var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller var callThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); // person to call CreateCallResult response = await client.CreateCallAsync(callThisPerson, callbackUri); ``` CreateCallResult response = await client.CreateCallAsync(callThisPerson, callbac ```java String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the ACS provisioned phone number for the caller +PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the Azure Communication Services provisioned phone number for the caller CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16471234567"), callerIdNumber); // person to call CreateCallResult response = client.createCall(callInvite, callbackUri).block(); ``` CreateCallResult response = client.createCall(callInvite, callbackUri).block(); ```javascript const callInvite = { targetParticipant: { phoneNumber: "+18008008800" }, // person to call- sourceCallIdNumber: { phoneNumber: "+18888888888" } // This is the ACS provisioned phone number for the caller + sourceCallIdNumber: { phoneNumber: "+18888888888" } // This is the Azure Communication Services provisioned phone number for the caller }; const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events const response = await client.createCall(callInvite, callbackUri); const response = await client.createCall(callInvite, callbackUri); callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events caller_id_number = PhoneNumberIdentifier( "+18001234567"-) # This is the ACS provisioned phone number for the caller +) # This is the Azure Communication Services provisioned phone number for the caller call_invite = CallInvite( target=PhoneNumberIdentifier("+16471234567"), source_caller_id_number=caller_id_number, var pstnEndpoint = new PhoneNumberIdentifier("+16041234567"); var voipEndpoint = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-... 
var groupCallOptions = new CreateGroupCallOptions(new List<CommunicationIdentifier>{ pstnEndpoint, voipEndpoint }, callbackUri) {- SourceCallerIdNumber = new PhoneNumberIdentifier("+16044561234"), // This is the ACS provisioned phone number for the caller + SourceCallerIdNumber = new PhoneNumberIdentifier("+16044561234"), // This is the Azure Communication Services provisioned phone number for the caller }; CreateCallResult response = await client.CreateGroupCallAsync(groupCallOptions); ``` CreateCallResult response = await client.CreateGroupCallAsync(groupCallOptions); ```java String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the ACS provisioned phone number for the caller +PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the Azure Communication Services provisioned phone number for the caller List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(new PhoneNumberIdentifier("+16471234567"), new CommunicationUserIdentifier("<user_id_of_target>"))); CreateGroupCallOptions groupCallOptions = new CreateGroupCallOptions(targets, callbackUri); groupCallOptions.setSourceCallIdNumber(callerIdNumber); const participants = [ { communicationUserId: "<user_id_of_target>" }, //user id looks like 8:a1b1c1-... ]; const createCallOptions = {- sourceCallIdNumber: { phoneNumber: "+18888888888" }, // This is the ACS provisioned phone number for the caller + sourceCallIdNumber: { phoneNumber: "+18888888888" }, // This is the Azure Communication Services provisioned phone number for the caller }; const response = await client.createGroupCall(participants, callbackUri, createCallOptions); ``` const response = await client.createGroupCall(participants, callbackUri, createC callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events caller_id_number = PhoneNumberIdentifier( "+18888888888"-) # This is the ACS provisioned phone number for the caller +) # This is the Azure Communication Services provisioned phone number for the caller pstn_endpoint = PhoneNumberIdentifier("+18008008800") voip_endpoint = CommunicationUserIdentifier( "<user_id_of_target>" To redirect the call to a phone number, construct the target and caller ID with # [csharp](#tab/csharp) ```csharp-var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller +var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller var target = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); ``` # [Java](#tab/java) ```java-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller +PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller CallInvite target = new CallInvite(new PhoneNumberIdentifier("+18001234567"), callerIdNumber); ``` const target = { ```python caller_id_number = PhoneNumberIdentifier( "+18888888888"-) # This is the ACS provisioned phone number for the caller +) # This is the Azure Communication Services provisioned phone number for the caller call_invite = CallInvite( target=PhoneNumberIdentifier("+16471234567"), 
source_caller_id_number=caller_id_number, You can add a participant (Communication Services user or phone number) to an ex # [csharp](#tab/csharp) ```csharp-var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller +var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller var addThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); AddParticipantsResult result = await callConnection.AddParticipantAsync(addThisPerson); ``` AddParticipantsResult result = await callConnection.AddParticipantAsync(addThisP # [Java](#tab/java) ```java-PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller +PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the Azure Communication Services provisioned phone number for the caller CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite); Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(addParticipantOptions).block(); Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsyn # [JavaScript](#tab/javascript) ```javascript-const callerIdNumber = { phoneNumber: "+16044561234" }; // This is the ACS provisioned phone number for the caller +const callerIdNumber = { phoneNumber: "+16044561234" }; // This is the Azure Communication Services provisioned phone number for the caller const addThisPerson = { targetParticipant: { phoneNumber: "+16041234567" }, sourceCallIdNumber: callerIdNumber, const addParticipantResult = await callConnection.addParticipant(addThisPerson); ```python caller_id_number = PhoneNumberIdentifier( "+18888888888"-) # This is the ACS provisioned phone number for the caller +) # This is the Azure Communication Services provisioned phone number for the caller call_invite = CallInvite( target=PhoneNumberIdentifier("+18008008800"), source_caller_id_number=caller_id_number, |
communication-services | Control Mid Call Media Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/control-mid-call-media-actions.md | app.logger.info("Started continuous DTMF recognition") ``` -- -When your application no longer wishes to receive DTMF tones from the participant anymore, you can use the `StopContinuousDtmfRecognitionAsync` method to let ACS know to stop detecting DTMF tones. +When your application no longer wishes to receive DTMF tones from the participant, you can use the `StopContinuousDtmfRecognitionAsync` method to let Azure Communication Services know to stop detecting DTMF tones. ### StopContinuousDtmfRecognitionAsync Stop detecting DTMF tones sent by the participant. if event.type == "Microsoft.Communication.ContinuousDtmfRecognitionToneReceived" ``` -- -ACS provides you with a `SequenceId` as part of the `ContinuousDtmfRecognitionToneReceived` event, which your application can use to reconstruct the order in which the participant entered the DTMF tones. +Azure Communication Services provides you with a `SequenceId` as part of the `ContinuousDtmfRecognitionToneReceived` event, which your application can use to reconstruct the order in which the participant entered the DTMF tones. ### ContinuousDtmfRecognitionFailed Event Example of how you can handle when DTMF tone detection fails. |
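A rough JavaScript counterpart to the stop call described above — it assumes the JavaScript Call Automation SDK mirrors the .NET `StopContinuousDtmfRecognitionAsync` method as `stopContinuousDtmfRecognition`; verify the exact name against your SDK version:

```javascript
const { CallAutomationClient } = require("@azure/communication-call-automation");

const client = new CallAutomationClient(process.env.COMMUNICATION_SERVICES_CONNECTION_STRING);

async function stopDtmfRecognition(callConnectionId, participantPhoneNumber) {
  const callMedia = client.getCallConnection(callConnectionId).getCallMedia();
  // Assumed signature, mirroring the .NET SDK's StopContinuousDtmfRecognitionAsync.
  await callMedia.stopContinuousDtmfRecognition(
    { phoneNumber: participantPhoneNumber },
    { operationContext: "dtmf-reco-on-c2" }
  );
}
```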
communication-services | Mute Participants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/mute-participants.md | zone_pivot_groups: acs-csharp-java With the Azure Communication Services Call Automation SDK, developers can now mute participants through server-based API requests. This feature can be useful when you want your application to mute participants after they've joined the meeting to avoid any interruptions or distractions to ongoing meetings. -If you're interested in abilities to allow participants to mute/unmute themselves on the call when they've joined with ACS Client Libraries, you can use our [mute/unmute function](../../../communication-services/how-tos/calling-sdk/manage-calls.md) provided through our Calling Library. +If you're interested in allowing participants to mute/unmute themselves on a call they've joined with the Azure Communication Services client libraries, you can use the [mute/unmute function](../../../communication-services/how-tos/calling-sdk/manage-calls.md) provided through our Calling Library. ## Common use cases |
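As a hedged sketch of a server-side mute with the JavaScript Call Automation SDK — the `muteParticipant` method name mirrors the .NET SDK's `MuteParticipantAsync` and is an assumption to verify against your SDK version:

```javascript
const { CallAutomationClient } = require("@azure/communication-call-automation");

const client = new CallAutomationClient(process.env.COMMUNICATION_SERVICES_CONNECTION_STRING);

async function muteParticipant(callConnectionId, communicationUserId) {
  const callConnection = client.getCallConnection(callConnectionId);
  // Assumed method name; the participant here is a Communication Services user.
  await callConnection.muteParticipant({ communicationUserId });
}
```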
communication-services | Call Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/call-transcription.md | zone_pivot_groups: acs-plat-ios-android-windows # Display call transcription state on the client > [!NOTE]-> Call transcription state is only available from Teams meetings. Currently there's no support for call transcription state for ACS to ACS calls. +> Call transcription state is only available from Teams meetings. Currently there's no support for call transcription state for Azure Communication Services to Azure Communication Services calls. When using call transcription, you may want to let your users know that a call is being transcribed. Here's how. |
communication-services | Callkit Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/callkit-integration.md | Last updated 01/06/2023 Title: CallKit integration in ACS Calling SDK+ Title: CallKit integration in Azure Communication Services Calling SDK -description: Steps on how to integrate CallKit with ACS Calling SDK +description: Steps on how to integrate CallKit with Azure Communication Services Calling SDK # Integrate with CallKit description: Steps on how to integrate CallKit with ACS Calling SDK ## CallKit Integration (within SDK) - CallKit Integration in the ACS iOS SDK handles interaction with CallKit for us. To perform any call operations like mute/unmute, hold/resume, we only need to call the API on the ACS SDK. + CallKit Integration in the Azure Communication Services iOS SDK handles interaction with CallKit for us. To perform any call operations like mute/unmute or hold/resume, we only need to call the API on the Azure Communication Services SDK. ### Initialize call agent with CallKitOptions description: Steps on how to integrate CallKit with ACS Calling SDK ### Handle incoming push notification payload - When the app receives incoming push notification payload, we need to call `handlePush` to process it. ACS Calling SDK will raise the `IncomingCall` event. + When the app receives an incoming push notification payload, we need to call `handlePush` to process it. The Azure Communication Services Calling SDK will raise the `IncomingCall` event. ```Swift public func handlePushNotification(_ pushPayload: PKPushPayload) |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/capabilities.md | Do I have permission to turn video on, do I have permission to turn mic on, do I [!INCLUDE [Capabilities JavaScript](./includes/capabilities/capabilities-web.md)] ## Supported Calltype-The feature is currently supported only for ACS Rooms call type and teams meeting call type +The feature is currently supported only for the Azure Communication Services Rooms call type and the Teams meeting call type. ## Next steps - [Learn how to manage video](./manage-video.md) |
communication-services | Manage Calls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-calls.md | Last updated 08/10/2021 zone_pivot_groups: acs-plat-web-ios-android-windows -#Customer intent: As a developer, I want to manage calls with the acs sdks so that I can create a calling application that manages calls. +#Customer intent: As a developer, I want to manage calls with the Azure Communication Services SDKs so that I can create a calling application that manages calls. # Manage calls |
communication-services | Manage Video | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-video.md | Last updated 08/10/2021 zone_pivot_groups: acs-plat-web-ios-android-windows -#Customer intent: As a developer, I want to manage video calls with the acs sdks so that I can create a calling application that provides video capabilities. +#Customer intent: As a developer, I want to manage video calls with the Azure Communication Services SDKs so that I can create a calling application that provides video capabilities. # Manage video during calls |
communication-services | Push Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/push-notifications.md | Last updated 08/10/2021 zone_pivot_groups: acs-plat-web-ios-android -#Customer intent: As a developer, I want to enable push notifications with the Azure Communication Services SDKs so that I can create a calling application that provides push notifications to its users. # Enable push notifications for calls |
communication-services | Local Testing Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/event-grid/local-testing-event-grid.md | ngrok http 7071 "MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", "From": "15555555555", "To": "15555555555",- "Message": "Great to connect with ACS events", + "Message": "Great to connect with Azure Communication Services events", "ReceivedTimestamp": "2020-09-18T00:27:45.32Z" }, "eventType": "Microsoft.Communication.SMSReceived", |
communication-services | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/managed-identity.md | Azure Communication Services is a fully managed communication platform that enab ## Using Managed Identity with Azure Communication Services -ACS supports using Managed Identity to authenticate with the service. By using Managed Identity, you can eliminate the need to manage your own access tokens and credentials. +Azure Communication Services supports using Managed Identity to authenticate with the service. By using Managed Identity, you can eliminate the need to manage your own access tokens and credentials. Your Azure Communication Services resource can be assigned two types of identity: 1. A **System Assigned Identity**, which is tied to your resource and is deleted when your resource is deleted. az communication identity assign --system-assigned --name myApp --resource-group ## Add a user-assigned identity -Assigning a user-assigned identity to your ACS resource requires that you first create the identity and then add its resource identifier to your Communication service resource. +Assigning a user-assigned identity to your Azure Communication Services resource requires that you first create the identity and then add its resource identifier to your Communication Services resource. # [Azure portal](#tab/portal) az communication identity assign --name myApp --resource-group myResourceGroup - -- -## Managed Identity using ACS management SDKs -Managed Identity can also be assigned to your ACS resource using the Azure Communication Management SDKs. +## Managed Identity using Azure Communication Services management SDKs +Managed Identity can also be assigned to your Azure Communication Services resource using the Azure Communication Management SDKs. This assignment can be achieved by introducing the identity property in the resource definition, either on creation or when updating the resource.
# [.NET](#tab/dotnet) For more information specific to managing your resource instance, see [Managing # [JavaScript](#tab/javascript) -For Node.js apps and JavaScript functions, samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/samples-dev/communicationServicesCreateOrUpdateSample.ts) +For Node.js apps and JavaScript functions, samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/samples-dev/communicationServicesCreateOrUpdateSample.ts) For more information on using the JavaScript Management SDK, see [Azure Communication Management SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/communication/arm-communication/README.md) # [Python](#tab/python) -For Python apps and functions, Code Samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/generated_samples/communication_services/create_or_update_with_system_assigned_identity.py) +For Python apps and functions, code samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/generated_samples/communication_services/create_or_update_with_system_assigned_identity.py) For more information on using the Python Management SDK, see [Azure Communication Management SDK for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/communication/azure-mgmt-communication/README.md) # [Java](#tab/java) -For Java apps and functions, Code Samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/communication/azure-resourcemanager-communication/src/samples/java/com/azure/resourcemanager/communication/generated/CommunicationServicesCreateOrUpdateSamples.java). +For Java apps and functions, code samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/communication/azure-resourcemanager-communication/src/samples/java/com/azure/resourcemanager/communication/generated/CommunicationServicesCreateOrUpdateSamples.java). For more information on using the Java Management SDK, see [Azure Communication Management SDK for Java](https://github.com/Azure/azure-sdk-for-jav) # [GoLang](#tab/go) -For Golang apps and functions, Code Samples on how to create or update your ACS resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Golang](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/services_client_example_test.go).
+For Golang apps and functions, code samples on how to create or update your Azure Communication Services resource with a managed identity can be found in the [Azure Communication Management Developer Samples for Golang](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/services_client_example_test.go). For more information on using the Golang Management SDK, see [Azure Communication Management SDK for Golang](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/communication/armcommunication/README.md) |
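As a rough illustration of what those samples do, the following JavaScript sketch enables a system-assigned identity when creating or updating a resource. The client and method names are assumptions based on the `@azure/arm-communication` package that the linked samples use; verify them against the SDK reference before relying on this:

```javascript
const { CommunicationServiceManagementClient } = require("@azure/arm-communication");
const { DefaultAzureCredential } = require("@azure/identity");

async function main() {
  // Subscription ID, resource group, and resource name are placeholders.
  const client = new CommunicationServiceManagementClient(
    new DefaultAzureCredential(),
    "<subscription-id>"
  );

  await client.communicationServices.beginCreateOrUpdateAndWait(
    "myResourceGroup",
    "myCommunicationService",
    {
      location: "global",
      dataLocation: "United States",
      // Introduce the identity property to request a system-assigned identity.
      identity: { type: "SystemAssigned" }
    }
  );
}

main().catch(console.error);
```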
communication-services | Domain Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/telephony/domain-validation.md | To use direct routing in Azure Communication Services, you need to validate that When you're verifying the ownership of the SBC FQDN, keep in mind that the `*.onmicrosoft.com` and `*.azure.com` domain names aren't supported. For example, if you have two domain names, `contoso.com` and `contoso.onmicrosoft.com`, use `sbc.contoso.com` as the SBC name. Validating the domain part makes sense if you plan to add multiple SBCs from the same domain name space. For example, if you're using `sbc-eu.contoso.com`, `sbc-us.contoso.com`, and `sbc-af.contoso.com`, you can validate the `contoso.com` domain once and add SBCs from that domain later without extra validation.-Validating entire FQDN is helpful if you're a service provider and don't want to validate your base domain ownership with every customer. For example if you're running SBCs `customer1.acs.adatum.biz`, `customer2.acs.adatum.biz`, and `customer3.acs.adatum.biz`, you don't need to validate `acs.adatum.biz` for every Communication resource, instead you validate the entire FQDN each time. This option provides more granular security approach. +Validating the entire FQDN is helpful if you're a service provider and don't want to validate your base domain ownership with every customer. For example, if you're running SBCs `customer1.acs.adatum.biz`, `customer2.acs.adatum.biz`, and `customer3.acs.adatum.biz`, you don't need to validate `acs.adatum.biz` for every Communication resource; instead, you validate the entire FQDN each time. This option provides a more granular security approach. ## Add a new domain name |
communication-services | Quickstart Botframework Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md | When you have a Communication Services resource, you can set up a Communication :::image type="content" source="./media/smaller-bot-choose-resource.png" alt-text="Screenshot that shows how to save the selected Communication Service resource to create a new Communication Services user ID." lightbox="./media/bot-choose-resource.png"::: -1. When the resource details are verified, a bot ID is shown in the **Bot ACS Id** column. You can use the bot ID to represent the bot in a chat thread by using the Communication Services Chat AddParticipant API. After you add the bot to a chat as participant, the bot starts to receive chat-related activities, and it can respond in the chat thread. +1. When the resource details are verified, a bot ID is shown in the **Bot Azure Communication Services Id** column. You can use the bot ID to represent the bot in a chat thread by using the Communication Services Chat AddParticipant API. After you add the bot to a chat as a participant, the bot starts to receive chat-related activities, and it can respond in the chat thread. :::image type="content" source="./media/smaller-acs-chat-channel-saved.png" alt-text="Screenshot that shows the new Communication Services user ID assigned to the bot." lightbox="./media/acs-chat-channel-saved.png"::: namespace Microsoft.BotBuilderSamples.Bots ### Send an adaptive card +> [!NOTE] +> Adaptive cards are only supported within Azure Communication Services use cases where all chat participants are Azure Communication Services users, and not for Teams interoperability use cases. + You can send an adaptive card to the chat thread to increase engagement and efficiency. An adaptive card also helps you communicate with users in various ways. You can send an adaptive card from a bot by adding the card as a bot activity attachment. Here's an example of how to send an adaptive card: Verify that the bot's Communication Services ID is used correctly when a request ## Next steps -Try the [chat bot demo app](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App) for a 1:1 chat between a chat user and a bot via the BotFramework WebChat UI component. +Try the [chat bot demo app](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App) for a 1:1 chat between a chat user and a bot via the BotFramework WebChat UI component. |
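The article's adaptive card example targets the .NET bot shown earlier; for orientation, an equivalent sketch with the Bot Framework SDK for JavaScript could look like this (the bot class and card payload are illustrative, not taken from the quickstart):

```javascript
const { ActivityHandler, CardFactory } = require("botbuilder");

class ChatBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context, next) => {
      // Wrap an Adaptive Card payload as a bot activity attachment.
      const card = CardFactory.adaptiveCard({
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        type: "AdaptiveCard",
        version: "1.3",
        body: [{ type: "TextBlock", text: "Hello from the bot!", wrap: true }]
      });
      await context.sendActivity({ attachments: [card] });
      await next();
    });
  }
}

module.exports.ChatBot = ChatBot;
```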
communication-services | Add Multiple Senders Mgmt Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-multiple-senders-mgmt-sdks.md | Title: How to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries + Title: How to add and remove sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries -description: Learn about adding and removing sender addresses in Azure Communication Services using the ACS Management Client Libraries +description: Learn about adding and removing sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries -# Quickstart: How to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries +# Quickstart: How to add and remove sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries -In this quick start, you will learn how to add and remove sender addresses in Azure Communication Services using the ACS Management Client Libraries. +In this quick start, you will learn how to add and remove sender addresses in Azure Communication Services using the Azure Communication Services Management Client Libraries. ::: zone pivot="programming-language-csharp" [!INCLUDE [Add sender addresses with .NET Management SDK](./includes/add-multiple-senders-net.md)] |
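For readers following this quickstart from JavaScript, a hedged sketch of adding and removing a sender address with the management client libraries follows. The `senderUsernames` operation group and its parameter shape are assumptions based on `@azure/arm-communication`; confirm them against the SDK reference:

```javascript
const { CommunicationServiceManagementClient } = require("@azure/arm-communication");
const { DefaultAzureCredential } = require("@azure/identity");

async function addSenderAddress() {
  const client = new CommunicationServiceManagementClient(
    new DefaultAzureCredential(),
    "<subscription-id>"
  );

  // Resource group, email service, and domain names are placeholders.
  await client.senderUsernames.createOrUpdate(
    "myResourceGroup",
    "myEmailService",
    "contoso.com",
    "sales", // sender username to add
    { username: "sales", displayName: "Sales Team" }
  );

  // Removing a sender address is the symmetric delete call:
  // await client.senderUsernames.delete("myResourceGroup", "myEmailService", "contoso.com", "sales");
}

addSenderAddress().catch(console.error);
```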
communication-services | Define Media Composition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/media-composition/define-media-composition.md | Azure Communication Services Media Composition is made up of three parts: inputs To retrieve the media sources that will be used in the layout composition, you'll need to define inputs. Inputs can be either multi-source or single source. ### Multi-source inputs-ACS Group Calls and ACS Rooms are typically made up of multiple participants. We define these as multi-source inputs. They can be used in layouts as a single input or destructured to reference a single participant. +Azure Communication Services Group Calls and Azure Communication Services Rooms are typically made up of multiple participants. We define these as multi-source inputs. They can be used in layouts as a single input or destructured to reference a single participant. -ACS Group Call json: +Azure Communication Services Group Call json: ```json { "inputs": { ACS Group Call json: } ``` -ACS Rooms Input json: +Azure Communication Services Rooms Input json: ```json { "inputs": { ACS Rooms Input json: ``` ### Single source inputs-Unlike multi-source inputs, single source inputs reference a single media source. If the single source input is from a multi-source input such as an ACS group call or rooms, it will reference the multi-source input's ID in the `call` property. The following are examples of single source inputs: +Unlike multi-source inputs, single source inputs reference a single media source. If the single source input is from a multi-source input such as an Azure Communication Services group call or rooms, it will reference the multi-source input's ID in the `call` property. The following are examples of single source inputs: Participant json: ```json The custom layout example above will result in the following composition: ## Outputs After media has been composed according to a layout, they can be outputted to your audience in various ways. Currently, you can either send the composed stream to a call or to an RTMP server. -ACS Group Call json: +Azure Communication Services Group Call json: ```json { "outputs": { ACS Group Call json: } ``` -ACS Rooms Output json: +Azure Communication Services Rooms Output json: ```json { "outputs": { |
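To tie the two input kinds together: per the `call` back-reference described above, a single-source participant input drawn from a multi-source group call might be shaped like this sketch. The `kind` values and ID formats are assumptions for illustration; only the `call` property linking back to the multi-source input's ID is taken from the article:

```javascript
// Hedged sketch of an inputs section for a media composition.
const mediaCompositionInputs = {
  inputs: {
    // Multi-source input: a group call identified by its ID (placeholder).
    groupCall1: { kind: "groupCall", id: "<group-call-id>" },
    // Single-source input: one participant, referencing the group call above
    // through the `call` property described in the article.
    presenter: { kind: "participant", call: "groupCall1", id: "<participant-id>" }
  }
};
```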
communication-services | Receive Sms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/receive-sms.md | The `SMSReceived` event generated when an SMS is sent to an Azure Communication "MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", "From": "15555555555", "To": "15555555555",- "Message": "Great to connect with ACS events", + "Message": "Great to connect with Azure Communication Services events", "ReceivedTimestamp": "2020-09-18T00:27:45.32Z" }, "eventType": "Microsoft.Communication.SMSReceived", |
communication-services | Get Started Chat Ui Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-chat-ui-library.md | -Communication Services UI Library renders a full chat experience right in your application. It takes care of connecting to ACS chat services, and updates participant's presence automatically. As a developer, you need to worry about where in your app's user experience you want the chat experience to launch and only create the ACS resources as required. +Communication Services UI Library renders a full chat experience right in your application. It takes care of connecting to Azure Communication Services chat services, and updates participants' presence automatically. As a developer, you only need to decide where in your app's user experience you want the chat experience to launch, and create the Azure Communication Services resources as required. ::: zone pivot="platform-web" |
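For the web platform, the composite described above boils down to a few lines of React. This sketch assumes the `@azure/communication-react` package; the endpoint, user identity, token, and thread ID are placeholders you would fetch from your own service:

```javascript
import React, { useMemo } from "react";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";
import { ChatComposite, useAzureCommunicationChatAdapter } from "@azure/communication-react";

function ChatApp({ endpoint, userId, displayName, token, threadId }) {
  // Memoize the credential so the adapter isn't rebuilt on every render.
  const credential = useMemo(() => new AzureCommunicationTokenCredential(token), [token]);

  const adapter = useAzureCommunicationChatAdapter({
    endpoint,
    userId: { communicationUserId: userId },
    displayName,
    credential,
    threadId
  });

  // The composite renders the full chat experience once the adapter is ready.
  return adapter ? <ChatComposite adapter={adapter} /> : <div>Loading chat...</div>;
}

export default ChatApp;
```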
communication-services | Media Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/media-streaming.md | Title: Media streaming quickstart -description: Provides a quick start for developers to get audio streams through media streaming APIs from ACS calls. +description: Provides a quick start for developers to get audio streams through media streaming APIs from Azure Communication Services calls. |
communication-services | Web Calling Push Notifications Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/web-calling-push-notifications-sample.md | Title: Azure Communication Services Web Calling SDK - Web push notifications -description: Quickstart tutorial for ACS Web Calling SDK push notifications +description: Quickstart tutorial for Azure Communication Services Web Calling SDK push notifications Last updated 04/04/2023 |
communication-services | Meeting Interop Features File Attachment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-interop/meeting-interop-features-file-attachment.md | -The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, Chat SDK provides a solution to receive file attachments sent by users from Microsoft Teams. Currently this feature is only available in the Chat SDK for JavaScript. Please note that sending file attachments from ACS user to Teams user is not currently supported, see the current capabilities of [Teams Interop Chat](../../concepts/interop/guest/capabilities.md) for details. +The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, the Chat SDK provides a solution to receive file attachments sent by users from Microsoft Teams. Currently this feature is only available in the Chat SDK for JavaScript. Note that sending file attachments from an Azure Communication Services user to a Teams user isn't currently supported; see the current capabilities of [Teams Interop Chat](../../concepts/interop/guest/capabilities.md) for details. [!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)] |
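Since this capability currently lives in the Chat SDK for JavaScript, a heavily hedged sketch of surfacing attachment metadata from received messages follows. The `attachments` property shape is an assumption based on the preview SDK and may differ; treat it as orientation only:

```javascript
const { ChatClient } = require("@azure/communication-chat");
const { AzureCommunicationTokenCredential } = require("@azure/communication-common");

async function logTeamsAttachments(endpoint, userToken, threadId) {
  const chatClient = new ChatClient(endpoint, new AzureCommunicationTokenCredential(userToken));
  const threadClient = chatClient.getChatThreadClient(threadId);

  for await (const message of threadClient.listMessages()) {
    // Assumed preview shape: file attachments surfaced on message content.
    for (const attachment of message.content?.attachments ?? []) {
      console.log(`Attachment "${attachment.name}": ${attachment.previewUrl}`);
    }
  }
}
```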
communication-services | Contact Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/contact-center.md | The following list presents the set of features that are currently available for | Group of features | Capability | Public preview | General availability | |-|-|-|-|-| DTMF Support in ACS UI SDK | Allows touch tone entry | ❌ | ✔️ | +| DTMF Support in Azure Communication Services UI SDK | Allows touch tone entry | ❌ | ✔️ | | Teams Capabilities | Audio and video | ✔️ | ✔️ | | | Screen sharing | ✔️ | ✔️ | | | Record the call | ✔️ | ✔️ | |
communication-services | End Of Call Survey Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md | In addition to using the End of Call Survey API you can create your own survey q - Embed Azure AppInsights into your application [Learn more about App Insights initialization using plain JavaScript](../../azure-monitor/app/javascript-sdk.md). Alternatively, you can use NPM to get the App Insights dependencies. [Learn more about App Insights initialization using NPM](../../azure-monitor/app/javascript-sdk-configuration.md). - Build a UI in your application that will serve custom questions to the user and gather their input. Let's assume that your application gathers responses as a string in the `improvementSuggestion` variable -- Submit survey results to ACS and send user response using App Insights:+- Submit survey results to Azure Communication Services and send the user response using App Insights: ``` javascript currentCall.feature(SDK.Features.CallSurvey).submitSurvey(survey).then(res => { // `improvementSuggestion` contains the custom user response In addition to using the End of Call Survey API you can create your own survey q appInsights.flush(); ``` User responses that were sent using AppInsights are available under your App Insights workspace. You can use [Workbooks](../../update-center/workbooks.md) to query across multiple resources and correlate call ratings with custom survey data. Steps to correlate the call ratings and custom survey data:- Create new [Workbooks](../../update-center/workbooks.md) (Your ACS Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your ACS resource.+ Create a new [Workbook](../../update-center/workbooks.md) (Your Azure Communication Services Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your Azure Communication Services resource. - Add a new query (+Add -> Add query) - Make sure `Data source` is `Logs` and `Resource type` is `Communication` - You can rename the query (Advanced Settings -> Step name [example: call-survey]) |
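Putting the pieces above together, one plausible end-to-end snippet — assuming `survey`, `currentCall`, and `improvementSuggestion` were produced by your own UI as the tutorial describes — might look like this:

```javascript
// Submit the built-in survey, then ship the free-form answer to App Insights
// so it can be correlated with the call later (callId is used as the join key).
currentCall.feature(SDK.Features.CallSurvey).submitSurvey(survey).then(() => {
    appInsights.trackEvent({
        name: "CallSurvey",
        properties: {
            callId: currentCall.id,
            improvementSuggestion // custom user response gathered by your UI
        }
    });
    appInsights.flush();
}).catch((e) => console.error("Survey submission failed", e));
```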
communication-services | File Sharing Tutorial Acs Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-acs-chat.md | -In an Azure Communication Service Chat ("ACS Chat"), we can enable file sharing between communication users. Note, ACS Chat is different from the Teams Interoperability Chat ("Interop Chat"). If you want to enable file sharing in an Interop Chat, refer to [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md). +In an Azure Communication Services Chat ("ACS Chat"), we can enable file sharing between communication users. Note that Azure Communication Services Chat is different from the Teams Interoperability Chat ("Interop Chat"). If you want to enable file sharing in an Interop Chat, refer to [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md). In this tutorial, we're configuring the Azure Communication Services UI Library Chat Composite to enable file sharing. The UI Library Chat Composite provides a set of rich components and UI controls that can be used to enable file sharing. We're using Azure Blob Storage to enable the storage of the files that are shared through the chat thread. |
communication-services | File Sharing Tutorial Interop Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-interop-chat.md | -In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Service end users and Teams users. Note, Interop Chat is different from the Azure Communication Service Chat ("ACS Chat"). If you want to enable file sharing in an ACS Chat, refer to [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Service end user is only able to receive file attachments from the Teams user. Please refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more. +In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Services end users and Teams users. Note that Interop Chat is different from the Azure Communication Services Chat ("ACS Chat"). If you want to enable file sharing in an Azure Communication Services Chat, refer to [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Services end user is only able to receive file attachments from the Teams user. Refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more. >[!IMPORTANT] > Moreover, the Teams user's tenant admin might impose restrictions on file sharin Let's run `npm run start`. You should then be able to access our sample app via `localhost:3000`, as shown in the following screenshot: -![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of a ACS UI library.") +![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of an Azure Communication Services UI library.") Click the chat button at the bottom to reveal the chat panel. Now, if the Teams user sends some files, you should see something like the following screenshot: ![Teams sending a file](./media/file-sharing-tutorial-interop-chat-1.png "Screenshot of a Teams client sending one file.") -![ACS getting a file](./media/file-sharing-tutorial-interop-chat-2.png "Screenshot of ACS UI library receiving one file.") +![ACS getting a file](./media/file-sharing-tutorial-interop-chat-2.png "Screenshot of the Azure Communication Services UI library receiving one file.") Now, if the user clicks the file attachment card, a new tab opens where the user can download the file: |
communication-services | Inline Image Tutorial Interop Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/inline-image-tutorial-interop-chat.md | And this is all you need! And there's no other setup needed to enable inline ima Let's run `npm run start`. You should then be able to access our sample app via `localhost:3000`, as shown in the following screenshot: -![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of a ACS UI library.") +![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of an Azure Communication Services UI library.") Click the chat button at the bottom to reveal the chat panel. Now, if the Teams user sends an image, you should see something like the following screenshot: ![Teams sending two images](./media/inline-image-tutorial-interop-chat-1.png "Screenshot of a Teams client sending 2 inline images.") -![ACS getting two images](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of ACS UI library receiving 2 inline images.") +![ACS getting two images](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of the Azure Communication Services UI library receiving 2 inline images.") Note that in a Teams Interop Chat, we currently only support the Azure Communication Services end user receiving inline images sent by the Teams user. To learn more about what features are supported, refer to the [UI Library use cases](../concepts/ui-library/ui-library-use-cases.md) |
communication-services | Integrate Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/integrate-azure-function.md | Before you get started, make sure to: }; } ```-**Explanation to code above**: The first line import the interface for the `CommunicationIdentityClient`. The connection string in the second line can be found in your Azure Communication Services resource in the Azure portal. The `ACSEndpoint` is the URL of the ACS resource that was created. +**Explanation of the code above**: The first line imports the interface for the `CommunicationIdentityClient`. The connection string in the second line can be found in your Azure Communication Services resource in the Azure portal. The `ACSEndpoint` is the URL of the Azure Communication Services resource that was created. 5. Open the local Azure Function folder in Visual Studio Code. Open the `index.js` file and run the local Azure Function. A local Azure Function endpoint will be created and printed in the terminal. The printed message looks similar to: |
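For context, the function body being described might look like the following sketch. It uses the documented `CommunicationIdentityClient.createUserAndToken` call, while the environment variable name and response shape are illustrative assumptions:

```javascript
const { CommunicationIdentityClient } = require("@azure/communication-identity");

module.exports = async function (context, req) {
  // Connection string comes from your Communication Services resource (placeholder here).
  const connectionString = process.env["COMMUNICATION_SERVICES_CONNECTION_STRING"];
  const identityClient = new CommunicationIdentityClient(connectionString);

  // Create a new user and issue a VoIP-scoped access token in one call.
  const { user, token, expiresOn } = await identityClient.createUserAndToken(["voip"]);

  context.res = {
    body: { userId: user.communicationUserId, token, expiresOn }
  };
};
```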
communication-services | Proxy Calling Support Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md | Title: Tutorial - Proxy your ACS calling traffic across your own servers + Title: Tutorial - Proxy your Azure Communication Services calling traffic across your own servers description: Learn how to have your media and signaling traffic be proxied to servers that you can control. |
communication-services | Virtual Visits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md | These three **implementation options** are columns in the table below, while eac |--||--||| | *Manager* | Configure Business Availability | Bookings | Bookings | Custom | | *Provider* | Managing upcoming appointments | Outlook & Teams | Outlook & Teams | Custom |-| *Provider* | Join the appointment | Teams | Teams | ACS Calling & Chat | -| *Consumer* | Schedule an appointment | Bookings | Bookings | ACS Rooms | -| *Consumer*| Be reminded of an appointment | Bookings | Bookings | ACS SMS | -| *Consumer*| Join the appointment | Teams or virtual appointments | ACS Calling & Chat | ACS Calling & Chat | +| *Provider* | Join the appointment | Teams | Teams | Azure Communication Services Calling & Chat | +| *Consumer* | Schedule an appointment | Bookings | Bookings | Azure Communication Services Rooms | +| *Consumer*| Be reminded of an appointment | Bookings | Bookings | Azure Communication Services SMS | +| *Consumer*| Join the appointment | Teams or virtual appointments | Azure Communication Services Calling & Chat | Azure Communication Services Calling & Chat | There are other ways to customize and combine Microsoft tools to deliver a virtual appointments experience: - **Replace Bookings with a custom scheduling experience with Graph.** You can build your own consumer-facing scheduling experience that controls Microsoft 365 meetings with Graph APIs. The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. T 2. Consumer gets an appointment reminder through SMS and Email. 3. Provider joins the appointment using Microsoft Teams. 4. Consumer uses a link from the Bookings reminders to launch the Contoso consumer app and join the underlying Teams meeting.-5. The users communicate with each other using voice, video, and text chat in a meeting. Specifically, Teams chat interoperability enables Teams user to send inline images or file attachments directly to ACS users seamlessly. +5. The users communicate with each other using voice, video, and text chat in a meeting. Specifically, Teams chat interoperability enables the Teams user to send inline images or file attachments directly to Azure Communication Services users seamlessly. ## Building a virtual appointment sample In this section, we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop- and mobile-friendly browser experience, with code that you can use to explore and for production. |
communications-gateway | Plan And Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md | When you deploy Azure Communications Gateway, you're charged for how you use the For example, if you have 28,000 users assigned to the deployment each month, you're charged for: * The service availability fee for each hour in the month * 24,001 users in the 1000-25000 tier-* 3000 users in the 25001-100000 tier --> [!TIP] -> If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed. --If you choose to deploy the Number Management Portal by selecting the API Bridge option, you'll also be charged for the Number Management Portal. Fees work in the same way as the other meters: a service fee meter and a per-user meter. The number of users charged for the Number Management Portal is always the same as the number of users charged on the other Azure Communications Gateway meters. +* 3000 users in the 25000+ tier > [!NOTE] > A Microsoft Teams Direct Routing user is any telephone number configured with Direct Routing on Azure Communications Gateway. Billing for the user starts as soon as you have configured the number. If you choose to deploy the Number Management Portal by selecting the API Bridge At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Azure Communications Gateway costs. There's a separate line item for each meter. +> [!TIP] +> If you receive a quote through Microsoft Volume Licensing, pricing may be presented as aggregated so that the values are easily readable (for example showing the per-user meters in groups of 10 or 100 rather than the pricing for individual users). This does not impact the way you will be billed. + If you've arranged any custom work with Microsoft, you might be charged an extra fee for that work. That fee isn't included in these meters. If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). |
container-apps | Firewall Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md | Network Security Groups (NSGs) needed to configure virtual networks closely rese You can lock down a network via NSGs with more restrictive rules than the default NSG rules to control all inbound and outbound traffic for the Container Apps environment at the subscription level. -In the workload profiles environment, user-defined routes (UDRs) and securing outbound traffic with a firewall are supported. When using an external workload profiles environment, inbound traffic to Container Apps that use external ingress routes through the public IP that exists in the [managed resource group](./networking.md#workload-profiles-environment-1) rather than through your subnet. This means that locking down inbound traffic via NSG or Firewall on an external workload profiles environment is not supported. For more information, see [Networking in Azure Container Apps environments](./networking.md#user-defined-routes-udr). +In the workload profiles environment, user-defined routes (UDRs) and [securing outbound traffic with a firewall](./networking.md#configuring-udr-with-azure-firewall) are supported. When using an external workload profiles environment, inbound traffic to Azure Container Apps is routed through the public IP that exists in the [managed resource group](./networking.md#workload-profiles-environment-1) rather than through your subnet. This means that locking down inbound traffic via NSG or Firewall on an external workload profiles environment isn't supported. For more information, see [Networking in Azure Container Apps environments](./networking.md#user-defined-routes-udr). In the Consumption only environment, custom user-defined routes (UDRs) and ExpressRoutes aren't supported. ## NSG allow rules -The following tables describe how to configure a collection of NSG allow rules. ->[!NOTE] -> The subnet associated with a Container App Environment on the Consumption only environment requires a CIDR prefix of `/23` or larger. On the workload profiles environment (preview), a `/27` or larger is required. +The following tables describe how to configure a collection of NSG allow rules. The specific rules required depend on your [environment type](./environment.md#types). ### Inbound -| Protocol | Port | ServiceTag | Description | -|--|--|--|--| -| Any | \* | Infrastructure subnet address space | Allow communication between IPs in the infrastructure subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. | -| Any | \* | AzureLoadBalancer | Allow the Azure infrastructure load balancer to communicate with your environment. | +# [Workload profiles environment](#tab/workload-profiles-env) -### Outbound with service tags +>[!Note] +> When using workload profiles, inbound NSG rules only apply for traffic going through your virtual network. If your container apps are set to accept traffic from the public internet, incoming traffic will go through the public endpoint instead of the virtual network. -The following service tags are required when using NSGs on the Consumption only environment: +| Protocol | Source | Source Ports | Destination | Destination Ports | Description | +|--|--|--|--|--|--| +| TCP | Your Client IPs | \* | Your container app's subnet<sup>1</sup> | `443`, `30,000-32,676`<sup>2</sup> | Allow your Client IPs to access Azure Container Apps. 
| +| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. | -| Protocol | Port | ServiceTag | Description -|--|--|--|--| -| UDP | `1194` | `AzureCloud.<REGION>` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. | -| TCP | `9000` | `AzureCloud.<REGION>` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. | -| TCP | `443` | `AzureMonitor` | Allows outbound calls to Azure Monitor. | +# [Consumption only environment](#tab/consumption-only-env) -The following service tags are required when using NSGs on the workload profiles environment: +| Protocol | Source | Source Ports | Destination | Destination Ports | Description | +|--|--|--|--|--|--| +| TCP | Your Client IPs | \* | Your container app's subnet<sup>1</sup> | `443` | Allow your Client IPs to access Azure Container Apps. | +| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30,000-32,676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. | ++++<sup>1</sup> This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. +<sup>2</sup> The full range is required when creating your Azure Container Apps, as a port within the range will be dynamically allocated. Once created, the required ports are two immutable, static values, and you can update your NSG rules. ->[!Note] -> If you are using Azure Container Registry (ACR) with NSGs configured on your virtual network, create a private endpoint on your ACR to allow Container Apps to pull images through the virtual network. -| Protocol | Port | Service Tag | Description -|--|--|--|--| -| TCP | `443` | `MicrosoftContainerRegistry` | This is the service tag for container registry for microsoft containers. | -| TCP | `443` | `AzureFrontDoor.FirstParty` | This is a dependency of the `MicrosoftContainerRegistry` service tag. | +### Outbound -### Outbound with wild card IP rules +# [Workload profiles environment](#tab/workload-profiles-env) ++| Protocol | Source | Source Ports | Destination | Destination Ports | Description | +|--|--|--|--|--|--| +| TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> | +| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Allows outbound calls to Azure Monitor. | +| TCP | Your container app's subnet | \* | `MicrosoftContainerRegistry` | `443` | This is the service tag for Microsoft container registry for system containers. | +| TCP | Your container app's subnet | \* | `AzureFrontDoor.FirstParty` | `443` | This is a dependency of the `MicrosoftContainerRegistry` service tag. | +| UDP | Your container app's subnet | \* | \* | `123` | NTP server. | +| Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. | +| TCP | Your container app's subnet | \* | `AzureActiveDirectory` | `443` | If you're using managed identity, this is required.
| ++# [Consumption only environment](#tab/consumption-only-env) ++| Protocol | Source | Source Ports | Destination | Destination Ports | Description | +|--|--|--|--|--|--| +| TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> | +| UDP | Your container app's subnet | \* | `AzureCloud.<REGION>` | `1194` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. | +| TCP | Your container app's subnet | \* | `AzureCloud.<REGION>` | `9000` | Required for internal AKS secure connection between underlying nodes and control plane. Replace `<REGION>` with the region where your container app is deployed. | +| TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Allows outbound calls to Azure Monitor. | +| TCP | Your container app's subnet | \* | `AzureCloud` | `443` | Allowing all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. | +| UDP | Your container app's subnet | \* | \* | `123` | NTP server. | +| TCP | Your container app's subnet | \* | \* | `5671` | Container Apps control plane. | +| TCP | Your container app's subnet | \* | \* | `5672` | Container Apps control plane. | +| Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. | ++ -The following IP rules are required when using NSGs on both the Consumption only environment and the workload profiles environment: +<sup>1</sup> This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. +<sup>2</sup> If you're using Azure Container Registry (ACR) with NSGs configured on your virtual network, create a private endpoint on your ACR to allow Azure Container Apps to pull images through the virtual network. You don't need to add an NSG rule for ACR when configured with private endpoints. -| Protocol | Port | IP | Description | -|--|--|--|--| -| TCP | `443` | \* | Allowing all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. | -| UDP | `123` | \* | NTP server. | -| TCP | `5671` | \* | Container Apps control plane. | -| TCP | `5672` | \* | Container Apps control plane. | -| Any | \* | Infrastructure subnet address space | Allow communication between IPs in the infrastructure subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. | #### Considerations - If you're running HTTP servers, you might need to add ports `80` and `443`.-- Adding deny rules for some ports and protocols with lower priority than `65000` may cause service interruption and unexpected behavior.+- Adding deny rules for some ports and protocols with lower priority than `65000` might cause service interruption and unexpected behavior. - Don't explicitly deny the Azure DNS address `168.63.128.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function. |
container-apps | User Defined Routes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md | Azure creates a default route table for your virtual networks on create. By impl You can also use a NAT gateway or any other third-party appliances instead of Azure Firewall. -For more information on networking concepts in Container Apps, see [Networking Environment in Azure Container Apps](./networking.md). +For more information, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewall) in [networking in Azure Container Apps](./networking.md). ## Prerequisites A subnet called **AzureFirewallSubnet** is required in order to deploy a firewal | **Virtual network** | Select the integrated virtual network. | | **Public IP address** | Select an existing address or create one by selecting **Add new**. | -1. Select **Review + create**. After validation finishes, select **Create**. The validation step may take a few minutes to complete. +1. Select **Review + create**. After validation finishes, select **Create**. The validation step might take a few minutes to complete. 1. Once the deployment completes, select **Go to Resource**. |
cosmos-db | Continuous Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md | Currently the point in time restore functionality has the following limitations: * Multi-region write accounts aren't supported. -* Currently Azure Synapse Link can be enabled, in preview, in continuous backup database accounts. The opposite situation isn't supported yet, it is not possible to turn on continuous backup in Synapse Link enabled database accounts. And analytical store isn't included in backups. For more information about backup and analytical store, see [analytical store backup](analytical-store-introduction.md#backup). +* Currently Azure Synapse Link can be enabled in continuous backup database accounts. However, the opposite isn't supported yet: it isn't possible to turn on continuous backup in Synapse Link enabled database accounts. Also, analytical store isn't included in backups. For more information about backup and analytical store, see [analytical store backup](analytical-store-introduction.md#backup). * The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account didn't exist. |
cosmos-db | Concepts Colocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-colocation.md | Colocation means storing related information together on the same nodes. Queries ## Data colocation for hash-distributed tables -In Azure Cosmos DB for PostgreSQL, a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables. +In Azure Cosmos DB for PostgreSQL, a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables. The concept of hash-distributed tables is also known as [row-based sharding](concepts-sharding-models.md#row-based-sharding). In [schema-based sharding](concepts-sharding-models.md#schema-based-sharding), tables within a distributed schema are always colocated. :::image type="content" source="media/concepts-colocation/colocation-shards.png" alt-text="Diagram shows shards with the same hash range placed on the same node for events shards and page shards." border="false"::: ## A practical example of colocation -Consider the following tables that might be part of a multi-tenant web +Consider the following tables that might be part of a multitenant web analytics SaaS: ```sql In some cases, queries and table schemas must be changed to include the tenant I ## Next steps -- See how tenant data is colocated in the [multi-tenant tutorial](tutorial-design-database-multi-tenant.md).+- See how tenant data is colocated in the [multitenant tutorial](tutorial-design-database-multi-tenant.md). |
cosmos-db | Concepts Distributed Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-distributed-data.md | - Title: Distributed data ΓÇô Azure Cosmos DB for PostgreSQL -description: Learn about distributed tables, reference tables, local tables, and shards. ----- Previously updated : 05/06/2019---# Distributed data in Azure Cosmos DB for PostgreSQL ---This article outlines the three table types in Azure Cosmos DB for PostgreSQL. -It shows how distributed tables are stored as shards, and the way that shards are placed on nodes. --## Table types --There are three types of tables in a cluster, each -used for different purposes. --### Type 1: Distributed tables --The first type, and most common, is distributed tables. They -appear to be normal tables to SQL statements, but they're horizontally -partitioned across worker nodes. What this means is that the rows -of the table are stored on different nodes, in fragment tables called -shards. --Azure Cosmos DB for PostgreSQL runs not only SQL but DDL statements throughout a cluster. -Changing the schema of a distributed table cascades to update -all the table's shards across workers. --#### Distribution column --Azure Cosmos DB for PostgreSQL uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value -of a table column called the distribution column. The cluster -administrator must designate this column when distributing a table. -Making the right choice is important for performance and functionality. --### Type 2: Reference tables --A reference table is a type of distributed table whose entire contents are -concentrated into a single shard. The shard is replicated on every worker and -the coordinator. Queries on any worker can access the reference information -locally, without the network overhead of requesting rows from another node. -Reference tables have no distribution column because there's no need to -distinguish separate shards per row. --Reference tables are typically small and are used to store data that's -relevant to queries running on any worker node. An example is enumerated -values like order statuses or product categories. --### Type 3: Local tables --When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them. --A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication. --## Shards --The previous section described how distributed tables are stored as shards on -worker nodes. This section discusses more technical details. --The `pg_dist_shard` metadata table on the coordinator contains a -row for each shard of each distributed table in the system. The row -matches a shard ID with a range of integers in a hash space -(shardminvalue, shardmaxvalue). --```sql -SELECT * from pg_dist_shard; - logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue -++--++ - github_events | 102026 | t | 268435456 | 402653183 - github_events | 102027 | t | 402653184 | 536870911 - github_events | 102028 | t | 536870912 | 671088639 - github_events | 102029 | t | 671088640 | 805306367 - (4 rows) -``` --If the coordinator node wants to determine which shard holds a row of -`github_events`, it hashes the value of the distribution column in the -row. 
Then the node checks which shard's range contains the hashed value. The -ranges are defined so that the image of the hash function is their -disjoint union. --### Shard placements --Suppose that shard 102027 is associated with the row in question. The row -is read or written in a table called `github_events_102027` in one of -the workers. Which worker? That's determined entirely by the metadata -tables. The mapping of shard to worker is known as the shard placement. --The coordinator node -rewrites queries into fragments that refer to the specific tables -like `github_events_102027` and runs those fragments on the -appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027. --```sql -SELECT - shardid, - node.nodename, - node.nodeport -FROM pg_dist_placement placement -JOIN pg_dist_node node - ON placement.groupid = node.groupid - AND node.noderole = 'primary'::noderole -WHERE shardid = 102027; -``` --```output -┌─────────┬───────────┬──────────┐ -│ shardid │ nodename  │ nodeport │ -├─────────┼───────────┼──────────┤ -│  102027 │ localhost │     5433 │ -└─────────┴───────────┴──────────┘ -``` --## Next steps --- Learn how to [choose a distribution column](howto-choose-distribution-column.md) for distributed tables. |
cosmos-db | Concepts Nodes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-nodes.md | allows the database to scale by adding more nodes to the cluster. Every cluster has a coordinator node and multiple workers. Applications send their queries to the coordinator node, which relays it to the relevant-workers and accumulates their results. Applications are not able to connect -directly to workers. +workers and accumulates their results. -Azure Cosmos DB for PostgreSQL allows the database administrator to *distribute* tables, -storing different rows on different worker nodes. Distributed tables are the -key to Azure Cosmos DB for PostgreSQL performance. Failing to distribute tables leaves them entirely -on the coordinator node and cannot take advantage of cross-machine parallelism. +Azure Cosmos DB for PostgreSQL allows the database administrator to *distribute* tables and/or schemas, +storing different rows on different worker nodes. Distributed tables and/or schemas are the +key to Azure Cosmos DB for PostgreSQL performance. Failing to distribute tables and/or schemas leaves them entirely +on the coordinator node, where they can't take advantage of cross-machine parallelism. For each query on distributed tables, the coordinator either routes it to a single worker node, or parallelizes it across several depending on whether the-required data lives on a single node or multiple. The coordinator decides what +required data lives on a single node or multiple. With [schema-based sharding](concepts-sharding-models.md#schema-based-sharding), the coordinator routes the queries directly to the node that hosts the schema. In both schema-based sharding and [row-based sharding](concepts-sharding-models.md#row-based-sharding), the coordinator decides what to do by consulting metadata tables. These tables track the DNS names and health of worker nodes, and the distribution of data across nodes. ## Table types -There are three types of tables in a cluster, each +There are five types of tables in a cluster, each stored differently on nodes and used for different purposes. ### Type 1: Distributed tables values like order statuses or product categories. When you use Azure Cosmos DB for PostgreSQL, the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them. -A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication. +A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a `users` table for application sign-in and authentication. ++### Type 4: Local managed tables ++Azure Cosmos DB for PostgreSQL might automatically add local tables to metadata if a foreign key reference exists between a local table and a reference table. Additionally, local managed tables can be created manually by executing the [citus_add_local_table_to_metadata](reference-functions.md#citus_add_local_table_to_metadata) function on regular local tables. Tables present in metadata are considered managed tables and can be queried from any node; Citus knows to route to the coordinator to obtain data from the local managed table. Such tables are displayed as local in the [citus_tables](reference-metadata.md#distributed-tables-view) view.
++### Type 5: Schema tables ++With [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) introduced in Citus 12.0, distributed schemas are automatically associated with individual colocation groups. Tables created in those schemas are automatically converted to colocated distributed tables without a shard key. Such tables are considered schema tables and are displayed as schema in [citus_tables](reference-metadata.md#distributed-tables-view) view. ## Shards |
cosmos-db | Concepts Sharding Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-sharding-models.md | + + Title: Sharding models - Azure Cosmos DB for PostgreSQL +description: What is sharding, and what sharding models are available in Azure Cosmos DB for PostgreSQL +++++ Last updated : 09/08/2023+++# Sharding models +++Sharding is a technique used in database systems and distributed computing to horizontally partition data across multiple servers or nodes. It involves breaking up a large database or dataset into smaller, more manageable parts called shards. A shard contains a subset of the data, and together shards form the complete dataset. ++Azure Cosmos DB for PostgreSQL offers two types of data sharding, namely row-based and schema-based. Each option comes with its own [sharding tradeoffs](#sharding-tradeoffs), allowing you to choose the approach that best aligns with your application's requirements. ++## Row-based sharding ++The traditional way in which Azure Cosmos DB for PostgreSQL shards tables is the single database, shared schema model, also known as row-based sharding: tenants coexist as rows within the same table. The tenant is determined by defining a [distribution column](./concepts-nodes.md#distribution-column), which allows splitting up a table horizontally. ++Row-based sharding is the most hardware-efficient way of sharding. Tenants are densely packed and distributed among the nodes in the cluster. This approach, however, requires making sure that all tables in the schema have the distribution column and that all queries in the application filter by it. Row-based sharding shines in IoT workloads and for achieving the best margin out of hardware use. ++Benefits: ++* Best performance +* Best tenant density per node ++Drawbacks: ++* Requires schema modifications +* Requires application query modifications +* All tenants must share the same schema ++## Schema-based sharding ++Available with Citus 12.0 in Azure Cosmos DB for PostgreSQL, schema-based sharding is the shared database, separate schema model: the schema becomes the logical shard within the database. Multitenant apps can use a schema per tenant to easily shard along the tenant dimension. Query changes aren't required and the application only needs a small modification to set the proper `search_path` when switching tenants. Schema-based sharding is an ideal solution for microservices, and for ISVs deploying applications that can't undergo the changes required to onboard row-based sharding.
++Benefits: ++* Tenants can have heterogeneous schemas +* No schema modifications required +* No application query modifications required +* Schema-based sharding SQL compatibility is better compared to row-based sharding ++Drawbacks: ++* Fewer tenants per node compared to row-based sharding ++## Sharding tradeoffs ++<br /> ++|| Schema-based sharding | Row-based sharding| +|---|---|---| +|Multi-tenancy model|Separate schema per tenant|Shared tables with tenant ID columns| +|Citus version|12.0+|All versions| +|Extra steps compared to vanilla PostgreSQL|None, only a config change|Use create_distributed_table on each table to distribute & colocate tables by tenant ID| +|Number of tenants|1-10k|1-1M+| +|Data modeling requirement|No foreign keys across distributed schemas|Need to include a tenant ID column (a distribution column, also known as a sharding key) in each table, and in primary keys, foreign keys| +|SQL requirement for single-node queries|Use a single distributed schema per query|Joins and WHERE clauses should include tenant_id column| +|Parallel cross-tenant queries|No|Yes| +|Custom table definitions per tenant|Yes|No| +|Access control|Schema permissions|Schema permissions| +|Data sharing across tenants|Yes, using reference tables (in a separate schema)|Yes, using reference tables| +|Tenant to shard isolation|Every tenant has its own shard group by definition|Can give specific tenant IDs their own shard group via isolate_tenant_to_new_shard| |
cosmos-db | Concepts Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-upgrade.md | Last updated 05/16/2023 The Azure Cosmos DB for PostgreSQL managed service can handle upgrades of both the PostgreSQL server and the Citus extension. All clusters are created with [the latest Citus version](./reference-extensions.md#citus-extension) available for the major PostgreSQL version you select during cluster provisioning. When you select a PostgreSQL version such as PostgreSQL 15 for in-place cluster upgrade, the latest Citus version supported for the selected PostgreSQL version is installed. -If you need to upgrade the Citus version only, you can do so by using an in-place upgrade. For instance, you may want to upgrade Citus 11.0 to Citus 11.3 on your PostgreSQL 14 cluster without upgrading Postgres version. +If you need to upgrade the Citus version only, you can do so by using an in-place upgrade. For instance, you might want to upgrade Citus 11.0 to Citus 11.3 on your PostgreSQL 14 cluster without upgrading the Postgres version. ## Upgrade precautions Also, upgrading a major version of Citus can introduce changes in behavior. It's best to familiarize yourself with new product features and changes to avoid surprises. +Noteworthy Citus 12 changes: +* The default rebalance strategy changed from `by_shard_count` to `by_disk_size`. +* Support for PostgreSQL 13 has been dropped as of this version. + Noteworthy Citus 11 changes: -* Table shards may disappear in your SQL client. Their visibility - is now controlled by +* Table shards might disappear in your SQL client. You can control their visibility + using [citus.show_shards_for_app_name_prefixes](reference-parameters.md#citusshow_shards_for_app_name_prefixes-text). * There are several [deprecated features](https://www.citusdata.com/updates/v11-0/#deprecated-features). |
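Because the default rebalance strategy changed in Citus 12, operations that relied on the old default can request it explicitly. A hedged sketch, assuming the `rebalance_strategy` parameter accepted by the background rebalancer:

```postgresql
-- Keep the pre-12 behavior by naming the strategy explicitly
-- instead of relying on the new by_disk_size default.
SELECT citus_rebalance_start(rebalance_strategy := 'by_shard_count');
```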
cosmos-db | Howto Scale Grow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-grow.md | queries. > [!NOTE] > To take advantage of newly added nodes you must [rebalance distributed table > shards](howto-scale-rebalance.md), which means moving some-> [shards](concepts-distributed-data.md#shards) from existing nodes +> [shards](concepts-nodes.md#shards) from existing nodes > to the new ones. Rebalancing can work in the background, and requires no > downtime. |
cosmos-db | Howto Scale Rebalance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-rebalance.md | Last updated 01/30/2023 [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] To take advantage of newly added nodes, rebalance distributed table-[shards](concepts-distributed-data.md#shards). Rebalancing moves shards from existing nodes to the new ones. Azure Cosmos DB for PostgreSQL offers +[shards](concepts-nodes.md#shards). Rebalancing moves shards from existing nodes to the new ones. Azure Cosmos DB for PostgreSQL offers zero-downtime rebalancing, meaning queries continue without interruption during shard rebalancing. |
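As a brief sketch of the flow after adding nodes, using the background rebalancer functions shown in the related tutorials in this set:

```postgresql
-- Start a zero-downtime rebalance in the background...
SELECT citus_rebalance_start();

-- ...and monitor its progress while queries keep running.
SELECT * FROM citus_rebalance_status();
```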
cosmos-db | Howto Useful Diagnostic Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-useful-diagnostic-queries.md | Last updated 01/30/2023 ## Finding which node contains data for a specific tenant -In the multi-tenant use case, we can determine which worker node contains the +In the multitenant use case, we can determine which worker node contains the rows for a specific tenant. Azure Cosmos DB for PostgreSQL groups the rows of distributed tables into shards, and places each shard on a worker node in the cluster. The output contains the host and port of the worker database. └─────────┴────────────┴─────────────┴───────────┴──────────┴─────────────┘ ``` +## Finding which node hosts a distributed schema ++Distributed schemas are automatically associated with individual colocation groups such that the tables created in those schemas are converted to colocated distributed tables without a shard key. You can find where a distributed schema resides by joining `citus_shards` with `citus_schemas`: ++```postgresql +select schema_name, nodename, nodeport + from citus_shards + join citus_schemas cs + on cs.colocation_id = citus_shards.colocation_id + group by 1,2,3; +``` ++``` + schema_name | nodename | nodeport +-------------+-----------+---------- + a | localhost | 9701 + b | localhost | 9702 + with_data | localhost | 9702 +``` ++You can also query `citus_shards` directly, filtering down to the schema table type, to get a detailed listing for all tables. ++```postgresql +select * from citus_shards where citus_table_type = 'schema'; +``` ++``` + table_name | shardid | shard_name | citus_table_type | colocation_id | nodename | nodeport | shard_size | schema_name | colocation_id | schema_size | schema_owner +----------------+---------+-----------------------+------------------+---------------+-----------+----------+------------+-------------+---------------+-------------+-------------- + a.cities | 102080 | a.cities_102080 | schema | 4 | localhost | 9701 | 8192 | a | 4 | 128 kB | citus + a.map_tags | 102145 | a.map_tags_102145 | schema | 4 | localhost | 9701 | 32768 | a | 4 | 128 kB | citus + a.measurement | 102047 | a.measurement_102047 | schema | 4 | localhost | 9701 | 0 | a | 4 | 128 kB | citus + a.my_table | 102179 | a.my_table_102179 | schema | 4 | localhost | 9701 | 16384 | a | 4 | 128 kB | citus + a.people | 102013 | a.people_102013 | schema | 4 | localhost | 9701 | 32768 | a | 4 | 128 kB | citus + a.test | 102008 | a.test_102008 | schema | 4 | localhost | 9701 | 8192 | a | 4 | 128 kB | citus + a.widgets | 102146 | a.widgets_102146 | schema | 4 | localhost | 9701 | 32768 | a | 4 | 128 kB | citus + b.test | 102009 | b.test_102009 | schema | 5 | localhost | 9702 | 8192 | b | 5 | 32 kB | citus + b.test_col | 102012 | b.test_col_102012 | schema | 5 | localhost | 9702 | 24576 | b | 5 | 32 kB | citus + with_data.test | 102180 | with_data.test_102180 | schema | 11 | localhost | 9702 | 647168 | with_data | 11 | 632 kB | citus +``` + ## Finding the distribution column for a table Each distributed table has a "distribution column." (For more information, see [Distributed Data Modeling](howto-choose-distribution-column.md).) It can be important to know which column it is. For instance, when joining or filtering-tables, you may see error messages with hints like, "add a filter to the +tables, you might see error messages with hints like, "add a filter to the distribution column." The `pg_dist_*` tables on the coordinator node contain diverse metadata about |
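Building on the distribution-column discussion above, here's a hedged sketch for looking up a table's distribution column directly, using the `column_to_column_name` helper covered in the reference pages (the `events` table name is hypothetical):

```postgresql
-- Translate the partkey metadata into a readable column name.
SELECT column_to_column_name(logicalrelid, partkey) AS distribution_column
FROM pg_dist_partition
WHERE logicalrelid = 'events'::regclass;
```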
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/introduction.md | reviewed the following articles: > - Connect and query with your [app stack](quickstart-app-stacks-overview.yml). > - See how the [Azure Cosmos DB for PostgreSQL API](reference-overview.md) extends PostgreSQL, and try [useful diagnostic queries](howto-useful-diagnostic-queries.md). > - Pick the best [cluster size](howto-scale-initial.md) for your workload.+> - Learn how to use Azure Cosmos DB for PostgreSQL as the [storage backend for multiple microservices](tutorial-design-database-microservices.md). > - [Monitor](howto-monitoring.md) cluster performance. > - Ingest data efficiently with [Azure Stream Analytics](howto-ingest-azure-stream-analytics.md) > and [Azure Data Factory](howto-ingest-azure-data-factory.md). |
cosmos-db | Quickstart Build Scalable Apps Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-concepts.md | quick overview of the terms and concepts involved. ## Architectural overview -Azure Cosmos DB for PostgreSQL gives you the power to distribute tables across multiple +Azure Cosmos DB for PostgreSQL gives you the power to distribute tables and/or schemas across multiple machines in a cluster and transparently query them the same way you query plain PostgreSQL: In the Azure Cosmos DB for PostgreSQL architecture, there are multiple kinds of * The **coordinator** node stores distributed table metadata and is responsible for distributed planning.-* By contrast, the **worker** nodes store the actual data and do the computation. +* By contrast, the **worker** nodes store the actual data and metadata, and do the computation. * Both the coordinator and workers are plain PostgreSQL databases, with the `citus` extension loaded. run a command called `create_distributed_table()`. Once you run this command, Azure Cosmos DB for PostgreSQL transparently creates shards for the table across worker nodes. In the diagram, shards are represented as blue boxes. +To distribute a normal PostgreSQL schema, you run the `citus_schema_distribute()` command. Once you run this command, Azure Cosmos DB for PostgreSQL transparently turns tables in such schemas into single-shard colocated tables that can be moved as a unit between the nodes of the cluster. + > [!NOTE] > > On a cluster with no worker nodes, shards of distributed tables are on the coordinator node. Colocation helps optimize JOINs across these tables. If you join the two tables on `site_id`, Azure Cosmos DB for PostgreSQL can perform the join locally on worker nodes without shuffling data between nodes. +Tables within a distributed schema are always colocated with each other. + ## Next steps > [!div class="nextstepaction"] |
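To make the colocation behavior described in this entry concrete, here's a minimal sketch with two hypothetical tables distributed on the same column, so joins on `site_id` stay local to each worker:

```postgresql
CREATE TABLE devices (site_id bigint, device_id bigint);
CREATE TABLE events  (site_id bigint, device_id bigint, payload jsonb);

SELECT create_distributed_table('devices', 'site_id');
SELECT create_distributed_table('events', 'site_id', colocate_with => 'devices');

-- Because both tables are sharded and colocated on site_id, this join
-- executes locally on each worker without cross-node shuffling.
SELECT d.site_id, count(*)
FROM devices d
JOIN events e USING (site_id)
GROUP BY d.site_id;
```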
cosmos-db | Quickstart Build Scalable Apps Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-overview.md | Last updated 01/30/2023 There are three steps involved in building scalable apps with Azure Cosmos DB for PostgreSQL: 1. Classify your application workload. There are use cases where Azure Cosmos DB for PostgreSQL- shines: multi-tenant SaaS, real-time operational analytics, and high + shines: multitenant SaaS, microservices, real-time operational analytics, and high throughput OLTP. Determine whether your app falls into one of these categories.-2. Based on the workload, identify the optimal shard key for the distributed +2. Based on the workload, use [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) or identify the optimal shard key for the distributed tables. Classify your tables as reference, distributed, or local. -3. Update the database schema and application queries to make them go fast +3. When using [row-based sharding](concepts-sharding-models.md#row-based-sharding), update the database schema and application queries to make them go fast across nodes. **Next steps** |
cosmos-db | Reference Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-functions.md | distributed functionality to Azure Cosmos DB for PostgreSQL. > [!NOTE] > > clusters running older versions of the Citus Engine may not-> offer all the functions listed below. +> offer all the functions listed on this page. ## Table and Shard DDL +### citus\_schema\_distribute ++Converts existing regular schemas into distributed schemas. Distributed schemas are automatically associated with individual colocation groups. Tables created in those schemas are converted to colocated distributed tables without a shard key. The process of distributing the schema automatically assigns and moves it to an existing node in the cluster. ++#### Arguments ++**schemaname:** Name of the schema that needs to be distributed. ++#### Return value ++N/A ++#### Example ++```postgresql +SELECT citus_schema_distribute('tenant_a'); +SELECT citus_schema_distribute('tenant_b'); +SELECT citus_schema_distribute('tenant_c'); +``` ++For more examples, see how-to [design for microservices](tutorial-design-database-microservices.md). ++### citus\_schema\_undistribute ++Converts an existing distributed schema back into a regular schema. The process results in the tables and data being moved from the current node back to the coordinator node in the cluster. ++#### Arguments ++**schemaname:** Name of the schema that needs to be undistributed. ++#### Return value ++N/A ++#### Example ++```postgresql +SELECT citus_schema_undistribute('tenant_a'); +SELECT citus_schema_undistribute('tenant_b'); +SELECT citus_schema_undistribute('tenant_c'); +``` ++For more examples, see how-to [design for microservices](tutorial-design-database-microservices.md). + ### create\_distributed\_table The create\_distributed\_table() function is used to define a distributed table or colocation group, use the [alter_distributed_table](#alter_distributed_table) Possible values for `shard_count` are between 1 and 64000. For guidance on choosing the optimal value, see [Shard Count](howto-shard-count.md). -#### Return Value +#### Return value N/A distribution. **table_name:** Name of the distributed table whose local counterpart on the coordinator node should be truncated. -#### Return Value +#### Return value N/A worker node. **table\_name:** Name of the small dimension or reference table that needs to be distributed. -#### Return Value +#### Return value N/A defined as a reference table SELECT create_reference_table('nation'); ``` +### citus\_add\_local\_table\_to\_metadata ++Adds a local Postgres table into Citus metadata. A major use case for this function is to make local tables on the coordinator accessible from any node in the cluster. The data associated with the local table stays on the coordinator; only its schema and metadata are sent to the workers. ++Adding local tables to the metadata comes at a slight cost. When you add the table, Citus must track it in the [partition table](reference-metadata.md#partition-table). Local tables that are added to metadata inherit the same limitations as reference tables. ++When you undistribute the table, Citus removes the resulting local tables from metadata, which eliminates such limitations on those tables. ++#### Arguments ++**table\_name:** Name of the table on the coordinator to be added to Citus metadata.
++**cascade\_via\_foreign\_keys**: (Optional) When this argument is set to "true," citus_add_local_table_to_metadata adds other tables that are in a foreign key relationship with the given table into metadata automatically. Use caution with this parameter, because it can potentially affect many tables. ++#### Return value ++N/A ++#### Example ++This example informs the database that the nation table should be defined as a coordinator-local table, accessible from any node: ++```postgresql +SELECT citus_add_local_table_to_metadata('nation'); +``` + ### alter_distributed_table The alter_distributed_table() function can be used to change the distribution tables that were previously colocated with the table, and the colocation will be preserved. If it is "false", the current colocation of this table will be broken. -#### Return Value +#### Return value N/A This function doesn't move any data around physically. If you want to break the colocation of a table, you should specify `colocate_with => 'none'`. -#### Return Value +#### Return value N/A undistribute_table also undistributes all tables that are related to table_name through foreign keys. Use caution with this parameter, because it can potentially affect many tables. -#### Return Value +#### Return value N/A a distributed table (or, more generally, colocation group), be sure to name that table using the `colocate_with` parameter. Then each invocation of the function will run on the worker node containing relevant shards. -#### Return Value +#### Return value N/A overridden with these GUCs: **table_name:** Name of the columnar table. **chunk_row_count:** (Optional) The maximum number of rows per chunk for-newly inserted data. Existing chunks of data won't be changed and may have +newly inserted data. Existing chunks of data won't be changed and might have more rows than this maximum value. The default value is 10000. **stripe_row_count:** (Optional) The maximum number of rows per stripe for-newly inserted data. Existing stripes of data won't be changed and may have +newly inserted data. Existing stripes of data won't be changed and might have more rows than this maximum value. The default value is 150000. **compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type The alter_table_set_access_method() function changes access method of a table **access_method:** Name of the new access method. -#### Return Value +#### Return value N/A will contain the point end_at, and no later partitions will be created. **start_from:** (timestamptz, optional) pick the first partition so that it contains the point start_from. The default value is `now()`. -#### Return Value +#### Return value True if it needed to create new partitions, false if they all existed already. be partitioned on one column, of type date, timestamp, or timestamptz. **older_than:** (timestamptz) drop partitions whose upper range is less than or equal to older_than. -#### Return Value +#### Return value N/A or equal to older_than. **new_access_method:** (name) either 'heap' for row-based storage, or 'columnar' for columnar storage. -#### Return Value +#### Return value N/A doesn't work for the append distribution. **distribution\_value:** The value of the distribution column. -#### Return Value +#### Return value The shard ID Azure Cosmos DB for PostgreSQL associates with the distribution column value for the given table. column](howto-choose-distribution-column.md). **column\_var\_text:** The value of `partkey` in the `pg_dist_partition` table.
-#### Return Value +#### Return value The name of `table_name`'s distribution column. visibility map and free space map for the shards. **logicalrelid:** the name of a distributed table. -#### Return Value +#### Return value Size in bytes as a bigint. excluding indexes (but including TOAST, free space map, and visibility map). **logicalrelid:** the name of a distributed table. -#### Return Value +#### Return value Size in bytes as a bigint. distributed table, including all indexes and TOAST data. **logicalrelid:** the name of a distributed table. -#### Return Value +#### Return value Size in bytes as a bigint. all stats, call both functions. N/A -#### Return Value +#### Return value None host names and port numbers. N/A -#### Return Value +#### Return value List of tuples where each tuple contains the following information: placement is present (\"target\" node). **target\_node\_port:** The port on the target worker node on which the database server is listening. -#### Return Value +#### Return value N/A command. The possible values are: > - `block_writes`: Use COPY (blocking writes) for tables lacking > primary key or replica identity. -#### Return Value +#### Return value N/A distributing to equalize the cost across workers is the same as equalizing the number of shards on each. The constant cost strategy is called \"by\_shard\_count\" and is the default rebalancing strategy. -The default strategy is appropriate under these circumstances: +The "by\_shard\_count" strategy is appropriate under these circumstances: * The shards are roughly the same size * The shards get roughly the same amount of traffic * Worker nodes are all the same size/type * Shards haven't been pinned to particular workers -If any of these assumptions don't hold, then the default rebalancing -can result in a bad plan. In this case you may customize the strategy, -using the `rebalance_strategy` parameter. +If any of these assumptions don't hold, then rebalancing `by_shard_count` can result in a bad plan. ++The default rebalancing strategy is `by_disk_size`. You can always customize the strategy, using the `rebalance_strategy` parameter. It's advisable to call [get_rebalance_table_shards_plan](#get_rebalance_table_shards_plan) before other shards. If this argument is omitted, the function chooses the default strategy, as indicated in the table. -#### Return Value +#### Return value N/A The same arguments as rebalance\_table\_shards: relation, threshold, max\_shard\_moves, excluded\_shard\_list, and drain\_only. See documentation of that function for the arguments' meaning. -#### Return Value +#### Return value Tuples containing these columns: executed by `rebalance_table_shards()`. N/A -#### Return Value +#### Return value Tuples containing these columns: precisely the cumulative shard cost should be balanced between nodes minimum value allowed for the threshold argument of rebalance\_table\_shards(). Its default value is 0 -#### Return Value +#### Return value N/A when rebalancing shards. **name:** the name of the strategy in pg\_dist\_rebalance\_strategy -#### Return Value +#### Return value N/A SELECT * from citus_remote_connection_stats(); ### isolate\_tenant\_to\_new\_shard This function creates a new shard to hold rows with a specific single value in-the distribution column. It's especially handy for the multi-tenant +the distribution column. It's especially handy for the multitenant use case, where a large tenant can be placed alone on its own shard and ultimately its own physical node.
assigned to the new shard. from all tables in the current table's [colocation group](concepts-colocation.md). -#### Return Value +#### Return value **shard\_id:** The function returns the unique ID assigned to the newly created shard. |
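A usage sketch for the function documented above, assuming a hypothetical `orders` table distributed by a tenant identifier; `'CASCADE'` extends the isolation to all tables in the colocation group:

```postgresql
-- Give tenant 135 a shard of its own, cascading to colocated tables.
SELECT isolate_tenant_to_new_shard('orders', 135, 'CASCADE');
```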
cosmos-db | Reference Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-metadata.md | distribution_argument_index | colocationid | ``` +### Distributed schemas view ++Citus 12.0 introduced the concept of [schema-based sharding](concepts-sharding-models.md#schema-based-sharding) and with it the `citus_schemas` view, which shows which schemas have been distributed in the system. The view only lists distributed schemas; local schemas aren't displayed. ++| Name | Type | Description | +|--|--|--| +| schema_name | regnamespace | Name of the distributed schema | +| colocation_id | integer | Colocation ID of the distributed schema | +| schema_size | text | Human-readable size summary of all objects within the schema | +| schema_owner | name | Role that owns the schema | ++Here's an example: ++``` + schema_name | colocation_id | schema_size | schema_owner +-------------+---------------+-------------+-------------- + userservice | 1 | 0 bytes | userservice + timeservice | 2 | 0 bytes | timeservice + pingservice | 3 | 632 kB | pingservice +``` + ### Distributed tables view The `citus_tables` view shows a summary of all tables managed by Azure Cosmos the same distribution column values will be placed on the same worker nodes. Colocation enables join optimizations, certain distributed rollups, and foreign key support. Shard colocation is inferred when the shard counts, replication factors, and partition column types all match between two tables; however, a-custom colocation group may be specified when creating a distributed table, if +custom colocation group can be specified when creating a distributed table, if so desired. | Name | Type | Description | can use to determine where to move shards. | default_strategy | boolean | Whether rebalance_table_shards should choose this strategy by default.
Use citus_set_default_rebalance_strategy to update this column | | shard_cost_function | regproc | Identifier for a cost function, which must take a shardid as bigint, and return its notion of a cost, as type real | | node_capacity_function | regproc | Identifier for a capacity function, which must take a nodeid as int, and return its notion of node capacity as type real |-| shard_allowed_on_node_function | regproc | Identifier for a function that given shardid bigint, and nodeidarg int, returns boolean for whether Azure Cosmos DB for PostgreSQL may store the shard on the node | +| shard_allowed_on_node_function | regproc | Identifier for a function that given shardid bigint, and nodeidarg int, returns boolean for whether Azure Cosmos DB for PostgreSQL can store the shard on the node | | default_threshold | float4 | Threshold for deeming a node too full or too empty, which determines when the rebalance_table_shards should try to move shards | | minimum_threshold | float4 | A safeguard to prevent the threshold argument of rebalance_table_shards() from being set too low | SELECT * FROM pg_dist_rebalance_strategy; ``` -[ RECORD 1 ]------------------+--------------------------------- Name | by_shard_count-default_strategy | true +default_strategy | false shard_cost_function | citus_shard_cost_1 node_capacity_function | citus_node_capacity_1 shard_allowed_on_node_function | citus_shard_allowed_on_node_true default_threshold | 0 minimum_threshold | 0 -[ RECORD 2 ]------------------+--------------------------------- Name | by_disk_size-default_strategy | false +default_strategy | true shard_cost_function | citus_shard_cost_by_disk_size node_capacity_function | citus_node_capacity_1 shard_allowed_on_node_function | citus_shard_allowed_on_node_true default_threshold | 0.1 minimum_threshold | 0.01 ``` -The default strategy, `by_shard_count`, assigns every shard the same -cost. Its effect is to equalize the shard count across nodes. The other -predefined strategy, `by_disk_size`, assigns a cost to each shard -matching its disk size in bytes plus that of the shards that are -colocated with it. The disk size is calculated using -`pg_total_relation_size`, so it includes indices. This strategy attempts -to achieve the same disk space on every node. Note the threshold of 0.1--it prevents unnecessary shard movement caused by insignificant -differences in disk space. +The strategy `by_shard_count` assigns every shard the same cost. Its effect is to equalize the shard count across nodes. The default strategy, `by_disk_size`, assigns a cost to each shard matching its disk size in bytes plus that of the shards that are colocated with it. The disk size is calculated using `pg_total_relation_size`, so it includes indices. This strategy attempts to achieve the same disk space on every node. Note the threshold of `0.1`; it prevents unnecessary shard movement caused by insignificant differences in disk space. #### Creating custom rebalancer strategies with) the [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) view in PostgreSQL, which tracks statistics about query speed. -This view can trace queries to originating tenants in a multi-tenant +This view can trace queries to originating tenants in a multitenant application, which helps when deciding when to do tenant isolation. | Name | Type | Description | |
cosmos-db | Reference Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-overview.md | configuration options for: ||-| | [alter_distributed_table](reference-functions.md#alter_distributed_table) | Change the distribution column, shard count or colocation properties of a distributed table | | [citus_copy_shard_placement](reference-functions.md#master_copy_shard_placement) | Repair an inactive shard placement using data from a healthy placement |+| [citus_schema_distribute](reference-functions.md#citus_schema_distribute) | Turn a PostgreSQL schema into a distributed schema | +| [citus_schema_undistribute](reference-functions.md#citus_schema_undistribute) | Undo the action of citus_schema_distribute | | [create_distributed_table](reference-functions.md#create_distributed_table) | Turn a PostgreSQL table into a distributed (sharded) table | | [create_reference_table](reference-functions.md#create_reference_table) | Maintain full copies of a table in sync across all nodes |+| [citus_add_local_table_to_metadata](reference-functions.md#citus_add_local_table_to_metadata) | Add a local table to metadata to enable querying it from any node | | [isolate_tenant_to_new_shard](reference-functions.md#isolate_tenant_to_new_shard) | Create a new shard to hold rows with a specific single value in the distribution column | | [truncate_local_data_after_distributing_table](reference-functions.md#truncate_local_data_after_distributing_table) | Truncate all local rows after distributing a table | | [undistribute_table](reference-functions.md#undistribute_table) | Undo the action of create_distributed_table or create_reference_table | |
cosmos-db | Reference Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-parameters.md | all worker nodes, or just for the coordinator node. > [!NOTE] >-> Clusters running older versions of [the Citus extension](./reference-versions.md#citus-and-other-extension-versions) may not +> Clusters running older versions of [the Citus extension](./reference-versions.md#citus-and-other-extension-versions) might not > offer all the parameters listed below. ### General configuration Azure Cosmos DB for PostgreSQL thus validates the version of the code and that of the extension match, and errors out if they don\'t. This value defaults to true, and is effective on the coordinator. In-rare cases, complex upgrade processes may require setting this parameter +rare cases, complex upgrade processes might require setting this parameter to false, thus disabling the check. #### citus.log\_distributed\_deadlock\_detection (boolean) Allow new [local tables](concepts-nodes.md#type-3-local-tables) to be accessed by queries on worker nodes. Adds all newly created tables to Citus metadata when enabled. The default value is 'false'. +#### citus.rebalancer\_by\_disk\_size\_base\_cost (integer) ++Using the by_disk_size rebalance strategy, each shard group gets this cost in bytes added to its actual disk size. This value is used to avoid creating a bad balance when there's little data in some of the shards. The assumption is that even empty shards have some cost, because of parallelism and because empty shard groups are likely to grow in the future. ++The default value is `100MB`. + ### Query Statistics #### citus.stat\_statements\_purge\_interval (integer) runtime. #### citus.stat_statements_max (integer) The maximum number of rows to store in `citus_stat_statements`. Defaults to-50000, and may be changed to any value in the range 1000 - 10000000. Each row requires 140 bytes of storage, so setting `stat_statements_max` to its +50000, and can be changed to any value in the range 1000 - 10000000. Each row requires 140 bytes of storage, so setting `stat_statements_max` to its maximum value of 10M would consume 1.4 GB of memory. Changing this GUC doesn't take effect until PostgreSQL is restarted. Changing this GUC doesn't take effect until PostgreSQL is restarted. #### citus.stat_statements_track (enum) Recording statistics for `citus_stat_statements` requires extra CPU resources.-When the database is experiencing load, the administrator may wish to disable -statement tracking. The `citus.stat_statements_track` GUC can turn tracking on -and off. +When the database is experiencing load, the administrator can disable +statement tracking by setting `citus.stat_statements_track` to `none`. * **all:** (default) Track all statements. * **none:** Disable tracking. +#### citus.stat\_tenants\_untracked\_sample\_rate ++Sampling rate for new tenants in `citus_stat_tenants`. The rate can range between `0.0` and `1.0`. Default is `1.0`, meaning 100% of untracked tenant queries are sampled. Setting it to a lower value means that already tracked tenants have 100% of queries sampled, but tenants that are currently untracked are sampled only at the provided rate. + ### Data Loading #### citus.multi\_shard\_commit\_protocol (enum) case by choosing between the following commit protocols: should be increased on all the workers, typically to the same value as max\_connections. - **1pc:** The transactions in which COPY is performed on the shard- placements is committed in a single round.
Data may be lost if a + placements is committed in a single round. Data might be lost if a commit fails after COPY succeeds on all placements (rare). #### citus.shard\_replication\_factor (integer) case by choosing between the following commit protocols: Sets the replication factor for shards that is, the number of nodes on which shards are placed, and defaults to 1. This parameter can be set at run-time and is effective on the coordinator. The ideal value for this parameter depends-on the size of the cluster and rate of node failure. For example, you may want -to increase this replication factor if you run large clusters and observe node +on the size of the cluster and rate of node failure. For example, you can increase this replication factor if you run large clusters and observe node failures on a more frequent basis. ### Planner Configuration SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10 #### citus.limit\_clause\_row\_fetch\_count (integer) Sets the number of rows to fetch per task for limit clause optimization.-In some cases, select queries with limit clauses may need to fetch all +In some cases, select queries with limit clauses might need to fetch all rows from each task to generate results. In those cases, and where an approximation would produce meaningful results, this configuration value sets the number of rows to fetch from each shard. Limit approximations and is effective on the coordinator. > This GUC is applicable only when > [shard_replication_factor](reference-parameters.md#citusshard_replication_factor-integer) > is greater than one, or for queries against-> [reference_tables](concepts-distributed-data.md#type-2-reference-tables). +> [reference_tables](concepts-nodes.md#type-2-reference-tables). Sets the policy to use when assigning tasks to workers. The coordinator assigns tasks to workers based on shard locations. This configuration be used. This parameter can be set at run-time and is effective on the coordinator. +#### citus.enable\_non\_colocated\_router\_query\_pushdown (boolean) ++Enables router planner for the queries that reference non-colocated distributed tables. ++The router planner is only enabled for queries that reference colocated distributed tables because otherwise shards might not be on the same node. Enabling this flag allows optimization for queries that reference such tables, but the query might not work after rebalancing the shards or altering the shard count of those tables. ++The default is `off`. + ### Intermediate Data Transfer #### citus.max\_intermediate\_result\_size (integer) subqueries. The default is 1 GB, and a value of -1 means no limit. Queries exceeding the limit are canceled and produce an error message. +### DDL ++#### citus.enable\_schema\_based\_sharding ++With the parameter set to `ON`, all created schemas are distributed by default. Distributed schemas are automatically associated with individual colocation groups such that the tables created in those schemas are converted to colocated distributed tables without a shard key. This setting can be modified for individual sessions. ++For an example of using this GUC, see [how to design for microservice](tutorial-design-database-microservices.md). + ### Executor Configuration #### General This parameter can be set at run-time and is effective on the coordinator. ##### citus.multi\_task\_query\_log\_level (enum) {#multi_task_logging} Sets a log-level for any query that generates more than one task (that is,-which hits more than one shard). 
Logging is useful during a multi-tenant +which hits more than one shard). Logging is useful during a multitenant application migration, as you can choose to error or warn for such queries, to find them and add a tenant\_id filter to them. This parameter can be set at runtime and is effective on the coordinator. The default value for this The supported values for this enum are: - **warning:** Logs statement at WARNING severity level. - **error:** Logs statement at ERROR severity level. -It may be useful to use `error` during development testing, +It could be useful to use `error` during development testing, and a lower log-level like `log` during actual production deployment. Choosing `log` will cause multi-task queries to appear in the database logs with the query itself shown after \"STATEMENT.\" The supported values are: * **immediate:** raises error in transactions where parallel operations like create\_distributed\_table happen before an attempted CREATE TYPE. * **automatic:** defer creation of types when sharing a transaction with a- parallel operation on distributed tables. There may be some inconsistency + parallel operation on distributed tables. There might be some inconsistency between which database objects exist on different nodes. * **deferred:** return to pre-11.0 behavior, which is like automatic but with other subtle corner cases. We recommend the automatic setting over deferred, amounts of data. Examples are when many rows are requested, the rows have many columns, or they use wide types such as `hll` from the postgresql-hll extension. -The default value is true for Postgres versions 14 and higher. For Postgres -versions 13 and lower the default is false, which means all results are encoded -and transferred in text format. +The default value is `true`. When set to `false`, all results are encoded and transferred in text format. ##### citus.max_adaptive_executor_pool_size (integer) hence update their status regularly. The task tracker executor on the coordinator synchronously assigns tasks in batches to the daemon on the workers. This parameter sets the maximum number of tasks to assign in a single batch. Choosing a larger batch size allows for-faster task assignment. However, if the number of workers is large, then it may +faster task assignment. However, if the number of workers is large, then it might take longer for all workers to get tasks. This parameter can be set at runtime and is effective on the coordinator. distributed query. In most cases, the explain output is similar across tasks. Occasionally, some of the tasks are planned differently or have much higher execution times. In those cases, it can be useful to enable this parameter, after which the EXPLAIN output includes all tasks. Explaining-all tasks may cause the EXPLAIN to take longer. +all tasks might cause the EXPLAIN to take longer. 
##### citus.explain_analyze_sort_method (enum) The following [managed PgBouncer](./concepts-connection-pool.md) parameters can * [min_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MIN-WAL-SIZE) - Sets the minimum size to shrink the WAL to * [operator_precedence_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-OPERATOR-PRECEDENCE-WARNING) - Emits a warning for constructs that changed meaning since PostgreSQL 9.4 * [parallel_setup_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-SETUP-COST) - Sets the planner's estimate of the cost of starting up worker processes for parallel query-* [parallel_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-TUPLE-COST) - Sets the planner's estimate of the cost of passing each tuple (row) from worker to master backend +* [parallel_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-TUPLE-COST) - Sets the planner's estimate of the cost of passing each tuple (row) from worker to main backend * [pg_stat_statements.save](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Saves pg_stat_statements statistics across server shutdowns * [pg_stat_statements.track](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects which statements are tracked by pg_stat_statements * [pg_stat_statements.track_utility](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects whether utility commands are tracked by pg_stat_statements |
cosmos-db | Tutorial Design Database Microservices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-design-database-microservices.md | + + Title: 'Tutorial: Design for microservices - Azure Cosmos DB for PostgreSQL' +description: This tutorial shows how to design for microservices with Azure Cosmos DB for PostgreSQL. +++++ Last updated : 09/30/2023+++# Microservices ++In this tutorial, you use Azure Cosmos DB for PostgreSQL as the storage backend for multiple microservices, demonstrating a sample setup and basic operation of such a cluster. Learn how to: ++> [!div class="checklist"] +> * Create a cluster +> * Create roles for your microservices +> * Use psql utility to create roles and distributed schemas +> * Create tables for the sample services +> * Configure services +> * Run services +> * Explore the database +++## Prerequisites +++## Create roles for your microservices ++Distributed schemas are relocatable within an Azure Cosmos DB for PostgreSQL cluster. The system can rebalance them as a whole unit across the available nodes, allowing resources to be shared efficiently without manual allocation. ++By design, microservices own their storage layer; we don't make any assumptions on the type of tables and data that they create and store. We provide a schema for every service and assume that they use a distinct ROLE to connect to the database. When a user connects, their role name is put at the beginning of the search_path, so if the role matches the schema name, you don't need any application changes to set the correct search_path. ++We use three services in our example: ++* user +* time +* ping ++Follow the steps describing [how to create user roles](howto-create-users.md#how-to-create-user-roles) and create the following roles for each service: ++* `userservice` +* `timeservice` +* `pingservice` ++## Use psql utility to create distributed schemas ++Once connected to Azure Cosmos DB for PostgreSQL using psql, you can complete some basic tasks. ++There are two ways in which a schema can be distributed in Azure Cosmos DB for PostgreSQL: ++Manually, by calling the `citus_schema_distribute(schema_name)` function: ++```postgresql +CREATE SCHEMA AUTHORIZATION userservice; +CREATE SCHEMA AUTHORIZATION timeservice; +CREATE SCHEMA AUTHORIZATION pingservice; ++SELECT citus_schema_distribute('userservice'); +SELECT citus_schema_distribute('timeservice'); +SELECT citus_schema_distribute('pingservice'); +``` ++This method also allows you to convert existing regular schemas into distributed schemas. ++> [!NOTE] +> +> You can only distribute schemas that do not contain distributed and reference tables. ++An alternative approach is to enable the `citus.enable_schema_based_sharding` configuration variable: ++```postgresql +SET citus.enable_schema_based_sharding TO ON; ++CREATE SCHEMA AUTHORIZATION userservice; +CREATE SCHEMA AUTHORIZATION timeservice; +CREATE SCHEMA AUTHORIZATION pingservice; +``` ++The variable can be changed for the current session or permanently in coordinator node parameters. With the parameter set to ON, all created schemas are distributed by default.
++You can list the currently distributed schemas by running: ++```postgresql +select * from citus_schemas; +``` ++``` + schema_name | colocation_id | schema_size | schema_owner +-------------+---------------+-------------+-------------- + userservice | 5 | 0 bytes | userservice + timeservice | 6 | 0 bytes | timeservice + pingservice | 7 | 0 bytes | pingservice +(3 rows) +``` ++## Create tables for the sample services ++You now need to connect to Azure Cosmos DB for PostgreSQL for every microservice. You can use the \c command to swap the user within an existing psql instance. ++``` +\c citus userservice +``` ++```postgresql +CREATE TABLE users ( + id SERIAL PRIMARY KEY, + name VARCHAR(255) NOT NULL, + email VARCHAR(255) NOT NULL +); +``` ++``` +\c citus timeservice +``` ++```postgresql +CREATE TABLE query_details ( + id SERIAL PRIMARY KEY, + ip_address INET NOT NULL, + query_time TIMESTAMP NOT NULL +); +``` ++``` +\c citus pingservice +``` ++```postgresql +CREATE TABLE ping_results ( + id SERIAL PRIMARY KEY, + host VARCHAR(255) NOT NULL, + result TEXT NOT NULL +); +``` ++## Configure services ++In this tutorial, we use a simple set of services. You can obtain them by cloning this public repository: ++```bash +git clone https://github.com/citusdata/citus-example-microservices.git +``` ++``` +$ tree +. +├── LICENSE +├── README.md +├── ping +│   ├── app.py +│   ├── ping.sql +│   └── requirements.txt +├── time +│   ├── app.py +│   ├── requirements.txt +│   └── time.sql +└── user + ├── app.py + ├── requirements.txt + └── user.sql +``` ++Before you run the services, however, edit the `user/app.py`, `ping/app.py`, and `time/app.py` files, providing the [connection configuration](https://www.psycopg.org/docs/module.html#psycopg2.connect) for your Azure Cosmos DB for PostgreSQL cluster: ++```python +# Database configuration +db_config = { + 'host': 'c-EXAMPLE.EXAMPLE.postgres.cosmos.azure.com', + 'database': 'citus', + 'password': 'SECRET', + 'user': 'pingservice', + 'port': 5432 +} +``` ++After making the changes, save all modified files and move on to the next step of running the services. ++## Run services ++Change into each app directory and run it in its own Python environment. ++```bash +cd user +pipenv install +pipenv shell +python app.py +``` ++Repeat the commands for the time and ping services, after which you can use the API.
++Create some users: ++```bash +curl -X POST -H "Content-Type: application/json" -d '[ + {"name": "John Doe", "email": "john@example.com"}, + {"name": "Jane Smith", "email": "jane@example.com"}, + {"name": "Mike Johnson", "email": "mike@example.com"}, + {"name": "Emily Davis", "email": "emily@example.com"}, + {"name": "David Wilson", "email": "david@example.com"}, + {"name": "Sarah Thompson", "email": "sarah@example.com"}, + {"name": "Alex Miller", "email": "alex@example.com"}, + {"name": "Olivia Anderson", "email": "olivia@example.com"}, + {"name": "Daniel Martin", "email": "daniel@example.com"}, + {"name": "Sophia White", "email": "sophia@example.com"} +]' http://localhost:5000/users +``` ++List the created users: ++```bash +curl http://localhost:5000/users +``` ++Get current time: ++```bash +curl http://localhost:5001/time # port 5001 assumed: the user service uses 5000 and the ping service uses 5002 +``` ++Run the ping against example.com: ++```bash +curl -X POST -H "Content-Type: application/json" -d '{"host": "example.com"}' http://localhost:5002/ping +``` ++## Explore the database ++Now that you've called some API functions, data has been stored, and you can check if `citus_schemas` reflects what is expected: ++```postgresql +select * from citus_schemas; +``` ++``` + schema_name | colocation_id | schema_size | schema_owner +-------------+---------------+-------------+-------------- + userservice | 1 | 112 kB | userservice + timeservice | 2 | 32 kB | timeservice + pingservice | 3 | 32 kB | pingservice +(3 rows) +``` ++When you created the schemas, you didn't tell Azure Cosmos DB for PostgreSQL on which machines to create the schemas. It was done automatically. You can see where each schema resides with the following query: ++```postgresql + select nodename,nodeport, table_name, pg_size_pretty(sum(shard_size)) + from citus_shards +group by nodename,nodeport, table_name; +``` ++``` + nodename | nodeport | table_name | pg_size_pretty +-----------+----------+---------------------------+---------------- + localhost | 9701 | timeservice.query_details | 32 kB + localhost | 9702 | userservice.users | 112 kB + localhost | 9702 | pingservice.ping_results | 32 kB +``` ++For brevity of the example output on this page, instead of using the `nodename` values as displayed in Azure Cosmos DB for PostgreSQL, we replace them with localhost. Assume that `localhost:9701` is worker one and `localhost:9702` is worker two. Node names on the managed service are longer and contain randomized elements. +++You can see that the time service landed on node `localhost:9701` while the user and ping services share space on the second worker `localhost:9702`. The example apps are simplistic, and the data sizes here are negligible, but let's assume that you're annoyed by the uneven storage space utilization between the nodes. It would make more sense to have the two smaller time and ping services reside on one machine while the large user service resides alone.
++You can easily rebalance the cluster by disk size: ++```postgresql +select citus_rebalance_start(); +``` ++``` +NOTICE: Scheduled 1 moves as job 1 +DETAIL: Rebalance scheduled as background job +HINT: To monitor progress, run: SELECT * FROM citus_rebalance_status(); + citus_rebalance_start +----------------------- + 1 +(1 row) +``` ++When done, you can check how our new layout looks: ++```postgresql + select nodename,nodeport, table_name, pg_size_pretty(sum(shard_size)) + from citus_shards +group by nodename,nodeport, table_name; +``` ++``` + nodename | nodeport | table_name | pg_size_pretty +-----------+----------+---------------------------+---------------- + localhost | 9701 | timeservice.query_details | 32 kB + localhost | 9701 | pingservice.ping_results | 32 kB + localhost | 9702 | userservice.users | 112 kB +(3 rows) +``` ++As expected, the schemas have been moved and we have a more balanced cluster. This operation has been transparent for the applications. You don't even need to restart them; they continue serving queries. ++## Next steps ++In this tutorial, you learned how to create distributed schemas and ran microservices using them as storage. You also learned how to explore and manage schema-based sharded Azure Cosmos DB for PostgreSQL. ++- Learn about cluster [node types](./concepts-nodes.md) |
cosmos-db | Tutorial Shard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-shard.md | to this one. Distributing table rows across multiple PostgreSQL servers is a key technique for scalable queries in Azure Cosmos DB for PostgreSQL. Together, multiple nodes can hold more data than a traditional database, and in many cases can use worker CPUs in-parallel to execute queries. +parallel to execute queries. The concept of hash-distributed tables is also known as [row-based sharding](concepts-sharding-models.md#row-based-sharding). In the prerequisites section, we created a cluster with two worker nodes. |
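As a minimal sketch of the hash-distribution step this tutorial builds toward (the table and column names here are placeholders, not the tutorial's actual schema):

```postgresql
-- Shard the table by hashing user_id; its rows spread across worker nodes.
CREATE TABLE github_events (user_id bigint, event_id bigint, payload jsonb);
SELECT create_distributed_table('github_events', 'user_id');
```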
cosmos-db | Synapse Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md | Azure Synapse Link isn't recommended if you're looking for traditional data ware * Although analytical store data isn't backed up, and therefore can't be restored, you can rebuild your analytical store by reenabling Azure Synapse Link in the restored container. Check the [analytical store documentation](analytical-store-introduction.md) for more information. -* The capability to turn on Synapse Link in database accounts with continuous backup enabled is in preview now. The opposite situation, to turn on continuous backup in Synapse Link enabled database accounts, is still not supported yet. +* The capability to turn on Synapse Link in database accounts with continuous backup enabled is available now. However, the opposite situation, turning on continuous backup in Synapse Link enabled database accounts, isn't supported yet. * Granular role-based access control isn't supported when querying from Synapse. Users that have access to your Synapse workspace and have access to the Azure Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers. |
cost-management-billing | Programmatically Create Subscription Enterprise Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md | An in-progress status is returned as an `Accepted` state under `provisioningStat ### [PowerShell](#tab/azure-powershell) -To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). +To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/get-azsubscriptionalias) command, using the billing scope `"/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"`. |
cost-management-billing | Programmatically Create Subscription Microsoft Customer Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md | An in-progress status is returned as an `Accepted` state under `provisioningStat ### [PowerShell](#tab/azure-powershell) -To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). +To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscriptionalias) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`. |
cost-management-billing | Programmatically Create Subscription Microsoft Partner Agreement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md | Pass the optional *resellerId* copied from the second step in the request body o ### [PowerShell](#tab/azure-powershell) -To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). +To install the version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). Run the following New-AzSubscriptionAlias command, using the billing scope `"/providers/Microsoft.Billing/billingAccounts/99a13315-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/customers/2281f543-xxxx-xxxx-xxxx-xxxxxxxxxxxx"`. |
cost-management-billing | Programmatically Create Subscription Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md | In the response, as part of the header `Location`, you get back a url that you c ### [PowerShell](#tab/azure-powershell) -To install the latest version of the module that contains the `New-AzSubscription` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). +To install the version of the module that contains the `New-AzSubscription` cmdlet, run `Install-Module Az.Subscription -RequiredVersion 0.9.0`. To install version 0.9.0 of PowerShellGet, see [Get PowerShellGet Module](/powershell/gallery/powershellget/install-powershellget). Run the [New-AzSubscription](/powershell/module/az.subscription) command below, replacing `<enrollmentAccountObjectId>` with the `ObjectId` collected in the first step (```747ddfe5-xxxx-xxxx-xxxx-xxxxxxxxxxxx```). To specify owners, see [how to get user object IDs](grant-access-to-create-subscription.md#userObjectId). |
defender-for-cloud | Support Matrix Defender For Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md | The following table shows feature support for Windows machines in Azure, Azure A | Missing OS patches assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes | | Security misconfigurations assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes | | [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |-| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | - | No | +| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No | | Third-party vulnerability assessment (BYOL) | ✔ | - | No | | [Network security assessment](protect-network-resources.md) | ✔ | - | No | The following table shows feature support for Linux machines in Azure, Azure Arc | Missing OS patches assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes | | Security misconfigurations assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes | | [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | - | - | No |-| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | - | No | +| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No | | Third-party vulnerability assessment (BYOL) | ✔ | - | No | | [Network security assessment](protect-network-resources.md) | ✔ | - | No | The following table shows feature support for AWS and GCP machines. | Missing OS patches assessment | ✔ | ✔ | | Security misconfigurations assessment | ✔ | ✔ | | [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | ✔ | ✔ |-| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | +| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | | Third-party vulnerability assessment | - | - | | [Network security assessment](protect-network-resources.md) | - | - | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | ✔ | - | |
deployment-environments | Configure Environment Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-environment-definition.md | Title: Add and configure an environment definition -description: Learn how to add and configure an environment definition to use in your dev center projects. Environment definitions contain an IaC template that defines the environment. +description: Learn how to add and configure an environment definition to use in your Azure Deployment Environments projects. Environment definitions contain an IaC template that defines the environment. +In this article, you learn how to add or update an environment definition in an Azure Deployment Environments catalog. + In Azure Deployment Environments, you can use a [catalog](concept-environments-key-concepts.md#catalogs) to provide your development teams with a curated set of predefined [infrastructure as code (IaC)](/devops/deliver/what-is-infrastructure-as-code) templates called [*environment definitions*](concept-environments-key-concepts.md#environment-definitions). An environment definition consists of at least two files: In this article, you learn how to: ## Add an environment definition +To add an environment definition to a catalog in Azure Deployment Environments, you first add the files to the repository. You then synchronize the dev center catalog with the updated repository. + To add an environment definition: 1. In your repository, create a subfolder in the repository folder path. az devcenter dev environment create --environment-definition-name Refer to the [Azure CLI devcenter extension](/cli/azure/devcenter/dev/environment) for full details of the `az devcenter dev environment create` command. ## Update an environment definition -To modify the configuration of Azure resources in an existing environment definition, update the associated ARM template JSON file in the repository. The change is immediately reflected when you create a new environment by using the specific environment definition. The update also is applied when you redeploy an environment that's associated with that environment definition. +To modify the configuration of Azure resources in an existing environment definition in Azure Deployment Environments, update the associated ARM template JSON file in the repository. The change is immediately reflected when you create a new environment by using the specific environment definition. The update also is applied when you redeploy an environment that's associated with that environment definition. To update any metadata related to the ARM template, modify *manifest.yaml*, and then [update the catalog](how-to-configure-catalog.md#update-a-catalog). ## Delete an environment definition -To delete an existing environment definition, in the repository, delete the subfolder that contains the ARM template JSON file and the associated manifest YAML file. Then, [update the catalog](how-to-configure-catalog.md#update-a-catalog). +To delete an existing environment definition, in the repository, delete the subfolder that contains the ARM template JSON file and the associated manifest YAML file. Then, [update the catalog](how-to-configure-catalog.md#update-a-catalog) in Azure Deployment Environments. After you delete an environment definition, development teams can no longer use the specific environment definition to deploy a new environment. 
Update the environment definition reference for any existing environments that were created by using the deleted environment definition. If the reference isn't updated and the environment is redeployed, the deployment fails. |
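To make the `az devcenter dev environment create` command referenced in the entry above concrete, here's a minimal sketch. It assumes the `devcenter` CLI extension, and the dev center, project, catalog, definition, and environment names are illustrative placeholders; check `az devcenter dev environment create --help` for the exact parameter set in your extension version:

```azurecli
# Create an environment from an environment definition in a catalog.
az devcenter dev environment create \
    --dev-center-name "ContosoDevCenter" \
    --project-name "DevProject" \
    --name "my-environment" \
    --environment-type "Dev" \
    --catalog-name "main" \
    --environment-definition-name "WebApp"
```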
deployment-environments | How To Configure Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md | Title: Add and configure a catalog -description: Learn how to add a catalog in your dev center to provide environment templates for your developers. Catalogs are repositories stored in GitHub or Azure DevOps. +description: Learn how to add a catalog in your Azure Deployment Environments dev center to provide environment templates for your developers. Catalogs are repositories stored in GitHub or Azure DevOps. Last updated 04/25/2023 -# Add and configure a catalog from GitHub or Azure DevOps +# Add and configure a catalog from GitHub or Azure DevOps in Azure Deployment Environments -Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments dev center. You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [environment definitions](./concept-environments-key-concepts.md#environment-definitions). Your catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which are managed by Microsoft for Azure Services. +Learn how to add and configure a [catalog](./concept-environments-key-concepts.md#catalogs) in your Azure Deployment Environments dev center. ++You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [environment definitions](./concept-environments-key-concepts.md#environment-definitions). Your catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which are managed by Microsoft for Azure Services. For more information about environment definitions, see [Add and configure an environment definition](./configure-environment-definition.md). In this article, you learn how to: ## Add a catalog +In Azure Deployment Environments, catalogs help you provide a set of curated IaC templates for your development teams to create environments. You can attach either a GitHub repository or an Azure DevOps repository as a catalog. + To add a catalog, you complete these tasks: - Get the clone URL for your repository. Get the path to the secret you created in the key vault. If you update the Azure Resource Manager template (ARM template) contents or definition in the attached repository, you can provide the latest set of environment definitions to your development teams by syncing the catalog. -To sync an updated catalog: +To sync an updated catalog in Azure Deployment Environments: 1. On the left menu for your dev center, under **Environment configuration**, select **Catalogs**. 1. Select the specific catalog, and then select **Sync**. The service scans through the repository and makes the latest list of environment definitions available to all the associated projects in the dev center. ## Delete a catalog -You can delete a catalog to remove it from the dev center. Templates in a deleted catalog aren't available to development teams when they deploy new environments. Update the environment definition reference for any existing environments that were created by using the environment definitions in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails. +You can delete a catalog to remove it from the Azure Deployment Environments dev center. 
Templates in a deleted catalog aren't available to development teams when they deploy new environments. Update the environment definition reference for any existing environments that were created by using the environment definitions in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails. To delete a catalog: |
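As a scripted companion to the portal steps summarized above, a hedged sketch of attaching a GitHub repository as a catalog with the `devcenter` CLI extension; the resource names, repository URI, branch, folder path, and Key Vault secret identifier are placeholders:

```azurecli
# Attach a GitHub repository as a catalog on a dev center.
az devcenter admin catalog create \
    --resource-group "rg-devcenter" \
    --dev-center-name "ContosoDevCenter" \
    --name "main" \
    --git-hub uri="https://github.com/contoso/environments.git" branch="main" path="/Environments" secret-identifier="https://contoso-kv.vault.azure.net/secrets/github-pat"
```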
deployment-environments | How To Configure Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md | -A [managed identity](../active-directory/managed-identities-azure-resources/overview.md) adds elevated-privileges capabilities and secure authentication to any service that supports Microsoft Entra authentication. Azure Deployment Environments uses identities to give development teams self-serve deployment capabilities without giving them access to the subscriptions in which Azure resources are created. +In this article, you learn how to add and configure a managed identity for your Azure Deployment Environments dev center to enable secure deployment for development teams. ++Azure Deployment Environments uses managed identities to give development teams self-serve deployment capabilities without giving them access to the subscriptions in which Azure resources are created. A [managed identity](../active-directory/managed-identities-azure-resources/overview.md) adds elevated-privileges capabilities and secure authentication to any service that supports Microsoft Entra authentication. The managed identity that's attached to a dev center should be [assigned both the Contributor role and the User Access Administrator in the deployment subscriptions](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) for each environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities that are set up for the environment type to deploy on behalf of the user. The managed identity that's attached to a dev center also is used to add to a [catalog](how-to-configure-catalog.md) and access [environment definitions](configure-environment-definition.md) in the catalog. -In this article, you learn how to: --> [!div class="checklist"] -> -> - Add a managed identity to your dev center -> - Assign a subscription role assignment to a managed identity -> - Grant access to a key vault secret for a managed identity - ## Add a managed identity In Azure Deployment Environments, you can choose between two types of managed identities: As a security best practice, if you choose to use user-assigned identities, use ## Assign a subscription role assignment to the managed identity -The identity that's attached to the dev center should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription. +The identity that's attached to the dev center in Azure Deployment Environments should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. 
You can use the managed identity to empower developers to create environments without granting them access to the subscription. ### Add a role assignment to a system-assigned managed identity The identity that's attached to the dev center should be assigned the Owner role ## Grant the managed identity access to the key vault secret -You can set up your key vault to use either a [key vault access policy'](../key-vault/general/assign-access-policy.md) or [Azure role-based access control](../key-vault/general/rbac-guide.md). +You can set up your key vault to use either a [key vault access policy](../key-vault/general/assign-access-policy.md) or [Azure role-based access control](../key-vault/general/rbac-guide.md). > [!NOTE] > Before you can add a repository as a catalog, you must grant the managed identity access to the key vault secret that contains the repository's personal access token. |
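The identity setup and subscription role assignment described in the entry above can also be scripted. A minimal sketch, assuming a user-assigned identity and placeholder names and IDs:

```azurecli
# Create a user-assigned managed identity for the dev center.
az identity create --resource-group "rg-devcenter" --name "id-ade-devcenter"

# Grant the identity the Owner role on a deployment subscription.
# <principalId> comes from the identity created above.
az role assignment create \
    --assignee-object-id "<principalId>" \
    --assignee-principal-type ServicePrincipal \
    --role "Owner" \
    --scope "/subscriptions/<deploymentSubscriptionId>"
```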
deployment-environments | How To Manage Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md | -# Manage your deployment environment +# Manage environments in Azure Deployment Environments -In Azure Deployment Environments, a platform engineer gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the preconfigured environment types. The permissions that the creator of the environment and the rest of team have to access the environment's resources are defined in the specific environment type. +In this article, you learn how to manage environments in Azure Deployment Environments. As a developer, you can create and manage your environments from the developer portal or by using the Azure CLI. -As a developer, you can create and manage your environments from the developer portal or by using the Azure CLI. +In Azure Deployment Environments, a platform engineer gives developers access to projects and the environment types that are associated with them. After a developer has access, they can create deployment environments based on the preconfigured environment types. The permissions that the creator of the environment and the rest of the team have to access the environment's resources are defined in the specific environment type. ## Prerequisites As a developer, you can create and manage your environments from the developer p ## Manage an environment by using the developer portal -The developer portal provides a graphical interface for development teams to create new environments and manage existing environments. You can create, redeploy, and delete your environments as needed in the developer portal. +The developer portal provides a graphical interface for development teams to create new environments and manage existing environments in Azure Deployment Environments. You can create, redeploy, and delete your environments as needed in the developer portal. ### Create an environment by using the developer portal When you need to update your environment, you can redeploy it. The redeployment 1. On the environment you want to redeploy, on the options menu, select **Redeploy**. - :::image type="content" source="media/how-to-manage-environments/option-redeploy.png" alt-text="Screenshot showing an environment tile with the options menu expanded and the redeploy option selected."::: + :::image type="content" source="media/how-to-manage-environments/option-redeploy.png" alt-text="Screenshot showing an environment tile with the options menu expanded and the Redeploy option selected."::: 1. If parameters are defined on the environment definition, you're prompted to make any changes you want to make. When you've made your changes, select **Redeploy**. You can delete your environment completely when you don't need it anymore. ## Manage an environment by using the Azure CLI -The Azure CLI provides a command-line interface for speed and efficiency when you create multiple similar environments, or for platforms where resources like memory are limited. +The Azure CLI provides a command-line interface for speed and efficiency when you create multiple similar environments, or for platforms where resources like memory are limited. 
You can use the `devcenter` Azure CLI extension to create, list, deploy, or delete an environment in Azure Deployment Environments. To learn how to manage your environments by using the CLI, see [Create and access an environment by using the Azure CLI](how-to-create-access-environments.md). |
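A brief sketch of the list and delete operations the entry above mentions, using the `devcenter` CLI extension; all names are placeholders, and parameter names should be verified against your extension version:

```azurecli
# List your environments in a project.
az devcenter dev environment list \
    --dev-center-name "ContosoDevCenter" \
    --project-name "DevProject"

# Delete an environment you no longer need.
az devcenter dev environment delete \
    --dev-center-name "ContosoDevCenter" \
    --project-name "DevProject" \
    --name "my-environment"
```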
deployment-environments | Quickstart Create Access Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md | In this quickstart, you learn how to: - [Create and configure a project](quickstart-create-and-configure-projects.md). ## Create an environment-You can create an environment from the developer portal. ++An environment in Azure Deployment Environments is a collection of Azure resources on which your application is deployed. You can create an environment from the developer portal. > [!NOTE] > Only a user who has the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role, the [DevCenter Project Admin](how-to-configure-project-admin.md) role, or a [built-in role](../role-based-access-control/built-in-roles.md) that has appropriate permissions can create an environment. You can create an environment from the developer portal. :::image type="content" source="media/quickstart-create-access-environments/add-environment.png" alt-text="Screenshot showing add environment pane."::: -If your environment is configured to accept parameters, you are able to enter them on a separate pane. In this example, you don't need to specify any parameters. +If your environment is configured to accept parameters, you're able to enter them on a separate pane. In this example, you don't need to specify any parameters. 1. Select **Create**. You see your environment in the developer portal immediately, with an indicator that shows creation in progress. ## Access an environment-You can access and manage your environments in the Microsoft Developer portal. ++You can access and manage your environments in the Azure Deployment Environments developer portal. 1. Sign in to the [developer portal](https://devportal.microsoft.com). -1. You are able to view all of your existing environments. To access the specific resources created as part of an Environment, select the **Environment Resources** link. +1. You're able to view all of your existing environments. To access the specific resources created as part of an Environment, select the **Environment Resources** link. :::image type="content" source="media/quickstart-create-access-environments/environment-resources.png" alt-text="Screenshot showing an environment card, with the environment resources link highlighted."::: -1. You are able to view the resources in your environment listed in the Azure portal. +1. You're able to view the resources in your environment listed in the Azure portal. :::image type="content" source="media/quickstart-create-access-environments/azure-portal-view-of-environment.png" alt-text="Screenshot showing Azure portal list of environment resources."::: Creating an environment automatically creates a resource group that stores the environment's resources. The resource group name follows the pattern {projectName}-{environmentName}. You can view the resource group in the Azure portal. |
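Because the resource group name follows the {projectName}-{environmentName} pattern noted above, you can also inspect an environment's resources from the command line. A minimal sketch with placeholder names:

```azurecli
# List the Azure resources behind environment "myEnv" in project "myProject".
az resource list --resource-group "myProject-myEnv" --output table
```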
deployment-environments | Quickstart Create And Configure Devcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md | Title: Create and configure a dev center -description: Learn how to configure a dev center in Deployment Environments. You create a dev center, attach an identity, attach a catalog, and create environment types. +description: Learn how to configure a dev center in Azure Deployment Environments. You create a dev center, attach an identity, attach a catalog, and create environment types. Last updated 09/06/2023 # Quickstart: Create and configure a dev center for Azure Deployment Environments -This quickstart shows you how to create and configure a dev center in Azure Deployment Environments. +In this quickstart, you'll set up all the resources in Azure Deployment Environments to enable development teams to self-serve deployment environments for their applications. Learn how to create and configure a dev center, add a catalog to the dev center, and define an environment type. A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md). To create and configure a Dev center in Azure Deployment Environments by using t :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created."::: ### Create a Key Vault-When you are using a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository. +When you're using a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository. If you don't have an existing key vault, use the following steps to create one: [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). |
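A hedged command-line alternative for creating the RBAC-enabled key vault described above; the vault name, resource group, and region are placeholders:

```azurecli
# Create a key vault that uses Azure RBAC instead of access policies.
az keyvault create \
    --name "kv-ade-example" \
    --resource-group "rg-devcenter" \
    --location "eastus" \
    --enable-rbac-authorization true
```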
deployment-environments | Quickstart Create And Configure Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md | -# Quickstart: Create and configure a project +# Quickstart: Create and configure an Azure Deployment Environments project -This quickstart shows you how to create a project in Azure Deployment Environments, and associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). +This quickstart shows you how to create a project in Azure Deployment Environments, and associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments to deploy their applications. The following diagram shows the steps you perform in this quickstart to configure a project associated with a dev center for Deployment Environments in the Azure portal. You need to perform the steps in both quickstarts before you can create a deploy ## Create a project -To create a project in your dev center: +In Azure Deployment Environments, a project represents a team or business function within the organization. When you associate a project with a dev center, all the settings for the dev center are automatically applied to the project. Each project can be associated with only one dev center. ++To create an Azure Deployment Environments project in your dev center: 1. In the [Azure portal](https://portal.azure.com/), go to Azure Deployment Environments. To create a project in your dev center: ## Create a project environment type +In Azure Deployment Environments, project environment types are a subset of the environment types that you configure for the dev center. They help you preconfigure the types of environments that specific development teams can create. + To configure a project, add a [project environment type](how-to-configure-project-environment-types.md): 1. In the Azure portal, go to your project. To configure a project, add a [project environment type](how-to-configure-projec |Name |Value | ||-| |**Type**| Select a dev center level environment type to enable for the specific project.|- |**Deployment subscription**| Select the subscription in which the environment will be created.| + |**Deployment subscription**| Select the subscription in which the environment is created.| |**Deployment identity** | Select either a system-assigned identity or a user-assigned managed identity that's used to perform deployments on behalf of the user.| |**Permissions on environment resources** > **Environment creator role(s)**| Select the roles to give access to the environment resources.| |**Permissions on environment resources** > **Additional access** | Select the users or Microsoft Entra groups to assign to specific roles on the environment resources.| To configure a project, add a [project environment type](how-to-configure-projec ## Give access to the development team +Before developers can create environments based on the environment types in a project, you must provide access for them through a role assignment at the level of the project. The Deployment Environments User role enables users to create, manage and delete their own environments. You must have sufficient permissions to a project before you can add users to it. + 1. 
In the Azure portal, go to your project. 1. In the left menu, select **Access control (IAM)**. |
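The role assignment from the steps above can also be made with the Azure CLI. A minimal sketch, assuming a placeholder user object ID and project scope:

```azurecli
# Grant a developer the Deployment Environments User role on a project.
az role assignment create \
    --assignee-object-id "<developerObjectId>" \
    --assignee-principal-type User \
    --role "Deployment Environments User" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DevCenter/projects/<projectName>"
```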
deployment-environments | Tutorial Deploy Environments In Cicd Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md | -# Tutorial: Deploy environments in CI/CD with GitHub -Continuous integration and continuous delivery (CI/CD) is a software development approach that helps teams to automate the process of building, testing, and deploying software changes. CI/CD enables you to release software changes more frequently and with greater confidence. +# Tutorial: Deploy environments in CI/CD with GitHub and Azure Deployment Environments In this tutorial, you'll learn how to integrate Azure Deployment Environments into your CI/CD pipeline by using GitHub Actions. You can use any GitOps provider that supports CI/CD, like GitHub Actions, Azure Arc, GitLab, or Jenkins. +Continuous integration and continuous delivery (CI/CD) is a software development approach that helps teams to automate the process of building, testing, and deploying software changes. CI/CD enables you to release software changes more frequently and with greater confidence. + You use a workflow that features three branches: main, dev, and test. - The *main* branch is always considered production. - You create feature branches from the *main* branch. - You create pull requests to merge feature branches into *main*. -This workflow is a small example for the purposes of this tutorial. Real world workflows may be more complex. +This workflow is a small example for the purposes of this tutorial. Real world workflows might be more complex. Before beginning this tutorial, you can familiarize yourself with Deployment Environments resources and concepts by reviewing [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md). In this tutorial, you learn how to: ## 1. Create and configure a dev center -In this section, you create a dev center and project with three environment types; Dev, Test and Prod +In this section, you create an Azure Deployment Environments dev center and project with three environment types: Dev, Test, and Prod. - The Prod environment type contains the single production environment - A new environment is created in Dev for each feature branch - A new environment is created in Test for each pull request+ ### 1.1 Set up the Azure CLI To begin, sign in to Azure. Run the following command, and follow the prompts to complete the authentication process. You can protect important branches by setting branch protection rules. Protectio ### 3.4 Create a GitHub personal access token -Next, create a [fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#fine-grained-personal-access-tokens) to enable your dev center to connect to your repository and consume the environment catalog. +Next, create a [fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#fine-grained-personal-access-tokens) to enable your Azure Deployment Environments dev center to connect to your repository and consume the environment catalog. > [!NOTE] > Fine-grained personal access tokens are currently in beta and subject to change. To leave feedback, see the [feedback discussion](https://github.com/community/community/discussions/36441). az keyvault secret set \ ## 4. 
Connect the catalog to your dev center -A catalog is a repository that contains a set of environment definitions. Catalog items consist of an IaC template and a manifest file. The template defines the environment, and the manifest provides metadata about the template. Development teams use environment definitions from the catalog to create environments. +In Azure Deployment Environments, a catalog is a repository that contains a set of environment definitions. Catalog items consist of an IaC template and a manifest file. The template defines the environment, and the manifest provides metadata about the template. Development teams use environment definitions from the catalog to create environments. The template you used to create your GitHub repository contains a catalog in the _Environments_ folder. You can also authenticate a service principal directly using a secret, but that With GitHub environments, you can configure environments with protection rules and secrets. A workflow job that references an environment must follow any protection rules for the environment before running or accessing the environment's secrets. -Create three environments: Dev, Test, and Prod to map to the project's environment types. +Create three environments: Dev, Test, and Prod to map to the environment types in the Azure Deployment Environments project. > [!NOTE] > Environments, environment secrets, and environment protection rules are available in public repositories for all products. For access to environments, environment secrets, and deployment branches in **private** or **internal** repositories, you must use GitHub Pro, GitHub Team, or GitHub Enterprise. For access to other environment protection rules in **private** or **internal** repositories, you must use GitHub Enterprise. For more information, see "[GitHub's products.](https://docs.github.com/en/get-started/learning-about-github/githubs-products)" For more information about environments and required approvals, see "[Using envi 1. Select **Required reviewers**. -2. Search for and select your GitHub user. You may enter up to six people or teams. Only one of the required reviewers needs to approve the job for it to proceed. +2. Search for and select your GitHub user. You can enter up to six people or teams. Only one of the required reviewers needs to approve the job for it to proceed. 3. Select **Save protection rules**. |
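The entry above shows the start of an `az keyvault secret set` command for storing the PAT. For reference, a complete hedged sketch with placeholder names:

```azurecli
# Store the fine-grained GitHub PAT as a Key Vault secret.
az keyvault secret set \
    --vault-name "kv-ade-example" \
    --name "github-pat" \
    --value "<your-fine-grained-pat>"
```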
dev-box | Concept Dev Box Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md | Title: Microsoft Dev Box key concepts -description: Learn key concepts and terminology for Microsoft Dev Box. +description: Learn key concepts and terminology for Microsoft Dev Box. Get an understanding of dev centers, dev boxes, dev box definitions, and dev box pools. -# Key concepts for Microsoft Dev Box +# Key concepts for Microsoft Dev Box -This article describes the key concepts and components of Microsoft Dev Box. +This article describes the key concepts and components of Microsoft Dev Box to help you set up the service successfully. -As you learn about Microsoft Dev Box, you'll also encounter components of [Azure Deployment Environments](../deployment-environments/overview-what-is-azure-deployment-environments.md), a complementary service that shares certain architectural components. Deployment Environments provides developers with preconfigured cloud-based environments for developing applications. +Microsoft Dev Box gives developers self-service access to preconfigured and ready-to-code cloud-based workstations. You can configure the service to match your development team and project structure, and manage security and network settings to access resources securely. Different components play a part in the configuration of Microsoft Dev Box. +Microsoft Dev Box builds on the same foundations as [Azure Deployment Environments](/azure/deployment-environments/overview-what-is-azure-deployment-environments). Deployment Environments provides developers with preconfigured cloud-based environments for developing applications. Both services are complementary and share certain architectural components, such as a [dev center](#dev-center) or [project](#project). + ## Dev center A dev center is a collection of [Projects](#project) that require similar settings. Dev centers enable platform engineers to: |
dev-box | How To Configure Azure Compute Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md | Title: Configure Azure Compute Gallery -description: Learn how to create an Azure Compute Gallery repository for managing and sharing Dev Box images. +description: Learn how to create and attach an Azure compute gallery to a dev center in Microsoft Dev Box. Use a compute gallery to manage and share dev box images. -Azure Compute Gallery is a service for managing and sharing images. A gallery is a repository that's stored in your Azure subscription and helps you build structure and organization around your image resources. You can use a gallery to provide custom images for your dev box users. +In this article, you learn how to configure and attach an Azure compute gallery to a dev center in Microsoft Dev Box. With Azure Compute Gallery, you can give developers customized images for their dev box. ++Azure Compute Gallery is a service for managing and sharing images. A gallery is a repository that's stored in your Azure subscription and helps you build structure and organization around your image resources. ++After you attach a compute gallery to a dev center in Microsoft Dev Box, you can create dev box definitions based on images stored in the compute gallery. Advantages of using a gallery include: To learn more about Azure Compute Gallery and how to create galleries, see: - A dev center. If you don't have one available, follow the steps in [Create a dev center](quickstart-configure-dev-box-service.md#1-create-a-dev-center). - A compute gallery. Images stored in a compute gallery can be used in a dev box definition, provided they meet the requirements listed in the [Compute gallery image requirements](#compute-gallery-image-requirements) section.- + > [!NOTE] > Microsoft Dev Box doesn't support community galleries. To learn more about Azure Compute Gallery and how to create galleries, see: A gallery used to configure dev box definitions must have at least [one image definition and one image version](../virtual-machines/image-version.md). -When creating a virtual machine image, select an image from the marketplace that is Dev Box compatible, like the following examples: +When you create a virtual machine image, select an image from the Azure Marketplace that is compatible with Microsoft Dev Box. The following are examples of compatible images: - [Visual Studio 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftvisualstudio.visualstudio2019plustools?tab=Overview) - [Visual Studio 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftvisualstudio.visualstudioplustools?tab=Overview) The image version must meet the following requirements: :::image type="content" source="media/how-to-configure-azure-compute-gallery/image-definition.png" alt-text="Screenshot that shows Windows 365 image requirement settings."::: > [!NOTE]-> - Dev Box image requirements exceed [Windows 365 image requirements](/windows-365/enterprise/device-images) and include settings to optimize dev box creation time and performance. +> - Microsoft Dev Box image requirements exceed [Windows 365 image requirements](/windows-365/enterprise/device-images) and include settings to optimize dev box creation time and performance. > - Images that do not meet Windows 365 requirements will not be listed for creation. 
## Provide permissions for services to access a gallery -When you use an Azure Compute Gallery image to create a dev box definition, the Windows 365 service validates the image to ensure that it meets the requirements to be provisioned for a dev box. The Dev Box service replicates the image to the regions specified in the attached network connections, so the images are present in the region that's required for dev box creation. +When you use an Azure Compute Gallery image to create a dev box definition, the Windows 365 service validates the image to ensure that it meets the requirements to be provisioned for a dev box. Microsoft Dev Box replicates the image to the regions specified in the attached network connections, so the images are present in the region that's required for dev box creation. To allow the services to perform these actions, you must provide permissions to your gallery as follows. To allow the services to perform these actions, you must provide permissions to ### Assign roles -The Dev Box service behaves differently depending how you attach your gallery: +Microsoft Dev Box behaves differently depending on how you attach your gallery: - When you use the Azure portal to attach the gallery to your dev center, the Dev Box service creates the necessary role assignments automatically after you attach the gallery. - When you use the Azure CLI to attach the gallery to your dev center, you must manually create the Windows 365 service principal and the dev center's managed identity role assignments before you attach the gallery. You can use the same managed identity in multiple dev centers and compute galler ## Attach a gallery to a dev center -To use the images from a gallery in dev box definitions, you must first associate the gallery with the dev center by attaching it: +To use the images from a compute gallery in dev box definitions, you must first associate the gallery with the dev center by attaching it: 1. Sign in to the [Azure portal](https://portal.azure.com). You can detach galleries from dev centers so that their images can no longer be The gallery is detached from the dev center. The gallery and its images aren't deleted, and you can reattach it if necessary. -## Next steps +## Related content - Learn more about [key concepts in Microsoft Dev Box](./concept-dev-box-concepts.md). |
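Attaching a gallery, as described above, can also be done with the `devcenter` CLI extension, provided the role assignments are created first. A minimal sketch with placeholder names and a placeholder gallery resource ID:

```azurecli
# Attach an existing compute gallery to a dev center.
az devcenter admin gallery create \
    --resource-group "rg-devcenter" \
    --dev-center-name "ContosoDevCenter" \
    --name "myGallery" \
    --gallery-resource-id "/subscriptions/<subscriptionId>/resourceGroups/rg-images/providers/Microsoft.Compute/galleries/myGallery"
```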
dev-box | How To Configure Dev Box Hibernation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-hibernation.md | Title: Configure hibernation for Microsoft Dev Box -description: Learn how to enable, disable and troubleshoot hibernation for your dev boxes. Configure hibernation settings for your image and dev box definition. +description: Learn how to enable, disable, and troubleshoot hibernation in Microsoft Dev Box. Configure hibernation settings for your image and dev box definition. -# Configure Dev Box Hibernation (preview) for a dev box definition +# Configure hibernation in Microsoft Dev Box ++In this article, you learn how to enable and disable hibernation in Microsoft Dev Box. You control hibernation at the dev box image and dev box definition level. Hibernating dev boxes at the end of the workday can help you save a substantial portion of your VM costs. It eliminates the need for developers to shut down their dev box and lose their open windows and applications. With the introduction of Dev Box Hibernation (Preview), you can enable this capability on new dev boxes and hibernate and resume them. This feature provides a convenient way to manage your dev boxes while maintaining your work environment. -There are two steps in enabling hibernation; you must enable hibernation on your dev box image and enable hibernation on your dev box definition. +There are two steps to enable hibernation: ++1. Enable hibernation on your dev box image +1. Enable hibernation on your dev box definition > [!IMPORTANT] > Dev Box Hibernation is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -## Key concepts for hibernation-enabled images +## Considerations for hibernation-enabled images - The following SKUs support hibernation: 8, 16 vCPU SKUs. 32 vCPU SKUs do not support hibernation. These settings are known to be incompatible with hibernation, and aren't support 1. In the start menu, search for *Turn Windows features on or off* 1. In Turn Windows features on or off, select **Virtual Machine Platform**, and then select **OK** -## Enable hibernation on your dev box image -The Visual Studio and Microsoft 365 images that Dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images, they're ready to use. -If you plan to use a custom image from an Azure Compute Gallery, you need to enable hibernation capabilities as you create the new image. To enable hibernation capabilities, set the IsHibernateSupported flag to true. You must set the IsHibernateSupported flag when you create the image, existing images can't be modified. +## Enable hibernation on your dev box image +If you plan to use a custom image from an Azure compute gallery, you need to enable hibernation capabilities when you create the new image. You can't enable hibernation for existing images. +> [!NOTE] +> The Visual Studio and Microsoft 365 images that Microsoft Dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images; they're ready to use. 
-To enable hibernation capabilities, set the `IsHibernateSupported` flag to true: +To enable hibernation capabilities, set the `IsHibernateSupported` flag to `true` when you create the image: ```azurecli az sig image-definition create For more information about creating a custom image, see [Configure a dev box by ## Enable hibernation on a dev box definition -You can enable hibernation as you create a dev box definition, providing that the dev box definition uses a hibernation-enabled custom or marketplace image. You can also update an existing dev box definition that uses a hibernation-enabled custom or marketplace image. +In Microsoft Dev Box, you enable hibernation for a dev box definition, providing that the dev box definition uses a hibernation-enabled custom or marketplace image. You can also update an existing dev box definition that uses a hibernation-enabled custom or marketplace image. All new dev boxes created in dev box pools that use a dev box definition with hibernation enabled can hibernate in addition to shutting down. If a pool has dev boxes that were created before hibernation was enabled, they continue to only support shutdown. -Dev Box validates your image for hibernate support. Your dev box definition might fail validation if hibernation couldn't be successfully enabled using your image. +Microsoft Dev Box validates your image for hibernate support. Your dev box definition might fail validation if hibernation couldn't be successfully enabled using your image. You can enable hibernation on a dev box definition by using the Azure portal or the CLI. -### Enable hibernation on an existing dev box definition by using the Azure portal +### Enable hibernation for a dev box definition by using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com). You can enable hibernation on a dev box definition by using the Azure portal or 1. Select **Save**. -### Update an existing dev box definition by using the CLI +### Enable hibernation for a dev box definition by using the Azure CLI ```azurecli az devcenter admin devbox-definition update az devcenter admin devbox-definition update ## Disable hibernation on a dev box definition - If you have issues provisioning new VMs after enabling hibernation on a pool or you want to revert to shut down only dev boxes, you can disable hibernation on the dev box definition. +If you have issues provisioning new VMs after enabling hibernation on a pool or you want to revert to shut down only dev boxes, you can disable hibernation on the dev box definition. You can disable hibernation on a dev box definition by using the Azure portal or the CLI. -### Disable hibernation on an existing dev box definition by using the Azure portal +### Disable hibernation for a dev box definition by using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com). You can disable hibernation on a dev box definition by using the Azure portal or 1. Select **Save**. -### Disable hibernation on an existing dev box definition by using the CLI +### Disable hibernation for a dev box definition by using the CLI ```azurecli az devcenter admin devbox-definition update --dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernateSupport disabled ``` -## Next steps +## Related content - [Create a dev box pool](how-to-manage-dev-box-pools.md) - [Configure a dev box by using Azure VM Image Builder](how-to-customize-devbox-azure-image-builder.md) |
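To flesh out the truncated `az sig image-definition create` snippet in the entry above, a hedged sketch of creating a hibernation-capable image definition; the gallery, definition, and publisher values are placeholders:

```azurecli
# Create a Gen2 Windows image definition with hibernation support enabled.
az sig image-definition create \
    --resource-group "rg-images" \
    --gallery-name "myGallery" \
    --gallery-image-definition "devbox-hibernate" \
    --publisher "myCompany" --offer "devbox" --sku "1-0-0" \
    --os-type Windows --os-state Generalized \
    --hyper-v-generation V2 \
    --features "IsHibernateSupported=true"
```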
dev-box | How To Configure Network Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md | Title: Configure network connections -description: Learn how to create, delete, attach, and remove Microsoft Dev Box network connections. +description: Learn how to manage network connections for a dev center in Microsoft Dev Box. Use network connections to connect to virtual network or enable connecting to on-premises resources from a dev box. -Network connections allow dev boxes to connect to existing virtual networks. They also determine the region into which dev boxes are deployed. +In this article, you learn how to manage network connections for a dev center in Microsoft Dev Box. Network connections enable dev boxes to connect to existing virtual networks. In addition, you can configure the network settings to enable connecting to on-premises resources from your dev box. The location, or Azure region, of the network connection determines where associated dev boxes are hosted. ++You need to add at least one network connection to a dev center in Microsoft Dev Box. When you're planning network connectivity for your dev boxes, you must: To create a network connection, you need an existing virtual network and subnet. | - | -- | | **Subscription** | Select your subscription. | | **Resource group** | Select an existing resource group. Or create a new one by selecting **Create new**, entering **rg-name**, and then selecting **OK**. |- | **Name** | Enter **VNet-name**. | + | **Name** | Enter *VNet-name*. | | **Region** | Select the region for the virtual network and dev boxes. | :::image type="content" source="./media/how-to-manage-network-connection/example-basics-tab.png" alt-text="Screenshot of the Basics tab on the pane for creating a virtual network in the Azure portal." border="true"::: To create a network connection, you need an existing virtual network and subnet. 1. Select **Create**. -## Allow access to Dev Box endpoints from your network +## Allow access to Microsoft Dev Box endpoints from your network An organization can control network ingress and egress by using a firewall, network security groups, and even Microsoft Defender. The following sections show you how to create and configure a network connection ### Types of Active Directory join -The Dev Box service requires a configured and working Active Directory join, which defines how dev boxes join your domain and access resources. There are two choices: +Microsoft Dev Box requires a configured and working Active Directory join, which defines how dev boxes join your domain and access resources. There are two choices: - **Microsoft Entra join**: If your organization uses Microsoft Entra ID, you can use a Microsoft Entra join (sometimes called a native Microsoft Entra join). Dev box users sign in to Microsoft Entra joined dev boxes by using their Microsoft Entra account and access resources based on the permissions assigned to that account. Microsoft Entra join enables access to cloud-based and on-premises apps and resources. You can remove a network connection from a dev center if you no longer want to u The network connection is no longer available for use in the dev center. -## Next steps +## Related content - [Manage a dev box definition](how-to-manage-dev-box-definitions.md) - [Manage a dev box pool](how-to-manage-dev-box-pools.md) |
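A hedged sketch of creating a Microsoft Entra join network connection with the `devcenter` CLI extension; the names, region, and subnet resource ID are placeholders:

```azurecli
# Create a network connection that dev boxes join with Microsoft Entra ID.
az devcenter admin network-connection create \
    --resource-group "rg-devcenter" \
    --name "nc-eastus" \
    --location "eastus" \
    --domain-join-type "AzureADJoin" \
    --subnet-id "/subscriptions/<subscriptionId>/resourceGroups/rg-network/providers/Microsoft.Network/virtualNetworks/VNet-name/subnets/default"
```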
dev-box | How To Create Dev Boxes Developer Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-create-dev-boxes-developer-portal.md | Title: Create & configure a dev box by using the developer portal + Title: Manage a dev box in the developer portal -description: Learn how to create, delete, and connect to Microsoft Dev Box dev boxes by using the developer portal. +description: Learn how to create, delete, and connect to a dev box by using the Microsoft Dev Box developer portal. Last updated 09/11/2023 -# Manage a dev box by using the developer portal +# Manage a dev box by using the Microsoft Dev Box developer portal -Developers can manage their dev boxes through the developer portal. As a developer, you can view information about your dev boxes. You can also connect to, start, stop, restart, and delete them. +In this article, you learn how to manage a dev box by using the Microsoft Dev Box developer portal. Developers can access their dev boxes directly in the developer portal, instead of having to use the Azure portal. ++As a developer, you can view information about your dev boxes. You can also connect to, start, stop, restart, and delete them. ## Permissions As a dev box developer, you can: ## Create a dev box -You can create as many dev boxes as you need through the developer portal, but there are common ways to split up your workload. --You could create a dev box for your front-end work and a separate dev box for your back-end work. You could also create multiple dev boxes for your back end. +You can create as many dev boxes as you need through the Microsoft Dev Box developer portal. You might create a separate dev box for different scenarios, for example: -For example, say you're working on a bug. You could use a separate dev box for the bug fix to work on the specific task and troubleshoot the issue without impacting your primary machine. +- **Dev box per workload**: you could create a dev box for your front-end work and a separate dev box for your back-end work. You could also create multiple dev boxes for your back end. +- **Dev box for bug fixing**: you could use a separate dev box for the bug fix to work on the specific task and troubleshoot the issue without impacting your primary machine. You can create a dev box by using: You can create a dev box by using: ## Connect to a dev box -After you create your dev box, you can connect to it through a Remote Desktop app or through a browser. +After you create your dev box, you can connect to it in two ways: -Remote Desktop provides the highest performance and best user experience for heavy workloads. Remote Desktop also supports multi-monitor configuration. For more information, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md). +- **Remote desktop client application**: remote desktop provides the highest performance and best user experience for heavy workloads. Remote Desktop also supports multi-monitor configuration. For more information, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md). -Use the browser for lighter workloads. When you access your dev box via your phone or laptop, you can use the browser. The browser is useful for tasks such as a quick bug fix or a review of a GitHub pull request. For more information, see the [steps for using a browser to connect to a dev box](./quickstart-create-dev-box.md#connect-to-a-dev-box). 
+- **Browser**: use the browser for lighter workloads. When you access your dev box via your phone or laptop, you can use the browser. The browser is useful for tasks such as a quick bug fix or a review of a GitHub pull request. For more information, see the [steps for using a browser to connect to a dev box](./quickstart-create-dev-box.md#connect-to-a-dev-box). ## Shut down, restart, or start a dev box -You can perform many actions on a dev box through the developer portal by using the actions menu on the dev box tile. The options you see depend on the state of the dev box, and the configuration of the dev box pool it belongs to. For example, you can shut down or restart a running dev box, or start a stopped dev box. +You can perform many actions on a dev box in the Microsoft Dev Box developer portal by using the actions menu on the dev box tile. The available options depend on the state of the dev box and the configuration of the dev box pool it belongs to. For example, you can shut down or restart a running dev box, or start a stopped dev box. To shut down or restart a dev box: To start a dev box: ## Get information about a dev box -You can view information about a dev box, like the creation date, the dev center it belongs to, and the dev box pool it belongs to. You can also check the source image in use. +You can use the Microsoft Dev Box developer portal to view information about a dev box, such as the creation date, and the dev center and dev box pool it belongs to. You can also check the source image in use. To get more information about your dev box: To get more information about your dev box: ## Delete a dev box -When you no longer need a dev box, you can delete it. +When you no longer need a dev box, you can delete it in the developer portal. There are many reasons why you might not need a dev box anymore. Maybe you finished testing, or you finished working on a specific project within your product. |
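Alongside the developer portal flow described above, a minimal sketch of creating a dev box with the `devcenter` CLI extension; the dev center, project, pool, and dev box names are placeholders:

```azurecli
# Create a dev box in an existing pool.
az devcenter dev dev-box create \
    --dev-center-name "ContosoDevCenter" \
    --project-name "DevProject" \
    --pool-name "DevPool" \
    --name "my-dev-box"
```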
dev-box | How To Customize Devbox Azure Image Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md | Title: Configure a dev box by using Azure VM Image Builder -description: Learn how to create a custom image by using Azure VM Image Builder, and then create a dev box by using the image. +description: Learn how to use Azure VM Image Builder to build a custom image for configuring dev boxes with Microsoft Dev Box. Last updated 04/25/2023 -# Configure a dev box by using Azure VM Image Builder +# Configure a dev box by using Azure VM Image Builder and Microsoft Dev Box -When your organization uses standardized virtual machine (VM) images, it can more easily migrate to the cloud and help ensure consistency in your deployments. +In this article, you use Azure VM Image Builder to create a customized dev box in Microsoft Dev Box by using a template. The template includes a customization step to install Visual Studio Code (VS Code). -Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you can create a configuration that describes your image. The service then builds the image and submits it to a dev box project. --In this article, you create a customized dev box by using a template. The template includes a customization step to install Visual Studio Code (VS Code). +When your organization uses standardized virtual machine (VM) images, it can more easily migrate to the cloud and help ensure consistency in your deployments. Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you can create a configuration that describes your image. The service then builds the image and submits it to a dev box project. Although it's possible to create custom VM images by hand or by using other tools, the process can be cumbersome and unreliable. VM Image Builder, which is built on HashiCorp Packer, gives you the benefits of a managed service. To provision a custom image that you created by using VM Image Builder, you need ## Create a Windows image and distribute it to Azure Compute Gallery -The next step is to use Azure VM Image Builder and Azure PowerShell to create an image version in Azure Compute Gallery and then distribute the image globally. You can also do this by using the Azure CLI. +The first step is to use Azure VM Image Builder and Azure PowerShell to create an image version in Azure Compute Gallery and then distribute the image globally. You can also do this by using the Azure CLI. 1. To use VM Image Builder, you need to register the features. $features = @($SecurityType) New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2" ``` -1. Copy the following Azure Resource Manager template for VM Image Builder. This template indicates the source image and the customizations applied. This template installs Choco and VS Code. It also indicates where the image will be distributed. +1. Copy the following Azure Resource Manager template for VM Image Builder. 
This template indicates the source image and the customizations applied. This template installs Choco and VS Code. It also indicates where the image is distributed. ```json { Alternatively, you can view the provisioning state of your image in the Azure po After your custom image has been provisioned in the gallery, you can configure the gallery to use the images in the dev center. For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md). -## Set up the Dev Box service with a custom image +## Set up Microsoft Dev Box with a custom image -After the gallery images are available in the dev center, you can use the custom image with the Microsoft Dev Box service. For more information, see [Quickstart: Configure Microsoft Dev Box ](./quickstart-configure-dev-box-service.md). +After the gallery images are available in the dev center, you can use the custom image with Microsoft Dev Box. For more information, see [Quickstart: Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md). -## Next steps +## Related content - [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition) |
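> Editor's note: for readers who prefer the Azure CLI over the `New-AzGalleryImageDefinition` PowerShell cmdlet shown above, a rough equivalent sketch follows. It reuses the same publisher/offer/SKU values from the snippet; the `SecurityType=TrustedLaunch` feature value is an assumption standing in for the `$SecurityType` variable:

```azurecli
az sig image-definition create \
    --resource-group <imageResourceGroup> \
    --gallery-name <galleryName> \
    --gallery-image-definition <imageDefName> \
    --location <location> \
    --os-type Windows \
    --os-state Generalized \
    --publisher myCompany \
    --offer vscodebox \
    --sku 1-0-0 \
    --hyper-v-generation V2 \
    --features SecurityType=TrustedLaunch
```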
dev-box | How To Dev Box User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-dev-box-user.md | Title: Provide user access to dev box projects + Title: Grant user access to dev box projects -description: Learn how to provide user-level access to projects for developers so that they can create and manage dev boxes. +description: Learn how to grant user-level access to projects in Microsoft Dev Box to enable developers to create and manage dev boxes. Last updated 04/25/2023 -# Provide user-level access to projects for developers +# Grant user-level access to projects in Microsoft Dev Box -Team members must have access to a specific Microsoft Dev Box project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory users or groups at the project level. +In this article, you learn how to grant developers access to create and manage a dev box in the Microsoft Dev Box developer portal. Microsoft Dev Box uses Azure role-based access control (Azure RBAC) to grant access to functionality in the service. ++Team members must have access to a specific Microsoft Dev Box project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory users or groups. You assign the role at the project level in Microsoft Dev Box. [!INCLUDE [supported accounts note](./includes/note-supported-accounts.md)] A DevCenter Dev Box User can: ## Assign permissions to dev box users +To grant a user access to create and manage a dev box in Microsoft Dev Box, you assign the DevCenter Dev Box User role at the project level. + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **projects**. In the list of results, select **Projects**. The users can now view the project and all the pools within it. Dev box users ca [!INCLUDE [dev box runs on creation note](./includes/note-dev-box-runs-on-creation.md)] -## Next steps +## Related content - [Quickstart: Create a dev box by using the developer portal](quickstart-create-dev-box.md) |
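> Editor's note: the role assignment that the entry above walks through in the Azure portal can also be scripted. A minimal sketch, assuming you already know the user's or group's object ID and the project's resource ID:

```azurecli
# Assign the built-in DevCenter Dev Box User role at project scope.
az role assignment create \
    --assignee <userOrGroupObjectId> \
    --role "DevCenter Dev Box User" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DevCenter/projects/<projectName>"
```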
dev-box | How To Hibernate Your Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-hibernate-your-dev-box.md | Title: Hibernate your Microsoft Dev Box + Title: Hibernate a dev box -description: Learn how to hibernate your dev boxes. +description: Learn how to hibernate a dev box in Microsoft Dev Box. Use hibernation to shut down your VM, while preserving your active work. -# How to hibernate your dev box +# Hibernate a dev box in Microsoft Dev Box ++In this article, you learn how to hibernate and resume a dev box in Microsoft Dev Box. Hibernation is a power-saving state that saves your running applications to your hard disk and then shuts down the virtual machine (VM). When you resume the VM, all your previous work is restored. -You can hibernate your dev box through the developer portal or the CLI. You can't hibernate your dev box from the dev box itself. +You can hibernate your dev box through the Microsoft Dev Box developer portal or the CLI. You can't hibernate your dev box from within the virtual machine. > [!IMPORTANT] > Dev Box Hibernation is currently in PREVIEW. You can hibernate your dev box through the developer portal or the CLI. You can' ## Hibernate your dev box using the developer portal -Hibernate your dev box through the developer portal: +To hibernate your dev box through the Microsoft Dev Box developer portal: 1. Sign in to the [developer portal](https://aka.ms/devbox-portal). 1. On the dev box you want to hibernate, on the more options menu, select **Hibernate**. -Dev boxes that support hibernation will show the **Hibernate** option. Dev boxes that only support shutdown will show the **Shutdown** option. +Dev boxes that support hibernation show the **Hibernate** option. Dev boxes that only support shutdown show the **Shutdown** option. ## Resume your dev box using the developer portal -Resume your Dev box through the developer portal: +To resume your dev box through the Microsoft Dev Box developer portal: 1. Sign in to the [developer portal](https://aka.ms/devbox-portal). 1. On the dev box you want to resume, on the more options menu, select **Resume**. -In addition, you can also double select on your dev box in the list of VMs you see in the "Remote Desktop" app. Your dev box automatically starts up and resumes from a hibernating state. +In addition, you can also double select on your dev box in the list of VMs you see in the "Remote Desktop" app. Your dev box automatically starts up and resumes from a hibernating state. -## Hibernate your dev box using the CLI +## Hibernate your dev box using the Azure CLI -You can use the CLI to hibernate your dev box: +To hibernate your dev box by using the Azure CLI: ```azurecli-interactive az devcenter dev dev-box stop --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me" --hibernate true To learn more about managing your dev box from the CLI, see: [devcenter reference **My dev box doesn't resume from hibernated state. Attempts to connect to it fail and I receive an error from the RDP app.** -If your machine is unresponsive, it may have stalled either while going into hibernation or resuming from hibernation, you can manually reboot your dev box. +If your machine is unresponsive, it might have stalled either while going into hibernation or resuming from hibernation. You can manually reboot your dev box. 
To shut down your dev box, either To shut down your dev box, either **When my dev box resumes from a hibernated state, all my open windows were gone.** -Dev Box Hibernation is a preview feature, and you might run into reliability issues. Enable AutoSave on your applications to minimize the impact of session loss. +Dev Box Hibernation is a preview feature, and you might run into reliability issues. Enable AutoSave on your applications to minimize the effects of session loss. **I changed some settings on one of my dev boxes and it no longer hibernates. My other dev boxes hibernate without issues. What could be the problem?** Some settings aren't compatible with hibernation and prevent your dev box from hibernating. To learn about these settings, see: [Settings not compatible with hibernation](how-to-configure-dev-box-hibernation.md#settings-not-compatible-with-hibernation). - ## Next steps + ## Related content - [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md) - [How to configure Dev Box Hibernation (preview)](how-to-configure-dev-box-hibernation.md) |
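> Editor's note: the entry above shows the `--hibernate` flag on the stop command. To complete the picture, a hedged sketch of the hibernate/resume pair; it assumes that `start` resumes a hibernated dev box, as the portal's **Resume** action implies:

```azurecli
# Hibernate the dev box (preserves the running session on disk).
az devcenter dev dev-box stop --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me" --hibernate true

# Resume (start) the dev box later; previous work is restored.
az devcenter dev dev-box start --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me"
```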
dev-box | How To Manage Dev Box Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md | Title: Create, update, delete dev box definitions + Title: Manage dev box definitions -description: Microsoft Dev Box dev box definitions define a source image, compute size, and storage size for your dev boxes with. Learn how to manage dev box definitions. +description: Microsoft Dev Box dev box definitions define a source image, compute size, and storage size for your dev boxes. Learn how to manage dev box definitions. -A dev box definition is a Microsoft Dev Box resource that specifies a source image, compute size, and storage size. +In this article, you learn how to manage a dev box definition by using the Azure portal. A dev box definition is a Microsoft Dev Box resource that specifies the source image, compute size, and storage size for a dev box. Depending on their task, development teams have different software, configuration, compute, and storage requirements. You can create a new dev box definition to fulfill each team's needs. There's no limit to the number of dev box definitions that you can create, and you can use dev box definitions across multiple projects in a dev center. To manage a dev box definition, you need the following permissions: ## Sources of images -When you create a dev box definition, you can choose a preconfigured image from Azure Marketplace or a custom image from Azure Compute Gallery. +When you create a dev box definition, you need to select a virtual machine image. Microsoft Dev Box supports the following types of images: ++- Preconfigured images from the Azure Marketplace +- Custom images stored in an Azure compute gallery ### Azure Marketplace When you're selecting an Azure Marketplace image, consider using an image that h ### Azure Compute Gallery -Azure Compute Gallery enables you to store and manage a collection of custom images. You can build an image to your dev team's exact requirements and store it in a gallery. +Azure Compute Gallery enables you to store and manage a collection of custom images. You can build an image to your dev team's exact requirements and store it in a compute gallery. -To use the custom image while creating a dev box definition, attach the gallery to your dev center. To learn how to attach a gallery, see [Configure Azure Compute Gallery](how-to-configure-azure-compute-gallery.md). +To use the custom image while creating a dev box definition, attach the compute gallery to your dev center in Microsoft Dev Box. Follow these steps to [attach a compute gallery to a dev center](how-to-configure-azure-compute-gallery.md). ## Image versions -When you select an image to use in your dev box definition, you must specify whether you'll use updated versions of the image: +When you select an image to use in your dev box definition, you must specify which version of the image you want to use: - **Numbered image versions**: If you want a consistent dev box definition in which the base image doesn't change, use a specific, numbered version of the image. Using a numbered version ensures that all the dev boxes in the pool always use the same version of the image. - **Latest image versions**: If you want a flexible dev box definition in which you can update the base image as needs change, use the latest version of the image. This choice ensures that new dev boxes use the most recent version of the image. Existing dev boxes aren't modified when an image version is updated. 
## Create a dev box definition -You can create multiple dev box definitions to meet the needs of your developer teams. +In Microsoft Dev Box, you can create multiple dev box definitions to meet the needs of your developer teams. You associate dev box definitions with a dev center. The following steps show you how to create a dev box definition by using an existing dev center. If you don't have an available dev center, follow the steps in [Quickstart: Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md) to create one. The following steps show you how to create a dev box definition by using an exis ## Update a dev box definition -Over time, your needs for dev boxes will change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so that new dev boxes will use the new configuration. +Over time, your needs for dev boxes can change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so that new dev boxes use the new configuration. You can update the image, image version, compute, and storage settings for a dev box definition: You can update the image, image version, compute, and storage settings for a dev You can delete a dev box definition when you no longer want to use it. Deleting a dev box definition is permanent and can't be undone. Dev box definitions can't be deleted if one or more dev box pools are using them. +To delete a dev box definition in the Azure portal: + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **dev center**. In the list of results, select **Dev centers**. You can delete a dev box definition when you no longer want to use it. Deleting :::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-warning.png" alt-text="Screenshot of the warning message about deleting a dev box definition."::: -## Next steps +## Related content - [Provide access to projects for project admins](./how-to-project-admin.md) - [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md) |
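> Editor's note: dev box definitions can also be created from the CLI. A sketch under stated assumptions: the `devcenter` extension is installed, the image ID points at an image the dev center can access, and the SKU and storage values shown are only illustrative examples:

```azurecli
az devcenter admin devbox-definition create \
    --resource-group <resourceGroupName> \
    --dev-center-name <devCenterName> \
    --name <definitionName> \
    --image-reference id="/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.DevCenter/devcenters/<devCenterName>/galleries/<galleryName>/images/<imageName>" \
    --sku name="general_i_8c32gb256ssd_v2" \
    --os-storage-type ssd_256gb
```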
dev-box | How To Manage Dev Box Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md | Title: Manage a dev box pool -description: Microsoft Dev Box dev box pools are collections of dev boxes that you manage together. Learn how to create, configure, and delete dev box pools. +description: Microsoft Dev Box dev box pools are collections of dev boxes that you manage together. Learn how to create, configure, and delete dev box pools. -# Manage a dev box pool +# Manage a dev box pool in Microsoft Dev Box -To allow developers to create their own dev boxes, you need to set up dev box pools that define the dev box specifications and network connections for new dev boxes. Developers can then create dev boxes from the dev box pools they have access to through their project memberships. +In this article, you learn how to manage a dev box pool in Microsoft Dev Box by using the Azure portal. ++A dev box pool is the collection of dev boxes that have the same settings, such as the dev box definition and network connection. A dev box pool is associated with a Microsoft Dev Box project. ++Developers who have access to the project in the dev center can then choose to create a dev box from a dev box pool. ## Permissions To manage a dev box pool, you need the following permissions: ## Create a dev box pool -A dev box pool is a collection of dev boxes that you manage together. You must have a pool before users can create a dev box. +In Microsoft Dev Box, a dev box pool is a collection of dev boxes that you manage together. You must have at least one dev box pool before users can create a dev box. -The following steps show you how to create a dev box pool that's associated with a project. You'll use an existing dev box definition and network connection in the dev center to configure the pool. +The following steps show you how to create a dev box pool that's associated with a project. You use an existing dev box definition and network connection in the dev center to configure the pool. If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure Microsoft Dev Box ](quickstart-configure-dev-box-service.md) to create them. You can delete a dev box pool when you're no longer using it. > [!CAUTION] > When you delete a dev box pool, all existing dev boxes within the pool are permanently deleted. +To delete a dev box pool in the Azure portal: + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **projects**. In the list of results, select **Projects**. You can delete a dev box pool when you're no longer using it. :::image type="content" source="./media/how-to-manage-dev-box-pools/dev-box-pool-delete-confirm.png" alt-text="Screenshot of the confirmation message for deleting a dev box pool.":::  -## Next steps +## Related content - [Provide access to projects for project admins](./how-to-project-admin.md) - [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition) |
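> Editor's note: as a companion to the portal steps in the entry above, a hedged CLI sketch for creating a pool from an existing dev box definition and network connection (parameter names follow the devcenter extension; all values are placeholders):

```azurecli
az devcenter admin pool create \
    --resource-group <resourceGroupName> \
    --project-name <projectName> \
    --name <poolName> \
    --devbox-definition-name <definitionName> \
    --network-connection-name <networkConnectionName> \
    --local-administrator Enabled \
    --location <region>
```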
dev-box | How To Manage Dev Box Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md | -# Manage a dev box project +# Manage a Microsoft Dev Box project ++In this article, you learn how to manage a Microsoft Dev Box project by using the Azure portal. + A project is the point of access to Microsoft Dev Box for the development team members. A project contains dev box pools, which specify the dev box definitions and network connections used when dev boxes are created. Dev managers can configure the project with dev box pools that specify dev box definitions appropriate for their team's workloads. Dev box users create dev boxes from the dev box pools they have access to through their project memberships. -Each project is associated with a single dev center. When you associate a project with a dev center, all the settings at the dev center level will be applied to the project automatically. +Each project is associated with a single dev center. When you associate a project with a dev center, all the settings at the dev center level are applied to the project automatically. ## Project admins -Microsoft Dev Box makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To provide users permissions to manage projects, add them to the DevCenter Project Admin role. The tasks in this quickstart can be performed by project admins. +Microsoft Dev Box makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To provide users permissions to manage projects, add them to the DevCenter Project Admin role. The tasks in this article can be performed by project admins. To learn how to add a user to the Project Admin role, refer to [Provide access to projects for project admins](how-to-project-admin.md). To manage a dev box project, you need the following permissions: |Manage a dev box within the project|Owner, Contributor, or DevCenter Project Admin.| |Add a dev box user to the project|Owner permissions on the project.| -## Create a dev box project +## Create a Microsoft Dev Box project -The following steps show you how to create and configure a project in dev box. +The following steps show you how to create and configure a Microsoft Dev Box project. 1. In the [Azure portal](https://portal.azure.com), in the search box, type *Projects* and then select **Projects** from the list. The following steps show you how to create and configure a project in dev box. |-|-| |**Subscription**|Select the subscription in which you want to create the project.| |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.|- |**Dev center**|Select the dev center to which you want to associate this project. All the dev center level settings will be applied to the project.| + |**Dev center**|Select the dev center to which you want to associate this project. All the dev center level settings are applied to the project.| |**Name**|Enter a name for your project. | |**Description**|Enter a brief description of the project. | The following steps show you how to create and configure a project in dev box. 1. 
Confirm that the project is created successfully by checking the notifications. Select **Go to resource**. 1. Verify that you see the **Project** page.-## Delete a dev box project -You can delete a dev box project when you're no longer using it. Deleting a project is permanent and cannot be undone. You cannot delete a project that has dev box pools associated with it. ++## Delete a Microsoft Dev Box project ++You can delete a Microsoft Dev Box project when you're no longer using it. Deleting a project is permanent and can't be undone. You can't delete a project that has dev box pools associated with it. 1. Sign in to the [Azure portal](https://portal.azure.com). You can delete a dev box project when you're no longer using it. Deleting a proj :::image type="content" source="./media/how-to-manage-dev-box-projects/confirm-delete-project.png" alt-text="Screenshot of the Delete dev box pool confirmation message."::: -## Provide access to a dev box project +## Provide access to a Microsoft Dev Box project + Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it. 1. Sign in to the [Azure portal](https://portal.azure.com). Before users can create dev boxes based on the dev box pools in a project, you m :::image type="content" source="media/how-to-manage-dev-box-projects/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane."::: -The user will now be able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). +The user is now able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). -To assign administrative access to a project, select the DevCenter Project Admin role. For more details on how to add a user to the Project Admin role, refer to [Provide access to projects for project admins](how-to-project-admin.md). +To assign administrative access to a project, select the DevCenter Project Admin role. For more information on how to add a user to the Project Admin role, see [Provide access to projects for project admins](how-to-project-admin.md). ## Next steps |
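> Editor's note: for completeness, a minimal sketch of creating a project from the CLI; it assumes the project is linked to an existing dev center by resource ID, mirroring the portal form described in the entry above:

```azurecli
az devcenter admin project create \
    --resource-group <resourceGroupName> \
    --name <projectName> \
    --location <region> \
    --description "Dev boxes for the web team" \
    --dev-center-id "/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.DevCenter/devcenters/<devCenterName>"
```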
dev-box | How To Manage Dev Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md | +In this article, you learn how to manage a dev center in Microsoft Dev Box by using the Azure portal. + Development teams vary in the way they function and might have different needs. A dev center helps you manage these scenarios by enabling you to group similar sets of projects together and apply similar settings. ## Permissions To manage a dev center, you need the following permissions: ## Create a dev center -Your development teams' requirements change over time. You can create a new dev center to support organizational changes like a new business requirement or a new regional center. You can create as many or as few dev centers as you need, depending on how you organize and manage your development teams. +Your development teams' requirements change over time. You can create a new dev center in Microsoft Dev Box to support organizational changes like a new business requirement or a new regional center. ++You can create as many or as few dev centers as you need, depending on how you organize and manage your development teams. -To create a dev center: +To create a dev center in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). To create a dev center: ## Delete a dev center -You might choose to delete a dev center to reflect organizational or workload changes. Deleting a dev center is irreversible, and you must prepare for the deletion carefully. +You might choose to delete a dev center to reflect organizational or workload changes. Deleting a dev center in Microsoft Dev Box is irreversible, and you must prepare for the deletion carefully. A dev center can't be deleted while any projects are associated with it. You must delete the projects before you can delete the dev center.+ Attached network connections and their associated virtual networks are not deleted when you delete a dev center. When you're ready to delete your dev center, follow these steps: When you're ready to delete your dev center, follow these steps: You can attach existing network connections to a dev center. You must attach a network connection to a dev center before you can use it in projects to create dev box pools. +Network connections enable dev boxes to connect to existing virtual networks. The location, or Azure region, of the network connection determines where associated dev boxes are hosted. ++To attach a network connection to a dev center in Microsoft Dev Box: + 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **dev centers**. In the list of results, select **Dev centers**. To make role assignments: | **Assign access to** | Select **User, group, or service principal**. | | **Members** | Select the users or groups that you want to be able to access the dev center. | -## Next steps +## Related content - [Provide access to projects for project admins](./how-to-project-admin.md) - [3. Create a dev box definition](quickstart-configure-dev-box-service.md#3-create-a-dev-box-definition) |
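> Editor's note: a hedged CLI sketch of the two dev center operations the entry above covers, creating a dev center and attaching an existing network connection. The `attached-network` parameter names are from the devcenter extension and may vary by version; verify against `az devcenter admin attached-network create --help`:

```azurecli
az devcenter admin devcenter create \
    --resource-group <resourceGroupName> \
    --name <devCenterName> \
    --location <region>

az devcenter admin attached-network create \
    --resource-group <resourceGroupName> \
    --dev-center-name <devCenterName> \
    --attached-network-connection-name <attachedNetworkName> \
    --network-connection-id "/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.DevCenter/networkConnections/<networkConnectionName>"
```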
dev-box | How To Project Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md | Title: Provide administrative access to Microsoft Dev Box projects -description: Learn how to manage multiple Dev Box projects by assigning admin permissions and delegating project administration. + Title: Grant admin access to dev box projects +description: Learn how to manage multiple Microsoft Dev Box projects by granting admin permissions and delegating project administration. Last updated 04/25/2023 -# Provide administrative access to Dev Box projects for project admins +# Grant administrative access to Microsoft Dev Box projects ++In this article, you learn how to grant project administrators access to perform administrative tasks on Microsoft Dev Box projects. Microsoft Dev Box uses Azure role-based access control (Azure RBAC) to grant access to functionality in the service. You can create multiple Microsoft Dev Box projects in the dev center to align with each team's specific requirements. By using the built-in DevCenter Project Admin role, you can delegate project administration to a member of a team. Project admins can use the network connections and dev box definitions configured at the dev center level to create and manage dev box pools within their project. A DevCenter Project Admin can manage a project by: ## Assign permissions to project admins +To grant a user project admin permission in Microsoft Dev Box, you assign the DevCenter Project Admin role at the project level. + Use the following steps to assign the DevCenter Project Admin role: 1. Sign in to the [Azure portal](https://portal.azure.com). The users can now manage the project and create dev box pools within it. [!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)] -## Next steps +## Related content - [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md) |
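> Editor's note: the Project Admin assignment follows the same pattern as the Dev Box User assignment sketched earlier; only the role name changes:

```azurecli
az role assignment create \
    --assignee <userOrGroupObjectId> \
    --role "DevCenter Project Admin" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DevCenter/projects/<projectName>"
```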
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | Title: 'Quickstart: Configure Microsoft Dev Box' -description: In this quickstart, you learn how to configure the Microsoft Dev Box service to provide dev box workstations for users. +description: In this quickstart, you set up the Microsoft Dev Box resources to enable developers to self-service a cloud-based dev box. Create a dev center, dev box definition, and dev box pool. Last updated 04/25/2023 # Quickstart: Configure Microsoft Dev Box -This quickstart describes how to set up Microsoft Dev Box to enable development teams to self-serve their dev boxes. A dev box is a virtual machine (VM) preconfigured with the tools and resources the developer needs for a project. A dev box acts as a day-to-day workstation for the developer. +In this quickstart, you'll set up all the resources in Microsoft Dev Box to enable development teams to self-service their dev boxes. Learn how to create and configure a dev center, specify a dev box definition, and create a dev box pool. After you complete this quickstart, developers can use the developer portal to create and connect to a dev box. ++A dev box acts as a day-to-day cloud-based workstation for the developer. A dev box is a virtual machine (VM) preconfigured with the tools and resources the developer needs for a project. The process of setting up Microsoft Dev Box involves two distinct phases. In the first phase, platform engineers configure the necessary Microsoft Dev Box resources through the Azure portal. After this phase is complete, users can proceed to the next phase, creating and managing their dev boxes through the developer portal. This quickstart shows you how to complete the first phase. The following graphic shows the steps required to configure Microsoft Dev Box in First, you create a dev center and a project to organize your dev box resources. Next, you configure network components to enable dev boxes to connect to your organizational resources. Then, you create a dev box definition that is used to create dev boxes. After that, you create a dev box pool to define the network connection and dev box definition that dev boxes use. Users who have access to a project can create dev boxes from the pools associated with that project. -After you complete this quickstart, you'll have Microsoft Dev Box set up ready for users to create and connect to dev boxes. - If you already have a Microsoft Dev Box configured and you want to learn how to create and connect to dev boxes, refer to: [Quickstart: Create a dev box by using the developer portal](quickstart-create-dev-box.md). ## Prerequisites To complete this quickstart, you need: - If your organization routes egress traffic through a firewall, open the appropriate ports. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). ## 1. Create a dev center +To get started with Microsoft Dev Box, you first create a dev center. A dev center in Microsoft Dev Box provides a centralized place to manage the collection of projects, the configuration of available dev box images and sizes, and the networking settings to enable access to organizational resources. + Use the following steps to create a dev center so that you can manage your dev box resources: 1. Sign in to the [Azure portal](https://portal.azure.com). Because you're not configuring Deployment Environments, you can safely ignore th ## 2. 
Configure a network connection -Network connections determine the region in which dev boxes are deployed. They also allow dev boxes to be connected to your existing virtual networks. The following steps show you how to create and configure a network connection in Microsoft Dev Box. +To determine in which Azure region the developer workstations are hosted, you need to add at least one network connection to the dev center in Microsoft Dev Box. The dev boxes are hosted in the region that is associated with the network connection. The network connection also enables you to connect to existing virtual networks or resources hosted on-premises from within a dev box. ++The following steps show you how to create and configure a network connection in Microsoft Dev Box. ### Create a virtual network and subnet After you attach a network connection, the Azure portal runs several health chec To resolve any errors, see [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection). -Dev boxes automatically register with Microsoft Intune when they're created. If your network connection displays a warning for the **Intune Enrollment Restrictions Allow Windows Enrollment** test, check the Intune Windows platform restriction policy, as it may block you from provisioning. +Dev boxes automatically register with Microsoft Intune when they're created. If your network connection displays a warning for the **Intune Enrollment Restrictions Allow Windows Enrollment** test, check the Intune Windows platform restriction policy, as it might block you from provisioning. :::image type="content" source="media/quickstart-configure-dev-box-service/network-connection-intune-warning.png" alt-text="Intune warning"::: To learn more, see [Step 5 – Enroll devices in Microsoft Intune: Windows enrol ## 3. Create a dev box definition -Dev box definitions define the image and SKU (compute + storage) that's used in the creation of the dev boxes. To create and configure a dev box definition: +Next, you create a dev box definition in the dev center. A dev box definition defines the VM image and the VM SKU (compute size + storage) that's used in the creation of the dev boxes. Depending on the type of development project or developer profiles, you can create multiple dev box definitions. For example, some developers might need a specific developer tool set, whereas others need a cloud workstation that has more compute resources. ++The dev box definitions you create in a dev center are available for all projects associated with that dev center. You need to add at least one dev box definition to your dev center. ++To create and configure a dev box definition for your dev center: 1. Open the dev center in which you want to create the dev box definition. Dev box definitions define the image and SKU (compute + storage) that's used in |Name|Value|Note| |-|-|-|- |**Name**|Enter a descriptive name for your dev box definition.| + |**Name**|Enter a descriptive name for your dev box definition.| | |**Image**|Select the base operating system for the dev box. You can select an image from Azure Marketplace or from Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To access custom images when you create a dev box definition, you can use Azure Compute Gallery. 
For more information, see [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).| |**Image version**|Select a specific, numbered version to ensure that all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure that new dev boxes use the latest image available.|Selecting the **Latest** image version enables the dev box pool to use the most recent version of your chosen image from the gallery. This way, the created dev boxes stay up to date with the latest tools and code for your image. Existing dev boxes aren't modified when an image version is updated.| |**Compute**|Select the compute combination for your dev box definition.|| Dev box definitions define the image and SKU (compute + storage) that's used in ## 4. Create a dev box pool -A dev box pool is a collection of dev boxes that have similar settings. Dev box pools specify the dev box definitions and network connections that dev boxes use. You must associate at least one pool with your project before users can create a dev box. +Now that you've defined a network connection and dev box definition in your dev center, you can create a dev box pool in the project. A dev box pool is the collection of dev boxes that have the same settings, such as the dev box definition and network connection. Developers who have access to the project in the dev center can then choose to create a dev box from a dev box pool. ++You must associate at least one dev box pool with your project before users can create a dev box. To create a dev box pool that's associated with a project: The Azure portal deploys the dev box pool and runs health checks to ensure that ## 5. Provide access to a dev box project -Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it. +Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You grant access for the user at the level of the project. ++You must have sufficient permissions to a project before you can add users to it. To assign roles: You can assign the DevCenter Project Admin role by using the steps described ear [!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)] -## Next steps +## Next step In this quickstart, you configured the Microsoft Dev Box resources that are required to enable users to create their own dev boxes. To learn how to create and connect to a dev box, advance to the next quickstart: |
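> Editor's note: step 2 of the quickstart above creates the network connection in the portal. A hedged CLI sketch of the same resource, assuming a Microsoft Entra (Azure AD) join type and an existing subnet:

```azurecli
az devcenter admin network-connection create \
    --resource-group <resourceGroupName> \
    --name <networkConnectionName> \
    --location <region> \
    --domain-join-type AzureADJoin \
    --subnet-id "/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
```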
dev-box | Quickstart Create Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md | Title: 'Quickstart: Create a dev box' -description: In this quickstart, you learn how to create a dev box and connect to it through a browser. +description: In this quickstart, learn how developers can create a dev box in the Microsoft Dev Box developer portal, and remotely connect to it through the browser. Last updated 09/12/2023 #Customer intent: As a dev box user, I want to understand how to create and access a dev box so that I can start work. -# Quickstart: Create a dev box by using the developer portal +# Quickstart: Create and connect to a dev box by using the Microsoft Dev Box developer portal In this quickstart, you get started with Microsoft Dev Box by creating a dev box through the developer portal. After you create the dev box, you can connect to it with a Remote Desktop session through a browser or through a Remote Desktop app. -You can create and manage multiple dev boxes as a dev box user. Create a dev box for each task that you're working on, and create multiple dev boxes within a single project to help streamline your workflow. +You can create and manage multiple dev boxes as a dev box user. Create a dev box for each task that you're working on, and create multiple dev boxes within a single project to help streamline your workflow. For example, you might switch to another dev box to fix a bug in a previous version, or if you need to work on a different part of the application. ## Prerequisites To complete this quickstart, you need: ## Create a dev box -1. Sign in to the [developer portal](https://aka.ms/devbox-portal). +Microsoft Dev Box enables you to create cloud-hosted developer workstations in a self-service way. You can create and manage dev boxes by using the developer portal. ++Depending on the project configuration and your permissions, you have access to different projects and associated dev box configurations. ++To create a dev box in the Microsoft Dev Box developer portal: ++1. Sign in to the [Microsoft Dev Box developer portal](https://aka.ms/devbox-portal). 2. Select **Get started**. To complete this quickstart, you need: ## Connect to a dev box -After you create a dev box, one way to access it quickly is through a browser: +After you create a dev box, you can connect remotely to the developer VM. Microsoft Dev Box supports connecting to a dev box in the following ways: ++- Connect through the browser from within the developer portal +- Connect by using a remote desktop client application ++To connect to a dev box by using the browser: 1. Sign in to the [developer portal](https://aka.ms/devbox-portal). After you create a dev box, one way to access it quickly is through a browser: :::image type="content" source="./media/quickstart-create-dev-box/dev-portal-open-in-browser.png" alt-text="Screenshot of dev box card that shows the option for opening in a browser."::: -A new tab opens with a Remote Desktop session through which you can use your dev box. Use a work or school account to log in to your dev box, not a personal Microsoft account. +A new tab opens with a Remote Desktop session through which you can use your dev box. Use a work or school account to sign in to your dev box, not a personal Microsoft account. 
## Clean up resources When you no longer need your dev box, you can delete it: [!INCLUDE [dev box runs on creation note](./includes/clean-up-resources.md)] -## Next steps +## Related content In this quickstart, you created a dev box through the developer portal and connected to it by using a browser. -To learn how to connect to a dev box by using a Remote Desktop app, see [Tutorial: Use a Remote Desktop client to connect to a dev box](./tutorial-connect-to-dev-box-with-remote-desktop-app.md). +- Learn how to [connect to a dev box by using a Remote Desktop app](./tutorial-connect-to-dev-box-with-remote-desktop-app.md) |
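> Editor's note: the developer-portal creation flow in the entry above has a CLI counterpart. A minimal sketch; pool, project, and dev center names are placeholders:

```azurecli
az devcenter dev dev-box create \
    --dev-center-name <devCenterName> \
    --project-name <projectName> \
    --pool-name <poolName> \
    --name <devBoxName> \
    --user-id "me"
```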
dev-box | Tutorial Connect To Dev Box With Remote Desktop App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md | Title: 'Tutorial: Use a Remote Desktop client to connect to a dev box' -description: In this tutorial, you learn how to download a Remote Desktop client and connect to your dev box. You also learn how to configure a dev box to use multiple monitors during a remote desktop session. +description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box. Configure the RDP client for a multi-monitor setup. Last updated 09/11/2023 -# Tutorial: Use a Remote Desktop client to connect to a dev box +# Tutorial: Use a remote desktop client to connect to a dev box -After you configure the Microsoft Dev Box service and create dev boxes, you can connect to them by using a browser or by using a Remote Desktop client. +In this tutorial, you'll download and use a remote desktop client application to connect to a dev box in Microsoft Dev Box. Learn how to configure the application to take advantage of a multi-monitor setup. Remote Desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a Remote Desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android. +Alternatively, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal. + In this tutorial, you learn how to: > [!div class="checklist"] To complete this tutorial, you must first: - [Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md). - [Create a dev box](./quickstart-create-dev-box.md#create-a-dev-box) on the [developer portal](https://aka.ms/devbox-portal). -## Download the client and connect to your dev box +## Download the remote desktop client and connect to your dev box ++You can use a remote desktop client application to connect to your dev box in Microsoft Dev Box. Remote Desktop clients are available for many operating systems and devices. -Remote Desktop clients are available for many operating systems and devices. In this tutorial, you can view the steps for Windows or the steps for a non-Windows operating system by selecting the relevant tab. +Select the relevant tab to view the steps to download and use the remote desktop client application from Windows or non-Windows operating systems. # [Windows](#tab/windows) To use a non-Windows Remote Desktop client to connect to your dev box: ## Configure Remote Desktop to use multiple monitors -Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors. Use the following steps to configure Remote Desktop to use multiple monitors. +When you connect to your cloud-hosted developer machine in Microsoft Dev Box, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors. ++Use the following steps to configure Remote Desktop to use multiple monitors. # [Windows](#tab/windows) Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both s |Single display |Remote desktop uses a single display. | |Select displays |Remote Desktop uses only the monitors you select. 
| - :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-select-display.png" alt-text="Screenshot showing the remote desktop display settings. "::: + :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-select-display.png" alt-text="Screenshot showing the remote desktop display settings, highlighting the option to select the number of displays."::: 1. Close the settings pane, and then select your dev box to begin the remote desktop session. The dev box might take a few moments to stop. ## Related content -To learn about managing your dev box, see: - - [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)+- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box) |
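> Editor's note: recent versions of the devcenter CLI extension also expose the connection URLs directly, which can substitute for copying them from the portal. Treat the command name as an assumption and verify it against your extension version:

```azurecli
# Returns connection URLs for the dev box (for example, a web URL for the
# browser client and an RDP URL for the Remote Desktop client).
az devcenter dev dev-box show-remote-connection \
    --dev-center-name <devCenterName> \
    --project-name <projectName> \
    --user-id "me" \
    --name <devBoxName>
```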
governance | Definition Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md | within a policy rule, except the following functions and user-defined functions: - deployment() - environment() - extensionResourceId()+- [lambda()](../../../azure-resource-manager/templates/template-functions-lambda.md) - listAccountSas() - listKeys() - listSecrets() |
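> Editor's note: the change above adds `lambda()` to the list of template functions that are *not* available inside a policy rule. Functions outside that list, such as `parameters()` and `concat()`, remain usable; a minimal, hypothetical policy rule fragment for illustration (the `namePrefix` parameter is assumed to be defined in the policy definition):

```json
{
  "if": {
    "field": "name",
    "notLike": "[concat(parameters('namePrefix'), '*')]"
  },
  "then": {
    "effect": "audit"
  }
}
```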
governance | Samples By Category | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md | Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 09/01/2023 Last updated : 10/27/2023 -categories. To jump to a specific **category**, use the menu on the right side of the page. +categories. To jump to a specific **category**, use the links on the top of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature. ## Azure Advisor Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature [!INCLUDE [azure-resource-graph-samples-cat-azure-monitor](../../../../includes/resource-graph/samples/bycat/azure-monitor.md)] ++ ## Azure Policy [!INCLUDE [azure-resource-graph-samples-cat-azure-policy](../../../../includes/resource-graph/samples/bycat/azure-policy.md)] |
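> Editor's note: any of the sample queries collected in the entries above and below can be run with the Azure CLI's resource-graph extension. A small sketch with a generic starter query:

```azurecli
az extension add --name resource-graph

# Count resources by type across the subscriptions you can see.
az graph query -q "Resources | summarize count() by type | order by count_ desc"
```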
governance | Samples By Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md | Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 09/01/2023 Last updated : 10/27/2023 -specific **table**, use the menu on the right side of the page. Otherwise, use +specific **table**, use the links on the top of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature. For a list of tables and related details, see [Resource Graph tables](../concepts/query-language.md#resource-graph-tables). details, see [Resource Graph tables](../concepts/query-language.md#resource-grap [!INCLUDE [Azure-resource-graph-samples-table-healthresourcechanges](../../../../includes/resource-graph/samples/bytable/healthresourcechanges.md)] +## InsightResources ++ ## IoT Defender [!INCLUDE [azure-resource-graph-samples-table-iot-defender](../../../../includes/resource-graph/samples/bytable/iot-defender.md)] details, see [Resource Graph tables](../concepts/query-language.md#resource-grap [!INCLUDE [virtual-machine-basic-sku-public-ip](../../includes/resource-graph/query/virtual-machine-basic-sku-public-ip.md)] + ## SecurityResources [!INCLUDE [azure-resource-graph-samples-table-securityresources](../../../../includes/resource-graph/samples/bytable/securityresources.md)] |
hdinsight-aks | Trademarks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trademarks.md | Product names, logos and other material used on this Azure HDInsight on AKS lear - Apache, Apache Kafka, Kafka and the Kafka logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF). - Apache, Apache Flink, Flink and the Flink logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF). - Apache HBase, HBase and the HBase logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).-- Apache®, Apache Spark™, Apache HBase®, Apache Kafka®, and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. The use of these marks does not imply endorsement by The Apache Software Foundation.+- Apache, Apache Cassandra, Cassandra and the Cassandra logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF). +- Apache®, Apache Spark™, Apache HBase®, Apache Kafka®, Apache Cassandra®, and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. The use of these marks does not imply endorsement by The Apache Software Foundation. |
hdinsight-aks | Trino Add Catalogs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-catalogs.md | Title: Configure catalogs in Azure HDInsight on AKS description: Add catalogs to an existing Trino cluster in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure catalogs Last updated 08/29/2023 [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] Every Trino cluster comes by default with few catalogs - system, tpcds, `tpch`. You can add your own catalogs same way you would do with OSS Trino. -In addition, HDInsight on AKS Trino allows storing secrets in Key Vault so you don't have to specify them explicitly in ARM template. +In addition, Trino with HDInsight on AKS allows storing secrets in Key Vault so you don't have to specify them explicitly in ARM template. You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal. This article demonstrates how you can add a new catalog to your cluster using AR ## Prerequisites -* An operational HDInsight on AKS Trino cluster. +* An operational Trino cluster with HDInsight on AKS. * Azure SQL database. * Azure SQL server login/password are stored in the Key Vault secrets and user-assigned MSI attached to your Trino cluster granted permissions to read them. Refer [store credentials in Key Vault and assign role to MSI](../prerequisites-resources.md#create-azure-key-vault). * Create [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. This article demonstrates how you can add a new catalog to your cluster using AR |files|List of Trino catalog files to be added to the cluster.| |filename|List of Trino catalog files to be added to the cluster.| |content|`json` escaped string to put into trino catalog file. This string should contain all trino-specific catalog properties, which depend on type of connector used. For more information, see OSS trino documentation.|- |${SECRET_REF:\<referenceName\>}|Special tag to reference a secret from secretsProfile. HDInsight on AKS Trino at runtime fetches the secret from Key Vault and substitutes it in catalog configuration.| + |${SECRET_REF:\<referenceName\>}|Special tag to reference a secret from secretsProfile. Trino at runtime fetches the secret from Key Vault and substitutes it in catalog configuration.| |values|It's possible to specify catalog configuration using content property as single string, and using separate key-value pairs for each individual Trino catalog property as shown for memory catalog.| Deploy the updated ARM template to reflect the changes in your cluster. Learn how to [deploy an ARM template](/azure/azure-resource-manager/templates/deploy-portal). |
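> Editor's note: to make the property table above concrete, here's a hypothetical catalog file entry as it might appear inside the cluster ARM template. The `filename`/`values` keys mirror the table; the SQL Server connector settings and the `sqlUser`/`sqlPassword` secret reference names are illustrative assumptions, not the article's exact schema:

```json
{
  "filename": "sqlserver.properties",
  "values": {
    "connector.name": "sqlserver",
    "connection-url": "jdbc:sqlserver://<serverName>.database.windows.net:1433;database=<databaseName>;encrypt=true",
    "connection-user": "${SECRET_REF:sqlUser}",
    "connection-password": "${SECRET_REF:sqlPassword}"
  }
}
```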
hdinsight-aks | Trino Add Delta Lake Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-delta-lake-catalog.md | Title: Configure Delta Lake catalog description: How to configure Delta Lake catalog in a Trino cluster. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure Delta Lake catalog [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -This article provides an overview of how to configure Delta Lake catalog in your HDInsight on AKS Trino cluster. You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal. +This article provides an overview of how to configure a Delta Lake catalog in your Trino cluster with HDInsight on AKS. You can add a new catalog by updating your cluster ARM template, except for the Hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal. ## Prerequisites |
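A minimal sketch of a Delta Lake catalog file entry for the ARM template, as a Python literal; the metastore URI is a placeholder, and `delta_lake` is the OSS Trino Delta Lake connector name.

```python
import json

# Hypothetical "delta.properties" catalog file entry; the metastore URI
# is a placeholder for your cluster's Hive metastore endpoint.
delta_catalog = {
    "fileName": "delta.properties",
    "values": {
        "connector.name": "delta_lake",
        "hive.metastore.uri": "thrift://hive-metastore:9083",
    },
}

print(json.dumps(delta_catalog, indent=2))
```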
hdinsight-aks | Trino Add Iceberg Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-iceberg-catalog.md | Title: Configure Iceberg catalog description: How to configure iceberg catalog in a Trino cluster. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure Iceberg catalog [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -This article provides an overview of how to configure Iceberg catalog in HDInsight on AKS Trino cluster. You can add a new catalog by updating your cluster ARM template except the hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal. +This article provides an overview of how to configure an Iceberg catalog in your Trino cluster with HDInsight on AKS. You can add a new catalog by updating your cluster ARM template, except for the Hive catalog, which you can add during [Trino cluster creation](./trino-create-cluster.md) in the Azure portal. ## Prerequisites |
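And the analogous sketch for an Iceberg catalog file entry; `iceberg` is the OSS Trino Iceberg connector name, `hive_metastore` is its Hive-backed catalog type, and the metastore URI is again a placeholder.

```python
import json

# Hypothetical "iceberg.properties" catalog file entry; the metastore URI
# is a placeholder for your cluster's Hive metastore endpoint.
iceberg_catalog = {
    "fileName": "iceberg.properties",
    "values": {
        "connector.name": "iceberg",
        "iceberg.catalog.type": "hive_metastore",
        "hive.metastore.uri": "thrift://hive-metastore:9083",
    },
}

print(json.dumps(iceberg_catalog, indent=2))
```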
hdinsight-aks | Trino Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-airflow.md | Title: Use Airflow with Trino cluster -description: How to create Airflow DAG connecting to Azure HDInsight on AKS Trino + Title: Use Apache Airflow with Trino cluster +description: How to create an Apache Airflow DAG to connect to a Trino cluster with HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 -# Use Airflow with Trino cluster +# Use Apache Airflow™ with Trino cluster [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -This article demonstrates how to configure available open-source [Airflow Trino provider](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/index.html) to connect to HDInsight on AKS Trino cluster. -The objective is to show how you can connect Airflow to HDInsight on AKS Trino considering main steps as obtaining access token and running query. +This article demonstrates how to configure the available open-source [Apache Airflow Trino provider](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/index.html) to connect to your Trino cluster with HDInsight on AKS. +The objective is to show how you can connect Airflow to Trino with HDInsight on AKS, covering the main steps: obtaining an access token and running a query. ## Prerequisites -* An operational HDInsight on AKS Trino cluster. +* An operational Trino cluster with HDInsight on AKS. * Airflow cluster. * Azure service principal client ID and secret to use for authentication. * [Allow access to the service principal to Trino cluster](../hdinsight-on-aks-manage-authorization-profile.md). Now let's create a simple DAG performing those steps. The complete code is as follows: 1. Copy the [following code](#example-code) and save it in $AIRFLOW_HOME/dags/example_trino.py, so Airflow can discover the DAG. 1. Update the script, entering your Trino cluster endpoint and authentication details.-1. Trino endpoint (`trino_endpoint`) - HDInsight on AKS Trino cluster endpoint from Overview page in the Azure portal. +1. Trino endpoint (`trino_endpoint`) - Trino cluster endpoint from the Overview page in the Azure portal. 1. Azure Tenant ID (`azure_tenant_id`) - Identifier of your Azure tenant, which can be found in the Azure portal. 1. Service Principal Client ID - Client ID of an application or service principal to use in Airflow for authentication to your Trino cluster. 1. Service Principal Secret - Secret for the service principal.-1. Pay attention to connection properties, which configure JWT authentication type, https and port. These values are required to connect to HDInsight on AKS Trino cluster. +1. Pay attention to the connection properties, which configure the JWT authentication type, HTTPS, and port. These values are required to connect to your Trino cluster. > [!NOTE] > Give access to the service principal ID (object ID) to your Trino cluster. Follow the steps to [grant access](../hdinsight-on-aks-manage-authorization-profile.md). After restarting Airflow, find and run the example_trino DAG. Results of the sample > For production scenarios, you should choose to handle connection and secrets differently, using Airflow secrets management. ## Next steps-This example demonstrates basic steps required to connect Airflow to HDInsight on AKS Trino. Main steps are obtaining access token and running query. 
+This example demonstrates the basic steps required to connect Airflow to Trino with HDInsight on AKS: obtaining an access token and running a query. ## See also * [Getting started with Airflow](https://airflow.apache.org/docs/apache-airflow/stable/start.html) * [Airflow Trino provider](https://airflow.apache.org/docs/apache-airflow-providers-trino/stable/index.html) * [Airflow secrets](https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/index.html)-* [HDInsight on AKS Trino authentication](./trino-authentication.md) +* [Trino authentication for HDInsight on AKS](./trino-authentication.md) * [MSAL for Python](/entra/msal/python) |
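The article's full example_trino.py isn't reproduced in this digest, so here's a condensed sketch of the two steps it describes (token, then query), assuming Airflow 2.x plus the `msal` and `trino` Python packages. The tenant, client, endpoint, and scope values are placeholders, and the endpoint host format is a hypothetical stand-in for the value shown on your cluster's Overview page.

```python
from datetime import datetime

import msal
import trino
from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder values; substitute your own tenant, service principal, and endpoint.
AZURE_TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-client-id>"
CLIENT_SECRET = "<service-principal-secret>"  # prefer Airflow secrets management in production
TRINO_ENDPOINT = "<trino-cluster-endpoint>"   # from the cluster Overview page
SCOPE = "<trino-cluster-scope>/.default"      # the scope exposed for your cluster


def run_trino_query():
    # Step 1: obtain a JWT access token from Microsoft Entra ID (client credentials flow).
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{AZURE_TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    token = app.acquire_token_for_client(scopes=[SCOPE])["access_token"]

    # Step 2: connect over HTTPS on port 443 with JWT authentication and run a query.
    conn = trino.dbapi.connect(
        host=TRINO_ENDPOINT,
        port=443,
        http_scheme="https",
        auth=trino.auth.JWTAuthentication(token),
    )
    cur = conn.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchall())


with DAG(dag_id="example_trino", start_date=datetime(2023, 10, 1), schedule=None) as dag:
    PythonOperator(task_id="run_query", python_callable=run_trino_query)
```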
hdinsight-aks | Trino Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-authentication.md | Title: Client authentication description: How to authenticate to Trino cluster Previously updated : 08/29/2023 Last updated : 10/19/2023 # Authentication mechanism [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -Azure HDInsight on AKS Trino provides tools such as CLI client, JDBC driver etc., to access the cluster, which is integrated with Microsoft Entra ID to simplify the authentication for users. +Trino with HDInsight on AKS provides tools such as the CLI client and JDBC driver to access the cluster, which is integrated with Microsoft Entra ID to simplify authentication for users. Supported tools or clients need to authenticate using Microsoft Entra ID OAuth2 standards; that is, a JWT access token issued by Microsoft Entra ID must be provided to the cluster endpoint. This section describes common authentication flows supported by the tools. |
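The specific flows aren't quoted in this digest, but a minimal sketch of the general pattern, an interactive user sign-in that yields the JWT the cluster endpoint expects, might look like the following, assuming the `msal` package. The client ID, tenant ID, and scope are placeholders that depend on your app registration.

```python
import msal

# Placeholder values; the required client ID, tenant, and scope depend on
# your Entra ID app registration and cluster configuration.
app = msal.PublicClientApplication(
    "<client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# Opens a browser for interactive sign-in and returns an OAuth2 token response.
result = app.acquire_token_interactive(scopes=["<trino-cluster-scope>/.default"])

# The resulting JWT access token is what the CLI client, JDBC driver, or other
# supported clients must present to the cluster endpoint.
print(result["access_token"][:20], "...")
```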
hdinsight-aks | Trino Caching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-caching.md | Title: Configure caching description: Learn how to configure caching in Trino Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure caching Last updated 08/29/2023 Querying object storage using the Hive connector is a common use case for Trino. This process often involves sending large amounts of data. Objects are retrieved from HDFS or another supported object store by multiple workers and processed by those workers. Repeated queries with different parameters, or even different queries from different users, often access and transfer the same objects. -HDInsight on AKS Trino has added **final result caching** capability, which provides the following benefits: +HDInsight on AKS has added **final result caching** capability for Trino, which provides the following benefits: * Reduced load on object storage. * Improved query performance. Available configuration parameters are: |`query.cache.max-result-data-size`|0|Max data size for a result. If this value is exceeded, the result isn't cached.| > [!NOTE]-> Final result caching is using query plan and ttl as a cache key. +> Final result caching uses the query plan and TTL as the cache key. -Final result caching can also be controlled through the following session parameters: +### Final result caching can also be controlled through the following session parameters: |Session parameter|Default|Description| |||| Final result caching can also be controlled through the following session parame #### Prerequisites -* An operational HDInsight on AKS Trino cluster. +* An operational Trino cluster with HDInsight on AKS. * Create an [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review the complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview). |
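A sketch of how the cache settings might appear in a coordinator config entry in the ARM template, as a Python literal. Only `query.cache.max-result-data-size` appears in the excerpt above; the enabled and TTL property names are assumptions inferred from the note about the TTL, so verify them against the article before use.

```python
import json

# Sketch of a coordinator config entry enabling final result caching; only
# query.cache.max-result-data-size is documented in the excerpt above.
cache_settings = {
    "component": "coordinator",
    "files": [
        {
            "fileName": "config.properties",
            "values": {
                "query.cache.enabled": "true",   # assumed property name
                "query.cache.ttl": "10m",        # assumed property name
                "query.cache.max-result-data-size": "10MB",
            },
        }
    ],
}

print(json.dumps(cache_settings, indent=2))
```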
hdinsight-aks | Trino Catalog Glue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-catalog-glue.md | Title: Query data from AWS S3 and with Glue -description: How to configure HDInsight on AKS Trino catalogs with Glue as metastore + Title: Query data from AWS S3 and with AWS Glue +description: How to configure Trino catalogs for HDInsight on AKS with AWS Glue as metastore Previously updated : 08/29/2023 Last updated : 10/19/2023 -# Query data from AWS S3 and with Glue +# Query data from AWS S3 using AWS Glue [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -This article provides examples of how you can add catalogs to an HDInsight on AKS Trino cluster where catalogs are using AWS Glue as metastore and S3 as storage. +This article provides examples of how you can add catalogs to a Trino cluster with HDInsight on AKS where the catalogs use AWS Glue as the metastore and AWS S3 as storage. ## Prerequisites -* [Understanding of HDInsight on AKS Trino cluster configurations](./trino-service-configuration.md). +* [Understanding of Trino cluster configurations for HDInsight on AKS](./trino-service-configuration.md). * [How to add catalogs to an existing cluster](./trino-add-catalogs.md). * [AWS account with Glue and S3](./trino-catalog-glue.md#quickstart-with-aws-glue-and-s3). -## Trino catalogs with S3 and Glue as metastore +## Trino catalogs with AWS S3 and AWS Glue as metastore Several Trino connectors support AWS Glue. More details on Glue catalog configuration properties can be found in the [Trino documentation](https://trino.io/docs/410/connector/hive.html#aws-glue-catalog-configuration-properties). Refer to [Quickstart with AWS Glue and S3](./trino-catalog-glue.md#quickstart-with-aws-glue-and-s3) for setting up AWS resources. Refer to [Quickstart with AWS Glue and S3](./trino-catalog-glue.md#quickstart-wi ### Add Hive catalog -You can add the following sample JSON in your HDInsight on AKS Trino cluster under `clusterProfile` section in the ARM template. +You can add the following sample JSON in your Trino cluster under the `clusterProfile` section in the ARM template. <br>Update the values as per your requirement. ```json You can add the following sample JSON in your HDInsight on AKS Trino cluster und ### Add Delta Lake catalog -You can add the following sample JSON in your HDInsight on AKS Trino cluster under `clusterProfile` section in the ARM template. +You can add the following sample JSON in your Trino cluster under the `clusterProfile` section in the ARM template. <br>Update the values as per your requirement. ```json You can add the following sample JSON in your HDInsight on AKS Trino cluster und ``` ### Add Iceberg catalog-You can add the following sample JSON in your HDInsight on AKS Trino cluster under `clusterProfile` section in the ARM template. +You can add the following sample JSON in your Trino cluster under the `clusterProfile` section in the ARM template. <br>Update the values as per your requirement. ```json Catalog examples in the previous code refer to access keys stored as secrets in ## Quickstart with AWS Glue and S3 ### 1. Create AWS user and save access keys to Azure Key Vault.-Use existing or create new user in AWS IAM - this user is used by Trino connector to read data from Glue/S3. Create and retrieve access keys on Security Credentials tab and save them as secrets into [Azure Key Vault](/azure/key-vault/secrets/about-secrets) linked to your HDInsight on AKS Trino cluster. 
Refer to [Add catalogs to existing cluster](./trino-add-catalogs.md) for details on how to link Key Vault to your Trino cluster. +Use an existing user or create a new user in AWS IAM; this user is used by the Trino connector to read data from Glue/S3. Create and retrieve access keys on the Security Credentials tab and save them as secrets into [Azure Key Vault](/azure/key-vault/secrets/about-secrets) linked to your Trino cluster. Refer to [Add catalogs to existing cluster](./trino-add-catalogs.md) for details on how to link Key Vault to your Trino cluster. ### 2. Create AWS S3 bucket Use an existing S3 bucket or create a new one; it's used in the Glue database as the location to store data. Use an existing S3 bucket or create a new one; it's used in the Glue database as the location to In AWS Glue, create a new database, for example, "trinodb", and configure the location to point to your S3 bucket from the previous step, for example, `s3://trinoglues3/` ### 4. Configure Trino catalog-Configure a Trino catalog using examples above [Trino catalogs with S3 and Glue as metastore](./trino-catalog-glue.md#trino-catalogs-with-s3-and-glue-as-metastore). +Configure a Trino catalog using the examples above: [Trino catalogs with AWS S3 and AWS Glue as metastore](./trino-catalog-glue.md#trino-catalogs-with-aws-s3-and-aws-glue-as-metastore). ### 5. Create and query sample table Here are a few sample queries to test connectivity to AWS, reading and writing data. The schema name is the AWS Glue database name you created earlier. |
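The sample queries themselves aren't quoted in this digest; a sketch of what testing connectivity might look like from the Trino Python client follows. Authentication is omitted for brevity (see the earlier authentication sketches), `hive` is a hypothetical catalog name backed by Glue, and `trinodb` is the Glue database from step 3, which appears as the schema name.

```python
import trino

# Placeholder connection; a real cluster requires JWT authentication as shown
# in the earlier sketches.
conn = trino.dbapi.connect(host="<cluster-endpoint>", port=443, http_scheme="https")
cur = conn.cursor()

# Create, populate, and read back a small table in the Glue-backed catalog.
cur.execute("CREATE TABLE hive.trinodb.sample (id bigint, name varchar)")
cur.execute("INSERT INTO hive.trinodb.sample VALUES (1, 'test')")
cur.execute("SELECT * FROM hive.trinodb.sample")
print(cur.fetchall())
```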
hdinsight-aks | Trino Connect To Metastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connect-to-metastore.md | Title: Add external Hive metastore database description: Connecting to the Hive metastore for Trino clusters in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Use external Hive metastore database [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -Hive metastore is used as a central repository for storing metadata about the data. This article describes how you can add a Hive metastore database to your HDInsight on AKS Trino cluster. There are two ways: +Hive metastore is used as a central repository for storing metadata about the data. This article describes how you can add a Hive metastore database to your Trino cluster with HDInsight on AKS. There are two ways: * You can add a Hive catalog and link it to an external Hive metastore database during [Trino cluster creation](./trino-create-cluster.md). Hive metastore is used as a central repository for storing metadata about the da The following example covers the addition of a Hive catalog and metastore database to your cluster using an ARM template. ## Prerequisites-* An operational HDInsight on AKS Trino cluster. +* An operational Trino cluster with HDInsight on AKS. * Create an [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review the complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview). |
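The example itself isn't quoted in this digest. A sketch of the general shape of an external Hive metastore entry follows, as a Python literal; all property names and values here are assumptions modeled on the linked cluster ARM sample and should be checked against it before use.

```python
import json

# Sketch of an external Hive metastore entry; the catalogOptions shape and
# every property name/value below are assumptions, not quoted from the article.
hive_metastore = {
    "catalogOptions": {
        "hive": [
            {
                "catalogName": "hive",
                "metastoreDbConnectionURL": "jdbc:sqlserver://<server>.database.windows.net:1433;database=<hms-db>",
                "metastoreDbConnectionUserName": "<sql-login>",
                "metastoreDbConnectionPasswordSecret": "<key-vault-secret-reference>",
                "metastoreWarehouseDir": "abfs://<container>@<account>.dfs.core.windows.net/warehouse",
            }
        ]
    }
}

print(json.dumps(hive_metastore, indent=2))
```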
hdinsight-aks | Trino Create Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-cluster.md | Title: Create a Trino cluster - Azure portal description: Creating a Trino cluster in HDInsight on AKS in the Azure portal. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Create a Trino cluster in the Azure portal (Preview) [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -This article describes the steps to create an HDInsight on AKS Trino cluster by using the Azure portal. +This article describes the steps to create a Trino cluster with HDInsight on AKS by using the Azure portal. ## Prerequisites |
hdinsight-aks | Trino Create Delta Lake Tables Synapse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-create-delta-lake-tables-synapse.md | Title: Read Delta Lake tables (Synapse or External Location) description: How to read external tables created in Synapse or other systems into a Trino cluster. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Read Delta Lake tables (Synapse or external location) Last updated 08/29/2023 This article provides an overview of how to read a Delta Lake table without having any access to the metastore (Synapse or other metastores without public access). -You can perform the following operations on the tables using HDInsight on AKS Trino. +You can perform the following operations on the tables using Trino with HDInsight on AKS. * DELETE * UPDATE This section shows how to create a Delta table over a pre-existing location given `abfss://container@storageaccount.dfs.core.windows.net/synapse/workspaces/workspace_name/warehouse/table_name/` -1. Create a Delta Lake schema in HDInsight on AKS Trino. +1. Create a Delta Lake schema in Trino. ```sql CREATE SCHEMA delta.default; This section shows how to create a Delta table over a pre-existing location given ## Write Delta Lake tables in Synapse Spark -Use `format("delta")` to save a dataframe as a Delta table, then you can use the path where you saved the dataframe as delta format to register the table in HDInsight on AKS Trino. +Use `format("delta")` to save a dataframe as a Delta table; you can then use the path where you saved the dataframe in Delta format to register the table in Trino. ```python my_dataframe.write.format("delta").save("abfss://container@storageaccount.dfs.core.windows.net/synapse/workspaces/workspace_name/warehouse/table_name") |
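The table-creation step after the schema isn't quoted in this digest; one way to attach a Delta table at a pre-existing location is the OSS Trino Delta Lake connector's `register_table` procedure, sketched below via the Trino Python client. Whether this procedure is available (and whether it must be enabled) depends on the cluster's Trino version and configuration, and authentication is omitted for brevity.

```python
import trino

# Placeholder connection; authentication omitted for brevity.
conn = trino.dbapi.connect(host="<cluster-endpoint>", port=443, http_scheme="https")
cur = conn.cursor()

cur.execute("CREATE SCHEMA IF NOT EXISTS delta.default")

# register_table attaches a table whose data and transaction log already exist
# at the given location; availability depends on the connector version/config.
cur.execute(
    "CALL delta.system.register_table("
    "schema_name => 'default', "
    "table_name => 'table_name', "
    "table_location => 'abfss://container@storageaccount.dfs.core.windows.net"
    "/synapse/workspaces/workspace_name/warehouse/table_name')"
)

cur.execute("SELECT * FROM delta.default.table_name LIMIT 10")
print(cur.fetchall())
```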
hdinsight-aks | Trino Custom Plugins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-custom-plugins.md | Title: Add custom plugins in Azure HDInsight on AKS description: Add custom plugins to an existing Trino cluster in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Custom plugins [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -This article provides details on how to deploy custom plugins to your HDInsight on AKS Trino cluster. +This article provides details on how to deploy custom plugins to your Trino cluster with HDInsight on AKS. Trino provides a rich interface that allows users to write their own plugins, such as event listeners and custom SQL functions. You can add the configuration described in this article to make custom plugins available in your Trino cluster using an ARM template. ## Prerequisites-* An operational HDInsight on AKS Trino cluster. +* An operational Trino cluster with HDInsight on AKS. * Create an [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review the complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview). |
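The plugin configuration itself isn't quoted in this digest; a sketch of what a user plugin entry might look like follows, as a Python literal. The `userPluginsSpec` shape and the storage path are assumptions modeled on the linked cluster ARM sample, so verify the names against it.

```python
import json

# Sketch of a user plugin entry; the userPluginsSpec shape, plugin name, and
# path are assumptions for illustration, not values quoted from the article.
plugins_spec = {
    "userPluginsSpec": {
        "plugins": [
            {
                "name": "myeventlistener",
                "path": "abfs://<container>@<account>.dfs.core.windows.net/trino-plugins/myeventlistener",
                "enabled": True,
            }
        ]
    }
}

print(json.dumps(plugins_spec, indent=2))
```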
hdinsight-aks | Trino Fault Tolerance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-fault-tolerance.md | Title: Configure fault-tolerance -description: Learn how to configure fault-tolerance in HDInsight on AKS Trino. +description: Learn how to configure fault-tolerance in Trino with HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Fault-tolerant execution [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -HDInsight on AKS Trino supports [fault-tolerant execution](https://trino.io/docs/current/admin/fault-tolerant-execution.html) to mitigate query failures and increase resilience. -This article describes how you can enable fault tolerance for your HDInsight on AKS Trino cluster. +Trino supports [fault-tolerant execution](https://trino.io/docs/current/admin/fault-tolerant-execution.html) to mitigate query failures and increase resilience. +This article describes how you can enable fault tolerance for your Trino cluster with HDInsight on AKS. ## Configuration To enable fault-tolerant execution on queries/tasks with a larger result set, co ## Exchange manager The exchange manager is responsible for managing spooled data to back fault-tolerant execution. For more details, refer to the [Trino documentation](https://trino.io/docs/current/admin/fault-tolerant-execution.html#fte-exchange-manager).-<br>HDInsight on AKS Trino supports `filesystem` based exchange managers that can store the data in Azure Blob Storage (ADLS Gen 2). This section describes how to configure exchange manager with Azure Blob Storage. +<br>Trino with HDInsight on AKS supports `filesystem`-based exchange managers that can store the data in Azure Blob Storage (ADLS Gen 2). This section describes how to configure the exchange manager with Azure Blob Storage. To set up the exchange manager with Azure Blob Storage as the spooling destination, you need three required properties in the `exchange-manager.properties` file. You can find the connection string in *Security + Networking* -> *Access keys* s :::image type="content" source="./media/trino-fault-tolerance/connection-string.png" alt-text="Screenshot showing storage account connection string." border="true" lightbox="./media/trino-fault-tolerance/connection-string.png"::: > [!NOTE]-> HDInsight on AKS Trino currently does not support MSI authentication in exchange manager set up backed by Azure Blob Storage. +> Trino with HDInsight on AKS currently does not support MSI authentication in an exchange manager setup backed by Azure Blob Storage. |
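The three properties aren't listed in this digest; a plausible sketch follows, as a Python literal of a file entry for the ARM template. The property names are the OSS Trino ones for filesystem exchange spooling to Azure, while the wrapping file-entry shape, container URL, and connection string are placeholders modeled on the linked ARM sample.

```python
import json

# Sketch of the exchange-manager.properties entry for spooling to Azure Blob
# Storage; property names follow OSS Trino, values are placeholders.
exchange_manager = {
    "fileName": "exchange-manager.properties",
    "values": {
        "exchange-manager.name": "filesystem",
        "exchange.base-directories": "abfs://container@storageaccount.dfs.core.windows.net",
        "exchange.azure.connection-string": "<storage-account-connection-string>",
    },
}

print(json.dumps(exchange_manager, indent=2))
```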
hdinsight-aks | Trino Jvm Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-jvm-configuration.md | Title: Modifying JVM heap settings description: How to modify initial and max heap size for Trino pods. Previously updated : 08/29/2023 Last updated : 10/19/2023 # Configure JVM heap size [!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] -This article describes how to modify initial and max heap size for HDInsight on AKS Trino pods. +This article describes how to modify the initial and maximum heap size for Trino pods with HDInsight on AKS. The `-Xms` and `-Xmx` settings can be changed to control the initial and maximum heap size of Trino pods. You can modify the JVM heap settings using an ARM template. > [!NOTE]-> In HDInsight on AKS, Heap settings on Trino pods are already right-sized based on the selected SKU size. These settings should only be modified when a user wants to control JVM behavior on the pods and is aware of side-effects of changing these settings. +> In HDInsight on AKS, heap settings on Trino pods are already right-sized based on the selected SKU size. These settings should only be modified when a user wants to control JVM behavior on the pods and is aware of the side-effects of changing these settings. ## Prerequisites-* An operational HDInsight on AKS Trino cluster. +* An operational Trino cluster with HDInsight on AKS. * Create an [ARM template](../create-cluster-using-arm-template-script.md) for your cluster. * Review the complete cluster [ARM template](https://hdionaksresources.blob.core.windows.net/trino/samples/arm/arm-trino-config-sample.json) sample. * Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview). |
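The ARM snippet itself isn't quoted in this digest; a sketch of overriding the heap flags via a `jvm.config` entry follows, as a Python literal. The file-entry shape, component name, and sizes are assumptions based on the linked ARM sample, so verify them against it before use.

```python
import json

# Sketch of a jvm.config override; the component name, file-entry shape, and
# heap sizes are assumptions for illustration.
jvm_config = {
    "component": "coordinator",
    "files": [
        {
            "fileName": "jvm.config",
            # -Xms sets the initial heap size and -Xmx the maximum; the default
            # config's remaining JVM flags should be preserved alongside these.
            "content": "-Xms4G\n-Xmx10G",
        }
    ],
}

print(json.dumps(jvm_config, indent=2))
```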
hdinsight-aks | Trino Miscellaneous Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-miscellaneous-files.md | Title: Using miscellaneous files description: Using miscellaneous files with Trino clusters in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/19/2023 # Using miscellaneous files This article |