Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
advisor | Advisor Reference Cost Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md | Title: Cost recommendations. Description: Full list of available cost recommendations in Advisor. Previously updated: 02/28/2023. Last updated: 10/15/2023. |

# Cost recommendations

Azure Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources.

1. On the **Advisor** dashboard, select the **Cost** tab.

## AI Services

### Potential Cost Savings on this Form Recognizer Resource

We observed that your Form Recognizer resource has had enough usage in the past 30 days for you to consider using a Commitment tier.

Learn more about [Cognitive Service - AzureAdvisorFRCommitment (Potential Cost Savings on this Form Recognizer Resource)](https://azure.microsoft.com/pricing/details/form-recognizer/).

### Potential Cost Savings on this Computer Vision Resource

We observed that your Computer Vision resource has had enough READ usage in the past 30 days for you to consider using a Commitment tier.

Learn more about [Cognitive Service - AzureAdvisorCVReadCommitment (Potential Cost Savings on this Computer Vision Resource)](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/).

### Potential Cost Savings on this Speech Service Resource

We observed that your Speech Service resource has had enough usage in the past 30 days for you to consider using a Commitment tier.

Learn more about [Cognitive Service - AzureAdvisorSpeechCommitment (Potential Cost Savings on this Speech Service Resource)](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).

### Potential Cost Savings on this Translator Resource

We observed that your Translator resource has had enough usage in the past 30 days for you to consider using a Commitment tier.

Learn more about [Cognitive Service - AzureAdvisorTranslatorCommitment (Potential Cost Savings on this Translator Resource)](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).

### Potential Cost Savings on this LUIS Resource

We observed that your LUIS resource has had enough usage in the past 30 days for you to consider using a Commitment tier.

Learn more about [Cognitive Service - AzureAdvisorLUISCommitment (Potential Cost Savings on this LUIS Resource)](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/).

### Potential Cost Savings on this Language Service Resource

We observed that your Language Service resource has had enough usage in the past 30 days for you to consider using a Commitment tier.

Learn more about [Cognitive Service - AzureAdvisorTextAnalyticsCommitment (Potential Cost Savings on this Language Service Resource)](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).

### Enable Autoscaling for Azure Databricks Clusters

Autoscaling makes it easier to achieve high cluster utilization, because you don't need to provision the cluster to match a workload. When you're using autoscaling, workloads can run faster and overall costs can be reduced compared to a static cluster.

Learn more about [Databricks Workspace - DatabricksEnableAutoscaling (Enable Autoscaling for Azure Databricks Clusters)](/azure/databricks/archive/compute/configure).
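The autoscale range is part of the cluster specification itself. As an illustrative sketch (not part of this article), the following Python snippet creates an autoscaling cluster through the Databricks Clusters REST API; the workspace URL, token, runtime version, and node type are placeholder values.

```python
# Sketch: create a Databricks cluster that scales between 2 and 8 workers.
# WORKSPACE_URL and TOKEN are placeholders for your workspace and a
# personal access token; runtime and node type are example values.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<personal-access-token>"

cluster_spec = {
    "cluster_name": "autoscaling-demo",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    # The recommendation: let Databricks add and remove workers with the
    # workload instead of provisioning a fixed-size cluster.
    "autoscale": {"min_workers": 2, "max_workers": 8},
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
    timeout=30,
)
resp.raise_for_status()
print("cluster_id:", resp.json()["cluster_id"])
```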
## Analytics

### Unused, stopped Data Explorer resources

This recommendation surfaces all Data Explorer resources that have been stopped for at least 60 days. Consider deleting the resources.

Learn more about [Data explorer resource - ADX stopped resource (Unused stopped Data Explorer resources)](https://aka.ms/adxunusedstoppedcluster).

### Unused/Empty Data Explorer resources

This recommendation surfaces all Data Explorer resources provisioned more than 10 days from the last update, and found either empty or with no activity. Consider deleting the resources.

Learn more about [Data explorer resource - ADX Unused resource (Unused/Empty Data Explorer resources)](https://aka.ms/adxemptycluster).

### Right-size Data Explorer resources for optimal cost

One or more of these issues were detected: low data capacity, CPU utilization, or memory utilization. Scale down and/or scale in the resource to the recommended configuration shown.

Learn more about [Data explorer resource - Right-size for cost (Right-size Data Explorer resources for optimal cost)](https://aka.ms/adxskusize).

### Reduce Data Explorer table cache policy to optimize costs

Reducing the table cache policy frees up Data Explorer cluster nodes with low CPU utilization, memory, and a high cache size configuration.

Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables (Reduce Data Explorer table cache policy to optimize costs)](https://aka.ms/adxcachepolicy).
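The cache policy is changed with a Kusto management command. As a hedged sketch (not from this article), here is how that command could be issued from Python with the azure-kusto-data package; the cluster URI, database, table, and the 7-day hot window are placeholder choices, so pick a window that still covers the queries you actually run.

```python
# Sketch: shrink a table's hot cache window to 7 days with a Kusto
# management command. Cluster URI, database, and table are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net"
)
client = KustoClient(kcsb)

# Control commands (leading '.') go through execute_mgmt, not execute_query.
client.execute_mgmt("MyDatabase", ".alter table MyTable policy caching hot = 7d")
```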
### Unused running Data Explorer resources

This recommendation surfaces all running Data Explorer resources with no user activity. Consider stopping the resources.

Learn more about [Data explorer resource - StopUnusedClusters (Unused running Data Explorer resources)](/azure/data-explorer/azure-advisor#azure-data-explorer-unused-cluster).

### Cleanup unused storage in Data Explorer resources

Over time, internal extents merge operations can accumulate redundant and unused storage artifacts that remain beyond the data retention period. While this unreferenced data doesn't negatively impact performance, it can lead to more storage use and larger costs than necessary. This recommendation surfaces Data Explorer resources that have unused storage artifacts. We recommend that you run the cleanup command to detect and delete unused storage artifacts and reduce cost. Recoverability is reset to the cleanup time and isn't available for data created before the cleanup was run.

Learn more about [Data explorer resource - RunCleanupCommandForAzureDataExplorer (Cleanup unused storage in Data Explorer resources)](https://aka.ms/adxcleanextentcontainers).

### Enable optimized autoscale for Data Explorer resources

It looks like your resource could have automatically scaled to reduce costs (based on the usage patterns, cache utilization, ingestion utilization, and CPU). To optimize costs and performance, we recommend enabling optimized autoscale. To make sure you don't exceed your planned budget, add a maximum instance count when you enable optimized autoscale.

Learn more about [Data explorer resource - EnableOptimizedAutoscaleAzureDataExplorer (Enable optimized autoscale for Data Explorer resources)](https://aka.ms/adxoptimizedautoscale).
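For illustration (not from this article), a minimal sketch of enabling optimized autoscale with the azure-mgmt-kusto package, assuming placeholder resource names; the maximum instance count is the budget guard the recommendation mentions.

```python
# Sketch: enable optimized autoscale on a Data Explorer cluster and cap it
# at 10 instances. Subscription, resource group, and cluster names are
# placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.kusto import KustoManagementClient
from azure.mgmt.kusto.models import ClusterUpdate, OptimizedAutoscale

client = KustoManagementClient(DefaultAzureCredential(), "<subscription-id>")

update = ClusterUpdate(
    optimized_autoscale=OptimizedAutoscale(
        version=1,
        is_enabled=True,
        minimum=2,   # lower bound for scale-in
        maximum=10,  # upper bound keeps the cluster inside budget
    )
)
client.clusters.begin_update("my-rg", "mycluster", update).result()
```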
### Change Data Explorer clusters to a more cost effective and better performing SKU

You have resources operating under a nonoptimal SKU. We recommend migrating to a more cost effective and better performing SKU, which should reduce your costs and improve overall performance. We have calculated the required instance count that meets both the CPU and cache requirements of your cluster.

Learn more about [Data explorer resource - SkuChangeForAzureDataExplorer (Change Data Explorer clusters to a more cost effective and better performing SKU)](https://aka.ms/clusterChooseSku).

### Consider Changing Pricing Tier

Based on your current usage volume, investigate changing your pricing (Commitment) tier to receive a discount and reduce costs.

Learn more about [Log Analytics workspace - considerChangingPricingTier (Consider Changing Pricing Tier)](/azure/azure-monitor/logs/change-pricing-tier).

### Consider configuring the low-cost Basic logs plan on selected tables

We have identified ingestion of more than 1 GB per month to tables that are eligible for the low-cost Basic log data plan. The Basic log plan gives you search capabilities for debugging and troubleshooting at a lower cost.

Learn more about [Log Analytics workspace - EnableBasicLogs (Consider configuring the low-cost Basic logs plan on selected tables)](https://aka.ms/basiclogs).
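As a hedged sketch (not from this article), switching an eligible table to the Basic plan could look like the following with the azure-mgmt-loganalytics package; resource names are placeholders, ContainerLogV2 stands in for any Basic-eligible table, and exact model and method names can shift between SDK versions.

```python
# Sketch: move one eligible table from the Analytics plan to the cheaper
# Basic plan. Resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

client = LogAnalyticsManagementClient(DefaultAzureCredential(), "<subscription-id>")

table = client.tables.begin_update(
    resource_group_name="my-rg",
    workspace_name="my-workspace",
    table_name="ContainerLogV2",
    parameters={"plan": "Basic"},  # "Analytics" is the default, full-featured plan
).result()
print(table.plan)
```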
### Consider removing unused restored tables

You have one or more tables with restored data active in your workspace. If you're no longer using the restored data, delete the table to avoid unnecessary charges.

Learn more about [Log Analytics workspace - DeleteRestoredTables (Consider removing unused restored tables)](https://aka.ms/LogAnalyticsRestore).

### Consider enabling autopause on Spark compute

Autopause releases and shuts down unused compute resources after a set idle period of inactivity.

Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoPauseGuidance).

### Consider enabling autoscale on Spark compute

Autoscale automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Spark pool, you can set a minimum and maximum number of nodes when autoscale is selected. Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no extra charge for this feature.

Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance).
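A hedged sketch (not from this article) of turning both options on for a Spark pool with the azure-mgmt-synapse package; names, sizes, and the Spark version are placeholders, and model fields may differ slightly between SDK versions.

```python
# Sketch: enable autoscale and auto-pause on a Synapse Spark pool.
from azure.identity import DefaultAzureCredential
from azure.mgmt.synapse import SynapseManagementClient
from azure.mgmt.synapse.models import (
    AutoPauseProperties, AutoScaleProperties, BigDataPoolResourceInfo,
)

client = SynapseManagementClient(DefaultAzureCredential(), "<subscription-id>")

pool = BigDataPoolResourceInfo(
    location="westeurope",
    node_size="Medium",
    node_size_family="MemoryOptimized",
    spark_version="3.3",
    # Scale between 3 and 10 nodes with the load...
    auto_scale=AutoScaleProperties(enabled=True, min_node_count=3, max_node_count=10),
    # ...and release the compute after 15 idle minutes.
    auto_pause=AutoPauseProperties(enabled=True, delay_in_minutes=15),
)
client.big_data_pools.begin_create_or_update(
    "my-rg", "my-workspace", "sparkpool1", pool
).result()
```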
## Compute

### Standard SSD disks billing caps

Customers running high-IO workloads on Standard HDDs can upgrade to Standard SSDs and benefit from better performance and SLA, and they now also benefit from a cap on the maximum number of billed transactions.

Learn more about [Disk - UpgradeHDDtoSDD (Standard SSD disks billing caps)]().

### Underutilized Disks Identified

You have disks that are utilized less than 10%. Right-size them to save cost.

Learn more about [Disk - wiprounderutilizeddisks (Underutilized Disks Identified)]().

### You have disks that have not been attached to a VM for more than 30 days. Evaluate if you still need the disk.

We've observed that you have disks that haven't been attached to a VM for more than 30 days. Evaluate if you still need the disk. If you decide to delete the disk, recovery isn't possible. We recommend that you create a snapshot before deletion or ensure the data in the disk is no longer required.

Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks that haven't been attached to a VM for more than 30 days. Evaluate if you still need the disk.)](https://aka.ms/unattacheddisks).
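Finding candidates is straightforward to script. For illustration (not from this article), a sketch with the azure-mgmt-compute package that lists disks not attached to any VM; the subscription ID is a placeholder.

```python
# Sketch: list managed disks that are not attached to any VM.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for disk in client.disks.list():
    # Unattached disks have no owning VM; disk_state is typically "Unattached".
    if disk.managed_by is None:
        print(f"{disk.name}\t{disk.disk_state}\t{disk.disk_size_gb} GiB")
```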
### Right-size or shutdown underutilized virtual machine scale sets

We've analyzed the usage patterns of your virtual machine scale sets over the past seven days and identified virtual machine scale sets with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machine scale sets.

Learn more about [Virtual machine scale set - LowUsageVmss (Right-size or shutdown underutilized virtual machine scale sets)](https://aka.ms/aa_lowusagerec_vmss_learnmore).

### Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance

With Ephemeral OS Disk, you save on storage cost for the OS disk, get lower read/write latency to the OS disk, and get faster VM reimage operations that reset the OS (and temporary disk) to its original state. Ephemeral OS Disk is preferable for short-lived IaaS VMs or VMs with stateless workloads.

Learn more about [Subscription - EphemeralOsDisk (Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance)](/azure/virtual-machines/windows/ephemeral-os-disks).
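The setting lives in the VM's OS disk definition. As a hedged sketch (not from this article), the fragment below shows the relevant part of a storage profile in the dict form the azure-mgmt-compute client accepts inside a full VM create call; the image and placement are example choices, and the VM size must have a cache or temp disk large enough for the image.

```python
# Sketch: the os_disk portion of a VM definition with Ephemeral OS Disk.
# Only the storage_profile fragment is shown; the rest of the VM definition
# (hardware profile, network profile, and so on) is omitted.
storage_profile = {
    "image_reference": {
        "publisher": "Canonical",
        "offer": "0001-com-ubuntu-server-jammy",
        "sku": "22_04-lts-gen2",
        "version": "latest",
    },
    "os_disk": {
        "create_option": "FromImage",
        "caching": "ReadOnly",  # required for ephemeral OS disks
        # Place the OS disk on the VM cache (or temp disk) instead of
        # remote storage: no storage cost, lower latency, fast reimage.
        "diff_disk_settings": {"option": "Local", "placement": "CacheDisk"},
    },
}
```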
-## Network +Learn more about [Virtual network gateway - IdleVNetGateway (Repurpose or delete idle virtual network gateways)](https://aka.ms/aa_idlevpngateway_learnmore). -### Delete ExpressRoute circuits in the provider status of Not Provisioned -We noticed that your ExpressRoute circuit is in the provider status of Not Provisioned for more than one month. This circuit is currently billed hourly to your subscription. We recommend that you delete the circuit if you aren't planning to provision the circuit with your connectivity provider. -Learn more about [ExpressRoute circuit - ExpressRouteCircuit (Delete ExpressRoute circuits in the provider status of Not Provisioned)](https://aka.ms/expressroute). -### Repurpose or delete idle virtual network gateways +## Reserved instances -We noticed that your virtual network gateway has been idle for over 90 days. This gateway is being billed hourly. You may want to reconfigure this gateway, or delete it if you do not intend to use it anymore. +### Buy virtual machine reserved instances to save money over pay-as-you-go costs -Learn more about [Virtual network gateway - IdleVNetGateway (Repurpose or delete idle virtual network gateways)](https://aka.ms/aa_idlevpngateway_learnmore). +Reserved instances can provide a significant discount over pay-as-you-go prices. With reserved instances, you can prepurchase the base costs for your virtual machines. Discounts automatically apply to new or existing VMs that have the same size and region as your reserved instance. We analyzed your usage over the last 30 days and recommend money-saving reserved instances. -## Recovery Services +Learn more about [Virtual machine - ReservedInstance (Buy virtual machine reserved instances to save money over pay-as-you-go costs)](https://aka.ms/reservedinstances). -### Use differential or incremental backup for database workloads +### Consider App Service reserved instances to save over your on-demand costs -For SQL/HANA DBs in Azure VMs being backed up to Azure, using daily differential with weekly full backup is often more cost-effective than daily fully backups. For HANA, Azure Backup also supports incremental backup which is even more cost effective +We analyzed your App Service usage pattern over the selected term, look-back period, and recommend a Reserved Instance purchase that maximizes your savings. With reserved instances, you can prepurchase hourly usage for the App Service plan and save over your Pay-as-you-go costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions based on usage pattern over selected Term, look-back period. -Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](https://aka.ms/DBBackupCostOptimization). +Learn more about [Subscription - AppServiceReservedCapacity (Consider App Service reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -## Storage +### Consider Azure Cosmos DB reserved instances to save over your pay-as-you-go costs -### Revisit retention policy for classic log data in storage accounts +We analyzed your Azure Cosmos DB usage pattern over last 30 days and calculate a Reserved Instance purchase that maximizes your savings. With reserved instances, you can prepurchase Azure Cosmos DB hourly usage and save over your pay-as-you-go costs. 
### Review the configuration of your Azure Cosmos DB free tier account

Your Azure Cosmos DB free tier account currently contains resources with a total provisioned throughput exceeding 1000 Request Units per second (RU/s). Because the free tier only covers the first 1000 RU/s of throughput provisioned across your account, any throughput beyond 1000 RU/s is billed at the regular pricing. As a result, we anticipate that you're charged for the throughput currently provisioned on your Azure Cosmos DB account.

Learn more about [Azure Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](../cosmos-db/understand-your-bill.md#azure-free-tier).

### Enable autoscale on your Azure Cosmos DB database or container

Based on your usage in the past seven days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.

Learn more about [Azure Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](../cosmos-db/provision-throughput-autoscale.md).
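Existing containers can be migrated in place. As a hedged sketch (not from this article), using the azure-mgmt-cosmosdb package with placeholder names:

```python
# Sketch: migrate one SQL (Core) API container from manual throughput to
# autoscale. Account, database, and container names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.sql_resources.begin_migrate_sql_container_to_autoscale(
    resource_group_name="my-rg",
    account_name="my-cosmos-account",
    database_name="mydb",
    container_name="mycontainer",
)
# The result carries the new throughput settings, including the autoscale cap.
print(poller.result().resource.autoscale_settings.max_throughput)
```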
### Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container

Based on your usage in the past seven days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%.

Learn more about [Azure Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](../cosmos-db/how-to-choose-offer.md).

## Management and Governance

### Azure Monitor

For Azure Monitor cost optimization suggestions, see [Optimize costs in Azure Monitor](../azure-monitor/best-practices-cost.md).

### Purchasing a savings plan for compute could unlock lower prices

We analyzed your compute usage over the last 30 days and recommend adding a savings plan to increase your savings. The savings plan unlocks lower prices on select compute services when you commit to spend a fixed hourly amount for 1 or 3 years. As you use select compute services globally, your usage is covered by the plan at reduced prices. During the times when your usage is above your hourly commitment, you're simply billed at your regular pay-as-you-go prices. With savings automatically applying across compute usage globally, you continue saving even as your usage needs change over time. Savings plans are more suited to dynamic workloads, while accommodating planned or unplanned changes; reservations are more suited to stable, predictable workloads with no planned changes. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope savings plans are available in the purchase experience and can further increase savings.

Learn more about [Subscription - SavingsPlan (Purchasing a savings plan for compute could unlock lower prices)](https://aka.ms/savingsplan-compute).
## Networking

### Delete ExpressRoute circuits in the provider status of Not Provisioned

We noticed that your ExpressRoute circuit has been in the provider status of Not Provisioned for more than one month. This circuit is currently billed hourly to your subscription. Delete the circuit if you aren't planning to provision it with your connectivity provider.

Learn more about [ExpressRoute circuit - ExpressRouteCircuit (Delete ExpressRoute circuits in the provider status of Not Provisioned)](https://aka.ms/expressroute).

### Repurpose or delete idle virtual network gateways

We noticed that your virtual network gateway has been idle for over 90 days. This gateway is being billed hourly. Reconfigure this gateway, or delete it if you don't intend to use it anymore.

Learn more about [Virtual network gateway - IdleVNetGateway (Repurpose or delete idle virtual network gateways)](https://aka.ms/aa_idlevpngateway_learnmore).
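For illustration (not from this article), a sketch that flags such circuits with the azure-mgmt-network package; the subscription ID is a placeholder.

```python
# Sketch: list ExpressRoute circuits whose provider status is
# "NotProvisioned" -- candidates for deletion if you won't complete
# provisioning with your connectivity provider.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

for circuit in client.express_route_circuits.list_all():
    if circuit.service_provider_provisioning_state == "NotProvisioned":
        print(f"{circuit.name}: provider status NotProvisioned")
```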
## Reserved instances

### Buy virtual machine reserved instances to save money over pay-as-you-go costs

Reserved instances can provide a significant discount over pay-as-you-go prices. With reserved instances, you can prepurchase the base costs for your virtual machines. Discounts automatically apply to new or existing VMs that have the same size and region as your reserved instance. We analyzed your usage over the last 30 days and recommend money-saving reserved instances.

Learn more about [Virtual machine - ReservedInstance (Buy virtual machine reserved instances to save money over pay-as-you-go costs)](https://aka.ms/reservedinstances).
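The same recommendations can be pulled programmatically. As a hedged sketch (not from this article) with the azure-mgmt-consumption package; the scope and subscription ID are placeholders, and the exact fields on each recommendation vary by offer type.

```python
# Sketch: list reservation purchase recommendations for a subscription.
from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

sub_id = "<subscription-id>"
client = ConsumptionManagementClient(DefaultAzureCredential(), sub_id)

for rec in client.reservation_recommendations.list(f"/subscriptions/{sub_id}"):
    # Each entry describes a suggested reservation purchase; the available
    # properties differ between legacy and modern offer types.
    print(rec.name)
```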
+### Consider Azure Synapse Analytics (formerly SQL DW) reserved instances to save over your pay-as-you-go costs -Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations). +We analyze your Azure Synapse Analytics usage pattern over last 30 days and recommend a Reserved Instance purchase that maximizes your savings. With reserved instances, you can prepurchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Cache for Redis reserved instance to save over your pay-as-you-go costs +Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instances to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations). -We analyzed your Cache for Redis usage pattern over last 30 days and calculated reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cache for Redis hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### (Preview) Consider Blob storage reserved instances to save on Blob v2 and Data Lake storage Gen2 costs -Learn more about [Subscription - RedisCacheReservedCapacity (Consider Cache for Redis reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations). +We analyzed your Azure Blob and Data Lake storage usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Blob storage reserved instances applies only to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen 2). Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs +Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instances to save on Blob v2 and and Data Lake storage Gen2 costs)](https://aka.ms/rirecommendations). -We analyze your Azure Synapse Analytics usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. 
Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Consider Azure Dedicated Host reserved instances to save over your on-demand costs -Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations). +We analyzed your Azure Dedicated Host usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### (Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs +Learn more about [Subscription - AzureDedicatedHostReservedCapacity (Consider Azure Dedicated Host reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your Azure Blob and Datalake storage usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Blob storage reserved instance applies only to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen 2). Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Consider Data Factory reserved instances to save over your on-demand costs -Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instance to save on Blob v2 and and Datalake storage Gen2 costs)](https://aka.ms/rirecommendations). +We analyzed your Data Factory usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Azure Dedicated Host reserved instance to save over your on-demand costs +Learn more about [Subscription - DataFactorybReservedCapacity (Consider Data Factory reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your Azure Dedicated Host usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. 
Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Consider Azure Data Explorer reserved instances to save over your on-demand costs -Learn more about [Subscription - AzureDedicatedHostReservedCapacity (Consider Azure Dedicated Host reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your Azure Data Explorer usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Data Factory reserved instance to save over your on-demand costs +Learn more about [Subscription - AzureDataExplorerReservedCapacity (Consider Azure Data Explorer reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your Data Factory usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Consider Azure Files reserved instances to save over your on-demand costs -Learn more about [Subscription - DataFactorybReservedCapacity (Consider Data Factory reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your Azure Files usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Azure Data Explorer reserved instance to save over your on-demand costs +Learn more about [Subscription - AzureFilesReservedCapacity (Consider Azure Files reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your Azure Data Explorer usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. 
+### Consider Azure VMware Solution reserved instances to save over your on-demand costs -Learn more about [Subscription - AzureDataExplorerReservedCapacity (Consider Azure Data Explorer reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your Azure VMware Solution usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Azure Files reserved instance to save over your on-demand costs +Learn more about [Subscription - AzureVMwareSolutionReservedCapacity (Consider Azure VMware Solution reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your Azure Files usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Consider NetApp Storage reserved instances to save over your on-demand costs -Learn more about [Subscription - AzureFilesReservedCapacity (Consider Azure Files reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your NetApp Storage usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Azure VMware Solution reserved instance to save over your on-demand costs +Learn more about [Subscription - NetAppStorageReservedCapacity (Consider NetApp Storage reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your Azure VMware Solution usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. 
+### Consider Azure Managed Disk reserved instances to save over your on-demand costs -Learn more about [Subscription - AzureVMwareSolutionReservedCapacity (Consider Azure VMware Solution reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your Azure Managed Disk usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider NetApp Storage reserved instance to save over your on-demand costs +Learn more about [Subscription - AzureManagedDiskReservedCapacity (Consider Azure Managed Disk reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your NetApp Storage usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Consider Red Hat reserved instances to save over your on-demand costs -Learn more about [Subscription - NetAppStorageReservedCapacity (Consider NetApp Storage reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your Red Hat usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. -### Consider Azure Managed Disk reserved instance to save over your on-demand costs +Learn more about [Subscription - RedHatReservedCapacity (Consider Red Hat reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your Azure Managed Disk usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. 
+### Consider RedHat OSA reserved instances to save over your on-demand costs -Learn more about [Subscription - AzureManagedDiskReservedCapacity (Consider Azure Managed Disk reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your RedHat Open Source Assurance (OSA) usage over the last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further. -### Consider Red Hat reserved instance to save over your on-demand costs +Learn more about [Subscription - RedHatOsaReservedCapacity (Consider RedHat OSA reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). +### Consider SapHana reserved instances to save over your on-demand costs -We analyzed your Red Hat usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +We analyzed your SapHana usage over the last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further. -### Consider RedHat Osa reserved instance to save over your on-demand costs +Learn more about [Subscription - SapHanaReservedCapacity (Consider SapHana reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your RedHat Osa usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. 
+### Consider SuseLinux reserved instances to save over your on-demand costs -Learn more about [Subscription - RedHatOsaReservedCapacity (Consider RedHat Osa reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your SuseLinux usage over the last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further. -### Consider SapHana reserved instance to save over your on-demand costs +Learn more about [Subscription - SuseLinuxReservedCapacity (Consider SuseLinux reserved instances to save over your on-demand costs)](https://aka.ms/rirecommendations). -We analyzed your SapHana usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Consider VMware Cloud Simple reserved instances -Learn more about [Subscription - SapHanaReservedCapacity (Consider SapHana reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your VMware Cloud Simple usage over the last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further. -### Consider SuseLinux reserved instance to save over your on-demand costs +Learn more about [Subscription - VMwareCloudSimpleReservedCapacity (Consider VMware Cloud Simple reserved instances)](https://aka.ms/rirecommendations). -We analyzed your SuseLinux usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Configure automatic renewal for your expiring reservation -Learn more about [Subscription - SuseLinuxReservedCapacity (Consider SuseLinux reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). 
+The reserved instances listed are expiring soon or recently expired. Your resources will continue to operate normally; however, you'll be billed at the on-demand rates going forward. To optimize your costs, configure automatic renewal for these reservations or purchase a replacement manually. -### Consider VMware Cloud Simple reserved instance +Learn more about [Reservation - ReservedInstancePurchaseNew (Configure automatic renewal for your expiring reservation)](https://aka.ms/reservedinstances). -We analyzed your VMware Cloud Simple usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further. +### Purchasing a savings plan for compute could unlock lower prices -Learn more about [Subscription - VMwareCloudSimpleReservedCapacity (Consider VMware Cloud Simple reserved instance )](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md). +We analyzed your compute usage over the last 30 days and recommend adding a savings plan to increase your savings. The savings plan unlocks lower prices on select compute services when you commit to spend a fixed hourly amount for 1 or 3 years. As you use select compute services globally, your usage is covered by the plan at reduced prices. During the times when your usage is above your hourly commitment, you'll simply be billed at your regular pay-as-you-go prices. With savings automatically applying across compute usage globally, you'll continue saving even as your usage needs change over time. Savings plans are more suited for dynamic workloads that accommodate planned or unplanned changes, while reservations are more suited for stable, predictable workloads with no planned changes. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope savings plans are available in the purchase experience and can further increase savings. -## Subscription -### Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance +Learn more about [Subscription - SavingsPlan (Purchasing a savings plan for compute could unlock lower prices)](https://aka.ms/savingsplan-compute). +### Consider Cosmos DB reserved instances to save over your pay-as-you-go costs -With Ephemeral OS Disk, Customers get these benefits: Save on storage cost for OS disk. Get lower read/write latency to OS disk. Faster VM Reimage operation by resetting OS (and Temporary disk) to its original state. It is more preferable to use Ephemeral OS Disk for short-lived IaaS VMs or VMs with stateless workloads +We analyzed your Cosmos DB usage pattern over the selected term and look-back period and calculated a Reserved Instance purchase that maximizes your savings. With reserved instances, you can prepurchase Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the selected term and look-back period. 
Shared scope recommendations are available in the reservation purchase experience and can increase savings even more. -Learn more about [Subscription - EphemeralOsDisk (Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance)](/azure/virtual-machines/windows/ephemeral-os-disks). +Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instances to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations). -## Synapse -### Consider enabling autopause feature on Spark compute. -Auto-pause releases and shuts down unused compute resources after a set idle period of inactivity -Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoPauseGuidance). ++## Storage ++### Use Standard Storage to store Managed Disks snapshots ++To save 60% on cost, store your snapshots in Standard Storage, regardless of the storage type of the parent disk. Standard Storage is the default option for Managed Disks snapshots. Migrate your snapshots from Premium to Standard Storage. Refer to Managed Disks pricing details. ++Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Storage to store Managed Disks snapshots)](https://aka.ms/aa_manageddisksnapshot_learnmore). ++### Revisit retention policy for classic log data in storage accounts ++Large classic log data is detected on your storage accounts. You're billed on the capacity of data stored in storage accounts, including classic logs. Check the retention policy of classic logs and update it with the necessary period to retain less log data. This reduces unnecessary classic log data and lowers your bill by storing less data. ++Learn more about [Storage Account - XstoreLargeClassicLog (Revisit retention policy for classic log data in storage accounts)](/azure/storage/common/manage-storage-analytics-logs#modify-retention-policy). ++### Based on your high transactions/TB ratio, premium storage might be more cost-effective ++Your transactions/TB ratio might be high. The exact number depends on the transaction mix and region, but anywhere over 30 or 35 transactions/TB is a good candidate to evaluate a move to Premium storage. ++Learn more about [Storage Account - MoveToPremiumStorage (Based on your high transactions/TB ratio, there is a possibility that premium storage might be more cost effective in addition to being performant for your scenario. More details on pricing for premium and standard accounts can be found here)](https://aka.ms/azureblobstoragepricing). ++### Use differential or incremental backup for database workloads ++For SQL/HANA DBs in Azure VMs being backed up to Azure, using daily differential with weekly full backup is often more cost-effective than daily full backups. For HANA, Azure Backup also supports incremental backup that is even more cost-effective. ++Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](https://aka.ms/DBBackupCostOptimization). -### Consider enabling autoscale feature on Spark compute. -Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Apache Spark for Azure Synapse Analytics pool, a minimum and maximum number of nodes can be set when Autoscale is selected. 
Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no additional charge for this feature. -Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance). ## Web ### Right-size underutilized App Service plans -We've analyzed the usage patterns of your app service plan over the past 7 days and identified low CPU usage. While certain scenarios can result in low utilization by design, you can often save money by choosing a less expensive SKU while retaining the same features. +We've analyzed the usage patterns of your App Service plan over the past seven days and identified low CPU usage. While certain scenarios can result in low utilization by design, you can save money by choosing a less expensive SKU while retaining the same features. > [!NOTE] > - Currently, this recommendation only works for App Service plans running on Windows on a SKU that allows you to downscale to less expensive tiers without losing any features, like from P3v2 to P2v2 or from P2v2 to P1v2. -> - CPU bursts that last only a few minutes might not be correctly detected. Please perform a careful analysis in your App Service plan metrics blade before downscaling your SKU. +> - CPU bursts that last only a few minutes might not be correctly detected. Perform a careful analysis in your App Service plan metrics blade before downscaling your SKU. Learn more about [App Service plans](../app-service/overview-hosting-plans.md). Your App Service plan has no apps running for at least 3 days. Consider deleting Learn more about [App Service plans](../app-service/overview-hosting-plans.md). -## Azure Monitor -For Azure Monitor cost optimization suggestions, please see [Optimize costs in Azure Monitor](../azure-monitor/best-practices-cost.md). ++ ## Next steps |
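The cost recommendations above can also be retrieved programmatically. A minimal sketch using the Azure CLI; the resource group name is a placeholder:

```azurecli
# List Advisor cost recommendations for the current subscription.
az advisor recommendation list --category Cost --output table

# Scope to a single resource group (name is hypothetical).
az advisor recommendation list --category Cost --resource-group my-rg --output table
```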
ai-services | Batch Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/batch-inference.md | The response contains the result status, variable information, inference paramet * **interpretation**: This field only appears when a timestamp is detected as anomalous, which contains `variables`, `contributionScore`, `correlationChanges`. -* **contributors**: This is a list containing the contribution score of each variable. Higher contribution scores indicate higher possibility of the root cause. This list is often used for interpreting anomalies and diagnosing the root causes. +* **contributionScore**: This is the contribution score of each variable. Higher contribution scores indicate a higher possibility of the root cause. These scores are often used for interpreting anomalies and diagnosing the root causes. -* **correlationChanges**: This field only appears when a timestamp is detected as anomalous, which included in interpretation. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed. +* **correlationChanges**: This field only appears when a timestamp is detected as abnormal, which is included in the interpretation. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed. -* **changedVariables**: This field will show which variables that have significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes. +* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes. > [!NOTE] > A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives. |
app-service | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md | App Service Environment is a single-tenant deployment of Azure App Service that You must delegate the subnet to `Microsoft.Web/hostingEnvironments`, and the subnet must be empty. -The size of the subnet can affect the scaling limits of the App Service plan instances within the App Service Environment. It's a good idea to use a `/24` address space (256 addresses) for your subnet, to ensure enough addresses to support production scale. +The size of the subnet can affect the scaling limits of the App Service plan instances within the App Service Environment. For production scale, we recommend a `/24` address space (256 addresses) for your subnet. If you plan to scale near the max capacity of 200 instances in your App Service Environment and you plan frequent up/down scale operations, we recommend a `/23` address space (512 addresses) for your subnet. ++If you use a smaller subnet, be aware of the following limitations: ++- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 7 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses). +- For any App Service plan OS/SKU combination used in your App Service Environment like I1v2 Windows, one standby instance is created for every 20 active instances. The standby instances also require IP addresses. +- When scaling App Service plans in the App Service Environment up/down, the number of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. +- Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. +- After scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this can be up to 12 hours. +- If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure. >[!NOTE] > Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If your App Service Environment has for example 2 Windows Container App Service plans each with 25 instances and each with 5 apps running, you will need 300 IP addresses and additional addresses to support horizontal (in/out) scale. The size of the subnet can affect the scaling limits of the App Service plan ins > > Since you have 2 App Service plans, 2 x 150 = 300 IP addresses. -If you use a smaller subnet, be aware of the following limitations: --- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 7 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. 
The minimal size of your subnet is a `/27` address space (32 addresses).-- For any App Service plan OS/SKU combination used in your App Service Environment like I1v2 Windows, one standby instance is created for every 20 active instances. The standby instances also require IP addresses.-- When scaling App Service plans in the App Service Environment up/down, the amount of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned.-- Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released.-- If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure.- ## Addresses App Service Environment has the following network information at creation: |
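As a concrete illustration of the subnet guidance above, here's a hedged sketch of creating a `/24` subnet delegated to `Microsoft.Web/hostingEnvironments`; all names and the address prefix are assumptions:

```azurecli
# Create a /24 subnet delegated to App Service Environment.
# my-rg, my-vnet, ase-subnet, and 10.1.0.0/24 are illustrative values.
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name ase-subnet \
  --address-prefixes 10.1.0.0/24 \
  --delegations Microsoft.Web/hostingEnvironments
```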
app-service | Overview Access Restrictions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md | If you want to deny/block one or more specific IP addresses, you can add the IP ### Restrict access to the advanced tools site -The advanced tools site, which is also known as scm or kudu, has an individual rules collection that you can configure. You can also configure the unmatched rule for this site. A setting allows you to use the rules configured for the main site. +The advanced tools site, which is also known as scm or kudu, has an individual rules collection that you can configure. You can also configure the unmatched rule for this site. A setting allows you to use the rules configured for the main site. You can't selectively allow access to certain advanced tools site features. For example, you can't allow access only to the WebJobs management console in the advanced tools site. ### Deploy through a private endpoint |
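To illustrate the per-site rules collection described above, here's a minimal sketch that adds an allow rule to the advanced tools (scm) site; the app name, rule name, and IP range are assumptions:

```azurecli
# Allow one address range to reach the advanced tools (scm) site only.
az webapp config access-restriction add \
  --resource-group my-rg \
  --name my-app \
  --rule-name AllowMgmtRange \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 100 \
  --scm-site true
```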
app-service | Overview App Gateway Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-app-gateway-integration.md | An external App Service Environment has a public-facing load balancer like multi ## Considerations for a Kudu/SCM site -The SCM site, also known as Kudu, is an admin site that exists for every web app. It isn't possible to reverse proxy the SCM site. You most likely also want to lock it down to individual IP addresses or a specific subnet. +The SCM site, also known as Kudu, is an admin site that exists for every web app. It isn't possible to use a reverse proxy for the SCM site. You most likely also want to lock it down to individual IP addresses or a specific subnet. If you want to use the same access restrictions as the main site, you can inherit the settings by using the following command: |
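A plausible form of that inherit command, based on the current `az webapp config access-restriction` surface; the group and app names are placeholders:

```azurecli
# Make the scm site reuse the main site's access restriction rules.
az webapp config access-restriction set \
  --resource-group my-rg \
  --name my-app \
  --use-same-restrictions-for-scm-site true
```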
app-service | Quickstart Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md | Title: 'Quickstart: Create a web app on Azure Arc' description: Get started with App Service on Azure Arc deploying your first web app. Previously updated : 06/30/2022 Last updated : 10/19/2023 ms.devlang: azurecli az group create --name myResourceGroup --location eastus ## 3. Create an app -The following example creates a Node.js app. Replace `<app-name>` with a name that's unique within your cluster (valid characters are `a-z`, `0-9`, and `-`). To see all supported runtimes, run [`az webapp list-runtimes --os linux`](/cli/azure/webapp). +The following example creates a Node.js app. Replace `<app-name>` with a name that's unique within your cluster (valid characters are `a-z`, `0-9`, and `-`). ++Supported runtimes: ++| Description | Runtime Value for CLI | +|-|-| +| .NET Core 3.1 | DOTNETCORE\|3.1 | +| .NET 5.0 | DOTNETCORE\|5.0 | +| Node JS 12 | NODE\|12-lts | +| Node JS 14 | NODE\|14-lts | +| Python 3.6 | PYTHON\|3.6 | +| Python 3.7 | PYTHON\|3.7 | +| Python 3.8 | PYTHON\|3.8 | +| PHP 7.3 | PHP\|7.3 | +| PHP 7.4 | PHP\|7.4 | +| Ruby 2.5 | RUBY\|2.5 | +| Ruby 2.6 | RUBY\|2.6 | +| Java 8 | JAVA\|8-jre8 | +| Java 11 | JAVA\|11-java11 | +| Tomcat 8.5 | TOMCAT\|8.5-jre8 | +| Tomcat 8.5 | TOMCAT\|8.5-java11 | +| Tomcat 9.0 | TOMCAT\|9.0-jre8 | +| Tomcat 9.0 | TOMCAT\|9.0-java11 | ```azurecli-interactive az webapp create \ |
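A complete form of that create command for App Service on Azure Arc might look like the following sketch; the custom location resource ID is an assumption, `<app-name>` is a placeholder, and the runtime value comes from the table above:

```azurecli
# Create a Node.js app on an Arc-enabled Kubernetes cluster.
# <app-name> and the custom location ID are placeholders.
az webapp create \
  --resource-group myResourceGroup \
  --name <app-name> \
  --custom-location /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ExtendedLocation/customLocations/<custom-location-name> \
  --runtime 'NODE|14-lts'
```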
azure-arc | Conceptual Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md | Title: "Application deployments with GitOps (Flux v2)" description: "This article provides a conceptual overview of GitOps in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 10/04/2023 Last updated : 10/18/2023 Each `fluxConfigurations` resource in Azure is associated with one Flux `GitRepo > > Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, make sure that your clusters connect with Azure within 48 hours. +You can monitor Flux configuration status and compliance in the Azure portal, or use dashboards to monitor status, compliance, resource consumption, and reconciliation activity. For more information, see [Monitor GitOps (Flux v2) status and activity](monitor-gitops-flux-2.md). + ### Version support The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported. |
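The Flux configuration status described above can also be read from the CLI. A minimal sketch, assuming the `k8s-configuration` CLI extension is installed; all names are placeholders:

```azurecli
# Show a Flux configuration, including its compliance state.
az k8s-configuration flux show \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --cluster-type connectedClusters \
  --name my-flux-config
```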
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 10/10/2023 Last updated : 10/20/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." The currently supported versions of the `microsoft.flux` extension are described ### 1.8.0 (October 2023) -> [!NOTE] -> We have started to roll out this release across regions. We'll remove this note once version 1.8.0 is available to all supported regions. - Flux version: [Release v2.1.1](https://github.com/fluxcd/flux2/releases/tag/v2.1.1) - source-controller: v1.1.1 |
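To check which `microsoft.flux` version a cluster is running, a sketch like the following can help; the extension instance is assumed to be named `flux`, and `version` may be empty when auto-upgrade is enabled:

```azurecli
# Inspect the installed Flux extension version on an Arc-enabled cluster.
az k8s-extension show \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --cluster-type connectedClusters \
  --name flux \
  --query "version" -o tsv
```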
azure-arc | Monitor Gitops Flux 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md | Title: Monitor GitOps (Flux v2) status and activity Previously updated : 08/17/2023 Last updated : 10/18/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. # Monitor GitOps (Flux v2) status and activity -We provide dashboards to help you monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2 in your Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters. These JSON dashboards can be imported to Grafana to help you view and analyze your data in real time. You can also set up alerts for this information. +To monitor status and activity related to GitOps with Flux v2 in your Azure Arc-enabled Kubernetes clusters or Azure Kubernetes Service (AKS) clusters, you have several options: -## Prerequisites +- Use the Azure portal to [monitor Flux configurations and resources on individual clusters](#monitor-flux-configurations-in-the-azure-portal). +- Use a Grafana dashboard to [monitor deployment and compliance status](#monitor-deployment-and-compliance-status). +- Use the Flux Control Plane and Flux Cluster Stats dashboards to [monitor resource consumption and reconciliations](#monitor-resource-consumption-and-reconciliations). +- Enable Prometheus scraping from clusters and create your own dashboards using the data in Azure Monitor workspace. +- Create alerts on Azure Monitor using the data available through Prometheus scraping. ++This topic describes some of the ways you can monitor your Flux activity and status. ++## Monitor Flux configurations in the Azure portal ++After you've [created Flux configurations](tutorial-use-gitops-flux2.md#apply-a-flux-configuration) on your cluster, you can view status information in the Azure portal by navigating to a cluster and selecting **GitOps**. ++### View details on cluster compliance and objects ++The **Compliance** state shows whether the current state of the cluster matches the desired state. Possible values: ++- **Compliant**: The cluster's state matches the desired state. +- **Pending**: An updated desired state has been detected, but that state has not yet been reconciled on the cluster. +- **Not Compliant**: The current state doesn't match the desired state. +++To help debug reconciliation issues for a cluster, select **Configuration objects**. Here, you can view logs of each of the configuration objects that Flux creates for each Flux configuration. Select an object name to view its logs. +++To view the Kubernetes objects that have been created as a result of Flux configurations being applied, select **Workloads** in the **Kubernetes resources** section of the cluster's left navigation pane. Here, you can view all details of any resources that have been created on the cluster. ++By default, you can filter by namespace and service name. You can also add any label filter that you may be using in your applications to help narrow down the search. ++### View Flux configuration state and details ++For each Flux configuration, the **State** column indicates whether the Flux configuration object has successfully been created on the cluster. 
++Select any Flux configuration to see its **Overview** page, including the following information: ++- Source commit ID for the last synchronization +- Timestamp of the latest source update +- Status update timestamp (indicating when the latest statistics were obtained) +- Repo URL and branch +- Links to view different kustomizations +++## Use dashboards to monitor GitOps status and activity ++We provide dashboards to help you monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. These JSON dashboards can be imported to Grafana to help you view and analyze your data in real time. You can also set up alerts for this information. To import and use these dashboards, you need: |
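The same compliance information shown in the portal can be listed across configurations from the CLI; a hedged sketch (the `complianceState` property name may vary slightly by CLI version):

```azurecli
# List Flux configurations on a cluster with their compliance state.
az k8s-configuration flux list \
  --resource-group my-rg \
  --cluster-name my-cluster \
  --cluster-type connectedClusters \
  --query "[].{name:name, compliance:complianceState}" -o table
```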
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/release-notes.md | Title: "What's new with Azure Arc-enabled Kubernetes" Previously updated : 08/21/2023 Last updated : 10/19/2023 description: "Learn about the latest releases of Arc-enabled Kubernetes." Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date > > We generally recommend using the most recent versions of the agents. The [version support policy](agent-upgrade.md#version-support-policy) covers the most recent version and the two previous versions (N-2). +## October 2023 ++### Arc agents - Version 1.13.4 ++- Various enhancements and bug fixes ++## September 2023 ++### Arc agents - Version 1.13.1 ++- Various enhancements and bug fixes + ## July 2023 ### Arc agents - Version 1.12.5 |
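To confirm which Arc agent version a connected cluster is running, something like this sketch works, assuming the `connectedk8s` CLI extension and placeholder names:

```azurecli
# Read the agent version from the connected cluster resource.
az connectedk8s show \
  --resource-group my-rg \
  --name my-cluster \
  --query "agentVersion" -o tsv
```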
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 10/10/2023 Last updated : 10/18/2023 To view detailed conditions for a configuration object, select its name. :::image type="content" source="media/tutorial-use-gitops-flux2/portal-configuration-object-conditions.png" alt-text="Screenshot showing condition details for a configuration object in the Azure portal." lightbox="media/tutorial-use-gitops-flux2/portal-configuration-object-conditions.png"::: +For more information, see [Monitor GitOps (Flux v2) status and activity](monitor-gitops-flux-2.md). + ## Work with parameters az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managed * Read more about [configurations and GitOps](conceptual-gitops-flux2.md). * Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md).+* Learn about [monitoring GitOps (Flux v2) status and activity](monitor-gitops-flux-2.md). |
azure-arc | Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md | -This article describes how Arc resource bridge (preview) is upgraded and the two ways upgrade can be performed, using cloud-managed upgrade or manual upgrade. --> [!IMPORTANT] -> Currently, you must request access in order to use cloud-managed upgrade. To do so, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Technical** for **Issue type** and **Azure Arc Resource Bridge** for **Service type**. In the **Summary** field, enter *Requesting access to cloud-managed upgrade*, and select **Resource Bridge Agent issue** for **Problem type**. Complete the rest of the support request and then select **Create**. We'll review your account and contact you to confirm your access to cloud-managed upgrade. +This article describes how Arc resource bridge (preview) is upgraded and the two ways upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades. For more information, refer to the [Private Cloud Providers](#private-cloud-providers) section. ## Prerequisites The upgrade process deploys a new resource bridge using the reserved appliance V Deploying a new resource bridge consists of downloading the appliance image (~3.5 GB) from the cloud, using the image to deploy a new appliance VM, verifying the new resource bridge is running, connecting it to Azure, deleting the old appliance VM, and reserving the old IP to be used for a future upgrade. -Overall, the upgrade generally takes at least 30 minutes, depending on network speeds. A short intermittent downtime may happen during the handoff between the old Arc resource bridge to the new Arc resource bridge. Additional downtime may occur if prerequisites are not met, or if a change in the network (DNS, firewall, proxy, etc.) impacts the Arc resource bridge's ability to communicate. --Upgrading takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach the minimum n-3 supported version. You can check your appliance version by checking the Azure resource of your Arc resource bridge. +Overall, the upgrade generally takes at least 30 minutes, depending on network speeds. A short intermittent downtime may happen during the handoff from the old Arc resource bridge to the new Arc resource bridge. Additional downtime may occur if prerequisites are not met, or if a change in the network (DNS, firewall, proxy, etc.) impacts the Arc resource bridge's network connectivity. There are two ways to upgrade Arc resource bridge: cloud-managed upgrades managed by Microsoft, or manual upgrades where Azure CLI commands are performed by an admin. ## Cloud-managed upgrade -Arc resource bridge is a Microsoft-managed product. Microsoft manages upgrades of Arc resource bridge through cloud-managed upgrade. Cloud-managed upgrade allows Microsoft to ensure that the resource bridge remains on a supported version. +Arc resource bridge is a Microsoft-managed product. Microsoft manages upgrades of Arc resource bridge through cloud-managed upgrade. Cloud-managed upgrade allows Microsoft to ensure that the resource bridge remains on a supported version. > [!IMPORTANT]-> As noted earlier, cloud-managed upgrades are currently available only to customers who request access by opening a support request. 
After the private cloud provider announces General Availability, cloud-managed upgrade will become the default experience and enabled for all customers within n-3 supported versions. +> Currently, your appliance version must be on 1.0.15 and you must request access in order to use cloud-managed upgrade. To do so, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Technical** for **Issue type** and **Azure Arc Resource Bridge** for **Service type**. In the **Summary** field, enter *Requesting access to cloud-managed upgrade*, and select **Resource Bridge Agent issue** for **Problem type**. Complete the rest of the support request and then select **Create**. We'll review your account and contact you to confirm your access to cloud-managed upgrade. + Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status may switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`. To check the status of a cloud-managed upgrade, check the Azure resource in ARM or run the following Azure CLI command from the management machine: az arcappliance show --resource-group [REQUIRED] --name [REQUIRED] ## Manual upgrade -Arc resource bridge can be manually upgraded from the management machine. You must meet all upgrade prerequisites before attempting to upgrade. The management machine must have the kubeconfig and appliance configuration files stored locally. Manual upgrade generally takes between 30-90 minutes, depending on network speeds. +Arc resource bridge can be manually upgraded from the management machine. You must meet all upgrade prerequisites before attempting to upgrade. The management machine must have the kubeconfig and appliance configuration files stored locally. Manual upgrade generally takes between 30 and 90 minutes, depending on network speeds. The upgrade command takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach the minimum n-3 supported version. You can check your appliance version by checking the Azure resource of your Arc resource bridge. To manually upgrade your Arc resource bridge, make sure you have installed the latest `az arcappliance` CLI extension by running the extension upgrade command from the management machine: For example, to upgrade a resource bridge on Azure Stack HCI, run: `az arcapplia ## Private cloud providers -Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades while they are in public preview. After the private cloud provider announces General Availability, cloud-managed upgrade will become the default experience and enabled for all customers within n-3 supported versions. +Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider. -For Arc-enabled VMware, both cloud-managed upgrade and manual upgrade are supported. +For Arc-enabled VMware, manual upgrade is available and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. 
When Arc-enabled VMware announces General Availability, appliances on 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded. -[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until Arc resource bridge version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For additional upgrades afterwards, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). +[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). -For Arc-enabled SCVMM, the upgrade feature isn't available yet. Review the steps for [performing the disaster recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnect pre-existing Azure resources. +For Arc-enabled SCVMM, the upgrade feature isn't currently available. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources. ## Version releases The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there is a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, refer to the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. -## Notification and upgrade availability --If your Arc resource bridge is at n-3 version, then you may receive an email notification letting you know that your resource bridge may soon be out of support once the next version is released. If you receive this notification, upgrade the resource bridge as soon as possible to allow debug time for any issues with manual upgrade, or submit a support ticket if cloud-managed upgrade was unable to upgrade your resource bridge. 
--To check if your Arc resource bridge has an upgrade available, run the command: --```azurecli -az arcappliance get-upgrades --resource-group [REQUIRED] --name [REQUIRED] -``` --To see the current version of an Arc resource bridge appliance, run `az arcappliance show` or check the Azure resource. --To find the latest released version of Arc resource bridge, check the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. - ## Supported versions Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.10, then the typical n-3 supported versions are: If a resource bridge is not upgraded to one of the supported versions (n-3), the If an Arc resource bridge is unable to be upgraded to a supported version, you must delete it and deploy a new resource bridge. Depending on which private cloud product you're using, there may be other steps required to reconnect the resource bridge to existing resources. For details, check the partner product's Arc resource bridge recovery documentation. +## Notification and upgrade availability ++If your Arc resource bridge is at an n-3 version, then you may receive an email notification letting you know that your resource bridge may soon be out of support once the next version is released. If you receive this notification, upgrade the resource bridge as soon as possible to allow debug time for any issues with manual upgrade, or submit a support ticket if cloud-managed upgrade was unable to upgrade your resource bridge. ++To check if your Arc resource bridge has an upgrade available, run the command: ++```azurecli +az arcappliance get-upgrades --resource-group [REQUIRED] --name [REQUIRED] +``` ++To see the current version of an Arc resource bridge appliance, run `az arcappliance show` or check the Azure resource of your Arc resource bridge. ++ ## Next steps - Learn about [Arc resource bridge maintenance operations](maintenance.md). |
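Putting the manual flow together, a hedged sketch of checking for and applying an upgrade; the provider subcommand (`vmware` here) and the config file path are assumptions that depend on your environment:

```azurecli
# Check whether an upgrade is available for the resource bridge.
az arcappliance get-upgrades --resource-group my-rg --name my-appliance

# Apply the upgrade manually; subcommand and config file are environment-specific.
az arcappliance upgrade vmware --config-file ./my-appliance-appliance.yaml
```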
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md | Title: Install Arc agent at scale for your SCVMM VMs description: Learn how to enable guest management at scale for Arc-enabled SCVMM VMs. ---+++ Last updated 09/18/2023 keywords: "VMM, Arc, Azure" -#Customer intent: As an IT infra admin, I want to install arc agents to use Azure management services for SCVMM VMs. +#Customer intent: As an IT infrastructure admin, I want to install arc agents to use Azure management services for SCVMM VMs. # Install Arc agents at scale for Arc-enabled SCVMM VMs -In this article, you will learn how to install Arc agents at scale for SCVMM VMs and use Azure management capabilities. +In this article, you learn how to install Arc agents at scale for SCVMM VMs and use Azure management capabilities. ++>[!NOTE] +>This article is applicable only if you are running: +>- SCVMM 2022 UR1 or later +>- SCVMM 2019 UR5 or later +>- VMs running Windows Server 2012 R2, 2016, 2019, 2022, Windows 10, and Windows 11 +>For other SCVMM versions, Linux VMs or Windows VMs running WS 2012 or earlier, [install Arc agents through the script](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script). ## Prerequisites Ensure the following before you install Arc agents at scale for SCVMM VMs: - The resource bridge must be in a running state.-- The SCVMM management server must be in connected state.+- The SCVMM management server must be in a connected state. - The user account must have permissions listed in Azure Arc SCVMM Administrator role. - All the target machines are: - Powered on and the resource bridge has network connectivity to the host running the VM. - Running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). - Able to connect through the firewall to communicate over the internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. - >[!Note] - > If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo`, and `add <username> ALL=(ALL) NOPASSWD:ALL` at the end of the file. Ensure you replace `<username>`.<br> <br> If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. - ## Install Arc agents at scale from portal An admin can install agents for multiple machines from the Azure portal if the machines share the same administrator credentials. An admin can install agents for multiple machines from the Azure portal if the m >[!Note] > For Windows VMs, the account must be part of the local administrator group; and for Linux VM, it must be a root account. - ## Next steps [Recover from accidental deletion of resource bridge virtual machine](disaster-recovery.md). |
azure-arc | Install Arc Agents Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script.md | + + Title: Install Arc agent using a script for SCVMM VMs +description: Learn how to enable guest management using a script for Arc-enabled SCVMM VMs. + Last updated : 10/19/2023+++++++#Customer intent: As an IT infrastructure admin, I want to install Arc agents to use Azure management services for SCVMM VMs. ++++# Install Arc agents using a script ++In this article, you learn how to install Arc agents on Arc-enabled SCVMM VMs using a script. ++## Prerequisites ++Ensure the following before you install Arc agents using a script for SCVMM VMs: ++- The resource bridge must be in a running state. +- The SCVMM management server must be in a connected state. +- The user account must have permissions listed in Azure Arc SCVMM Administrator role. +- The target machine: + - Is powered on and the resource bridge has network connectivity to the host running the VM. + - Is running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). + - Is able to connect through the firewall to communicate over the Internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. + - Has Azure CLI [installed](https://learn.microsoft.com/cli/azure/install-azure-cli). + - Has the Arc agent installation script downloaded from [here](https://download.microsoft.com/download/7/1/6/7164490e-6d8c-450c-8511-f8191f6ec110/arcscvmm-enable-guest-management.ps1). ++>[!NOTE] +>- If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` at the end of the file. Ensure you replace `<username>`. +>- If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. ++## Steps to install Arc agents using a script ++1. Log in to the target VM as an administrator. +2. Run the Azure CLI with the `az` command from either Windows Command Prompt or PowerShell. +3. Log in to your Azure account in the Azure CLI by using `az login --use-device-code`. +4. Run the downloaded script *arcscvmm-enable-guest-management.ps1*. The `vmmServerId` parameter should denote your VMM server's ARM ID. ++```azurecli +./arcscvmm-enable-guest-management.ps1 -vmmServerId '/subscriptions/<subscriptionId>/resourceGroups/<rgName>/providers/Microsoft.ScVmm/vmmServers/<vmmServerName>' +``` ++## Next steps ++[Manage VM extensions to use Azure management services](https://learn.microsoft.com/azure/azure-arc/servers/manage-vm-extensions). |
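After the script completes, one way to verify that the agent connected is to check the hybrid machine resource; a sketch assuming the `connectedmachine` CLI extension and placeholder names:

```azurecli
# Confirm the Arc agent is connected for the VM.
az connectedmachine show \
  --resource-group my-rg \
  --name my-scvmm-vm \
  --query "{name:name, status:status, agentVersion:agentVersion}" -o table
```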
azure-functions | Analyze Telemetry Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/analyze-telemetry-data.md | By default, the data collected from your function app is stored in Application I To be able to view Application Insights data from a function app, you must have at least Contributor role permissions on the function app. You also need to have the [Monitoring Reader permission](../azure-monitor/roles-permissions-security.md#monitoring-reader) on the Application Insights instance. You have these permissions by default for any function app and Application Insights instance that you create. -To learn more about data retention and potential storage costs, see [Data collection, retention, and storage in Application Insights](../azure-monitor/app/data-retention-privacy.md). +To learn more about data retention and potential storage costs, see [Data collection, retention, and storage in Application Insights](/previous-versions/azure/azure-monitor/app/data-retention-privacy). ## Viewing telemetry in Monitor tab |
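Beyond the portal views, collected telemetry can be queried directly. A minimal sketch using the `application-insights` CLI extension; the app name and the KQL query are illustrative:

```azurecli
# Run a KQL query against the function app's Application Insights resource.
az monitor app-insights query \
  --app my-appinsights \
  --resource-group my-rg \
  --analytics-query "requests | summarize count() by name | top 5 by count_"
```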
azure-functions | Create First Function Cli Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md | If desired, you can skip to [Run the function locally](#run-the-function-locally #### Function.java *Function.java* contains a `run` method that receives request data in the `request` variable, an [HttpRequestMessage](/java/api/com.microsoft.azure.functions.httprequestmessage) that's decorated with the [HttpTrigger](/java/api/com.microsoft.azure.functions.annotation.httptrigger) annotation, which defines the trigger behavior. - The response message is generated by the [HttpResponseMessage.Builder](/java/api/com.microsoft.azure.functions.httpresponsemessage.builder) API. #### pom.xml Settings for the Azure resources created to host your app are defined in the **configuration** element of the plugin with a **groupId** of `com.microsoft.azure` in the generated pom.xml file. For example, the configuration element below instructs a Maven-based deployment to create a function app in the `java-functions-group` resource group in the `westus` region. The function app itself runs on Windows hosted in the `java-functions-app-service-plan` plan, which by default is a serverless Consumption plan. - You can change these settings to control how resources are created in Azure, such as by changing `runtime.os` from `windows` to `linux` before initial deployment. For a complete list of settings supported by the Maven plug-in, see the [configuration details](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details). #### FunctionTest.java |
azure-functions | Dotnet Isolated In Process Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md | Use the following table to compare feature and functional differences between th <sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md). -<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, Service Bus triggers do not yet support message settlement scenarios. +<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). <sup>5</sup> ASP.NET Core types are not supported for .NET Framework. |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | You can change your application to 64-bit with the following command, using the az functionapp config set -g <group_name> -n <app_name> --use-32bit-worker-process false ``` -To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following examples shows a configuration for publishing to a Windows 64-bit function app. +To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following example shows a configuration for publishing to a Windows 64-bit function app. ```xml <PropertyGroup> Each trigger and binding extension also has its own minimum version requirement, |-|-|-|-| | [Azure Blobs][blob-sdk-types] | **Generally Available** | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Queues][queue-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | -| [Azure Service Bus][servicebus-sdk-types] | **Generally Available**<sup>2</sup> | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | +| [Azure Service Bus][servicebus-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | -| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | +| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>2</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | Each trigger and binding extension also has its own minimum version requirement, <sup>1</sup> For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding. See [Register Azure clients](#register-azure-clients) for an example of how to do this with dependency injection. -<sup>2</sup> The Service Bus trigger does not yet support message settlement scenarios for the isolated model. --<sup>3</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario. +<sup>2</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario. > [!NOTE] > When using [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data, SDK types for the trigger itself cannot be used. |
azure-functions | Functions Bindings Service Bus Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md | This example shows a [C# function](dotnet-isolated-process-guide.md) that receiv :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/ServiceBus/ServiceBusReceivedMessageFunctions.cs" id="docsnippet_servicebus_readbatch"::: +This example shows a [C# function](dotnet-isolated-process-guide.md) that receives multiple Service Bus queue messages, writes them to the logs, and then settles the messages as completed: ++ # [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that reads [message metadata](#message-metadata) and |
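The batch-receive example referenced above is pulled in from a sample repository, so only its description appears in this digest. A minimal sketch of the pattern under the isolated model might look like the following; the queue name, connection setting, and class scaffolding are assumptions:

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class SettleBatchFunction
{
    private readonly ILogger<SettleBatchFunction> _logger;

    public SettleBatchFunction(ILogger<SettleBatchFunction> logger) => _logger = logger;

    [Function(nameof(SettleBatchFunction))]
    public async Task Run(
        // Auto-completion is disabled so the function settles each message itself.
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection",
            AutoCompleteMessages = false, IsBatched = true)]
        ServiceBusReceivedMessage[] messages,
        ServiceBusMessageActions messageActions)
    {
        foreach (var message in messages)
        {
            _logger.LogInformation("Message body: {Body}", message.Body);
            await messageActions.CompleteMessageAsync(message);
        }
    }
}
```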
azure-functions | Functions Bindings Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md | Functions 1.x exposed types from the deprecated [Microsoft.ServiceBus.Messaging] # [Extension 5.x+](#tab/extensionv5/isolated-process) -The isolated worker process supports parameter types according to the tables below. Support for binding to types from [Azure.Messaging.ServiceBus] is in preview. Current support does not yet include message settlement scenarios for triggers. +The isolated worker process supports parameter types according to the tables below. **Service Bus trigger** The `clientRetryOptions` settings only apply to interactions with the Service Bu |**maxConcurrentSessions**|`8`|The maximum number of sessions that can be handled concurrently per scaled instance. This setting is used only when the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) is set to `true`. This setting only applies for functions that receive a single message at a time.| |**maxMessageBatchSize**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.| |**minMessageBatchSize**<sup>1</sup>|`1`|The minimum number of messages desired in a batch. The minimum applies only when the function is receiving multiple messages and must be less than `maxMessageBatchSize`. <br/> The minimum size isn't strictly guaranteed. A partial batch is dispatched when a full batch can't be prepared before the `maxBatchWaitTime` has elapsed.|-|**maxBatchWaitTime**<sup>1</sup>|`00:00:30`|The maximum interval that the trigger should wait to fill a batch before invoking the function. The wait time is only considered when `minMessageBatchSize` is larger than 1 and is ignored otherwise. If less than `minMessageBatchSize` messages were available before the wait time elapses, the function is invoked with a partial batch. The longest allowed wait time is 50% of the entity message lock duration, meaning the maximum allowed is 2 minutes and 30 seconds. Otherwise, you may get lock exceptions. <br/><br/>**NOTE:** This interval is not a strict guarantee for the exact timing on which the function is invoked. There is a small magin of error due to timer precision.| +|**maxBatchWaitTime**<sup>1</sup>|`00:00:30`|The maximum interval that the trigger should wait to fill a batch before invoking the function. The wait time is only considered when `minMessageBatchSize` is larger than 1 and is ignored otherwise. If less than `minMessageBatchSize` messages were available before the wait time elapses, the function is invoked with a partial batch. The longest allowed wait time is 50% of the entity message lock duration, meaning the maximum allowed is 2 minutes and 30 seconds. Otherwise, you may get lock exceptions. <br/><br/>**NOTE:** This interval is not a strict guarantee for the exact timing on which the function is invoked. There is a small margin of error due to timer precision.| |**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the session will be closed and the function will attempt to process another session. |**enableCrossEntityTransactions**|`false`|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.| |
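To make the batch options in the table concrete, here's a hedged sketch of how they might be set under the `serviceBus` section of host.json for extension 5.x; the specific values are illustrative only:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxMessageBatchSize": 500,
      "minMessageBatchSize": 20,
      "maxBatchWaitTime": "00:00:15",
      "maxConcurrentSessions": 8
    }
  }
}
```

With these assumed values, the trigger waits up to 15 seconds for at least 20 messages before dispatching a partial batch, which stays well under the 50% lock-duration ceiling the table describes.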
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | The following snippet shows the unit property object that is associated with the You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package]. +## Next steps ++> [!div class="nextstepaction"] +> [Tutorial: Creating a Creator indoor map] + :::zone-end :::zone pivot="drawing-package-v2" When finished, select the **Create + Download** button to download a copy of the :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/review-download.png" alt-text="Screenshot showing the manifest JSON."::: -## Step 4: Prepare the drawing package --You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files need to be compressed into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the drawing package. All other files can be in any directory of the drawing package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package v2]. -- ## Next steps > [!div class="nextstepaction"]-> [Tutorial: Creating a Creator indoor map] +> [Create indoor map with the onboarding tool] + <!-- Drawing Package v1 links --> [sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%201.0 |
azure-maps | How To Use Best Practices For Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md | The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be * An [Azure Maps account] * A [subscription key] -For more information about the coverage of the Route service, see the [Routing Coverage]. +For more information about the coverage of the Route service, see [Routing Coverage]. This article uses the [Postman] application to build REST calls, but you can choose any API development environment. ## Choose between Route Directions and Matrix Routing -The Route Directions APIs return instructions including the travel time and the coordinates for a route path. The Route Matrix API lets you calculate the travel time and distances for a set of routes defined by origin and destination locations. For every given origin, the Matrix API calculates the cost (travel time and distance) of routing from that origin to every given destination. These API allow you to specify parameters such as the desired departure time, arrival times, and the vehicle type, like car or truck. They all use real-time or predictive traffic data accordingly to return the most optimal routes. +The Route Directions APIs return instructions including the travel time and the coordinates for a route path. The Route Matrix API lets you calculate the travel time and distances for a set of routes defined by origin and destination locations. For every given origin, the Matrix API calculates the cost (travel time and distance) of routing from that origin to every given destination. These APIs allow you to specify parameters such as the desired departure time, arrival times, and the vehicle type, like car or truck. They all use real-time or predictive traffic data accordingly to return the most optimal routes. Consider calling Route Directions APIs if your scenario is to: The response contains the sections that are suitable for traffic along the given This option can be used to color the sections when rendering the map, as in the following image: -![Colored sections rendered on map](media/how-to-use-best-practices-for-routing/show-traffic-sections-img.png) ## Calculate and optimize a multi-stop route Azure Maps currently provides two forms of route optimizations: * Traveling salesman optimization, which changes the order of the waypoints to obtain the best order to visit each stop -For multi-stop routing, up to 150 waypoints may be specified in a single route request. The starting and ending coordinate locations can be the same, as would be the case with a round trip. But you need to provide at least one more waypoint to make the route calculation. Waypoints can be added to the query in-between the origin and destination coordinates. +For multi-stop routing, up to 150 waypoints can be specified in a single route request. The starting and ending coordinate locations can be the same, as would be the case with a round trip. But you need to provide at least one more waypoint to make the route calculation. Waypoints can be added to the query in-between the origin and destination coordinates. If you want to optimize the best order to visit the given waypoints, then you need to specify **computeBestOrder=true**. This scenario is also known as the traveling salesman optimization problem. The response describes the path length to be 140,851 meters, and that it would t The following image illustrates the path resulting from this query. 
This path is one possible route. It's not the optimal path based on time or distance. -![Non-optimized image](media/how-to-use-best-practices-for-routing/non-optimized-image-img.png) This route waypoint order is: 0, 1, 2, 3, 4, 5, and 6. The response describes the path length to be 91,814 meters, and that it would ta The following image illustrates the path resulting from this query. -![Optimized image](media/how-to-use-best-practices-for-routing/optimized-image-img.png) The optimal route has the following waypoint order: 0, 5, 1, 2, 4, 3, and 6. The optimal route has the following waypoint order: 0, 5, 1, 2, 4, 3, and 6. ## Calculate and bias alternative routes using supporting points -You might have situations where you want to reconstruct a route to calculate zero or more alternative routes for a reference route. For example, you may want to show customers alternative routes that pass your retail store. In this case, you need to bias a location using supporting points. Here are the steps to bias a location: +You might have situations where you want to reconstruct a route to calculate zero or more alternative routes for a reference route. For example, you can show customers alternative routes that pass your retail store. In this case, you need to bias a location using supporting points. Here are the steps to bias a location: 1. Calculate a route as-is and get the path from the route response 2. Use the route path to find the desired locations along or near the route path. For example, you can use the [Point of Interest] request or query your own data in your database. When calling [Post Route Directions], you can set the minimum deviation time or The following image is an example of rendering alternative routes with specified deviation limits for the time and the distance. -![Alternative routes](media/how-to-use-best-practices-for-routing/alternative-routes-img.png) ## Use the Routing service in a web app |
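As a worked illustration of the multi-stop guidance above, the following is a hedged sketch of a Route Directions request that asks the service to optimize the waypoint order for a round trip. The coordinates are placeholder values, and the subscription-key placeholder follows the article's Postman convention:

```http
GET https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.606544,-122.333343:47.759892,-122.291888:47.670682,-122.120415:47.606544,-122.333343&computeBestOrder=true&subscription-key={Your-Azure-Maps-Subscription-key}
```

Because `computeBestOrder=true` is set, the response reports the optimized visit order for the in-between waypoints alongside the usual route legs.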
azure-monitor | Proactive Failure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md | -After setting up [Application Insights for your project](../app/app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of Failure Anomalies takes 24 hours to learn the normal behavior of your app, before it is switched on and can send alerts. +After setting up [Application Insights for your project](../app/app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of Failure Anomalies takes 24 hours to learn the normal behavior of your app, before it's switched on and can send alerts. Here's a sample alert: :::image type="content" source="./media/proactive-failure-diagnostics/013.png" alt-text="Sample smart detection alert showing cluster analysis around failure." lightbox="./media/proactive-failure-diagnostics/013.png"::: -The alert details will tell you: +The alert details tell you: * The failure rate compared to normal app behavior. * How many users are affected - so you know how much to worry. Ordinary [metric alerts](./alerts-log.md) tell you there might be a problem. But ## How it works Smart Detection monitors the data received from your app, and in particular the failure rates. This rule counts the number of requests for which the `Successful request` property is false, and the number of dependency calls for which the `Successful call` property is false. For requests, by default, `Successful request == (resultCode < 400)` (unless you have written custom code to [filter](../app/api-filtering-sampling.md#filtering) or generate your own [TrackRequest](../app/api-custom-events-metrics.md#trackrequest) calls). -Your app's performance has a typical pattern of behavior. Some requests or dependency calls will be more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. +Your app's performance has a typical pattern of behavior. Some requests or dependency calls are more prone to failure than others; and the overall failure rate may go up as load increases. Smart Detection uses machine learning to find these anomalies. As data comes into Application Insights from your web app, Smart Detection compares the current behavior with the patterns seen over the past few days. If an abnormal rise in failure rate is observed by comparison with previous performance, an analysis is triggered. When an analysis is triggered, the service performs a cluster analysis on the failed request, to try to identify a pattern of values that characterize the failures. -In the example above, the analysis has discovered that most failures are about a specific result code, request name, Server URL host, and role instance. +In the previous example, the analysis has discovered that most failures are about a specific result code, request name, Server URL host, and role instance. When your service is instrumented with these calls, the analyzer looks for an exception and a dependency failure that are associated with requests in the cluster it has identified, together with an example of any trace logs associated with those requests. Like the [alerts you set manually](./alerts-log.md), you can inspect the state o ### Alert logic details -The alerts are triggered by our proprietary machine learning algorithm so we can't share the exact implementation details. 
With that said, we understand that you sometimes need to know more about how the underlying logic works. The primary factors that are evaluated to determine if an alert should be triggered are: +Our proprietary machine learning algorithm triggers the alerts, so we can't share the exact implementation details. With that said, we understand that you sometimes need to know more about how the underlying logic works. The primary factors that are evaluated to determine if an alert should be triggered are: * Analysis of the failure percentage of requests/dependencies in a rolling time window of 20 minutes. * A comparison of the failure percentage of the last 20 minutes to the rate in the last 40 minutes and the past seven days, and looking for significant deviations that exceed X-times that standard deviation. * Using an adaptive limit for the minimum failure percentage, which varies based on the app's volume of requests/dependencies.-* There is logic that can automatically resolve the fired alert monitor condition, if the issue is no longer detected for 8-24 hours. +* There's logic that can automatically resolve the fired alert monitor condition, if the issue is no longer detected for 8-24 hours. Note: in the current design, a notification or action will not be sent when a Smart Detection alert is resolved. You can check if a Smart Detection alert was resolved in the Azure portal. ## Configure alerts You can disable the Smart Detection alert rule from the portal or using Azure Resource Manager ([see template example](./proactive-arm-config.md)). -This alert rule is created with an associated [Action Group](./action-groups.md) named "Application Insights Smart Detection" that contains email and webhook actions, and can be extended to trigger additional actions when the alert fires. +This alert rule is created with an associated [Action Group](./action-groups.md) named "Application Insights Smart Detection" that contains email and webhook actions, and can be extended to trigger more actions when the alert fires. > [!NOTE] > Email notifications sent from this alert rule are now sent by default to users associated with the subscription's Monitoring Reader and Monitoring Contributor roles. More information on this is available [here](./proactive-email-notification.md). > Notifications sent from this alert rule follow the [common alert schema](./alerts-common-schema.md). > -Open the Alerts page. Failure Anomalies alert rules are included along with any alerts that you have set manually, and you can see whether it is currently in the alert state. +Open the Alerts page. Failure Anomalies alert rules are included along with any alerts that you have set manually, and you can see whether it's currently in the alert state. :::image type="content" source="./media/proactive-failure-diagnostics/021.png" alt-text="On the Application Insights resource page, click Alerts tile, then Manage alert rules." lightbox="./media/proactive-failure-diagnostics/021.png"::: Notice that if you delete an Application Insights resource, the associated Failu ## Triage and diagnose an alert -An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there is some problem with your app or its environment. +An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there's some problem with your app or its environment. 
-To investigate further, click on 'View full details in Application Insights' the links in this page will take you straight to a [search page](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exception, dependency, or traces. +To investigate further, click on 'View full details in Application Insights'. The links in this page take you straight to a [search page](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exceptions, dependencies, or traces. You can also open the [Azure portal](https://portal.azure.com), navigate to the Application Insights resource for your app, and open the Failures page. -Clicking on 'Diagnose failures' will help you get more details and resolve the issue. +Clicking on 'Diagnose failures' helps you get more details and resolve the issue. :::image type="content" source="./media/proactive-failure-diagnostics/051.png" alt-text="Diagnostic search." lightbox="./media/proactive-failure-diagnostics/051.png#lightbox"::: -From the percentage of requests and number of users affected, you can decide how urgent the issue is. In the example above, the failure rate of 78.5% compares with a normal rate of 2.2%, indicates that something bad is going on. On the other hand, only 46 users were affected. If it was your app, you'd be able to assess how serious that is. +From the percentage of requests and number of users affected, you can decide how urgent the issue is. In the previous example, the failure rate of 78.5%, compared with a normal rate of 2.2%, indicates that something bad is going on. On the other hand, only 46 users were affected. If it was your app, you'd be able to assess how serious that is. In many cases, you will be able to diagnose the problem quickly from the request name, exception, dependency failure, and trace data provided. Smart Detection of Failure Anomalies complements other similar but distinct feat ## If you receive a Smart Detection alert *Why have I received this alert?* -* We detected an abnormal rise in failed requests rate compared to the normal baseline of the preceding period. After analysis of the failures and associated application data, we think that there is a problem that you should look into. +* We detected an abnormal rise in the failed request rate compared to the normal baseline of the preceding period. After analysis of the failures and associated application data, we think that there's a problem that you should look into. *Does the notification mean I definitely have a problem?* * We try to alert on app disruption or degradation, but only you can fully understand the semantics and the impact on the app or users. -*So, you are looking at my application data?* +*So, you're looking at my application data?* -* No. The service is entirely automatic. Only you get the notifications. Your data is [private](../app/data-retention-privacy.md). +* No. The service is entirely automatic. Only you get the notifications. Your data is [private](/previous-versions/azure/azure-monitor/app/data-retention-privacy). *Do I have to subscribe to this alert?* |
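Because the detection rule counts requests whose `Successful request` property is false, and the article notes that custom `TrackRequest` calls can override the default `resultCode < 400` rule, here's a hedged C# sketch of marking a business-level failure explicitly; the request name, timing values, and `telemetryClient` instance are assumptions:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

// Assumes an existing, configured TelemetryClient named telemetryClient.
var telemetry = new RequestTelemetry(
    name: "GET /checkout",
    startTime: DateTimeOffset.UtcNow,
    duration: TimeSpan.FromMilliseconds(230),
    responseCode: "200",
    success: false); // counted as a failure by Smart Detection despite HTTP 200

telemetryClient.TrackRequest(telemetry);
```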
azure-monitor | Smart Detection Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/smart-detection-performance.md | Emails about smart detection performance anomalies are limited to one email per ## Frequently asked questions * *So, Microsoft staff look at my data?*- * No. The service is entirely automatic. Only you get the notifications. Your data is [private](../app/data-retention-privacy.md). + * No. The service is entirely automatic. Only you get the notifications. Your data is [private](/previous-versions/azure/azure-monitor/app/data-retention-privacy). * *Do you analyze all the data collected by Application Insights?* * Currently, we analyze request response time, dependency response time, and page load time. Analysis of other metrics is on our backlog looking forward. |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | If you set any of these values yourself, consider removing the relevant line fro To avoid hitting the data rate limit, use [sampling](./sampling.md). -To determine how long data is kept, see [Data retention and privacy](./data-retention-privacy.md). +To determine how long data is kept, see [Data retention and privacy](/previous-versions/azure/azure-monitor/app/data-retention-privacy). ## Reference docs |
azure-monitor | Api Filtering Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md | Insert a JavaScript telemetry initializer, if needed. For more information on th #### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript) Insert a telemetry initializer by adding the onInit callback function in the [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration):-+<!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\javascript-feature-extensions.md --> ```html <script type="text/javascript">-!function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{ +!(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: 
"+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({ src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js", crossOrigin: "anonymous", onInit: function (sdk) { |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | From [client webpages](./javascript-sdk.md): > [!Note] > For some applications, such as single-page applications (SPAs), the duration may not be recorded and will default to 0. - For more information, see [Data collection, retention, and storage in Application Insights](./data-retention-privacy.md). + For more information, see [Data collection, retention, and storage in Application Insights](/previous-versions/azure/azure-monitor/app/data-retention-privacy). From other sources, if you configure them: We recommend that you use our SDKs and use the [SDK API](./api-custom-events-met Most Application Insights data has a latency of under 5 minutes. Some data can take longer, which is typical for larger log files. See the [Application Insights service-level agreement](https://azure.microsoft.com/support/legal/sla/application-insights/v1_2/). +### How does Application Insights handle data collection, retention, storage, and privacy? ++#### Collection ++Application Insights collects telemetry about your app, including web server telemetry, web page telemetry, and performance counters. This data can be used to monitor your app's performance, health, and usage. You can select the location when you [create a new Application Insights resource](./create-workspace-resource.md). ++#### Retention and Storage ++Data is sent to an Application Insights [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). You can choose the retention period for raw data, from 30 to 730 days. Aggregated data is retained for 90 days, and debug snapshots are retained for 15 days. ++#### Privacy ++Application Insights doesn't handle sensitive data by default, as long as you don't put sensitive data in URLs as plain text and ensure your custom code doesn't collect personal or other sensitive details. During development and testing, check the sent data in your IDE and browser's debugging output windows. ++For archived information on this topic, see [Data collection, retention, and storage in Application Insights](/previous-versions/azure/azure-monitor/app/data-retention-privacy). + ## Help and support ### Azure technical support |
azure-monitor | Application Insights Asp Net Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md | Enable-ApplicationInsightsMonitoring -ConnectionString 'InstrumentationKey=00000 ### [Detailed instructions](#tab/detailed-instructions) This tab describes how to onboard to the PowerShell Gallery and download the ApplicationMonitor module.-Included are the most common parameters that you'll need to get started. +Included are the most common parameters that you need to get started. We've also provided manual download instructions in case you don't have internet access. ### Get a connection string -To get started, you need an connection string. For more information, see [Connection strings](sdk-connection-string.md). +To get started, you need a connection string. For more information, see [Connection strings](sdk-connection-string.md). [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1 ``` -These instructions were written and tested on a computer running Windows 10 and the versions listed above. +These instructions were written and tested on a computer running Windows 10 and the preceding versions. ### Prerequisites for PowerShell Gallery -These steps will prepare your server to download modules from PowerShell Gallery. +These steps prepare your server to download modules from PowerShell Gallery. > [!NOTE] > PowerShell Gallery is supported on Windows 10, Windows Server 2016, and PowerShell 6+. These steps will prepare your server to download modules from PowerShell Gallery - `-Proxy`. Specifies a proxy server for the request. - `-Force`. Bypasses the confirmation prompt. - You'll receive this prompt if NuGet isn't set up: + You receive this prompt if NuGet isn't set up: ```output NuGet provider is required to continue These steps will prepare your server to download modules from PowerShell Gallery - Optional parameter: - `-Proxy`. Specifies a proxy server for the request. - You'll receive this prompt if PowerShell Gallery isn't trusted: + You receive this prompt if PowerShell Gallery isn't trusted: ```output Untrusted repository These steps will prepare your server to download modules from PowerShell Gallery [Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): ``` - You can confirm this change and audit all PSRepositories by running the `Get-PSRepository` command. + You can confirm this change and audit all `PSRepositories` by running the `Get-PSRepository` command. 4. Install the newest version of PowerShellGet. - Description: This module contains the tooling used to get other modules from PowerShell Gallery. Version 1.0.0.1 ships with Windows 10 and Windows Server. Version 1.6.0 or higher is required. To determine which version is installed, run the `Get-Command -Module PowerShellGet` command. These steps will prepare your server to download modules from PowerShell Gallery - `-Proxy`. Specifies a proxy server for the request. - `-Force`. Bypasses the "already installed" warning and installs the latest version. - You'll receive this error if you're not using the newest version of PowerShellGet: + You receive this error if you're not using the newest version of PowerShellGet: ```output Install-Module : A parameter cannot be found that matches parameter name 'AllowPrerelease'. 
These steps will prepare your server to download modules from PowerShell Gallery FullyQualifiedErrorId : NamedParameterNotFound,Install-Module ``` -5. Restart PowerShell. You can't load the new version in the current session. New PowerShell sessions will load the latest version of PowerShellGet. +5. Restart PowerShell. You can't load the new version in the current session. New PowerShell sessions load the latest version of PowerShellGet. ### Download and install the module via PowerShell Gallery -These steps will download the Az.ApplicationMonitor module from PowerShell Gallery. +These steps download the Az.ApplicationMonitor module from PowerShell Gallery. 1. Ensure that all prerequisites for PowerShell Gallery are met. 2. Run PowerShell as Admin with an elevated execution policy. If for any reason you can't connect to the PowerShell module, you can manually d 3. Under **Installation Options**, select **Manual Download**. #### Option 1: Install into a PowerShell modules directory-Install the manually downloaded PowerShell module into a PowerShell directory so it will be discoverable by PowerShell sessions. +Install the manually downloaded PowerShell module into a PowerShell directory so it's discoverable by PowerShell sessions. For more information, see [Installing a PowerShell Module](/powershell/scripting/developer/module/installing-a-powershell-module). For more information, see [Installing a PowerShell Module](/powershell/scripting ``` #### Option 2: Unzip and import nupkg manually-Install the manually downloaded PowerShell module into a PowerShell directory so it will be discoverable by PowerShell sessions. +Install the manually downloaded PowerShell module into a PowerShell directory so it's discoverable by PowerShell sessions. For more information, see [Installing a PowerShell Module](/powershell/scripting/developer/module/installing-a-powershell-module). If you're installing the module into any other directory, manually import the module by using [Import-Module](/powershell/module/microsoft.powershell.core/import-module). If you're installing the module into any other directory, manually import the mo ### Route traffic through a proxy -When you monitor a computer on your private intranet, you'll need to route HTTP traffic through a proxy. +When you monitor a computer on your private intranet, you need to route HTTP traffic through a proxy. The PowerShell commands to download and install Az.ApplicationMonitor from the PowerShell Gallery support a `-Proxy` parameter. Review the preceding instructions when you write your installation scripts. -The Application Insights SDK will need to send your app's telemetry to Microsoft. We recommend that you configure proxy settings for your app in your web.config file. For more information, see [How do I achieve proxy passthrough?](#how-do-i-achieve-proxy-passthrough). +The Application Insights SDK needs to send your app's telemetry to Microsoft. We recommend that you configure proxy settings for your app in your web.config file. For more information, see [How do I achieve proxy passthrough?](#how-do-i-achieve-proxy-passthrough). ### Enable monitoring This tab describes the following cmdlets, which are members of the [Az.Applicati - [Start-ApplicationInsightsMonitoringTrace](?tabs=api-reference#start-applicationinsightsmonitoringtrace) > [!NOTE]-> - To get started, you need an instrumentation key. For more information, see [Create a resource](create-workspace-resource.md). +> - To get started, you need a connection string. 
For more information, see [Create a resource](create-workspace-resource.md). > - This cmdlet requires that you review and accept our license and privacy statement. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] This tab describes the following cmdlets, which are members of the [Az.Applicati > - This cmdlet requires that you review and accept our license and privacy statement. > - The instrumentation engine adds additional overhead and is off by default. - ### Enable-InstrumentationEngine Enables the instrumentation engine by setting some registry keys. It collects events and messages that describe the execution of a managed process Enable the instrumentation engine if: - You've already enabled monitoring with the Enable cmdlet but didn't enable the instrumentation engine.-- You've manually instrumented your app with the .NET SDKs and want to collect additional telemetry.+- You've manually instrumented your app with the .NET SDKs and want to collect extra telemetry. #### Examples Configuring registry for instrumentation engine... Enables codeless attach monitoring of IIS apps on a target computer. -This cmdlet will modify the IIS applicationHost.config and set some registry keys. -It will also create an applicationinsights.ikey.config file, which defines the instrumentation key used by each app. -IIS will load the RedfieldModule on startup, which will inject the Application Insights SDK into applications as the applications start. +This cmdlet modifies the IIS applicationHost.config and sets some registry keys. +It creates an applicationinsights.ikey.config file, which defines the instrumentation key used by each app. +IIS loads the RedfieldModule on startup, which injects the Application Insights SDK into applications as the applications start. Restart IIS for your changes to take effect. After you enable monitoring, we recommend that you use [Live Metrics](live-stream.md) to quickly check if your app is sending us telemetry. #### Examples +##### Example with a single connection string +In this example, all apps on the current computer are assigned a single connection string. ++```powershell +Enable-ApplicationInsightsMonitoring -ConnectionString 'InstrumentationKey=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/' +``` + ##### Example with a single instrumentation key In this example, all apps on the current computer are assigned a single instrumentation key. Enable-ApplicationInsightsMonitoring -InstrumentationKey xxxxxxxx-xxxx-xxxx-xxxx ##### Example with an instrumentation key map In this example: - `MachineFilter` matches the current computer by using the `'.*'` wildcard.-- `AppFilter='WebAppExclude'` provides a `null` instrumentation key. The specified app won't be instrumented.+- `AppFilter='WebAppExclude'` provides a `null` instrumentation key. The specified app isn't instrumented. - `AppFilter='WebAppOne'` assigns the specified app a unique instrumentation key. - `AppFilter='WebAppTwo'` assigns the specified app a unique instrumentation key.-- Finally, `AppFilter` also uses the `'.*'` wildcard to match all web apps that aren't matched by the earlier rules and assign a default instrumentation key.+- `AppFilter` uses the `'.*'` wildcard to match any web apps it doesn't already match and assigns a default instrumentation key. - Spaces are added for readability. 
```powershell Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap ` #### Parameters +##### -ConnectionString +**Required.** Use this parameter to supply a single connection string for use by all apps on the target computer. + ##### -InstrumentationKey **Required.** Use this parameter to supply a single instrumentation key for use by all apps on the target computer. Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap ` You can create a single installation script for several computers by setting `MachineFilter`. > [!IMPORTANT]-> Apps will match against rules in the order that the rules are provided. So you should specify the most specific rules first and the most generic rules last. +> Apps match against rules in the order that the rules are provided. So you should specify the most specific rules first and the most generic rules last. ###### Schema `@(@{MachineFilter='.*';AppFilter='.*';InstrumentationSettings=@{InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'}})` - **MachineFilter** is a required C# regex of the computer or VM name.- - '.*' will match all - - 'ComputerName' will match only computers with the exact name specified. + - '.*' matches all + - 'ComputerName' matches only computers with the exact name specified. - **AppFilter** is a required C# regex of the IIS Site Name. You can get a list of sites on your server by running the command [get-iissite](/powershell/module/iisadministration/get-iissite).- - '.*' will match all - - 'SiteName' will match only the IIS Site with the exact name specified. + - '.*' matches all + - 'SiteName' matches only the IIS Site with the exact name specified. - **InstrumentationKey** is required to enable monitoring of apps that match the preceding two filters. - Leave this value null if you want to define rules to exclude monitoring. The instrumentation engine adds overhead and is off by default. ##### -IgnoreSharedConfig When you have a cluster of web servers, you might be using a [shared configuration](/iis/web-hosting/configuring-servers-in-the-windows-web-platform/shared-configuration_211). The HttpModule can't be injected into this shared configuration.-This script will fail with the message that extra installation steps are required. +This script fails with the message that extra installation steps are required. Use this switch to ignore this check and continue installing prerequisites. For more information, see [known conflict-with-iis-shared-configuration](status-monitor-v2-troubleshoot.md#conflict-with-iis-shared-configuration) Configuring registry for instrumentation engine... ### Disable-ApplicationInsightsMonitoring Disables monitoring on the target computer.-This cmdlet will remove edits to the IIS applicationHost.config and remove registry keys. +This cmdlet removes edits to the IIS applicationHost.config and removes registry keys. #### Examples Filters: This cmdlet provides troubleshooting information about Application Insights Agent. Use this cmdlet to investigate the monitoring status, version of the PowerShell Module, and to inspect the running process.-This cmdlet will report version information and information about key files required for monitoring. +This cmdlet reports version information and information about key files required for monitoring. #### Examples AppAlreadyInstrumented : true ``` In this example;-- **Machine Identifier** is an anonymous ID used to uniquely identify your server. 
If you create a support request, we'll need this ID to find logs for your server.+- **Machine Identifier** is an anonymous ID used to uniquely identify your server. If you create a support request, we need this ID to find logs for your server. - **Default Web Site** is Stopped in IIS - **DemoWebApp111** has been started in IIS, but hasn't received any requests. This report shows that there's no running process (ProcessId: not found). - **DemoWebApp222** is running and is being monitored (Instrumented: true). Based on the user configuration, Instrumentation Key xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx123 was matched for this site.-- **DemoWebApp333** has been manually instrumented using the Application Insights SDK. Application Insights Agent detected the SDK and won't monitor this site.+- **DemoWebApp333** has been manually instrumented using the Application Insights SDK. Application Insights Agent detected the SDK and doesn't monitor this site. ##### Example: PowerShell module information listdlls64.exe -accepteula w3wp ##### (No parameters) -By default, this cmdlet will report the monitoring status of web applications. +By default, this cmdlet reports the monitoring status of web applications. Use this option to review if your application was successfully instrumented. You can also review which Instrumentation Key was matched to your site. Use this option if you need to identify the version of any DLL, including the Ap ##### -InspectProcess **Optional**. Use this switch to report whether IIS is running.-It will also download external tools to determine if the necessary DLLs are loaded into the IIS runtime. +It downloads external tools to determine if the necessary DLLs are loaded into the IIS runtime. If this process fails for any reason, you can run these commands manually: If this process fails for any reason, you can run these commands manually: ##### -Force -**Optional**. Used only with InspectProcess. Use this switch to skip the user prompt that appears before additional tools are downloaded. +**Optional**. Used only with InspectProcess. Use this switch to skip the user prompt that appears before more tools are downloaded. ### Set-ApplicationInsightsMonitoringConfig Restart IIS for your changes to take effect. #### Examples ##### Example with a single instrumentation key-In this example, all apps on the current computer will be assigned a single instrumentation key. +In this example, all apps on the current computer are assigned a single instrumentation key. ```powershell Enable-ApplicationInsightsMonitoring -InstrumentationKey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Enable-ApplicationInsightsMonitoring -InstrumentationKey xxxxxxxx-xxxx-xxxx-xxxx ##### Example with an instrumentation key map In this example: - `MachineFilter` matches the current computer by using the `'.*'` wildcard.-- `AppFilter='WebAppExclude'` provides a `null` instrumentation key. The specified app won't be instrumented.+- `AppFilter='WebAppExclude'` provides a `null` instrumentation key. The specified app isn't instrumented. - `AppFilter='WebAppOne'` assigns the specified app a unique instrumentation key. - `AppFilter='WebAppTwo'` assigns the specified app a unique instrumentation key.-- Finally, `AppFilter` also uses the `'.*'` wildcard to match all web apps that aren't matched by the earlier rules and assign a default instrumentation key.+- `AppFilter` uses the `'.*'` wildcard to match web apps it doesn't already match and assigns a default instrumentation key. - Spaces are added for readability. 
```powershell Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap ` You can create a single installation script for several computers by setting `MachineFilter`. > [!IMPORTANT]-> Apps will match against rules in the order that the rules are provided. So you should specify the most specific rules first and the most generic rules last. +> Apps match against rules in the order that the rules are provided. So you should specify the most specific rules first and the most generic rules last. ###### Schema `@(@{MachineFilter='.*';AppFilter='.*';InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'})` - **MachineFilter** is a required C# regex of the computer or VM name.- - '.*' will match all - - 'ComputerName' will match only computers with the specified name. + - '.*' matches all + - 'ComputerName' matches only computers with the specified name. - **AppFilter** is a required C# regex of the computer or VM name.- - '.*' will match all - - 'ApplicationName' will match only IIS apps with the specified name. + - '.*' matches all + - 'ApplicationName' matches only IIS apps with the specified name. - **InstrumentationKey** is required to enable monitoring of the apps that match the preceding two filters. - Leave this value null if you want to define rules to exclude monitoring. C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\content\applica Collects [ETW Events](/windows/desktop/etw/event-tracing-portal) from the codeless attach runtime. This cmdlet is an alternative to running [PerfView](https://github.com/microsoft/perfview). -Collected events will be printed to the console in real-time and saved to an ETL file. The output ETL file can be opened by [PerfView](https://github.com/microsoft/perfview) for further investigation. +Events are collected, printed to the console in real-time, and saved to an ETL file. You can open the output ETL file with [PerfView](https://github.com/microsoft/perfview) for further investigation. -This cmdlet will run until it reaches the timeout duration (default 5 minutes) or is stopped manually (`Ctrl + C`). +This cmdlet runs until it reaches the timeout duration (default 5 minutes) or is stopped manually (`Ctrl + C`). #### Examples This cmdlet will run until it reaches the timeout duration (default 5 minutes) o Normally we would ask that you collect events to investigate why your application isn't being instrumented. -The codeless attach runtime will emit ETW events when IIS starts up and when your application starts up. +The codeless attach runtime emits ETW events when IIS starts up and when your application starts up. To collect these events:-1. In a cmd console with admin privileges, execute `iisreset /stop` To turn off IIS and all web apps. +1. In a cmd console with admin privileges, execute `iisreset /stop` to stop IIS and all web apps. 2. Execute this cmdlet-3. In a cmd console with admin privileges, execute `iisreset /start` To start IIS. +3. In a cmd console with admin privileges, execute `iisreset /start` to start IIS. 4. Try to browse to your app. 5. After your app finishes loading, you can manually stop it (`Ctrl + C`) or wait for the timeout. You have three options when collecting events: 1. Use the switch `-CollectSdkEvents` to collect events emitted from the Application Insights SDK. 2. Use the switch `-CollectRedfieldEvents` to collect events emitted by Application Insights Agent and the Redfield Runtime. These logs are helpful when diagnosing IIS and application startup. 3. Use both switches to collect both event types.-4. 
By default, if no switch is specified both event types will be collected. +4. By default, if no switch is specified both event types are collected. #### Parameters You have three options when collecting events: ##### -LogDirectory **Optional.** Use this switch to set the output directory of the ETL file.-By default, this file will be created in the PowerShell Modules directory. -The full path will be displayed during script execution. +By default, this file is created in the PowerShell Modules directory. +The full path is displayed during script execution. ##### -CollectSdkEvents The release note updates are listed here. ### 2.0.0 -- Updated the Application Insights .NET/.NET Core SDK to 2.21.0-redfield+- Updated the Application Insights .NET/.NET Core SDK to `2.21.0-redfield` ### 2.0.0-beta3 -- Updated the Application Insights .NET/.NET Core SDK to 2.20.1-redfield+- Updated the Application Insights .NET/.NET Core SDK to `2.20.1-redfield` - Enabled SQL query collection ### 2.0.0-beta2 -Updated the Application Insights .NET/.NET Core SDK to 2.18.1-redfield +Updated the Application Insights .NET/.NET Core SDK to `2.18.1-redfield` ### 2.0.0-beta1 Each of these options is described in the [detailed instructions](?tabs=detailed ### Does Application Insights Agent support ASP.NET Core applications? - Yes. Starting from [Application Insights Agent 2.0.0](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0), ASP.NET Core applications hosted in IIS are supported. + Yes. In [Application Insights Agent 2.0.0](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0) and later, ASP.NET Core applications hosted in IIS are supported. ### How do I verify that the enablement succeeded? |
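Putting the trace-collection steps above together, a hedged end-to-end session might look like the following. The log directory is an assumption, and because the cmdlet blocks until the timeout or Ctrl+C, IIS is restarted from a second elevated console:

```powershell
# Console 1 (elevated): stop IIS, then start collecting both event types.
iisreset /stop
Start-ApplicationInsightsMonitoringTrace -CollectSdkEvents -CollectRedfieldEvents -LogDirectory 'C:\Temp\AgentTraces'

# Console 2 (elevated): restart IIS and reproduce the issue while the trace runs.
iisreset /start
# Browse to the app, then stop the trace with Ctrl+C or wait for the timeout.
```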
azure-monitor | Data Retention Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md | - Title: Data retention and storage in Application Insights | Microsoft Docs -description: Retention and privacy policy statement for Application Insights. - Previously updated : 07/10/2023-----# Data collection, retention, and storage in Application Insights --When you install the [Application Insights][start] SDK in your app, it sends telemetry about your app to the [cloud](create-workspace-resource.md). As a responsible developer, you want to know exactly what data is sent, what happens to the data, and how you can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it? --First, the short answer: --* The standard telemetry modules that run "out of the box" are unlikely to send sensitive data to the service. The telemetry is concerned with load, performance and usage metrics, exception reports, and other diagnostic data. The main user data visible in the diagnostic reports are URLs. But your app shouldn't, in any case, put sensitive data in plain text in a URL. -* You can write code that sends more custom telemetry to help you with diagnostics and monitoring usage. (This extensibility is a great feature of Application Insights.) It would be possible, by mistake, to write this code so that it includes personal and other sensitive data. If your application works with such data, you should apply a thorough review process to all the code you write. -* While you develop and test your app, it's easy to inspect what's being sent by the SDK. The data appears in the debugging output windows of the IDE and browser. -* You can select the location when you create a new Application Insights resource. For more information about Application Insights availability per region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all). -* Review the collected data because it might include data that's allowed in some circumstances but not others. A good example of this circumstance is device name. The device name from a server doesn't affect privacy and is useful. A device name from a phone or laptop might have privacy implications and be less useful. An SDK developed primarily to target servers would collect device name by default. This capability might need to be overwritten in both normal events and exceptions. --The rest of this article discusses these points more fully. The article is self-contained, so you can share it with colleagues who aren't part of your immediate team. --## What is Application Insights? --[Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you informative metrics. For example, you might see what times of day you get most users, how responsive the app is, and how well it's served by any external services that it depends on. If there are failures or performance issues, you can search through the telemetry data to diagnose the cause. The service sends you emails if there are any changes in the availability and performance of your app. --To get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. 
When your app is running, the SDK monitors its operation and sends telemetry to an [Application Insights Log Analytics workspace](create-workspace-resource.md), which is a cloud service hosted by [Microsoft Azure](https://azure.com). Application Insights works for any application, not just applications hosted in Azure. --Application Insights stores and analyzes the telemetry. To see the analysis or search through the stored telemetry, you sign in to your Azure account and open the Application Insights resource for your application. You can also share access to the data with other members of your team, or with specified Azure subscribers. --You can export data from Application Insights, for example, to a database or to external tools. You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary. --Application Insights SDKs are available for a range of application types: --- Web services hosted in your own Java EE or ASP.NET servers, or in Azure-- Web clients, that is, the code running in a webpage-- Desktop apps and services-- Device apps such as Windows Phone, iOS, and Android--They all send telemetry to the same service. --## What data does it collect? --There are three sources of data: --* The SDK, which you integrate with your app either [in development](./asp-net.md) or [at runtime](./application-insights-asp-net-agent.md). There are different SDKs for different application types. There's also an [SDK for webpages](./javascript.md), which loads into the user's browser along with the page. - - * Each SDK has many [modules](./configuration-with-applicationinsights-config.md), which use different techniques to collect different types of telemetry. - * If you install the SDK in development, you can use its API to send your own telemetry, in addition to the standard modules. This custom telemetry can include any data you want to send. -* In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java application servers](./opentelemetry-enable.md?tabs=java) can have such agents. -* [Availability tests](availability-overview.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to Application Insights. --### What kind of data is collected? --The main categories are: --* [Web server telemetry](./asp-net.md): HTTP requests. URI, time taken to process the request, response code, and client IP address. `Session id`. -* [Webpages](./javascript.md): Page, user, and session counts. Page load times. Exceptions. Ajax calls. -* Performance counters: Memory, CPU, IO, and network occupancy. -* Client and server context: OS, locale, device type, browser, and screen resolution. -* [Exceptions](./asp-net-exceptions.md) and crashes: Stack dumps, `build id`, and CPU type. -* [Dependencies](./asp-net-dependencies.md): Calls to external services such as REST, SQL, and AJAX. URI or connection string, duration, success, and command. -* [Availability tests](availability-overview.md): Duration of test and steps, and responses. -* [Trace logs](./asp-net-trace-logs.md) and [custom telemetry](./api-custom-events-metrics.md): Anything you code into your logs or telemetry. --For more information, see the section [Data sent by Application Insights](#data-sent-by-application-insights). --## How can I verify what's being collected?
--If you're developing an app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the **Output** window. From there, you can copy it and format it as JSON for easy inspection. ---There's also a more readable view in the **Diagnostics** window. --For webpages, open your browser's debugging window. Select F12 and open the **Network** tab. ---### Can I write code to filter the telemetry before it's sent? --You'll need to write a [telemetry processor plug-in](./api-filtering-sampling.md). --## How long is the data kept? --Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#configure-retention-and-archive-at-the-table-level) of 30, 60, 90, 120, 180, 270, 365, 550, or 730 days. If you need to keep data longer than 730 days, you can use [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor). --Data kept longer than 90 days incurs extra charges. For more information about Application Insights pricing, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). --Aggregated data (that is, counts, averages, and other statistical data that you see in metric explorer) are retained at a grain of 1 minute for 90 days. --[Debug snapshots](./snapshot-debugger.md) are stored for 15 days. This retention policy is set on a per-application basis. If you need to increase this value, you can request an increase by opening a support case in the Azure portal. --## Who can access the data? --The data is visible to you and, if you have an organization account, your team members. --It can be exported by you and your team members and could be copied to other locations and passed on to other people. --#### What does Microsoft do with the information my app sends to Application Insights? --Microsoft uses the data only to provide the service to you. --## Where is the data held? --You can select the location when you create a new Application Insights resource. For more information about Application Insights availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all). --## How secure is my data? --Application Insights is an Azure service. Security policies are described in the [Azure Security, Privacy, and Compliance white paper](https://go.microsoft.com/fwlink/?linkid=392408). --The data is stored in Microsoft Azure servers. For accounts in the Azure portal, account restrictions are described in the [Azure Security, Privacy, and Compliance document](https://go.microsoft.com/fwlink/?linkid=392408). --Access to your data by Microsoft personnel is restricted. We access your data only with your permission and if it's necessary to support your use of Application Insights. --Data in aggregate across all our customers' applications, such as data rates and average size of traces, is used to improve Application Insights. --#### Could someone else's telemetry interfere with my Application Insights data? --Someone could send more telemetry to your account by using the instrumentation key. This key can be found in the code of your webpages. With enough extra data, your metrics wouldn't correctly represent your app's performance and usage. --If you share code with other projects, remember to remove your instrumentation key. --## Is the data encrypted? --All data is encrypted at rest and as it moves between datacenters. 
--#### Is the data encrypted in transit from my application to Application Insights servers? --Yes. We use HTTPS to send data to the portal from nearly all SDKs, including web servers, devices, and HTTPS webpages. --## Does the SDK create temporary local storage? --Yes. Certain telemetry channels will persist data locally if an endpoint can't be reached. The following paragraphs describe which frameworks and telemetry channels are affected: --- Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories, which are restricted to the specific account running your application. This situation might happen when an endpoint was temporarily unavailable or if you hit the throttling limit. After this issue is resolved, the telemetry channel will resume sending all the new and persisted data.-- This persisted data isn't encrypted locally. If this issue is a concern, review the data and restrict the collection of private data. For more information, see [Export and delete private data](../logs/personal-data-mgmt.md#exporting-and-deleting-personal-data).-- If a customer needs to configure this directory with specific security requirements, it can be configured per framework. Make sure that the process running your application has write access to this directory. Also make sure this directory is protected to avoid telemetry being read by unintended users.--### Java --The folder `C:\Users\username\AppData\Local\Temp` is used for persisting data. This location isn't configurable from the config directory, and the permissions to access this folder are restricted to the specific user with required credentials. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-Java/blob/40809cb6857231e572309a5901e1227305c27c1a/core/src/main/java/com/microsoft/applicationinsights/internal/util/LocalFileSystemUtils.java#L48-L72). --### .NET --By default, `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84). --Via configuration file: -```xml -<TelemetryChannel Type="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel, Microsoft.AI.ServerTelemetryChannel"> - <StorageFolder>D:\NewTestFolder</StorageFolder> -</TelemetryChannel> -``` --Via code: --- Remove `ServerTelemetryChannel` from the configuration file.-- Add this snippet to your configuration:-- ```csharp - ServerTelemetryChannel channel = new ServerTelemetryChannel(); - channel.StorageFolder = @"D:\NewTestFolder"; - channel.Initialize(TelemetryConfiguration.Active); - TelemetryConfiguration.Active.TelemetryChannel = channel; - ``` --### NetCore --By default, `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84). --In a Linux environment, local storage will be disabled unless a storage folder is specified. --> [!NOTE] -> With the release 2.15.0-beta3 and greater, local storage is now automatically created for Linux, Mac, and Windows. 
For non-Windows systems, the SDK will automatically create a local storage folder based on the following logic: -> -> - `${TMPDIR}`: If the `${TMPDIR}` environment variable is set, this location is used. -> - `/var/tmp`: If the previous location doesn't exist, we try `/var/tmp`. -> - `/tmp`: If neither of the previous locations exists, we try `/tmp`. -> - If none of those locations exist, local storage isn't created and manual configuration is still required. -> -> For full implementation details, see [ServerTelemetryChannel stores telemetry data in default folder during transient errors in non-Windows environments](https://github.com/microsoft/ApplicationInsights-dotnet/pull/1860). --The following code snippet shows how to set `ServerTelemetryChannel.StorageFolder` in the `ConfigureServices()` method of your `Startup.cs` class: --```csharp -services.AddSingleton(typeof(ITelemetryChannel), new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); -``` --For more information, see [AspNetCore custom configuration](https://github.com/Microsoft/ApplicationInsights-aspnetcore/wiki/Custom-Configuration). --### Node.js --By default, `%TEMP%/appInsights-node{INSTRUMENTATION KEY}` is used for persisting data. Permissions to access this folder are restricted to the current user and administrators. For more information, see the [implementation](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Sender.ts). --The folder prefix `appInsights-node` can be overridden by changing the runtime value of the static variable `Sender.TEMPDIR_PREFIX` found in [Sender.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/7a1ecb91da5ea0febf5ceab13d6a4bf01a63933d/Library/Sender.ts#L384). --### JavaScript (browser) --[HTML5 Session Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) is used to persist data. Two separate buffers are used: `AI_buffer` and `AI_sent_buffer`. Telemetry that's batched and waiting to be sent is stored in `AI_buffer`. Telemetry that was just sent is placed in `AI_sent_buffer` until the ingestion server responds that it was successfully received. --When telemetry is successfully received, it's removed from all buffers. On transient failures (for example, a user loses network connectivity), telemetry remains in `AI_buffer` until it's successfully received or the ingestion server responds that the telemetry is invalid (bad schema or too old, for example). --Telemetry buffers can be disabled by setting [`enableSessionStorageBuffer`](https://github.com/microsoft/ApplicationInsights-JS/blob/17ef50442f73fd02a758fbd74134933d92607ecf/legacy/JavaScript/JavaScriptSDK.Interfaces/IConfig.ts#L31) to `false`. When session storage is turned off, a local array is instead used as persistent storage. Because the JavaScript SDK runs on a client device, the user has access to this storage location via their browser's developer tools. --### OpenCensus Python --By default, OpenCensus Python SDK uses the current user folder `%username%/.opencensus/.azure/`. Permissions to access this folder are restricted to the current user and administrators. For more information, see the [implementation](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/storage.py). The folder with your persisted data will be named after the Python file that generated the telemetry.
--You can change the location of your storage file by passing in the `storage_path` parameter in the constructor of the exporter you're using. --```python -AzureLogHandler( - connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000', - storage_path='<your-path-here>', -) -``` --## How do I send data to Application Insights using TLS 1.2? --To ensure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable. Although they still work to allow backward compatibility, they *aren't recommended*. The industry is quickly moving to abandon support for these older protocols. --The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. After Azure drops legacy support, if your application or clients can't communicate over at least TLS 1.2, you won't be able to send data to Application Insights. The approach you take to test and validate your application's TLS support will vary depending on the operating system or platform and the language or framework your application uses. --We don't recommend explicitly setting your application to use only TLS 1.2 unless necessary. This setting can break platform-level security features that allow you to automatically detect and take advantage of newer, more secure protocols as they become available, such as TLS 1.3. We recommend that you perform a thorough audit of your application's code to check for hardcoding of specific TLS/SSL versions. --### Platform/Language-specific guidance --|Platform/Language | Support | More information | -| | | | -| Azure App Services | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). | -| Azure Function Apps | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). | -|.NET | Supported, Long Term Support (LTS). | For detailed configuration information, refer to [these instructions](/dotnet/framework/network-programming/tls). | -|Application Insights Agent| Supported, configuration required. | Application Insights Agent relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2. | -|Node.js | Supported; in v10.5.0, configuration might be required. | Use the [official Node.js TLS/SSL documentation](https://nodejs.org/api/tls.html) for any application-specific configuration. | -|Java | Supported, JDK support for TLS 1.2 was added in [JDK 6 update 121](https://www.oracle.com/technetwork/java/javase/overview-156328.html#R160_121) and [JDK 7](https://www.oracle.com/technetwork/java/javase/7u131-relnotes-3338543.html). | JDK 8 uses [TLS 1.2 by default](https://blogs.oracle.com/java-platform-group/jdk-8-will-use-tls-12-as-default).
| -|Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.| -| Windows 8.0 - 10 | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). | -| Windows Server 2012 - 2016 | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). | -| Windows 7 SP1 and Windows Server 2008 R2 SP1 | Supported, but not enabled by default. | See the [Transport Layer Security (TLS) registry settings](/windows-server/security/tls/tls-registry-settings) page for details on how to enable it. | -| Windows Server 2008 SP2 | Support for TLS 1.2 requires an update. | See [Update to add support for TLS 1.2](https://support.microsoft.com/help/4019276/update-to-add-support-for-tls-1-1-and-tls-1-2-in-windows-server-2008-s) in Windows Server 2008 SP2. | -|Windows Vista | Not supported. | N/A | --### Check what version of OpenSSL your Linux distribution is running --To check what version of OpenSSL you have installed, open the terminal and run: --```terminal -openssl version -a -``` --### Run a test TLS 1.2 transaction on Linux --To run a preliminary test to see if your Linux system can communicate over TLS 1.2, open the terminal and run: --```terminal -openssl s_client -connect bing.com:443 -tls1_2 -``` --## Personal data stored in Application Insights --For an in-depth discussion on this issue, see [Managing personal data in Log Analytics and Application Insights](../logs/personal-data-mgmt.md). --#### Can my users turn off Application Insights? --Not directly. We don't provide a switch that your users can operate to turn off Application Insights. --You can implement such a feature in your application. All the SDKs include an API setting that turns off telemetry collection. --## Data sent by Application Insights --The SDKs vary between platforms, and there are several components that you can install. For more information, see [Application Insights overview][start]. Each component sends different data. --#### Classes of data sent in different scenarios --| Your action | Data classes collected (see next table) | -| | | -| [Add Application Insights SDK to a .NET web project][greenbrown] |ServerContext<br/>Inferred<br/>Perf counters<br/>Requests<br/>**Exceptions**<br/>Session<br/>Users | -| [Install Application Insights Agent on IIS][redfield] |Dependencies<br/>ServerContext<br/>Inferred<br/>Perf counters | -| [Add Application Insights SDK to a Java web app][java] |ServerContext<br/>Inferred<br/>Request<br/>Session<br/>Users | -| [Add JavaScript SDK to webpage][client] |ClientContext <br/>Inferred<br/>Page<br/>ClientPerf<br/>Ajax | -| [Define default properties][apiproperties] |**Properties** on all standard and custom events | -| [Call TrackMetric][api] |Numeric values<br/>**Properties** | -| [Call Track*][api] |Event name<br/>**Properties** | -| [Call TrackException][api] |**Exceptions**<br/>Stack dump<br/>**Properties** | -| SDK can't collect data. For example: <br/> - Can't access perf counters<br/> - Exception in telemetry initializer |SDK diagnostics | --For [SDKs for other platforms][platforms], see their documents.
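To make the `Call Track*` rows in the preceding table concrete, here's a minimal C# sketch of calls that produce the event-name, properties, and exception data classes. The event names and property values are illustrative only:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var client = new TelemetryClient(TelemetryConfiguration.CreateDefault());

// "Call Track*": sends an event name plus any properties your code attaches.
client.TrackEvent("ReviewSubmitted",
    new Dictionary<string, string> { ["Category"] = "cafe" });

// "Call TrackException": sends the exception type, message, and stack dump,
// plus any properties your code attaches.
try { throw new InvalidOperationException("example failure"); }
catch (Exception ex)
{
    client.TrackException(ex,
        new Dictionary<string, string> { ["Stage"] = "review-upload" });
}
```

Everything your code puts into those property dictionaries is sent as-is, which is why the next table describes the **Properties** class as any data determined by your code.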
--#### The classes of collected data --| Collected data class | Includes (not an exhaustive list) | -| | | -| **Properties** |**Any data - determined by your code** | -| DeviceContext |`Id`, IP, Locale, Device model, network, network type, OEM name, screen resolution, Role Instance, Role Name, Device Type | -| ClientContext |OS, locale, language, network, window resolution | -| Session |`session id` | -| ServerContext |Machine name, locale, OS, device, user session, user context, operation | -| Inferred |Geolocation from IP address, timestamp, OS, browser | -| Metrics |Metric name and value | -| Events |Event name and value | -| PageViews |URL and page name or screen name | -| Client perf |URL/page name, browser load time | -| Ajax |HTTP calls from webpage to server | -| Requests |URL, duration, response code | -| Dependencies |Type (SQL, HTTP, ...), connection string, or URI, sync/async, duration, success, SQL statement (with Application Insights Agent) | -| Exceptions |Type, message, call stacks, source file, line number, `thread id` | -| Crashes |`Process id`, `parent process id`, `crash thread id`; application patch, `id`, build; exception type, address, reason; obfuscated symbols and registers, binary start and end addresses, binary name and path, CPU type | -| Trace |Message and severity level | -| Perf counters |Processor time, available memory, request rate, exception rate, process private bytes, IO rate, request duration, request queue length | -| Availability |Web test response code, duration of each test step, test name, timestamp, success, response time, test location | -| SDK diagnostics |Trace message or exception | --You can [switch off some of the data by editing ApplicationInsights.config][config]. --> [!NOTE] -> Client IP is used to infer geographic location, but by default IP data is no longer stored and all zeroes are written to the associated field. To understand more about personal data handling, see [Managing personal data in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#application-data). If you need to store IP address data, [geolocation and IP address handling](./ip-collection.md) will walk you through your options. --## Can I modify or update data after it has been collected? --No. Data is read-only and can only be deleted via the purge functionality. To learn more, see [Guidance for personal data stored in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#delete). --## Frequently asked questions --This section provides answers to common questions. --### What happens to Application Insights telemetry when a server or device loses connection with Azure? --All of our SDKs, including the web SDK, include *reliable transport* or *robust transport*. When the server or device loses connection with Azure, telemetry is [stored locally on the file system](./data-retention-privacy.md#does-the-sdk-create-temporary-local-storage) (Server SDKs) or in HTML5 Session Storage (Web SDK). The SDK periodically retries sending this telemetry until our ingestion service considers it "stale" (48 hours for logs, 30 minutes for metrics). Stale telemetry is dropped. In some cases, such as when local storage is full, retry won't occur. --### Is personal data sent in the telemetry? --You can send personal data if your code sends such data. It can also happen if variables in stack traces include personal data. Your development team should conduct risk assessments to ensure that personal data is properly handled.
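One mitigation in the .NET SDK is a telemetry initializer that scrubs suspect fields before items leave the process. Here's a minimal sketch; the class name and the redaction rule are illustrative, not a complete personal data strategy:

```csharp
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Illustrative scrubber: strips query strings from request URLs,
// a common place for user identifiers to leak into telemetry.
public class QueryStringScrubber : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is RequestTelemetry request && request.Url != null)
        {
            request.Url = new UriBuilder(request.Url) { Query = string.Empty }.Uri;
        }
    }
}
```

In ASP.NET Core, register it with `services.AddSingleton<ITelemetryInitializer, QueryStringScrubber>()` so it runs on every telemetry item before transmission.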
Learn more about [data retention and privacy](./data-retention-privacy.md). - -*All* octets of the client web address are always set to 0 after the geolocation attributes are looked up. - -The [Application Insights JavaScript SDK](./javascript.md) doesn't include any personal data in its autocollection by default. However, some personal data used in your application might be picked up by the SDK (for example, full names in `document.title` or account IDs in XHR URL query parameters). For custom personal data masking, add a [telemetry initializer](./api-filtering-sampling.md#javascript-web-applications). --<!--Link references--> --[api]: ./api-custom-events-metrics.md -[apiproperties]: ./api-custom-events-metrics.md#properties -[client]: ./javascript.md -[config]: ./configuration-with-applicationinsights-config.md -[greenbrown]: ./asp-net.md -[java]: ./opentelemetry-enable.md?tabs=java -[platforms]: ./app-insights-overview.md#supported-languages -[pricing]: https://azure.microsoft.com/pricing/details/application-insights/ -[redfield]: ./application-insights-asp-net-agent.md -[start]: ./app-insights-overview.md |
azure-monitor | Javascript Feature Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md | Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web #### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript) 1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.-+ <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-sdk.md and 2) articles\azure-monitor\app\api-filtering-sampling.md --> ```html <script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script> <script type="text/javascript"> Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web }, }; // Application Insights JavaScript (Web) SDK Loader Script code- !function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{ + !(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: 
https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({ src: "https://js.monitor.azure.com/scripts/b/ai.3.gbl.min.js", crossOrigin: "anonymous", cfg: configObj // configObj is defined above. |
azure-monitor | Javascript Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md | Two methods are available to add the code to enable Application Insights via the Preferably, you should add it as the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies. If Internet Explorer 8 is detected, JavaScript SDK v2.x is automatically loaded.-+ <!-- IMPORTANT: If you're updating this code example, please remember to also update it in: 1) articles\azure-monitor\app\javascript-feature-extensions.md and 2) articles\azure-monitor\app\api-filtering-sampling.md --> ```html <script type="text/javascript"> !(function (cfg){function e(){cfg.onInit&&cfg.onInit(i)}var S,u,D,t,n,i,C=window,x=document,w=C.location,I="script",b="ingestionendpoint",E="disableExceptionTracking",A="ai.device.";"instrumentationKey"[S="toLowerCase"](),u="crossOrigin",D="POST",t="appInsightsSDK",n=cfg.name||"appInsights",(cfg.name||C[t])&&(C[t]=n),i=C[n]||function(l){var d=!1,g=!1,f={initialize:!0,queue:[],sv:"7",version:2,config:l};function m(e,t){var n={},i="Browser";function a(e){e=""+e;return 1===e.length?"0"+e:e}return n[A+"id"]=i[S](),n[A+"type"]=i,n["ai.operation.name"]=w&&w.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(f.sv||f.version),{time:(i=new Date).getUTCFullYear()+"-"+a(1+i.getUTCMonth())+"-"+a(i.getUTCDate())+"T"+a(i.getUTCHours())+":"+a(i.getUTCMinutes())+":"+a(i.getUTCSeconds())+"."+(i.getUTCMilliseconds()/1e3).toFixed(3).slice(2,5)+"Z",iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}},ver:4,seq:"1",aiDataContract:undefined}}var h=-1,v=0,y=["js.monitor.azure.com","js.cdn.applicationinsights.io","js.cdn.monitor.azure.com","js0.cdn.applicationinsights.io","js0.cdn.monitor.azure.com","js2.cdn.applicationinsights.io","js2.cdn.monitor.azure.com","az416426.vo.msecnd.net"],k=l.url||cfg.src;if(k){if((n=navigator)&&(~(n=(n.userAgent||"").toLowerCase()).indexOf("msie")||~n.indexOf("trident/"))&&~k.indexOf("ai.3")&&(k=k.replace(/(\/)(ai\.3\.)([^\d]*)$/,function(e,t,n){return t+"ai.2"+n})),!1!==cfg.cr)for(var e=0;e<y.length;e++)if(0<k.indexOf(y[e])){h=e;break}var i=function(e){var a,t,n,i,o,r,s,c,p,u;f.queue=[],g||(0<=h&&v+1<y.length?(a=(h+v+1)%y.length,T(k.replace(/^(.*\/\/)([\w\.]*)(\/.*)$/,function(e,t,n,i){return t+y[a]+i})),v+=1):(d=g=!0,o=k,c=(p=function(){var e,t={},n=l.connectionString;if(n)for(var i=n.split(";"),a=0;a<i.length;a++){var o=i[a].split("=");2===o.length&&(t[o[0][S]()]=o[1])}return t[b]||(e=(n=t.endpointsuffix)?t.location:null,t[b]="https://"+(e?e+".":"")+"dc."+(n||"services.visualstudio.com")),t}()).instrumentationkey||l.instrumentationKey||"",p=(p=p[b])?p+"/v2/track":l.endpointUrl,(u=[]).push((t="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",n=o,r=p,(s=(i=m(c,"Exception")).data).baseType="ExceptionData",s.baseData.exceptions=[{typeName:"SDKLoadFailed",message:t.replace(/\./g,"-"),hasFullStack:!1,stack:t+"\nSnippet failed to load ["+n+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(w&&w.pathname||"_unknown_")+"\nEndpoint: "+r,parsedStack:[]}],i)),u.push((s=o,t=p,(r=(n=m(c,"Message")).data).baseType="MessageData",(i=r.baseData).message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) 
("+s+")").replace(/\"/g,"")+'"',i.properties={endpoint:t},n)),o=u,c=p,JSON&&((r=C.fetch)&&!cfg.useXhr?r(c,{method:D,body:JSON.stringify(o),mode:"cors"}):XMLHttpRequest&&((s=new XMLHttpRequest).open(D,c),s.setRequestHeader("Content-type","application/json"),s.send(JSON.stringify(o))))))},a=function(e,t){g||setTimeout(function(){!t&&f.core||i()},500),d=!1},T=function(e){var n=x.createElement(I),e=(n.src=e,cfg[u]);return!e&&""!==e||"undefined"==n[u]||(n[u]=e),n.onload=a,n.onerror=i,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||a(0,t)},cfg.ld&&cfg.ld<0?x.getElementsByTagName("head")[0].appendChild(n):setTimeout(function(){x.getElementsByTagName(I)[0].parentNode.appendChild(n)},cfg.ld||0),n};T(k)}try{f.cookie=x.cookie}catch(p){}function t(e){for(;e.length;)!function(t){f[t]=function(){var e=arguments;d||f.queue.push(function(){f[t].apply(f,e)})}}(e.pop())}var r,s,n="track",o="TrackPage",c="TrackEvent",n=(t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+o,"stop"+o,"start"+c,"stop"+c,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),f.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},(l.extensionConfig||{}).ApplicationInsightsAnalytics||{});return!0!==l[E]&&!0!==n[E]&&(t(["_"+(r="onerror")]),s=C[r],C[r]=function(e,t,n,i,a){var o=s&&s(e,t,n,i,a);return!0!==o&&f["_"+r]({message:e,url:t,lineNumber:n,columnNumber:i,error:a,evt:C.event}),o},l.autoExceptionInstrumented=!0),f}(cfg.cfg),(C[n]=i).queue&&0===i.queue.length?(i.queue.push(e),i.trackPageView({})):e();})({ |
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | The preceding sample is for a console app, but the same code can be used in any | Capabilities |Live Stream | Metrics explorer and Log Analytics | |||| |Latency|Data displayed within one second.|Aggregated over minutes.|-|No retention|Data persists while it's on the chart and is then discarded.|[Data retained for 90 days.](./data-retention-privacy.md#how-long-is-the-data-kept)| +|No retention|Data persists while it's on the chart and is then discarded.|[Data retained for 90 days.](/previous-versions/azure/azure-monitor/app/data-retention-privacy#how-long-is-the-data-kept)| |On demand|Data is only streamed while the Live Metrics pane is open. |Data is sent whenever the SDK is installed and enabled.| |Free|There's no charge for Live Stream data.|Subject to [pricing](../logs/cost-logs.md#application-insights-billing).| |Sampling|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events can be [sampled](./api-filtering-sampling.md).| |
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | server.on("listening", () => { By default, telemetry is buffered for 15 seconds before it's sent to the ingestion server. If your application has a short lifespan, such as a CLI tool, it might be necessary to manually flush your buffered telemetry when the application terminates by using `appInsights.defaultClient.flush()`. -If the SDK detects that your application is crashing, it calls flush for you by using `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state and isn't suitable to send telemetry. Instead, the SDK saves all buffered telemetry to [persistent storage](./data-retention-privacy.md#nodejs) and lets your application terminate. When your application starts again, it tries to send any telemetry that was saved to persistent storage. +If the SDK detects that your application is crashing, it calls flush for you by using `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state and isn't suitable to send telemetry. Instead, the SDK saves all buffered telemetry to [persistent storage](/previous-versions/azure/azure-monitor/app/data-retention-privacy#nodejs) and lets your application terminate. When your application starts again, it tries to send any telemetry that was saved to persistent storage. ### Preprocess data with telemetry processors |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | configure_azure_monitor( ## Offline Storage and Automatic Retries -To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. It saves the application telemetry to disk and periodically tries to send it again for up to 48 hours. In high-load applications, telemetry is occasionally dropped for two reasons. First, when the allowable time is exceeded, and second, when the maximum file size is exceeded or the SDK doesn't have an opportunity to clear out the file. If we need to choose, the product saves more recent events over old ones. [Learn More](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage) +To improve reliability and resiliency, Azure Monitor OpenTelemetry-based offerings write to offline/local storage by default when an application loses its connection with Application Insights. It saves the application telemetry to disk and periodically tries to send it again for up to 48 hours. In high-load applications, telemetry is occasionally dropped for two reasons: first, when the allowable time is exceeded, and second, when the maximum file size is exceeded or the SDK doesn't have an opportunity to clear out the file. If we need to choose, the product saves more recent events over old ones. [Learn more](/previous-versions/azure/azure-monitor/app/data-retention-privacy#does-the-sdk-create-temporary-local-storage) ### [ASP.NET Core](#tab/aspnetcore) |
azure-monitor | Telemetry Channels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md | For systems other than Windows, no local storage is created automatically by the > [!NOTE] > With the release 2.15.0-beta3 and greater, local storage is now automatically created for Linux, Mac, and Windows. - You can create a storage directory yourself and configure the channel to use it. In this case, you're responsible for ensuring that the directory is secured. Read more about [data protection and privacy](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage). + You can create a storage directory yourself and configure the channel to use it. In this case, you're responsible for ensuring that the directory is secured. Read more about [data protection and privacy](/previous-versions/azure/azure-monitor/app/data-retention-privacy#does-the-sdk-create-temporary-local-storage). ## Open-source SDK Like every SDK for Application Insights, channels are open source. Read and contribute to the code or report problems at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet). |
azure-monitor | Tutorial Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md | - Title: Application Insights SDK for ASP.NET Core applications | Microsoft Docs -description: Application Insights SDK tutorial to monitor ASP.NET Core web applications for availability, performance, and usage. -- Previously updated : 10/11/2023----# Enable Application Insights for ASP.NET Core applications --This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application deployed as an Azure Web App. This implementation uses an SDK-based approach. An [autoinstrumentation approach](./codeless-overview.md) is also available. --Application Insights can collect the following telemetry from your ASP.NET Core application: --> [!div class="checklist"] -> * Requests -> * Dependencies -> * Exceptions -> * Performance counters -> * Heartbeats -> * Logs --For a sample application, we'll use an [ASP.NET Core MVC application](https://github.com/AaronMaxwell/AzureCafe) that targets `net6.0`. However, you can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the [Worker Service instructions](./worker-service.md) instead. --> [!NOTE] -> An [OpenTelemetry-based .NET offering](./opentelemetry-enable.md?tabs=net) is available. [Learn more](./opentelemetry-overview.md). ---## Supported scenarios --The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, Application Insights can collect telemetry from it. Application Insights monitoring is supported everywhere .NET Core is supported. The following scenarios are supported: -* **Operating system**: Windows, Linux, or Mac -* **Hosting method**: In process or out of process -* **Deployment method**: Framework dependent or self-contained -* **Web server**: Internet Information Server (IIS) or Kestrel -* **Hosting platform**: The Web Apps feature of Azure App Service, Azure VM, Docker, Azure Kubernetes Service (AKS), and so on -* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that aren't in preview -* **IDE**: Visual Studio, Visual Studio Code, or command line --## Prerequisites --To complete this tutorial, you need: --* Visual Studio 2022 -* The following Visual Studio workloads: - * ASP.NET and web development - * Data storage and processing - * Azure development -* .NET 6.0 -* Azure subscription and user account (with the ability to create and delete resources) --## Deploy Azure resources --Follow the [guidance to deploy the sample application from its GitHub repository](https://github.com/gitopsbook/sample-app-deployment). --To provide globally unique names, a six-character suffix is assigned to some resources. Make note of this suffix for use later in this article. ---## Create an Application Insights resource --1. In the [Azure portal](https://portal.azure.com), select the **application-insights-azure-cafe** resource group. --2. From the top toolbar menu, select **+ Create**.
-- :::image type="content" source="media/tutorial-asp-net-core/create-resource-menu.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the + Create button highlighted on the toolbar menu." lightbox="media/tutorial-asp-net-core/create-resource-menu.png"::: --3. On the **Create a resource** screen, search for and select **Application Insights** in the marketplace search textbox. -- :::image type="complex" source="media/tutorial-asp-net-core/search-application-insights.png" alt-text="Screenshot of the Create a resource screen in the Azure portal." lightbox="media/tutorial-asp-net-core/search-application-insights.png"::: - Screenshot of the Create a resource screen in the Azure portal. The screenshot shows a search for Application Insights highlighted and Application Insights displaying in the search results, which is also highlighted. - :::image-end::: --4. On the Application Insights resource overview screen, select **Create**. -- :::image type="content" source="media/tutorial-asp-net-core/create-application-insights-overview.png" alt-text="Screenshot of the Application Insights overview screen in the Azure portal with the Create button highlighted." lightbox="media/tutorial-asp-net-core/create-application-insights-overview.png"::: --5. On the Application Insights screen, **Basics** tab, complete the form by using the following table, then select the **Review + create** button. Fields not specified in the table below may retain their default values. -- | Field | Value | - |-|-| - | Name | Enter `azure-cafe-application-insights-{SUFFIX}`, replacing **{SUFFIX}** with the appropriate suffix value recorded earlier. | - | Region | Select the same region chosen when deploying the article resources. | - | Log Analytics Workspace | Select **azure-cafe-log-analytics-workspace**. Alternatively, you can create a new log analytics workspace. | -- :::image type="content" source="media/tutorial-asp-net-core/application-insights-basics-tab.png" alt-text="Screenshot of the Basics tab of the Application Insights screen in the Azure portal with a form populated with the preceding values." lightbox="media/tutorial-asp-net-core/application-insights-basics-tab.png"::: --6. Once validation has passed, select **Create** to deploy the resource. -- :::image type="content" source="media/tutorial-asp-net-core/application-insights-validation-passed.png" alt-text="Screenshot of the Application Insights screen in the Azure portal. The message stating validation has passed and Create button are both highlighted." lightbox="media/tutorial-asp-net-core/application-insights-validation-passed.png"::: --7. Once the resource is deployed, return to the `application-insights-azure-cafe` resource group, and select the Application Insights resource you deployed. -- :::image type="content" source="media/tutorial-asp-net-core/application-insights-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-core/application-insights-resource-group.png"::: --8. On the Overview screen of the Application Insights resource, select the **Copy to clipboard** button to copy the connection string value. You will use the connection string value in the next section of this article. 
-- :::image type="complex" source="media/tutorial-asp-net-core/application-insights-connection-string-overview.png" alt-text="Screenshot of the Application Insights Overview screen in the Azure portal." lightbox="media/tutorial-asp-net-core/application-insights-connection-string-overview.png"::: - Screenshot of the Application Insights Overview screen in the Azure portal. The screenshot shows the connection string value highlighted and the Copy to clipboard button selected and highlighted. - :::image-end::: --## Configure the Application Insights connection string application setting in the web App Service --1. Return to the `application-insights-azure-cafe` resource group and open the **azure-cafe-web-{SUFFIX}** App Service resource. -- :::image type="content" source="media/tutorial-asp-net-core/web-app-service-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the azure-cafe-web-{SUFFIX} resource highlighted." lightbox="media/tutorial-asp-net-core/web-app-service-resource-group.png"::: --2. From the left menu, under the Settings section, select **Configuration**. Then, on the **Application settings** tab, select **+ New application setting** beneath the Application settings header. -- :::image type="complex" source="media/tutorial-asp-net-core/app-service-app-setting-button.png" alt-text="Screenshot of the App Service resource screen in the Azure portal." lightbox="media/tutorial-asp-net-core/app-service-app-setting-button.png"::: - Screenshot of the App Service resource screen in the Azure portal. The screenshot shows Configuration in the left menu under the Settings section selected and highlighted, the Application settings tab selected and highlighted, and the + New application setting toolbar button highlighted. - :::image-end::: --3. In the Add/Edit application setting pane, complete the form as follows and select **OK**. -- | Field | Value | - |-|-| - | Name | APPLICATIONINSIGHTS_CONNECTION_STRING | - | Value | Paste the Application Insights connection string value you copied in the preceding section. | -- :::image type="content" source="media/tutorial-asp-net-core/add-edit-app-setting.png" alt-text="Screenshot of the Add/Edit application setting pane in the Azure portal with the preceding values populated in the Name and Value fields." lightbox="media/tutorial-asp-net-core/add-edit-app-setting.png"::: --4. On the App Service Configuration screen, select the **Save** button from the toolbar menu. When prompted to save the changes, select **Continue**. -- :::image type="content" source="media/tutorial-asp-net-core/save-app-service-configuration.png" alt-text="Screenshot of the App Service Configuration screen in the Azure portal with the Save button highlighted on the toolbar menu." lightbox="media/tutorial-asp-net-core/save-app-service-configuration.png"::: --## Install the Application Insights NuGet Package --We need to configure the ASP.NET Core MVC web application to send telemetry. This is accomplished using the [Application Insights for ASP.NET Core web applications NuGet package](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore). --1. In Visual Studio, open `1 - Starter Application\src\AzureCafe.sln`. --2. In the Visual Studio Solution Explorer panel, right-click on the AzureCafe project file and select **Manage NuGet Packages**. 
-- :::image type="content" source="media/tutorial-asp-net-core/manage-nuget-packages-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Manage NuGet Packages context menu item highlighted." lightbox="media/tutorial-asp-net-core/manage-nuget-packages-menu.png"::: --3. Select the **Browse** tab and then search for and select **Microsoft.ApplicationInsights.AspNetCore**. Select **Install**, and accept the license terms. It is recommended you use the latest stable version. For the full release notes for the SDK, see the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases). -- :::image type="complex" source="media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png" alt-text="Screenshot of the NuGet Package Manager user interface in Visual Studio." lightbox="media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png"::: - Screenshot that shows the NuGet Package Manager user interface in Visual Studio with the Browse tab selected. Microsoft.ApplicationInsights.AspNetCore is entered in the search box, and the Microsoft.ApplicationInsights.AspNetCore package is selected from a list of results. In the right pane, the latest stable version of the Microsoft.ApplicationInsights.AspNetCore package is selected from a drop down list and the Install button is highlighted. - :::image-end::: -- Keep Visual Studio open for the next section of the article. --## Enable Application Insights server-side telemetry --The Application Insights for ASP.NET Core web applications NuGet package encapsulates features to enable sending server-side telemetry to the Application Insights resource in Azure. --1. From the Visual Studio Solution Explorer, open the **Program.cs** file. -- :::image type="content" source="media/tutorial-asp-net-core/solution-explorer-programcs.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Program.cs file highlighted." lightbox="media/tutorial-asp-net-core/solution-explorer-programcs.png"::: --2. Insert the following code prior to the `builder.Services.AddControllersWithViews()` statement. This code automatically reads the Application Insights connection string value from configuration. The `AddApplicationInsightsTelemetry` method registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container that will then be used to fulfill [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) and [ILogger\<TCategoryName\>](/dotnet/api/microsoft.extensions.logging.iloggerprovider) implementation requests. -- ```csharp - builder.Services.AddApplicationInsightsTelemetry(); - ``` -- :::image type="content" source="media/tutorial-asp-net-core/enable-server-side-telemetry.png" alt-text="Screenshot of a code window in Visual Studio with the preceding code snippet highlighted." lightbox="media/tutorial-asp-net-core/enable-server-side-telemetry.png"::: -- > [!TIP] - > Learn more about the [configuration options in ASP.NET Core](/aspnet/core/fundamentals/configuration). --## Enable client-side telemetry for web applications --The preceding steps are enough to help you start collecting server-side telemetry. The sample application has client-side components. Follow the next steps to start collecting [usage telemetry](./usage-overview.md). --1. In Visual Studio Solution Explorer, open `\Views\_ViewImports.cshtml`. --2. Add the following code at the end of the existing file. 
-- ```cshtml - @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet - ``` -- :::image type="content" source="media/tutorial-asp-net-core/view-imports-injection.png" alt-text="Screenshot of the _ViewImports.cshtml file in Visual Studio with the preceding line of code highlighted." lightbox="media/tutorial-asp-net-core/view-imports-injection.png"::: --3. To properly enable client-side monitoring for your application, in Visual Studio Solution Explorer, open `\Views\Shared\_Layout.cshtml` and insert the following code immediately before the closing `</head>` tag. This JavaScript snippet must be inserted in the `<head>` section of each page of your application that you want to monitor. -- ```cshtml - @Html.Raw(JavaScriptSnippet.FullScript) - ``` -- :::image type="content" source="media/tutorial-asp-net-core/layout-head-code.png" alt-text="Screenshot of the _Layout.cshtml file in Visual Studio with the preceding line of code highlighted within the head section of the file." lightbox="media/tutorial-asp-net-core/layout-head-code.png"::: -- > [!TIP] - > An alternative to using `FullScript` is `ScriptBody`. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy: -- ```cshtml - <script> // apply custom changes to this script tag. - @Html.Raw(JavaScriptSnippet.ScriptBody) - </script> - ``` --> [!NOTE] -> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk). --## Enable monitoring of database queries --When investigating causes for performance degradation, it's important to include insights into database calls. You enable monitoring by configuring the [dependency module](./asp-net-dependencies.md). Dependency monitoring, including SQL, is enabled by default. --Follow these steps to capture the full SQL query text. --> [!NOTE] -> SQL text may contain sensitive data such as passwords and PII. Be careful when enabling this feature. --1. From the Visual Studio Solution Explorer, open the **Program.cs** file. --2. At the top of the file, add the following `using` statement. -- ```csharp - using Microsoft.ApplicationInsights.DependencyCollector; - ``` --3. To enable SQL command text instrumentation, insert the following code immediately after the `builder.Services.AddApplicationInsightsTelemetry()` code. -- ```csharp - builder.Services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) => { module.EnableSqlCommandTextInstrumentation = true; }); - ``` -- :::image type="content" source="media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png" alt-text="Screenshot of a code window in Visual Studio with the preceding code highlighted." lightbox="media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png"::: --## Run the Azure Cafe web application --After you deploy the web application code, telemetry will flow to Application Insights. The Application Insights SDK automatically collects incoming web requests to your application. --1. From the Visual Studio Solution Explorer, right-click on the **AzureCafe** project and select **Publish** from the context menu.
-- :::image type="content" source="media/tutorial-asp-net-core/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-core/web-project-publish-context-menu.png"::: --2. Select **Publish** to promote the new code to the Azure App Service. -- :::image type="content" source="media/tutorial-asp-net-core/publish-profile.png" alt-text="Screenshot of the AzureCafe publish profile with the Publish button highlighted." lightbox="media/tutorial-asp-net-core/publish-profile.png"::: -- When the Azure Cafe web application is successfully published, a new browser window opens to the Azure Cafe web application. -- :::image type="content" source="media/tutorial-asp-net-core/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-core/azure-cafe-index.png"::: --3. To generate some telemetry, follow these steps in the web application to add a review. -- 1. To view a cafe's menu and reviews, select **Details** next to a cafe. -- :::image type="content" source="media/tutorial-asp-net-core/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-core/cafe-details-button.png"::: -- 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review. -- :::image type="content" source="media/tutorial-asp-net-core/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details screen in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-core/cafe-add-review-button.png"::: -- 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. When finished, select **Add review**. -- :::image type="content" source="media/tutorial-asp-net-core/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-core/create-a-review-dialog.png"::: -- 4. If you need to generate additional telemetry, add additional reviews. --### Live metrics --You can use [Live Metrics](./live-stream.md) to quickly verify if Application Insights monitoring is configured correctly. Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry such as Requests, Dependencies, and Traces. Note that it might take a few minutes for the telemetry to appear in the portal and analytics. --### Viewing the application map --The sample application makes calls to multiple Azure resources, including Azure SQL, Azure Blob Storage, and the Azure Language Service (for review sentiment analysis). ---Application Insights introspects the incoming telemetry data and is able to generate a visual map of the system integrations it detects. --1. Sign in to the [Azure portal](https://portal.azure.com). --2. Open the resource group for the sample application, which is `application-insights-azure-cafe`. --3. From the list of resources, select the `azure-cafe-insights-{SUFFIX}` Application Insights resource. --4. From the left menu, beneath the **Investigate** heading, select **Application map**. Observe the generated Application map. 
-- :::image type="content" source="media/tutorial-asp-net-core/application-map.png" alt-text="Screenshot of the Application Insights application map in the Azure portal." lightbox="media/tutorial-asp-net-core/application-map.png"::: --### Viewing HTTP calls and database SQL command text --1. In the Azure portal, open the Application Insights resource. --2. On the left menu, beneath the **Investigate** header, select **Performance**. --3. The **Operations** tab contains details of the HTTP calls received by the application. To toggle between Server and Browser (client-side) views of the data, use the Server/Browser toggle. -- :::image type="complex" source="media/tutorial-asp-net-core/server-performance.png" alt-text="Screenshot of the Performance screen in the Azure portal." lightbox="media/tutorial-asp-net-core/server-performance.png"::: - Screenshot of the Application Insights Performance screen in the Azure portal. The screenshot shows the Server/Browser toggle and HTTP calls received by the application highlighted. - :::image-end::: --4. Select an Operation from the table, and choose to drill into a sample of the request. - - :::image type="complex" source="media/tutorial-asp-net-core/select-operation-performance.png" alt-text="Screenshot of the Application Insights Performance screen in the Azure portal with operations and sample operations listed." lightbox="media/tutorial-asp-net-core/select-operation-performance.png"::: - Screenshot of the Application Insights Performance screen in the Azure portal. The screenshot shows a POST operation and a sample operation from the suggested list selected and highlighted and the Drill into samples button is highlighted. - :::image-end::: -- The end-to-end transaction displays for the selected request. In this case, a review was created, including an image, so it includes calls to Azure Storage and the Language Service (for sentiment analysis). It also includes database calls into SQL Azure to persist the review. In this example, the first selected Event displays information relative to the HTTP POST call. -- :::image type="content" source="media/tutorial-asp-net-core/e2e-http-call.png" alt-text="Screenshot of the end-to-end transaction in the Azure portal with the HTTP Post call selected." lightbox="media/tutorial-asp-net-core/e2e-http-call.png"::: --5. Select a SQL item to review the SQL command text issued to the database. -- :::image type="content" source="media/tutorial-asp-net-core/e2e-db-call.png" alt-text="Screenshot of the end-to-end transaction in the Azure portal with SQL command details." lightbox="media/tutorial-asp-net-core/e2e-db-call.png"::: --6. Optionally, select the Dependency (outgoing) requests to Azure Storage or the Language Service. --7. Return to the **Performance** screen and select the **Dependencies** tab to investigate calls into external resources. Notice the Operations table includes calls into Sentiment Analysis, Blob Storage, and Azure SQL. -- :::image type="content" source="media/tutorial-asp-net-core/performance-dependencies.png" alt-text="Screenshot of the Application Insights Performance screen in the Azure portal with the Dependencies tab selected and the Operations table highlighted." 
lightbox="media/tutorial-asp-net-core/performance-dependencies.png"::: --## Application logging with Application Insights --### Logging overview --Application Insights is one type of [logging provider](/dotnet/core/extensions/logging-providers) available to ASP.NET Core applications that becomes available to applications when the [Application Insights for ASP.NET Core](#install-the-application-insights-nuget-package) NuGet package is installed and [server-side telemetry collection is enabled](#enable-application-insights-server-side-telemetry). --As a reminder, the following code in **Program.cs** registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container. --```csharp -builder.Services.AddApplicationInsightsTelemetry(); -``` --With the `ApplicationInsightsLoggerProvider` registered as the logging provider, the app is ready to log into Application Insights by using either constructor injection with <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601>. --> [!NOTE] -> By default, the logging provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. --Consider the following example controller. It demonstrates the injection of ILogger, which is resolved with the `ApplicationInsightsLoggerProvider` that is registered with the dependency injection container. Observe in the **Get** method that an Informational, Warning, and Error message are recorded. --> [!NOTE] -> By default, the Information level trace will not be recorded. Only the Warning and above levels are captured. --```csharp -using Microsoft.AspNetCore.Mvc; --[Route("api/[controller]")] -[ApiController] -public class ValuesController : ControllerBase -{ - private readonly ILogger _logger; -- public ValuesController(ILogger<ValuesController> logger) - { - _logger = logger; - } -- [HttpGet] - public ActionResult<IEnumerable<string>> Get() - { - //Info level traces are not captured by default - _logger.LogInformation("An example of an Info trace.."); - _logger.LogWarning("An example of a Warning trace.."); - _logger.LogError("An example of an Error level message"); -- return new string[] { "value1", "value2" }; - } -} -``` --For more information, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging). --## View logs in Application Insights --The ValuesController above is deployed with the sample application and is located in the **Controllers** folder of the project. --1. Using an internet browser, open the sample application. In the address bar, append `/api/Values` and press <kbd>Enter</kbd>. -- :::image type="content" source="media/tutorial-asp-net-core/values-api-url.png" alt-text="Screenshot of a browser window with /api/Values appended to the URL in the address bar." lightbox="media/tutorial-asp-net-core/values-api-url.png"::: --2. In the [Azure portal](https://portal.azure.com), wait a few moments and then select the **azure-cafe-insights-{SUFFIX}** Application Insights resource. -- :::image type="content" source="media/tutorial-asp-net-core/application-insights-resource-group.png" alt-text="Screenshot of the application-insights-azure-cafe resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-core/application-insights-resource-group.png"::: --3. 
From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**. - -4. In the **Tables** pane, under the **Application Insights** tree, double-click on the **traces** table. --5. Modify the query to retrieve traces for the **Values** controller as follows, then select **Run** to filter the results. -- ```kql - traces - | where operation_Name == "GET Values/Get" - ``` -- The results display the logging messages present in the controller. A log severity of 2 indicates a Warning level, and a log severity of 3 indicates an Error level. --6. Alternatively, you can also write the query to retrieve results based on the category of the log. By default, the category is the fully qualified name of the class where the `ILogger` is injected. In this case, the category name is **ValuesController** (if there's a namespace associated with the class, the name is prefixed with the namespace). Rewrite and run the following query to retrieve results based on category. -- ```kql - traces - | where customDimensions.CategoryName == "ValuesController" - ``` --## Control the level of logs sent to Application Insights --`ILogger` implementations have a built-in mechanism to apply [log filtering](/dotnet/core/extensions/logging#how-filtering-rules-are-applied). This filtering lets you control the logs that are sent to each registered provider, including the Application Insights provider. You can use the filtering either in configuration (using an *appsettings.json* file) or in code. For more information about log levels and guidance on how to use them appropriately, see the [Log Level](/aspnet/core/fundamentals/logging#log-level) documentation. --The following examples show how to apply filter rules to the `ApplicationInsightsLoggerProvider` to control the level of logs sent to Application Insights. --### Create filter rules with configuration --The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in configuration. The following section of an *appsettings.json* file sets the default log level for all providers to <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType>. The configuration for the ApplicationInsights provider, specifically for categories that start with "ValuesController," overrides this default value with <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher. --```json -{ - //... additional code removed for brevity - "Logging": { - "LogLevel": { // No provider, LogLevel applies to all the enabled providers. - "Default": "Warning" - }, - "ApplicationInsights": { // Specific to the provider, LogLevel applies to the Application Insights provider. - "LogLevel": { - "ValuesController": "Error" //Log Level for the "ValuesController" category - } - } - } -} -``` --With the preceding configuration deployed in *appsettings.json*, only the error trace is sent to Application Insights when you interact with the **ValuesController**. This is because the **LogLevel** for the **ValuesController** category is set to **Error**. Therefore, the **Warning** trace is suppressed. --## Turn off logging to Application Insights --To disable logging by using configuration, set all LogLevel values to "None". --```json -{ - //... additional code removed for brevity - "Logging": { - "LogLevel": { // No provider, LogLevel applies to all the enabled providers.
- "Default": "None" - }, - "ApplicationInsights": { // Specific to the provider, LogLevel applies to the Application Insights provider. - "LogLevel": { - "ValuesController": "None" //Log Level for the "ValuesController" category - } - } - } -} -``` --Similarly, within the code, set the default level for the `ApplicationInsightsLoggerProvider` and any subsequent log levels to **None**. --```csharp -var builder = WebApplication.CreateBuilder(args); -builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.None); -builder.Logging.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("ValuesController", LogLevel.None); -``` --## Open-source SDK --* [Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet). --For the latest updates and bug fixes, see the [release notes](./release-notes.md). --## Next steps --* [Explore user flows](./usage-flows.md) to understand how users navigate through your app. -* [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown. -* [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage. -* [Availability overview](availability-overview.md) -* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection) -* [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging) -* [.NET trace logs in Application Insights](./asp-net-trace-logs.md) -* [Autoinstrumentation for Application Insights](./codeless-overview.md) |
azure-monitor | Tutorial Asp Net Custom Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-custom-metrics.md | - Title: Application Insights custom metrics with .NET and .NET Core -description: Learn how to use Application Insights to capture locally pre-aggregated metrics for .NET and .NET Core applications. - Previously updated : 08/22/2022-----# Capture Application Insights custom metrics with .NET and .NET Core --In this article, you'll learn how to capture custom metrics with Application Insights in .NET and .NET Core apps. --Insert a few lines of code in your application to find out what users are doing with it or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use. --## ASP.NET Core applications --### Prerequisites --To complete this tutorial, you need: --* Visual Studio 2022 -* The following Visual Studio Workloads: - * ASP.NET and web development - * Data storage and processing - * Azure development -* .NET 6.0 -* Azure subscription and user account (with the ability to create and delete resources) -* The [completed sample application](./tutorial-asp-net-core.md) deployed or an existing ASP.NET Core application with the [Application Insights for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) NuGet package installed and [configured to gather server-side telemetry](asp-net-core.md#enable-application-insights-server-side-telemetry-visual-studio). --### Custom metrics overview --The Application Insights .NET and .NET Core SDKs have two different methods for collecting custom metrics: `TrackMetric()` and `GetMetric()`. The key difference between these two methods is local aggregation. `TrackMetric()` lacks pre-aggregation while `GetMetric()` has pre-aggregation. The recommended approach is to use aggregation. Therefore, `TrackMetric()` is no longer the preferred method for collecting custom metrics. This article walks you through using the `GetMetric()` method and some of the rationale behind how it works. --#### Pre-aggregating vs. non-pre-aggregating API --`TrackMetric()` sends raw telemetry that denotes a metric. `TrackMetric()` is inefficient because it sends a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance because every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors. --Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then submits only an aggregated summary metric at a fixed interval of one minute. If you need to closely monitor some custom metric at the second or even millisecond level, you can use `GetMetric()` to do so while incurring only the storage and network traffic cost of one aggregate per minute. This behavior also greatly reduces the risk of throttling because the total number of telemetry items that need to be sent for an aggregated metric is greatly reduced. --In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where the alerting you may have built around these metrics could become unreliable.
By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. Because custom metrics aren't sampled, there are some potential concerns, which are described below. --Trend tracking in a metric every second or at a more granular interval can result in: --- Increased data storage costs. There's a cost associated with how much data you send to Azure Monitor. (The more data you send, the greater the overall cost of monitoring.)-- Increased network traffic/performance overhead. (In some scenarios, this overhead could have both a monetary and application performance cost.)-- Risk of ingestion throttling. (The Azure Monitor service drops ("throttles") data points when your app sends a high rate of telemetry in a short time interval.)--Throttling is a concern because it can lead to missed alerts. The condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint because too much data was sent. We don't recommend using `TrackMetric()` for .NET and .NET Core unless you've implemented your own local aggregation logic. If you're trying to track every occurrence of an event over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Keep in mind that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` even without writing your own local pre-aggregation, but be aware of the pitfalls if you do so. --In summary, `GetMetric()` is the recommended approach because it does pre-aggregation, accumulates values from all the `TrackValue()` calls, and sends a summary/aggregate once every minute. `GetMetric()` can significantly reduce the cost and performance overhead by sending fewer data points, while still collecting all of the relevant information. --## Getting a TelemetryClient instance --Get an instance of `TelemetryClient` from the dependency injection container in **HomeController.cs**: --```csharp -//... additional code removed for brevity -using Microsoft.ApplicationInsights; --namespace AzureCafe.Controllers -{ - public class HomeController : Controller - { - private readonly ILogger<HomeController> _logger; - private AzureCafeContext _cafeContext; - private BlobContainerClient _blobContainerClient; - private TextAnalyticsClient _textAnalyticsClient; - private TelemetryClient _telemetryClient; -- public HomeController(ILogger<HomeController> logger, AzureCafeContext context, BlobContainerClient blobContainerClient, TextAnalyticsClient textAnalyticsClient, TelemetryClient telemetryClient) - { - _logger = logger; - _cafeContext = context; - _blobContainerClient = blobContainerClient; - _textAnalyticsClient = textAnalyticsClient; - _telemetryClient = telemetryClient; - } -- //... additional code removed for brevity - } -} -``` --`TelemetryClient` is thread-safe. --## TrackMetric --Application Insights can chart metrics that aren't attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, so statistical charts are useful. --To send metrics to Application Insights, you can use the `TrackMetric(..)` API. --### Aggregation --Aggregation is the recommended way to send a metric. --When you work with metrics, every single measurement is rarely of interest. Instead, a summary of what happened during a particular time period is important.
Such a summary is called _aggregation_. --For example, suppose you send metric values of `0` and `1` during a particular time period. The aggregate metric sum for that time period is `1` and the count of the metric values is `2`. When you use the aggregation approach, you invoke `TrackMetric` only once per time period and send the aggregate values. We recommend this approach because it can significantly reduce the cost and performance overhead by sending fewer data points to Application Insights, while still collecting all of the relevant information. --### TrackMetric example --1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file. --2. Locate the `CreateReview` method and the following code. -- ```csharp - if (model.Comments != null) - { - var response = _textAnalyticsClient.AnalyzeSentiment(model.Comments); - review.CommentsSentiment = response.Value.Sentiment.ToString(); - } - ``` --3. To add a custom metric, insert the following code immediately after the previous code. -- ```csharp - _telemetryClient.TrackMetric("ReviewPerformed", model.Rating); - ``` --4. Right-click on the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png"::: --5. To promote the new code to the Azure App Service, select **Publish**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/publish-profile.png" alt-text="Screenshot of the Azure Cafe publish profile screen with the Publish button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/publish-profile.png"::: -- When the Azure Cafe web application is successfully published, a new browser window opens to the Azure Cafe web application. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png"::: --6. To generate some telemetry, follow these steps in the web application to add a review. -- 1. To view a cafe's menu and reviews, select **Details** next to a cafe. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-details-button.png"::: -- 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details screen in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png"::: -- 3. On the Create a review dialog, enter a name, a rating, and comments, and upload a photo for the review. When finished, select **Add review**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png"::: -- 4.
If you need to generate additional telemetry, add additional reviews. --### View metrics in Application Insights --1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="First screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png"::: --2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**. --3. In the **Tables** pane, under the **Application Insights** tree, double-click on the **customMetrics** table. --4. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows: -- ```kql - customMetrics - | where name == "ReviewPerformed" - ``` --5. Select **Run** to filter the results. -- The results display the rating value present in your review. --## GetMetric --As mentioned before, `GetMetric(..)` is the preferred method for sending metrics. To use this method, you'll make some changes to the existing code from the TrackMetric example. --When you run the sample code, you'll see that no telemetry is sent from the application right away. A single telemetry item is sent at around the 60-second mark. --> [!NOTE] -> GetMetric doesn't support tracking the last value (that is, a "gauge") or histograms/distributions. --### GetMetric example --1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file. --2. Locate the `CreateReview` method and the code you added in the previous [TrackMetric example](#trackmetric-example). --3. Replace the code you inserted in the previous TrackMetric example with the following code. -- ```csharp - var metric = _telemetryClient.GetMetric("ReviewPerformed"); - metric.TrackValue(model.Rating); - ``` --4. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png"::: --5. To promote the new code to the Azure App Service, select **Publish**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/publish-profile.png" alt-text="Screenshot of the Azure Cafe publish profile with the Publish button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/publish-profile.png"::: -- When the Azure Cafe web application is successfully published, a new browser window opens to the Azure Cafe web application. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png"::: --6. To generate some telemetry, follow these steps in the web application to add a review. -- 1. To view a cafe's menu and reviews, select **Details** next to a cafe. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted."
lightbox="media/tutorial-asp-net-custom-metrics/cafe-details-button.png"::: -- 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png"::: -- 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. When finished, select **Add review**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png"::: -- 4. If you need to generate additional telemetry, add additional reviews. --### View metrics in Application Insights --1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="Second screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png"::: --2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**. --3. In the **Tables** pane, under the **Application Insights** tree, double-click on the **customMetrics** table. --4. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows: -- ```kql - customMetrics - | where name == "ReviewPerformed" - ``` -- 5. Select **Run** to filter the results. -- The results display the rating value present in your review. --## Multi-dimensional metrics --The examples in the previous section show zero-dimensional metrics. Metrics can also be multi-dimensional. We currently support up to 10 dimensions. --By default, multi-dimensional metrics within the Metric explorer experience aren't turned on in Application Insights resources. -->[!NOTE] -> This is a preview feature and additional billing may apply in the future. --### Enable multi-dimensional metrics --This section walks through enabling multi-dimensional metrics for an Application Insights resource. --1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource. -1. Select **Usage and estimated costs**. -1. Select **Custom Metrics**. -1. Select **Send custom metrics to Azure Metric Store (With dimensions)**. -1. Select **OK**. --After you enable multi-dimensional metrics for an Application Insights resource and send new multi-dimensional telemetry, you can split a metric by dimension. --> [!NOTE] -> Only metrics that are sent after the feature is turned on in the portal will have dimensions stored. --### Multi-dimensional metrics example --1. From the Visual Studio Solution Explorer, locate and open the **HomeController.cs** file. --2. Locate the `CreateReview` method and the code added in the previous [GetMetric example](#getmetric-example). --3. Replace the code you inserted in the previous GetMetric example with the following code. -- ```csharp - var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto"); - ``` --4. 
In the `CreateReview` method, change the code to match the following code. -- ```csharp - [HttpPost] - [ValidateAntiForgeryToken] - public ActionResult CreateReview(int id, CreateReviewModel model) - { - //... additional code removed for brevity - var metric = _telemetryClient.GetMetric("ReviewPerformed", "IncludesPhoto"); -- if (model.ReviewPhoto != null) - { - using (Stream stream = model.ReviewPhoto.OpenReadStream()) - { - //... additional code removed for brevity - } - - metric.TrackValue(model.Rating, bool.TrueString); - } - else - { - metric.TrackValue(model.Rating, bool.FalseString); - } - //... additional code removed for brevity - } - ``` --5. Right-click on the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png" alt-text="Screenshot of the Visual Studio Solution Explorer with the Azure Cafe project selected and the Publish context menu item highlighted." lightbox="media/tutorial-asp-net-custom-metrics/web-project-publish-context-menu.png"::: --6. To promote the new code to the Azure App Service, select **Publish**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/publish-profile.png" alt-text="Screenshot of the Azure Cafe publish profile with the Publish button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/publish-profile.png"::: -- When the Azure Cafe web application is successfully published, a new browser window opens to the Azure Cafe web application. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png" alt-text="Screenshot of the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/azure-cafe-index.png"::: --7. To generate some telemetry, follow these steps in the web application to add a review. -- 1. To view a cafe's menu and reviews, select **Details** next to a cafe. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-details-button.png" alt-text="Screenshot of a portion of the Azure Cafe list in the Azure Cafe web application with the Details button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-details-button.png"::: -- 2. To view and add reviews, on the Cafe screen, select the **Reviews** tab. Select the **Add review** button to add a review. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png" alt-text="Screenshot of the Cafe details screen in the Azure Cafe web application with the Add review button highlighted." lightbox="media/tutorial-asp-net-custom-metrics/cafe-add-review-button.png"::: -- 3. On the Create a review dialog, enter a name, a rating, and comments, and upload a photo for the review. When finished, select **Add review**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png" alt-text="Screenshot of the Create a review dialog in the Azure Cafe web application." lightbox="media/tutorial-asp-net-custom-metrics/create-a-review-dialog.png"::: -- 4. If you need to generate additional telemetry, add additional reviews. --### View logs in Application Insights --1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource.
-- :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="Third screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png"::: --2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Logs**. --3. In the **Tables** pane, under the **Application Insights** tree, double-click on the **customMetrics** table. --4. Modify the query to retrieve metrics for the **ReviewPerformed** custom named metric as follows: -- ```kql - customMetrics - | where name == "ReviewPerformed" - ``` --5. Select **Run** to filter the results. -- The results display the rating value present in your review and the aggregated values. --6. To extract the **IncludesPhoto** dimension into a separate variable (column) to better observe the dimension, use the following query. -- ```kql - customMetrics - | extend IncludesPhoto = tobool(customDimensions.IncludesPhoto) - | where name == "ReviewPerformed" - ``` -- Because we reused the same custom metric name as before, results with and without the custom dimension will be displayed. --7. To only display results with the custom dimension, update the query to match the following query. -- ```kql - customMetrics - | extend IncludesPhoto = tobool(customDimensions.IncludesPhoto) - | where name == "ReviewPerformed" and isnotnull(IncludesPhoto) - ``` --### View metrics in Application Insights --1. In the [Azure portal](https://portal.azure.com), select the **Application Insights** resource. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png" alt-text="Fourth screenshot of a resource group in the Azure portal with the Application Insights resource highlighted." lightbox="media/tutorial-asp-net-custom-metrics/application-insights-resource-group.png"::: --2. From the left menu of the Application Insights resource, under the **Monitoring** section, select **Metrics**. --3. In the **Metric Namespace** drop-down menu, select **azure.applicationinsights**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/metrics-explorer-namespace.png" alt-text="Screenshot of metrics explorer in the Azure portal with the Metric Namespace highlighted." lightbox="media/tutorial-asp-net-custom-metrics/metrics-explorer-namespace.png"::: --4. In the **Metric** drop-down menu, select **ReviewPerformed**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/metrics-explorer-metric.png" alt-text="Screenshot of metrics explorer in the Azure portal with the Metric highlighted." lightbox="media/tutorial-asp-net-custom-metrics/metrics-explorer-metric.png"::: -- You'll notice that you can't split the metric by your new custom dimension or view your custom dimension with the metrics view. --5. To split the metric by dimension, select **Apply Splitting**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/apply-splitting.png" alt-text="Screenshot of the Apply Splitting button in the Azure portal." lightbox="media/tutorial-asp-net-custom-metrics/apply-splitting.png"::: --6. To view your custom dimension, in the **Values** drop-down menu, select **IncludesPhoto**. -- :::image type="content" source="media/tutorial-asp-net-custom-metrics/splitting-dimension.png" alt-text="Screenshot of the Azure portal. It illustrates splitting by using a custom dimension." 
lightbox="media/tutorial-asp-net-custom-metrics/splitting-dimension.png"::: --## Next steps --* [Metric Explorer](../essentials/metrics-getting-started.md) -* How to enable Application Insights for [ASP.NET Core Applications](./asp-net-core.md) |
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | For more information on Log Analytics permissions, see [Manage access to log dat Provide the following properties when creating new dedicated cluster: - **ClusterName**: Must be unique for the resource group.-- **ResourceGroupName**: Use a central IT resource group because many teams in the organization usually share clusters. For more design considerations, review Design a Log Analytics workspace configuration(../logs/workspace-design.md).+- **ResourceGroupName**: Use a central IT resource group because many teams in the organization usually share clusters. For more design considerations, review [Design a Log Analytics workspace configuration](../logs/workspace-design.md). - **Location** - **SkuCapacity**: You can set the commitment tier (formerly called capacity reservations) to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters). - **Managed identity**: Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): |
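To make these properties concrete, the following is a hedged C# sketch of the underlying ARM `PUT` request. The subscription ID, resource group, cluster name, region, and `api-version` are all assumptions to replace; check the dedicated clusters REST reference for the current API version.

```csharp
using System.Net.Http.Headers;
using System.Text;
using Azure.Core;
using Azure.Identity;

// Hypothetical identifiers -- replace with your own values.
var subscriptionId = "00000000-0000-0000-0000-000000000000";
var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
          "/resourceGroups/central-it-rg/providers/Microsoft.OperationalInsights" +
          "/clusters/my-dedicated-cluster?api-version=2021-06-01"; // verify version

// Acquire an ARM token (managed identity, Azure CLI login, and so on).
var token = new DefaultAzureCredential().GetToken(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

// ClusterName comes from the URL; the body carries Location, the commitment
// tier (SkuCapacity), and the managed identity type described above.
var body = "{ \"location\": \"eastus2\", " +
           "\"sku\": { \"name\": \"CapacityReservation\", \"capacity\": 500 }, " +
           "\"identity\": { \"type\": \"SystemAssigned\" } }";

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token.Token);
var response = await http.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode); // creation is asynchronous and can take a while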
azure-monitor | Personal Data Mgmt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/personal-data-mgmt.md | To manage system resources, we limit purge requests to 50 requests an hour. Batc ## Next steps - Learn more about [how Log Analytics collects, processes, and secures data](../logs/data-security.md).-- Learn more about [how Application Insights collects, processes, and secures data](../app/data-retention-privacy.md).+- Learn more about [how Application Insights collects, processes, and secures data](/previous-versions/azure/azure-monitor/app/data-retention-privacy). |
azure-monitor | Tutorial Monitor Vm Alert Recommended | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert-recommended.md | description: Enable set of recommended metric alert rules for an Azure virtual m Previously updated : 09/28/2023 Last updated : 10/20/2023 To complete the steps in this article you need the following: ## Create recommended alert rules-From the menu for the VM, select **Alerts**. You'll see a brief description of recommended alerts and the option to enable them. Click **Enable recommended alert rules**. +From the menu for the VM, select **Alerts** in the **Monitoring** section. Select **View + enable**. A list of recommended alert rules is displayed. You can select which ones to create and change their recommended threshold if you want. Ensure that **Email** is enabled and provide an email address to be notified when any of the alerts fire. An [action group](../alerts/action-groups.md) will be created with this address. If you already have an action group that you want to use, you can specify it instead. A list of recommended alert rules is displayed. You can select which ones to cre :::image type="content" source="media/tutorial-monitor-vm/recommended-alerts-configure.png" alt-text="Screenshot of recommended alert rule configuration." lightbox="media/tutorial-monitor-vm/recommended-alerts-configure.png"::: -Expand each of the alert rules to inspect its details. By default, the severity for each is **Informational**. You may want to change to another severity such as **Error**. +Expand each of the alert rules to inspect its details. By default, the severity for each is **Informational**. You might want to change to another severity such as **Error**. :::image type="content" source="media/tutorial-monitor-vm/recommended-alerts-configure-severity.png" alt-text="Screenshot of recommended alert rule severity configuration." lightbox="media/tutorial-monitor-vm/recommended-alerts-configure-severity.png"::: |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | Alerts|[Manage your alert rules](alerts/alerts-manage-alert-rules.md)|Recommende Application-insights|[Sampling in Application Insights](app/sampling.md)|ASP.NET Core applications can be configured in code or through the `appsettings.json` file. Removed conflicting information.| Application-insights|[How many Application Insights resources should I deploy?](app/create-workspace-resource.md#how-many-application-insights-resources-should-i-deploy)|Added clarification on setting iKey dynamically in code.| Application-insights|[Application Map: Triage distributed applications](app/app-map.md)|Documented App Map Filters, an exciting new feature.|-Application-insights|[Enable Application Insights for ASP.NET Core applications](app/tutorial-asp-net-core.md)|The Azure Café sample app is now hosted and linked on Git.| +Application-insights|[Enable Application Insights for ASP.NET Core applications](/previous-versions/azure/azure-monitor/app/tutorial-asp-net-core)|The Azure Café sample app is now hosted and linked on Git.| Application-insights|[What is auto-instrumentation for Azure Monitor Application Insights?](app/codeless-overview.md)|Updated the auto-instrumentation supported languages chart.| Application-insights|[Application monitoring for Azure App Service and ASP.NET](app/azure-web-apps-net.md)|Corrected links to check versions.| Application-insights|[Sampling overrides (preview) - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)|Updated OpenTelemetry Span information for Java.| Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to | Article | Description | ||| |[Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](./app/opentelemetry-enable.md?tabs=java)|Added new OpenTelemetry `@WithSpan` annotation guidance.|-|[Capture Application Insights custom metrics with .NET and .NET Core](./app/tutorial-asp-net-custom-metrics.md)|Updated tutorial steps and images.| +|[Capture Application Insights custom metrics with .NET and .NET Core](/previous-versions/azure/azure-monitor/app/tutorial-asp-net-custom-metrics)|Updated tutorial steps and images.| |[Configuration options: Azure Monitor Application Insights for Java](./app/opentelemetry-enable.md)|Updated connection string guidance.|-|[Enable Application Insights for ASP.NET Core applications](./app/tutorial-asp-net-core.md)|Updated tutorial steps and images.| +|[Enable Application Insights for ASP.NET Core applications](/previous-versions/azure/azure-monitor/app/tutorial-asp-net-core)|Updated tutorial steps and images.| |[Enable Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications (preview)](./app/opentelemetry-enable.md)|Fixed the product feedback link at the bottom of each document.| |[Filter and preprocess telemetry in the Application Insights SDK](./app/api-filtering-sampling.md)|Added sample initializer to control which client IP gets used as part of geo-location mapping.| |[Java Profiler for Azure Monitor Application Insights](./app/java-standalone-profiler.md)|Announced the new Java Profiler at Ignite. 
Read all about it.| Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to |[Incoming request tracking in Application Insights with OpenCensus Python](/previous-versions/azure/azure-monitor/app/opencensus-python-request)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.| |[Monitor Python applications with Azure Monitor](/previous-versions/azure/azure-monitor/app/opencensus-python)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.| |[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Updated connection string overrides example.|-|[Application Insights SDK for ASP.NET Core applications](app/tutorial-asp-net-core.md)|Added a new tutorial with step-by-step instructions on how to use the Application Insights SDK with .NET Core applications.| +|[Application Insights SDK for ASP.NET Core applications](/previous-versions/azure/azure-monitor/app/tutorial-asp-net-core)|Added a new tutorial with step-by-step instructions on how to use the Application Insights SDK with .NET Core applications.| |[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Updated and clarified the SDK support guidance.| |[Application Insights: Dependency autocollection](app/asp-net-dependencies.md#dependency-auto-collection)|Updated the latest currently supported node.js modules.|-|[Application Insights custom metrics with .NET and .NET Core](app/tutorial-asp-net-custom-metrics.md)|Added a new tutorial with step-by-step instructions on how to enable custom metrics with .NET applications.| +|[Application Insights custom metrics with .NET and .NET Core](/previous-versions/azure/azure-monitor/app/tutorial-asp-net-custom-metrics)|Added a new tutorial with step-by-step instructions on how to enable custom metrics with .NET applications.| |[Migrate an Application Insights classic resource to a workspace-based resource](app/convert-classic-resource.md)|Added a comprehensive FAQ section to assist with migration to workspace-based resources.| |[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)| Updated this article for 3.4.0-BETA.| |
azure-resource-manager | Conditional Resource Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md | -# Conditional deployment in Bicep +# Conditional deployments in Bicep with the if expression -Sometimes you need to optionally deploy a resource or module in Bicep. Use the `if` keyword to specify whether the resource or module is deployed. The value for the condition resolves to true or false. When the value is true, the resource is created. When the value is false, the resource isn't created. The value can only be applied to the whole resource or module. +To optionally deploy a resource or module in Bicep, use the `if` expression. An `if` expression includes a condition that resolves to true or false. When the `if` condition is true, the resource is deployed. When the value is false, the resource isn't created. The value can only be applied to the whole resource or module. > [!NOTE] > Conditional deployment doesn't cascade to [child resources](child-resource-name-type.md). If you want to conditionally deploy a resource and its child resources, you must apply the same condition to each resource type. If you would rather learn about conditions through step-by-step guidance, see [B ## Define condition for deployment -In Bicep, you can conditionally deploy a resource by passing in a parameter that specifies whether the resource is deployed. You test the condition with an `if` statement in the resource declaration. The following example shows a Bicep file that conditionally deploys a DNS zone. When `deployZone` is `true`, it deploys the DNS zone. When `deployZone` is `false`, it skips deploying the DNS zone. +In Bicep, you can conditionally deploy a resource by passing in a parameter that specifies whether the resource is deployed. You test the condition with an `if` expression in the resource declaration. The following example shows the syntax for an `if` expression in a Bicep file. It conditionally deploys a DNS zone. When `deployZone` is `true`, it deploys the DNS zone. When `deployZone` is `false`, it skips deploying the DNS zone. ```bicep param deployZone bool |
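// Hedged sketch completing the truncated snippet above: the conditionally
// deployed DNS zone that the paragraph describes. The zone name is illustrative.
resource dnsZone 'Microsoft.Network/dnsZones@2018-05-01' = if (deployZone) {
  name: 'myZone'
  location: 'global'
}
```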
azure-resource-manager | File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/file.md | For more information, see [Use Bicep modules](./modules.md). ## Resource and module decorators -You can add a decorator to a resource or module definition. The supported decorators are `batchSize(int)` and `description`. You can only apply `batchSize(int)` to a resource or module definition that uses a `for` expression. -By default, resources are deployed in parallel. When you add the `batchSize` decorator, you deploy instances serially. +By default, resources are deployed in parallel. When you add the `batchSize(int)` decorator, you deploy instances serially. ```bicep @batchSize(3) |
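// Hedged sketch completing the truncated snippet above: because of the
// batchSize(3) decorator, these storage accounts deploy three at a time
// instead of in parallel. Names and API version are illustrative.
resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, 9): {
  name: '${i}storage${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}]
```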
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | There are some important best practices to follow for optimal performance of NFS datastores on Azure NetApp Files volumes. >[!IMPORTANT] > If you've changed the Azure NetApp Files volumes performance tier after creating the volume and datastore, see [Service level change for Azure NetApp files datastore](#service-level-change-for-azure-netapp-files-datastore) to ensure that volume/datastore metadata is in sync to avoid unexpected behavior in the portal or the API due to metadata mismatch. -- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).+- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 8, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). - Ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones) using [availability zone volume placement](../azure-netapp-files/manage-availability-zone-volume-placement.md) in the same subscription. Information regarding your AVS private cloud's availability zone can be viewed from the overview pane within the AVS private cloud. For performance benchmarks that Azure NetApp Files datastores deliver for VMs on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md). Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y - **How many datastores are we supporting with Azure VMware Solution?** - The default limit is 64 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). + The default maximum is 8 but it can be increased to 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
- **What latencies and bandwidth can be expected from the datastores backed by Azure NetApp Files?** |
communication-services | Phone Number Management For Japan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-japan.md | Use the below tables to find all the relevant information on number availability | Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | : |-| Toll-Free |- | - | General Availability | General Availability\* | -| National | - | - | General Availability | General Availability\* | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| National | - | - | Public Preview | Public Preview\* | \* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. |
communication-services | Classification Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/classification-concepts.md | Once a Job has been classified, it can be reclassified in the following ways: 1. You can update the Job labels, which causes the Job Router to evaluate the new labels with the previous Classification Policy. 2. You can update the Classification Policy ID of a Job, which causes Job Router to process the existing Job against the new policy. 3. An Exception Policy **trigger** can take the **action** of requesting that a Job be reclassified.+4. You can reclassify the Job, which causes the Job Router to re-evaluate the current labels and Classification Policy. <!-- LINKS --> [subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md |
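As a hedged illustration of option 4, the following minimal C# sketch uses the `Azure.Communication.JobRouter` .NET SDK. The client type and the `ReclassifyJobAsync` method reflect that SDK at the time of writing, but verify the names against the current SDK reference:

```csharp
using Azure.Communication.JobRouter;

// Placeholder connection string for your Communication Services resource.
var client = new JobRouterClient("<communication-services-connection-string>");

// Ask Job Router to re-evaluate the job's current labels against its
// classification policy (option 4 in the list above). The job ID is illustrative.
await client.ReclassifyJobAsync("job-1");
```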
communication-services | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/managed-identity.md | -Azure Communication Services (ACS) is a fully managed communication platform that enables developers to build real-time communication features into their applications. By using Managed Identity with Azure Communication Services, you can simplify the authentication process for your application, while also increasing its security. This document covers how to use Managed Identity with Azure Communication Services. +Azure Communication Services is a fully managed communication platform that enables developers to build real-time communication features into their applications. By using Managed Identity with Azure Communication Services, you can simplify the authentication process for your application, while also increasing its security. This document covers how to use Managed Identity with Azure Communication Services. -## Using Managed Identity with ACS +## Using Managed Identity with Azure Communication Services Azure Communication Services supports using Managed Identity to authenticate with the service. By using Managed Identity, you can eliminate the need to manage your own access tokens and credentials. -Your ACS resource can be assigned two types of identity: +Your Azure Communication Services resource can be assigned two types of identity: 1. A **System Assigned Identity** which is tied to your resource and is deleted when your resource is deleted. Your resource can only have one system-assigned identity.-2. A **User Assigned Identity** which is an Azure resource that can be assigned to your ACS resource. This identity isn't deleted when your resource is deleted. Your resource can have multiple user-assigned identities. +2. A **User Assigned Identity** which is an Azure resource that can be assigned to your Azure Communication Services resource. This identity isn't deleted when your resource is deleted. Your resource can have multiple user-assigned identities. To use Managed Identity with Azure Communication Services, follow these steps: 1. Grant your Managed Identity access to the Communication Services resource. This assignment can be through the Azure portal, the Azure CLI, or the Azure Communication Management SDKs.-2. Use the Managed Identity to authenticate with ACS. Authentication can be done through the Azure SDKs or REST APIs that support Managed Identity. +2. Use the Managed Identity to authenticate with Azure Communication Services. Authentication can be done through the Azure SDKs or REST APIs that support Managed Identity. -- Managed Identity can also be assigned to your ACS resource using the Azure Commu This assignment can be achieved by introducing the identity property in the resource definition either on creation or when updating the resource. # [.NET](#tab/dotnet)-You can assign your managed identity to your ACS resource using the Azure Communication Management SDK for .NET by setting the `Identity` property on the `CommunicationServiceResourceData `. +You can assign your managed identity to your Azure Communication Services resource using the Azure Communication Management SDK for .NET by setting the `Identity` property on the `CommunicationServiceResourceData`. For example: |
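As a rough sketch of step 1 above (granting the managed identity access), the role assignment can also be scripted with the Azure CLI. The resource names are placeholders, the `communication` CLI extension is assumed, and the Contributor role is shown only as an illustration; choose the role your scenario actually requires.

```bash
# Resolve the principal ID of a user-assigned managed identity (placeholder names).
principalId=$(az identity show \
  --name myUserAssignedIdentity \
  --resource-group myResourceGroup \
  --query principalId --output tsv)

# Resolve the resource ID of the Communication Services resource.
acsId=$(az communication show \
  --name myCommunicationService \
  --resource-group myResourceGroup \
  --query id --output tsv)

# Grant the identity a role on the resource (Contributor is illustrative only).
az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Contributor" \
  --scope "$acsId"
```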
cost-management-billing | Find Tenant Id Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/find-tenant-id-domain.md | tags: billing Previously updated : 08/04/2022 Last updated : 10/19/2023 |
cost-management-billing | Manage Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md | tags: billing Previously updated : 04/05/2023 Last updated : 10/19/2023 |
data-factory | Continuous Integration Delivery Sample Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md | The following YAML code executes a script that can be used to stop triggers befo ScriptArguments: -armTemplate "<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $true -deleteDeployment $false errorActionPreference: stop FailOnStandardError: False- azurePowerShellVersion: azurePowerShellVersion - preferredAzurePowerShellVersion: 3.1.0 - pwsh: False + azurePowerShellVersion: 'LatestVersion' + pwsh: True workingDirectory: ../ ``` The following YAML code executes a script that can be used to stop triggers befo ScriptArguments: -armTemplate "<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name>-predeployment $false -deleteDeployment $true errorActionPreference: stop FailOnStandardError: False- azurePowerShellVersion: azurePowerShellVersion - preferredAzurePowerShellVersion: 3.1.0 - pwsh: False + azurePowerShellVersion: 'LatestVersion' + pwsh: True workingDirectory: ../ ``` |
defender-for-cloud | Concept Devops Posture Management Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-devops-posture-management-overview.md | + + Title: Microsoft Defender for DevOps - DevOps security posture management overview +description: Learn how to discover security posture violations in DevOps environments Last updated : 10/17/2023++++++# Improve DevOps security posture ++With an increase in cyberattacks on source code management systems and continuous integration/continuous delivery pipelines, securing DevOps platforms against the diverse range of threats identified in the [DevOps Threat Matrix](https://www.microsoft.com/security/blog/2023/04/06/devops-threat-matrix/) is crucial. Such cyberattacks can enable code injection, privilege escalation, and data exfiltration, potentially leading to extensive impact. ++DevOps posture management is a feature in Microsoft Defender for Cloud that: ++- Provides insights into the security posture of the entire software supply chain lifecycle. +- Uses advanced scanners for in-depth assessments. +- Covers various resources, including organizations, pipelines, and repositories. +- Allows customers to reduce their attack surface by uncovering and acting on the provided recommendations. ++## DevOps scanners ++To provide findings, DevOps posture management uses DevOps scanners to identify weaknesses in source code management and continuous integration/continuous delivery pipelines by running checks against the security configurations and access controls. ++Azure DevOps and GitHub scanners are used internally within Microsoft to identify risks associated with DevOps resources, reducing attack surface and strengthening corporate DevOps systems. ++Once a DevOps environment is connected, Defender for Cloud autoconfigures these scanners to conduct recurring scans every eight hours across multiple DevOps resources, including: ++- Builds +- Secure Files +- Variable Groups +- Service Connections +- Organizations +- Repositories ++## DevOps threat matrix risk reduction ++DevOps posture management assists organizations in discovering and remediating harmful misconfigurations in the DevOps platform. This leads to a resilient, zero-trust DevOps environment, which is strengthened against a range of threats defined in the DevOps threat matrix. The primary posture management controls include: ++- **Scoped secret access**: Minimize the exposure of sensitive information and reduce the risk of unauthorized access, data leaks, and lateral movements by ensuring each pipeline only has access to the secrets essential to its function. +- **Restriction of self-hosted runners and high permissions**: Prevent unauthorized executions and potential escalations by avoiding self-hosted runners and ensuring that pipeline permissions default to read-only. +- **Enhanced branch protection**: Maintain the integrity of the code by enforcing branch protection rules and preventing malicious code injections. +- **Optimized permissions and secure repositories**: Reduce the risk of unauthorized access and modifications by tracking minimum base permissions and by enabling [secret push protection](https://docs.github.com/enterprise-cloud@latest/code-security/secret-scanning/push-protection-for-repositories-and-organizations) for repositories. ++- Learn more about the [DevOps threat matrix](https://www.microsoft.com/security/blog/2023/04/06/devops-threat-matrix/). 
++## DevOps posture management recommendations ++When the DevOps scanners uncover deviations from security best practices within source code management systems and continuous integration/continuous delivery pipelines, Defender for Cloud outputs precise and actionable recommendations. These recommendations have the following benefits: ++- **Enhanced visibility**: Obtain comprehensive insights into the security posture of DevOps environments, ensuring a well-rounded understanding of any existing vulnerabilities. Identify missing branch protection rules, privilege escalation risks, and insecure connections to prevent attacks. +- **Priority-based action**: Filter results by severity to spend resources and effort more effectively by addressing the most critical vulnerabilities first. +- **Attack surface reduction**: Address highlighted security gaps to significantly minimize vulnerable attack surfaces, thereby hardening defenses against potential threats. +- **Real-time notifications**: Integrate with workflow automations to receive immediate alerts when secure configurations change, allowing for prompt action and ensuring sustained compliance with security protocols. ++## Next steps ++- [Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md). +- [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md). |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | If you're looking for items older than six months, you can find them in the [Arc |Date |Update | |-|-|+| October 19 |[DevOps security posture management recommendations available in public preview](#devops-security-posture-management-recommendations-available-in-public-preview) | October 18 | [Releasing CIS Azure Foundations Benchmark v2.0.0 in Regulatory Compliance dashboard](#releasing-cis-azure-foundations-benchmark-v200-in-regulatory-compliance-dashboard) +## DevOps security posture management recommendations available in public preview ++October 19, 2023 ++New DevOps posture management recommendations are now available in public preview for all customers with a connector for Azure DevOps or GitHub. DevOps posture management helps to reduce the attack surface of DevOps environments by uncovering weaknesses in security configurations and access controls. Learn more about [DevOps posture management](concept-devops-posture-management-overview.md). + ### Releasing CIS Azure Foundations Benchmark v2.0.0 in regulatory compliance dashboard October 18, 2023 Microsoft Defender for Cloud now supports the latest [CIS Azure Security Foundations Benchmark - version 2.0.0](https://www.cisecurity.org/benchmark/azure) in the Regulatory Compliance [dashboard](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/22), as well as a built-in policy initiative in Azure Policy. The release of version 2.0.0 in Microsoft Defender for Cloud is a joint collaborative effort between Microsoft, the Center for Internet Security (CIS), and the user communities. Version 2.0.0 significantly expands the assessment scope, which now includes 90+ built-in Azure policies, and succeeds the prior versions 1.4.0, 1.3.0, and 1.0 in Microsoft Defender for Cloud and Azure Policy. Refer to this [blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-cloud-now-supports-cis-azure-security/ba-p/3944860) for more details. ## September 2023 June 15, 2023 The NIST 800-53 standards (both R4 and R5) have recently been updated with control changes in Microsoft Defender for Cloud regulatory compliance. The Microsoft-managed controls have been removed from the standard, and the information on the Microsoft responsibility implementation (as part of the cloud shared responsibility model) is now available only in the control details pane under **Microsoft Actions**. 
-These controls were previously calculated as passed controls, so you may see a significant dip in your compliance score for NIST standards between April 2023 and May 2023. +These controls were previously calculated as passed controls, so you might see a significant dip in your compliance score for NIST standards between April 2023 and May 2023. For more information on compliance controls, see [Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud](regulatory-compliance-dashboard.md#investigate-regulatory-compliance-issues). If you have already configured continuous export of your alerts to a Log Analyti The security alert quality improvement process for Defender for Servers includes the deprecation of some alerts for both Windows and Linux servers. The deprecated alerts are now sourced from and covered by Defender for Endpoint threat alerts. -If you already have the Defender for Endpoint integration enabled, no further action is required. You may experience a decrease in your alerts volume in April 2023. +If you already have the Defender for Endpoint integration enabled, no further action is required. You might experience a decrease in your alerts volume in April 2023. If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage. |
defender-for-cloud | Support Matrix Cloud Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md | In the support table, **NA** indicates that the feature isn't available. [Security recommendations](security-policy-concept.md) based on the [Microsoft Cloud Security Benchmark](concept-regulatory-compliance.md) | GA | GA | GA [Recommendation exemptions](exempt-resource.md) | Preview | NA | NA [Secure score](secure-score-security-controls.md) | GA | GA | GA+[DevOps security posture](concept-devops-posture-management-overview.md) | Preview | NA | NA **DEFENDER FOR CLOUD PLANS** | | | [Defender CSPM](concept-cloud-security-posture-management.md)| GA | NA | NA [Defender for APIs](defender-for-apis-introduction.md). [Review support preview regions](defender-for-apis-prepare.md#cloud-and-region-support). | Preview | NA | NA |
deployment-environments | Concept Environments Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md | An environment is a collection of Azure resources on which your application is d in Azure Deployment Environments, you use [managed identities](../active-directory/managed-identities-azure-resources/overview.md) to provide elevation-of-privilege capabilities. Identities can help you provide self-serve capabilities to your development teams without giving them access to the target subscriptions in which the Azure resources are created. -The managed identity that's attached to the dev center needs to be granted appropriate access to connect to the catalogs. You should grant owner access to the target deployment subscriptions that are configured at the project level. The Azure Deployment Environments service uses the specific managed identity to perform the deployment on behalf of the developer. +The managed identity that's attached to the dev center needs to be granted appropriate access to connect to the catalogs. You should grant Contributor and User Access Administrator access to the target deployment subscriptions that are configured at the project level. The Azure Deployment Environments service uses the specific managed identity to perform the deployment on behalf of the developer. ## Dev center environment types |
deployment-environments | How To Configure Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md | -The managed identity that's attached to a dev center should be [assigned the Owner role in the deployment subscriptions](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) for each environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities that are set up for the environment type to deploy on behalf of the user. +The managed identity that's attached to a dev center should be [assigned both the Contributor role and the User Access Administrator role in the deployment subscriptions](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) for each environment type. When an environment deployment is requested, the service grants appropriate permissions to the deployment identities that are set up for the environment type to deploy on behalf of the user. The managed identity that's attached to a dev center is also used to add a [catalog](how-to-configure-catalog.md) and access [environment definitions](configure-environment-definition.md) in the catalog. In this article, you learn how to: |
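A minimal Azure CLI sketch of the two role assignments this row describes; `<principal-id>` (the object ID of the dev center's managed identity) and the subscription ID are placeholders.

```bash
# Assign the Contributor role to the dev center identity at subscription scope.
az role assignment create \
  --assignee-object-id <principal-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Contributor" \
  --scope /subscriptions/<deployment-subscription-id>

# Assign the User Access Administrator role at the same scope.
az role assignment create \
  --assignee-object-id <principal-id> \
  --assignee-principal-type ServicePrincipal \
  --role "User Access Administrator" \
  --scope /subscriptions/<deployment-subscription-id>
```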
deployment-environments | How To Configure Project Environment Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-environment-types.md | Add a new project environment type as follows: :::image type="content" source="./media/configure-project-environment-types/add-project-environment-type-page.png" alt-text="Screenshot that shows adding details on the page for adding a project environment type."::: > [!NOTE]-> At least one identity (system assigned or user assigned) must be enabled for deployment identity. It will be used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [granted Owner access to the deployment subscription](how-to-configure-managed-identity.md) configured per environment type. +> At least one identity (system assigned or user assigned) must be enabled for deployment identity. It will be used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [granted Contributor and User Access Administrator access to the deployment subscription](how-to-configure-managed-identity.md) configured per environment type. ## Update a project environment type |
deployment-environments | Quickstart Create And Configure Devcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md | You need to perform the steps in both quickstarts before you can create a deploy ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).+- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor). ## Create a dev center To create and configure a Dev center in Azure Deployment Environments by using the Azure portal: The managed identity that represents your dev center requires access to the subs ||-| |**Scope**|Subscription| |**Subscription**|Select the subscription in which to use the managed identity.|- |**Role**|Owner| + |**Role**|Contributor| ++1. To give access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**: + + |Name |Value | + ||-| + |**Scope**|Subscription| + |**Subscription**|Select the subscription in which to use the managed identity.| + |**Role**|User Access Administrator| 1. To give access to the key vault, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**: |
deployment-environments | Quickstart Create And Configure Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md | The following diagram shows the steps you perform in this quickstart to configur :::image type="content" source="media/quickstart-create-configure-projects/create-environment-steps.png" alt-text="Diagram showing the stages required to configure a project for Deployment Environments."::: -First, you create a project. Then, assign the dev center managed identity the Owner role to the subscription. Then, you configure the project by creating a project environment type. Finally, you give the development team access to the project by assigning the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role to the project. +First, you create a project. Then, assign the dev center managed identity the Contributor and the User Access Administrator roles on the subscription. Then, you configure the project by creating a project environment type. Finally, you give the development team access to the project by assigning the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role to the project. You need to perform the steps in both quickstarts before you can create a deployment environment. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).+- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor). - An Azure Deployment Environments dev center with a catalog attached. If you don't have a dev center with a catalog, see [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). ## Create a project To configure a project, add a [project environment type](how-to-configure-projec :::image type="content" source="./media/quickstart-create-configure-projects/add-project-environment-type-page.png" alt-text="Screenshot that shows adding details in the Add project environment type pane."::: > [!NOTE]-> At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Owner role](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type. +> At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Contributor and the User Access Administrator roles](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type. ## Give access to the development team |
dev-box | How To Request Quota Increase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-request-quota-increase.md | You'll find submitting a support request for additional quota is quicker if you Follow these steps to request a limit increase: -1. On the Azure portal home page, select Support & troubleshooting, and then select **Help + support** +1. On the Azure portal home page, select **Support & troubleshooting** in the upper left, and then select **Help + support**. :::image type="content" source="./media/how-to-request-capacity-increase/submit-new-request.png" alt-text="Screenshot of the Azure portal home page, highlighting the Request core limit increase button." lightbox="./media/how-to-request-capacity-increase/submit-new-request.png"::: To complete the support request, enter the following information: ## Related content - To learn how to check your quota usage, see [Determine usage and quota](./how-to-determine-your-quota-usage.md).-- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits)+- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits) |
dns | Dns Import Export Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export-portal.md | + + Title: Import and export a domain zone file - Azure portal ++description: Learn how to import and export a DNS (Domain Name System) zone file to Azure DNS by using the Azure portal +++ Last updated : 10/20/2023+++++# Import and export a DNS zone file using the Azure portal ++In this article, you learn how to import and export a DNS zone file in Azure DNS by using the Azure portal. You can also [import and export a zone file using Azure CLI](dns-import-export.md). ++## Introduction to DNS zone migration ++A DNS zone file is a text file containing information about every DNS record in the zone. It follows a standard format, making it suitable for transferring DNS records between DNS systems. Using a zone file is a fast and convenient way to import DNS zones into Azure DNS. You can also export a zone file from Azure DNS to use with other DNS systems. ++Azure DNS supports importing and exporting zone files via the Azure CLI and the Azure portal. ++## Obtain your existing DNS zone file ++Before you import a DNS zone file into Azure DNS, you need to obtain a copy of the zone file. The source of this file depends on where the DNS zone is hosted. ++* If your DNS zone is hosted by a partner service, the service should provide a way for you to download the DNS zone file. Partner services include domain registrars, dedicated DNS hosting providers, and alternative cloud providers. +* If your DNS zone is hosted on Windows DNS, the default folder for the zone files is **%systemroot%\system32\dns**. The full path to each zone file is also shown on the **General** tab of the DNS console. +* If your DNS zone is hosted using BIND, the location of the zone file for each zone gets specified in the BIND configuration file **named.conf**. ++> [!IMPORTANT] +> If the zone file that you import contains CNAME entries that point to names in a private zone, Azure DNS resolution of the CNAME fails unless the other zone is also imported, or the CNAME entries are modified. ++## Import a DNS zone file into Azure DNS ++Importing a zone file creates a new zone in Azure DNS if the zone doesn't already exist. If the zone exists, then the record sets in the zone file are merged with the existing record sets. ++### Merge behavior ++* By default, the new record sets get merged with the existing record sets. Identical records within a merged record set aren't duplicated. +* When record sets are merged, the time to live (TTL) of pre-existing record sets is used. +* Start of Authority (SOA) parameters, except `host` are always taken from the imported zone file. The name server record set at the zone apex also always uses the TTL taken from the imported zone file. +* An imported CNAME record doesn't replace an existing CNAME record with the same name. +* When a conflict happens between a CNAME record and another record with the same name of different type, the existing record gets used. ++### Additional information about importing ++The following notes provide more details about the zone import process. ++* The `$TTL` directive is optional, and is supported. When no `$TTL` directive is given, records without an explicit TTL are imported set to a default TTL of 3600 seconds. When two records in the same record set specify different TTLs, the lower value is used. +* The `$ORIGIN` directive is optional, and is supported. 
When no `$ORIGIN` is set, the default value used is the zone name as specified on the command line, including the ending dot (.). +* The `$INCLUDE` and `$GENERATE` directives aren't supported. +* The following record types are supported: A, AAAA, CAA, CNAME, MX, NS, SOA, SRV, and TXT. +* The SOA record is created automatically by Azure DNS when a zone is created. When you import a zone file, all SOA parameters are taken from the zone file *except* the `host` parameter. This parameter uses the value provided by Azure DNS because it needs to refer to the primary name server provided by Azure DNS. +* The name server record set at the zone apex is also created automatically by Azure DNS when the zone is created. Only the TTL of this record set is imported. These records contain the name server names provided by Azure DNS. The record data isn't overwritten by the values contained in the imported zone file. +* Azure DNS supports only single-string TXT records. Multistring TXT records are concatenated and truncated to 255 characters. +* The zone file to be imported must contain 10,000 or fewer lines and no more than 3,000 record sets. ++## Import a zone file ++1. Obtain a copy of the zone file for the zone you wish to import. ++ The following small zone file and resource records are used in this example: ++ ```text + $ORIGIN adatum.com. + $TTL 86400 + @ IN SOA dns1.adatum.com. hostmaster.adatum.com. ( + 2023091201 ; serial + 21600 ; refresh after 6 hours + 3600 ; retry after 1 hour + 604800 ; expire after 1 week + 86400 ) ; minimum TTL of 1 day + + IN NS dns1.adatum.com. + IN NS dns2.adatum.com. ++ IN MX 10 mail.adatum.com. + IN MX 20 mail2.adatum.com. ++ dns1 IN A 5.4.3.2 + dns2 IN A 4.3.2.1 + server1 IN A 4.4.3.2 + server2 IN A 5.5.4.3 + ftp IN A 3.3.2.1 + IN A 3.3.3.2 + mail IN CNAME server1 + mail2 IN CNAME server2 + www IN CNAME server1 + ``` + Names used: + - Origin zone name: **adatum.com** + - Destination zone name: **adatum.com** + - Zone filename: **adatum.com.txt** + - Resource group: **myresourcegroup** +2. Open the **DNS zones** overview page and select **Create**. +3. On the **Create DNS zone** page, type or select the following values: + - **Resource group**: Choose an existing resource group, or select **Create new**, enter **myresourcegroup**, and select **OK**. The resource group name must be unique within the Azure subscription. + - **Name**: Type **adatum.com** for this example. The DNS zone name can be any value that isn't already configured on the Azure DNS servers. A real-world value would be a domain that you bought from a domain name registrar. +4. Select **Review + create** and then select **Create**. +5. When deployment is complete, select **Go to resource**. NS and SOA records compatible with Azure public DNS are automatically added to the zone. See the following example: ++ ![The adatum.com zone overview](./media/dns-import-export-portal/adatum-overview.png) ++6. Select **Import** and then on the **Import DNS zone** page, select **Browse**. +7. Select the **adatum.com.txt** file and then select **Open**. The zone file is displayed in the DNS Zone Editor. See the following example: ++ ![The adatum.com zone displayed in the DNS Zone Editor](./media/dns-import-export-portal/dns-zone-editor.png) ++8. Edit the zone data values before proceeding to the next step. ++ > [!NOTE] + > If old NS records are present in the zone file, a non-blocking error is displayed during zone import. Azure NS records are not overwritten. 
Ideally, remove the old NS records prior to import.<br> + > If you wish to reset the zone serial number, delete the old serial number from the SOA prior to import. ++9. Select **Review + create** and review the information in the DNS Zone Diff Viewer. See the following example: ++ ![The adatum.com zone displayed in the DNS Zone Diff Viewer](./media/dns-import-export-portal/diff-viewer.png) ++10. Select **Create**. The zone data is imported and the zone is displayed. See the following example: ++ [ ![The adatum.com zone displayed in the overview pane](./media/dns-import-export-portal/adatum-imported.png) ](./media/dns-import-export-portal/adatum-imported.png#lightbox) ++## Export a zone file ++1. Open the **DNS zones** overview page and select the zone you wish to export. For example, **adatum.com**. See the following example: ++ [ ![The adatum.com zone ready to export](./media/dns-import-export-portal/adatum-export.png) ](./media/dns-import-export-portal/adatum-export.png#lightbox) ++2. Select **Export**. The file is downloaded to your default downloads directory as a text file with the name AzurePublicDnsZone-adatum.com`number`.txt where `number` is an autogenerated index number. +3. Open the file to view the contents. See the following example: ++ ```text + ; Exported zone file from Azure DNS + ; Zone name: adatum.com + ; Date and time (UTC): Tue, 12 Sep 2023 21:33:17 GMT ++ $TTL 86400 + $ORIGIN adatum.com ++ ; SOA Record + @ 3600 IN SOA ns1-36.azure-dns.com. azuredns-hostmaster.microsoft.com. ( + 0 ;serial + 21600 ;refresh + 3600 ;retry + 604800 ;expire + 86400 ;minimum ttl + ) ++ ; NS Records + @ 172800 IN NS ns1-36.azure-dns.com. + @ 172800 IN NS ns2-36.azure-dns.net. + @ 172800 IN NS ns3-36.azure-dns.org. + @ 172800 IN NS ns4-36.azure-dns.info. ++ ; MX Records + @ 3600 IN MX 10 mail.adatum.com. + @ 3600 IN MX 20 mail2.adatum.com. ++ ; A Records + dns1 3600 IN A 5.4.3.2 + dns2 3600 IN A 4.3.2.1 + ftp 3600 IN A 3.3.2.1 + ftp 3600 IN A 3.3.3.2 + server1 3600 IN A 4.4.3.2 + server2 3600 IN A 5.5.4.3 ++ ; AAAA Records ++ ; CNAME Records + mail 3600 IN CNAME server1 + mail2 3600 IN CNAME server2 + www 3600 IN CNAME server1 ++ ; PTR Records ++ ; TXT Records ++ ; SRV Records ++ ; SPF Records ++ ; CAA Records ++ ; DS Records ++ ; Azure Alias Records + ``` ++## Next steps ++* Learn how to [manage record sets and records](./dns-getstarted-cli.md) in your DNS zone. +* Learn how to [delegate your domain to Azure DNS](dns-domain-delegation.md). |
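After an import like the one above, a quick spot check against one of the zone's Azure DNS name servers confirms the records resolve. This is a sketch that reuses the example values from this row (the `www.adatum.com` record and the `ns1-36.azure-dns.com` name server); substitute your own zone's values.

```bash
# Query an imported record directly against an Azure DNS name server
# (values reuse the adatum.com example above).
nslookup www.adatum.com ns1-36.azure-dns.com
```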
dns | Dns Import Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export.md | -In this article, you'll learn how to import and export a DNS zone file in Azure DNS using Azure CLI. +In this article, you learn how to import and export a DNS zone file in Azure DNS using Azure CLI. You can also [import and export a zone file using the Azure portal](dns-import-export-portal.md). ## Introduction to DNS zone migration A DNS zone file is a text file containing information about every Domain Name System (DNS) record in the zone. It follows a standard format, making it suitable for transferring DNS records between DNS systems. Using a zone file is a fast and convenient way to import DNS zones into Azure DNS. You can also export a zone file from Azure DNS to use with other DNS systems. -Azure DNS supports importing and exporting zone files via the Azure CLI. Importing zone files via Azure PowerShell or the Azure portal is **not** supported currently. +Azure DNS supports importing and exporting zone files via the Azure CLI and the Azure portal. Azure CLI is a cross-platform command-line tool used for managing Azure services. It's available for Windows, Mac, and Linux from the [Azure downloads page](https://azure.microsoft.com/downloads/). ## Obtain your existing DNS zone file -Before you import a DNS zone file into Azure DNS, you need to obtain a copy of the zone file. The source of this file depends on where the DNS zone is currently hosted. +Before you import a DNS zone file into Azure DNS, you need to obtain a copy of the zone file. The source of this file depends on where the DNS zone is hosted. -* If your DNS zone is currently hosted by a partner service, they'll have a way for you to download the DNS zone file. Partner services include domain registrar, dedicated DNS hosting provider, or an alternative cloud provider. +* If your DNS zone is hosted by a partner service, the service should have a way for you to download the DNS zone file. Partner services include domain registrars, dedicated DNS hosting providers, and alternative cloud providers. * If your DNS zone is hosted on Windows DNS, the default folder for the zone files is **%systemroot%\system32\dns**. The full path to each zone file is also shown on the **General** tab of the DNS console. * If your DNS zone is hosted using BIND, the location of the zone file for each zone gets specified in the BIND configuration file **named.conf**. > [!IMPORTANT]-> If the zone file that you import contains CNAME entries that point to names in another private zone, Azure DNS resolution of the CNAME will fail unless the other zone is also imported, or the CNAME entries are modified. +> If the zone file that you import contains CNAME entries that point to names in another private zone, Azure DNS resolution of the CNAME fails unless the other zone is also imported, or the CNAME entries are modified. ## Import a DNS zone file into Azure DNS -Importing a zone file creates a new zone in Azure DNS if the zone doesn't already exist. If the zone exists, then the record sets in the zone file will be merged with the existing record sets. +Importing a zone file creates a new zone in Azure DNS if the zone doesn't already exist. If the zone exists, then the record sets in the zone file are merged with the existing record sets. ### Merge behavior * By default, the new record sets get merged with the existing record sets. Identical records within a merged record set aren't duplicated. 
* When record sets are merged, the time to live (TTL) of pre-existing record sets is used.-* Start of Authority (SOA) parameters, except `host` are always taken from the imported zone file. The name server record set at the zone apex will also always use the TTL taken from the imported zone file. +* Start of Authority (SOA) parameters, except `host` are always taken from the imported zone file. The name server record set at the zone apex also always uses the TTL taken from the imported zone file. * An imported CNAME record doesn't replace an existing CNAME record with the same name. * When a conflict happens between a CNAME record and another record with the same name of different type, the existing record gets used. Importing a zone file creates a new zone in Azure DNS if the zone doesn't alread The following notes provide more technical details about the zone import process. * The `$TTL` directive is optional, and is supported. When no `$TTL` directive is given, records without an explicit TTL are imported set to a default TTL of 3600 seconds. When two records in the same record set specify different TTLs, the lower value is used.-* The `$ORIGIN` directive is optional, and is supported. When no `$ORIGIN` is set, the default value used is the zone name as specified on the command line including the ending ".". +* The `$ORIGIN` directive is optional, and is supported. When no `$ORIGIN` is set, the default value used is the zone name as specified on the command line, including the ending dot (.). * The `$INCLUDE` and `$GENERATE` directives aren't supported. * These record types are supported: A, AAAA, CAA, CNAME, MX, NS, SOA, SRV, and TXT. * The SOA record is created automatically by Azure DNS when a zone is created. When you import a zone file, all SOA parameters are taken from the zone file *except* the `host` parameter. This parameter uses the value provided by Azure DNS because it needs to refer to the primary name server provided by Azure DNS.-* The name server record set at the zone apex is also created automatically by Azure DNS when the zone is created. Only the TTL of this record set is imported. These records contain the name server names provided by Azure DNS. The record data is not overwritten by the values contained in the imported zone file. +* The name server record set at the zone apex is also created automatically by Azure DNS when the zone is created. Only the TTL of this record set is imported. These records contain the name server names provided by Azure DNS. The record data isn't overwritten by the values contained in the imported zone file. * During Public Preview, Azure DNS supports only single-string TXT records. Multistring TXT records are to be concatenated and truncated to 255 characters. ### CLI format and values Values: * `<zone name>` is the name of the zone. * `<zone file name>` is the path/name of the zone file to be imported. -If a zone with this name doesn't already exist in the resource group, one will be created for you. For an existing zone, the imported record sets will get merged with existing record sets. +If a zone with this name doesn't already exist in the resource group, one is created for you. For an existing zone, the imported record sets are merged with existing record sets. ### Import a zone file To import a zone file for the zone **contoso.com**. az group create --resource-group myresourcegroup -l westeurope ``` -1. 
To import the zone **contoso.com** from the file **contoso.com.txt** into a new DNS zone in the resource group **myresourcegroup**, you'll run the command `az network dns zone import`. +1. To import the zone **contoso.com** from the file **contoso.com.txt** into a new DNS zone in the resource group **myresourcegroup**, run the command `az network dns zone import`. - This command loads the zone file and parses it. The command executes a series of operations on the Azure DNS service to create the zone and all the record sets in the zone. The command will report the progress in the console window along with any errors or warnings. Since record sets are created in series, it may take a few minutes to import a large zone file. + This command loads the zone file and parses it. The command executes a series of operations on the Azure DNS service to create the zone and all the record sets in the zone. The command reports the progress in the console window along with any errors or warnings. Since record sets are created in series, it could take a few minutes to import a large zone file. ```azurecli-interactive az network dns zone import -g myresourcegroup -n contoso.com -f contoso.com.txt az network dns zone export -g myresourcegroup -n contoso.com -f contoso.com.txt ## Next steps * Learn how to [manage record sets and records](./dns-getstarted-cli.md) in your DNS zone.- * Learn how to [delegate your domain to Azure DNS](dns-domain-delegation.md). |
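As an optional check after the CLI import above, the record sets that were created can be listed. This sketch reuses the example resource group and zone names from this row.

```bash
# List the record sets created by the import (example names from this row).
az network dns record-set list \
  --resource-group myresourcegroup \
  --zone-name contoso.com \
  --output table
```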
dns | Private Dns Import Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-import-export.md | Title: Import and export a domain zone file for Azure private DNS - Azure CLI description: Learn how to import and export a DNS zone file to Azure private DNS by using Azure CLI -+ Previously updated : 09/27/2022- Last updated : 10/20/2023+ The following notes provide additional technical details about the zone import p * These record types are supported: A, AAAA, CAA, CNAME, MX, NS, SOA, SRV, and TXT. * The SOA record is created automatically by Azure DNS when a zone is created. When you import a zone file, all SOA parameters are taken from the zone file *except* the `host` parameter. This parameter uses the value provided by Azure DNS. This is because this parameter must refer to the primary name server provided by Azure DNS. * The name server record set at the zone apex is also created automatically by Azure DNS when the zone is created. Only the TTL of this record set is imported. These records contain the name server names provided by Azure DNS. The record data is not overwritten by the values contained in the imported zone file.-* During Public Preview, Azure DNS supports only single-string TXT records. Multistring TXT records will be concatenated and truncated to 255 characters. +* Azure DNS supports only single-string TXT records. Multistring TXT records will be concatenated and truncated to 255 characters. ### CLI format and values |
firewall-manager | Quick Firewall Policy Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-terraform.md | Title: 'Quickstart: Create an Azure Firewall and a firewall policy - Terraform' description: In this quickstart, you deploy an Azure Firewall and a firewall policy using Terraform. -+ Last updated 09/05/2023 Multiple Azure resources are defined in the Terraform code. The following resour ## Next steps > [!div class="nextstepaction"]-> [Azure Firewall Manager policy overview](policy-overview.md) +> [Azure Firewall Manager policy overview](policy-overview.md) |
firewall-manager | Quick Secure Virtual Hub Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-terraform.md | Title: 'Quickstart: Secure virtual hub using Azure Firewall Manager - Terraform' description: In this quickstart, you learn how to secure your virtual hub using Azure Firewall Manager and Terraform. -+ Last updated 09/05/2023 Multiple Azure resources are defined in the Terraform code. The following resour ## Next steps > [!div class="nextstepaction"]-> [Learn about security partner providers](trusted-security-partners.md) +> [Learn about security partner providers](trusted-security-partners.md) |
firewall | Deploy Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-terraform.md | Title: 'Quickstart: Create an Azure Firewall with Availability Zones - Terraform' description: In this quickstart, you deploy Azure Firewall using Terraform. The virtual network has one VNet with three subnets. Two Windows Server virtual machines, a jump box, and a server are deployed. -+ |
firewall | Quick Create Ipgroup Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-terraform.md | Title: 'Quickstart: Create an Azure Firewall and IP Groups - Terraform' description: In this quickstart, you learn how to use Terraform to create an Azure Firewall and IP Groups. -+ |
firewall | Quick Create Multiple Ip Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-bicep.md | For more information about Azure Firewall with multiple public IP addresses, see This Bicep file creates an Azure Firewall with two public IP addresses, along with the necessary resources to support the Azure Firewall. + The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/fw-docs-qs). :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.network/fw-docs-qs/main.bicep"::: |
firewall | Quick Create Multiple Ip Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-template.md | |
firewall | Quick Create Multiple Ip Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-terraform.md | Title: 'Quickstart: Create an Azure Firewall with multiple public IP addresses - Terraform' description: In this quickstart, you learn how to use Terraform to create an Azure Firewall with multiple public IP addresses. -+ |
firewall | Web Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/web-categories.md | -Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. The categories are organized based on severity under Liability, High-Bandwidth, Business use, Productivity loss, General surfing, and Uncategorized. +Web categories let administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. The categories are organized based on severity under Liability, High-Bandwidth, Business use, Productivity loss, General surfing, and Uncategorized. For more information, see [Azure Firewall Premium features](premium-features.md#web-categories). For more information, see [Azure Firewall Premium features](premium-features.md# ||| |Alcohol + tobacco |Sites that contain, promote, or sell alcohol- or tobacco-related products or services.| |Child abuse images |Sites that present or discuss children in abusive or sexual acts.|-|Child inappropriate |Sites that are unsuitable for children, which may contain R-rated or tasteless content, profanity, or adult material.| |Criminal activity|Sites that promote or advise on how to commit illegal or criminal activity, or to avoid detection for such activity. Criminal activity includes murder, building bombs, illegal manipulation of electronic devices, hacking, fraud, and illegal distribution of software.| |Dating + personals |Sites that promote networking for relationships such as dating and marriage, such as matchmaking, online dating, and spousal introduction.| |Gambling |Sites that offer or are related to online gambling, lottery, betting agencies involving chance, and casinos.| For more information, see [Azure Firewall Premium features](premium-features.md# |Illegal software |Sites that illegally distribute software or copyrighted materials such as movies, music, software cracks, illicit serial numbers, illegal license key generators.| |Lingerie + swimsuits|Sites that offer images of models in suggestive costume, with semi-nudity permitted. Includes sites offering lingerie or swimwear.| |Marijuana |Sites that contain information, discussions, or sale of marijuana and associated products or services, including legalizing marijuana and/or using marijuana for medicinal purposes.|-|Nudity | Sites that contain full or partial nudity that are not necessarily overtly sexual in intent.| +|Nudity | Sites that contain full or partial nudity that aren't necessarily overtly sexual in intent.| |Pornography/sexually explicit |Sites that contain explicit sexual content. Includes adult products such as sex toys, CD-ROMs, and videos, adult services such as videoconferencing, escort services, and strip clubs, erotic stories, and textual descriptions of sexual acts. |-|School cheating | Sites that promote unethical practices such as cheating or plagiarism by providing test answers, written essays, research papers, or term papers. | |Self-harm |Sites that promote actions relating to harming oneself, such as suicide, anorexia, bulimia, etc. | |Sex education | Sites relating to sex education, including subjects such as respect for partner, abortion, contraceptives, sexually transmitted diseases, and pregnancy. | |Tasteless |Sites with offensive or tasteless content, including profanity. 
| For more information, see [Azure Firewall Premium features](premium-features.md# |Computers + technology |Sites that contain information such as product reviews, discussions, and news about computers, software, hardware, peripheral, and computer services. | |Education | Sites sponsored by educational institutions and schools of all types including distance education. Includes general educational and reference materials such as dictionaries, encyclopedias, online courses, teaching aids and discussion guides. | |Finance | Sites related to banking, finance, payment or investment, including banks, brokerages, online stock trading, stock quotes, fund management, insurance companies, credit unions, credit card companies, and so on. |-|Forums + newsgroups | Sites for sharing information in the form of newsgroups, forums, bulletin boards. Does not include personal blogs. | +|Forums + newsgroups | Sites for sharing information in the form of newsgroups, forums, bulletin boards. Doesn't include personal blogs. | |Government | Sites run by governmental or military organizations, departments, or agencies, including police departments, fire departments, customs bureaus, emergency services, civil defense, and counterterrorism organizations. | |Health + medicine | Sites containing information pertaining to health, healthcare services, fitness and well-being, including information about medical equipment, hospitals, drugstores, nursing, medicine, procedures, prescription medications, etc. | |Information security | Sites that provide legitimate information about data protection, including newly discovered vulnerabilities and how to block them. | |Job search | Sites containing job listings, career information, assistance with job searches (such as resume writing, interviewing tips, etc.), employment agencies or head hunters. | |News | Sites covering news and current events such as newspapers, newswire services, personalized news services, broadcasting sites, and magazines. |-|Non-profits + NGOs | Sites devoted to clubs, communities, unions, and non-profit organizations. Many of these groups exist for educational or charitable purposes. | +|Nonprofits + NGOs | Sites devoted to clubs, communities, unions, and non-profit organizations. Many of these groups exist for educational or charitable purposes. | |Personal sites | Sites about or hosted by personal individuals, including those hosted on commercial sites such as Blogger, AOL, etc. |-|Private IP addresses | Sites that are private IP addresses as defined in RFC 1918, that is, hosts that do not require access to hosts in other enterprises (or require limited access) and whose IP address may be ambiguous between enterprises but are well-defined within a certain enterprise. | +|Private IP addresses | Sites that are private IP addresses as defined in RFC 1918, that is, hosts that don't require access to hosts in other enterprises (or require limited access) and whose IP address might be ambiguous between enterprises but are well-defined within a certain enterprise. | |Professional networking | Sites that enable professional networking for online communities. | |Search engines + portals |Sites enabling the searching of the Web, newsgroups, images, directories, and other online content. Includes portal and directory sites such as white/yellow pages. |-|Translators | Sites that translate Web pages or phrases from one language to another. 
These sites bypass the proxy server, presenting the risk that unauthorized content may be accessed, similar to using an anonymizer. | +|Translators | Sites that translate Web pages or phrases from one language to another. These sites bypass the proxy server, presenting the risk that unauthorized content might be accessed, similar to using an anonymizer. | |Web repository + storage | Web pages including collections of shareware, freeware, open source, and other software downloads. | |Web-based email | Sites that enable users to send and receive email through a web accessible email account. | | | | For more information, see [Azure Firewall Premium features](premium-features.md# ||| |Advertisements + popups | Sites that provide advertising graphics or other ad content files that appear on Web pages. | |Chat | Sites that enable web-based exchange of real-time messages through chat services or chat rooms. |-|Cults | Sites relating to non-traditional religious practice typically known as "cults," that is, considered to be false, unorthodox, extremist, or coercive, with members often living under the direction of a charismatic leader. | +|Cults | Sites relating to nontraditional religious practice typically known as "cults," that is, considered to be false, unorthodox, extremist, or coercive, with members often living under the direction of a charismatic leader. | |Games | Sites relating to computer or other games, information about game producers, or how to obtain cheat codes. Game-related publication sites. | |Instant messaging | Sites that enable logging in to instant messaging services such as ICQ, AOL Instant Messenger, IRC, MSN, Jabber, Yahoo Messenger, and the like. | |Shopping | Sites for online shopping, catalogs, online ordering, auctions, classified ads. Excludes shopping for products and services exclusively covered by another category such as health & medicine. | For more information, see [Azure Firewall Premium features](premium-features.md# ||| |Arts | Sites with artistic content or relating to artistic institutions such as theaters, museums, galleries, dance companies, photography, and digital graphic resources. | |Fashion + Beauty | Sites concerning fashion, jewelry, glamour, beauty, modeling, cosmetics, or related products or services. Includes product reviews, comparisons, and general consumer information. |-|General | Sites that do not clearly fall into other categories, for example, blank web pages. | -|Greeting cards |Sites that allow people to send and receive greeting cards and postcards. | +|General | Sites that don't clearly fall into other categories, for example, blank web pages. | |Leisure + recreation | Sites relating to recreational activities and hobbies including zoos, public recreation centers, pools, amusement parks, and hobbies such as gardening, literature, arts & crafts, home improvement, home décor, family, etc. | |Nature + conservation | Sites with information related to environmental issues, sustainable living, ecology, nature, and the environment. | |Politics | Sites that promote political parties or political advocacy, or provide information about political parties, interest groups, elections, legislation, or lobbying. Also includes sites that offer legal information and advice. | For more information, see [Azure Firewall Premium features](premium-features.md# |Sports | Sites relating to sports teams, fan clubs, scores, and sports news. Relates to all sports, whether professional or recreational. 
| |Transportation | Sites that include information about motor vehicles such as cars, motorcycles, boats, trucks, RVs and the like, including online purchase sites. Includes manufacturer sites, dealerships, review sites, pricing, enthusiasts' clubs, and public transportation, etc. | |Travel | Sites that provide travel and tourism information or online booking or travel services such as airlines, accommodations, car rentals. Includes regional or city information sites. |-|Uncategorized |Sites that have not been categorized, such as new websites, personal sites, and so on. | +|Uncategorized |Sites that haven't been categorized, such as new websites, personal sites, and so on. | | | | ## Next steps |
iot-central | Overview Iot Central Developer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md | A gateway device manages one or more downstream devices that connect to your IoT As you connect a device to IoT Central, it goes through the following stages: _registered_, _provisioned_, and _connected_. -To learn how to monitor the status of a device, see [Monitor your devices](howto-manage-devices-individually.md#monitor-your-devices). +- To learn why devices should always use the Device Provisioning Service to connect to IoT Central, see [Device implementation and best practices for IoT Central](concepts-device-implementation.md). ++- To learn how to monitor the status of a device, see [Monitor your devices](howto-manage-devices-individually.md#monitor-your-devices). ### Register a device |
iot-central | Troubleshoot Data Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-data-export.md | Before you configure or enable the export destination, make sure that you comple - Configure any virtual networks, private endpoints, and firewall policies. +> [!NOTE] +> If you're using a managed identity to authorize the connection to an export destination, IoT Central doesn't export data from simulated devices. + To learn more, see [Export data](howto-export-data.md?tabs=managed-identity). ## Next steps |
kubernetes-fleet | Architectural Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/architectural-overview.md | Once a cluster is joined to a fleet resource, a MemberCluster custom resource is The member clusters can be viewed by running the following command: ```bash-kubectl get crd memberclusters.fleet.azure.com -o yaml +kubectl get memberclusters ``` The complete specification of the `MemberCluster` custom resource can be viewed by running the following command: ```bash-kubectl get crd memberclusters -o yaml +kubectl get crd memberclusters.fleet.azure.com -o yaml ``` The following labels are added automatically to all member clusters, which can then be used for target cluster selection in resource propagation. |
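As a quick way to see those automatically added labels, you can list the member clusters together with their labels (a hedged example; it assumes your kubeconfig points at the fleet's hub cluster):

```bash
# Show each MemberCluster together with its labels, including the
# automatically added fleet labels used for target cluster selection.
kubectl get memberclusters --show-labels
```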
load-testing | How To Move Between Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-move-between-regions.md | Refer to the test configuration in the `config.yaml` files you downloaded earlie If you invoke the load tests in a CI/CD workflow, update the `loadTestResource` parameter in the CI/CD pipeline definition to match the new Azure load testing resource name. > [!NOTE]-> If you have configured any of your load test with secrets from Azure Key Vault, make sure to [grant the new resource access to the Key Vault](./how-to-use-a-managed-identity.md?tabs=azure-portal#grant-access-to-your-azure-key-vault). +> If you have configured any of your load tests with secrets or certificates from Azure Key Vault, make sure to [grant the new resource access to the Key Vault](./how-to-parameterize-load-tests.md#grant-access-to-your-azure-key-vault). ## Clean up source resources |
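For illustration, a minimal Azure Pipelines step after the move might look like the following sketch. The task and input names reflect the Azure Load Testing task; the service connection, resource, and resource group names are placeholders.

```yaml
# Hypothetical pipeline step: point the task at the moved resource.
- task: AzureLoadTest@1
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection
    loadTestConfigFile: 'config.yaml'            # test configuration you downloaded earlier
    loadTestResource: 'my-new-loadtest-resource' # name of the resource in the target region
    resourceGroup: 'my-target-resource-group'    # resource group in the target region
```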
load-testing | How To Parameterize Load Tests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md | To use secrets with Azure Load Testing, you perform the following steps: ### <a name="akv_secrets"></a> Use Azure Key Vault to store load test secrets -You can use Azure Key Vault to pass secret values to your test script in Azure Load Testing. You'll add a reference to the secret in the Azure Load Testing configuration. Azure Load Testing then uses this reference to retrieve the secret value in the Apache JMeter script. +You can use Azure Key Vault to pass secret values to your test script in Azure Load Testing. You add a reference to the secret in the Azure Load Testing configuration. Azure Load Testing then uses this reference to retrieve the secret value in the Apache JMeter script. -You'll also need to grant Azure Load Testing access to your Azure key vault to retrieve the secret value. +You also need to grant Azure Load Testing access to your Azure key vault to retrieve the secret value. > [!NOTE] > If you run a load test as part of your CI/CD process, you might also use the related secret store. Skip to [Use the CI/CD secret store](#cicd_secrets). +#### Create a secret in Azure Key Vault + 1. [Add the secret value to your key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault), if you haven't already done so. > [!IMPORTANT] > If you restricted access to your Azure key vault by a firewall or virtual networking, follow these steps to [grant access to trusted Azure services](/azure/key-vault/general/overview-vnet-service-endpoints#grant-access-to-trusted-azure-services). -1. Retrieve the key vault **secret identifier** for your secret. You'll use this secret identifier to configure your load test. +1. Retrieve the key vault **secret identifier** for your secret. You use this secret identifier to configure your load test. :::image type="content" source="media/how-to-parameterize-load-tests/key-vault-secret.png" alt-text="Screenshot that shows the details of a secret in an Azure key vault."::: The **secret identifier** is the full URI of the secret in the Azure key vault. Optionally, you can also include a version number. For example, `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/abcdef01-2345-6789-0abc-def012345678`. -1. Grant your Azure Load Testing resource access to the key vault. - - To retrieve the secret from your Azure key vault, you need to give read permission to your Azure Load Testing resource. To enable this, you need to first specify an identity for your load testing resource. Azure Load Testing can use a system-assigned or user-assigned identity. - - To provide Azure Load Testing access to your key vault, see [Use managed identities for Azure Load Testing](how-to-use-a-managed-identity.md). +#### Add the secret to your load test 1. Reference the secret in the load test configuration. You'll also need to grant Azure Load Testing access to your Azure key vault to r * In the Azure portal, select your load test, select **Configure**, select the **Parameters** tab, and then configure the **Key Vault reference identity**. 
- :::image type="content" source="media/how-to-parameterize-load-tests/key-vault-reference-identity.png" alt-text="Screenshot that shows how to select key vault reference identity."::: + :::image type="content" source="media/how-to-parameterize-load-tests/key-vault-reference-identity.png" alt-text="Screenshot that shows how to select key vault reference identity."::: * If you're configuring a CI/CD workflow and use Azure Key Vault, you can specify the reference identity in the YAML configuration file by using the `keyVaultReferenceIdentity` property. For more information about the syntax, see the [Test configuration YAML reference](./reference-test-config-yaml.md). -You've now specified a secret in Azure Key Vault and configured your Azure Load Testing resource to retrieve its value. You can now move to [Use secrets in Apache JMeter](#jmeter_secrets). +#### Grant access to your Azure key vault -### <a name="cicd_secrets"></a> Use the CI/CD secret store to save load test secrets -You can use Azure Key Vault to pass secret values to your test script in Azure Load Testing. You'll add a reference to the secret in the Azure Load Testing configuration. Azure Load Testing then uses this reference to retrieve the secret value in the Apache JMeter script. +Now that you've added a secret in Azure Key Vault and configured a secret for your load test, you can move to [Use secrets in Apache JMeter](#jmeter_secrets). -You'll also need to grant Azure Load Testing access to your Azure key vault to retrieve the secret value. +### <a name="cicd_secrets"></a> Use the CI/CD secret store to save load test secrets If you're using Azure Load Testing in your CI/CD workflow, you can also use the associated secret store. For example, you can use [GitHub repository secrets](https://docs.github.com/actions/security-guides/encrypted-secrets), or [secret variables in Azure Pipelines](/azure/devops/pipelines/process/variables?view=azure-devopsd&tabs=yaml%2Cbatch#secret-variables&preserve-view=true). -You'll first add a secret to the CI/CD secret store. In the CI/CD workflow you'll then pass the secret value to the Azure Load Testing task/action. - > [!NOTE] > If you're already using a key vault, you might also use it to store the load test secrets. Skip to [Use Azure Key Vault](#akv_secrets). +To use secrets in the CI/CD secret store and pass them to your load test in CI/CD: + 1. Add the secret value to the CI/CD secret store, if it doesn't exist yet. In Azure Pipelines, you can edit the pipeline and [add a variable](/azure/devops/pipelines/process/variables?view=azure-devopsd&tabs=yaml%2Cbatch#secret-variables&preserve-view=true). You've now specified a secret in the CI/CD secret store and passed a reference t ### <a name="jmeter_secrets"></a> Use secrets in Apache JMeter -In this section, you'll update the Apache JMeter script to use the secret that you specified earlier. +Next, you update the Apache JMeter script to use the secret that you specified earlier. You first create a user-defined variable that retrieves the secret value. Then, you can use this variable in your test (for example, to pass an API token in an HTTP request header). You first define a user-defined variable that reads the environment variable, an 1. Create a user-defined variable in your JMX file, and assign the environment variable's value to it by using the `System.getenv` function. - The `System.getenv("<my-variable-name>")` function takes the environment variable name as an argument. You'll use this same name when you configure the load test.
+ The `System.getenv("<my-variable-name>")` function takes the environment variable name as an argument. You use this same name when you configure the load test. You can create a user-defined variable by using the Apache JMeter IDE, as shown in the following image: The following YAML snippet shows an Azure Pipelines example: ### Does the Azure Load Testing service store my secret values? -No. The Azure Load Testing service doesn't store the values of secrets. When you use a key vault secret URI, the service stores only the secret URI, and it fetches the value of the secret for each test run. If you provide the value of secrets in a CI/CD workflow, the secret values aren't available after the test run. You'll provide these values for each test run. +No. The Azure Load Testing service doesn't store the values of secrets. When you use a key vault secret URI, the service stores only the secret URI, and it fetches the value of the secret for each test run. If you provide the value of secrets in a CI/CD workflow, the secret values aren't available after the test run. You provide these values for each test run. ### What happens if I have parameters in both my YAML configuration file and the CI/CD workflow? -If a parameter exists in both the YAML configuration file and the Azure Load Testing action or Azure Pipelines task, the value from the CI/CD workflow will be used for the test run. +If a parameter exists in both the YAML configuration file and the Azure Load Testing action or Azure Pipelines task, the value from the CI/CD workflow is used for the test run. ### I created and ran a test from my CI/CD workflow by passing parameters using the Azure Load Testing task or action. Can I run this test from the Azure portal with the same parameters? -The values of the parameters aren't stored when they're passed from the CI/CD workflow. You'll have to provide the parameter values again when you run the test from the Azure portal. You'll get a prompt to enter the missing values. For secret values, you'll enter the key vault secret URI. The values that you enter at the test run or rerun page are valid only for that test run. For making changes at the test level, go to **Configure Test** and enter your parameter values. +The values of the parameters aren't stored when they're passed from the CI/CD workflow. You have to provide the parameter values again when you run the test from the Azure portal. You get a prompt to enter the missing values. For secret values, you enter the key vault secret URI. The values that you enter at the test run or rerun page are valid only for that test run. For making changes at the test level, go to **Configure Test** and enter your parameter values. ## Next steps |
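To tie the pieces together, a minimal test configuration YAML that references both a Key Vault secret and a reference identity might look like the following sketch. The secret name, vault URI, and identity resource ID are placeholders; the property names follow the test configuration YAML reference.

```yaml
testName: SampleTest
testPlan: SampleApp.jmx
engineInstances: 1
# Placeholder user-assigned identity used to read the secret from Key Vault.
keyVaultReferenceIdentity: /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
secrets:
  # Hypothetical secret; the value is the Key Vault secret identifier.
  - name: appToken
    value: https://myvault.vault.azure.net/secrets/appToken/
```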
load-testing | How To Test Secured Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-secured-endpoints.md | The flow for authenticating with client certificates is: To avoid storing, and disclosing, the client certificate alongside the JMeter script, you store the certificate in Azure Key Vault. -1. Follow the steps in [Import a certificate](/azure/key-vault/certificates/tutorial-import-certificate) to store your certificate in Azure Key Vault. +Follow the steps in [Import a certificate](/azure/key-vault/certificates/tutorial-import-certificate) to store your certificate in Azure Key Vault. - > [!IMPORTANT] - > Azure Load Testing only supports PKCS12 certificates. Upload the client certificate in PFX file format. +> [!IMPORTANT] +> Azure Load Testing only supports PKCS12 certificates. Upload the client certificate in PFX file format. -1. Verify that your load testing resource has permissions to retrieve the certificate from your key vault. +### Grant access to your Azure key vault - Azure Load Testing retrieves the certificate as a *secret* to ensure that the private key for the certificate is available. - - [Assign the Get secret permission to your load testing resource](./how-to-use-a-managed-identity.md#grant-access-to-your-azure-key-vault) in Azure Key Vault. ### Reference the certificate in the load test configuration |
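For illustration, referencing the imported certificate in the test configuration YAML might look like the following sketch (the certificate name and Key Vault certificate URI are placeholders):

```yaml
certificates:
  # Hypothetical client certificate stored in Azure Key Vault (PFX/PKCS12).
  - name: my-client-certificate
    value: https://myvault.vault.azure.net/certificates/my-client-certificate
```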
load-testing | How To Use A Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-a-managed-identity.md | Title: Use managed identity to access Azure key vault + Title: Use managed identities for Azure Load Testing -description: Learn how to enable managed identity for Azure Load Testing and use it to read secrets from your Azure key vault. +description: Learn how to enable a managed identity for Azure Load Testing. You can use managed identities for reading secrets or certificates from Azure Key Vault in your JMeter test script. Previously updated : 10/20/2022 Last updated : 10/19/2023 # Use managed identities for Azure Load Testing -This article shows how to create a managed identity for Azure Load Testing. You can use a managed identity to authenticate with and read secrets from Azure Key Vault. +This article shows how to create a managed identity for Azure Load Testing. You can use a managed identity to securely access other Azure resources. For example, you use a managed identity to read secrets or certificates from Azure Key Vault in your load test. A managed identity from Microsoft Entra ID allows your load testing resource to easily access other Microsoft Entra protected resources, such as Azure Key Vault. The identity is managed by the Azure platform and doesn't require you to manage or rotate any secrets. For more information about managed identities in Microsoft Entra ID, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview). To set up a managed identity in the portal, you first create an Azure load testi 1. On the left pane, select **Identity**. -1. Select the **System assigned** tab. --1. Switch the **Status** to **On**, and then select **Save**. +1. In the **System assigned** tab, switch **Status** to **On**, and then select **Save**. :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity.png" alt-text="Screenshot that shows how to assign a system-assigned managed identity for Azure Load Testing in the Azure portal."::: 1. On the confirmation window, select **Yes** to confirm the assignment of the managed identity. -1. After assigning the managed identity finishes, the page will show the **Object ID** of the managed identity, and let you assign permissions to it. +1. After this operation completes, the page shows the **Object ID** of the managed identity, and lets you assign permissions to it. :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity-completed.png" alt-text="Screenshot that shows the system-assigned managed identity information for a load testing resource in the Azure portal."::: -You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault). +# [Azure CLI](#tab/cli) ++Run the `az load update` command with `--identity-type SystemAssigned` to add a system-assigned identity to your load testing resource: ++```azurecli-interactive +az load update --name <load-testing-resource-name> --resource-group <group-name> --identity-type SystemAssigned +``` # [ARM template](#tab/arm) After the resource creation finishes, the following properties are configured fo The `tenantId` property identifies which Microsoft Entra tenant the managed identity belongs to. The `principalId` is a unique identifier for the resource's new identity. 
Within Microsoft Entra ID, the service principal has the same name as the Azure load testing resource. -You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault). - ## Assign a user-assigned identity to a load testing resource You can add multiple user-assigned managed identities to your resource. For exam :::image type="content" source="media/how-to-use-a-managed-identity/user-assigned-managed-identity.png" alt-text="Screenshot that shows how to turn on user-assigned managed identity for Azure Load Testing."::: -You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault). +# [Azure CLI](#tab/cli) ++1. Create a user-assigned identity. ++ ```azurecli-interactive + az identity create --resource-group <group-name> --name <identity-name> + ``` ++1. Run the `az load update` command with `--identity-type UserAssigned` to add a user-assigned identity to your load testing resource: ++ ```azurecli-interactive + az load update --name <load-testing-resource-name> --resource-group <group-name> --identity-type UserAssigned --user-assigned <identity-id> + ``` # [ARM template](#tab/arm) You can create an Azure load testing resource by using an ARM template and the r The `principalId` is a unique identifier for the identity that's used for Microsoft Entra administration. The `clientId` is a unique identifier for the resource's new identity that's used for specifying which identity to use during runtime calls. -You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault). - -## Grant access to your Azure key vault --Using managed identities for Azure resources, your Azure load testing resource can access tokens that enable authentication to your Azure key vault. Grant the managed identity access by assigning the [appropriate role](/azure/role-based-access-control/built-in-roles) to the managed identity. --To grant your Azure load testing resource permissions to read secrets from your Azure key vault: ---1. In the [Azure portal](https://portal.azure.com/), go to your Azure key vault resource. -- If you don't have a key vault, follow the instructions in [Azure Key Vault quickstart](/azure/key-vault/secrets/quick-create-cli) to create one. --1. On the left pane, under **Settings**, select **Access Policies**, and then **Add Access Policy**. --1. In the **Secret permissions** dropdown list, select **Get**. -- :::image type="content" source="media/how-to-use-a-managed-identity/key-vault-add-policy.png" alt-text="Screenshot that shows how to add an access policy to your Azure key vault."::: --1. Select **Select principal**, and then select the system-assigned or user-assigned principal for your Azure load testing resource. -- If you're using a system-assigned managed identity, the name matches that of your Azure load testing resource. --1. Select **Add**. +## Configure target resource -You've now granted access to your Azure load testing resource to read the secret values from your Azure key vault. +You might need to configure the target resource to allow access from your load testing resource. For example, if you [read a secret or certificate from Azure Key Vault](./how-to-parameterize-load-tests.md), or if you [use customer-managed keys for encryption](./how-to-configure-customer-managed-keys.md), you must also add an access policy that includes the managed identity of your resource. 
Otherwise, your calls to Azure Key Vault are rejected, even if you use a valid token. -## Next steps +## Related content -* Learn how to [Parameterize a load test with secrets](./how-to-parameterize-load-tests.md). -* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md). +* [Use secrets or certificates in your load test](./how-to-parameterize-load-tests.md) +* [Configure customer-managed keys for encryption](how-to-configure-customer-managed-keys.md) * [What are managed identities for Azure resources?](/azure/active-directory/managed-identities-azure-resources/overview) |
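For example, if the target key vault uses access policies, a sketch along the following lines grants the load testing resource's managed identity permission to read secrets. The vault name is a placeholder, and `<principal-id>` is the Object ID shown on the resource's Identity page.

```azurecli-interactive
az keyvault set-policy --name <key-vault-name> --object-id <principal-id> --secret-permissions get
```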
load-testing | How To Use Jmeter Plugins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-jmeter-plugins.md | -Azure Load Testing lets you use plugins from https://jmeter-plugins.org, or upload a Java archive (JAR) file with your own plugin code. You can use multiple plugins in a load test. +When you use a JMeter plugin in your test script, the plugin needs to be uploaded onto the test engine instances in Azure Load Testing. You have two options for using JMeter plugins with Azure Load Testing: -Azure Load Testing preinstalls plugins from https://jmeter-plugins.org on the load test engine instances. For other plugins, you add the plugin JAR file to the load test configuration. +- **Plugins from https://jmeter-plugins.org**. Azure Load Testing automatically preinstalls plugins from https://jmeter-plugins.org. ++- **Other plugins**. When you create the load test, you need to add the JMeter plugin Java archive (JAR) file to your load test configuration. Azure Load Testing uploads the plugin JAR file onto the test engine instances when the load test starts. ++> [!NOTE] +> If you use your own plugin code, we recommend that you build the executable JAR using Java 17. ## Prerequisites Azure Load Testing preinstalls plugins from https://jmeter-plugins.org on the lo * An Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md). * (Optional) Apache JMeter GUI to author your test script. To install Apache JMeter, see [Apache JMeter Getting Started](https://jmeter.apache.org/usermanual/get-started.html). -## Reference a JMeter plugin in your test script +## Reference the JMeter plugin in your test script -To reference a JMeter plugin in your JMeter script by using the JMeter GUI, first install the plugin on your local JMeter instance in either of two ways: +To use a JMeter plugin in your load test, you have to author the JMX test script and reference the plugin. There are no special instructions for referencing plugins in your script when you use Azure Load Testing. -- Use the [Plugins Manager](https://jmeter-plugins.org/wiki/PluginsManager/), if the plugin is available.-- To use your own plugin code, copy the plugin JAR file to the `lib/ext` folder of your local JMeter installation.+Follow these steps to use the JMeter GUI to install and reference the plugin in your test script: -After you install the plugin, the plugin functionality appears in the Apache JMeter user interface. You can now reference it in your test script. The following screenshot shows an example of how to use an *Example Sampler* plugin: +1. Install the JMeter plugin on your local JMeter instance in either of two ways: + - Use the [Plugins Manager](https://jmeter-plugins.org/wiki/PluginsManager/), if the plugin is available. -> [!NOTE] -> You can also reference the JMeter plugin directly by editing the JMX file. In this case you don't have to install the plugin locally. + - To use your own plugin code, copy the plugin JAR file to the `lib/ext` folder of your local JMeter installation. -## Upload the JMeter plugin JAR file to your load test + After you install the plugin, the plugin functionality appears in the Apache JMeter user interface. -To use your own plugins during the load test, you have to upload the plugin JAR file to your load test. Azure Load Testing then installs your plugin on the load test engines. +1. You can now reference the plugin functionality in your test script. 
-You can add a plugin JAR file when you create a new load test, or anytime when you update an existing test. + The following screenshot shows an example of how to use an *Example Sampler* plugin. Depending on the type of plugin, you might have different options in the user interface. -For plugins from https://jmeter-plugins.org, you don't need to upload the JAR file. Azure Load Testing automatically configures these plugins for you. + :::image type="content" source="media/how-to-use-jmeter-plugins/jmeter-add-custom-sampler.png" alt-text="Screenshot that shows how to add a custom sampler to a test plan by using the JMeter user interface."::: > [!NOTE]-> We recommend that you build the executable JAR using Java 17. +> You can also reference the JMeter plugin directly by editing the JMX file. In this case you don't have to install the plugin locally. ++## Create a load test that uses JMeter plugins ++If you only reference plugins from https://jmeter-plugins.org, you can [create a load test by uploading your JMX test script](./how-to-create-and-run-load-test-with-jmeter-script.md). Azure Load Testing preinstalls the plugin JAR files onto the test engine instances. ++If you use your own plugins in your test script, you have to add the plugin JAR file to your load test configuration. Azure Load Testing then installs your plugin on the load test engines when the test starts. ++You can add a plugin JAR file when you create a new load test, or anytime when you update an existing test. # [Azure portal](#tab/portal) Follow these steps to upload a JAR file by using the Azure portal: When the test runs, Azure Load Testing deploys the plugin on each test engine instance. -# [GitHub Actions](#tab/github) --When you run a load test within your CI/CD workflow, you reference the plugin JAR file in the `configurationFiles` setting in the [test configuration YAML file](./reference-test-config-yaml.md). --To reference the plugin JAR file in the test configuration YAML file: --1. Add the plugin JAR file to the source control repository, which contains your load test configuration. --1. Open the YAML test configuration file in Visual Studio Code or your editor of choice. --1. Add the JAR file to the `configurationFiles` setting. You can use wildcards or specify multiple individual files. -- ```yaml - testName: MyTest - testPlan: SampleApp.jmx - description: Run a load test for my sample web app - engineInstances: 1 - configurationFiles: - - examplesampler-1.0.jar - ``` -- > [!NOTE] - > If you store the JAR file in a separate folder, specify the file with a relative path name. For more information, see the [Test configuration YAML syntax](./reference-test-config-yaml.md). --1. Save the YAML configuration file and commit it to your source control repository. - - The next time the CI/CD workflow runs, it will use the updated configuration and Azure Load Testing will deploy the plugin on each test engine instance. --# [Azure Pipelines](#tab/pipelines) +# [GitHub Actions / Azure Pipelines](#tab/github+pipelines) When you run a load test within your CI/CD workflow, you reference the plugin JAR file in the `configurationFiles` setting in the [test configuration YAML file](./reference-test-config-yaml.md). To reference the plugin JAR file in the test configuration YAML file: The next time the CI/CD workflow runs, it will use the updated configuration and Azure Load Testing will deploy the plugin on each test engine instance. 
--## Next steps --- Learn how to [Download JMeter logs to troubleshoot a load test](./how-to-troubleshoot-failing-test.md).-- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).-- Learn how to [Automate load tests with CI/CD](./tutorial-identify-performance-regression-with-cicd.md). |
machine-learning | Concept Plan Manage Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md | Understand that the costs for Azure Machine Learning are only a portion of the m For more information on optimizing costs, see [how to manage and optimize cost in Azure Machine Learning](how-to-manage-optimize-cost.md). -> [!IMPORTANT] -> Items marked (preview) in this article are currently in public preview. -> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - ## Prerequisites Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). When you create resources for an Azure Machine Learning workspace, resources for * [Application Insights](https://azure.microsoft.com/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. -* [Enable idle shutdown (preview)](how-to-create-compute-instance.md#configure-idle-shutdown) to save on cost when the VM has been idle for a specified time period. -* Or [set up a schedule](how-to-create-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it. +* [Enable idle shutdown](how-to-create-compute-instance.md#configure-idle-shutdown) to save on cost when the VM has been idle for a specified time period. +* Or [set up a schedule](how-to-create-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance to save cost when you aren't planning to use it. ### Costs might accrue before resource deletion |
machine-learning | How To Authenticate Online Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md | Access to retrieve the key or token for an online endpoint is restricted by Azur For more information on using Azure RBAC with Azure Machine Learning, see [Manage access to Azure Machine Learning](how-to-assign-roles.md). +For information on how to retrieve the token via the REST API, see [Invoke the endpoint to score data with your model](how-to-deploy-with-rest.md#invoke-the-endpoint-to-score-data-with-your-model). + # [Azure CLI](#tab/azure-cli) To get the key or token, use [az ml online-endpoint get-credentials](/cli/azure/ml/online-endpoint#az-ml-online-endpoint-get-credentials). This command returns a JSON document that contains the key or token. |
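For illustration, a call to retrieve the credentials might look like the following (the endpoint, resource group, and workspace names are placeholders):

```azurecli
az ml online-endpoint get-credentials --name my-endpoint --resource-group my-rg --workspace-name my-workspace
```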
machine-learning | How To High Availability Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-high-availability-machine-learning.md | Azure Machine Learning builds on top of other services. Some services can be con | Key Vault | Microsoft | Use the same Key Vault instance with the Azure Machine Learning workspace and resources in both regions. Key Vault automatically fails over to a secondary region. For more information, see [Azure Key Vault availability and redundancy](../../key-vault/general/disaster-recovery-guidance.md).| | Container Registry | Microsoft | Configure the Container Registry instance to geo-replicate registries to the paired region for Azure Machine Learning. Use the same instance for both workspace instances. For more information, see [Geo-replication in Azure Container Registry](../../container-registry/container-registry-geo-replication.md). | | Storage Account | You | Azure Machine Learning does not support __default storage-account__ failover using geo-redundant storage (GRS), geo-zone-redundant storage (GZRS), read-access geo-redundant storage (RA-GRS), or read-access geo-zone-redundant storage (RA-GZRS). Create a separate storage account for the default storage of each workspace. </br>Create separate storage accounts or services for other data storage. For more information, see [Azure Storage redundancy](../../storage/common/storage-redundancy.md). |-| Application Insights | You | Create Application Insights for the workspace in both regions. To adjust the data-retention period and details, see [Data collection, retention, and storage in Application Insights](../../azure-monitor/app/data-retention-privacy.md#how-long-is-the-data-kept). | +| Application Insights | You | Create Application Insights for the workspace in both regions. To adjust the data-retention period and details, see [Data collection, retention, and storage in Application Insights](/previous-versions/azure/azure-monitor/app/data-retention-privacy#how-long-is-the-data-kept). | To enable fast recovery and restart in the secondary region, we recommend the following development practices: |
managed-grafana | Known Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md | Azure Managed Grafana has the following known limitations: | Team sync with Microsoft Entra ID | ❌ | ❌ | | Enterprise plugins | ❌ | ❌ | -* The *Current User* authentication option available for Azure Data Explorer triggers the following limitation. Grafana offers some automated features such as alerts and reporting, that are expected to run in the background periodically. The Current User authentication method relies on a user being logged in, in an interactive session, to connect Azure Data Explorer to the database. Therefore, when this authentication method is used and no user is logged in, automated tasks can't run in the background. To leverage automated tasks for Azure Data Explorer, we recommend setting up another Azure Data Explorer data source using another authentication method. - ## Next steps > [!div class="nextstepaction"] |
mysql | Concepts Troubleshooting Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-troubleshooting-best-practices.md | Last updated 07/22/2022 [!INCLUDE[azure-database-for-mysql-single-server-deprecation](../includes/azure-database-for-mysql-single-server-deprecation.md)] -Use the sections below to keep your MySQL databases running smoothly and use this information as guiding principles for ensuring that the schemas are designed optimally and provide the best performance for your applications. +Use the following sections to keep your MySQL databases running smoothly and use this information as guiding principles for ensuring that the schemas are designed optimally and provide the best performance for your applications. ## Check the number of indexes -In a busy database environment, you may observe high I/O usage, which can be an indicator of poor data access patterns. Unused indexes can negatively impact performance as they consume disk space and cache, and slow down write operations (INSERT / DELETE / UPDATE). Unused indexes unnecessarily consume additional storage space and increase the backup size. +In a busy database environment, you might observe high I/O usage, which can be an indicator of poor data access patterns. Unused indexes can have a negative impact on performance as they consume disk space and cache, and slow down write operations (INSERT / DELETE / UPDATE). Unused indexes unnecessarily consume more storage space and increase the backup size. -Before you remove any index, be sure to gather enough information to verify that it's no longer in use. This can help you avoid inadvertently removing an index that is perhaps critical for a query that runs only quarterly or annually. Also, be sure to consider whether an index is used to enforce uniqueness or ordering. +Before you remove any index, be sure to gather enough information to verify that it's no longer in use. This verification can help you avoid inadvertently removing an index that is critical for a query that runs only quarterly or annually. Also, be sure to consider whether an index is used to enforce uniqueness or ordering. > [!NOTE] > Remember to review indexes periodically and perform any necessary updates based on any modifications to the table data. on (statistics.table_name = tables.table_name and statistics.table_schema = '<YOUR DATABASE NAME HERE>' and ((tables.table_rows / statistics.cardinality) > 1000));` +## List the busiest indexes on the server ++The output from the following query provides information about the most used indexes across all tables and schemas on the database server. This information is helpful in identifying the ratio of writes to reads against each index and the latency numbers for reads as well as individual write operations, which can indicate that further tuning is required against the underlying table and dependent queries. 
``` +SELECT +object_schema AS table_schema, +object_name AS table_name, +index_name, count_star AS all_accesses, +count_read, +count_write, +Concat(Truncate(count_read / count_star * 100, 0), ':', +Truncate(count_write / count_star * 100, 0)) AS read_write_ratio, + count_fetch AS rows_selected , + count_insert AS rows_inserted, + count_update AS rows_updated, + count_delete AS rows_deleted, + Concat(Round(sum_timer_wait / 1000000000000, 2), ' s') AS total_latency , + Concat(Round(sum_timer_fetch / 1000000000000, 2), ' s') AS select_latency, + Concat(Round(sum_timer_insert / 1000000000000, 2), ' s') AS insert_latency, +Concat(Round(sum_timer_update / 1000000000000, 2), ' s') AS update_latency, + Concat(Round(sum_timer_delete / 1000000000000, 2), ' s') AS delete_latency +FROM performance_schema.table_io_waits_summary_by_index_usage +WHERE index_name IS NOT NULL AND count_star > 0 +ORDER BY sum_timer_wait DESC +``` + ## Review the primary key design -Azure Database for MySQL uses the InnoDB storage engine for all non-temporary tables. With InnoDB, data is stored within a clustered index using a B-Tree structure. The table is physically organized based on primary key values, which means that rows are stored in the primary key order. +Azure Database for MySQL uses the InnoDB storage engine for all nontemporary tables. With InnoDB, data is stored within a clustered index using a B-Tree structure. The table is physically organized based on primary key values, which means that rows are stored in the primary key order. + Each secondary key entry in an InnoDB table contains a pointer to the primary key value in which the data is stored. In other words, a secondary index entry contains a copy of the primary key value to which the entry is pointing. Therefore, primary key choices have a direct effect on the amount of storage overhead in your tables. -If a key is derived from actual data (e.g., username, email, SSN, etc.), it's called a *natural key*. If a key is artificial and not derived from data (e.g., an auto-incremented integer), it's referred to as a *synthetic key* or *surrogate key*. +If a key is derived from actual data (e.g., username, email, SSN, etc.), it's called a *natural key*. If a key is artificial and not derived from data (e.g., an autoincremented integer), it's referred to as a *synthetic key* or *surrogate key*. -It's generally recommended to avoid using natural primary keys. These keys are often very wide and contain long values from one or multiple columns. This in turn can introduce severe storage overhead with the primary key value being copied into each secondary key entry. Moreover, natural keys don't usually follow a pre-determined order, which dramatically reduces performance and provokes page fragmentation when rows are inserted or updated. To avoid these issues, use monotonically increasing surrogate keys instead of natural keys. An auto-increment (big)integer column is a good example of a monotonically increasing surrogate key. If you require a certain combination of columns to be unique, declare those columns as a unique secondary key. +It's generally recommended to avoid using natural primary keys. These keys are often very wide and contain long values from one or multiple columns. This in turn can introduce severe storage overhead with the primary key value being copied into each secondary key entry.
Moreover, natural keys don't usually follow a predetermined order, which dramatically reduces performance and provokes page fragmentation when rows are inserted or updated. To avoid these issues, use monotonically increasing surrogate keys instead of natural keys. An autoincrement (big)integer column is a good example of a monotonically increasing surrogate key. If you require a certain combination of columns to be unique, declare those columns as a unique secondary key. -During the initial stages of building an application, you may not think ahead to imagine a time when your table begins to approach having two billion rows. As a result, you might opt to use a signed 4 byte integer for the data type of an ID (primary key) column. Be sure to check all table primary keys and switch to use 8 byte integer (BIGINT) columns to accommodate the potential for a high volume or growth. +During the initial stages of building an application, you might not think ahead to imagine a time when your table begins to approach having two billion rows. As a result, you might opt to use a signed 4-byte integer for the data type of an ID (primary key) column. Be sure to check all table primary keys and switch to use 8-byte integer (BIGINT) columns to accommodate the potential for a high volume or growth. > [!NOTE] > For more information about data types and their maximum values, in the MySQL Reference Manual, see [Data Types](https://dev.mysql.com/doc/refman/5.7/en/data-types.html). ## Use covering indexes -The previous section explains how indexes in MySQL are organized as B-Trees and in a clustered index, the leaf nodes contain the data pages of the underlying table. Secondary indexes have the same B-tree structure as clustered indexes, and you can define them on a table or view with a clustered index or a heap. Each index row in the secondary index contains the non-clustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. As a result, any lookup involving a secondary index must navigate starting from the root node through the branch nodes to the correct leaf node to take the primary key value. The system then executes a random IO read on the primary key index (once again navigating from the root node through the branch nodes to the correct leaf node) to get the data row. +The previous section explains how indexes in MySQL are organized as B-Trees and in a clustered index, the leaf nodes contain the data pages of the underlying table. Secondary indexes have the same B-tree structure as clustered indexes, and you can define them on a table or view with a clustered index or a heap. Each index row in the secondary index contains the nonclustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. As a result, any lookup involving a secondary index must navigate starting from the root node through the branch nodes to the correct leaf node to take the primary key value. The system then executes a random IO read on the primary key index (once again navigating from the root node through the branch nodes to the correct leaf node) to get the data row. To avoid this extra random IO read on the primary key index to get the data row, use a covering index, which includes all fields required by the query. Generally, using this approach is beneficial for I/O bound workloads and cached workloads.
So as a best practice, use covering indexes because they fit in memory and are smaller and more efficient to read than scanning all the rows. possible_keys: NULL 1 row in set, 1 warning (0.01 sec) ``` -However, if you were to add an index that covers the column in the where clause, along with the projected columns, you would see that the index is being used to locate the columns much more quickly and efficiently. +However, if you added an index that covers the column in the where clause, along with the projected columns, you would see that the index is being used to locate the columns much more quickly and efficiently. `mysql> CREATE INDEX cvg_idx_ex ON employee (joindate, empid, fname, lname);` |
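To illustrate the primary key guidance, a minimal sketch of a table that uses a monotonically increasing BIGINT surrogate key, with the natural identifier enforced as a unique secondary key (the table and column names are illustrative):

```
CREATE TABLE employee (
  empid BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- surrogate key: monotonically increasing
  email VARCHAR(254) NOT NULL,                    -- natural identifier, kept out of the primary key
  fname VARCHAR(50) NOT NULL,
  lname VARCHAR(50) NOT NULL,
  joindate DATE NOT NULL,
  PRIMARY KEY (empid),
  UNIQUE KEY uq_employee_email (email)            -- uniqueness enforced as a secondary key
) ENGINE = InnoDB;
```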
mysql | How To Troubleshoot High Cpu Utilization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-high-cpu-utilization.md | Last updated 06/20/2022 Azure Database for MySQL provides a range of metrics that you can use to identify resource bottlenecks and performance issues on the server. To determine whether your server is experiencing high CPU utilization, monitor metrics such as "Host CPU percent", "Total Connections", "Host Memory Percent", and "IO Percent". At times, viewing a combination of these metrics will provide insights into what might be causing the increased CPU utilization on your Azure Database for MySQL server. -For example, consider a sudden surge in connections that initiates a surge of database queries that cause CPU utilization to shoot up. +For example, consider a sudden surge in connections that initiates a surge of database queries that cause CPU utilization to shoot up. Besides capturing metrics, it's important to also trace the workload to understand if one or more queries are causing the spike in CPU utilization. ## Capturing details of the current workload The SHOW (FULL) PROCESSLIST command displays a list of all user sessions currently connected to the Azure Database for MySQL server. It also provides details about the current state and activity of each session.+ This command only produces a snapshot of the current session status and doesn't provide information about historical session activity. Let's take a look at sample output from running this command. Notice that there are two sessions owned by the customer-owned user "adminuser": * Session 24835 has been executing a SELECT statement for the last seven seconds. * Session 24837 is executing the "show full processlist" statement. -When necessary, it may be required to terminate a query, such as a reporting or HTAP query that has caused your production workload CPU usage to spike. However, always consider the potential consequences of terminating a query before taking the action in an attempt to reduce CPU utilization. Other times, if there are any long-running queries identified that are leading to CPU spikes, tune these queries so the resources are optimally utilized. +When necessary, it might be required to terminate a query, such as a reporting or HTAP query that has caused your production workload CPU usage to spike. However, always consider the potential consequences of terminating a query before taking the action in an attempt to reduce CPU utilization. Other times, if there are any long-running queries identified that are leading to CPU spikes, tune these queries so the resources are optimally utilized. ## Detailed current workload analysis You need to use at least two sources of information to obtain accurate informati With information from only one of these sources, it's impossible to describe the connection and transaction state. For example, the process list doesn't inform you whether there's an open transaction associated with any of the sessions. On the other hand, the transaction metadata doesn't show session state and time spent in that state.
-An example query that combines process list information with some of the important pieces of InnoDB transaction metadata is shown below: +The following example query combines process list information with some of the important pieces of InnoDB transaction metadata: ``` mysql> select p.id as session_id, p.user, p.host, p.db, p.command, p.time, p.state, substring(p.info, 1, 50) as info, t.trx_started, unix_timestamp(now()) - unix_timestamp(t.trx_started) as trx_age_seconds, t.trx_rows_modified, t.trx_isolation_level from information_schema.processlist p left join information_schema.innodb_trx t on p.id = t.trx_mysql_thread_id \G ``` -An example of the output from this query is shown below: +The following example shows the output from this query: ``` *************************** 1. row *************************** An analysis of this information, by session, is listed in the following table. Note that if a session is reported as idle, it's no longer executing any statements. At this point, the session has completed any prior work and is waiting for new statements from the client. However, idle sessions are still responsible for some CPU consumption and memory usage. ## Listing open transactions ++The output from the following query provides a list of all the transactions currently running against the database server in order of transaction start time so that you can easily identify if there are any long-running and blocking transactions exceeding their expected runtime. ++``` +SELECT trx_id, trx_mysql_thread_id, trx_state, Unix_timestamp() - ( To_seconds(trx_started) - To_seconds('1970-01-01 00:00:00') ) AS trx_age_seconds, trx_weight, trx_query, trx_tables_in_use, trx_tables_locked, trx_lock_structs, trx_rows_locked, trx_rows_modified, trx_isolation_level, trx_unique_checks, trx_is_read_only FROM information_schema.innodb_trx ORDER BY trx_started ASC; +``` + ## Understanding thread states Transactions that contribute to higher CPU utilization during execution can have threads in various states, as described in the following sections. Use this information to better understand the query lifecycle and various thread states. This state usually means that the thread is performing a write operation. Check ### Waiting for <lock_type> lock -This state indicates that the thread is waiting for a second lock. In most cases, it may be a metadata lock. You should review all other threads and see who is taking the lock. +This state indicates that the thread is waiting for a second lock. In most cases, it might be a metadata lock. You should review all other threads and see who is taking the lock. ## Understanding and analyzing wait events -It's important to understand the underlying wait events in the MySQL engine, because long waits or a large number of waits in a database can lead to increased CPU utilization. The following shows the appropriate command and sample output. +It's important to understand the underlying wait events in the MySQL engine, because long waits or a large number of waits in a database can lead to increased CPU utilization. The following example shows the appropriate command and sample output. ``` SELECT event_name AS wait_event, If you don't know about the execution cost and execution time for database ope ## Recommendations -* Ensure that your database has enough resources allocated to run your queries.
+* Ensure that your database has enough resources allocated to run your queries. At times, you might need to scale up the instance size to get more CPU cores to accommodate your workload. * Avoid large or long-running transactions by breaking them into smaller transactions. * Run SELECT statements on read replica servers when possible. * Use alerts on "Host CPU Percent" so that you get notifications if the system exceeds any of the specified thresholds. |
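If you do decide to terminate a runaway statement identified in the process list, a sketch using the session ID from the earlier example output:

```
-- Terminate only the running statement; the session stays connected.
KILL QUERY 24835;

-- Or terminate the whole connection (use with care on production workloads).
KILL 24835;
```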
mysql | How To Troubleshoot Query Performance New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance-new.md | Usually, you should focus on queries with high values for Query_time and Rows_ex ## Profiling a query -After you've identified a specific slow-running query, you can use the EXPLAIN command and profiling to gather additional detail. +After you've identified a specific slow-running query, you can use the EXPLAIN command and profiling to gather more detail. To check the query plan, run the following command: ORDER BY sum_timer_wait DESC LIMIT 10; > Use this query to benchmark the top executed queries in your database server and determine if there's been a change in the top queries or if any existing queries in the initial benchmark have increased in run duration. > ## Listing the 10 most expensive queries by total execution time ++The output from the following query provides information about the top 10 queries running against the database server and their number of executions on the database server. It also provides other useful information such as the query latencies, their lock times, the number of temp tables created as part of query runtime, etc. Use this query output to keep track of the top queries on the database and changes to factors such as latencies, which might indicate a chance to fine-tune the query further to help avoid any future risks. ++``` +SELECT REPLACE(event_name, 'statement/sql/', '') AS statement, + count_star AS all_occurrences , + Concat(Round(sum_timer_wait / 1000000000000, 2), ' s') AS total_latency, + Concat(Round(avg_timer_wait / 1000000000000, 2), ' s') AS avg_latency, + Concat(Round(sum_lock_time / 1000000000000, 2), ' s') AS total_lock_time , + sum_rows_affected AS sum_rows_changed, + sum_rows_sent AS sum_rows_selected, + sum_rows_examined AS sum_rows_scanned, + sum_created_tmp_tables, sum_created_tmp_disk_tables, + IF(sum_created_tmp_tables = 0, 0, Concat( Truncate(sum_created_tmp_disk_tables / + sum_created_tmp_tables * 100, 0))) AS + tmp_disk_tables_percent, + sum_select_scan, + sum_no_index_used, + sum_no_good_index_used +FROM performance_schema.events_statements_summary_global_by_event_name +WHERE event_name LIKE 'statement/sql/%' + AND count_star > 0 +ORDER BY sum_timer_wait DESC +LIMIT 10; +``` + ## Monitoring InnoDB garbage collection When InnoDB garbage collection is blocked or delayed, the database can develop a substantial purge lag that can negatively affect storage utilization and query performance. When interpreting HLL values, consider the guidelines listed in the following ta | **Value** | **Notes** | ||| | Less than ~10,000 | Normal values, indicating that garbage collection isn't falling behind. |-| Between ~10,000 and ~1,000,000 | These values indicate a minor lag in garbage collection. Such values may be acceptable if they remain steady and don't increase. | -| Greater than ~1,000,000 | These values should be investigated and may require corrective actions. | +| Between ~10,000 and ~1,000,000 | These values indicate a minor lag in garbage collection. Such values might be acceptable if they remain steady and don't increase. | +| Greater than ~1,000,000 | These values should be investigated and might require corrective actions. | ### Addressing excessive HLL values command: Query ## Recommendations -* Ensure that your database has enough resources allocated to run your queries.
At times, you may need to scale up the instance size to get more CPU cores and additional memory to accommodate your workload. +* Ensure that your database has enough resources allocated to run your queries. At times, you might need to scale up the instance size to get more CPU cores and additional memory to accommodate your workload. * Avoid large or long-running transactions by breaking them into smaller transactions. * Configure innodb_purge_threads as per your workload to improve efficiency for background purge operations. > [!NOTE] |
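As a quick illustration of monitoring the purge lag discussed above, the current history list length can usually be read from the `INNODB_METRICS` table. This sketch assumes the standard `trx_rseg_history_len` counter (enabled by default on recent MySQL builds) is available on your server:

```
SELECT name, count
FROM information_schema.innodb_metrics
WHERE name = 'trx_rseg_history_len';
```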
mysql | How To Troubleshoot Query Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance.md | |
nat-gateway | Nat Gateway Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-resource.md | A NAT gateway provides a configurable idle timeout range of 4 minutes to 120 min When a connection goes idle, the NAT gateway holds onto the SNAT port until the connection idle times out. Because long idle timeout timers can unnecessarily increase the likelihood of SNAT port exhaustion, it isn't recommended to increase the TCP idle timeout duration to longer than the default time of 4 minutes. The idle timer doesn't affect a flow that never goes idle. -TCP keepalives can be used to provide a pattern of refreshing long idle connections and endpoint liveness detection. For more information, see these [.NET examples] (/dotnet/api/system.net.servicepoint.settcpkeepalive). TCP keepalives appear as duplicate ACKs to the endpoints, are low overhead, and invisible to the application layer. +TCP keepalives can be used to provide a pattern of refreshing long idle connections and endpoint liveness detection. For more information, see these [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). TCP keepalives appear as duplicate ACKs to the endpoints, are low overhead, and are invisible to the application layer. UDP idle timeout timers aren't configurable; use UDP keepalives to ensure that the idle timeout value isn't reached and that the connection is maintained. Unlike TCP connections, a UDP keepalive enabled on one side of the connection only applies to traffic flow in one direction. UDP keepalives must be enabled on both sides of the traffic flow in order to keep the traffic flow alive. |
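To illustrate, a .NET client can opt into TCP keepalives process-wide through `ServicePointManager`. The following PowerShell sketch uses illustrative timer values (first probe after 120 seconds of idle time, then every 30 seconds); tune them to stay below your configured idle timeout:

```powershell
# Illustrative values: probe after 120 s of idle time, then every 30 s.
[System.Net.ServicePointManager]::SetTcpKeepAlive($true, 120000, 30000)
```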
operator-nexus | Howto Kubernetes Service Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-service-load-balancer.md | The IP address pool configuration requires the presence of two fields: `addresse * The `addresses` field specifies the list of IP address ranges that can be used for allocation within the pool. You can define each range as a subnet in CIDR format or as an explicit start-end range of IP addresses. * The `name` field serves as a unique identifier for the IP address pool. It helps associate the pool with a BGP (Border Gateway Protocol) advertisement, enabling effective communication within the cluster. +> [!NOTE] +> To enable the Kubernetes `LoadBalancer` service to have a dual-stack address, make sure that the IP pool configuration includes both IPv4 and IPv6 CIDR/addresses. + ### Optional parameters In addition to the required fields, there are also optional fields available for further customization of the IP address pool configuration. |
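For example, a dual-stack IP address pool might be sketched as follows. The `name` and `addresses` fields are the required fields described above; the pool name and the documentation-reserved example ranges are placeholders, and the exact surrounding schema depends on your load balancer configuration:

```json
{
  "name": "pool-dualstack",
  "addresses": [
    "192.0.2.0/27",
    "2001:db8:0:1::/64"
  ]
}
```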
orbital | About Ground Stations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/about-ground-stations.md | + + Title: Azure Orbital Ground Stations - About Microsoft and partner ground stations +description: Provides specs on Microsoft ground stations and outlines the partner ground station network. ++++ Last updated : 10/20/2023++#Customer intent: As a satellite operator or user, I want to learn about Microsoft and partner ground stations. +++# About Microsoft and partner ground stations ++## Microsoft ground stations ++Microsoft owns and operates five ground stations around the world. +++Our antennas are 6.1 meters in diameter and support X-band and S-band. ++### X-band
+| Downlink Frequencies (MHz) | G/T (dB/K) |
+|-||
+| 8000-8400 | 30.0 |
++### S-band
+| Uplink Frequencies (MHz) | EIRP (dBW) | Downlink Frequencies (MHz) | G/T (dB/K) |
+|--||-||
+| 2025-2120 | 52.0 | 2200-2300 | 15.0 |
++## Partner ground stations ++Azure Orbital Ground Station offers a common data plane and API to access all antennas in the global network. To onboard with a partner, you must have an active contract with each partner network that you wish to integrate with Azure Orbital Ground Station. |
orbital | Concepts Contact Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact-profile.md | Title: Ground station contact profile - Azure Orbital -description: Learn more about the contact profile object, including how to create, modify, and delete the profile. + Title: Azure Orbital Ground Station - contact profile +description: Learn more about the contact profile resource, including how to create, modify, and delete the profile. Last updated 07/13/2022 -#Customer intent: As a satellite operator or user, I want to understand how to use the contact profile so that I can take passes using the Azure Orbital Ground Station (AOGS) service. +#Customer intent: As a satellite operator or user, I want to understand how to use the contact profile so that I can take passes using Azure Orbital Ground Station. # Ground station contact profile -The contact profile object stores pass requirements such as links and endpoint details for each link. Use this object along with the spacecraft object during pass scheduling to view and schedule available passes. +The contact profile resource stores pass requirements such as links and endpoint details. Use this resource along with the spacecraft resource during contact scheduling to view and schedule available passes. -You can create many contact profiles to represent different types of passes depending on your mission operations. For example, you can create a contact profile for a command and control pass or a contact profile for a downlink only pass. +You can create many contact profiles to represent different types of passes depending on your mission operations. For example, you can create a contact profile for a command and control pass or a contact profile for a downlink-only pass. -These objects are mutable and do not undergo an authorization process like the spacecraft objects do. One contact profile can be used with many spacecraft objects. +These resources are mutable and do not undergo an authorization process like the spacecraft resources do. One contact profile can be used with many spacecraft resources. -See [how to configure a contact profile](contact-profile.md) for the full list of parameters. +See [how to configure a contact profile](contact-profile.md) for a full list of parameters. ## Prerequisites +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Subnet that is created in the relevant VNET and resource group. See [prepare network for Azure Orbital Ground Station integration](prepare-network.md). ## Creating a contact profile Follow these steps to [create a contact profile](contact-profile.md). ## Adjusting pass parameters -Specify a minimum pass time to ensure your passes are a certain duration. Specify a minimum elevation to ensure passes are above a certain elevation. +Specify a minimum pass time to ensure passes are a certain duration. Specify a minimum elevation to ensure passes are above a certain elevation. -The minimum pass time and minimum elevation parameters are used by the service during the contact scheduling. Avoid changing these on a pass-by-pass basis and instead create multiple contact profiles if you require flexibility. +The minimum pass time and minimum elevation parameters are used by Azure Orbital Ground Station during the contact scheduling. Avoid changing these on a pass-by-pass basis and instead create multiple contact profiles if you require flexibility. 
## Understanding links and channels Refer to the example below to understand how to specify an RHCP channel and an L ## Modifying or deleting a contact profile -You can modify or delete the contact profile via the Orbital Portal or through the API. +You can modify or delete the contact profile via the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/). -## Configuring contact profile for third party ground stations +## Configuring a contact profile for third-party ground stations -When you onboard a third party network, you will receive a token that identifies your profile. Use this token in the contact profile object to link a contact profile to the third party network. +When you onboard a third-party network, you receive a token that identifies your profile. Use this token in the contact profile resource to link a contact profile to the third-party network. ## Next steps - [Schedule a contact](schedule-contact.md) - [Configure the RF chain](modem-chain.md) - [Update the Spacecraft TLE](update-tle.md)- |
orbital | Mission Phases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/mission-phases.md | + + Title: Azure Orbital Ground Station - Mission Phases +description: Points users to relevant resources depending on the phase of their mission. ++++ Last updated : 10/12/2023++#Customer intent: As a satellite operator or user, I want to know how to use AOGS at each phase in my satellite mission. +++# Mission Phases ++Azure Orbital Ground Station provides easy and secure access to communication products and services required to support all phases of satellite missions. ++## Pre-launch ++- Initiate ground station licensing ahead of launch to ensure you can communicate with your spacecraft. +- [Create and authorize a spacecraft](register-spacecraft.md) resource for your satellite. +- [Configure a contact profile](contact-profile.md) with links and channels. +- [Prepare your network](prepare-network.md) to send and receive data between the spacecraft and Azure Orbital Ground Station. +- [Add a modem configuration file](modem-chain.md) to the contact profile. +- Prepare for launch with RF compatibility testing and enrollment in Launch Window Scheduling (in preview). ++## Launch and nominal operations ++- Keep the [spacecraft TLE](update-tle.md) up to date. +- [Receive real-time telemetry](receive-real-time-telemetry.md) from the contact. +- [Use sample queries](resource-graph-samples.md) for Azure Resource Graph. |
orbital | Schedule Contact | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/schedule-contact.md | Title: How to schedule a contact on Azure Orbital Earth Observation service + Title: Azure Orbital Ground Station - schedule a contact description: Learn how to schedule a contact. -# Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure. +# Customer intent: As a satellite operator, I want to schedule a contact to ingest data from my satellite into Azure. # Schedule a contact Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal). :::image type="content" source="media/orbital-eos-view-scheduled-contacts.png" alt-text="View scheduled contacts page" lightbox="media/orbital-eos-view-scheduled-contacts.png"::: +## Cancel a contact ++To cancel a scheduled contact, you must delete the contact resource. Learn more at [contact resource](concepts-contact.md). + ## Next steps - [Register Spacecraft](register-spacecraft.md) |
orbital | Spacecraft Object | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/spacecraft-object.md | Title: Spacecraft object - Azure Orbital -description: Learn about how you can represent your spacecraft details in Azure Orbital. + Title: Spacecraft resource - Azure Orbital Ground Station +description: Learn about how you can represent your spacecraft details in Azure Orbital Ground Station. Last updated 07/13/2022 -#Customer intent: As a satellite operator or user, I want to understand what the spacecraft object does so I can manage my mission. +#Customer intent: As a satellite operator or user, I want to understand what the spacecraft resource does so I can manage my mission. -# Spacecraft object +# Spacecraft resource Learn about how you can represent your spacecraft details in Azure Orbital Ground Station. ## Spacecraft details -The spacecraft object captures three types of information: +The spacecraft resource captures three types of information: - **Links** - RF details on center frequency, direction, and bandwidth for each link. - **Ephemeris** - The latest spacecraft TLE.-- **Licensing** - Authorizations held on a per-link, per-site basis.+- **Licensing** - Authorizations are held on a per-link, per-site basis. ### Links -Make sure to capture each link that you wish to use with Azure Orbital Ground Station when you create the spacecraft object. The following details are required: +Make sure to capture each link that you wish to use with Azure Orbital Ground Station when you create the spacecraft resource. The following details are required: | **Field** | **Values** | ||--| As TLEs are prone to expiration, the user must keep the TLE up-to-date using the ### Licensing -In order to uphold regulatory requirements across the world, the spacecraft object contains authorizations on a per-link and per-site level that permits usage of the Azure Orbital Ground Station sites. +In order to uphold regulatory requirements across the world, the spacecraft resource contains authorizations for specific links and sites that permit usage of the Azure Orbital Ground Station sites. -The platform will deny scheduling or execution of contacts if the spacecraft object links are not authorized. The platform will also deny contact if a profile contains links that are not included in the spacecraft object authorized links. +The platform will deny scheduling or execution of contacts if the spacecraft resource links aren't authorized. The platform will also deny contact if a profile contains links that aren't included in the spacecraft resource authorized links. For more information, see the [spacecraft authorization and ground station licensing](register-spacecraft.md) documentation. For more information, see the [spacecraft authorization and ground station licen For more information on how to create a spacecraft resource, see the details listed in the [register a spacecraft](register-spacecraft.md) article. -## Managing spacecraft objects +## Managing spacecraft resources -Spacecraft objects can be created and deleted via the Portal and Azure Orbital Ground Station APIs. Once the object is created, modification to the object is dependent on the authorization status. +Spacecraft resources can be created and deleted via the Portal and Azure Orbital Ground Station APIs. Once the resource is created, modification to the resource is dependent on the authorization status. -When the spacecraft is unauthorized, then the spacecraft object can be modified. 
The API is the best way to make changes to the spacecraft object as the Portal only allows TLE updates. +When the spacecraft is unauthorized, the spacecraft resource can be modified. The API is the best way to make changes to the spacecraft resource, as the Portal only allows TLE updates. Once the spacecraft is authorized, TLE updates are the only modifications possible. Other fields, such as links, become immutable. The TLE updates are possible via the Portal and Orbital API. ## Delete spacecraft resource -You can delete the spacecraft object via the Portal or the API. See [How-to: Delete Contact](delete-contact.md). +You can delete the spacecraft resource via the Azure portal or the Azure Orbital Ground Station API. You must first delete all scheduled contacts associated with that spacecraft resource. See [contact resource](concepts-contact.md) for more information. ## Next steps |
security | Encryption Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md | The Azure services that support each encryption model: | Microsoft Entra ID | Yes | - | - | | Microsoft Entra Domain Services | Yes | Yes | - | | **Integration** | | | |-| Service Bus | Yes | Yes | Yes | +| Service Bus | Yes | Yes | - | | Event Grid | Yes | - | - | | API Management | Yes | - | - | | **IoT Services** | | | | |
service-bus-messaging | Message Sequencing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sequencing.md | To learn more about Service Bus messaging, see the following topics: * [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md) * [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md) +## Additional resource ++* [A blog post that describes techniques for reordering messages that arrive out of order](https://particular.net/blog/you-dont-need-ordered-delivery) |
service-connector | How To Integrate Cosmos Cassandra | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md | -# Integrate the Azure Cosmos DB for Cassandra with Service Connector +# Integrate Azure Cosmos DB for Cassandra with Service Connector This page shows the supported authentication types and client types for the Azure Cosmos DB for Apache Cassandra using Service Connector. You might still be able to connect to the Azure Cosmos DB for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md). This page shows the supported authentication types and client types for the Azur Supported authentication and clients for App Service, Container Apps and Azure Spring Apps: -### [Azure App Service](#tab/app-service) --| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | -|--|--|--|--|--| -| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | -| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | --### [Azure Container Apps](#tab/container-apps) - | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | Supported authentication and clients for App Service, Container Apps and Azure S | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -### [Azure Spring Apps](#tab/spring-apps) --| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | -|--|--|--|--|--| -| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java - Spring Boot | | | ![yes 
icon](./media/green-check.png) | | -| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | - -## Default environment variable names or application properties --Use the connection details below to connect your compute services to the Azure Cosmos DB for Apache Cassandra. For each example below, replace the placeholder texts `<Azure-Cosmos-DB-account>`, `keyspace`, `<username>`, `<password>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`,`<client-secret>`, `<tenant-id>`, and `<Azure-region>` with your own information. +## Default environment variable names or application properties and Sample code -### Azure App Service and Azure Container Apps +Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect your compute services to Azure Cosmos DB for Apache Cassandra. **Please go to beginning of the documentation to choose authentication type.** -#### Secret / Connection string --| Default environment variable name | Description | Example value | -|--|--|| -| AZURE_COSMOS_CONTACTPOINT | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` | -| AZURE_COSMOS_PORT | Cassandra connection port | 10350 | -| AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` | -| AZURE_COSMOS_USERNAME | Cassandra username | `<username>` | -| AZURE_COSMOS_PASSWORD | Cassandra password | `<password>` | --#### System-assigned managed identity +### Connect with System-assigned Managed Identity | Default environment variable name | Description | Example value | |--|--|--| Use the connection details below to connect your compute services to the Azure C | AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` | | AZURE_COSMOS_USERNAME | Cassandra username | `<username>` | -#### User-assigned managed identity +#### Sample code ++Refer to the steps and code below to connect to Azure Cosmos DB for Cassandra using a system-assigned managed identity. ++### Connect with User-assigned Managed Identity | Default environment variable name | Description | Example value | |--|--|--| Use the connection details below to connect your compute services to the Azure C | AZURE_COSMOS_USERNAME | Cassandra username | `<username>` | | AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` | +#### Sample code ++Refer to the steps and code below to connect to Azure Cosmos DB for Cassandra using a user-assigned managed identity. 
++### Connect with Connection String ++#### SpringBoot client type ++| Default environment variable name | Description | Example value | +|-|--|--| +| spring.data.cassandra.contact-points | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` | +| spring.data.cassandra.port | Cassandra connection port | 10350 | +| spring.data.cassandra.keyspace-name | Cassandra keyspace | `<keyspace>` | +| spring.data.cassandra.username | Cassandra username | `<username>` | +| spring.data.cassandra.password | Cassandra password | `<password>` | +| spring.data.cassandra.local-datacenter | Azure Region | `<Azure-region>` | +| spring.data.cassandra.ssl | SSL status | true | ++#### Other client types ++| Default environment variable name | Description | Example value | +|--|--|| +| AZURE_COSMOS_CONTACTPOINT | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` | +| AZURE_COSMOS_PORT | Cassandra connection port | 10350 | +| AZURE_COSMOS_KEYSPACE | Cassandra keyspace | `<keyspace>` | +| AZURE_COSMOS_USERNAME | Cassandra username | `<username>` | +| AZURE_COSMOS_PASSWORD | Cassandra password | `<password>` | ++#### Sample code ++Refer to the steps and code below to connect to Azure Cosmos DB for Cassandra using a connection string. ++ #### Service principal | Default environment variable name | Description | Example value | Use the connection details below to connect your compute services to the Azure C | AZURE_COSMOS_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` | -### Azure Spring Apps +#### Sample code -| Default environment variable name | Description | Example value | -|-|--|--| -| spring.data.cassandra.contact_points | Azure Cosmos DB for Apache Cassandra contact point | `<Azure-Cosmos-DB-account>.cassandra.cosmos.azure.com` | -| spring.data.cassandra.port | Cassandra connection port | 10350 | -| spring.data.cassandra.keyspace_name | Cassandra keyspace | `<keyspace>` | -| spring.data.cassandra.username | Cassandra username | `<username>` | -| spring.data.cassandra.password | Cassandra password | `<password>` | -| spring.data.cassandra.local_datacenter | Azure Region | `<Azure-region>` | -| spring.data.cassandra.ssl | SSL status | true | +Refer to the steps and code below to connect to Azure Cosmos DB for Cassandra using a service principal. ## Next steps |
service-connector | How To Integrate Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md | -This page shows the supported authentication types, client types and sample codes of Azure Database for MySQL - Flexible Server using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. Also detail steps with sample codes about how to make connection to the database. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md). +This page shows the supported authentication types, client types, and sample code for Azure Database for MySQL - Flexible Server using Service Connector. This page also shows the default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, along with detailed steps and sample code showing how to connect to the database. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md). [!INCLUDE [Azure-database-for-mysql-single-server-deprecation](../mysql/includes/azure-database-for-mysql-single-server-deprecation.md)] Supported authentication and clients for App Service, Container Apps, and Azure > [!NOTE] > System-assigned managed identity, User-assigned managed identity and Service principal are only supported on Azure CLI. -## Default environment variable names or application properties and Sample codes +## Default environment variable names or application properties and Sample code -Reference the connection details and sample codes in following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for MySQL. +Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for MySQL. ### System-assigned Managed Identity Reference the connection details and sample codes in following tables, according -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Database for MySQL. +Refer to the steps and code below to connect to Azure Database for MySQL. [!INCLUDE [code sample for mysql system mi](./includes/code-mysql-me-id.md)] ### User-assigned Managed Identity Follow these steps and sample codes to connect to Azure Database for MySQL. -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Database for MySQL. +Refer to the steps and code below to connect to Azure Database for MySQL. [!INCLUDE [code sample for mysql system mi](./includes/code-mysql-me-id.md)] ### Connection String After created a `springboot` client type connection, Service Connector service w -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Database for MySQL. +Refer to the steps and code below to connect to Azure Database for MySQL. [!INCLUDE [code sample for mysql secrets](./includes/code-mysql-secret.md)] ### Service Principal Follow these steps and sample codes to connect to Azure Database for MySQL. -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Database for MySQL. +Refer to the steps and code below to connect to Azure Database for MySQL. 
[!INCLUDE [code sample for mysql system mi](./includes/code-mysql-me-id.md)] ## Next steps |
service-connector | How To Integrate Postgres | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md | Supported authentication and clients for App Service, Container Apps, and Azure > [!NOTE] > System-assigned managed identity, User-assigned managed identity and Service principal are only supported on Azure CLI. -## Default environment variable names or application properties and Sample codes +## Default environment variable names or application properties and Sample code -Reference the connection details and sample codes in following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for PostgreSQL. +Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Database for PostgreSQL. ### Connect with System-assigned Managed Identity Reference the connection details and sample codes in following tables, according -### Sample codes +### Sample code -Follow these steps and sample codes to connect to Azure Database for PostgreSQL. +Refer to the steps and code below to connect to Azure Database for PostgreSQL. [!INCLUDE [code sample for postgresql system mi](./includes/code-postgres-me-id.md)] Follow these steps and sample codes to connect to Azure Database for PostgreSQL. -### Sample codes +### Sample code -Follow these steps and sample codes to connect to Azure Database for PostgreSQL. +Refer to the steps and code below to connect to Azure Database for PostgreSQL. [!INCLUDE [code sample for postgresql user mi](./includes/code-postgres-me-id.md)] ### Connect with Connection String Follow these steps and sample codes to connect to Azure Database for PostgreSQL. -### Sample codes +### Sample code -Follow these steps and sample codes to connect to Azure Database for PostgreSQL. +Refer to the steps and code below to connect to Azure Database for PostgreSQL. [!INCLUDE [code sample for postgresql secrets](./includes/code-postgres-secret.md)] ### Connect with Service Principal Follow these steps and sample codes to connect to Azure Database for PostgreSQL. -### Sample codes +### Sample code -Follow these steps and sample codes to connect to Azure Database for PostgreSQL. +Refer to the steps and code below to connect to Azure Database for PostgreSQL. [!INCLUDE [code sample for postgresql service principal](./includes/code-postgres-me-id.md)] |
service-connector | How To Integrate Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md | -zone_pivot_group_filename: service-connector/zone-pivot-groups.json -zone_pivot_groups: howto-authtype Last updated : 10/20/2023 # Integrate Azure Blob Storage with Service Connector -This page shows the supported authentication types, client types and sample codes of Azure Blob Storage using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. Also detail steps with sample codes about how to make connection to the blob storage. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md). +This page shows the supported authentication types, client types, and sample code for Azure Blob Storage using Service Connector. This page also shows the default environment variable names and values (or Spring Boot configuration) you get when you create the service connection, along with detailed steps and sample code showing how to connect to Blob Storage. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md). ## Supported compute service Supported authentication and clients for App Service, Container Apps and Azure S -## Default environment variable names or application properties and sample codes +## Default environment variable names or application properties and sample code -Reference the connection details and sample codes in following tables, accordings to your connection's authentication type and client type, to connect compute services to Azure Blob Storage. Please go to beginning of the documentation to choose authentication type. -+Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect compute services to Azure Blob Storage. ### System-assigned managed identity-For default environment variables and sample codes of other authentication type, please choose from beginning of the documentation. +For the default environment variables and sample code for other authentication types, see the beginning of this article. | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Blob Storage with system-assigned managed identity. +Refer to the steps and code below to connect to Azure Blob Storage using a system-assigned managed identity. [!INCLUDE [code sample for blob](./includes/code-blob-me-id.md)] -- ### User-assigned managed identity -For default environment variables and sample codes of other authentication type, please choose from beginning of the documentation. +For the default environment variables and sample code for other authentication types, see the beginning of this article. 
| Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | | AZURE_STORAGEBLOB_CLIENTID | Your client ID | `<client-ID>` | -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Blob Storage with user-assigned managed identity. +Refer to the steps and code below to connect to Azure Blob Storage using a user-assigned managed identity. [!INCLUDE [code sample for blob](./includes/code-blob-me-id.md)] --- ### Connection string -For default environment variables and sample codes of other authentication type, please choose from beginning of the documentation. +For the default environment variables and sample code for other authentication types, see the beginning of this article. #### SpringBoot client type For default environment variables and sample codes of other authentication type, | AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` | -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Blob Storage with connection string. +Refer to the steps and code below to connect to Azure Blob Storage using a connection string. [!INCLUDE [code sample for blob](./includes/code-blob-secret.md)] -- ### Service principal -For default environment variables and sample codes of other authentication type, please choose from beginning of the documentation. +For the default environment variables and sample code for other authentication types, see the beginning of this article. | Default environment variable name | Description | Example value | ||--|| For default environment variables and sample codes of other authentication type, | AZURE_STORAGEBLOB_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEBLOB_TENANTID | Your tenant ID | `<tenant-ID>` | -#### Sample codes +#### Sample code -Follow these steps and sample codes to connect to Azure Blob Storage with service principal. +Refer to the steps and code below to connect to Azure Blob Storage using a service principal. [!INCLUDE [code sample for blob](./includes/code-blob-me-id.md)] - ## Next steps Follow the tutorials to learn more about Service Connector. |
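As a supplement to the sections above, the following PowerShell sketch shows one way an application might consume the `AZURE_STORAGEBLOB_RESOURCEENDPOINT` variable with a managed identity. It is an illustration only: it assumes a recent Az.Accounts/Az.Storage module (with the `-BlobEndpoint` parameter) and that the identity has already been granted access to the storage account.

```powershell
# Assumes Az.Accounts and Az.Storage are installed and a managed identity is available.
Connect-AzAccount -Identity
$ctx = New-AzStorageContext -BlobEndpoint $env:AZURE_STORAGEBLOB_RESOURCEENDPOINT -UseConnectedAccount
# List a few containers to verify the connection works.
Get-AzStorageContainer -Context $ctx | Select-Object -First 5
```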
service-fabric | How To Managed Cluster Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-gateway.md | + + Title: Use Application Gateway in a Service Fabric managed cluster +description: This article describes how to use Application Gateway in a Service Fabric managed cluster. ++++++ Last updated : 09/05/2023+++# Use Azure Application Gateway in a Service Fabric managed cluster ++[Azure Application Gateway](../application-gateway/overview.md) is a web traffic load balancer that enables you to manage traffic to your web applications. There are [several benefits to using Application Gateway](https://azure.microsoft.com/products/application-gateway/#overview). Service Fabric managed clusters support Azure Application Gateway and allow you to connect your node types to an Application Gateway. You can [create an Azure Application Gateway](../application-gateway/quick-create-portal.md) and pass the resource ID to the Service Fabric managed cluster ARM template. +++## How to use Application Gateway in a Service Fabric managed cluster ++### Requirements ++ Use Service Fabric API version 2022-08-01-Preview (or newer). ++### Steps ++The following steps describe how to use Azure Application Gateway in a Service Fabric managed cluster: ++1. Follow the steps in the [Quickstart: Direct web traffic using the portal - Azure Application Gateway](../application-gateway/quick-create-portal.md). Note the resource ID for use in a later step. ++2. Link your Application Gateway to the node type of your Service Fabric managed cluster. To do this, you must grant SFMC permission to join the application gateway. This permission is granted by assigning SFMC the “Network Contributor” role on the application gateway resource, as described in the steps below: ++ A. Get the `Id` of the Service Fabric Resource Provider application from your subscription. ++ ```powershell
+    Login-AzAccount
+    Select-AzSubscription -SubscriptionId <SubId>
+    Get-AzADServicePrincipal -DisplayName "Azure Service Fabric Resource Provider"
+    ``` ++ > [!NOTE] + > Make sure you are in the correct subscription; the principal ID changes if the subscription is in a different tenant. ++ ```powershell
+    ServicePrincipalNames : {74cb6831-0dbb-4be1-8206-fd4df301cdc2}
+    ApplicationId         : 74cb6831-0dbb-4be1-8206-fd4df301cdc2
+    ObjectType            : ServicePrincipal
+    DisplayName           : Azure Service Fabric Resource Provider
+    Id                    : 00000000-0000-0000-0000-000000000000
+    ``` ++ Note the **Id** of the previous output as **principalId** for use in a later step. ++ |Role definition name|Role definition ID| + |-|-| + |Network Contributor|4d97b98b-1d4f-4787-a291-c67834d212e7| ++ Note the `Role definition name` and `Role definition ID` property values for use in a later step. +++ B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) adds a role assignment to the application gateway with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of the template, with the PrincipalId and a role definition ID determined in the first step. 
+++
+   ```json
+   "variables": {
+        "sfApiVersion": "2022-08-01-preview",
+        "networkApiVersion": "2020-08-01",
+        "clusterResourceId": "[resourceId('Microsoft.ServiceFabric/managedclusters', parameters('clusterName'))]",
+        "rgRoleAssignmentId": "[guid(resourceGroup().id, 'SFRP-NetworkContributor')]",
+        "auxSubnetName": "AppGateway",
+        "auxSubnetNsgName": "AppGatewayNsg",
+        "auxSubnetNsgID": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('auxSubnetNsgName'))]",
+        "frontendIPName": "[concat(parameters('clusterName'), '-AppGW-IP')]",
+        "appGatewayName": "[concat(parameters('clusterName'), '-AppGW')]",
+        "appGatewayDnsName": "[concat(parameters('clusterName'), '-appgw')]",
+        "appGatewayResourceId": "[resourceId('Microsoft.Network/applicationGateways', variables('appGatewayName'))]",
+        "appGatewayFrontendPort": 80,
+        "appGatewayBackendPort": 8000,
+        "appGatewayBackendPool": "AppGatewayBackendPool",
+        "frontendConfigAppGateway": [
+            {
+                "applicationGatewayBackendAddressPoolId": "[resourceId('Microsoft.Network/applicationGateways/backendAddressPools', variables('appGatewayName'), variables('appGatewayBackendPool'))]"
+            }
+        ],
+        "primaryNTFrontendConfig": "[if(parameters('enableAppGateway'), variables('frontendConfigAppGateway'), createArray())]",
+        "secondaryNTFrontendConfig": "[if(parameters('enableAppGateway'), variables('frontendConfigAppGateway'), createArray())]"
+    },
+    "resources": [
+        {
+            "type": "Microsoft.Authorization/roleAssignments",
+            "apiVersion": "2020-04-01-preview",
+            "name": "[variables('rgRoleAssignmentId')]",
+            "properties": {
+                "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/4d97b98b-1d4f-4787-a291-c67834d212e7')]",
+                "principalId": "[parameters('sfrpPrincipalId')]"
+            }
+        },
+   ```
+++ Alternatively, you can add the role assignment via PowerShell, using the PrincipalId determined in the first step and the role definition ID shown above, where applicable. ++ ```powershell
+    New-AzRoleAssignment -PrincipalId "sfrpPrincipalId" `
+    -RoleDefinitionId "4d97b98b-1d4f-4787-a291-c67834d212e7" `
+    -ResourceName <resourceName> `
+    -ResourceType <resourceType> `
+    -ResourceGroupName <resourceGroupName>
+    ``` ++3. Use a [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) that assigns roles and adds application gateway configuration as part of the Service Fabric managed cluster creation. Update the template with `principalId`, `appGatewayName`, and `appGatewayBackendPoolId` obtained above. +4. Alternatively, you can modify your existing ARM template and add the new property `appGatewayBackendPoolId` under the Microsoft.ServiceFabric/managedClusters resource; it takes the resource ID of the application gateway's backend address pool. ++ #### ARM template: + + ```JSON
+   "frontendConfigurations": [
+       {
+           "applicationGatewayBackendAddressPoolId": "<appGatewayBackendPoolId>"
+       }
+   ]
+   ``` |
service-fabric | How To Managed Cluster Ddos Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-ddos-protection.md | The following section describes the steps that should be taken to use DDoS Netwo Note the `Role definition name` and `Role definition ID` property values for use in a later step - B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) adds a role assignment to the DDoS Protection Plan with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role- based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of template with PrincipalId and a role definition ID determined from the first step. + B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) adds a role assignment to the DDoS Protection Plan with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of the template, with the PrincipalId and a role definition ID determined in the first step. ```json |
spring-apps | How To Bind Cosmos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md | For the default environment variable names, see the following articles: * [Azure Cosmos DB for NoSQL](../service-connector/how-to-integrate-cosmos-sql.md?tabs=spring-apps#default-environment-variable-names-or-application-properties) * [Azure Cosmos DB for MongoDB](../service-connector/how-to-integrate-cosmos-db.md?tabs=spring-apps#default-environment-variable-names-or-application-properties) * [Azure Cosmos DB for Gremlin](../service-connector/how-to-integrate-cosmos-gremlin.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)-* [Azure Cosmos DB for Cassandra](../service-connector/how-to-integrate-cosmos-cassandra.md?tabs=spring-apps#default-environment-variable-names-or-application-properties) +* [Azure Cosmos DB for Cassandra](../service-connector/how-to-integrate-cosmos-cassandra.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code) |
spring-apps | How To Bind Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-mysql.md | With Azure Spring Apps, you can connect selected Azure services to your applicat All the connection strings and credentials are injected as environment variables, which you can reference in your application code. -For the default environment variable names, see [Integrate Azure Database for MySQL with Service Connector](../service-connector/how-to-integrate-mysql.md#default-environment-variable-names-or-application-properties-and-sample-codes). +For the default environment variable names, see [Integrate Azure Database for MySQL with Service Connector](../service-connector/how-to-integrate-mysql.md#default-environment-variable-names-or-application-properties-and-sample-code). |
spring-apps | How To Bind Postgres | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md | Use the following steps to prepare your project. All the connection strings and credentials are injected as environment variables, which you can reference in your application code. -For the default environment variable names, see [Integrate Azure Database for PostgreSQL with Service Connector](../service-connector/how-to-integrate-postgres.md#default-environment-variable-names-or-application-properties-and-sample-codes). +For the default environment variable names, see [Integrate Azure Database for PostgreSQL with Service Connector](../service-connector/how-to-integrate-postgres.md#default-environment-variable-names-or-application-properties-and-sample-code). Use the following steps to prepare your project. All the connection strings and credentials will be injected as the environment variables, which can be referenced in your application codes. -You can find the default environment variable names in this doc: [Integrate Azure Database for PostgreSQL with Service Connector](../service-connector/how-to-integrate-postgres.md#default-environment-variable-names-or-application-properties-and-sample-codes) +You can find the default environment variable names in this doc: [Integrate Azure Database for PostgreSQL with Service Connector](../service-connector/how-to-integrate-postgres.md#default-environment-variable-names-or-application-properties-and-sample-code) |
static-web-apps | Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md | You can define configuration for Azure Static Web Apps in the _staticwebapp.conf ## File location -The recommended location for the _staticwebapp.config.json_ is in the folder set as the `app_location` in the [workflow file](./build-configuration.md). However, the file may be placed in any subfolder within the folder set as the `app_location`. +The recommended location for the _staticwebapp.config.json_ is in the folder set as the `app_location` in the [workflow file](./build-configuration.md). However, the file can be placed in any subfolder within the folder set as the `app_location`. Additionally, if there is a build step, you must ensure that the build step outputs the file to the root of the `output_location`. See the [example configuration](#example-configuration-file) file for details. |
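For reference, a minimal _staticwebapp.config.json_ such as the following sketch (a single `navigationFallback` rule, shown purely for illustration) only takes effect if the build emits it at the root of the output folder:

```json
{
  "navigationFallback": {
    "rewrite": "/index.html"
  }
}
```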
storage | Access Tiers Online Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md | az storage blob upload-batch \ ### [AzCopy](#tab/azcopy) +++ To upload a blob to a specific tier by using AzCopy, use the [azcopy copy](../common/storage-ref-azcopy-copy.md) command and set the `--block-blob-tier` parameter to `hot`, `cool`, or `archive`. > [!NOTE] azcopy copy '<local-directory-path>' 'https://<storage-account-name>.blob.core.w azcopy copy '<local-directory-path>\*' 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>' --block-blob-tier <blob-tier> --recursive=true ``` ++ ### Upload a blob to the default tier To change the access tier for all blobs in a virtual directory, refer to the vir azcopy set-properties 'https://<storage-account-name>.blob.core.windows.net/<container-name>/myvirtualdirectory' --block-blob-tier=<tier> --recursive=true ``` ++ ### Copy a blob to a different online tier To copy a blob from cool to hot with AzCopy, use [azcopy copy](..\common\storage > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example excludes the SAS token because it assumes that you've provided authorization credentials by using Microsoft Entra ID. See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service. +++ ```azcopy azcopy copy 'https://mystorageeaccount.blob.core.windows.net/mysourcecontainer/myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mydestinationcontainer/myTextFile.txt' --block-blob-tier=hot ``` The copy operation is synchronous so when the command returns, all files are cop +### Bulk tiering ++To move blobs to another tier in a container or a folder, enumerate blobs and call the Set Blob Tier operation on each one. The following example shows how to perform this operation: ++#### [Portal](#tab/azure-portal) ++N/A ++#### [PowerShell](#tab/azure-powershell) ++```azurepowershell +# Initialize these variables with your values. 
+ $rgName = "<resource-group>" + $accountName = "<storage-account>" + $containerName = "<container>" + $folderName = "<folder>/" ++ $ctx = (Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).Context ++ $blobCount = 0 + $Token = $Null + $MaxReturn = 5000 ++ do { + $Blobs = Get-AzStorageBlob -Context $ctx -Container $containerName -Prefix $folderName -MaxCount $MaxReturn -ContinuationToken $Token + if($Blobs -eq $Null) { break } ++ #Set-StrictMode will cause Get-AzureStorageBlob returns result in different data types when there is only one blob + if($Blobs.GetType().Name -eq "AzureStorageBlob") + { + $Token = $Null + } + else + { + $Token = $Blobs[$Blobs.Count - 1].ContinuationToken; + } ++ $Blobs | ForEach-Object { + if($_.BlobType -eq "BlockBlob") { + $_.BlobClient.SetAccessTier("Cold", $null) + } + } + } + While ($Token -ne $Null) + +``` ++#### [Azure CLI](#tab/azure-cli) ++```azurecli +az storage blob list --account-name $accountName --account-key $key \ + --container-name $containerName --prefix $folderName \ + --query "[?properties.blobTier == 'Cool'].name" --output tsv \ + | xargs -I {} -P 10 \ + az storage blob set-tier --account-name $accountName --account-key $key \ + --container-name $containerName --tier Cold --name "{}" +``` ++#### [AzCopy](#tab/azcopy) ++N/A ++++When moving a large number of blobs to another tier, use a batch operation for optimal performance. A batch operation sends multiple API calls to the service with a single request. The suboperations supported by the [Blob Batch](/rest/api/storageservices/blob-batch) operation include [Delete Blob](/rest/api/storageservices/delete-blob) and [Set Blob Tier](/rest/api/storageservices/set-blob-tier). ++> [!NOTE] +> The [Set Blob Tier](/rest/api/storageservices/set-blob-tier) suboperation of the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace. ++To change access tier of blobs with a batch operation, use one of the Azure Storage client libraries. The following code example shows how to perform a basic batch operation with the .NET client library: +++For an in-depth sample application that shows how to change tiers with a batch operation, see [AzBulkSetBlobTier](/samples/azure/azbulksetblobtier/azbulksetblobtier/). + ## Next steps - [Access tiers for blob data](access-tiers-overview.md) - [Archive a blob](archive-blob.md) - [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md)+ |
storage | Elastic San Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-best-practices.md | Last updated 10/19/2023 -# Elastic SAN Preview best practices +# Optimize the performance of your Elastic SAN Preview This article provides some general guidance on getting optimal performance with an environment that uses an Azure Elastic SAN. |
storage | Elastic San Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md | Title: Create an Azure Elastic SAN (preview) -description: Learn how to deploy an Azure Elastic SAN (preview) with the Azure portal, Azure PowerShell module, or Azure CLI. + Title: Create an Azure Elastic SAN Preview +description: Learn how to deploy an Azure Elastic SAN Preview with the Azure portal, Azure PowerShell module, or Azure CLI. Previously updated : 09/12/2023 Last updated : 10/19/2023 -# Deploy an Elastic SAN (preview) +# Deploy an Elastic SAN Preview This article explains how to deploy and configure an elastic storage area network (SAN). If you're interested in Azure Elastic SAN, or have any feedback you'd like to provide, fill out this optional survey [https://aka.ms/ElasticSANPreviewSignUp](https://aka.ms/ElasticSANPreviewSignUp). |
storage | Elastic San Networking Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md | Title: Azure Elastic SAN networking Preview concepts + Title: Azure Elastic SAN Preview networking concepts description: An overview of Azure Elastic SAN Preview networking options, including storage service endpoints, private endpoints, and iSCSI. Previously updated : 08/16/2023 Last updated : 10/19/2023 -# Elastic SAN Preview networking +# Learn about networking configurations for Elastic SAN Preview Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md). |
storage | Elastic San Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-performance.md | description: Learn how your workload's performance is handled by Azure Elastic S Previously updated : 07/28/2023 Last updated : 10/19/2023 -# Elastic SAN Preview and virtual machine performance +# How performance works when Virtual Machines are connected to Elastic SAN Preview volumes This article clarifies how Elastic SAN performance works, and how the combination of Elastic SAN limits and Azure Virtual Machines (VM) limits can affect the performance of your workloads. |
storage | Elastic San Scale Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md | description: Learn about the capacity, IOPS, and throughput rates for Azure Elas Previously updated : 05/02/2023 Last updated : 10/19/2023 -# Elastic SAN Preview scale targets +# Scale targets for Elastic SAN Preview There are three main components to an elastic storage area network (SAN): the SAN itself, volume groups, and volumes. |
storage | Elastic San Shared Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-shared-volumes.md | Title: Use clustered applications on Azure Elastic SAN -description: Learn more about using clustered applications on an Elastic SAN volume and sharing volumes between compute clients. + Title: Use clustered applications on Azure Elastic SAN Preview +description: Learn more about using clustered applications on an Elastic SAN Preview volume and sharing volumes between compute clients. Previously updated : 08/15/2023 Last updated : 10/19/2023 -# Use clustered applications on Azure Elastic SAN +# Use clustered applications on Azure Elastic SAN Preview Azure Elastic SAN volumes can be simultaneously attached to multiple compute clients, allowing you to deploy or migrate cluster applications to Azure. To share an Elastic SAN volume, you need to use a cluster manager, like Windows Server Failover Cluster (WSFC) or Pacemaker. The cluster manager handles cluster node communications and write locking. Elastic SAN doesn't natively offer a fully managed filesystem that can be accessed over SMB or NFS. |
storage | Files Nfs Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md | description: Learn about file shares hosted in Azure Files using the Network Fil Previously updated : 08/09/2023 Last updated : 10/16/2023 # NFS file shares in Azure Files-Azure Files offers two industry-standard file system protocols for mounting Azure file shares: the [Server Message Block (SMB)](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) protocol and the [Network File System (NFS)](https://en.wikipedia.org/wiki/Network_File_System) protocol, allowing you to pick the protocol that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file share with both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same storage account. Azure Files offers enterprise-grade file shares that can scale up to meet your storage needs and can be accessed concurrently by thousands of clients. +Azure Files offers two industry-standard file system protocols for mounting Azure file shares: the [Server Message Block (SMB)](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) protocol and the [Network File System (NFS)](https://en.wikipedia.org/wiki/Network_File_System) protocol, allowing you to pick the protocol that is the best fit for your workload. Azure file shares don't support accessing an individual Azure file share with both the SMB and NFS protocols, although you can create SMB and NFS file shares within the same FileStorage storage account. Azure Files offers enterprise-grade file shares that can scale up to meet your storage needs and can be accessed concurrently by thousands of clients. This article covers NFS Azure file shares. For information about SMB Azure file shares, see [SMB file shares in Azure Files](files-smb-protocol.md). For more details on the available networking options, see [Azure Files networkin The following table shows the current level of support for Azure Storage features in accounts that have the NFS 4.1 feature enabled. -The status of items that appear in this table may change over time as support continues to expand. +The status of items that appear in this table might change over time as support continues to expand. | Storage feature | Supported for NFS shares | |--|| The status of items that appear in this table may change over time as support co | [Azure file share soft delete](storage-files-prevent-file-share-deletion.md) | ⛔ | | [Azure File Sync](../file-sync/file-sync-introduction.md)| ⛔ | | [Azure file share backups](../../backup/azure-file-share-backup-overview.md)| ⛔ |-| [Azure file share snapshots](storage-snapshots-files.md)| ⛔ | +| [Azure file share snapshots](storage-snapshots-files.md)| ✔️ (preview) | | [GRS or GZRS redundancy types](storage-files-planning.md#redundancy)| ⛔ | | [AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json)| ⛔ | | Azure Storage Explorer| ⛔ | The status of items that appear in this table may change over time as support co [!INCLUDE [files-nfs-regional-availability](../../../includes/files-nfs-regional-availability.md)] ## Performance-NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. 
See the [provisioned model](understanding-billing.md#provisioned-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress may face additional latencies due to the high number of open and close operations. +NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned model](understanding-billing.md#provisioned-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress might face additional latencies due to the high number of open and close operations. > [!NOTE] > You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md). |
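As an illustration of the `nconnect` option mentioned in the note above, a mount command might look like the following sketch. The placeholders and the channel count (`nconnect=4`) are assumptions to adapt; `nconnect` also requires a sufficiently recent Linux kernel (5.3 or later).

```bash
# Mount an NFS Azure file share over 4 TCP channels (illustrative values).
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys,nconnect=4 \
    <storage-account>.file.core.windows.net:/<storage-account>/<share-name> /media/<share-name>
```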
storage | Storage Files How To Mount Nfs Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md | description: Learn how to mount a Network File System (NFS) Azure file share on Previously updated : 10/03/2023 Last updated : 10/18/2023 +## Applies to +| File share type | SMB | NFS | +|-|:-:|:-:| +| Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | +| Standard file shares (GPv2), GRS/GZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | +| Premium file shares (FileStorage), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) | + ## Support [!INCLUDE [files-nfs-limitations](../../../includes/files-nfs-limitations.md)] You have now mounted your NFS share. If you want the NFS file share to automatically mount every time the Linux server or VM boots, create a record in the **/etc/fstab** file for your Azure file share. Replace `YourStorageAccountName` and `FileShareName` with your information. ```bash-<YourStorageAccountName>.file.core.windows.net:/<YourStorageAccountName>/<FileShareName> /mount/<YourStorageAccountName>/<FileShareName> nfs vers=4,minorversion=1,sec=sys 0 0 +<YourStorageAccountName>.file.core.windows.net:/<YourStorageAccountName>/<FileShareName> /media/<YourStorageAccountName>/<FileShareName> nfs vers=4,minorversion=1,sec=sys 0 0 ``` For more information, enter the command `man fstab` from the Linux command line. For more information, enter the command `man fstab` from the Linux command line. If your mount failed, it's possible that your private endpoint wasn't set up correctly or isn't accessible. For details on confirming connectivity, see [Verify connectivity](storage-files-networking-endpoints.md#verify-connectivity). +## NFS file share snapshots (preview) ++Customers using NFS Azure file shares can now create, list, and delete NFS Azure file share snapshots. This capability allows users to roll back entire file systems or recover files that were accidentally deleted or corrupted. ++> [!IMPORTANT] +> You should mount your file share before creating snapshots. If you create a new NFS file share and take snapshots before mounting the share, attempting to list the snapshots for the share will return an empty list. We recommend deleting any snapshots taken before the first mount and re-creating them after you've mounted the share. ++### Limitations ++Only file management APIs (`AzRmStorageShare`) are supported for NFS Azure file shares. File data plane APIs (`AzStorageShare`) aren't supported. ++Azure Backup isn't currently supported for NFS file shares. ++AzCopy isn't currently supported for NFS file shares. To copy data from an NFS Azure file share or share snapshot, use file system copy tools such as rsync or fpsync. ++### Regional availability for NFS Azure file share snapshots +++### Create a snapshot ++You can create a snapshot of an NFS Azure file share using Azure PowerShell or Azure CLI. A share can support the creation of up to 200 share snapshots. ++# [Azure PowerShell](#tab/powershell) ++To create a snapshot of an existing file share, run the following PowerShell command. Replace `<resource-group-name>`, `<storage-account-name>`, and `<file-share-name>` with your own values. 
++```azurepowershell +New-AzRmStorageShare -ResourceGroupName "<resource-group-name>" -StorageAccountName "<storage-account-name>" -Name "<file-share-name>" -Snapshot +``` ++# [Azure CLI](#tab/cli) +To create a snapshot of an existing file share, run the following Azure CLI command. Replace `<file-share-name>` and `<storage-account-name>` with your own values. ++```azurecli +az storage share snapshot --name <file-share-name> --account-name <storage-account-name> +``` +++### List file shares and snapshots ++You can list all file shares in a storage account, including the share snapshots, using Azure PowerShell or Azure CLI. ++# [Azure PowerShell](#tab/powershell) ++To list all file shares and snapshots in a storage account, run the following PowerShell command. Replace `<resource-group-name>` and `<storage-account-name>` with your own values. ++```azurepowershell +Get-AzRmStorageShare -ResourceGroupName "<resource-group-name>" -StorageAccountName "<storage-account-name>" -IncludeSnapshot +``` ++# [Azure CLI](#tab/cli) +To list all file shares and snapshots in a storage account, run the following Azure CLI command. Replace `<storage-account-name>` with your own value. ++```azurecli +az storage share list --account-name <storage-account-name> --include-snapshots +``` +++### Delete snapshots ++Existing share snapshots are never overwritten. They must be deleted explicitly. You can delete share snapshots using Azure PowerShell or Azure CLI. ++# [Azure PowerShell](#tab/powershell) ++To delete a file share snapshot, run the following PowerShell command. Replace `<resource-group-name>`, `<storage-account-name>`, and `<file-share-name>` with your own values. The `SnapshotTime` parameter must follow the correct time format, such as `2021-05-10T08:04:08Z`. ++```azurepowershell +Remove-AzRmStorageShare -ResourceGroupName "<resource-group-name>" -StorageAccountName "<storage-account-name>" -Name "<file-share-name>" -SnapshotTime "<snapshot-time>" +``` ++To delete a file share and all its snapshots, run the following PowerShell command. Replace `<resource-group-name>`, `<storage-account-name>`, and `<file-share-name>` with your own values. ++```azurepowershell +Remove-AzRmStorageShare -ResourceGroupName "<resource-group-name>" -StorageAccountName "<storage-account-name>" -Name "<file-share-name>" -Include Snapshots +``` ++# [Azure CLI](#tab/cli) ++To delete a file share snapshot, run the following Azure CLI command. Replace `<storage-account-name>` and `<file-share-name>` with your own values. The `--snapshot` parameter must follow the correct time format, such as `2021-05-10T08:04:08Z`. ++```azurecli +az storage share delete --account-name <storage-account-name> --name <file-share-name> --snapshot <snapshot-time> +``` ++To delete a file share and all its snapshots, run the following Azure CLI command. Replace `<storage-account-name>` and `<file-share-name>` with your own values. ++```azurecli +az storage share delete --account-name <storage-account-name> --name <file-share-name> --delete-snapshots include +``` +++### Mount an NFS Azure file share snapshot ++To mount an NFS Azure file share snapshot to a Linux VM (NFS client) and restore files, follow these steps. ++1. Run the following command in a console. See [Mount options](#mount-options) for other recommended mount options. To improve copy performance, mount the snapshot with [nconnect](nfs-performance.md#nconnect) to use multiple TCP channels. + + ```bash + sudo mount -o vers=4,minorversion=1,proto=tcp,sec=sys $server:/nfs4account/share /media/nfs + ``` + +1.
Change the directory to `/media/nfs/.snapshots` so you can view the available snapshots. The `.snapshots` directory is hidden by default, but you can access and read from it like any directory. + + ```bash + cd /media/nfs/.snapshots + ``` + +1. List the contents of the `.snapshots` folder. + + ```bash + ls + ``` + +1. Each snapshot has its own directory that serves as a recovery point. Change to the snapshot directory for which you want to restore files. + + ```bash + cd <snapshot-name> + ``` + +1. List the contents of the directory to view a list of files and directories that can be recovered. + + ```bash + ls + ``` + +1. Copy all files and directories from the snapshot to a *restore* directory to complete the restore. Absolute paths are used so the command works regardless of your current directory. + + ```bash + cp -r /media/nfs/.snapshots/<snapshot-name> /media/nfs/restore + ``` + +The files and directories from the snapshot should now be available in the `/media/nfs/restore` directory. + ## Next steps - Learn more about Azure Files with [Planning for an Azure Files deployment](storage-files-planning.md). |
storage | Storage Files Identity Ad Ds Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md | description: Learn how to enable Active Directory Domain Services authentication Previously updated : 09/27/2023 Last updated : 10/19/2023 recommendations: false Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser # Import AzFilesHybrid module Import-Module -Name AzFilesHybrid -# Login with an Azure AD credential that has either storage account owner or contributor Azure role +# Login to Azure using a credential that has either storage account owner or contributor Azure role # assignment. If you are logging into an Azure environment other than Public (ex. AzureUSGovernment) # you will need to specify that. # See https://learn.microsoft.com/azure/azure-government/documentation-government-get-started-connect-with-ps |
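As a concrete illustration of the login comments in the script above, a sign-in for a non-public cloud might look like the following sketch; `Connect-AzAccount` targets the Azure public cloud when `-Environment` is omitted.

```azurepowershell
# Sign in to the Azure US Government cloud; omit -Environment for the public cloud.
Connect-AzAccount -Environment AzureUSGovernment
```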
storage | Storage Snapshots Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md | description: A share snapshot is a read-only version of an Azure file share that Previously updated : 06/07/2023 Last updated : 10/16/2023 # Overview of share snapshots for Azure Files Azure Files provides the capability to take snapshots of SMB file shares. Share snapshots capture the share state at that point in time. This article describes the capabilities that file share snapshots provide and how you can take advantage of them in your use case. +Snapshots for NFS file shares are currently in [public preview](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview) with limited regional availability. + ## Applies to | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | +| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ## When to use share snapshots After you create a file share, you can periodically create a share snapshot of the file share. ## Capabilities -A share snapshot is a point-in-time, read-only copy of your data. Share snapshot capability is provided at the file share level. Retrieval is provided at the individual file level, to allow for restoring individual files. You can restore a complete file share by using SMB, the REST API, the Azure portal, the client library, or PowerShell/CLI. +A share snapshot is a point-in-time, read-only copy of your data. Share snapshot capability is provided at the file share level. Retrieval is provided at the individual file level, to allow for restoring individual files. You can restore a complete file share by using SMB, NFS (preview), REST API, the Azure portal, the client library, or PowerShell/CLI. -You can view snapshots of a share by using the REST API or SMB. You can retrieve the list of versions of the directory or file, and you can mount a specific version directly as a drive (only available on Windows - see [Limits](#limits)). +You can view snapshots of a share by using the REST API, SMB, or NFS (preview). You can retrieve the list of versions of the directory or file, and you can mount a specific version directly as a drive (only available on Windows - see [Limits](#limits)). After a share snapshot is created, it can be read, copied, or deleted, but not modified. You can't copy a whole share snapshot to another storage account. You have to do that file by file, by using AzCopy or other copying mechanisms. Snapshots don't count towards the maximum share size limit, which is 100 TiB. ## Limits -The maximum number of share snapshots that Azure Files allows today is 200 per share. After 200 share snapshots, you have to delete older share snapshots in order to create new ones. You can retain snapshots for up to 10 years. +The maximum number of share snapshots that Azure Files allows today is 200 per share. After 200 share snapshots, you must delete older share snapshots in order to create new ones. You can retain snapshots for up to 10 years. -There's no limit to the simultaneous calls for creating share snapshots.
There's no limit to amount of space that share snapshots of a particular file share can consume. +There's no limit to the simultaneous calls for creating share snapshots. There's no limit to the amount of space that share snapshots of a particular file share can consume. -Taking snapshots of NFS Azure file shares isn't currently supported. +Taking snapshots of NFS Azure file shares is currently in [public preview](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview) with limited regional availability. The preview only supports management APIs (`AzRmStorageShare`), not data plane APIs (`AzStorageShare`), allowing users to create, list, and delete snapshots of NFS Azure file shares. ## Copying data back to a share from share snapshot When a destination file is overwritten with a copy, any share snapshots associat ## General best practices -Automate backups for data recovery whenever possible. Automated actions are more reliable than manual processes, helping to improve data protection and recoverability. You can use Azure file share backup, the REST API, the Client SDK, or scripting for automation. +Automate backups for data recovery whenever possible. Automated actions are more reliable than manual processes, helping to improve data protection and recoverability. You can use Azure file share backup (SMB file shares only), the REST API, the Client SDK, or scripting for automation. Before you deploy the share snapshot scheduler, carefully consider your share snapshot frequency and retention settings to avoid incurring unnecessary charges. |
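As an illustration of scripted snapshot retention, the following sketch lists a share's snapshots and removes those older than 30 days. It assumes the objects returned by `Get-AzRmStorageShare -IncludeSnapshot` expose a `SnapshotTime` property and uses placeholder resource names; validate against your environment before relying on it.

```azurepowershell
$rgName      = "<resource-group>"
$accountName = "<storage-account>"
$shareName   = "<file-share>"

# Base shares have no SnapshotTime, so the filter keeps only snapshot entries.
Get-AzRmStorageShare -ResourceGroupName $rgName -StorageAccountName $accountName -IncludeSnapshot |
    Where-Object { $_.Name -eq $shareName -and $_.SnapshotTime -and $_.SnapshotTime -lt (Get-Date).AddDays(-30) } |
    ForEach-Object {
        Remove-AzRmStorageShare -ResourceGroupName $rgName -StorageAccountName $accountName `
            -Name $_.Name -SnapshotTime $_.SnapshotTime
    }
```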
storage | Nasuni Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/nasuni-deployment-guide.md | + + Title: Nasuni configuration guide for Microsoft Azure ++description: Deployment guide for Nasuni and Azure Blob Storage ++ Last updated : 09/13/2023++++++# Nasuni Configuration Guide for Microsoft Azure ++Nasuni® enables organizations to store, protect, synchronize, and collaborate on unstructured file data across all locations. Built for the cloud and powered by UniFS, the world’s only global file system, the Nasuni File Data Platform couples the performance of local file servers with the infinite scale of the cloud to provide a global file-sharing platform at half the cost of traditional file infrastructures. ++How Nasuni works: +- Stores all files and metadata in private (on-premises) or public cloud object storage. +- Provides unlimited primary or archive file storage capacity. +- Intelligently caches just the active data on lightweight Nasuni Edge Appliances. ++Microsoft Azure is a cloud computing service created by Microsoft for building, testing, deploying, and managing applications and services through Microsoft-managed data centers. It provides software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS), and supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software and systems. ++Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data. ++> [!TIP] +> For Microsoft Azure configuration suggestions to prevent accidental or malicious deletion of data, see [Deletion Security](https://b.link/Nasuni_Deletion_Security). ++## Creating an Azure storage account (using Azure portal) ++> [!NOTE] +> You must have at least one subscription for this purpose. ++> [!NOTE] +> Selecting the “Secure transfer required” feature for an Azure Storage account does not affect the operation of the Nasuni Edge Appliance. ++> [!TIP] +> In the Nasuni model, customers provide their own cloud accounts for the storage of their data. Customers should leverage their cloud provider's role-based access and identity access management features as part of their overall security strategy. Such features can be used to limit or prohibit administrative access to the cloud account, based on customer policies. ++### Introduction +This document describes how to deploy a Nasuni environment in Microsoft Azure, using Azure Blob storage to store your file data. ++### Creating a storage account using the Azure portal ++If you don't already have a storage account in Microsoft Azure, create a storage account in Microsoft Azure by following these steps: +1. Sign in to the [Azure portal](https://portal.azure.com). The Microsoft Azure dashboard page appears. +2. On the top left of the page, select “Create a resource.” The “Create a resource” dialog appears. +++3. In the Search box, enter “storage account”, then select Storage account from the list of results. The Storage account pane appears. +++4. Select Create. The “Create a storage account” pane appears. +++5.
If there is more than one subscription, from the Subscription drop-down list, select the subscription to use for this storage account. +6. To select an existing Resource Group, select an existing Resource Group from the Resource Group drop-down list. + Alternatively, create a new Resource Group by clicking “Create new” and then entering a name for the new Resource Group and clicking OK. +7. Select Next: Advanced. The Advanced pane appears. +++8. If your security policy requires it, enable “Require secure transfer for REST API operations.” +9. For “Access tier,” select Cool for production data. +> [!NOTE] +> Nasuni also supports [Azure Cold Storage](/azure/storage/blobs/access-tiers-overview). To use Azure Cold Storage, configure Lifecycle Management rules that are based on access tracking. When enabled, access tracking checks when a blob was last accessed. A rule can be defined to move objects that have not been accessed for 90 days or longer. Enabling this feature might incur additional cost. ++10. Set “Azure Files” to disabled. +11. Configure other features according to your needs. +12. Select “Next: Networking >.” The Networking pane appears. +++13. Select the “Connectivity method” to match your security requirements. +> [!NOTE] +> Consider where Edge Appliances will be deployed and how they will access the storage account, for example, via the Internet, Azure ExpressRoute, or a VPN connection to Azure. Most customers select the default “Public endpoint (all networks)”. ++14. Configure other features according to your needs. +15. Select “Next: Data protection.” The Data protection pane appears. ++ **Nasuni recommends enabling Soft Delete for all storage accounts being used for Nasuni volumes. If data is deleted, instead of the data being permanently lost, the data changes to a “soft deleted” state and remains available for a configurable number of days.** +16. Select “Enable soft delete for blobs.” +17. Specify “Days to retain deleted blobs” by entering or selecting the number of days to retain data. (You can retain soft-deleted data for between 1 and 365 days.) + - Nasuni recommends specifying at least 30 days. ++ **Nasuni recommends enabling Soft Delete for containers. Containers marked for deletion remain available for a configurable number of days.** +18. After configuring your storage account, select “Enable soft delete for containers.” +19. Specify “Days to retain deleted containers” by entering or selecting the number of days to retain data. (You can retain soft-deleted data for between 1 and 365 days.) + - Nasuni recommends specifying at least 30 days. + - For details, see [soft delete for containers](/azure/storage/blobs/soft-delete-container-overview). +++20. Configure other features according to your needs. +21. Select “Next: Tags >.” The Tags pane appears. +22. Define any Tags based on your internal policies. +23. Select “Next: Review + create >.” +24. Select Create. + - The storage account starts being created. When the storage account is created, select Storage Accounts in the left-hand list. The new storage account appears in the list of storage accounts. +25. Select the name of your storage account. The pane for your storage account settings appears. ++> [!TIP] +> It is possible to recover a deleted storage account. For details, see [Recovering a deleted storage account](/azure/storage/common/storage-account-recover).
++## Configuring storage account firewalls ++Storage account firewalls must be configured to allow connections from the internal customer network or any other networks that Nasuni Edge Appliances either exist on or are using. ++To configure storage account firewalls, follow these steps: +1. Select the storage account. +2. In the left-hand column, select Networking, then select the “Firewalls and virtual networks” tab. The “Firewalls and virtual networks” pane appears. +++3. Select “Selected networks.” + Alternatively, if allowing access from all networks, select “All networks” and skip to step 7. +++4. To add an existing virtual network, in the Virtual Networks area, select “Add existing virtual network.” Select Virtual networks and Subnets options, and then select Add. +5. To create a new virtual network and grant it access, in the Virtual Networks area, select “Add new virtual network.” Provide the information necessary to create the new virtual network, and then select Create. +6. To grant access to an IP range, in the Firewall area, enter the IP address or address range (in CIDR format) in Address Range. Include the internal customer network and other networks that Edge Appliances exist on or are using. Take network routing into account. For example, if connecting to the storage account over a private connection, use internal subnets; if connecting to the storage account over the public Internet, use public IPs. +7. Select Save to apply your changes. ++## Finding Microsoft Azure User Credentials ++> [!NOTE] +> You must have at least one subscription for this purpose. ++> [!NOTE] +> Confirm with Nasuni Sales or Support that your Nasuni account is configured for supplying your own Microsoft Azure credentials. ++To locate Microsoft Azure credentials, follow these steps: +1. Sign in to the [Azure portal](https://portal.azure.com). The Microsoft Azure dashboard page appears. +2. Select Storage Accounts in the left-hand list. +3. Select your storage account. The pane for your storage account settings appears. +4. Select Access keys. Your account access key information appears. +5. Record the Microsoft Azure Storage Account Name for later use. +6. Select “Show keys” to view key values. Key values appear. +7. Under key1, find the Key value. Select the copy button to copy the Microsoft Azure Primary Access Key. Save this value for creating Microsoft Azure cloud credentials. +8. Under key1, find the “Connection string” value. Select the copy button to copy the Connection string. Save this value for possible later use. ++## Configuration +Nasuni provides a Nasuni Connector for Microsoft Azure. ++> [!TIP] +> If you have a requirement to change Cloud Credentials on a regular basis, use the following procedure, preferably outside office hours: +> - Obtain new credentials. Credentials typically consist of a pair of values, such as Access Key ID and Secret Access Key, Account Name and Primary Access Key, or User and Secret. +> - On the Cloud Credentials page, edit the cloud credentials to use the new credentials. +> - The change in cloud credentials is registered on the next snapshot that contains unprotected data. +> - Manually performing a snapshot also causes the change in cloud credentials to be registered, even if there is no unprotected data for the volume.
+> - After each Edge Appliance has performed such a snapshot, the original credentials can be retired with the cloud provider. ++> [!WARNING] +> Do not retire the original credentials with the cloud provider until you are certain that they are no longer necessary. Otherwise, data might become unavailable. ++To configure Nasuni for Microsoft Azure, follow these steps: +1. Ensure that port 443 (HTTPS) is open between the Nasuni Edge Appliance and the object storage solution. +2. Select Configuration. On NMC, select Account. +3. Select Cloud Credentials. +4. Select Add New Credentials, then select "Windows Azure Platform" from the drop-down menu. +5. Enter credentials information: + - For Microsoft Azure, enter the following information: + - Name: A name for this set of credentials, which is used for display purposes, such as ObjectStorageCluster1. + - Account Name: The Microsoft Azure Storage Account Name for this set of credentials, obtained in step 5 of the “Finding Microsoft Azure User Credentials” section above. + - Primary Access Key: The Microsoft Azure Primary Access Key for this set of credentials, obtained in step 7 of the “Finding Microsoft Azure User Credentials” section above. + - Hostname: The hostname for the location of the object storage solution. Use the default setting: blob.core.windows.net. + - Verify SSL Certificates: Use the default On setting. + - Filers (on NMC only): The target Nasuni Edge Appliance(s). + - For Microsoft Azure Gov Cloud, enter the following information: + - Name: A name for this set of credentials, which is used for display purposes, such as ObjectStorageCluster1. + - Account Name: The Microsoft Azure Storage Account Name for this set of credentials, obtained in step 5 of the “Finding Microsoft Azure User Credentials” section above. + - Primary Access Key: The Microsoft Azure Primary Access Key for this set of credentials, obtained in step 7 of the “Finding Microsoft Azure User Credentials” section above. + - Hostname: The hostname for the location of the object storage solution. Use: blob.core.usgovcloudapi.net. + - Verify SSL Certificates: Use the default On setting. + - Filers (on NMC only): The target Nasuni Edge Appliance(s). +> [!WARNING] +> Be careful changing existing credentials. The connection between the Nasuni Edge Appliance and the container could become invalid, causing loss of data access. Credential editing is intended for updating access after changes to the account name or the access key on the Microsoft Azure system. +6. Select Save Credentials. ++You're now ready to add volumes to the Nasuni Edge Appliance. ++## Adding volumes +To add volumes to your Nasuni system, follow these steps: +1. Select Volumes, then select Add New Volume. The Add New Volume page appears. +2. Enter the following information for the new volume: + - Name: Enter a human-readable name for the volume. + - Cloud Provider: Select Windows Azure Platform. + - Credentials: Select the Cloud Credentials that you defined in step 5 for this volume, such as ObjectStorageCluster1. + - For the remaining options, select what is appropriate for this volume. +3. Select Save. ++You have successfully created a new volume on your Nasuni Filer. ++## Recovering a deleted storage account +It's possible to recover a deleted storage account if the following conditions are true: +- It has been less than 14 days since the storage account was deleted. +- You created the storage account with the Azure Resource Manager deployment model. Storage accounts created using the Azure portal satisfy this requirement. The older “classic” storage accounts don't. +- A new storage account with the same name hasn't been created since the original storage account was deleted.
++For details, review [Recover a Deleted Storage Account](/azure/storage/common/storage-account-recover). ++## Azure Private Endpoints +Nasuni supports Azure Private Endpoints, relying on the DNS layer to resolve the private endpoint IP address. ++It's important to correctly configure your DNS settings so that the fully qualified domain name (FQDN) in the connection string resolves to the private endpoint IP address. ++Existing Microsoft Azure services might already have a DNS configuration for a public endpoint. This configuration must be overridden to connect using your private endpoint. ++The network interface associated with the private endpoint contains the information to configure your DNS. The network interface information includes FQDN and private IP addresses for your private link resource. ++You can use the following options to configure your DNS settings for private endpoints: +- Use a private DNS zone. You can use private DNS zones to override the DNS resolution for a private endpoint. A private DNS zone can be linked to your virtual network to resolve specific domains. +- Use your DNS forwarder (optional). You can use your DNS forwarder to override the DNS resolution for a private link resource. Create a DNS forwarding rule to use a private DNS zone on your DNS server hosted in a virtual network. ++> [!NOTE] +> Using the Host file on the Nasuni Edge Appliance is not supported. ++> [!NOTE] +> Nasuni’s default Host URL endpoint for Nasuni’s Azure Cloud Credentials should not be changed. ++### Azure services DNS zone configuration +Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME record redirects the resolution to the private domain name. You can override the resolution with the private IP address of your private endpoints. Your applications don't need to change the connection URL. When resolving names via a public DNS service, the DNS server resolves to your private endpoints. The process doesn't affect Nasuni Edge Appliances. ++For Azure services, use the recommended zone names as described in [Azure services DNS zone configuration](/azure/private-link/private-endpoint-dns#azure-services-dns-zone-configuration). ++### DNS configuration scenarios +The FQDN of the services resolves automatically to a public IP address. To resolve to the private IP address of the private endpoint, change your DNS configuration. ++DNS is a critical component to make the application work correctly by successfully resolving the private endpoint IP address. ++Based on your configuration requirements, the following scenarios are available with DNS resolution integrated: +- [Virtual network workloads without custom DNS server](/azure/private-link/private-endpoint-dns#virtual-network-workloads-without-custom-dns-server) +- [On-premises workloads using a DNS forwarder](/azure/private-link/private-endpoint-dns#on-premises-workloads-using-a-dns-forwarder) +- [Virtual network and on-premises workloads using a DNS forwarder](/azure/private-link/private-endpoint-dns#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) |
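One way to sanity-check the DNS configuration described above is to resolve the storage account FQDN from a host on the virtual network. This is an illustrative check; the account name is a placeholder.

```bash
# With a correctly linked private DNS zone, this should return the private
# endpoint IP address (for example, 10.x.x.x) rather than a public IP.
nslookup <storage-account>.blob.core.windows.net
```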
stream-analytics | Cosmos Db Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md | New-AzCosmosDBSqlRoleAssignment -AccountName $accountName -ResourceGroupName $re > [!NOTE] > Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 10 minutes. Even though the test connection might pass initially, jobs can fail if they're started before the permissions fully propagate. +> [!IMPORTANT] +> If the Azure Cosmos DB account isn't configured to accept connections from **All networks**, you must select **Accept connections from within public Azure datacenters**. + ### Add Azure Cosmos DB as an output |
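For reference, a complete invocation of the `New-AzCosmosDBSqlRoleAssignment` command shown truncated at the top of this entry might look like the following sketch. The role definition ID is the Cosmos DB Built-in Data Contributor role, and the principal ID placeholder stands for the Stream Analytics job's managed identity; adapt both to your environment.

```azurepowershell
# Grant the job's managed identity the Cosmos DB Built-in Data Contributor role
# (00000000-0000-0000-0000-000000000002) across the whole account ("/" scope).
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName `
    -ResourceGroupName $resourceGroupName `
    -RoleDefinitionId "00000000-0000-0000-0000-000000000002" `
    -Scope "/" `
    -PrincipalId "<managed-identity-object-id>"
```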
update-manager | Assessment Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/assessment-options.md | + + Title: Assessment options in Update Manager +description: This article describes the assessment options available in Update Manager. + Last updated : 09/18/2023++++++# Assessment options in Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article provides an overview of the assessment options available in Update Manager. ++Update Manager provides you with the flexibility to assess the status of available updates and manage the process of installing required updates for your machines. ++## Periodic assessment + + Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager. We recommend that you enable this property on your machines because it allows Update Manager to fetch the latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using the update settings flow as detailed in [Configure settings on a single VM](manage-update-settings.md#configure-settings-on-a-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md). +++## Check for updates now/On-demand assessment ++Update Manager allows you to check for the latest updates on your machines at any time, on demand. You can view the latest update status and act accordingly. Go to the **Updates** blade on any VM and select **Check for updates**, or select multiple machines from Update Manager and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md). ++## Update assessment scan + You can initiate a software updates compliance scan on a machine to get a current list of operating system updates available. ++ - **On Windows** - the software update scan is performed by the Windows Update Agent. + - **On Linux** - the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which are retrieved from a local or remote repository. ++ On the **Updates** page, after you initiate an assessment, a notification is generated to inform you that the activity has started, and another is displayed when it's finished. ++ :::image type="content" source="media/assessment-options/updates-preview-page.png" alt-text="Screenshot of the Updates page."::: +++The **Recommended updates** section is updated to reflect the OS updates applicable. You can also select **Refresh** to update the information on the page and review the assessment details of the selected machine. ++In the **History** section, you can view: +- **Total deployments**: the total number of deployments. +- **Failed deployments**: the number out of the total deployments that failed. +- **Successful deployments**: the number out of the total deployments that were successful. ++A list of the deployments created is shown in the update deployment grid, including relevant information about each deployment. Every update deployment has a unique GUID, represented as **Activity ID**, which is listed along with **Status**, **Updates Installed**, and **Time details**. You can filter the results listed in the grid in the following ways: ++- Select one of the tile visualizations +- Select a specific time period.
Options are: **Last 30 Days**, **Last 15 Days**, **Last 7 Days**, and **Last 24 hrs**. By default, deployments from the last 30 days are shown. +- Select a specific deployment status. Options are: **Succeeded**, **Failed**, **CompletedWithWarnings**, **InProgress**, and **NotStarted**. By default, all status types are selected. +Selecting any one of the update deployments from the list opens the **Assessment run** page, which shows a detailed breakdown of the updates and the installation results for the Azure VM or Arc-enabled server. ++In the **Scheduling** section, you can either **create a maintenance configuration** or **attach an existing maintenance configuration**. For more information, see [how to create a maintenance configuration](scheduled-patching.md#create-a-new-maintenance-configuration) and [how to attach an existing maintenance configuration](scheduled-patching.md#attach-a-maintenance-configuration). +++## Next steps ++* To view update assessment and deployment logs generated by Update Manager, see [query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Azure Update Manager](troubleshoot.md). |
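Beyond the portal flow described above, periodic assessment can also be enabled programmatically by setting the assessment mode on the VM model. The following Azure CLI sketch uses the generic `--set` mechanism; the property path shown is for a Windows VM (use `osProfile.linuxConfiguration.patchSettings.assessmentMode` for Linux), and the resource names are placeholders.

```azurecli
az vm update --resource-group <resource-group> --name <vm-name> \
    --set osProfile.windowsConfiguration.patchSettings.assessmentMode=AutomaticByPlatform
```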
update-manager | Configure Wu Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/configure-wu-agent.md | + + Title: Configure Windows Update settings in Azure Update Manager +description: This article describes how to configure Windows Update settings to work with Azure Update Manager. + Last updated : 09/18/2023++++++# Configure Windows update settings for Azure Update Manager ++Azure Update Manager relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by: ++- Local Group Policy Editor +- Group Policy +- PowerShell +- Directly editing the Registry ++Update Manager respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, Update Manager also manages those updates. If you enable downloading of updates before an update deployment occurs, update deployments can be faster, more efficient, and less likely to exceed the maintenance window. ++For additional recommendations on setting up WSUS in your Azure subscription and keeping your Windows virtual machines secure and up to date, review [Plan your deployment for updating Windows virtual machines in Azure using WSUS](/azure/architecture/example-scenario/wsus). ++## Pre-download updates ++To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to option 3 (auto download and notify for install). This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, Update Manager remains in control of schedules, but allows downloading of updates outside the maintenance window. This behavior prevents `Maintenance window exceeded` errors in Update Manager. ++You can enable this setting in PowerShell: ++```powershell +$WUSettings = (New-Object -com "Microsoft.Update.AutoUpdate").Settings +# 3 = auto download and notify for install +$WUSettings.NotificationLevel = 3 +$WUSettings.Save() +``` ++## Configure reboot settings ++The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Update Deployment** settings. Configure these registry keys to best suit your environment. ++## Enable updates for other Microsoft products ++By default, the Windows Update client is configured to provide updates only for the Windows operating system. To receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software, enable the **Give me updates for other Microsoft products when I update Windows** setting in Windows Update.
++Use one of the following options to perform the settings change at scale: ++- For servers configured to patch on a schedule from Update Manager (with the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows servers running an operating system earlier than Windows Server 2016, run the following PowerShell script on the server you want to change. ++ ```powershell + # Register the Microsoft Update service with the Windows Update agent. + $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") + $ServiceManager.Services + # 7971f918-a847-4430-9279-4a52d1efe18d is the Microsoft Update service ID. + $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" + # Flags value 7 allows pending and online registration, and registers the service with Automatic Updates. + $ServiceManager.AddService2($ServiceId,7,"") + ``` ++- For servers running Windows Server 2016 or later that aren't using Update Manager scheduled patching (with the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated), you can use Group Policy to control this behavior by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store). +++## Configure a Windows server for Microsoft updates ++The Windows Update client on Windows servers can get patches from either of the following Microsoft-hosted patch repositories: +- Windows Update - hosts operating system patches. +- Microsoft Update - hosts operating system and other Microsoft patches. For example, MS Office, SQL Server, and so on. ++> [!NOTE] +> For the application of patches, you can choose the update client at the time of installation, or later using Group Policy or by directly editing the registry. +> To get the non-operating system Microsoft patches or to install only the OS patches, we recommend that you change the patch repository, because this is an operating system setting and not an option that you can configure within Update Manager. ++### Edit the registry ++If scheduled patching is configured on your machine using Update Manager, automatic updates on the client are disabled. To edit the registry and configure the setting, see [First party updates on Windows](support-matrix.md#first-party-updates-on-windows). ++### Patching using Group Policy on Azure Update Management ++If your machine is patched using Azure Automation Update Management, and has automatic updates enabled on the client, you can use Group Policy to have complete control. To patch using Group Policy, follow these steps: ++1. Go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Windows Update** > **Manage end user experience**. +1. Select **Configure Automatic Updates**. +1. Select or deselect the **Install updates for other Microsoft products** option. ++ :::image type="content" source="./media/configure-wu-agent/configure-updates-group-policy-inline.png" alt-text="Screenshot of selection or deselection of install updates for other Microsoft products." lightbox="./media/configure-wu-agent/configure-updates-group-policy-expanded.png"::: +++## Make WSUS configuration settings ++Update Manager supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails if the updates aren't approved in WSUS.
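To confirm which update source a machine is actually using, you can inspect the Windows Update policy values in the registry. This is an illustrative check; the keys exist only when WSUS or related policies have been applied.

```powershell
# WUServer and WUStatusServer are present only when a WSUS server is configured.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -ErrorAction SilentlyContinue |
    Select-Object WUServer, WUStatusServer

# UseWUServer controls whether Automatic Updates uses the configured WSUS server.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU' -ErrorAction SilentlyContinue |
    Select-Object UseWUServer
```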
++To restrict machines to the internal update service, see [do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#do-not-connect-to-any-windows-update-internet-locations). ++## Next steps ++Configure an update deployment by following instructions in [Deploy updates](deploy-updates.md). |
update-manager | Deploy Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/deploy-updates.md | + + Title: Deploy updates and track results in Azure Update Manager +description: This article details how to use Azure Update Manager in the Azure portal to deploy updates and view results for supported machines. + Last updated : 09/18/2023+++++++# Deploy updates now and track results with Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article describes how to perform an on-demand update on a single virtual machine (VM) or multiple VMs by using Azure Update Manager. ++See the following sections for more information: ++- [Install updates on a single VM](#install-updates-on-a-single-vm) +- [Install updates at scale](#install-updates-at-scale) ++## Supported regions ++Update Manager is available in all [Azure public regions](support-matrix.md#supported-regions). ++## Configure reboot settings ++The registry keys listed in [Configure automatic updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot. A reboot can happen even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment. ++## Install updates on a single VM ++You can install updates from **Overview** or **Machines** on the **Update Manager** page or from the selected VM. ++# [From Overview pane](#tab/install-single-overview) ++To install one-time updates on a single VM: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On **Update Manager** > **Overview**, select your subscription and select **One-time update** to install updates. ++ :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Screenshot that shows an example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png"::: ++1. Select **Install now** to proceed with the one-time updates: ++ - **Install one-time updates**: Select **Add machine** to add the machine for a one-time deployment. + - **Select resources**: Choose the machine and select **Add**. ++1. On the **Updates** pane, specify the updates to include in the deployment. For each product, select or clear all supported update classifications and specify the ones to include in your update deployment. ++ If your deployment is meant to apply only to a select set of updates, it's necessary to clear all the preselected update classifications when you configure the **Inclusion/exclusion** updates described in the following steps. This action ensures only the updates you've specified to include in this deployment are installed on the target machine. ++ > [!NOTE] + > - **Selected Updates** shows a preview of OS updates that you can install based on the last OS update assessment information available. If the OS update assessment information in Update Manager is obsolete, the actual updates installed might vary, especially if you've chosen to install a specific update category, because the applicable OS updates can change as new packages or KB IDs become available for the category. + > - Update Manager doesn't support driver updates.
++ - Select **Include update classification**. Select the appropriate classifications that must be installed on your machines. + + :::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot that shows update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png"::: + + - Select **Include KB ID/package** to include in the updates. Enter a comma separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, use `3103696` or `3134815`. For Windows, you can refer to the [MSRC webpage](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base release. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, use `kernel*`, `glibc`, or `libc=1.0.1`. Based on the options specified, Update Manager shows a preview of OS updates under the **Selected Updates** section. + - To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend selecting this option because updates that aren't displayed here might be installed, as newer updates might be available. + - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date**. Select the date and select **Add** > **Next**. + + :::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot that shows the patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png"::: ++1. On the **Properties** pane, specify the reboot and maintenance window: + - Use the **Reboot** option to specify the way to handle reboots during deployment. The following options are available: + * Reboot if required + * Never reboot + * Always reboot + - Use **Maximum duration (in minutes)** to specify the amount of time allowed for updates to install. The maximum limit supported is 235 minutes. Consider the following details when you specify the window: + * It controls the number of updates that must be installed. + * New updates continue to install if the maintenance window limit is approaching. + * In-progress updates aren't terminated if the maintenance window limit is exceeded. + * Any remaining updates that aren't yet installed aren't attempted. We recommend that you reevaluate the maintenance window if this issue is consistently encountered. + * If the limit is exceeded on Windows, it's often because of a service pack update that's taking a long time to install. ++1. After you're finished configuring the deployment, verify the summary in **Review + install** and select **Install**. ++# [From Machines pane](#tab/install-single-machine) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On **Update Manager** > **Machine**, select your subscription, select your machine, and select **One-time update** to install updates. ++1. Select **Install now** to proceed with installing updates. ++1. On the **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next**, and follow the procedure from step 4 listed in **From Overview pane** of [Install updates on a single VM](#install-updates-on-a-single-vm). ++ A notification informs you when the activity starts, and another tells you when it's finished. After it's successfully finished, you can view the installation operation results in **History**. 
You can view the status of the operation at any time from the [Azure activity log](../azure-monitor/essentials/activity-log.md). ++# [From a selected VM](#tab/singlevm-deploy-home) ++1. Select your virtual machine, and the **virtual machines | Updates** page opens. +1. Under **Operations**, select **Updates**. +1. In **Updates**, select **Go to Updates using Azure Update Manager**. +1. In **Updates**, select **One-time update** to install the updates. +1. On the **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next**, and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on a single VM](#install-updates-on-a-single-vm). + +++## Install updates at scale ++Follow these steps to create a new update deployment for multiple machines. ++> [!NOTE] +> You can check the updates from **Overview** or **Machines**. ++You can schedule updates. ++# [From Overview pane](#tab/install-scale-overview) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On **Update Manager** > **Overview**, select your subscription and select **One-time update** > **Install now** to install updates. ++ :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Screenshot that shows installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png"::: ++1. On the **Install one-time updates** pane, you can select the resources and machines to install the updates. ++1. On the **Machines** page, you can view all the machines available in your subscription. You can also use **Add machine** to add the machines for deploying one-time updates. You can add up to 20 machines. Choose **Select all** and select **Add**. ++**Machines** displays a list of machines for which you can deploy a one-time update. Select **Next** and follow the procedure from step 6 listed in **From Overview pane** of [Install updates on a single VM](#install-updates-on-a-single-vm). ++# [From Machines pane](#tab/install-scale-machines) ++1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**. ++1. Go to **Machines**, select your subscription, and choose your machines. You can choose **Select all** to select all the machines. ++1. Select **One-time update** > **Install now** to deploy one-time updates. ++1. On the **Install one-time updates** pane, you can select the resources and machines to install the updates. ++1. On the **Machines** page, you can view all the machines available in your subscription. You can also use **Add machine** to add the machines for deploying one-time updates. You can add up to 20 machines. Choose **Select all** and select **Add**. ++**Machines** displays a list of machines for which you want to deploy a one-time update. Select **Next** and follow the procedure from step 6 listed in **From Overview pane** of [Install updates on a single VM](#install-updates-on-a-single-vm). ++- ++A notification informs you when the activity starts, and another tells you when it's finished. After it's successfully finished, you can view the installation operation results in **History**. ++## View update history for a single VM ++You can browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions.
For more information, see [Update deployment history](manage-multiple-machines.md#update-deployment-history). ++After your scheduled deployment starts, you can see its status on the **History** tab. It displays the total number of deployments, including the successful and failed deployments. ++> [!NOTE] +> **Windows update history** currently doesn't show the updates that are installed from Azure Update Manager. To view a summary of the updates applied on your machines, go to **Update Manager** > **Manage** > **History**. + +A list of the deployments you created is shown in the update deployment grid, including relevant information about each deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed**, and **Time** details. You can filter the results listed in the grid. ++Select any one of the update deployments from the list to open the **Update deployment run** page. Here, you can see a detailed breakdown of the updates and the installation results for the Azure VM or Azure Arc-enabled server. +++ The available values are: ++- **Not attempted**: The update wasn't installed because insufficient time was available, based on the defined maintenance window duration. +- **Not selected**: The update wasn't selected for deployment. +- **Succeeded**: The update succeeded. +- **Failed**: The update failed. ++## Next steps ++* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot issues with Azure Update Manager](troubleshoot.md). |
update-manager | Dynamic Scope Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/dynamic-scope-overview.md | + + Title: An overview of Dynamic Scoping +description: This article provides information about Dynamic Scoping, its purpose, and advantages. + Last updated : 09/18/2023++++++# About Dynamic Scoping ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers. ++Dynamic Scoping is an advanced capability of schedule patching that allows users to: ++- Group machines based on criteria such as subscription, resource group, location, resource type, OS type, and tags. This becomes the definition of the scope. +- Associate the scope to a schedule/maintenance configuration to apply updates at scale as per a pre-defined scope. ++The criteria are evaluated at the scheduled run time to produce the final list of machines that the schedule patches. The machines evaluated during the create or edit phase may therefore differ from the group at schedule run time. ++## Key benefits ++**At-scale and simplified patching** - You don't have to manually change associations between machines and schedules. For example, if you want to remove a machine from a schedule and your scope was defined based on tag criteria, removing the tag on the machine automatically drops the association. These associations can be dropped and added for multiple machines at scale. + > [!NOTE] + > Subscription is mandatory for the creation of a dynamic scope and you can't edit it after the dynamic scope is created. ++**Reusability of the same schedule** - You can associate a schedule to multiple machines dynamically, statically, or both. + > [!NOTE] + > You can associate one dynamic scope to one schedule. ++++## Permissions ++For Dynamic Scoping and configuration assignment, ensure that you have the following permissions: ++- Write permissions to create or modify a schedule. +- Read permissions to assign or read a schedule. ++## Service limits ++The following limits apply to **each dynamic scope**. ++| Resource | Limit | +|-|-| +| Resource associations | 1000 | +| Number of tag filters | 50 | +| Number of Resource Group filters | 50 | ++> [!NOTE] +> The above limits apply to dynamic scopes in the Guest scope only. ++## Next steps ++ To learn about deploying updates to your machines and maintaining security compliance, see [Deploy updates](deploy-updates.md). |
update-manager | Guidance Migration Automation Update Management Azure Update Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-automation-update-management-azure-update-manager.md | + + Title: Guidance to move virtual machines from Automation Update Management to Azure Update Manager +description: Guidance overview on migration from Automation Update Management to Azure Update Manager +++ Last updated : 09/14/2023++++# Guidance to move virtual machines from Automation Update Management to Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article provides guidance to move virtual machines from Automation Update Management to Azure Update Manager. ++Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. It's an evolution of the [Azure Automation Update Management solution](../automation/update-management/overview.md) with new features and functionality for assessment and deployment of software updates on a single machine or on multiple machines at scale. ++Azure Update Manager doesn't require either the Azure Monitor agent (AMA) or the Microsoft Monitoring Agent (MMA) to manage software update workflows, because it relies on the Microsoft Azure VM agent for Azure VMs and the Azure connected machine agent for Arc-enabled servers. When you perform an update operation for the first time on a machine, an extension is pushed to the machine and it interacts with the agents to assess missing updates and install updates. +++> [!NOTE] +> - If you're using the Azure Automation Update Management solution, we recommend that you don't remove the MMA agent from machines without completing the migration to Azure Update Manager for the machine's patch management needs. If you remove the MMA agent from a machine without moving to Azure Update Manager, it would break the patching workflows for that machine. +> +> - All capabilities of Azure Automation Update Management will be available on Azure Update Manager before the deprecation date. ++## Guidance to move virtual machines from Automation Update Management to Azure Update Manager ++Guidance to move the various capabilities is provided in the table below: ++**S.No** | **Capability** | **Automation Update Management** | **Azure Update Manager** | **Steps using Azure portal** | **Steps using API/script** | + | | | | | | +1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | +2 | Enable periodic assessment to check for latest updates automatically every few hours.
| Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. [At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. [For Azure VM](../virtual-machines/automatic-vm-guest-patching.md#azure-powershell-when-updating-a-windows-vm) </br> 2. [For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine?view=azps-10.2.0) | +3 | Static update deployment schedules (static list of machines for update deployment). | Automation Update Management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to the Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy) | [Create a static scope](manage-vms-programmatically.md) | +4 | Dynamic update deployment schedules (defining a scope of machines by using resource group, tags, and so on, which is evaluated dynamically at runtime). | Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope](tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) | +5 | Deboard from Azure Automation Update Management. | After you complete steps 1, 2, and 3, you need to clean up Automation Update Management objects. | | 1. [Remove machines from solution](../automation/update-management/remove-feature.md#remove-management-of-vms) </br> 2. [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> 3. [Unlink workspace from Automation account](../automation/update-management/remove-feature.md#unlink-workspace-from-automation-account) </br> 4. [Clean up Automation account](../automation/update-management/remove-feature.md#cleanup-automation-account) | NA | +6 | Reporting | Custom update reports using Log Analytics queries. | Update data is stored in Azure Resource Graph (ARG). Customers can query ARG data to build custom dashboards, workbooks, and so on. | The old Automation Update Management data stored in Log Analytics can be accessed, but there's no provision to move data to ARG. You can write ARG queries to access data that's stored in ARG after virtual machines are patched via Azure Update Manager. With ARG queries, you can build dashboards and workbooks by using the following instructions: </br> 1. [Log structure of Azure Resource Graph updates data](query-logs.md) </br> 2. [Sample ARG queries](sample-query-logs.md) </br> 3. [Create workbooks](manage-workbooks.md) | NA | +7 | Customize workflows using pre and post scripts. | Available as Automation runbooks. | We recommend that you use Automation runbooks once they're available. | | | +8 | Create alerts based on updates data for your environment | Alerts can be set up on updates data stored in Log Analytics.
|We recommend that you use alerts once they are available. | | | +++ +## Next steps +- [An overview on Azure Update Manager](overview.md) +- [Check update compliance](view-updates.md) +- [Deploy updates now (on-demand) for single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) |
update-manager | Guidance Migration Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md | + + Title: Patching guidance overview for Microsoft Configuration Manager to Azure +description: Patching guidance overview for Microsoft Configuration Manager to Azure. View on how to get started with Azure Update Manager, mapping capabilities of MCM software and FAQs. +++ Last updated : 09/18/2023++++# Guidance on migrating Azure VMs from Microsoft Configuration Manager to Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article provides a guide to start using Azure Update Manager (for update management) for Azure virtual machines that are currently using Microsoft Configuration Manager (MCM). ++Microsoft Configuration Manager (MCM), previously known as System Center Configuration Manager (SCCM), helps you to manage PCs and servers, keep software up to date, set configuration and security policies, and monitor system status. ++MCM supports several [cloud services](/mem/configmgr/core/understand/use-cloud-services) that can supplement on-premises infrastructure and can help solve business problems such as: +- How to manage clients that roam onto the internet. +- How to provide content resources to isolated clients or resources on the intranet, outside your firewall. +- How to scale out infrastructure when the physical hardware isn't available or isn't logically placed to support your needs. ++Customers [extend and migrate an on-premises site to Azure](/mem/configmgr/core/support/azure-migration-tool) and create Azure virtual machines (VMs) for Configuration Manager and install the various site roles with default settings. The validation of new roles and removal of the on-premises site system role enables MCM to provide all the on-premises capabilities and experiences in Azure. For more information, see [Configuration Manager on Azure FAQ](/mem/configmgr/core/understand/configuration-manager-on-azure). +++## Migrate to Azure Update Manager ++MCM offers [multiple features and capabilities](/mem/configmgr/core/plan-design/changes/features-and-capabilities), and software [update management](/mem/configmgr/sum/understand/software-updates-introduction) is one of these. By using MCM in Azure, you can continue with the existing investments in MCM and processes to manage the update cycle for Windows VMs. ++**Specifically for update management or patching**, as per your requirements, you can also use the native [Azure Update Manager](overview.md) to manage and govern update compliance for Windows and Linux machines across your deployments in a consistent manner. Unlike MCM, which requires you to maintain Azure virtual machines to host the different Configuration Manager roles, Azure Update Manager is designed as a standalone Azure service that provides a SaaS experience on Azure to manage hybrid environments. You don't need a license to use Azure Update Manager. ++> [!NOTE] +> Azure Update Manager doesn't provide migration support for Azure VMs managed in MCM. For example, existing MCM configurations aren't migrated. ++## Software update management capability map ++The following table maps the **software update management capabilities** of MCM to Azure Update Manager.
++**Capability** | **Microsoft Configuration Manager** | **Azure Update Manager** | + | | | +Synchronize software updates between sites (Central Admin site, Primary, Secondary sites) | The top site (either central admin site or stand-alone primary site) connects to Microsoft Update to retrieve software updates. [Learn more](/mem/configmgr/sum/understand/software-updates-introduction). After the top sites are synchronized, the child sites are synchronized. | There's no hierarchy of machines in Azure, and therefore all machines connected to Azure receive updates from the source repository. +Synchronize software updates/check for updates (retrieve patch metadata) | You can scan for updates periodically by setting configuration on the software update point. [Learn more](/mem/configmgr/sum/get-started/synchronize-software-updates#to-schedule-software-updates-synchronization) | You can enable periodic assessment to scan for patches every 24 hours. [Learn more](assessment-options.md)| +Configuring classifications/products to synchronize/scan/assess | You can choose the update classifications (security or critical updates) to synchronize/scan/assess. [Learn more](/mem/configmgr/sum/get-started/configure-classifications-and-products) | There's no such capability in Azure Update Manager; the entire software metadata is scanned. | +Deploy software updates (install patches) | Provides three modes of deploying updates: </br> Manual deployment </br> Automatic deployment </br> Phased deployment [Learn more](/mem/configmgr/sum/deploy-use/deploy-software-updates) | Manual deployment maps to deploying [one-time updates](deploy-updates.md), and automatic deployment maps to [scheduled updates](scheduled-patching.md) ([Automatic Deployment Rules (ADRs)](/mem/configmgr/sum/deploy-use/automatically-deploy-software-updates#BKMK_CreateAutomaticDeploymentRule) can be mapped to schedules). There's no phased deployment option. ++## Manage software updates using Azure Update Manager ++1. Sign in to the [Azure portal](https://portal.azure.com) and search for **Azure Update Manager**. ++ :::image type="content" source="./media/guidance-migration-azure/update-manager-service-selection-inline.png" alt-text="Screenshot of selecting the Azure Update Manager from Azure portal." lightbox="./media/guidance-migration-azure/update-manager-service-selection-expanded.png"::: ++1. On the **Azure Update Manager** home page, under **Manage** > **Machines**, select your subscription to view all your machines. +1. Use the available filters to check the status of your specific machines. ++ :::image type="content" source="./media/guidance-migration-azure/filter-machine-status-inline.png" alt-text="Screenshot of selecting the filters in Azure Update Manager to view the machines." lightbox="./media/guidance-migration-azure/filter-machine-status-expanded.png"::: ++1. Select the suitable [assessment](assessment-options.md) and [patching](updates-maintenance-schedules.md) options for your requirements. +++### Patch machines ++After you set up the configuration for assessment and patching, you can deploy or install updates either through [on-demand updates](deploy-updates.md) (one-time or manual update) or [scheduled updates](scheduled-patching.md) (automatic update). You can also deploy updates by using [Azure Update Manager's API](manage-vms-programmatically.md).
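++As a rough illustration of the on-demand (manual) path, the Azure CLI exposes VM patch operations directly. This is a hedged sketch; the resource names are placeholders, and the flags should be checked against your CLI version: ++```azurecli +# Assess a VM for missing updates (the equivalent of Check for updates). +az vm assess-patches --resource-group myResourceGroup --name myVM ++# Install Critical and Security updates on demand, rebooting only if required. +az vm install-patches --resource-group myResourceGroup --name myVM --maximum-duration PT2H --reboot-setting IfRequired --classifications-to-include-win Critical Security +``` ++This maps roughly to MCM's manual deployment: you pick the scope and classifications, and the platform installs the patches within the window you allow.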
++## Limitations in Azure Update Manager ++The following are the current limitations: ++- **Orchestration groups with pre/post scripts** - [Orchestration groups](/mem/configmgr/sum/deploy-use/orchestration-groups) can't be created in Azure Update Manager to specify a maintenance sequence, to allow only some machines to update at the same time, and so on. (Orchestration groups allow you to use pre/post scripts to run tasks before and after a patch deployment.) ++## Frequently asked questions ++### Where does Azure Update Manager get its updates from? ++Azure Update Manager refers to the repository that the machines point to. Most Windows machines by default point to the Windows Update catalog, and Linux machines are configured to get updates from the `apt` or `yum` repositories. If the machines point to another repository, such as [WSUS](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or a local repository, then Azure Update Manager gets the updates from that repository. ++### Can Azure Update Manager patch OS, SQL, and third-party software? ++Azure Update Manager refers to the repositories (or endpoints) that the VMs point to. If the repository (or endpoints) contains updates for Microsoft products, third-party software, and so on, then Azure Update Manager can install these patches. ++By default, Windows VMs point to the Windows Update server. The Windows Update server doesn't contain updates for Microsoft products and third-party software. If the VMs point to Microsoft Update, Azure Update Manager patches the OS and Microsoft products. ++For third-party software patching, Azure Update Manager should be connected to WSUS, and you must publish the third-party updates. Third-party software can't be patched on Windows VMs unless the updates are available in WSUS. ++### Do I need to configure WSUS to use Azure Update Manager? ++WSUS is a way to manage patches. Azure Update Manager refers to whichever endpoint it's pointed to (Windows Update, Microsoft Update, or WSUS). + +## Next steps +- [An overview on Azure Update Manager](overview.md) +- [Check update compliance](view-updates.md) +- [Deploy updates now (on-demand) for single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) |
update-manager | Guidance Patching Sql Server Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-patching-sql-server-azure-vm.md | + + Title: Guidance on patching for SQL Server on Azure VMs (preview) using Azure Update Manager +description: An overview on patching guidance for SQL Server on Azure VMs (preview) using Azure Update Manager +++ Last updated : 09/27/2023++++# Guidance on patching for SQL Server on Azure VMs (preview) using Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article provides the details on how to integrate [Azure Update Manager](overview.md) with your [SQL virtual machines](/azure/azure-sql/virtual-machines/windows/manage-sql-vm-portal) resource for your [SQL Server on Azure Virtual Machines (VMs)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview). ++## Overview ++[Azure Update Manager](overview.md) is a unified service that allows you to manage and govern updates for all your Windows and Linux virtual machines across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. ++Azure Update Manager is designed as a standalone Azure service that provides a SaaS experience to manage hybrid environments in Azure. ++By using Azure Update Manager, you can manage and govern updates for all your SQL Server instances at scale. Unlike with [Automated Patching](/azure/azure-sql/virtual-machines/windows/automated-patching), Update Manager installs cumulative updates for SQL Server. ++++ +## Next steps +- [An overview on Azure Update Manager](overview.md) +- [Check update compliance](view-updates.md) +- [Deploy updates now (on-demand) for single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) |
update-manager | Manage Arc Enabled Servers Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-arc-enabled-servers-programmatically.md | + + Title: Programmatically manage updates for Azure Arc-enabled servers in Azure Update Manager +description: This article describes how to use Azure Update Manager through the REST API with Azure Arc-enabled servers. +++ Last updated : 09/18/2023++++# How to programmatically manage updates for Azure Arc-enabled servers ++This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with Azure Update Manager in Azure. If you're new to Azure Update Manager and you want to learn more, see [overview of Update Manager](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md). ++Update Manager in Azure enables you to use the [Azure REST API](/rest/api/azure) for programmatic access. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure). ++Support for the Azure REST API to manage Azure Arc-enabled servers is available through the Update Manager virtual machine extension. ++## Update assessment ++To trigger an update assessment on your Azure Arc-enabled server, specify the following POST request: ++```rest +POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/machineName/assessPatches?api-version=2020-08-15-preview` +{ +} +``` ++# [Azure CLI](#tab/cli) ++To specify the POST request, you can use the Azure CLI [az rest](/cli/azure/reference-index#az_rest) command. ++```azurecli +az rest --method post --url https://management.azure.com/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/machineName/assessPatches?api-version=2020-08-15-preview --body @body.json +``` ++The format of the request body for version 2020-08-15 is as follows: ++```json +{ +} +``` ++# [Azure PowerShell](#tab/powershell) ++To specify the POST request, you can use the Azure PowerShell [Invoke-AzRestMethod](/powershell/module/az.accounts/invoke-azrestmethod) cmdlet. ++```azurepowershell +Invoke-AzRestMethod -Path "/subscriptions/subscriptionId/resourceGroups/resourcegroupname/providers/Microsoft.HybridCompute/machines/machinename/assessPatches?api-version=2020-08-15-preview" -Payload '{}' -Method POST +``` +++## Update deployment ++To trigger an update deployment to your Azure Arc-enabled server, specify the following POST request: ++```rest +POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/machineName/installPatches?api-version=2020-08-15-preview` +``` ++#### Request body ++The following table describes the elements of the request body: ++| Property | Description | +|-|-| +| `maximumDuration` | Maximum amount of time that the OS update operation can take. It must be an ISO 8601-compliant duration string, such as `PT100M`. | +| `rebootSetting` | Flag that states whether the machine should be rebooted if the Guest OS update installation needs it for completion. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. | +| `windowsParameters` | Parameter options for Guest OS update on machines running a supported Microsoft Windows Server operating system.
| +| `windowsParameters - classificationsToInclude` | List of categories or classifications of OS updates to apply, as supported and provided by the Windows Server OS. Acceptable values are: `Critical, Security, UpdateRollUp, FeaturePack, ServicePack, Definition, Tools, Update` | +| `windowsParameters - kbNumbersToInclude` | List of Windows Update KB IDs that are available to the machine and that you need to install. If you've included any 'classificationsToInclude', the KBs available in the category are installed. 'kbNumbersToInclude' is an option to provide a list of specific KB IDs, over and above the classifications, that you want installed. For example: `1234` | +| `windowsParameters - kbNumbersToExclude` | List of Windows Update KB IDs that are available to the machine and that should **not** be installed. If you've included any 'classificationsToInclude', the KBs available in the category will be installed. 'kbNumbersToExclude' is an option to provide a list of specific KB IDs that you want to ensure don't get installed. For example: `5678` | +| `linuxParameters` | Parameter options for Guest OS update when the machine is running a supported Linux distribution | +| `linuxParameters - classificationsToInclude` | List of categories or classifications of OS updates to apply, as supported and provided by the Linux OS's package manager. Acceptable values are: `Critical, Security, Others`. For more information, see [Linux package manager and OS support](./support-matrix.md#supported-operating-systems). | +| `linuxParameters - packageNameMasksToInclude` | List of Linux packages that are available to the machine and need to be installed. If you've included any 'classificationsToInclude', the packages available in the category will be installed. 'packageNameMasksToInclude' is an option to provide a list of packages, over and above the classifications, that you want installed. For example: `mysql, libc=1.0.1.1, kernel*` | +| `linuxParameters - packageNameMasksToExclude` | List of Linux packages that are available to the machine and should **not** be installed. If you've included any 'classificationsToInclude', the packages available in the category will be installed. 'packageNameMasksToExclude' is an option to provide a list of specific packages that you want to ensure don't get installed. For example: `mysql, libc=1.0.1.1, kernel*` | +++# [Azure REST API](#tab/rest) ++To specify the POST request, you can use the following Azure REST API call with valid parameters and values. ++```rest +POST on 'subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/machineName/installPatches?api-version=2020-08-15-preview ++{ + "maximumDuration": "PT120M", + "rebootSetting": "IfRequired", + "windowsParameters": { + "classificationsToInclude": [ + "Security", + "UpdateRollup", + "FeaturePack", + "ServicePack" + ], + "kbNumbersToInclude": [ + "11111111111", + "22222222222222" + ], + "kbNumbersToExclude": [ + "333333333333", + "55555555555" + ] + } + }' ++``` ++# [Azure CLI](#tab/azurecli) ++To specify the POST request, you can use the Azure CLI [az rest](/cli/azure/reference-index#az_rest) command.
++```azurecli +az rest --method post --url https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/Test/providers/Microsoft.HybridCompute/machines/WIN-8/installPatches?api-version=2020-08-15-preview --body @body.json +``` ++The format of the request body for version 2020-08-15 is as follows: ++```json +{ + "maximumDuration": "PT120M", + "rebootSetting": "IfRequired", + "windowsParameters": { + "classificationsToInclude": [ + "Security", + "UpdateRollup", + "FeaturePack", + "ServicePack" + ], + "kbNumbersToInclude": [ + "11111111111", + "22222222222222" + ], + "kbNumbersToExclude": [ + "333333333333", + "55555555555" + ] + } + } +``` ++# [Azure PowerShell](#tab/azurepowershell) ++To specify the POST request, you can use the Azure PowerShell [Invoke-AzRestMethod](/powershell/module/az.accounts/invoke-azrestmethod) cmdlet. ++```azurepowershell +Invoke-AzRestMethod ` +-Path "/subscriptions/subscriptionId/resourceGroups/resourcegroupname/providers/Microsoft.HybridCompute/machines/machinename/installPatches?api-version=2020-08-15-preview" ` +-Payload '{ + "maximumDuration": "PT120M", + "rebootSetting": "IfRequired", + "windowsParameters": { + "classificationsToInclude": [ + "Security", + "UpdateRollup", + "FeaturePack", + "ServicePack" + ], + "kbNumbersToInclude": [ + "11111111111", + "22222222222222" + ], + "kbNumbersToExclude": [ + "333333333333", + "55555555555" + ] + } + }' ` + -Method POST +``` ++## Create a maintenance configuration schedule ++To create a maintenance configuration schedule, specify the following PUT request: ++```rest +PUT on `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Maintenance/maintenanceConfigurations/<maintenanceConfigurationsName>?api-version=2021-09-01-preview` +``` ++#### Request body ++The following table describes the elements of the request body: ++| Property | Description | +|-|-| +| `id` | Fully qualified identifier of the resource | +| `location` | Gets or sets location of the resource | +| `name` | Name of the resource | +| `properties.extensionProperties` | Gets or sets extensionProperties of the maintenanceConfiguration | +| `properties.maintenanceScope` | Gets or sets maintenanceScope of the configuration | +| `properties.maintenanceWindow.duration` | Duration of the maintenance window in HH:mm format. If not provided, a default value is used based on the maintenance scope provided. Example: 05:00. | +| `properties.maintenanceWindow.expirationDateTime` | Effective expiration date of the maintenance window in YYYY-MM-DD hh:mm format. The window is created in the time zone provided and adjusted to daylight savings according to that time zone. You must set the expiration date to a future date. If not provided, it will be set to the maximum datetime 9999-12-31 23:59:59. | +| `properties.maintenanceWindow.recurEvery` | Rate at which a maintenance window is expected to recur. The rate can be expressed as daily, weekly, or monthly schedules. You can format daily schedules as recurEvery: [Frequency as integer]['Day(s)']. If no frequency is provided, the default frequency is 1. Daily schedule examples are recurEvery: Day, recurEvery: 3Days. Weekly schedules are formatted as recurEvery: [Frequency as integer]['Week(s)'] [Optional comma separated list of weekdays Monday-Sunday]. Weekly schedule examples are recurEvery: 3Weeks, recurEvery: Week Saturday, Sunday.
You can format monthly schedules as [Frequency as integer]['Month(s)'] [Comma separated list of month days] or [Frequency as integer]['Month(s)'] [Week of Month (First, Second, Third, Fourth, Last)] [Weekday Monday-Sunday]. Monthly schedule examples are recurEvery: Month, recurEvery: 2Months, recurEvery: Month day23, day24, recurEvery: Month Last Sunday, recurEvery: Month Fourth Monday. | +| `properties.maintenanceWindow.startDateTime` | Effective start date of the maintenance window in YYYY-MM-DD hh:mm format. You can set the start date to either the current date or a future date. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. | +| `properties.maintenanceWindow.timeZone` | Name of the timezone. You can obtain the list of timezones by executing [System.TimeZoneInfo]::GetSystemTimeZones() in PowerShell. Example: Pacific Standard Time, UTC, W. Europe Standard Time, Korea Standard Time, Cen. Australia Standard Time. | +| `properties.namespace` | Gets or sets namespace of the resource | +| `properties.visibility` | Gets or sets the visibility of the configuration. The default value is 'Custom'. | +| `systemData` | Azure Resource Manager metadata containing createdBy and modifiedBy information. | +| `tags` | Gets or sets tags of the resource | +| `type` | Type of the resource | ++# [Azure REST API](#tab/rest) ++To specify the PUT request, you can use the following Azure REST API call with valid parameters and values. ++```rest +PUT on '/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourceGroups/atscalepatching/providers/Microsoft.Maintenance/maintenanceConfigurations/TestAzureInGuestAdv2?api-version=2021-09-01-preview ++{ + "location": "eastus2euap", + "properties": { + "namespace": null, + "extensionProperties": { + "InGuestPatchMode" : "User" + }, + "maintenanceScope": "InGuestPatch", + "maintenanceWindow": { + "startDateTime": "2021-08-21 01:18", + "expirationDateTime": "2221-05-19 03:30", + "duration": "01:30", + "timeZone": "India Standard Time", + "recurEvery": "Day" + }, + "visibility": "Custom", + "installPatches": { + "rebootSetting": "IfRequired", + "windowsParameters": { + "classificationsToInclude": [ + "Security", + "Critical", + "UpdateRollup" + ] + }, + "linuxParameters": { + "classificationsToInclude": [ + "Other" + ] + } + } + } +}' +``` ++# [Azure CLI](#tab/azurecli) ++```azurecli-interactive +az maintenance configuration create \ + --resource-group myMaintenanceRG \ + --resource-name myConfig \ + --maintenance-scope InGuestPatch \ + --location eastus \ + --maintenance-window-duration "02:00" \ + --maintenance-window-recur-every "20days" \ + --maintenance-window-start-date-time "2022-12-30 07:00" \ + --maintenance-window-time-zone "Pacific Standard Time" \ + --install-patches-linux-parameters package-name-masks-to-exclude="ppt" package-name-masks-to-include="apt" classifications-to-include="Other" \ + --install-patches-windows-parameters kb-numbers-to-exclude="KB123456" kb-numbers-to-include="KB123456" classifications-to-include="FeaturePack" \ + --reboot-setting "IfRequired" \ + --extension-properties InGuestPatchMode="User" +``` ++# [Azure PowerShell](#tab/azurepowershell) ++You can use the `New-AzMaintenanceConfiguration` cmdlet to create your configuration.
++```azurepowershell-interactive +New-AzMaintenanceConfiguration ` + -ResourceGroup $RGName ` + -Name $configName ` + -MaintenanceScope $scope ` + -Location $location ` + -StartDateTime $startDateTime ` + -TimeZone $timeZone ` + -Duration $duration ` + -RecurEvery $recurEvery ` + -WindowParameterClassificationToInclude $WindowsParameterClassificationToInclude ` + -WindowParameterKbNumberToInclude $WindowParameterKbNumberToInclude ` + -WindowParameterKbNumberToExclude $WindowParameterKbNumberToExclude ` + -InstallPatchRebootSetting $RebootOption ` + -LinuxParameterPackageNameMaskToInclude $LinuxParameterPackageNameMaskToInclude ` + -LinuxParameterClassificationToInclude $LinuxParameterClassificationToInclude ` + -LinuxParameterPackageNameMaskToExclude $LinuxParameterPackageNameMaskToExclude ` + -ExtensionProperty @{"InGuestPatchMode"="User"} +``` +++## Associate a VM with a schedule ++To associate a VM with a maintenance configuration schedule, specify the following PUT request: ++```rest +PUT on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configurationAssignments/<configurationAssignment name>?api-version=2021-09-01-preview` +``` ++# [Azure REST API](#tab/rest) ++To specify the PUT request, you can use the following Azure REST API call with valid parameters and values. ++```rest +PUT on '/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourceGroups/atscalepatching/providers/Microsoft.Compute/virtualMachines/win-atscalepatching-1/providers/Microsoft.Maintenance/configurationAssignments/TestAzureInGuestAdv?api-version=2021-09-01-preview ++{ + "properties": { + "maintenanceConfigurationId": "/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourcegroups/atscalepatching/providers/Microsoft.Maintenance/maintenanceConfigurations/TestAzureInGuestIntermediate2" + }, + "location": "eastus2euap" +}' +``` ++# [Azure CLI](#tab/azurecli) ++```azurecli-interactive +az maintenance assignment create \ + --resource-group myMaintenanceRG \ + --location eastus \ + --resource-name myVM \ + --resource-type virtualMachines \ + --provider-name Microsoft.Compute \ + --configuration-assignment-name myConfig \ + --maintenance-configuration-id "/subscriptions/{subscription ID}/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig" +``` ++# [Azure PowerShell](#tab/azurepowershell) ++```azurepowershell-interactive +New-AzConfigurationAssignment ` + -ResourceGroupName "myResourceGroup" ` + -Location "eastus" ` + -ResourceName "myGuest" ` + -ResourceType "VirtualMachines" ` + -ProviderName "Microsoft.Compute" ` + -ConfigurationAssignmentName "configName" ` + -MaintenanceConfigurationId "configID" +``` ++## Remove a machine from the schedule ++To remove a machine from the schedule, get all the configuration assignment names for the machine that you created to associate the machine with the current schedule from Azure Resource Graph, as listed: ++```kusto +maintenanceresources +| where type =~ "microsoft.maintenance/configurationassignments" +| where properties.maintenanceConfigurationId =~ "<maintenance configuration Resource ID>" +| where properties.resourceId =~ "<Machine Resource Id>" +| project name, id +``` +After you obtain the name, delete the configuration assignment by using the following DELETE request: +```rest +DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configurationAssignments/<configurationAssignment name>?api-version=2021-09-01-preview` +``` ++## Next steps ++* To view update assessment and
deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Azure Update Manager](troubleshoot.md). |
update-manager | Manage Dynamic Scoping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-dynamic-scoping.md | + + Title: Manage various operations of Dynamic Scoping +description: This article describes how to manage Dynamic Scoping operations. +++ Last updated : 09/18/2023++++# Manage a Dynamic scope ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article describes how to view, add, edit, and delete a dynamic scope. +++## Add a Dynamic scope +To add a Dynamic scope to an existing configuration, follow these steps: ++1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Update Manager**. +1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. +1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to add a Dynamic scope. +1. In the given maintenance configuration page, select **Dynamic scopes** > **Add a dynamic scope**. +1. In the **Add a dynamic scope** page, select **Subscriptions** (mandatory). +1. In **Filter by**, choose **Select**, and in **Select Filter by**, specify the Resource group, Resource type, Location, Tags, and OS type, and then select **Ok**. These filters are optional fields. +1. In the **Preview of machines based on above scope**, you can view the list of machines for the selected criteria and then select **Add**. + > [!NOTE] + > The list of machines may be different at run time. +1. In the **Configure Azure VMs for schedule updates** page, select any one of the following options to provide your consent: + 1. **Change the required options to ensure schedule supportability** - this option confirms that you want to update the patch orchestration from the existing option to *Customer Managed Schedules*. This updates the following two properties on your behalf: + + - *Patch mode = AutomaticByPlatform* + - *Set the BypassPlatformSafetyChecksOnUserSchedule = True*. + 1. **Continue with supported machines only** - this option confirms that you want to proceed with only the machines that already have patch orchestration set to *Customer Managed Schedules*. + + > [!NOTE] + > In the **Preview of machines based on above scope** page, you can view only the machines that don't have patch orchestration set to *Customer Managed Schedules*. ++1. Select **Save** to go back to the Dynamic scopes tab. In this tab, you can view and edit the Dynamic scope that you have created. ++## View Dynamic scope +To view the list of Dynamic scopes associated with a given maintenance configuration, follow these steps: ++1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Update Manager**. +1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. +1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to view the Dynamic scope. +1. In the given maintenance configuration page, select **Dynamic scopes** to view all the Dynamic scopes that are associated with the maintenance configuration. ++## Edit a Dynamic scope ++1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Update Manager**. +1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. +1.
In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope. +1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to edit. Under the **Actions** column, select the edit icon. +1. In **Edit Dynamic scope**, select the edit icon in **Filter by** to edit the filters as needed, and select **Ok**. + > [!NOTE] + > Subscription is mandatory for the creation of a dynamic scope and you can't edit it after the dynamic scope is created. +1. Select **Save**. ++## Delete a Dynamic scope ++1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Update Manager**. +1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. +1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to delete an existing Dynamic scope. +1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to delete. Select **Remove dynamic scope** and then select **Ok**. ++## View patch history of a Dynamic scope ++1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Update Manager**. +1. Select **History** > **Browse maintenance configurations** > **Maintenance configurations** to view the patch history of a dynamic scope. ++## Provide consent to apply updates ++Obtaining consent to apply updates is an important step in the dynamic scoping workflow. The following are the various ways to provide consent. ++#### [From Virtual Machine](#tab/vm) ++1. In the [Azure portal](https://portal.azure.com), go to **+Create a resource** > **Virtual machine** > **Create**. +1. In **Create a virtual machine**, select the **Management** tab, and under **Guest OS Updates**, in **Patch orchestration options**, you can do the following: + 1. Select **Azure-orchestrated with user managed schedules (Preview)** to confirm that: ++ - Patch orchestration is set to *Azure orchestration*. + - Bypass platform safety checks on user schedule is set to *True*. ++ The selection allows you to provide consent to apply the update settings, and ensures that auto patching isn't applied and that patching on the VMs runs as per the schedule you've defined. ++1. Complete the details under the **Monitoring**, **Advanced**, and **Tags** tabs. +1. Select **Review + Create**, and under **Management**, you can view the values as **Periodic assessment** - *Off* and **Patch orchestration options** - *Azure-orchestrated with user managed schedules (Preview)*. +1. Select **Create**. + ++#### [From Schedule updates tab](#tab/sc) ++1. Follow steps 1 to 5 listed in [Add a Dynamic scope](#add-a-dynamic-scope). +1. In the **Machines** tab, select **Add machine**. In the **Select resources** page, select the machines and select **Add**. +1. In **Configure Azure VMs for schedule updates**, select the **Continue to schedule updates** option to confirm that: ++ - Patch orchestration is set to *Azure orchestration*. + - Bypass platform safety checks on user schedule is set to *True*. ++1. Select **Continue to schedule updates** to update the patch mode to **Azure-orchestrated** and enable scheduled patching for the VMs after obtaining the consent. ++#### [From Update Settings](#tab/us) ++1. In **Azure Update Manager**, go to **Overview** > **Update settings**. +1. In **Change Update settings**, select **+Add machine** to add the machines. +1.
In the list of machines sorted by operating system, go to the **Patch orchestration** option and select **Azure-orchestrated with user managed schedules (Preview)** to confirm that: ++ - Patch orchestration is set to *Azure orchestration*. + - Bypass platform safety checks on user schedule is set to *True*. +1. Select **Save**. ++ The selection made in this workflow automatically applies the update settings and no consent is explicitly obtained. +++## Next steps ++* [View updates for single machine](view-updates.md) +* [Deploy updates now (on-demand) for single machine](deploy-updates.md) +* [Schedule recurring updates](scheduled-patching.md) +* [Manage update settings via Portal](manage-update-settings.md) +* [Manage multiple machines using Update Manager](manage-multiple-machines.md) |
update-manager | Manage Multiple Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-multiple-machines.md | + + Title: Manage multiple machines in Azure Update Manager +description: This article explains how to use Azure Update Manager in Azure to manage multiple supported machines and view their compliance state in the Azure portal. + Last updated : 09/18/2023++++++# Manage multiple machines with Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++> [!IMPORTANT] +> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md). ++This article describes the various features that Azure Update Manager offers to manage the system updates on your machines. By using Update Manager, you can: ++- Quickly assess the status of available operating system updates. +- Deploy updates. +- Set up a recurring update deployment schedule. +- Get insights on the number of machines managed. +- Obtain information on how they're managed and other relevant details. ++Instead of performing these actions from a selected Azure VM or Azure Arc-enabled server, you can manage all your machines in the Azure subscription. +++## View Update Manager status ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. To view update assessment across all machines, including Azure Arc-enabled servers, navigate to **Azure Update Manager**. ++ :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot that shows the Update Manager Overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png"::: + + On the **Overview** page, the summary tiles show the following status: ++ - **Filters**: Use filters to focus on a subset of your resources. The selectors above the tiles return **Subscription**, **Resource group**, **Resource type** (Azure VMs and Azure Arc-enabled servers), **Location**, and **OS** type (Windows or Linux) based on the Azure role-based access rights you've been granted. You can combine filters to scope to a specific resource. + - **Update status of machines**: Shows the update status information for assessed machines that had applicable or needed updates. You can filter the results based on classification types. By default, all [classifications](../automation/update-management/overview.md#update-classifications) are selected. According to the classification selection, the tile is updated. ++ The graph provides a snapshot for all your machines in your subscription, regardless of whether you've used Update Manager for that machine. This assessment data comes from Azure Resource Graph, which stores the data for seven days. ++ From the assessment data available, machines are classified into the following categories: ++ - **No updates available**: No updates are pending for these machines and these machines are up to date. + - **Updates available**: Updates are pending for these machines and these machines aren't up to date. + - **Reboot required**: Pending a reboot for the updates to take effect.
+ - **No updates data**: No assessment data is available for these machines. + + The following reasons could explain why there's no assessment data: + - No assessment has been done over the last seven days. + - The machine has an unsupported OS. + - The machine is in an unsupported region and you can't perform an assessment. ++ - **Patch orchestration configuration of Azure virtual machines**: All the Azure machines inventoried in the subscription are summarized by each update orchestration method. Values are: ++ - **Customer Managed Schedules**: Enables schedule patching on your existing VMs. + - **Azure Managed - Safe Deployment**: This mode enables automatic VM guest patching for the Azure virtual machine. Subsequent patch installation is orchestrated by Azure. + - **Image Default**: For Linux machines, it uses the default patching configuration. + - **OS orchestrated**: The OS automatically updates the machine. + - **Manual updates**: You control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for Windows OS. + + For more information about each orchestration method, see [Automatic VM guest patching for Azure VMs](../virtual-machines/automatic-vm-guest-patching.md#patch-orchestration-modes). ++ - **Update installation status**: By default, the tile shows the status for the last 30 days. By using the **Time** picker, you can choose a different range. The values are: + - **Failed**: One or more updates in the deployment have failed. + - **Completed**: The deployment completed successfully within the selected time range. + - **Completed with warnings**: The deployment completed successfully but had warnings. + - **In progress**: The deployment is currently running. ++- Select **Update status of machines** or **Patch orchestration configuration of Azure virtual machines** to go to the **Machines** page. +- Select **Update installation status** to go to the **History** page. +- **Pending Windows updates**: Status of pending updates for Windows machines in your subscription. +- **Pending Linux updates**: Status of pending updates for Linux machines in your subscription. ++## Summary of machine status ++Update Manager in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager. The section shows how you can filter information to understand the update status of your machine resources, and, for multiple machines, initiate an update assessment, update deployment, and manage their update settings. ++ On the **Update Manager** page, select **Machines** from the left menu. ++ :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot that shows the Update Manager Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png"::: ++ The table lists all the machines in the specified subscription. For each machine, it shows details based on the latest assessment, such as the patch orchestration setting and the machine's status. ++The **Patch orchestration** column shows the machine's patch mode and has the following values: ++ * **Customer Managed Schedules (preview)**: Enables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties, `Patch mode = Azure-orchestrated` and `BypassPlatformSafetyChecksOnUserSchedule = TRUE`, on your behalf after receiving your consent. + * **Azure Managed - Safe Deployment**: For a group of virtual machines undergoing an update, the Azure platform orchestrates updates. The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). That is, the patch mode is `AutomaticByPlatform`. + * **Automatic by OS**: The machine is automatically updated by the OS. + * **Image default**: For Linux machines, its default patching configuration is used. + * **Manual**: You control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for Windows OS. ++**The machine's status**: For an Azure VM, it shows its [power state](../virtual-machines/states-billing.md#power-states-and-billing). For an Azure Arc-enabled server, it shows whether it's connected or not. ++Use filters to focus on a subset of your resources. The selectors above the tiles return subscriptions, resource groups, resource types (that is, Azure VMs and Azure Arc-enabled servers), and regions. They're based on the Azure role-based access rights you've been granted. You can combine filters to scope to a specific resource. ++The summary tiles at the top of the page summarize the number of machines that have been assessed and their update status. ++To manage the machine's update settings, see [Manage update configuration settings](manage-update-settings.md). ++### Check for updates ++For machines that haven't yet had their first compliance assessment scan, you can select one or more of them from the list and then select **Check for updates**. You receive status messages as the configuration is performed. ++ :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-multi-selection-inline.png" alt-text="Screenshot that shows initiating a scan assessment for selected machines with the Check for updates option." lightbox="./media/manage-multiple-machines/update-center-assess-now-multi-selection-expanded.png"::: ++ Otherwise, a compliance scan begins and the results are forwarded and stored in Azure Resource Graph. This process takes several minutes. When the assessment is finished, a confirmation message appears on the page.
++ :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-complete-banner-inline.png" alt-text="Screenshot that shows an assessment banner on the Manage Machines page." lightbox="./media/manage-multiple-machines/update-center-assess-now-complete-banner-expanded.png"::: ++Select a machine from the list to open Update Manager scoped to that machine. Here, you can view its detailed assessment status and update history, configure its patch orchestration options, and begin an update deployment. ++### Deploy the updates ++For assessed machines that report available updates, select one or more machines from the list and begin an update deployment that starts immediately. Select the machine and go to **One-time update**. ++ :::image type="content" source="./media/manage-multiple-machines/update-center-install-updates-now-multi-selection-inline.png" alt-text="Screenshot that shows installing one-time updates for machines on the Updates (Preview) page." lightbox="./media/manage-multiple-machines/update-center-install-updates-now-multi-selection-expanded.png"::: ++ A notification confirms when an activity starts, and another tells you when it's finished. After it's successfully finished, the installation operation results are available to view on the **Update history** tab when you select the machine from the **Machines** page. You can also select the **History** page; you're redirected to this page automatically after you begin the update deployment. You can view the status of the operation at any time from the [Azure activity log](../azure-monitor/essentials/activity-log.md). ++### Set up a recurring update deployment ++You can create a recurring update deployment for your machines. Select your machine and select **Scheduled updates**. A [Create new maintenance configuration](scheduled-patching.md) flow opens. ++## Update deployment history ++Update Manager enables you to browse information about your Azure VMs and Azure Arc-enabled servers across your Azure subscriptions. You can filter information to understand the update assessment and deployment history for multiple machines. On the **Update Manager** page, select **History** from the left menu. ++## Update deployment history by machines ++The update deployment history provides a summarized status of update and assessment actions performed against your Azure VMs and Azure Arc-enabled servers. You can also drill into a specific machine to view update-related details and manage it directly. You can review the detailed update or assessment history for the machine and other related details in the table. +++Each record shows: ++ - **Machine Name** + - **Status** + - **Update installed** + - **Update operation** + - **Operation type** + - **Operation start time** + - **Resource Type** + - **Tags** + - **Last assessed time** ++## Update deployment history by maintenance run ID ++On the **History** page, select **By maintenance run ID** to view the history of the maintenance run schedules. ++ :::image type="content" source="./media/manage-multiple-machines/update-center-history-by-maintenance-run-id-inline.png" alt-text="Screenshot that shows the update center History page By maintenance run ID in the Azure portal."
lightbox="./media/manage-multiple-machines/update-center-history-by-maintenance-run-id-expanded.png"::: ++Each record shows: ++- **Maintenance run ID** +- **Status** +- **Updated machines** +- **Maintenance Configuration** +- **Operation start time** +- **Operation end time** ++When you select a maintenance run ID record, you can view an expanded status of the maintenance run. It contains information about the machines and updates, including the number of machines that were updated and the updates installed on them. A pie chart shows the status of each of the machines. At the end of the page, a list view shows both the machines and the updates that were a part of this maintenance run. ++ :::image type="content" source="./media/manage-multiple-machines/update-center-maintenance-run-record-inline.png" alt-text="Screenshot that shows a maintenance run ID record." lightbox="./media/manage-multiple-machines/update-center-maintenance-run-record-expanded.png"::: ++### Resource Graph ++The update assessment and deployment data are available for querying in Azure Resource Graph. You can apply this data to scenarios that include security compliance, security operations, and troubleshooting. Select **Go to resource graph** to go to the Azure Resource Graph Explorer. It enables running Resource Graph queries directly in the Azure portal. Resource Graph supports the Azure CLI, Azure PowerShell, Azure SDK for Python, and more (a CLI-based query sketch appears at the end of this article). For more information, see [First query with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md). ++When the Resource Graph Explorer opens, it's automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager. Ensure that you review [Overview of query logs in Azure Update Manager](query-logs.md) to learn about the log records and their properties, and the sample queries included. ++## Next steps ++* To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md). +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). |
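As referenced in the Resource Graph section above, you can run the same queries outside the portal. The following is a minimal sketch using the Azure CLI `az graph query` command; the `patchinstallationresources` table name is an assumption based on the Resource Graph tables Update Manager writes to, so copy the exact query shown in Resource Graph Explorer for your environment if it differs.

```azurecli
# One-time setup: az graph commands ship in the resource-graph extension.
az extension add --name resource-graph

# A minimal sketch: list recent update installation results recorded by
# Update Manager. The table name and projected fields are assumptions;
# verify them against the query Resource Graph Explorer generates.
az graph query -q "patchinstallationresources | project id, name, properties | take 10"
```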
update-manager | Manage Update Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-update-settings.md | + + Title: Manage update configuration settings in Azure Update Manager +description: The article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager. +++ Last updated : 09/18/2023++++# Manage update configuration settings ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article describes how to configure update settings from Azure Update Manager to control the update settings on your Azure virtual machines (VMs) and Azure Arc-enabled servers for one or more machines. +++## Configure settings on a single VM ++To configure update settings on a single VM: ++>[!NOTE] +> You can schedule updates from the **Overview** or **Machines** blade on the **Update Manager** page, or from the selected VM. ++# [From Overview blade](#tab/manage-single-overview) ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. In **Azure Update Manager**, select **Overview**, select your **Subscription**, and select **Update settings**. +1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings. +1. In **Select resources**, select the machine and select **Add**. +1. On the **Change update settings** page, the machine is classified by operating system, with a list of update settings that you can select and apply. ++ :::image type="content" source="./media/manage-update-settings/update-setting-to-change.png" alt-text="Screenshot that shows highlighting the Update settings to change option in the Azure portal."::: + + The following update settings are available for configuration for the selected machines: ++ - **Periodic assessment**: The periodic assessment is set to run every 24 hours. You can either enable or disable this setting. + - **Hotpatch**: You can enable [hotpatching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition VMs. Hotpatching is a new way to install updates on supported Windows Server Azure Edition VMs that doesn't require a reboot after installation. You can use Update Manager to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable, or reset this setting. + - The **Patch orchestration** option provides: + + - **Customer Managed Schedules**: Enables schedule patching on your existing VMs. The new patch orchestration option enables the two VM properties, **Patch mode = Azure-orchestrated** and **BypassPlatformSafetyChecksOnUserSchedule = TRUE**, on your behalf after receiving your consent. + - **Azure Managed - Safe Deployment**: For a group of virtual machines undergoing an update, the Azure platform orchestrates updates (not applicable to Arc-enabled servers). The VM is set to [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md); that is, the patch mode is **AutomaticByPlatform**. There are different implications depending on whether a customer schedule is attached to it. For more information, see the [user scenarios](prerequsite-for-schedule-patching.md#user-scenarios).
+ - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM by using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required. + - **Windows Automatic Updates** (AutomaticByOS): When the workload running on the VM doesn't have to meet availability targets, the operating system updates are automatically downloaded and installed. Machines are rebooted as needed. + - **Manual updates**: This mode disables Windows automatic updates on VMs. Patches are installed manually or by using a different solution. + - **Image Default**: Only supported for Linux virtual machines, this mode uses the default patching configuration in the image used to create the VM. ++1. After you make the selection, select **Save**. ++# [From Machines blade](#tab/manage-single-machines) ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. In **Azure Update Manager**, select **Machines** > your **subscription**. +1. Select the checkbox of your machine from the list and select **Update settings**. +1. Select **Update Settings** to proceed with the type of update for your machine. +1. On the **Change update settings** pane, select **Add machine** to select the machine for which you want to change the update settings. +1. On the **Select resources** pane, select the machine and select **Add**. Follow the procedure from step 5 listed in **From Overview blade** of [Configure settings on a single VM](#configure-settings-on-a-single-vm). ++# [From a selected VM](#tab/singlevm-schedule-home) ++1. Select your virtual machine to open the **Virtual machines | Updates** page. +1. Under **Operations**, select **Updates**. +1. In **Updates**, select **Update Settings**. +1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on a single VM](#configure-settings-on-a-single-vm). ++++A notification appears to confirm that the update settings are successfully changed. ++## Configure settings at scale ++Follow these steps to configure update settings on your machines at scale. ++> [!NOTE] +> You can schedule updates from **Overview** or **Machines**. ++# [From Overview blade](#tab/manage-scale-overview) + +1. Sign in to the [Azure portal](https://portal.azure.com). ++1. In **Azure Update Manager**, select **Overview**, select your **Subscription**, and select **Update settings**. ++1. In **Change update settings**, select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on a single VM](#configure-settings-on-a-single-vm). ++# [From Machines blade](#tab/manage-scale-machines) ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. In **Azure Update Manager**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list. +1. Select **Update Settings** to proceed with the type of update for your machines. +1. In **Change update settings**, you can select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on a single VM](#configure-settings-on-a-single-vm).
++++A notification appears to confirm that the update settings are successfully changed. +++## Next steps ++* [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Azure Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal. +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot issues with Update Manager](troubleshoot.md). |
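The steps above use the Azure portal. If you prefer the command line, the following Azure CLI sketch shows one way to change a VM's patch mode; the resource group and VM names are placeholders, and the generic `--set` property path is an assumption you should verify against your VM's model before relying on it.

```azurecli
# A sketch: set the patch mode on an existing Windows VM.
# myResourceGroup and myVM are placeholder names.
az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set "osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform"

# Verify the resulting patch settings.
az vm show \
  --resource-group myResourceGroup \
  --name myVM \
  --query "osProfile.windowsConfiguration.patchSettings"
```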
update-manager | Manage Updates Customized Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-updates-customized-images.md | + + Title: Overview of customized images in Azure Update Manager +description: This article describes customized image support, how to register and validate customized images for public preview, and limitations. +++ Last updated : 09/27/2023++++# Manage updates for customized images ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article describes customized image support, how to enable a subscription, and limitations. ++> [!NOTE] +> Currently, schedule patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md) and **VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery** are supported in preview. ++## Asynchronous check to validate customized image support ++If you're using Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager operations such as **Check for updates**, **One-time update**, **Schedule updates**, or **Periodic assessment** to validate whether the VMs are supported for guest patching. If the VMs are supported, you can begin patching. ++With marketplace images, support is validated even before an Update Manager operation is triggered. With customized images, there are no preexisting validations in place; the Update Manager operations are triggered, and only their success or failure determines support. ++For instance, an assessment call attempts to fetch the latest patch that's available from the image's OS family to check support. It stores this support-related data in an Azure Resource Graph table, which you can query to see the support status for your Azure Compute Gallery image. +++## Limitations ++The Azure Compute Gallery images are of two types: +- [Generalized](../virtual-machines/linux/imaging.md#generalized-images) images +- [Specialized](../virtual-machines/linux/imaging.md#specialized-images) images ++Currently, scheduled patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md#specialized-images) and VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery are supported in preview. + The following table lists the supported and unsupported scenarios for both image types. ++| Images | Currently supported scenarios | Unsupported scenarios | +| | | | +| Azure Compute Gallery: Generalized images | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment </br> - Scheduled patching | Automatic VM guest patching | +| Azure Compute Gallery: Specialized images | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment (preview) </br> - Scheduled patching (preview) </br> | Automatic VM guest patching | +| Non-Azure Compute Gallery images (non-SIG)| - On-demand assessment </br> - On-demand patching </br> - Periodic assessment (preview) </br> - Scheduled patching (preview) </br> | Automatic VM guest patching | ++Automatic VM guest patching doesn't work on Azure Compute Gallery images even if Patch orchestration mode is set to `Azure orchestrated/AutomaticByPlatform`. You can use scheduled patching to patch the machines and define your own schedules. ++## Next steps ++[Learn more](support-matrix.md) about supported operating systems. |
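Because support for a customized image is determined by the success or failure of the triggered operation, you may want to inspect the stored assessment data directly. The sketch below queries Azure Resource Graph from the Azure CLI; the `patchassessmentresources` table name, filter, and projected properties are assumptions, so adjust them to the schema you see in Resource Graph Explorer.

```azurecli
# A sketch: inspect stored assessment results for a specific VM to see
# whether guest patching operations on a customized image succeeded.
# Table and property names are assumptions; verify them in Resource
# Graph Explorer. myVM is a placeholder name.
az graph query -q "patchassessmentresources | where id contains 'myVM' | project id, properties | take 5"
```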
update-manager | Manage Vms Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-vms-programmatically.md | + + Title: Programmatically manage updates for Azure VMs +description: This article describes how to use the Azure REST API to manage Azure virtual machines with Azure Update Manager. +++ Last updated : 09/18/2023++++# How to programmatically manage updates for Azure VMs ++This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with Azure Update Manager. If you're new to Update Manager and you want to learn more, see the [overview of Azure Update Manager](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md). ++Azure Update Manager enables you to use the [Azure REST API](/rest/api/azure/) for programmatic access. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and the [Azure CLI](/cli/azure/). ++Support for the Azure REST API to manage Azure VMs is available through the Update Manager virtual machine extension. ++## Update assessment ++To trigger an update assessment on your Azure VM, specify the following POST request: ++```rest +POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Compute/virtualMachines/virtualMachineName/assessPatches?api-version=2020-12-01` +``` ++# [Azure CLI](#tab/cli) ++To specify the POST request, you can use the Azure CLI [az vm assess-patches](/cli/azure/vm#az-vm-assess-patches) command. ++```azurecli +az vm assess-patches -g MyResourceGroup -n MyVm +``` +++# [Azure PowerShell](#tab/powershell) ++To specify the POST request, you can use the Azure PowerShell [Invoke-AzVMPatchAssessment](/powershell/module/az.compute/invoke-azvmpatchassessment) cmdlet. ++```azurepowershell +Invoke-AzVMPatchAssessment -ResourceGroupName "myRG" -VMName "myVM" +``` ++++## Update deployment ++To trigger an update deployment to your Azure VM, specify the following POST request: ++```rest +POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Compute/virtualMachines/virtualMachineName/installPatches?api-version=2020-12-01` +``` ++#### Request body ++The following table describes the elements of the request body: ++| Property | Description | +|-|-| +| `maximumDuration` | Maximum amount of time that the operation runs. It must be an ISO 8601-compliant duration string such as `PT4H` (4 hours). | +| `rebootSetting` | Flag to state whether the machine should be rebooted if the Guest OS update installation requires it for completion. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. | +| `windowsParameters` | Parameter options for Guest OS update on Azure VMs running a supported Microsoft Windows Server operating system. | +| `windowsParameters - classificationsToInclude` | List of categories/classifications to be used for selecting the updates to be installed on the machine. Acceptable values are: `Critical, Security, UpdateRollUp, FeaturePack, ServicePack, Definition, Tools, Updates` | +| `windowsParameters - kbNumbersToInclude` | List of Windows Update KB IDs that should be installed. All updates belonging to the classifications provided in the `classificationsToInclude` list will be installed.
`kbNumbersToInclude` is an optional list of specific KBs to be installed in addition to the classifications. For example: `1234` | +| `windowsParameters - kbNumbersToExclude` | List of Windows Update KB IDs that should **not** be installed. This parameter overrides `windowsParameters - classificationsToInclude`, meaning a Windows Update KB ID specified here won't be installed even if it belongs to the classification provided under the `classificationsToInclude` parameter. | +| `linuxParameters` | Parameter options for Guest OS update on Azure VMs running a supported Linux server operating system. | +| `linuxParameters - classificationsToInclude` | List of categories/classifications to be used for selecting the updates to be installed on the machine. Acceptable values are: `Critical, Security, Other` | +| `linuxParameters - packageNameMasksToInclude` | List of Linux packages that should be installed. All updates belonging to the classifications provided in the `classificationsToInclude` list will be installed. `packageNameMasksToInclude` is an optional list of package names to be installed in addition to the classifications. For example: `mysql, libc=1.0.1.1, kernel*` | +| `linuxParameters - packageNameMasksToExclude` | List of updates that should **not** be installed. This parameter overrides `linuxParameters - classificationsToInclude`, meaning a package specified here won't be installed even if it belongs to the classification provided under the `classificationsToInclude` parameter. | +++# [Azure REST API](#tab/rest) ++To specify the POST request, you can use the following Azure REST API call with valid parameters and values. ++```rest +POST on 'subscriptions/{subscriptionId}/resourceGroups/acmedemo/providers/Microsoft.Compute/virtualMachines/ameacr/installPatches?api-version=2020-12-01 ++{ + "maximumDuration": "PT120M", + "rebootSetting": "IfRequired", + "windowsParameters": { + "classificationsToInclude": [ + "Security", + "UpdateRollup", + "FeaturePack", + "ServicePack" + ], + "kbNumbersToInclude": [ + "11111111111", + "22222222222222" + ], + "kbNumbersToExclude": [ + "333333333333", + "55555555555" + ] + } + }' +``` ++# [Azure CLI](#tab/azurecli) ++To specify the POST request, you can use the Azure CLI [az vm install-patches](/cli/azure/vm#az-vm-install-patches) command. ++```azurecli +az vm install-patches -g MyResourceGroup -n MyVm --maximum-duration PT4H --reboot-setting IfRequired --classifications-to-include-linux Critical +``` ++The format of the request body for version 2020-12-01 is as follows: ++```json +{ + "maximumDuration": "<ISO 8601 duration, for example PT2H>", + "rebootSetting": "<IfRequired | NeverReboot | AlwaysReboot>", + "windowsParameters": { + "classificationsToInclude": [ + ], + "kbNumbersToInclude": [ + ], + "kbNumbersToExclude": [ + ] + } + } +``` ++# [Azure PowerShell](#tab/azurepowershell) ++To specify the POST request, you can use the Azure PowerShell [Invoke-AzVMInstallPatch](/powershell/module/az.compute/invoke-azvminstallpatch) cmdlet.
++```azurepowershell ++Invoke-AzVmInstallPatch -ResourceGroupName 'MyRG' -VmName 'MyVM' -Windows -RebootSetting 'never' -MaximumDuration PT2H -ClassificationToIncludeForWindows Critical +``` ++++## Create a maintenance configuration schedule ++To create a maintenance configuration schedule, specify the following PUT request: ++```rest +PUT on `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Maintenance/maintenanceConfigurations/<maintenanceConfigurationsName>?api-version=2021-09-01-preview` +``` ++#### Request body ++The following table describes the elements of the request body: ++| Property | Description | +|-|-| +| `id` | Fully qualified identifier of the resource | +| `location` | Gets or sets location of the resource | +| `name` | Name of the resource | +| `properties.extensionProperties` | Gets or sets extensionProperties of the maintenanceConfiguration | +| `properties.maintenanceScope` | Gets or sets maintenanceScope of the configuration | +| `properties.maintenanceWindow.duration` | Duration of the maintenance window in HH:MM format. If not provided, default value is used based on maintenance scope provided. Example: 05:00. | +| `properties.maintenanceWindow.expirationDateTime` | Effective expiration date of the maintenance window in YYYY-MM-DD hh:mm format. The window is created in the time zone provided and adjusted to daylight savings according to that time zone. Expiration date must be set to a future date. If not provided, it's set to the maximum datetime 9999-12-31 23:59:59. | +| `properties.maintenanceWindow.recurEvery` | Rate at which a maintenance window is expected to recur. The rate can be expressed as daily, weekly, or monthly schedules. Daily schedules are formatted as recurEvery: [Frequency as integer]['Day(s)']. If no frequency is provided, the default frequency is 1. Daily schedule examples are recurEvery: Day, recurEvery: 3Days. Weekly schedules are formatted as recurEvery: [Frequency as integer]['Week(s)'] [Optional comma separated list of weekdays Monday-Sunday]. Weekly schedule examples are recurEvery: 3Weeks, recurEvery: Week Saturday, Sunday. Monthly schedules are formatted as [Frequency as integer]['Month(s)'] [Comma separated list of month days] or [Frequency as integer]['Month(s)'] [Week of Month (First, Second, Third, Fourth, Last)] [Weekday Monday-Sunday]. Monthly schedule examples are recurEvery: Month, recurEvery: 2Months, recurEvery: Month day23, day24, recurEvery: Month Last Sunday, recurEvery: Month Fourth Monday. | +| `properties.maintenanceWindow.startDateTime` | Effective start date of the maintenance window in YYYY-MM-DD hh:mm format. You can set the start date to either the current date or future date. The window will be created in the time zone provided and adjusted to daylight savings according to that time zone. | +| `properties.maintenanceWindow.timeZone` | Name of the timezone. List of timezones can be obtained by executing [System.TimeZoneInfo]::GetSystemTimeZones() in PowerShell. Example: Pacific Standard Time, UTC, W. Europe Standard Time, Korea Standard Time, Cen. Australia Standard Time. | +| `properties.namespace` | Gets or sets namespace of the resource | +| `properties.visibility` | Gets or sets the visibility of the configuration. The default value is 'Custom' | +| `systemData` | Azure Resource Manager metadata containing createdBy and modifiedBy information.
| +| `tags` | Gets or sets tags of the resource | +| `type` | Type of the resource | ++# [Azure REST API](#tab/rest) ++To specify the PUT request, you can use the following Azure REST API call with valid parameters and values. ++```rest +PUT on '/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourceGroups/atscalepatching/providers/Microsoft.Maintenance/maintenanceConfigurations/TestAzureInGuestAdv2?api-version=2021-09-01-preview ++{ + "location": "eastus2euap", + "properties": { + "namespace": null, + "extensionProperties": { + "InGuestPatchMode" : "User" + }, + "maintenanceScope": "InGuestPatch", + "maintenanceWindow": { + "startDateTime": "2021-08-21 01:18", + "expirationDateTime": "2221-05-19 03:30", + "duration": "01:30", + "timeZone": "India Standard Time", + "recurEvery": "Day" + }, + "visibility": "Custom", + "installPatches": { + "rebootSetting": "IfRequired", + "windowsParameters": { + "classificationsToInclude": [ + "Security", + "Critical", + "UpdateRollup" + ] + }, + "linuxParameters": { + "classificationsToInclude": [ + "Other" + ] + } + } + } +}' +``` ++# [Azure CLI](#tab/azurecli) ++```azurecli-interactive +az maintenance configuration create \ + --resource-group myMaintenanceRG \ + --resource-name myConfig \ + --maintenance-scope InGuestPatch \ + --location eastus \ + --maintenance-window-duration "02:00" \ + --maintenance-window-recur-every "20days" \ + --maintenance-window-start-date-time "2022-12-30 07:00" \ + --maintenance-window-time-zone "Pacific Standard Time" \ + --install-patches-linux-parameters package-name-masks-to-exclude="ppt" package-name-masks-to-include="apt" classifications-to-include="Other" \ + --install-patches-windows-parameters kb-numbers-to-exclude="KB123456" kb-numbers-to-include="KB123456" classifications-to-include="FeaturePack" \ + --reboot-setting "IfRequired" \ + --extension-properties InGuestPatchMode="User" +``` ++# [Azure PowerShell](#tab/azurepowershell) ++You can use the `New-AzMaintenanceConfiguration` cmdlet to create your configuration. ++```azurepowershell-interactive +New-AzMaintenanceConfiguration ` + -ResourceGroup $RGName ` + -Name $configName ` + -MaintenanceScope $scope ` + -Location $location ` + -StartDateTime $startDateTime ` + -TimeZone $timeZone ` + -Duration $duration ` + -RecurEvery $recurEvery ` + -WindowParameterClassificationToInclude $WindowsParameterClassificationToInclude ` + -WindowParameterKbNumberToInclude $WindowParameterKbNumberToInclude ` + -WindowParameterKbNumberToExclude $WindowParameterKbNumberToExclude ` + -InstallPatchRebootSetting $RebootOption ` + -LinuxParameterPackageNameMaskToInclude $LinuxParameterPackageNameMaskToInclude ` + -LinuxParameterClassificationToInclude $LinuxParameterClassificationToInclude ` + -LinuxParameterPackageNameMaskToExclude $LinuxParameterPackageNameMaskToExclude ` + -ExtensionProperty @{"InGuestPatchMode"="User"} +``` +++## Associate a VM with a schedule ++To associate a VM with a maintenance configuration schedule, specify the following PUT request: ++```rest +PUT on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configurationAssignments/<configurationAssignment name>?api-version=2021-09-01-preview` +``` ++# [Azure REST API](#tab/rest) ++To specify the PUT request, you can use the following Azure REST API call with valid parameters and values.
++```rest +PUT on '/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourceGroups/atscalepatching/providers/Microsoft.Compute/virtualMachines/win-atscalepatching-1/providers/Microsoft.Maintenance/configurationAssignments/TestAzureInGuestAdv?api-version=2021-09-01-preview ++{ + "properties": { + "maintenanceConfigurationId": "/subscriptions/0f55bb56-6089-4c7e-9306-41fb78fc5844/resourcegroups/atscalepatching/providers/Microsoft.Maintenance/maintenanceConfigurations/TestAzureInGuestIntermediate2" + }, + "location": "eastus2euap" +}' +``` ++# [Azure CLI](#tab/azurecli) ++```azurecli-interactive +az maintenance assignment create \ + --resource-group myMaintenanceRG \ + --location eastus \ + --resource-name myVM \ + --resource-type virtualMachines \ + --provider-name Microsoft.Compute \ + --configuration-assignment-name myConfig \ + --maintenance-configuration-id "/subscriptions/{subscription ID}/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig" +``` ++# [Azure PowerShell](#tab/azurepowershell) ++```azurepowershell-interactive +New-AzConfigurationAssignment ` + -ResourceGroupName "myResourceGroup" ` + -Location "eastus" ` + -ResourceName "myGuest" ` + -ResourceType "VirtualMachines" ` + -ProviderName "Microsoft.Compute" ` + -ConfigurationAssignmentName "configName" ` + -MaintenanceConfigurationId "configID" +``` ++## Remove machine from the schedule ++To remove a machine from the schedule, get all the configuration assignment names for the machine that were created to associate the machine with the current schedule from Azure Resource Graph with the following query: ++```kusto +maintenanceresources +| where type =~ "microsoft.maintenance/configurationassignments" +| where properties.maintenanceConfigurationId =~ "<maintenance configuration Resource ID>" +| where properties.resourceId =~ "<Machine Resource Id>" +| project name, id +``` +After you obtain the name, delete the configuration assignment with the following DELETE request. (An Azure CLI sketch of this removal appears at the end of this article.) +```rest +DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configurationAssignments/<configurationAssignment name>?api-version=2021-09-01-preview` +``` ++## Next steps ++* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
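As noted in the removal section above, you can delete a configuration assignment with a raw DELETE request. The following Azure CLI sketch shows what an equivalent `az maintenance assignment delete` call might look like; all names are placeholders, and the exact parameter set is an assumption to verify against `az maintenance assignment delete --help`.

```azurecli
# A sketch: remove the configuration assignment found with the Resource
# Graph query in the removal section. All names are placeholders, and the
# parameter set mirrors the create command shown earlier (an assumption).
az maintenance assignment delete \
  --resource-group myMaintenanceRG \
  --resource-name myVM \
  --resource-type virtualMachines \
  --provider-name Microsoft.Compute \
  --configuration-assignment-name myConfig
```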
update-manager | Manage Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-workbooks.md | + + Title: Create reports by using workbooks in Azure Update Manager +description: This article describes how to create and manage workbooks for Azure Update Manager. +++ Last updated : 09/18/2023++++# Create reports in Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article describes how to create and edit a workbook and make customized reports. ++## Create a workbook ++1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**. +1. Under **Monitoring**, select **Update reports** to view the **Update Manager | Update reports | Gallery** page. +1. Select the **Quick start** tile > **Empty**. Alternatively, you can select **New** to create a workbook. +1. Select **Add** to select any [elements](../azure-monitor/visualize/workbooks-create-workbook.md#create-a-new-azure-workbook) to add to the workbook. ++ :::image type="content" source="./media/manage-workbooks/create-workbook-elements.png" alt-text="Screenshot that shows how to create a workbook by using elements."::: ++1. Select **Done Editing**. ++## Edit a workbook ++1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**. +1. Under **Monitoring**, select **Update reports** to view **Azure Update Manager | Update reports | Gallery**. +1. Select the **Azure Update Manager** tile > **Overview** to view the **Azure Update Manager | Update reports | Overview** page. +1. Select your subscription, and select **Edit** to enable the edit mode for all four options: ++ - **Machines overall status & configuration** + - **Updates Data Overview** + - **Schedules/Maintenance configurations** + - **History of Installation runs** ++ :::image type="content" source="./media/manage-workbooks/edit-workbooks-inline.png" alt-text="Screenshot that shows enabling the edit mode for all the options in workbooks." lightbox="./media/manage-workbooks/edit-workbooks-expanded.png"::: ++ You can customize the visualization to create interactive reports and edit the parameters, chart size, and chart settings to define how the chart must be rendered. A sample query sketch that could back such a report appears at the end of this article. ++ :::image type="content" source="./media/manage-workbooks/workbooks-edit-query-inline.png" alt-text="Screenshot that shows various edit options in workbooks." lightbox="./media/manage-workbooks/workbooks-edit-query-expanded.png"::: ++1. Select **Done Editing**. ++## Next steps ++* [View updates for a single machine](view-updates.md) +* [Deploy updates now (on-demand) for a single machine](deploy-updates.md) +* [Schedule recurring updates](scheduled-patching.md) +* [Manage update settings via portal](manage-update-settings.md) +* [Manage multiple machines using Update Manager](manage-multiple-machines.md) |
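Workbook visualizations for Update Manager are typically backed by Azure Resource Graph queries. As a starting point for a custom report, the sketch below runs one such query from the Azure CLI; the `patchassessmentresources` table and the grouping field are assumptions, so validate them against the queries used by the built-in Update Manager workbook before charting the results.

```azurecli
# A sketch of a query you might chart in a workbook: count assessment
# records by status. Table and field names are assumptions; copy the
# exact query from the built-in workbook's edit mode if it differs.
az graph query -q "patchassessmentresources | summarize total = count() by tostring(properties.status)"
```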
update-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/overview.md | + + Title: Azure Update Manager overview +description: This article describes what Azure Update Manager is and how it manages system updates for your Windows and Linux machines in Azure, on-premises, and in other cloud environments. +++ Last updated : 09/25/2023++++# About Azure Update Manager ++> [!Important] +> The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA), will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). The Azure Automation Update Management solution relies on this agent and may encounter issues after the agent is retired, because it doesn't work with the Azure Monitor Agent (AMA). Therefore, if you're using the Azure Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. All the capabilities of the Azure Automation Update Management solution will be available in Azure Update Manager before the retirement date. Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md) to move your machines and schedules from Automation Update Management to Azure Update Manager. ++Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window. ++You can use Update Manager in Azure to: ++- Oversee update compliance for your entire fleet of machines in Azure, on-premises, and in other cloud environments. +- Instantly deploy critical updates to help secure your machines. +- Use flexible patching options such as [automatic virtual machine (VM) guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hot patching](../automanage/automanage-hotpatch.md), and customer-defined maintenance schedules. ++We also offer other capabilities to help you manage updates for your Azure VMs that you should consider as part of your overall update management strategy. To learn more about the options that are available, see the Azure VM [update options](../virtual-machines/updates-maintenance-overview.md). ++Before you enable your machines for Update Manager, make sure that you understand the information in the following sections. ++## Key benefits ++Update Manager has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update Manager offers many new features and provides enhanced functionality over the original version available with Azure Automation. Some of those benefits are listed here: ++- Provides a native experience with zero onboarding. + - Built as native functionality on Azure compute and the Azure Arc for Servers platform for ease of use. + - No dependency on Log Analytics and Azure Automation. + - Azure Policy support. + - Global availability in all Azure compute and Azure Arc regions. +- Works with Azure roles and identity. + - Granular access control at the per-resource level instead of access control at the level of the Azure Automation account and Log Analytics workspace. + - Update Manager now has Azure Resource Manager-based operations.
These operations allow Azure role-based access control with roles based on Azure Resource Manager. +- Offers enhanced flexibility. + - Ability to take immediate action either by installing updates immediately or scheduling them for a later date. + - Check updates automatically or on demand. + - Helps secure machines with new ways of patching, such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hot patching](../automanage/automanage-hotpatch.md), or custom maintenance schedules. + - Sync patch cycles in relation to "patch Tuesday," the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month. ++The following diagram illustrates how Update Manager assesses and applies updates to all Azure machines and Azure Arc-enabled servers for both Windows and Linux. ++![Diagram that shows the Update Manager workflow.](./media/overview/update-management-center-overview.png) ++To support management of your Azure VM or non-Azure machine, Update Manager relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any Update Manager operations, such as **Check for updates**, **Install one-time update**, and **Periodic Assessment**, on your machine. The extension supports deployment to Azure VMs or Azure Arc-enabled servers by using the extension framework. The Update Manager extension is installed and managed by using: ++- [Azure VM Windows agent](../virtual-machines/extensions/agent-windows.md) or the [Azure VM Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs. +- [Azure Arc-enabled servers agent](../azure-arc/servers/agent-overview.md) for non-Azure Linux and Windows machines or physical servers. ++ Update Manager manages the extension agent installation and configuration. Manual intervention isn't required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The Update Manager extension runs code locally on the machine to interact with the operating system, and this includes: ++- Retrieving assessment information about the status of system updates, as reported by the Windows Update client or Linux package manager. +- Initiating the download and installation of approved updates with the Windows Update client or Linux package manager. ++All assessment information and update installation results are reported to Update Manager from the extension and are available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results. ++The machines assigned to Update Manager report how up to date they are based on what source they're configured to synchronize with. You can configure [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update, which is the default. You can configure Linux machines to report to a local or public YUM or APT package repository.
If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft Update, the results in Update Manager might differ from what Microsoft Update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository. ++You can manage your Azure VMs or Azure Arc-enabled servers directly or at scale with Update Manager. ++## Prerequisites ++Along with the following prerequisites, see [Support matrix](support-matrix.md) for Update Manager. ++### Role ++Resource | Role + | +|Azure VM | [Azure Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) or Azure [Owner](../role-based-access-control/built-in-roles.md#owner) +Azure Arc-enabled server | [Azure Connected Machine Resource Administrator](../azure-arc/servers/security-overview.md#identity-and-access-control) ++### Permissions ++You need the following permissions to create and manage update deployments. The table shows the permissions that are needed when you use Update Manager. ++Actions |Permission |Scope | + | | | +|Install update on Azure VMs |Microsoft.Compute/virtualMachines/installPatches/action || +|Update assessment on Azure VMs |Microsoft.Compute/virtualMachines/assessPatches/action || +|Install update on Azure Arc-enabled server |Microsoft.HybridCompute/machines/installPatches/action || +|Update assessment on Azure Arc-enabled server |Microsoft.HybridCompute/machines/assessPatches/action || +|Register the subscription for the Microsoft.Maintenance resource provider| Microsoft.Maintenance/register/action | Subscription| +|Create/modify maintenance configuration |Microsoft.Maintenance/maintenanceConfigurations/write |Subscription/resource group | +|Create/modify configuration assignments |Microsoft.Maintenance/configurationAssignments/write |Machine | +|Read permission for Maintenance updates resource |Microsoft.Maintenance/updates/read |Machine | +|Read permission for Maintenance apply updates resource |Microsoft.Maintenance/applyUpdates/read |Machine | ++### VM images ++For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems). ++Currently, Update Manager has the following limitations regarding operating system support: ++ - Marketplace images other than the [list of supported Marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported. + - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and *VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery* aren't fully supported for now. You can use on-demand operations such as **One-time update** and **Check for updates** in Update Manager. ++For the preceding limitations, we recommend that you use [Automation Update Management](../automation/update-management/overview.md) until support is available in Update Manager. To learn more, see [Supported operating systems](support-matrix.md#supported-operating-systems). ++## VM extensions ++Azure VM extensions and Azure Arc-enabled VM extensions are available. 
++#### [Azure VM extensions](#tab/azure-vms) ++| Operating system| Extension +|-|-| +|Windows | Microsoft.CPlat.Core.WindowsPatchExtension| +|Linux | Microsoft.CPlat.Core.LinuxPatchExtension | ++#### [Azure Arc-enabled VM extensions](#tab/azure-arc-vms) ++| Operating system| Extension +|-|-| +|Windows | Microsoft.CPlat.Core.WindowsPatchExtension (Periodic assessment) <br> Microsoft.SoftwareUpdateManagement.WindowsOsUpdateExtension (On-demand operations and Schedule patching) | +|Linux | Microsoft.SoftwareUpdateManagement.LinuxOsUpdateExtension (On-demand operations and Schedule patching) <br> Microsoft.CPlat.Core.LinuxPatchExtension (Periodic assessment) | ++To view the available extensions for a VM in the Azure portal: ++1. Go to the [Azure portal](https://portal.azure.com) and select a VM. +1. On the VM home page, under **Settings**, select **Extensions + applications**. +1. On the **Extensions** tab, you can view the available extensions. +++### Network planning ++To prepare your network to support Update Manager, you might need to configure some infrastructure components. ++For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [Issues related to HTTP/Proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must also allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry). ++For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../virtual-machines/workloads/redhat/redhat-rhui.md#the-ips-for-the-rhui-content-delivery-servers) for required endpoints. For other Linux distributions, see your provider documentation. ++## Next steps ++- [View updates for a single machine](view-updates.md) +- [Deploy updates now (on-demand) for a single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) +- [Manage update settings via the portal](manage-update-settings.md) +- [Manage multiple machines by using Update Manager](manage-multiple-machines.md) |
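As a command-line alternative to the portal steps above for viewing a VM's extensions, the following Azure CLI sketch lists the extensions installed on an Azure VM; the resource group and VM names are placeholders.

```azurecli
# A sketch: list installed extensions on a VM to confirm the Update
# Manager patch extension (for example,
# Microsoft.CPlat.Core.WindowsPatchExtension) is present.
az vm extension list \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --query "[].{name:name, state:provisioningState}" \
  --output table
```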
update-manager | Periodic Assessment At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/periodic-assessment-at-scale.md | + + Title: Enable Periodic Assessment using policy +description: This article shows how to manage update settings for your Windows and Linux machines managed by Azure Update Manager. +++ Last updated : 09/18/2023++++# Automate assessment at scale by using Azure Policy ++This article describes how to enable Periodic Assessment for your machines at scale by using Azure Policy. **Periodic Assessment** is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of manually performing an assessment every time you need to check the update status. After you enable this setting, Update Manager fetches updates on your machine once every 24 hours. ++## Enable Periodic Assessment for your Azure machines by using Azure Policy ++1. Go to **Policy** in the Azure portal and select **Authoring** > **Definitions**. +1. From the **Category** dropdown, select **Update Manager**. Select **[Preview]: Configure periodic checking for missing system updates on Azure virtual machines** for Azure machines. +1. When **Policy definition** opens, select **Assign**. +1. On the **Basics** tab, select your subscription as your scope. You can also specify a resource group within your subscription as the scope. Select **Next**. +1. On the **Parameters** tab, clear **Only show parameters that need input or review** so that you can see the values of parameters. In **Assessment** mode, select **AutomaticByPlatform** > **Operating system** > **Next**. You need to create separate policies for Windows and Linux. +1. On the **Remediation** tab, select **Create a remediation task** so that periodic assessment is enabled on your machines. Select **Next**. +1. On the **Non-compliance message** tab, provide the message that you want to see in the event of noncompliance. For example, use **Your machine doesn't have periodic assessment enabled.** Select **Review + Create.** +1. On the **Review + Create** tab, select **Create** to trigger **Assignment and Remediation Task** creation, which can take a minute or so. ++You can monitor the compliance of resources under **Compliance** and remediation status under **Remediation** on the Azure Policy home page. ++## Enable Periodic Assessment for your Azure Arc-enabled machines by using Azure Policy ++1. Go to **Policy** in the Azure portal and select **Authoring** > **Definitions**. +1. From the **Category** dropdown, select **Update Manager**. Select **[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers** for Azure Arc-enabled machines. +1. When **Policy definition** opens, select **Assign**. +1. On the **Basics** tab, select your subscription as your scope. You can also specify a resource group within your subscription as the scope. Select **Next**. +1. On the **Parameters** tab, clear **Only show parameters that need input or review** so that you can see the values of parameters. In **Assessment** mode, select **AutomaticByPlatform** > **Operating system** > **Next**. You need to create separate policies for Windows and Linux. +1. On the **Remediation** tab, select **Create a remediation task** so that periodic assessment is enabled on your machines. Select **Next**. +1. On the **Non-compliance message** tab, provide the message that you want to see in the event of noncompliance.
For example, use **Your machine doesn't have periodic assessment enabled.** Select **Review + Create.** +1. On the **Review + Create** tab, select **Create** to trigger **Assignment and Remediation Task** creation, which can take a minute or so. ++You can monitor compliance of resources under **Compliance** and remediation status under **Remediation** on the Azure Policy home page. ++## Monitor if Periodic Assessment is enabled for your machines ++This procedure applies to both Azure and Azure Arc-enabled machines. ++1. Go to **Policy** in the Azure portal and select **Authoring** > **Definitions**. +1. From the **Category** dropdown, select **Update Manager**. Select **[Preview]: Machines should be configured to periodically check for missing system updates**. +1. When **Policy definition** opens, select **Assign**. +1. On the **Basics** tab, select your subscription as your scope. You can also specify a resource group within your subscription as the scope. Select **Next**. +1. On the **Parameters** and **Remediation** tabs, select **Next**. +1. On the **Non-compliance message** tab, provide the message that you want to see in the event of noncompliance. For example, use **Your machine doesn't have periodic assessment enabled.** Select **Review + Create.** +1. On the **Review + Create** tab, select **Create** to trigger **Assignment and Remediation Task** creation, which can take a minute or so. ++You can monitor compliance of resources under **Compliance** and remediation status under **Remediation** on the Azure Policy home page. ++## Next steps ++* [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Azure Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal. +* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
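The portal flow above creates the policy assignment and remediation task interactively. The following Azure CLI sketch outlines a comparable assignment at subscription scope; the display-name lookup string, the `eastus` location, and the subscription placeholder are assumptions, so confirm the built-in definition (and pick the Azure VM or Arc-enabled variant explicitly) before using it.

```azurecli
# A sketch: find a built-in periodic-assessment policy definition by
# display name (the lookup string is an assumption and returns the first
# match) and assign it with a system-assigned identity so remediation
# can run.
defId=$(az policy definition list \
  --query "[?contains(displayName, 'periodic checking for missing system updates')].id | [0]" \
  --output tsv)

az policy assignment create \
  --name "enable-periodic-assessment" \
  --policy "$defId" \
  --scope "/subscriptions/<subscription-id>" \
  --mi-system-assigned \
  --location eastus

# Create a remediation task for existing machines.
az policy remediation create \
  --name "remediate-periodic-assessment" \
  --policy-assignment "enable-periodic-assessment"
```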
update-manager | Prerequsite For Schedule Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequsite-for-schedule-patching.md | + + Title: Configure schedule patching on Azure VMs for business continuity +description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager. + Last updated : 09/18/2023++++++# Configure schedule patching on Azure VMs for business continuity ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Azure VMs. ++This article is an overview of how to configure schedule patching and automatic guest virtual machine (VM) patching on Azure VMs by using the new prerequisite to ensure business continuity. The steps to configure both patching options on Azure Arc VMs remain the same. ++Currently, you can enable [automatic guest VM patching](../virtual-machines/automatic-vm-guest-patching.md) (autopatch) by setting the patch mode to **Azure-orchestrated** in the Azure portal or **AutomaticByPlatform** in the REST API, where patches are automatically applied during off-peak hours. ++To customize control over your patch installation, you can use [schedule patching](updates-maintenance-schedules.md#scheduled-patching) to define your maintenance window. You can [enable schedule patching](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) by setting the patch mode to **Azure-orchestrated** in the Azure portal or **AutomaticByPlatform** in the REST API and attaching a schedule to the Azure VM. As a result, VMs couldn't be differentiated between **schedule patching** and **automatic guest VM patching**, because both had the patch mode set to **Azure-orchestrated**. ++In some instances, when you remove the schedule from a VM, there's a possibility that the VM might be autopatched and rebooted. To overcome these limitations, we've introduced a new prerequisite, `BypassPlatformSafetyChecksOnUserSchedule`, which can now be set to `true` to identify a VM that uses schedule patching. VMs with this property set to `true` are no longer autopatched when they don't have an associated maintenance configuration. ++> [!IMPORTANT] +> For a continued scheduled patching experience, you must ensure that the new VM property, `BypassPlatformSafetyChecksOnUserSchedule`, is enabled on all your Azure VMs (existing or new) that have schedules attached to them by **June 30, 2023**. This setting ensures that machines are patched by using your configured schedules and not autopatched. If you don't enable it by June 30, 2023, you get an error that the prerequisites aren't met. ++## Schedule patching in an availability set ++All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. ++VMs in a common availability set are updated within Update Domain boundaries. VMs across multiple Update Domains aren't updated concurrently. ++## Find VMs with associated schedules ++To identify the list of VMs with the associated schedules for which you have to enable the new VM property: ++1. Go to the **Azure Update Manager** home page and select the **Machines** tab. +1. In the **Patch orchestration** filter, select **Azure Managed - Safe Deployment**. +1. Use the **Select all** option to select the machines and then select **Export to CSV**. +1. Open the CSV file and in the column **Associated schedules**, select the rows that have an entry.
++ In the corresponding **Name** column, you can view the list of VMs for which you need to enable the `BypassPlatformSafetyChecksOnUserSchedule` flag. ++## Enable schedule patching on Azure VMs ++To enable schedule patching on Azure VMs, follow these steps. ++# [Azure portal](#tab/new-prereq-portal) ++## Prerequisites ++Patch orchestration = Customer Managed Schedules ++Select the patch orchestration option as **Customer Managed Schedules**. The new patch orchestration option enables the following VM properties on your behalf after receiving your consent: ++ - Patch mode = `Azure-orchestrated` + - `BypassPlatformSafetyChecksOnUserSchedule` = TRUE ++### Enable for new VMs ++You can select the patch orchestration option for new VMs that would be associated with the schedules. ++To update the patch mode: ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Go to **Virtual machine** and select **Create** to open the **Create a virtual machine** page. +1. On the **Basics** tab, fill in all the mandatory fields. +1. On the **Management** tab, under **Guest OS updates**, for **Patch orchestration options**, select **Azure-orchestrated**. +1. Fill in the entries on the **Monitoring**, **Advanced**, and **Tags** tabs. +1. Select **Review + Create**. Select **Create** to create a new VM with the appropriate patch orchestration option. ++To schedule patching for the newly created VMs, follow the procedure from step 2 in the next section, "Enable for existing VMs." ++### Enable for existing VMs ++You can update the patch orchestration option for existing VMs that either already have schedules associated or will be newly associated with a schedule. ++If **Patch orchestration** is set as **Azure-orchestrated** or **Azure Managed - Safe Deployment (AutomaticByPlatform)**, `BypassPlatformSafetyChecksOnUserSchedule` is set to `false`, and there's no schedule associated, the VMs will be autopatched. ++To update the patch mode: ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Go to **Azure Update Manager** and select **Update Settings**. +1. In **Change update settings**, select **Add machine**. +1. In **Select resources**, select your VMs and then select **Add**. +1. On the **Change update settings** pane, under **Patch orchestration**, select **Customer Managed Schedules** and then select **Save**. ++Attach a schedule after you finish the preceding steps. ++To check if `BypassPlatformSafetyChecksOnUserSchedule` is enabled, go to the **Virtual machine** home page and select **Overview** > **JSON View**. You can also check it from the Azure CLI, as sketched at the end of this article.
++# [REST API](#tab/new-prereq-rest-api) ++## Prerequisites ++- Patch mode = `AutomaticByPlatform` +- `BypassPlatformSafetyChecksOnUserSchedule` = TRUE ++### Enable on Windows VMs ++``` +PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01` +``` ++```json +{ + "location":"<location>", + "properties": { + "osProfile": { + "windowsConfiguration": { + "provisionVMAgent": true, + "enableAutomaticUpdates": true, + "patchSettings": { + "patchMode": "AutomaticByPlatform", + "automaticByPlatformSettings":{ + "bypassPlatformSafetyChecksOnUserSchedule":true + } + } + } + } + } +} +``` ++### Enable on Linux VMs ++``` +PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01` +``` ++```json +{ + "location":"<location>", + "properties": { + "osProfile": { + "linuxConfiguration": { + "provisionVMAgent": true, + "patchSettings": { + "patchMode": "AutomaticByPlatform", + "automaticByPlatformSettings":{ + "bypassPlatformSafetyChecksOnUserSchedule":true + } + } + } + } + } +} +``` +++> [!NOTE] +> Currently, you can only enable the new prerequisite for schedule patching via the Azure portal and the REST API. It can't be enabled via the Azure CLI or PowerShell. ++## Enable automatic guest VM patching on Azure VMs ++To enable automatic guest VM patching on your Azure VMs, follow these steps. ++# [Azure portal](#tab/auto-portal) ++## Prerequisite ++Patch mode = `Azure-orchestrated` ++### Enable for new VMs ++You can select the patch orchestration option for new VMs when you create them. ++To update the patch mode: ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Go to **Virtual machines** and select **Create** to open the **Create a virtual machine** page. +1. On the **Basics** tab, fill in all the mandatory fields. +1. On the **Management** tab, under **Guest OS updates**, for **Patch orchestration options**, select **Azure-orchestrated**. +1. Fill in the entries on the **Monitoring**, **Advanced**, and **Tags** tabs. +1. Select **Review + create**, and then select **Create** to create a new VM with the appropriate patch orchestration option. ++### Enable for existing VMs ++To update the patch mode: ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Go to **Update Manager** and select **Update settings**. +1. On the **Change update settings** pane, select **Add machine**. +1. On the **Select resources** pane, select your VMs and then select **Add**. +1. On the **Change update settings** pane, under **Patch orchestration**, select **Azure Managed - Safe Deployment** and then select **Save**.
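++If you manage VMs from the command line, you can also set the patch mode for automatic guest VM patching by updating the VM model directly. The following is a minimal sketch that assumes a Windows VM named `myVirtualMachine` in the resource group `myResourceGroup`; for Linux VMs, use `osProfile.linuxConfiguration` instead. This sets only the patch mode; as the earlier note explains, the `BypassPlatformSafetyChecksOnUserSchedule` prerequisite itself can't currently be set via the Azure CLI or PowerShell. ++```azurecli +# Set the patch mode so that the platform orchestrates patching (automatic guest VM patching). +az vm update --resource-group myResourceGroup --name myVirtualMachine --set osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform +```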
++# [REST API](#tab/auto-rest-api) ++## Prerequisites ++- Patch mode = `AutomaticByPlatform` +- `BypassPlatformSafetyChecksOnUserSchedule` = FALSE ++### Enable on Windows VMs ++``` +PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01` +``` ++```json +{ + "location":"<location>", + "properties": { + "osProfile": { + "windowsConfiguration": { + "provisionVMAgent": true, + "enableAutomaticUpdates": true, + "patchSettings": { + "patchMode": "AutomaticByPlatform", + "automaticByPlatformSettings":{ + "bypassPlatformSafetyChecksOnUserSchedule":false + } + } + } + } + } +} +``` ++### Enable on Linux VMs ++``` +PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2023-03-01` +``` ++```json +{ + "location":"<location>", + "properties": { + "osProfile": { + "linuxConfiguration": { + "provisionVMAgent": true, + "patchSettings": { + "patchMode": "AutomaticByPlatform", + "automaticByPlatformSettings":{ + "bypassPlatformSafetyChecksOnUserSchedule":false + } + } + } + } + } +} +``` +++## User scenarios ++Scenarios | Azure-orchestrated | BypassPlatformSafetyChecksOnUserSchedule | Schedule associated | Expected behavior in Azure | + | | | | | +Scenario 1 | Yes | True | Yes | The schedule patch runs as defined by the user. | +Scenario 2 | Yes | True | No | Autopatch and schedule patch don't run.| +Scenario 3 | Yes | False | Yes | Autopatch and schedule patch don't run. You get an error that the prerequisites for schedule patch aren't met.| +Scenario 4 | Yes | False | No | The VM is autopatched.| +Scenario 5 | No | True | Yes | Autopatch and schedule patch don't run. You get an error that the prerequisites for schedule patch aren't met. | +Scenario 6 | No | True | No | Autopatch and schedule patch don't run.| +Scenario 7 | No | False | Yes | Autopatch and schedule patch don't run. You get an error that the prerequisites for schedule patch aren't met.| +Scenario 8 | No | False | No | Autopatch and schedule patch don't run.| ++## Next steps ++To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
update-manager | Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/query-logs.md | + + Title: Query logs and results from Update Manager +description: This article provides details on how you can review logs and search results from Azure Update Manager by using Azure Resource Graph. +++ Last updated : 09/18/2023++++# Overview of query logs in Azure Update Manager ++Logs created from operations like update assessments and installations are stored by Azure Update Manager in [Azure Resource Graph](../governance/resource-graph/overview.md). Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update Manager uses Resource Graph to store its results. You can view the update history for the last 30 days from the resources. ++This article describes the structure of the logs from Update Manager and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs. ++## Log structure ++Update Manager sends the results of all its operations into Azure Resource Graph as logs, which are available for 30 days. The following sections describe the structure of the logs sent to Azure Resource Graph. ++### Patch assessment results ++The table `patchassessmentresources` includes resources related to machine patch assessment. The following table describes its properties. ++| Property | Description | +|-|-| +| `ID` | The Azure Resource Manager ID forwarding the result. It's similar to the [REST API](manage-vms-programmatically.md) path for Guest OS assessment. Typically, `<resourcePath>/patchAssessmentResults/latest` or `<resourcePath>/patchAssessmentResults/latest/softwarePatches/<update>`. | +| `NAME` | If the ID is of type `<resourcePath>/patchAssessmentResults/latest`, then the record contains the unique GUID for the completed assessment operation. If `<resourcePath>/patchAssessmentResults/latest/softwarePatches/<update>`, then the record contains the update name or label. | +| `TYPE` |Specifies the type of log for assessment. If the type is `patchassessmentresults`, then the record provides a summary of OS assessment with numerical aggregate statistics. If the type is `patchassessmentresults/softwarepatches`, then the record describes a specific OS update available for the resource. | +| `TENANTID` | Azure tenant ID for the Azure VM or Azure Arc-enabled server resource.| +| `KIND` | Intentionally left blank for future use. | +| `LOCATION` | Azure cloud region where the Azure VM or Azure Arc-enabled server resource exists.| +| `RESOURCEGROUP` | Azure resource group hosting the Azure VM or Azure Arc-enabled server resource.| +| `SUBSCRIPTIONID` | Azure subscription ID for the Azure VM or Azure Arc-enabled server resource. | +| `MANAGEDBY` | Intentionally left blank for future use. | +| `SKU` | Intentionally left blank for future use. | +| `PLAN` | Intentionally left blank for future use. | +| `PROPERTIES` | Captures details of operation in JSON format. More information follows this table.| +| `TAGS` | Azure tags defined for the Azure VM or Azure Arc-enabled servers resource. | +| `IDENTITY` | Intentionally left blank for future use. | +| `ZONES` | Intentionally left blank for future use. | +| `EXTENDEDLOCATION` | Intentionally left blank for future use.
| ++### Description of the patchassessmentresults property ++If the property for the resource type is `patchassessmentresults`, it includes the information in the following table. ++|Value |Description | +||| +| `rebootPending` |Flag to specify if the specific update requires the OS to reboot to finish installation, as provided by the machine's OS update service or package manager. If your OS package manager or update service doesn't require a reboot, the value of the field is set to `false`.| +|`patchServiceUsed` |OS service used on the machine to install updates. `WU-WSUS` for Windows Update service or Windows Server Update Service. For Linux, it's the OS package manager like `YUM`, `APT`, or `Zypper`.| +|`osType` |Represents the type of operating system: `Windows` or `Linux`.| +|`startDateTime` |Timestamp (UTC) representing when the OS update assessment task started execution on the machine.| +|`lastModifiedDateTime` |Timestamp (UTC) representing when the record was last updated.| +|`startedBy` |Identifies if a user or an Azure service triggered the OS update assessment. For more information on the operation, see [Azure activity log](/azure/azure-resource-manager/management/view-activity-logs).| +|`errorDetails` |First five error messages generated while executing the update assessment from the machine's OS package manager or update service.| +|`availablePatchCountByClassification` |Number of available OS updates in each category that the specific updates belong to, based on the OS vendor. The machine's OS update service or package manager generates the information. If the OS package manager or update service doesn't provide the detail of category, the value is `Others` (for Linux) or `Updates` (for Windows Server).| ++If the property for the resource type is `patchassessmentresults/softwarepatches`, it includes the information in the following table. ++|Value |Description | +||| +|`lastModifiedDateTime` |Timestamp (UTC) representing when the record was last updated.| +|`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. The machine's OS update service or package manager generates the information. If your OS package manager or update service doesn't provide the detail of when an update was provided by the OS vendor, the value is null.| +|`classifications` |Category that the specific update belongs to according to the OS vendor. The machine's OS update service or package manager generates the information. If your OS package manager or update service doesn't provide the detail of category, the value is `Others` (for Linux) or `Updates` (for Windows Server). | +|`rebootRequired` |Value indicates if the specific update requires the OS to reboot to finish the installation. The machine's OS update service or package manager generates the information. If your OS package manager or update service doesn't require a reboot, the value is `false`.| +|`rebootBehavior` |Reboot behavior that's set when you configure the update deployment, which specifies whether Update Manager is allowed to reboot the target machine. | +|`patchName` |Name or label for the specific update generated by the machine's OS package manager or update service.| +|`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service.| +|`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by the Linux package manager.
For example, `1.0.1.el7.3`.| ++### Patch installation results ++The table `patchinstallationresources` includes resources related to machine patch installation. The following table describes its properties. ++| Property | Description | +|-|-| +| `ID` | The Azure Resource Manager ID forwarding the result. It's similar to the [REST API](manage-vms-programmatically.md) path for Guest OS assessment. Typically, `<resourcePath>/patchInstallationResults/<GUID>` or `<resourcePath>/patchInstallationResults/<GUID>/softwarePatches/<update>`. | +| `NAME` | If the ID is of type `<resourcePath>/patchInstallationResults`, then the record contains the unique GUID for the completed update operation. If `<resourcePath>/patchInstallationResults/softwarePatches/<update>`, then the record contains the update name or label being installed on the machine. | +| `TYPE` |Specifies the type of log for installation. If type is `patchinstallationresults`, then the record provides a summary of the OS installation with numerical aggregate statistics. If type is `patchinstallationresults/softwarepatches`, then the record describes a specific OS update installed for the resource. | +| `TENANTID` | Azure tenant ID for the Azure VM or Azure Arc-enabled server resource. | +| `KIND` | Intentionally left blank for future use. | +| `LOCATION` | Azure cloud region where the Azure VM or Azure Arc-enabled server resource exists.| +| `RESOURCEGROUP` | Azure resource group hosting the Azure VM or Azure Arc-enabled server resource.| +| `SUBSCRIPTIONID` | Azure subscription ID for the Azure VM or Azure Arc-enabled server resource.| +| `MANAGEDBY` | Intentionally left blank for future use. | +| `SKU` | Intentionally left blank for future use. | +| `PLAN` | Intentionally left blank for future use. | +| `PROPERTIES` | Captures details of operation in JSON format. More information follows this table.| +| `TAGS` | Azure tags defined for the Azure VM or Azure Arc-enabled servers resource. | +| `IDENTITY` | Intentionally left blank for future use. | +| `ZONES` | Intentionally left blank for future use. | +| `EXTENDEDLOCATION` | Intentionally left blank for future use. | ++### Description of the patchinstallationresults property ++If the property for the resource type is `patchinstallationresults`, it includes the information in the following table. ++|Value |Description | +||| +|`installationActivityId` | Unique GUID for the OS update installation run. | +|`maintenanceWindowExceeded` | Indicates whether the update installation run exceeded the defined maintenance window (`True` or `False`). | +|`lastModifiedDateTime` |Timestamp (UTC) representing when the record was last updated. | +|`notSelectedPatchCount` |Number of OS updates available on the machine that weren't selected for installation in an update deployment. | +|`installedPatchCount` |Number of OS updates specified in an update deployment that were successfully installed. | +|`excludedPatchCount` |Number of OS updates available on the machine that were excluded from installation in an update deployment.| +|`pendingPatchCount` |Number of OS updates specified in an update deployment that are still waiting to be installed. | +|`patchServiceUsed` |OS service used on the machine to install updates. `WU-WSUS` for Windows Update service or Windows Server Update Service. For Linux, it's the OS package manager like `YUM`, `APT`, or `Zypper`. | +|`failedPatchCount` |Number of OS updates specified in an update deployment that failed to install.
| +|`startDateTime` |Timestamp (UTC) representing when the OS update installation task started execution on the machine. | +|`rebootStatus` |Information from the OS update service or package manager about whether the OS needs to be restarted to finish the update installation. Status values are `NotNeeded` (no restart is needed), `Required` (OS restart is needed for completion), `Started` (restart was initiated), `Failed` (OS couldn't be restarted), and `Completed` (restart was done successfully). | +|`startedBy` |Identifies if a user or an Azure service triggered the OS update installation. For more information on the operation, see [Azure activity log](/azure/azure-resource-manager/management/view-activity-logs). | +|`status` |Status of the OS update installation run. Values can be `NotStarted`, `InProgress`, `Failed`, `Succeeded`, and `CompletedWithWarnings`. The update installation run is deemed `Failed` if one or more OS update installations are unsuccessful. | +|`osType` |Represents the type of operating system: `Windows` or `Linux`. | +|`errorDetails` |Includes the first five error messages generated while running update installation from the machine's OS package manager or update service. | +|`maintenanceRunId` | Maintenance run identifier for automatic guest VM patching, or the schedule run ID for recurring updates. | ++If the property for the resource type is `patchinstallationresults/softwarepatches`, it includes the information in the following table. ++|Value |Description | +||| +|`installationState` |Installation status for the specific OS update. Values are `Installed`, `Failed`, `Pending`, `NotSelected`, and `Excluded`. | +|`lastModifiedDateTime` |Timestamp (UTC) representing when the record was last updated. | +|`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. The machine's OS update service or package manager generates the information. If your OS package manager or update service doesn't provide the detail of when an update was provided by the OS vendor, the value is null. | +|`classifications` |Category that the specific update belongs to according to the OS vendor, as provided by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of category, the value of the field is `Others` (for Linux) or `Updates` (for Windows Server). | +|`rebootRequired` |Flag to specify if the specific update requires the OS to reboot to finish the installation, as provided by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide information regarding the need of an OS reboot, the value of the field is set to `false`. | +|`rebootBehavior` |Reboot behavior that the user set in the update deployment, which specifies whether Update Manager is allowed to reboot the OS. | +|`patchName` |Name or label for the specific update as provided by the machine's OS package manager or update service. | +|`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service. | +|`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by the Linux package manager. For example, `1.0.1.el7.3`. | ++### Maintenance resources ++The table `maintenanceresources` includes resources related to maintenance configuration. The following table describes its properties.
++| Property | Description | +|-|-| +| `ID` | The Azure Resource Manager ID forwarding the result. It's similar to the [REST API](manage-vms-programmatically.md) path for creating a maintenance configuration. | +| `NAME` | If the ID is of type `<resourcePath>/applyupdates`, then the record contains a unique GUID for the maintenance run. If `<resourcePath>/configurationassignments`, then the record contains the assignment of a maintenance configuration to an Azure or Azure Arc VM. | +| `TYPE` |Specifies the type of log for maintenance. If type is `applyupdates`, then the record provides details of the maintenance run record at the machine level. If type is `configurationassignments`, then the record describes the link between an Azure VM or Azure Arc VM and a maintenance configuration. | +| `TENANTID` | Azure tenant ID for the Azure VM or Azure Arc-enabled server resource. | +| `KIND` | Intentionally left blank for future use. | +| `LOCATION` | Azure cloud region where the Azure VM or Azure Arc-enabled server resource exists.| +| `RESOURCEGROUP` | Azure resource group hosting the Azure VM or Azure Arc-enabled server resource.| +| `SUBSCRIPTIONID` | Azure subscription ID for the Azure VM or Azure Arc-enabled server resource.| +| `MANAGEDBY` | Intentionally left blank for future use. | +| `SKU` | Intentionally left blank for future use. | +| `PLAN` | Intentionally left blank for future use. | +| `PROPERTIES` | Captures details of operation in JSON format. More information follows this table.| +| `TAGS` | Azure tags defined for the Azure VM or Azure Arc-enabled servers resource. | +| `IDENTITY` | Intentionally left blank for future use. | +| `ZONES` | Intentionally left blank for future use. | +| `EXTENDEDLOCATION` | Intentionally left blank for future use. | ++### Description of the applyupdates property ++If the property for the resource type is `applyupdates`, it includes the information in the following table. ++|Value |Description | +||| +|`maintenanceConfigurationId` | Azure Resource Manager ID of the applied maintenance configuration. | +|`maintenanceScope` | Maintenance scope of the applied maintenance configuration. | +|`resourceId` | Azure Resource Manager resource ID of the Azure or Azure Arc VM. | +|`correlationId` | Schedule run ID of the maintenance or schedule run. This information can be used to find all the VMs that were part of the same schedule. | +|`startDateTime` | Start date and time of a schedule. | +|`endDateTime` | End date and time of a schedule. | ++If the property for the resource type is `configurationassignments`, it includes the information in the following table. ++|Value |Description | +||| +|`resourceId` | Azure Resource Manager resource ID of the Azure or Azure Arc VM. | +|`maintenanceConfigurationId` | Azure Resource Manager ID of the applied maintenance configuration. | ++## Next steps ++- For details of sample queries, see [Sample query logs](sample-query-logs.md). +- To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
update-manager | Quickstart On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/quickstart-on-demand.md | + + Title: 'Quickstart: Deploy updates by using Update Manager in the Azure portal' +description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager by using the Azure portal. + Last updated : 09/18/2023++++++# Quickstart: Check and install on-demand updates ++By using Azure Update Manager, you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis. You can also take control by checking and installing updates manually. ++This quickstart explains how to perform a manual assessment and apply updates on selected Azure virtual machines (VMs) or an Azure Arc-enabled server on-premises or in cloud environments. ++## Prerequisites ++- An Azure account with an active subscription. If you don't have one yet, sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- Your role must be either an [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor) for an Azure VM, and resource administrator for Azure Arc-enabled servers. +- Ensure that the target machines meet the operating system requirements for Windows Server and Linux. For more information, see [Overview](overview.md). ++## Check updates ++1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**. ++1. Select **Get started** > **On-demand assessment and updates** > **Check for updates**. ++ :::image type="content" source="./media/quickstart-on-demand/quickstart-check-updates.png" alt-text="Screenshot that shows accessing check for updates."::: ++ On the **Select resources and check for updates** pane, a table lists all the machines in the specific Azure subscription. ++1. Select one or more machines from the list and select **Check for updates** to initiate a compliance scan. + +After the assessment is finished, a confirmation message appears in the upper-right corner of the page. ++## Configure settings ++For the assessed machines that are reporting updates, you can configure [periodic assessment](assessment-options.md#periodic-assessment), [hotpatching](updates-maintenance-schedules.md#hotpatching), and [patch orchestration](manage-multiple-machines.md#summary-of-machine-status), and either apply updates immediately or schedule them by defining a maintenance window. ++To configure the settings on your machines: ++1. On the **Azure Update Manager | Get started** page, in **On-demand assessment and updates**, select **Update settings**. ++ :::image type="content" source="./media/quickstart-on-demand/quickstart-update-settings.png" alt-text="Screenshot that shows how to access the Update settings option to configure updates for virtual machines."::: ++1. On the **Update settings to change** page, select **Periodic assessment**, **Hotpatch**, or **Patch orchestration** to configure. Select **Next**. For more information, see [Configure settings on virtual machines](manage-update-settings.md#configure-settings-on-a-single-vm). ++1. On the **Review and change** tab, verify the resource selection and update settings and select **Review and change**. ++A notification confirms that the update settings were successfully applied.
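++Because Update Manager stores assessment results in [Azure Resource Graph](query-logs.md), you can optionally verify the results of a compliance scan outside the portal. The following is a minimal sketch for Azure Resource Graph Explorer, based on the `patchassessmentresources` table and the sample queries elsewhere in this documentation set; it lists when each machine was last assessed: ++```kusto +patchassessmentresources +| where type !has "softwarepatches" +| extend prop = parse_json(properties) +| project id, lastAssessed = prop.lastModifiedDateTime, OS = prop.osType +```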
++## Install updates ++Based on the last assessment performed on the selected machines, you can now select resources and machines to install the updates. ++1. On the **Azure Update Manager | Get started** page, in **On-demand assessment and updates**, select **Install updates by machines**. ++ :::image type="content" source="./media/quickstart-on-demand/quickstart-install-updates.png" alt-text="Screenshot that shows how to access the Install update settings option to install the updates for virtual machines."::: ++1. On the **Install one-time updates** pane, select one or more machines from the list on the **Machines** tab. Select **Next**. ++1. On the **Updates** tab, specify the updates to include in the deployment and select **Next**: ++ - Include update classification. + - Include the Knowledge Base (KB) ID/package, by specific KB IDs or package. For Windows, see the [Microsoft Security Response Center (MSRC)](https://msrc.microsoft.com/update-guide/deployments) for the latest information. + - Exclude the KB ID/package that you don't want to install as part of the process. Updates that aren't shown in the list can be installed, depending on the time between the last assessment and the release of new updates. + - Include by maximum patch publish date to include the updates published on or before a specific date. ++1. On the **Properties** tab, specify the **Reboot** option and the **Maintenance window** (in minutes). Select **Next**. ++1. On the **Review + install** tab, verify the update deployment options and select **Install**. ++A notification confirms that the installation of updates is in progress. After the update is finished, you can view the results on the **Update Manager | History** page. ++## Next steps ++Learn about [managing multiple machines](manage-multiple-machines.md). |
update-manager | Sample Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/sample-query-logs.md | + + Title: Sample query logs and results from Azure Update Manager +description: This article provides details of sample query logs from Azure Update Manager by using Azure Resource Graph. +++ Last updated : 09/18/2023++++# Sample queries ++The following are some sample queries to help you get started querying the update assessment and deployment information collected from your managed machines. For more information on logs created from operations such as update assessments and installations, see the [overview of query logs](query-logs.md). + ## List available updates for all your machines grouped by update category ++The following query returns a list of pending updates for your machine with the time when the assessment was performed, the resource ID for the assessment, OS type on the machine, and the OS updates available based on update classification. ++```kusto +patchassessmentresources +| where type !has "softwarepatches" +| extend prop = parse_json(properties) +| extend lastTime = properties.lastModifiedDateTime +| extend updateRollupCount = prop.availablePatchCountByClassification.updateRollup, featurePackCount = prop.availablePatchCountByClassification.featurePack, servicePackCount = prop.availablePatchCountByClassification.servicePack, definitionCount = prop.availablePatchCountByClassification.definition, securityCount = prop.availablePatchCountByClassification.security, criticalCount = prop.availablePatchCountByClassification.critical, updatesCount = prop.availablePatchCountByClassification.updates, toolsCount = prop.availablePatchCountByClassification.tools, otherCount = prop.availablePatchCountByClassification.other, OS = prop.osType +| project lastTime, id, OS, updateRollupCount, featurePackCount, servicePackCount, definitionCount, securityCount, criticalCount, updatesCount, toolsCount, otherCount +``` ++## Count of update installations ++The following query returns a list of update installations with their status for your machines from the last seven days. Results include the time when the update deployment was run, the resource ID of the installation, machine details, and the count of OS updates installed based on their status and your selection. ++```kusto +patchinstallationresources +| where type !has "softwarepatches" +| extend machineName = tostring(split(id, "/", 8)), resourceType = tostring(split(type, "/", 0)), rgName = tostring(split(id, "/", 4)) +| extend prop = parse_json(properties) +| extend lTime = todatetime(prop.lastModifiedDateTime), OS = tostring(prop.osType), installedPatchCount = tostring(prop.installedPatchCount), failedPatchCount = tostring(prop.failedPatchCount), pendingPatchCount = tostring(prop.pendingPatchCount), excludedPatchCount = tostring(prop.excludedPatchCount), notSelectedPatchCount = tostring(prop.notSelectedPatchCount) +| where lTime > ago(7d) +| project lTime, RunID=name,machineName, rgName, resourceType, OS, installedPatchCount, failedPatchCount, pendingPatchCount, excludedPatchCount, notSelectedPatchCount +``` ++## List of Windows Server OS update installations ++The following query returns a list of update installations for Windows Server with their status for your machines from the last seven days. Results include the time when the update deployment was run, the resource ID of the installation, machine details, and other related deployment details.
++```kusto +patchinstallationresources +| where type has "softwarepatches" and properties !has "version" +| extend machineName = tostring(split(id, "/", 8)), resourceType = tostring(split(type, "/", 0)), rgName = tostring(split(id, "/", 4)), RunID = tostring(split(id, "/", 10)) +| extend prop = parse_json(properties) +| extend lTime = todatetime(prop.lastModifiedDateTime), patchName = tostring(prop.patchName), kbId = tostring(prop.kbId), installationState = tostring(prop.installationState), classifications = tostring(prop.classifications) +| where lTime > ago(7d) +| project lTime, RunID, machineName, rgName, resourceType, patchName, kbId, classifications, installationState +| sort by RunID +``` ++## List of Linux OS update installations ++The following query returns a list of update installations for Linux with their status for your machines from the last seven days. Results include the time when the update deployment was run, the resource ID of the installation, machine details, and other related deployment details. ++```kusto +patchinstallationresources +| where type has "softwarepatches" and properties has "version" +| extend machineName = tostring(split(id, "/", 8)), resourceType = tostring(split(type, "/", 0)), rgName = tostring(split(id, "/", 4)), RunID = tostring(split(id, "/", 10)) +| extend prop = parse_json(properties) +| extend lTime = todatetime(prop.lastModifiedDateTime), patchName = tostring(prop.patchName), version = tostring(prop.version), installationState = tostring(prop.installationState), classifications = tostring(prop.classifications) +| where lTime > ago(7d) +| project lTime, RunID, machineName, rgName, resourceType, patchName, version, classifications, installationState +| sort by RunID +``` ++## List of maintenance run records at the VM level +The following query returns a list of all the maintenance run records for a VM: ++```kusto +maintenanceresources +| where ['id'] contains "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.compute/virtualmachines/<vm-name>" //VM Id here +| where ['type'] == "microsoft.maintenance/applyupdates" +| where properties.maintenanceScope == "InGuestPatch" +``` ++## Next steps +- Review logs and search results from Update Manager by using [Azure Resource Graph](query-logs.md). +- To troubleshoot issues in Update Manager, see [Troubleshoot Update Manager](troubleshoot.md). |
update-manager | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md | + + Title: Scheduling recurring updates in Azure Update Manager +description: This article details how to use Azure Update Manager to set update schedules that install recurring updates on your machines. + Last updated : 09/18/2023++++++# Schedule recurring updates for machines by using the Azure portal and Azure Policy ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++> [!IMPORTANT] +> For a seamless scheduled patching experience, we recommend that for all Azure virtual machines (VMs), you update the patch orchestration to **Customer Managed Schedules** by **June 30, 2023**. If you fail to update the patch orchestration by June 30, 2023, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md). ++You can use Azure Update Manager to create and save recurring deployment schedules. You can create a schedule on a daily, weekly, or hourly cadence. You can specify the machines that must be updated as part of the schedule and the updates to be installed. ++Update Manager then automatically installs the updates according to the created schedule, for a single VM or at scale. ++Update Manager uses a maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control). ++## Prerequisites for scheduled patching ++1. See [Prerequisites for Update Manager](./overview.md#prerequisites). +1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules**. For more information, see [Enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. ++ > [!NOTE] + > If you set the patch mode to **Azure orchestrated** (`AutomaticByPlatform`) but do not enable the **BypassPlatformSafetyChecksOnUserSchedule** flag and do not attach a maintenance configuration to an Azure machine, it's treated as an [automatic guest patching](../virtual-machines/automatic-vm-guest-patching.md)-enabled machine. The Azure platform automatically installs updates according to its own schedule. [Learn more](./overview.md#prerequisites). ++## Schedule patching in an availability set ++All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. ++VMs in a common availability set are updated within Update Domain boundaries. VMs across multiple Update Domains aren't updated concurrently. ++## Configure reboot settings ++The registry keys listed in [Configure Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot. A reboot can occur even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment. ++## Service limits ++The following are the recommended limits for the indicators.
++| Indicator | Limit | +|-|-| +| Number of schedules per subscription per region | 250 | +| Total number of resource associations to a schedule | 3,000 | +| Resource associations on each dynamic scope | 1,000 | +| Number of dynamic scopes per resource group or subscription per region | 250 | ++## Schedule recurring updates on a single VM ++You can schedule updates from the **Overview** or **Machines** pane on the **Update Manager** page or from the selected VM. ++# [From the Overview pane](#tab/schedule-updates-single-overview) ++To schedule recurring updates on a single VM: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Overview** page, select your subscription, and then select **Schedule updates**. ++1. On the **Create new maintenance configuration** page, you can create a schedule for a single VM. ++ Currently, only VMs and maintenance configurations in the same subscription are supported. ++1. On the **Basics** page, select **Subscription**, **Resource Group**, and all options in **Instance details**. + - Select **Maintenance scope** as **Guest (Azure VM, Azure Arc-enabled VMs/servers)**. + - Select **Add a schedule**. In **Add/Modify schedule**, specify the schedule details, such as: + + - **Start on** + - **Maintenance window** (in hours). The maximum maintenance window is 3 hours 55 minutes. + - **Repeats** (monthly, daily, or weekly) + - **Add end date** + - **Schedule summary** ++ The hourly option isn't supported in the portal but can be used through the [API](./manage-vms-programmatically.md#create-a-maintenance-configuration-schedule). ++ :::image type="content" source="./media/scheduled-updates/scheduled-patching-basics-page.png" alt-text="Screenshot that shows the Scheduled patching basics page."::: ++ For **Repeats monthly**, there are two options: ++ - Repeat on a calendar date (optionally run on the last date of the month). + - Repeat on the nth (first, second, and so on) occurrence of a day of the week (for example, Monday or Tuesday) in the month. You can also specify an offset from the day set, ranging from -6 to +6 days. For example, if you want to patch on the first Saturday after a patch on Tuesday, set the recurrence as the second Tuesday of the month with a +4 day offset. Optionally, you can also specify an end date when you want the schedule to expire. ++1. On the **Machines** tab, select your machine, and then select **Next**. ++ Update Manager doesn't support driver updates. ++1. On the **Tags** tab, assign tags to maintenance configurations. ++1. On the **Review + create** tab, verify your update deployment options, and then select **Create**. ++# [From the Machines pane](#tab/schedule-updates-single-machine) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Machines** page, select your subscription, select your machine, and then select **Schedule updates**. ++1. In **Create new maintenance configuration**, you can create a schedule for a single VM and assign a machine and tags. Follow the procedure from step 3 listed in **From the Overview pane** of [Schedule recurring updates on a single VM](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration and assign a schedule. ++# [From a selected VM](#tab/singlevm-schedule-home) ++1. Select your virtual machine to open the **Virtual machines | Updates** page. +1. Under **Operations**, select **Updates**. +1. On the **Updates** tab, select **Go to Updates using Update Center**. +1. In **Updates preview**, select **Schedule updates**.
In **Create new maintenance configuration**, you can create a schedule for a single VM. Follow the procedure from step 3 listed in **From the Overview pane** of [Schedule recurring updates on a single VM](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration and assign a schedule. +++A notification confirms that the deployment was created. ++## Schedule recurring updates at scale ++To schedule recurring updates at scale, follow these steps. ++You can schedule updates from the **Overview** or **Machines** pane. ++# [From the Overview pane](#tab/schedule-updates-scale-overview) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Overview** page, select your subscription, and then select **Schedule updates**. ++1. On the **Create new maintenance configuration** page, you can create a schedule for multiple machines. ++ Currently, only VMs and maintenance configurations in the same subscription are supported. ++1. On the **Basics** tab, select **Subscription**, **Resource Group**, and all options in **Instance details**. + - Select **Add a schedule**. In **Add/Modify schedule**, specify the schedule details, such as: + + - **Start on** + - **Maintenance window** (in hours) + - **Repeats** (monthly, daily, or weekly) + - **Add end date** + - **Schedule summary** ++ The hourly option isn't supported in the portal but can be used through the [API](./manage-vms-programmatically.md#create-a-maintenance-configuration-schedule). ++1. On the **Machines** tab, verify if the selected machines are listed. You can add or remove machines from the list. Select **Next**. ++1. On the **Updates** tab, specify the updates to include in the deployment, such as update classifications or KB ID/packages that must be installed when you trigger your schedule. ++ Update Manager doesn't support driver updates. ++1. On the **Tags** tab, assign tags to maintenance configurations. ++1. On the **Review + create** tab, verify your update deployment options, and then select **Create**. ++# [From the Machines pane](#tab/schedule-updates-scale-machine) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Machines** page, select your subscription, select your machines, and then select **Schedule updates**. ++On the **Create new maintenance configuration** page, you can create a schedule for multiple machines. Follow the procedure from step 3 listed in **From the Overview pane** of [Schedule recurring updates on a single VM](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration and assign a schedule. +++A notification confirms that the deployment was created. ++## Attach a maintenance configuration ++ A maintenance configuration can be attached to multiple machines. It can be attached to machines at the time of creating a new maintenance configuration or even after you create one. ++ 1. On the **Azure Update Manager** page, select **Machines**, and then select your subscription. + 1. Select your machine, and on the **Updates** pane, select **Scheduled updates** to create a maintenance configuration or attach an existing maintenance configuration to the scheduled recurring updates. +1. On the **Scheduling** tab, select **Attach maintenance configuration**. +1. Select the maintenance configuration that you want to attach, and then select **Attach**. +1. On the **Updates** pane, select **Scheduling** > **Attach maintenance configuration**. +1. 
On the **Attach existing maintenance configuration** page, select the maintenance configuration that you want to attach, and then select **Attach**. ++ :::image type="content" source="./media/scheduled-updates/scheduled-patching-attach-maintenance-inline.png" alt-text="Screenshot that shows Scheduled patching attach maintenance configuration." lightbox="./media/scheduled-updates/scheduled-patching-attach-maintenance-expanded.png"::: ++## Schedule recurring updates from maintenance configuration ++You can browse and manage all your maintenance configurations from a single place. ++1. Search for **Maintenance configurations** in the Azure portal. The page shows a list of all maintenance configurations, along with the maintenance scope, resource group, location, and the subscription to which each belongs. ++1. You can filter maintenance configurations by using filters at the top. Maintenance configurations related to guest OS updates are the ones that have the maintenance scope **InGuestPatch**. ++You can create a new guest OS update maintenance configuration or modify an existing configuration. +++### Create a new maintenance configuration ++1. Go to **Machines** and select machines from the list. +1. On the **Updates** pane, select **Scheduled updates**. +1. On the **Create a maintenance configuration** pane, follow step 3 in this [procedure](#schedule-recurring-updates-on-a-single-vm) to create a maintenance configuration. +1. On the **Basics** tab, select the **Maintenance scope** as **Guest (Azure VM, Arc-enabled VMs/servers)**. ++ :::image type="content" source="./media/scheduled-updates/create-maintenance-configuration.png" alt-text="Screenshot that shows creating a maintenance configuration."::: ++### Add or remove machines from maintenance configuration ++1. Go to **Machines** and select the machines from the list. +1. On the **Updates** page, select **One-time updates**. +1. On the **Install one-time updates** pane, select **Machines** > **Add machine**. ++ :::image type="content" source="./media/scheduled-updates/add-or-remove-machines-from-maintenance-configuration-inline.png" alt-text="Screenshot that shows adding or removing machines from maintenance configuration." lightbox="./media/scheduled-updates/add-or-remove-machines-from-maintenance-configuration-expanded.png"::: ++### Change update selection criteria ++1. On the **Install one-time updates** pane, select the resources and machines to install the updates. +1. On the **Machines** tab, select **Add machine** to add machines that weren't previously selected, and then select **Add**. +1. On the **Updates** tab, specify the updates to include in the deployment. +1. Select **Include KB ID/package** and **Exclude KB ID/package**, respectively, to select updates like **Critical**, **Security**, and **Feature updates**. ++ :::image type="content" source="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-inline.png" alt-text="Screenshot that shows changing update selection criteria of Maintenance configuration." lightbox="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-expanded.png"::: ++## Onboard to schedule by using Azure Policy ++Update Manager allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. Grouping machines by using a policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope. 
You can use this feature for the built-in policies, which you can customize according to your use case. ++> [!NOTE] +> This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules** because it's a prerequisite for scheduled patching. ++### Assign a policy ++Azure Policy allows you to assign standards and assess compliance at scale. For more information, see [Overview of Azure Policy](../governance/policy/overview.md). To assign a policy to a scope: ++1. Sign in to the [Azure portal](https://portal.azure.com) and select **Policy**. +1. Under **Assignments**, select **Assign policy**. +1. On the **Assign policy** page, on the **Basics** tab: + - For **Scope**, choose your subscription and resource group and choose **Select**. + - Select **Policy definition** to view a list of policies. + - On the **Available Definitions** pane, select **Built in** for **Type**. In **Search**, enter **Schedule recurring updates using Azure Update Manager** and then choose **Select**. ++ :::image type="content" source="./media/scheduled-updates/dynamic-scoping-defintion.png" alt-text="Screenshot that shows how to select the definition."::: + + - Ensure that **Policy enforcement** is set to **Enabled**, and then select **Next**. +1. On the **Parameters** tab, by default, only the **Maintenance configuration ARM ID** is visible. ++ If you don't specify any other parameters, all machines in the subscription and resource group that you selected on the **Basics** tab are covered under the scope. If you want to scope further based on resource group, location, OS, tags, and so on, clear **Only show parameters that need input or review** to view all parameters: ++ - **Maintenance Configuration ARM ID**: A mandatory parameter to be provided. It denotes the Azure Resource Manager (ARM) ID of the schedule that you want to assign to the machines. + - **Resource groups**: You can optionally specify a resource group if you want to scope it down to a resource group. By default, all resource groups within the subscription are selected. + - **Operating System types**: You can select Windows or Linux. By default, both are preselected. + - **Machine locations**: You can optionally specify the regions that you want to select. By default, all are selected. + - **Tags on machines**: You can use tags to scope down further. By default, all are selected. + - **Tags operator**: If you select multiple tags, you can specify if you want the scope to be machines that have all the tags or machines that have any of those tags. ++ :::image type="content" source="./media/scheduled-updates/dynamic-scoping-assign-policy.png" alt-text="Screenshot that shows how to assign a policy."::: ++1. On the **Remediation** tab, in **Managed Identity** > **Type of Managed Identity**, select **System assigned managed identity**. **Permissions** is already set as **Contributor** according to the policy definition. ++ If you create a remediation task, the policy takes effect on all the existing machines in the scope; otherwise, the policy applies only to new machines that are added to the scope. ++1. On the **Review + create** tab, verify your selections, and then select **Create**. The assignment identifies the noncompliant resources so that you can understand the compliance state of your environment. ++### View compliance ++To view the current compliance state of your existing resources: ++1. In **Policy Assignments**, select **Scope** to select your subscription and resource group. +1. In **Definition type**, select the policy. 
In the list, select the assignment name. +1. Select **View compliance**. **Resource compliance** lists the machines and reasons for failure. ++ :::image type="content" source="./media/scheduled-updates/dynamic-scoping-policy-compliance.png" alt-text="Screenshot that shows policy compliance."::: ++## Check your scheduled patching run ++You can check the deployment status and history of your maintenance configuration runs from the Update Manager portal. For more information, see [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id). ++## Next steps ++* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
update-manager | Security Awareness Ubuntu Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/security-awareness-ubuntu-support.md | + + Title: Security awareness and Ubuntu Pro support in Azure Update Manager +description: Guidance on security awareness and Ubuntu Pro support in Azure Update Manager. +++ Last updated : 09/18/2023++++# Guidance on security awareness and Ubuntu Pro support ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. +++This article provides the details on security vulnerabilities and Ubuntu Pro support in Azure Update Manager. ++If you're using Ubuntu 18.04 LTS, take the necessary steps to protect against security vulnerabilities, because the Ubuntu 18.04 image reached the end of its [standard security maintenance](https://ubuntu.com/blog/18-04-end-of-standard-support) in May 2023. Because Canonical stopped publishing new security or critical updates after May 2023, systems and data are at high risk from potential security threats. Without software updates, you may experience performance issues or compatibility issues whenever new hardware or software is released. ++You can either upgrade to [Ubuntu Pro](https://ubuntu.com/azure/pro) or migrate to a newer LTS version to avoid any future disruption to the patching mechanisms. When you [upgrade to Ubuntu Pro](https://ubuntu.com/blog/enhancing-the-ubuntu-experience-on-azure-introducing-ubuntu-pro-updates-awareness), you can avoid any security or performance issues. +++## Ubuntu Pro on Azure Update Manager + +Azure Update Manager (AUM) assesses both Azure and Azure Arc-enabled VMs and indicates any action needed. AUM helps you identify Ubuntu instances that don't have the available security updates and allows you to upgrade to Ubuntu Pro from the Azure portal. For example, an Ubuntu Server 18.04 LTS instance in Azure Update Manager shows information about upgrading to Ubuntu Pro. +++You can continue to use the Azure Update Manager [capabilities](updates-maintenance-schedules.md) to remain secure after migrating to a supported model from Canonical. ++> [!NOTE] +> - [Ubuntu Pro](https://ubuntu.com/azure/pro) provides support for 18.04 LTS from Canonical until 2028 through Expanded Security Maintenance (ESM). You can also [upgrade to Ubuntu Pro from Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-pro-bionic?tab=Overview). +> - Ubuntu offers 20.04 LTS and 22.04 LTS as migration paths from 18.04 LTS. [Learn more](https://ubuntu.com/18-04/azure). ++ +## Next steps +- [An overview of Azure Update Manager](overview.md) +- [View updates for a single machine](view-updates.md) +- [Deploy updates now (on-demand) for a single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) |
update-manager | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md | + + Title: Azure Update Manager support matrix +description: This article provides a summary of supported regions and operating system settings. +++ Last updated : 09/18/2023+++++# Support matrix for Azure Update Manager ++This article details the supported Windows and Linux operating systems and the system requirements for machines or servers managed by Azure Update Manager. The article includes the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure virtual machines (VMs) or machines managed by Azure Arc-enabled servers. ++## Update sources supported ++**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the last synchronization from WSUS with Microsoft Update, the results in Update Manager might differ from what Microsoft Update shows. ++To specify sources for scanning and downloading updates, see [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations). ++**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager depend on where the machines are configured to report. ++## Types of updates supported ++The following types of updates are supported. ++### Operating system updates ++Update Manager supports operating system updates for both Windows and Linux. ++Update Manager doesn't support driver updates. ++### Extended Security Updates (ESU) for Windows Server ++By using Azure Update Manager, you can deploy Extended Security Updates for your Azure Arc-enabled Windows Server 2012/R2 machines. To enroll in Windows Server 2012 Extended Security Updates, follow the guidance in [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2](/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc). ++### First-party updates on Windows ++By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products. Updates include security patches for Microsoft SQL Server and other Microsoft software.
++Use one of the following options to perform the settings change at scale: ++- For servers configured to patch on a schedule from Update Manager (with VM `PatchSettings` set to `AutomaticByPlatform = Azure-Orchestrated`), and for all Windows Servers running on an earlier operating system than Windows Server 2016, run the following PowerShell script on the server you want to change: ++ ```powershell + # Register the Microsoft Update service with the Windows Update agent so the + # machine also receives updates for other Microsoft products. + $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") + $ServiceManager.Services + $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" # Microsoft Update service ID + $ServiceManager.AddService2($ServiceId,7,"") + ``` ++- For servers running Windows Server 2016 or later that aren't using Update Manager scheduled patching (with VM `PatchSettings` set to `AutomaticByOS = Azure-Orchestrated`), you can use Group Policy to control this process by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store). ++> [!NOTE] +> Run the following PowerShell script on the server to disable first-party updates: +> +> ```powershell +> # Unregister the Microsoft Update service to stop receiving first-party updates. +> $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") +> $ServiceManager.Services +> $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" +> $ServiceManager.RemoveService($ServiceId) +> ``` ++### Third-party updates ++**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher). ++**Linux**: If you include a specific third-party software repository in the Linux package manager's repository locations, it's scanned when Update Manager performs software update operations. A package isn't available for assessment and installation if you remove its repository. ++Update Manager doesn't support managing the Configuration Manager client. ++## Supported regions ++Update Manager scales to all regions for both Azure VMs and Azure Arc-enabled servers. The following table lists the Azure public cloud regions where you can use Update Manager. ++# [Azure VMs](#tab/azurevm) ++Azure Update Manager is available in all Azure public regions where compute virtual machines are available. ++# [Azure Arc-enabled servers](#tab/azurearc) ++Azure Update Manager is currently supported in the following regions, which means that Arc-enabled machines must be located in one of these regions. ++**Geography** | **Supported regions** + | +Africa | South Africa North +Asia Pacific | East Asia </br> South East Asia +Australia | Australia East </br> Australia Southeast +Brazil | Brazil South +Canada | Canada Central </br> Canada East +Europe | North Europe </br> West Europe +France | France Central +India | Central India +Japan | Japan East +Korea | Korea Central +Norway | Norway East +Sweden | Sweden Central +Switzerland | Switzerland North +United Kingdom | UK South </br> UK West +United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3 ++++## Supported operating systems ++All operating systems are assumed to be x64.
For this reason, x86 isn't supported for any operating system. +Update Manager doesn't support CIS-hardened images. ++> [!NOTE] +> Currently, schedule patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md) and **VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery** are supported in preview. ++# [Azure VMs](#tab/azurevm-os) ++### Azure Marketplace/PIR images ++The Azure Marketplace image has the following attributes: ++- **Publisher**: The organization that creates the image. Examples are `Canonical` and `MicrosoftWindowsServer`. +- **Offer**: The name of the group of related images created by the publisher. Examples are `UbuntuServer` and `WindowsServer`. +- **SKU**: An instance of an offer, such as a major release of a distribution. Examples are `18.04LTS` and `2019-Datacenter`. +- **Version**: The version number of an image SKU. ++Update Manager supports the following operating system versions. You might experience failures if there are any configuration changes on the VMs, such as package or repository. ++#### Windows operating systems ++| **Publisher**| **Versions** +|-|-| +|Microsoft Windows Server | 1709, 1803, 1809, 2012, 2016, 2019, 2022| +|Microsoft Windows Server HPC Pack | 2012, 2016, 2019 | +|Microsoft SQL Server | 2008, 2012, 2014, 2016, 2017, 2019, 2022 | +|Microsoft Visual Studio | ws2012r2, ws2016, ws2019, ws2022 | +|Microsoft Azure Site Recovery | Windows 2012 +|Microsoft BizTalk Server | 2016, 2020 | +|Microsoft DynamicsAx | ax7 | +|Microsoft Power BI | 2016, 2017, 2019, 2022 | +|Microsoft SharePoint | sp* | ++#### Linux operating systems ++| **Publisher**| **Versions** +|-|-| +|Canonical | Ubuntu 16.04, 18.04, 20.04, 22.04 | +|Red Hat | RHEL 7,8,9| +|OpenLogic | CentOS 7| +|SUSE 12 |sles, sles-byos, sap, sap-byos, sapcal, sles-standard | +|SUSE 15 | basic, hpc, opensuse, sles, sap, sapcal| +|Oracle Linux | 7*, ol7*, ol8*, ol9* | +|Oracle Database | 21, 19-0904, 18.*| ++#### Unsupported operating systems ++The following table lists the operating systems for Azure Marketplace images that aren't supported. ++| **Publisher**| **OS offer** | **SKU**| +|-|-|--| +|OpenLogic | CentOS | 8* | +|OpenLogic | centos-hpc| * | +|Oracle | Oracle-Linux | 8, 8-ci, 81, 81-ci , 81-gen2, ol82, ol8_2-gen2,ol82-gen2, ol83-lvm, ol83-lvm-gen2, ol84-lvm,ol84-lvm-gen2 | +|Red Hat | RHEL | 74-gen2 | +|Red Hat | RHEL-HANA | 7.4, 7.5, 7.6, 8.1, 81_gen2 | +|Red Hat | RHEL-SAP | 7.4, 7.5, 7.7 | +|Red Hat | RHEL-SAP-HANA | 7.5 | +|Microsoft SQL Server | SQL 2019-SLES* | * | +|Microsoft SQL Server | SQL 2019-RHEL7 | * | +|Microsoft SQL Server | SQL 2017-RHEL7 | * | +|Microsoft | microsoft-ads |*.* | +|SUSE| sles-sap-15-*-byos | gen *| ++### Custom images ++We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. Currently, scheduled patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md#specialized-images) and VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery are supported in preview. ++The following table lists the operating systems that we support for customized images. For instructions on how to use Update Manager to manage updates on custom images, see [Custom images (preview)](manage-updates-customized-images.md). 
++ |**Windows operating system**| + || + |Windows Server 2022| + |Windows Server 2019| + |Windows Server 2016| + |Windows Server 2012 R2| + |Windows Server 2012| + |Windows Server 2008 R2 (RTM and SP1 Standard)| ++ |**Linux operating system**| + || + |CentOS 7, 8| + |Oracle Linux 7.x, 8x| + |Red Hat Enterprise 7, 8, 9| + |SUSE Linux Enterprise Server 12.x, 15.0-15.4| + |Ubuntu 16.04 LTS, 18.04 LTS, 20.04 LTS, 22.04 LTS| ++# [Azure Arc-enabled servers](#tab/azurearc-os) ++The following table lists the operating systems supported on [Azure Arc-enabled servers](../azure-arc/servers/overview.md). ++ |**Operating system**| + |-| + | Amazon Linux 2023 | + | Windows Server 2012 R2 and higher (including Server Core) | + | Windows Server 2008 R2 SP1 with PowerShell enabled and .NET Framework 4.0+ | + | Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS | + | CentOS Linux 7 and 8 (x64) | + | SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) | + | Red Hat Enterprise Linux (RHEL) 7, 8, 9 (x64) | + | Amazon Linux 2 (x64) | + | Oracle 7.x, 8.x| + | Debian 10 and 11| + | Rocky Linux 8| ++++## Unsupported operating systems ++The following table lists the operating systems that aren't supported. ++ | **Operating system**| **Notes** + |-|-| + | Windows client | For client operating systems such as Windows 10 and Windows 11, we recommend [Microsoft Intune](/mem/intune/) to manage updates.| + | Virtual machine scale sets| We recommend that you use [Automatic upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) to patch the virtual machine scale sets.| + | Azure Kubernetes Service nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](/azure/aks/node-updates-kured).| ++Because Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager or Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [Configure Windows Update settings](configure-wu-agent.md). ++## Next steps ++- [View updates for a single machine](view-updates.md) +- [Deploy updates now (on-demand) for a single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) +- [Manage update settings via the portal](manage-update-settings.md) |
update-manager | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md | + + Title: Troubleshoot known issues with Azure Update Manager +description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. + Last updated : 09/18/2023++++++# Troubleshoot issues with Azure Update Manager ++This article describes the errors that might occur when you deploy or use Azure Update Manager, how to resolve them, and the known issues and limitations of scheduled patching. ++## General troubleshooting ++The following troubleshooting steps apply to the Azure virtual machines (VMs) related to the patch extension on Windows and Linux machines. ++### Azure Linux VM ++To verify that the Microsoft Azure Virtual Machine agent (VM agent) is running and has triggered appropriate actions on the machine, and to find the sequence number for the autopatching request, check the agent log at `/var/log/waagent.log`. Every autopatching request has a unique sequence number associated with it on the machine. Look for a log similar to `2021-01-20T16:57:00.607529Z INFO ExtHandler`. ++The package directory for the extension is `/var/lib/waagent/Microsoft.CPlat.Core.Edp.LinuxPatchExtension-<version>`. The `/status` subfolder has a `<sequence number>.status` file. It includes a brief description of the actions performed during a single autopatching request and the status. It also includes a short list of errors that occurred while applying updates. ++To review the logs related to all actions performed by the extension, check `/var/log/azure/Microsoft.CPlat.Core.Edp.LinuxPatchExtension/`. It includes the following two log files of interest: ++* `<seq number>.core.log`: Contains information related to the patch actions. This information includes patches assessed and installed on the machine and any problems encountered in the process. +* `<Date and Time>_<Handler action>.ext.log`: There's a wrapper above the patch action, which is used to manage the extension and invoke the specific patch operation. This log contains information about the wrapper. For autopatching, the log `<Date and Time>_Enable.ext.log` has information on whether the specific patch operation was invoked. ++### Azure Windows VM ++To verify that the VM agent is running and has triggered appropriate actions on the machine, and to find the sequence number for the autopatching request, check the agent log at `C:\WindowsAzure\Logs\AggregateStatus`. The package directory for the extension is `C:\Packages\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`. ++To review the logs related to all actions performed by the extension, check `C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.WindowsPatchExtension<version>`. It includes the following two log files of interest: ++* `WindowsUpdateExtension.log`: Contains information related to the patch actions. This information includes patches assessed and installed on the machine and any problems encountered in the process. +* `CommandExecution.log`: There's a wrapper above the patch action, which is used to manage the extension and invoke the specific patch operation. This log contains information about the wrapper. For autopatching, the log has information on whether the specific patch operation was invoked. ++## Unable to change the patch orchestration option to manual updates from automatic updates ++Here's the scenario.
++### Issue ++The Azure machine has the patch orchestration option set to `AutomaticByOS/Windows` automatic updates, and you're unable to change the patch orchestration to Manual Updates by using **Change update settings**. ++### Resolution ++If you don't want any patch installation to be orchestrated by Azure or aren't using custom patching solutions, you can change the patch orchestration option to **Customer Managed Schedules (Preview)** (that is, `AutomaticByPlatform` with `BypassPlatformSafetyChecksOnUserSchedule`) and not associate a schedule/maintenance configuration with the machine. This setting ensures that no patching is performed on the machine until you change it explicitly. For more information, see **Scenario 2** in [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios). +++## Machine shows as "Not assessed" and shows an HRESULT exception ++Here's the scenario. ++### Issue ++* You have machines that show as `Not assessed` under **Compliance**, and you see an exception message below them. +* You see an `HRESULT` error code in the portal. ++### Cause ++The Update Agent (Windows Update Agent on Windows and the package manager for a Linux distribution) isn't configured correctly. Update Manager relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Manager can't properly report on the patches that are needed or installed. ++### Resolution ++Try to perform updates locally on the machine. If this operation fails, it typically means that there's an Update Agent configuration error. ++This issue is frequently caused by network configuration and firewall problems. Use the following checks to correct the issue: ++* For Linux, check the appropriate documentation to make sure you can reach the network endpoint of your package repository. +* For Windows, check your agent configuration as described in [Updates aren't downloading from the intranet endpoint (WSUS/SCCM)](/windows/deployment/update/windows-update-troubleshooting#updates-arent-downloading-from-the-intranet-endpoint-wsussccm). ++ * If the machines are configured for Windows Update, make sure that you can reach the endpoints described in [Issues related to HTTP/proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). + * If the machines are configured for Windows Server Update Services (WSUS), make sure that you can reach the WSUS server configured by the [WUServer registry key](/windows/deployment/update/waas-wu-settings). ++If you see an `HRESULT` error code, double-click the exception displayed in red to see the entire exception message. Review the following table for potential resolutions or recommended actions. ++|Exception |Resolution or action | +||| +|`Exception from HRESULT: 0x……C` | Search the relevant error code in the [Windows Update error code list](https://support.microsoft.com/help/938205/windows-update-error-code-list) to find more information about the cause of the exception. | +|`0x8024402C`</br>`0x8024401C`</br>`0x8024402F` | Indicates network connectivity problems. Make sure your machine has network connectivity to Update Management. For a list of required ports and addresses, see the [Network planning](../automation/update-management/plan-deployment.md#ports) section.
| +|`0x8024001E`| The update operation didn't finish because the service or system was shutting down.| +|`0x8024002E`| Windows Update service is disabled.| +|`0x8024402C` | If you're using a WSUS server, make sure the registry values for `WUServer` and `WUStatusServer` under the `HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate` registry key specify the correct WSUS server. | +|`0x80072EE2`|There's a network connectivity problem or a problem in talking to a configured WSUS server. Check WSUS settings and make sure the service is accessible from the client.| +|`The service cannot be started, either because it is disabled or because it has no enabled devices associated with it. (Exception from HRESULT: 0x80070422)` | Make sure the Windows Update service (`wuauserv`) is running and not disabled. | +|`0x80070005`| An access denied error can be caused by any one of the following problems:<br> - Infected computer.<br> - Windows Update settings not configured correctly.<br> - File permission error with the `%WinDir%\SoftwareDistribution` folder.<br> - Insufficient disk space on the system drive (drive C). +|Any other generic exception | Run a search on the internet for possible resolutions and work with your local IT support. | ++Reviewing the `%Windir%\Windowsupdate.log` file can also help you determine possible causes. For more information about how to read the log, see [Read the Windowsupdate.log file](https://support.microsoft.com/help/902093/how-to-read-the-windowsupdate-log-file). ++You can also download and run the [Windows Update troubleshooter](https://support.microsoft.com/help/4027322/windows-update-troubleshooter) to check for any problems with Windows Update on the machine. ++> [!NOTE] +> The [Windows Update troubleshooter](https://support.microsoft.com/help/4027322/windows-update-troubleshooter) documentation indicates that it's for use on Windows clients, but it also works on Windows Server. ++### Azure Arc-enabled servers ++For Azure Arc-enabled servers, see [Troubleshoot VM extensions](../azure-arc/servers/troubleshoot-vm-extensions.md) for general troubleshooting steps. ++To review the logs related to all actions performed by the extension, on Windows, check `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following log files of interest: ++* `WindowsUpdateExtension.log`: Contains information related to the patch actions. This information includes the patches assessed and installed on the machine and any problems encountered in the process. +* `cmd_execution_<numeric>_stdout.txt`: There's a wrapper above the patch action. It's used to manage the extension and invoke the specific patch operation. This log contains information about the wrapper. For autopatching, the log has information on whether the specific patch operation was invoked. +* `cmd_execution_<numeric>_stderr.txt`: The standard error output for the same wrapper execution. ++## Known issues in schedule patching ++- For a concurrent or conflicting schedule, only one schedule is triggered. The other schedule is triggered after the first schedule finishes. +- If a machine is newly created, the schedule trigger might be delayed by about 15 minutes in the case of Azure VMs. +- The policy definition **Schedule recurring updates using Azure Update Manager** with version 1.0.0-preview successfully remediates resources. However, it always shows them as noncompliant. The current value of the existence condition is a placeholder that always evaluates to false.
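++When you suspect concurrent or conflicting schedules, it can help to list the maintenance configuration assignments attached to a machine. The following Az PowerShell sketch is illustrative only: it assumes the `Az.Maintenance` module is installed, and the resource names are placeholders. ++```powershell +# List maintenance configuration assignments on a VM to spot overlapping schedules. +# "myResourceGroup" and "myVM" are placeholder names. +Get-AzConfigurationAssignment ` +    -ResourceGroupName "myResourceGroup" ` +    -ProviderName "Microsoft.Compute" ` +    -ResourceType "virtualMachines" ` +    -ResourceName "myVM" | +    Select-Object Name, MaintenanceConfigurationId +``` ++Each returned assignment points to the maintenance configuration (schedule) it came from, which makes overlaps easier to spot.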
++### Unable to apply patches to shut-down machines ++Here's the scenario. ++#### Issue ++Patches aren't applied to machines that are in a shut-down state. You might also see that machines lose their associated maintenance configurations or schedules. ++#### Cause ++The machines are in a shut-down state. ++#### Resolution ++Keep your machines turned on at least 15 minutes before the scheduled update. For more information, see [Shut down machines](../virtual-machines/maintenance-configurations.md#shut-down-machines). ++### Patch run failed with the Maintenance window exceeded property showing true even though time remained ++Here's the scenario. ++#### Issue ++When you view an update deployment in **Update History**, the property **Failed with Maintenance window exceeded** shows **true** even though enough time was left for execution. In this case, one of the following problems is possible: ++* No updates are shown. +* One or more updates are in a **Pending** state. +* Reboot status is **Required**, but a reboot wasn't attempted even when the reboot setting passed was `IfRequired` or `Always`. ++#### Cause ++During an update deployment, maintenance window utilization is checked at multiple steps. Ten minutes of the maintenance window are reserved for reboot at any point. Before the deployment gets a list of missing updates or downloads or installs any update (except Windows service pack updates), it verifies that 15 minutes plus the 10-minute reboot reservation (that is, 25 minutes) are left in the maintenance window. ++For Windows service pack updates, the deployment checks for 20 minutes plus the 10-minute reboot reservation (that is, 30 minutes). If the deployment doesn't have sufficient time left, it skips the scan/download/installation of updates. The deployment run then checks whether a reboot is needed and whether 10 minutes are left in the maintenance window. If so, the deployment triggers a reboot; otherwise, the reboot is skipped. ++In such cases, the status is updated to **Failed**, and the **Maintenance window exceeded** property is updated to **true**. For cases where the time left is less than 25 minutes, updates aren't scanned or attempted for installation. ++To find more information, review the logs in the file path provided in the error message of the deployment run. ++#### Resolution ++Set a longer time range for maximum duration when you're triggering an [on-demand update deployment](deploy-updates.md) to help avoid the problem. ++## Next steps ++* To learn more about Update Manager, see the [Overview](overview.md). +* To view logged results from all your machines, see [Querying logs and results from Update Manager](query-logs.md). |
update-manager | Tutorial Assessment Deployment Using Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-assessment-deployment-using-policy.md | + + Title: Schedule updates and enable periodic assessment at scale using policy. +description: In this tutorial, you learn how to enable periodic assessment and schedule update deployments at scale using policy. + Last updated : 09/18/2023++++#Customer intent: As an IT admin, I want to dynamically apply patches or enable periodic assessment on machines at scale using a policy. +++# Tutorial: Enable periodic assessment and schedule updates on Azure VMs using policy ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. + +This tutorial explains how you can enable periodic assessment and schedule updates on your Azure VMs at scale by using Azure Policy. A policy allows you to assign standards and assess compliance at scale. [Learn more](../governance/policy/overview.md). ++**Periodic assessment** is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing an assessment manually every time you need to check the update status. After you enable this setting, Update Manager fetches updates on your machine once every 24 hours. ++**Schedule patching** is a setting that targets a group of machines for update deployment via Azure Policy. Grouping by using policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope, and you can customize the built-in policies for your use case. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> - Enable periodic assessment +> - Enable schedule patching +++## Prerequisites ++- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++## Enable periodic assessment ++1. In the Azure portal, go to **Policy** and under **Authoring**, select **Definitions**. +1. From the **Category** dropdown, select **Azure Update Manager**. Select *Configure periodic checking for missing system updates on Azure virtual machines* for Azure machines. +1. When the policy definition opens, select **Assign**. +1. In **Basics**, select your subscription as your scope. You can also specify a resource group within your subscription as the scope. Select **Next**. +1. In **Parameters**, uncheck **Only show parameters that need input or review** so that you can see the values of parameters. +1. In **Assessment**, select *AutomaticByPlatform*, select the *Operating system*, and then select **Next**. You need to create separate policies for Windows and Linux. +1. In **Remediation**, check **Create a remediation task** so that periodic assessment is enabled on your machines, and then select **Next**. +1. In **Non-compliance**, provide the message that you would like to see in case of non-compliance. For example: *Your machine doesn't have periodic assessment enabled.* Then select **Review + Create**. +1. In **Review + Create**, select **Create**. This action triggers the assignment and remediation task creation, which can take a minute or so. ++You can monitor the compliance of resources under **Compliance** and the remediation status under **Remediation** on the Policy home page. ++## Enable schedule patching ++1.
Sign in to the [Azure portal](https://portal.azure.com) and select **Policy**. +1. In **Assignments**, select **Assign policy**. +1. On the **Assign policy** page, under **Basics**: + - In **Scope**, choose your subscription and resource group, and then choose **Select**. + - Select **Policy definition** to view a list of policies. + - In **Available Definitions**, set the type to **Built in**, search for *Schedule recurring updates using Azure Update Manager*, and then click **Select**. + - Ensure that **Policy enforcement** is set to **Enabled**, and then select **Next**. + +1. In **Parameters**, by default, only the maintenance configuration ARM ID is visible. ++ > [!NOTE] + > If you don't specify any other parameters, all machines in the subscription and resource group that you selected in **Basics** are covered under the scope. However, if you want to scope further based on resource group, location, OS, tags, and so on, deselect **Only show parameters that need input or review** to view all parameters. ++ - Maintenance configuration ARM ID: A mandatory parameter that denotes the ARM ID of the schedule that you want to assign to the machines. + - Resource groups: You can optionally specify a resource group if you want to scope the assignment down to a resource group. By default, all resource groups within the subscription are selected. + - Operating System types: You can select Windows or Linux. By default, both are preselected. + - Machine locations: You can optionally specify the regions that you want to select. By default, all are selected. + - Tags on machines: You can use tags to scope down further. By default, all are selected. + - Tags operator: If you selected multiple tags, you can specify whether the scope covers machines that have all the tags or machines that have any of those tags. ++1. In **Remediation** > **Managed Identity** > **Type of Managed Identity**, select **System assigned managed identity**. **Permissions** is already set to *Contributor* according to the policy definition. ++ > [!NOTE] + > If you create a remediation task, the policy takes effect on all existing machines in the scope; otherwise, it's assigned only to new machines that are added to the scope. ++1. In **Review + Create**, verify your selections, and then select **Create**. The assignment identifies non-compliant resources so that you can understand the compliance state of your environment. +++## Next steps +Learn about [managing multiple machines](manage-multiple-machines.md). + |
update-manager | Tutorial Dynamic Grouping For Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-dynamic-grouping-for-scheduled-patching.md | + + Title: Schedule updates on dynamic scopes. +description: In this tutorial, you learn how to group machines dynamically and apply updates at scale. + Last updated : 09/18/2023++++#Customer intent: As an IT admin, I want to dynamically apply patches on machines as per a schedule. +++# Tutorial: Schedule updates on dynamic scopes ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers. + +This tutorial explains how you can create a dynamic scope and apply patches to machines based on the criteria you define. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> - Create and edit groups +> - Associate a schedule +++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +++## Create a dynamic scope ++To create a dynamic scope, follow these steps: ++#### [Azure portal](#tab/az-portal) ++1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Azure Update Manager**. +1. Select **Overview** > **Schedule updates** > **Create a maintenance configuration**. +1. On the **Create a maintenance configuration** page, enter the details on the **Basics** tab and select **Maintenance scope** as *Guest* (Azure VM, Arc-enabled VMs/servers). +1. Select **Dynamic Scopes** and follow the steps to [add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope). +1. On the **Machines** tab, select **Add machines** to add any individual machines to the maintenance configuration, and then select **Updates**. +1. On the **Updates** tab, select the patch classifications that you want to include or exclude, and then select **Tags**. +1. Provide the tags on the **Tags** tab. +1. Select **Review** and then **Review + Create**. ++> [!NOTE] +> A dynamic scope exists within the context of a schedule only. You can use one schedule to link to a machine, a dynamic scope, or both. One dynamic scope cannot have more than one schedule. +++#### [Azure CLI](#tab/az-cli) ++```azurecli ++ az maintenance assignment create-or-update-subscription --maintenance-configuration-id "/subscriptions/{subscription_id}/resourcegroups/{rg}/providers/Microsoft.Maintenance/maintenanceConfigurations/clitestmrpconfinguestadvanced" --name cli_dynamicscope_recording01 --filter-locations eastus2euap centraluseuap --filter-os-types windows linux --filter-tags {{tagKey1:[tagKey1Val1,tagKey1Val2],tagKey2:[tagKey2Val1,tagKey2Val2]}} --filter-resource-group rg1, rg2 --filter-tags-operator All -l global +``` +#### [PowerShell](#tab/az-ps) ++```powershell + New-AzConfigurationAssignment -ConfigurationAssignmentName $maintenanceConfigurationName -MaintenanceConfigurationId $maintenanceConfigurationInGuestPatchCreated.Id -FilterLocation eastus2euap,centraluseuap -FilterOsType Windows,Linux -FilterTag '{"tagKey1" : ["tagKey1Value1", "tagKey1Value2"], "tagKey2" : ["tagKey2Value1", "tagKey2Value2", "tagKey2Value3"] }' -FilterOperator "Any" +``` +++## Provide the consent +Obtaining consent to apply updates is an important step in the scheduled patching workflow. Follow the steps in [provide consent to apply updates](manage-dynamic-scoping.md#provide-consent-to-apply-updates) for the various ways to grant it.
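++As an illustrative sketch only, one way to grant this consent programmatically is to patch the VM's patch settings through the ARM REST API. The property path and `api-version` below are assumptions based on the schedule-patching prerequisites article, and the subscription, resource group, and VM names are placeholders; prefer the approaches described in the linked article. ++```powershell +# Hypothetical sketch: set patchMode and the bypass flag on a Windows VM +# via Invoke-AzRestMethod. All IDs and names are placeholders. +$payload = @{ +    properties = @{ +        osProfile = @{ +            windowsConfiguration = @{ +                patchSettings = @{ +                    patchMode = "AutomaticByPlatform" +                    automaticByPlatformSettings = @{ +                        bypassPlatformSafetyChecksOnUserSchedule = $true +                    } +                } +            } +        } +    } +} | ConvertTo-Json -Depth 10 ++Invoke-AzRestMethod -Method PATCH ` +    -Path "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>?api-version=2023-03-01" ` +    -Payload $payload +```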
++++## Next steps +Learn about [managing multiple machines](manage-multiple-machines.md). + |
update-manager | Update Manager Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/update-manager-faq.md | + + Title: Azure Update Manager FAQ +description: This article gives answers to frequently asked questions about Azure Update Manager. ++ Last updated : 09/25/2023+++#Customer intent: As an implementer, I want answers to various questions. +++# Azure Update Manager frequently asked questions ++This FAQ is a list of commonly asked questions about Azure Update Manager. If you have any other questions about its capabilities, go to the discussion forum and post your questions. When a question is frequently asked, we add it to this article so that it's found quickly and easily. ++## Fundamentals ++### What are the benefits of using Azure Update Manager? ++Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. +The benefits of using Azure Update Manager are: +- Oversee update compliance for your entire fleet of machines in Azure (Azure VMs), on-premises, and multicloud environments (Arc-enabled servers). +- View and deploy pending updates to secure your machines [instantly](updates-maintenance-schedules.md#update-nowone-time-update). +- Manage [extended security updates (ESUs)](../azure-arc/servers/prepare-extended-security-updates.md) for your Azure Arc-enabled Windows Server 2012/2012 R2 machines. Get a consistent experience for deployment of ESUs and other updates. +- Define recurring time windows during which your machines receive updates and might undergo reboots by using [scheduled patching](scheduled-patching.md). Enforce machines grouped together based on standard Azure constructs (subscriptions, location, resource group, tags, and so on) to have common patch schedules by using [dynamic scoping](dynamic-scope-overview.md). Sync patch schedules for Windows machines with Patch Tuesday, the unofficial name for Microsoft's monthly release of security fixes. +- Enable incremental rollout of updates to Azure VMs in off-peak hours by using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), and reduce reboots by enabling [hotpatching](updates-maintenance-schedules.md#hotpatching). +- Automatically [assess](assessment-options.md#periodic-assessment) machines for pending updates every 24 hours, and flag machines that are out of compliance. Enforce enabling periodic assessments on multiple machines at scale by using [Azure Policy](periodic-assessment-at-scale.md). +- Create [custom reports](workbooks.md) for a deeper understanding of the updates data of the environment. +- Get granular access management to Azure resources with Azure roles and identity, to control who can perform update operations and edit schedules. ++### How does the new Azure Update Manager work on machines? ++Whenever you trigger any Azure Update Manager operation on your machine, it pushes an extension on your machine that interacts with the VM agent (for Azure machines) or the Arc agent (for Arc-enabled machines) to fetch and install updates. ++### Is enabling Azure Arc mandatory for patch management for machines not running on Azure? ++Yes. Machines that aren't running on Azure must be Arc-enabled for management through Update Manager. ++### Is the new Azure Update Manager dependent on Azure Automation and Log Analytics? ++No, it's a native capability on a virtual machine. ++### Where is updates data stored in Azure Update Manager?
++All Azure Update Manager data is stored in Azure Resource Graph (ARG). Custom reports can be generated on the updates data for a deeper understanding and for patterns by using Azure Workbooks. [Learn more](query-logs.md). ++### Are there programmatic ways to interact with Azure Update Manager? ++Yes, Azure Update Manager supports REST API, CLI, and PowerShell for [Azure machines](manage-vms-programmatically.md) and [Arc-enabled machines](manage-arc-enabled-servers-programmatically.md). ++### Do I need MMA or AMA for using Azure Update Manager to manage my machines? ++No, it's a native capability on a virtual machine and doesn't rely on either MMA or AMA. ++### Which operating systems are supported by Azure Update Manager? ++For more information, see [Azure Update Manager OS support](support-matrix.md). ++### Does Update Manager support Windows 10, 11? ++Automation Update Management didn't provide support for patching Windows 10 and 11. The same is true for Azure Update Manager. We recommend that you use Microsoft Intune as the solution for keeping Windows 10 and 11 devices up to date. +++## Impact of Log Analytics Agent retirement ++### How do I move from Automation Update Management to Azure Update Manager? ++Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md) to move from Automation Update Management to Azure Update Manager. +++### LA agent (also known as MMA) is retiring and will be replaced with AMA. Is it necessary to move to Update Manager, or can I continue to use Automation Update Management with AMA? ++The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA), will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). The Azure Automation Update Management solution relies on this agent and might encounter issues once the agent is retired. It doesn't work with the Azure Monitor Agent (AMA) either. ++Therefore, if you're using the Azure Automation Update Management solution, you're encouraged to move to Azure Update Manager for your software update needs. All capabilities of the Azure Automation Update Management solution will be available on Azure Update Manager before the retirement date. Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md) to move update management for your machines to Azure Update Manager. + ++### If I move to AMA while I'm still using Automation Update Management, will my solution break? ++Yes. Automation Update Management isn't compatible with AMA. We recommend that you move the machine to Azure Update Manager before removing MMA from the machine. Update Manager relies on neither MMA nor AMA. +++### Will I lose my Automation Update Management update related data if I move to Azure Update Manager? ++Automation Update Management uses a Log Analytics workspace for storing updates data. Azure Update Manager uses Azure Resource Graph for data storage. You can continue to use the Log Analytics workspace for historical data and use Azure Resource Graph for new data. ++### I have some reports/dashboards built for Automation Update Management. How do I move those? ++You can rebuild custom dashboards/reports on updates data from Azure Resource Graph (ARG). For more information, see [how to query ARG data](query-logs.md) and [sample queries](sample-query-logs.md). There are a few built-in workbooks that you can modify to fit your needs to get started.
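++As a hedged example, you can explore this data with the `Az.ResourceGraph` PowerShell module. The query below is a minimal sketch; the `patchassessmentresources` table and resource type are assumptions you should verify against the [sample queries](sample-query-logs.md) before relying on the output. ++```powershell +# Minimal sketch: list a few recent patch assessment results from Azure Resource Graph. +# Assumes the Az.ResourceGraph module is installed (Install-Module Az.ResourceGraph). +Search-AzGraph -Query @" +patchassessmentresources +| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults' +| project id, properties.status, properties.lastModifiedDateTime +| take 10 +"@ +```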
For more information, see [how to create reports using workbooks](manage-workbooks.md). ++### I have been using saved searches in Automation Update Management for schedules. How do I migrate to Azure Update Manager? ++Arc-enabling machines is a prerequisite for management with Update Manager. To move the saved searches, Arc-enable the machines and then use the dynamic scoping feature to define the same scope of machines. [Learn more](manage-dynamic-scoping.md). +++### If I have been using the pre- and post-script or alerting capability in Automation Update Management, how can I move to Azure Update Manager? ++These capabilities will be added to Azure Update Manager. For more information, see the [guidance for moving from Automation Update Management to Azure Update Manager](guidance-migration-automation-update-management-azure-update-manager.md). ++### I'm using Automation Update Management on sovereign clouds; will I get region support in the new Azure Update Manager? ++Yes, Azure Update Manager will be rolled out to sovereign clouds soon. ++## Pricing ++### What is the pricing for Azure Update Manager? ++Azure Update Manager is available at no extra charge for managing Azure VMs and Arc-enabled Azure Stack HCI VMs (for which Azure Benefits are enabled). For Arc-enabled servers, the price is $5 per server per month (assuming 31 days of usage). ++### How is the Azure Update Manager price calculated for Arc-enabled servers? ++For Arc-enabled servers, Azure Update Manager is charged at $5/server/month (assuming 31 days of connected usage). It's charged at a daily prorated value of $0.16/server/day. An Arc-enabled machine is only charged for the days when it's connected and managed by Azure Update Manager. ++### When is an Arc-enabled server considered managed by Azure Update Manager? ++An Arc-enabled server is considered managed by Azure Update Manager for days on which the machine fulfills the following conditions: + - *Connected* status for Arc at any time during the day. + - An update operation (patched on demand or through a scheduled job, assessed on demand or through periodic assessment) is triggered on it, or it's associated with a schedule. + +### Are there scenarios in which an Arc-enabled server isn't charged for Azure Update Manager? ++An Arc-enabled server managed with Azure Update Manager isn't charged in the following scenarios: + - The machine is enabled for delivery of Extended Security Updates (ESUs) enabled by Azure Arc. + - Microsoft Defender for Servers Plan 2 is enabled for the subscription hosting the Arc-enabled server. ++### Will I be charged if I move from Automation Update Management to Update Manager? ++Customers using Automation Update Management who move to Azure Update Manager won't be charged until the retirement of the LA agent. ++### I'm a Defender for Server customer and use update recommendations powered by Azure Update Manager, namely "periodic assessment should be enabled on your machines" and "system updates should be installed on your machines". Would I be charged for Azure Update Manager? ++If you have purchased Defender for Servers Plan 2, you won't have to pay to remediate the unhealthy resources for the above two recommendations. But if you're using any other Defender for Servers plan for your Arc machines, you're charged for those machines at the daily prorated rate of $0.16/server by Azure Update Manager. ++### Is Azure Update Manager chargeable on Azure Stack HCI?
+Azure Update Manager is not charged for machines hosted on Azure Stack HCI clusters that have been enabled for Azure benefits and Azure Arc VM management. [Learn more](/azure-stack/hci/manage/azure-benefits?tabs=wac#azure-benefits-available-on-azure-stack-hci). + ++## Update Manager support and integration ++### Does Azure Update Manager support integration with Azure Lighthouse? ++Azure Update Manager doesn't currently support Azure Lighthouse integration. ++### Does Azure Update Manager support Azure Policy? ++Yes, Azure Update Manager supports update features via policies. For more information, see [how to enable periodic assessment at scale using policy](periodic-assessment-at-scale.md) and [how to enable schedules on your machines at scale using Azure Policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy). ++### I have machines across multiple subscriptions in Automation Update Management. Is this scenario supported in Azure Update Manager? ++Yes, Azure Update Manager supports multi-subscription scenarios. ++### Is there guidance available to move VMs and schedules from SCCM to Azure Update Manager? ++Customers can follow this [guide](guidance-migration-azure.md) to move update configurations from SCCM to Azure Update Manager. ++## Miscellaneous ++### Can I configure my machines to fetch updates from WSUS (Windows) and a private repository (Linux)? ++By default, Azure Update Manager relies on the Windows Update (WU) client running on your machine to fetch updates. You can configure the WU client to fetch updates from the Microsoft Update/WSUS repository and manage patch schedules by using Azure Update Manager. ++Similarly for Linux, you can fetch updates by pointing your machine to a public repository or to a clone of a private repository that regularly pulls updates from the upstream repository. ++Azure Update Manager honors machine settings and installs updates accordingly. ++### Does Azure Update Manager store customer data? ++No, Azure Update Manager doesn't store any customer identifiable data outside of Azure Resource Graph for the subscription. ++## Next steps ++- [An overview of Azure Update Manager](overview.md) +- [What's new in Azure Update Manager](whats-new.md) |
update-manager | Updates Maintenance Schedules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/updates-maintenance-schedules.md | + + Title: Updates and maintenance in Azure Update Manager +description: This article describes the updates and maintenance options available in Azure Update Manager. + Last updated : 09/18/2023++++++# Update options and orchestration in Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++> [!IMPORTANT] +> - For a seamless scheduled patching experience, we recommend that for all Azure virtual machines (VMs), you update the patch orchestration to **Customer Managed Schedules**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules fail to patch the VMs. For more information, see [Configure schedule patching on Azure VMs to ensure business continuity](prerequsite-for-schedule-patching.md). +> - For Azure Arc-enabled servers, the updates and maintenance options such as automatic VM guest patching in Azure, Windows automatic updates, and hotpatching aren't supported. +++This article provides an overview of the various update options and orchestration in Azure Update Manager. ++## Update options ++### Automatic OS image upgrade ++When you enable [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) on your [Azure Virtual Machine Scale Set](../virtual-machine-scale-sets/overview.md), the OS disk of all instances in the scale set is safely and automatically upgraded. ++Automatic OS upgrade has the following characteristics: +- After you configure it, the latest OS image published by the image publisher is automatically applied to the scale set without any user intervention. +- It upgrades batches of instances in a rolling manner every time a new image is published by the publisher. +- It integrates with application health probes and the [Application Health extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md). +- It works for all VM sizes, and for both Windows and Linux images, including custom images through the [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). +- You have the flexibility to opt out of automatic upgrades at any time. (OS upgrades can be initiated manually as well.) +- The OS disk of a VM is replaced with a new OS disk created with the latest image version. Configured extensions and custom data scripts are run, while persisted data disks are retained. +- It supports extension sequencing. +- You can enable it on a scale set of any size. ++> [!NOTE] +> We recommend that you check the following: +> - Requirements before you enable automatic OS image upgrades +> - Supported OS images +> - Requirements to support custom images. [Learn more](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) +++### Automatic VM guest patching ++When you enable [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) on your Azure VMs, the virtual machines are safely and automatically patched to maintain security compliance. ++Automatic VM guest patching has the following characteristics: +- Patches classified as *Critical* or *Security* are automatically downloaded and applied on the VM.
+- Patches are applied during off-peak hours for IaaS VMs in the VM's time zone. +- Patches are applied during all hours for Azure Virtual Machine Scale Sets [VMSS Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration). +- Patch orchestration is managed by Azure, and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). +- Virtual machine health, as determined through platform health signals, is monitored to detect patching failures. +- You can monitor application health through the [Application Health Extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md). +- It works for all VM sizes. ++#### Enable VM property ++To enable the VM property, follow these steps: ++1. On the Azure Update Manager home page, go to **Update Settings**. +1. Select **Patch orchestration** as **Azure Managed-Safe Deployment**. ++> [!NOTE] +> We recommend the following: +> - Obtain an understanding of how automatic VM guest patching works. +> - Check the requirements before you enable automatic VM guest patching. +> - Check for supported OS images. [Learn more](../virtual-machines/automatic-vm-guest-patching.md) ++++### Hotpatching ++[Hotpatching](/windows-server/get-started/hotpatch?context=%2Fazure%2Fvirtual-machines%2Fcontext%2Fcontext) allows you to install OS security updates on supported *Windows Server Datacenter: Azure Edition* virtual machines without requiring a reboot after installation. It works by patching the in-memory code of running processes without the need to restart the process. ++Hotpatching has the following features: ++- Fewer binaries mean updates install faster and consume less disk and CPU resources. +- Lower workload impact with fewer reboots. +- Better protection, because the hotpatch update packages are scoped to Windows security updates that install faster without rebooting. +- Reduced time exposed to security risks, shorter change windows, and easier patch orchestration with Azure Update Manager. +++The hotpatching property is available as a setting in Azure Update Manager that you can enable by using the update settings flow. For more information, see [Hotpatch for virtual machines and supported platforms](/windows-server/get-started/hotpatch). ++### Automatic extension upgrade ++[Automatic Extension Upgrade](../virtual-machines/automatic-extension-upgrade.md) is available for Azure VMs and [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md). When Automatic Extension Upgrade is enabled on a VM or scale set, the extension is upgraded automatically whenever the extension publisher releases a new version for that extension. ++Automatic Extension Upgrade has the following features: ++- It's supported for Azure VMs and Azure Virtual Machine Scale Sets. +- Upgrades are applied following an [availability-first deployment model](../virtual-machines/automatic-extension-upgrade.md#availability-first-updates). +- For a Virtual Machine Scale Set, no more than 20% of the scale set virtual machines are upgraded in a single batch. The minimum batch size is one virtual machine. +- It works for all VM sizes and for both Windows and Linux extensions. +- It can be enabled on Virtual Machine Scale Sets of any size. +- Each supported extension is enrolled individually, and you can choose the extensions to upgrade automatically. +- It's supported in all public cloud regions.
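++As an illustration, automatic upgrade is opted into per extension. The following is a minimal Az PowerShell sketch with placeholder resource and extension names; check the linked article for the extensions that actually support automatic upgrade. ++```powershell +# Opt a single extension in to automatic upgrade; all names are placeholders. +Set-AzVMExtension -ResourceGroupName "myResourceGroup" ` +    -VMName "myVM" ` +    -Name "AzureMonitorWindowsAgent" ` +    -Publisher "Microsoft.Azure.Monitor" ` +    -ExtensionType "AzureMonitorWindowsAgent" ` +    -TypeHandlerVersion "1.0" ` +    -EnableAutomaticUpgrade $true +```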
For more information, see [supported extensions and Automatic Extension Upgrade](../virtual-machines/automatic-extension-upgrade.md#availability-first-updates). + + ### Windows automatic updates +This patching mode allows the operating system to automatically install updates on Windows VMs as soon as they're available. It uses the VM property that's enabled by setting the patch orchestration to OS orchestrated/Automatic by OS. ++> [!NOTE] +> - Windows automatic updates is not an Azure Update Manager setting but a Windows-level setting. +> - Azure Update Manager doesn't support [in-place upgrade for VMs running Windows Server in Azure](../virtual-machines/windows-in-place-upgrade.md). ++## Update or patch orchestration ++Azure Update Manager provides the flexibility to either install updates immediately or schedule updates within a defined maintenance window. These settings allow you to orchestrate patching for your virtual machines. ++### Update now/one-time update ++Azure Update Manager allows you to secure your machines immediately by installing updates on demand. To perform on-demand updates, see [Check and install one-time updates](deploy-updates.md#install-updates-on-a-single-vm). +++### Scheduled patching ++You can create a schedule on a daily, weekly, or hourly cadence as per your requirement, specify the machines that must be updated as part of the schedule, and the updates that you must install. The schedule then automatically installs the updates as per the specifications. ++Azure Update Manager uses maintenance control schedules instead of creating its own. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control](../virtual-machines/maintenance-configurations.md). ++Use [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules. ++> [!NOTE] +> The patch orchestration property for Azure machines should be set to **Customer Managed Schedules** because it's a prerequisite for scheduled patching. For more information, see the [list of prerequisites](scheduled-patching.md#prerequisites-for-scheduled-patching). ++> [!IMPORTANT] +> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md). +> - For Arc-enabled servers, the updates and maintenance options such as automatic VM guest patching in Azure, Windows automatic updates, and hotpatching aren't supported. + + +## Next steps ++* To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot Azure Update Manager issues, see [Troubleshoot issues](troubleshoot.md). |
update-manager | View Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/view-updates.md | + + Title: Check update compliance in Azure Update Manager +description: This article explains how to use Azure Update Manager in the Azure portal to assess update compliance for supported machines. + Last updated : 09/18/2023++++++# Check update compliance with Azure Update Manager ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article explains how to check the status of available updates on a single VM or multiple VMs by using Azure Update Manager. ++## Check updates on a single VM ++You can check the updates from the **Overview** or **Machines** pane on the **Update Manager** page or from the selected VM. ++# [From the Overview pane](#tab/singlevm-overview) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Overview** page, select your subscription to view all your machines, and then select **Check for updates**. ++1. On the **Select resources and check for updates** pane, choose the machine that you want to check for updates, and then select **Check for updates**. ++ An assessment is performed and a notification appears as a confirmation. ++ :::image type="content" source="./media/view-updates/check-updates-overview-inline.png" alt-text="Screenshot that shows checking updates from Overview." lightbox="./media/view-updates/check-updates-overview-expanded.png"::: + + The **Update status of machines**, **Patch orchestration configuration** of Azure VMs, and **Total installation runs** tiles are refreshed and display the results. ++# [From the Machines pane](#tab/singlevm-machines) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Machines** page, select your subscription to view all your machines. ++1. Select the checkbox for your machine, and then select **Check for updates** > **Assess now**. Alternatively, you can select your machine and in **Updates**, select **Assess updates**. In **Trigger assess now**, select **OK**. ++ An assessment is performed and a notification appears first that says **Assessment is in progress**. After a successful assessment, you see **Assessment successful**. Otherwise, you see the notification **Assessment Failed**. For more information, see [Update assessment scan](assessment-options.md#update-assessment-scan). ++# [From a selected VM](#tab/singlevm-home) ++1. Select your virtual machine to open the **Virtual machines | Updates** page. +1. Under **Operations**, select **Updates**. +1. On the **Updates** pane, select **Go to Updates using Update Manager**. ++ :::image type="content" source="./media/view-updates/resources-check-updates.png" alt-text="Screenshot that shows selection of updates from the home page."::: ++1. On the **Updates** page, select **Check for updates**. In **Trigger assess now**, select **OK**. ++ An assessment is performed and a notification says **Assessment is in progress**. After the assessment, you see **Assessment successful** or **Assessment failed**. ++ :::image type="content" source="./media/view-updates/check-updates-home-inline.png" alt-text="Screenshot that shows the status after checking updates." lightbox="./media/view-updates/check-updates-home-expanded.png"::: ++ For more information, see [Update assessment scan](assessment-options.md#update-assessment-scan). 
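++If you prefer to script the single-machine assessment, an equivalent on-demand assessment can be triggered with Az PowerShell. This is a minimal sketch with placeholder resource names: ++```powershell +# Trigger an on-demand patch assessment for one Azure VM (placeholder names). +Invoke-AzVMPatchAssessment -ResourceGroupName "myResourceGroup" -VMName "myVM" +```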
++++## Check updates at scale ++To check for updates on your machines at scale, follow these steps. ++You can check for updates from the **Overview** or **Machines** pane. ++# [From the Overview pane](#tab/at-scale-overview) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Overview** page, select your subscription to view all your machines, and then select **Check for updates**. ++1. On the **Select resources and check for updates** pane, choose the machines that you want to check for updates, and then select **Check for updates**. ++ An assessment is performed and a notification appears as a confirmation. + + The **Update status of machines**, **Patch orchestration configuration** of Azure virtual machines, and **Total installation runs** tiles are refreshed and display the results. ++# [From the Machines pane](#tab/at-scale-machines) ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. On the **Azure Update Manager** | **Machines** page, select your subscription to view all your machines. ++1. Choose **Select all** to select all your machines, and then select **Check for updates**. ++1. Select **Assess now** to perform the assessment. ++ A notification appears when the operation starts and when it finishes. After a successful scan, the **Update Manager | Machines** page is refreshed to display the updates. ++++> [!NOTE] +> In Update Manager, you can initiate a software update compliance scan on a machine to get the current list of operating system (guest) updates, including security and critical updates. On Windows, the Windows Update Agent performs the software update scan. On Linux, the software update scan is performed by using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL definitions for that platform, which are retrieved from a local or remote repository. ++## Next steps ++* To learn how to deploy updates on your machines to maintain security compliance, see [Deploy updates](deploy-updates.md). +* To view the update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). |
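If you prefer to trigger the same assessment from the command line instead of the portal, the following Azure CLI sketch shows an on-demand assessment for one VM and a simple loop for an at-scale check. This is a minimal sketch with placeholder names, and it assumes the machines are Azure VMs; Arc-enabled servers use a different command surface.

```bash
# Hypothetical names; substitute your own.
RG="myResourceGroup"
VM="myVM"

# Trigger an on-demand update assessment for a single Azure VM.
az vm assess-patches --resource-group "$RG" --name "$VM"

# A naive at-scale variant: assess every VM in the resource group in turn.
for vm in $(az vm list --resource-group "$RG" --query "[].name" --output tsv); do
  az vm assess-patches --resource-group "$RG" --name "$vm"
done
```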
update-manager | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md | + + Title: What's new in Azure Update Manager +description: Learn about what's new and recent updates in the Azure Update Manager service. ++++ Last updated : 09/18/2023+++# What's new in Azure Update Manager ++[Azure Update Manager](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. This article summarizes new releases and features in Azure Update Manager. ++## October 2023 ++### Azure Migrate, Azure Backup, Azure Site Recovery VMs support (preview) ++Azure Update Manager now supports scheduled patching and periodic assessment, in preview, for [specialized](../virtual-machines/linux/imaging.md#specialized-images) VMs, including VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery. ++## September 2023 ++**Azure Update Manager is now Generally Available**. ++For more information, see [the announcement](https://techcommunity.microsoft.com/t5/azure-governance-and-management/generally-available-azure-update-manager/ba-p/3928878). ++## August 2023 ++### Service rebranding ++Update management center is now rebranded as Azure Update Manager. ++### New region support ++Azure Update Manager is now available in the Canada East and Sweden Central regions for Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). ++### SQL Server patching (preview) ++SQL Server patching (preview) allows you to manage and govern updates for all your SQL Server instances by using the patching capabilities provided by Azure Update Manager. [Learn more](guidance-patching-sql-server-azure-vm.md). ++## July 2023 ++### Dynamic scope ++Dynamic scope is an advanced capability of scheduled patching. You can now create a group of [machines based on a schedule and apply patches](dynamic-scope-overview.md) on those machines at scale. [Learn more](tutorial-dynamic-grouping-for-scheduled-patching.md). + ++## May 2023 ++### Customized image support ++Update Manager now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images. See the [list of supported operating systems](support-matrix.md#supported-operating-systems). ++### Multi-subscription support ++The limit on the number of subscriptions that you can manage through the Update Manager portal has been removed. You can now manage all your subscriptions by using the Update Manager portal. ++## April 2023 ++### New prerequisite for scheduled patching ++A new patch orchestration option, **Customer Managed Schedules (Preview)**, is introduced as a prerequisite to enable scheduled patching on Azure VMs. The new option enables the *Azure-orchestrated* and *BypassPlatformSafetyChecksOnUserSchedule* VM properties on your behalf after receiving your consent. [Learn more](prerequsite-for-schedule-patching.md). ++> [!IMPORTANT] +> For a seamless scheduled patching experience, we recommend that you update the patch orchestration to **Customer Managed Schedules (Preview)** for all Azure VMs by **June 30, 2023**. If you don't update the patch orchestration by **June 30, 2023**, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. 
+++## November 2022 ++### New region support ++Update Manager now supports five new regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). ++## October 2022 ++### Improved onboarding experience ++You can now enable periodic assessment for your machines at scale by using [Policy](periodic-assessment-at-scale.md) or from the [portal](manage-update-settings.md#configure-settings-on-a-single-vm). +++## Next steps ++- [Learn more](support-matrix.md) about supported regions. |
update-manager | Whats Upcoming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-upcoming.md | + + Title: What's upcoming in Azure Update Manager +description: Learn about what's upcoming and updates in Azure Update Manager. ++++ Last updated : 09/27/2023+++# What are the upcoming features in Azure Update Manager? ++The article [What's new in Azure Update Manager](whats-new.md) summarizes recent feature releases. This article lists the upcoming features for Azure Update Manager. ++## Alerting +Enable alerts to address events captured in updates data. ++## Prescript and postscript ++The ability to run Azure Automation runbook scripts before or after deploying scheduled updates to machines will be available by the fourth quarter of 2023. ++## Next steps ++For more information about supported regions, see [Support matrix for Update Manager](support-matrix.md). |
update-manager | Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/workbooks.md | + + Title: An overview of workbooks +description: This article provides information on how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports. + Last updated : 09/18/2023++++++# About workbooks ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++Workbooks provide a flexible canvas for data analysis and help you create rich visual reports. This article describes the features that workbooks offer in Azure Update Manager. ++## Key benefits ++- Use as a canvas for data analysis and creation of visual reports. +- Access specific metrics from within the reports. +- Create interactive reports with various kinds of visualizations. +- Create, share, and pin workbooks to the dashboard. +- Combine text, log queries, metrics, and parameters to make rich visual reports. ++## The gallery ++The gallery lists all the saved workbooks and templates for your workspace. You can easily organize, sort, and manage workbooks of all types. ++ :::image type="content" source="./media/workbooks/workbooks-gallery.png" alt-text="Screenshot that shows the workbooks gallery."::: ++The following four tabs help you organize workbook types. ++ | Tab | Description | + ||| + | **All** | Shows the top four items for **Workbooks**, **Public Templates**, and **My Templates**. Workbooks are sorted by modified date, so you see the eight most recently modified workbooks.| + | **Workbooks** | Shows the list of all the available workbooks that you created or that are shared with you. | + | **Public Templates** | Shows the list of all the available ready-to-use functional workbook templates published by Microsoft, grouped by category. | + | **My Templates** | Shows the list of all the available deployed workbook templates that you created or that are shared with you, grouped by category. | ++- On the **Quick start** tile, you can create new workbooks. ++ :::image type="content" source="./media/workbooks/quickstart-workbooks.png" alt-text="Screenshot that shows creating a new workbook by using Quick start."::: ++- On the **Azure Update Manager** tile, you can view the following summary. + + :::image type="content" source="./media/workbooks/workbooks-summary-inline.png" alt-text="Screenshot that shows a workbook summary." lightbox="./media/workbooks/workbooks-summary-expanded.png"::: + + - **Machines overall status and configurations**: Provides the status of all machines in a specific subscription. ++ :::image type="content" source="./media/workbooks/workbooks-machine-overall-status-inline.png" alt-text="Screenshot that shows the overall status and configuration of machines." lightbox="./media/workbooks/workbooks-machine-overall-status-expanded.png"::: ++ - **Updates data overview**: Provides a summary of machines that need no updates, assessments, or reboots, including pending Windows and Linux updates by classification and by machine count. ++ :::image type="content" source="./media/workbooks/workbooks-machines-updates-status-inline.png" alt-text="Screenshot that shows a summary of machines that have no updates and assessments needed." 
lightbox="./media/workbooks/workbooks-machines-updates-status-expanded.png"::: + + - **Schedules/Maintenance configurations**: Provides a summary of schedules, maintenance configurations, and a list of machines attached to the schedule. You can also access the maintenance configuration overview page from this section. + + :::image type="content" source="./media/workbooks/workbooks-schedules-maintenance-inline.png" alt-text="Screenshot that shows a summary of schedules and maintenance configurations." lightbox="./media/workbooks/workbooks-schedules-maintenance-expanded.png"::: ++ - **History of installation runs**: Provides a history of machines and maintenance runs. ++ :::image type="content" source="./media/workbooks/workbooks-history-installation-inline.png" alt-text="Screenshot that shows a history of installation runs." lightbox="./media/workbooks/workbooks-history-installation-expanded.png"::: ++For information on how to use the workbooks for customized reporting, see [Edit a workbook](manage-workbooks.md#edit-a-workbook). ++## Next steps ++ To learn how to deploy updates to your machines to maintain security compliance, see [Deploy updates](deploy-updates.md). |
virtual-desktop | Set Up Golden Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-golden-image.md | Title: Create an Azure Virtual Desktop golden image description: A walkthrough for how to set up a golden image for your Azure Virtual Desktop deployment in the Azure portal.-+ Last updated 12/01/2021 |
virtual-machines | Prepay Reserved Vm Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/prepay-reserved-vm-instances.md | -When you commit to an Azure reserved VM instance you can save money. The reservation discount is applied automatically to the number of running virtual machines that match the reservation scope and attributes. You don't need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the compute part of your VM usage. For Windows VMs, the usage meter is split into two separate meters. There's a compute meter, which is same as the Linux meter, and a Windows IP meter. The charges that you see when you make the purchase are only for the compute costs. Charges don't include Windows software costs. For more information about software costs, see [Software costs not included with Azure Reserved VM Instances](../cost-management-billing/reservations/reserved-instance-windows-software-costs.md). +When you commit to an Azure reserved VM instance, you can save money. The reservation discount is applied automatically to the number of running virtual machines that match the reservation scope and attributes. You don't need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the compute part of your VM usage. For Windows VMs, the usage meter is split into two separate meters. There's a compute meter, which is the same as the Linux meter, and a Windows Server license meter. The charges that you see when you make the purchase are only for the compute costs. Charges don't include Windows software costs. For more information about software costs, see [Software costs not included with Azure Reserved VM Instances](../cost-management-billing/reservations/reserved-instance-windows-software-costs.md). ## Determine the right VM size before you buy |
virtual-network | Virtual Networks Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md | Certain services (such as Azure SQL and Azure Cosmos DB) allow exceptions to the Turning on the service endpoints on the network side can lead to a connectivity drop, because the source IP changes from a public IPv4 address to a private address. Setting up virtual network ACLs on the Azure service side before turning on service endpoints on the network side can help avoid a connectivity drop. +>[!NOTE] +> If you enable a service endpoint for certain services, like "Microsoft.AzureActiveDirectory", you might see IPv6 address connections in the sign-in logs. Microsoft uses an internal IPv6 private range for this type of connection. + ### Do all Azure services reside in the Azure virtual network that the customer provides? How does a virtual network service endpoint work with Azure services? Not all Azure services reside in the customer's virtual network. Most Azure data services (such as Azure Storage, Azure SQL, and Azure Cosmos DB) are multitenant services that can be accessed over public IP addresses. For more information, see [Deploy dedicated Azure services into virtual networks](virtual-network-for-azure-services.md). |
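For the network-side step that the note above discusses, a service endpoint is enabled on the subnet. The following Azure CLI sketch is a minimal example with placeholder resource names; the service names passed to `--service-endpoints` are examples, not a recommendation.

```bash
# Hypothetical names; substitute your own.
RG="myResourceGroup"
VNET="myVNet"
SUBNET="mySubnet"

# Enable service endpoints on the subnet. After this change, traffic to the
# listed services originates from the subnet's private address space.
az network vnet subnet update \
  --resource-group "$RG" \
  --vnet-name "$VNET" \
  --name "$SUBNET" \
  --service-endpoints Microsoft.Storage Microsoft.Sql
```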
vpn-gateway | Gateway Sku Resize | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-resize.md | + + Title: 'Resize a gateway SKU' ++description: Learn how to resize a gateway SKU. +++ Last updated : 10/20/2023++++# Resize a gateway SKU for VPN Gateway ++This article helps you resize a gateway SKU. Resizing a gateway SKU is a relatively fast process. You don't need to delete and re-create your existing VPN gateway to resize it. However, there are limitations and restrictions on resizing, and not all SKUs are available to resize to. ++When you use the portal to resize your SKU, the dropdown list of available SKUs is based on the SKU you currently have. If you don't see the SKU that you want to resize to, you have to change to a new SKU instead of resizing. For more information, see [About VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md). ++## Resize a SKU ++1. Go to the **Configuration** page for your virtual network gateway. +1. On the right side of the page, select the dropdown arrow to show a list of available SKUs. The options listed are based on the starting SKU and SKU generation. ++ :::image type="content" source="./media/gateway-sku-resize/resize-sku.png" alt-text="Screenshot showing how to resize the gateway SKU." lightbox ="./media/gateway-sku-resize/resize-sku.png"::: +1. Select the SKU from the dropdown list. +1. **Save** your changes. ++## Next steps ++For more information about SKUs, see [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md). |
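If you prefer the command line to the portal steps above, the following Azure CLI sketch shows the equivalent resize. The gateway and resource group names are placeholders, and the target SKU must be one that's valid to resize to from your current SKU and generation.

```bash
# Hypothetical names; substitute your own.
RG="myResourceGroup"
GW="myVpnGateway"

# Resize the gateway to a different SKU within the same generation.
# The operation is relatively fast, but the gateway stays in an
# updating state while it completes.
az network vnet-gateway update \
  --resource-group "$RG" \
  --name "$GW" \
  --sku VpnGw2
```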