Updates from: 07/25/2022 01:06:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
You can use scoping filters to define attribute-based rules that determine which
### B2B (guest) users
-It's possible to use the Azure AD user provisioning service to provision B2B (or guest) users in Azure AD to SaaS applications.
+It's possible to use the Azure AD user provisioning service to provision B2B (guest) users in Azure AD to SaaS applications.
However, for B2B users to sign in to the SaaS application using Azure AD, the SaaS application must have its SAML-based single sign-on capability configured in a specific way. For more information on how to configure SaaS applications to support sign-ins from B2B users, see [Configure SaaS apps for B2B collaboration](../external-identities/configure-saas-apps.md).
> [!NOTE]
-> The userPrincipalName for a guest user is often displayed as "alias#EXT#@domain.com". When the userPrincipalName is included in your attribute mappings as a source attribute, the #EXT# is stripped from the userPrincipalName. If you require the #EXT# to be present, replace userPrincipalName with originalUserPrincipalName as the source attribute.
-userPrincipalName = alias@domain.com
-originalUserPrincipalName = alias#EXT#@domain.com
+> The userPrincipalName for a B2B user represents the external user's email address alias@theirdomain as "alias_theirdomain#EXT#@yourdomain". When the userPrincipalName attribute is included in your attribute mappings as a source attribute, and a B2B user is being provisioned, the #EXT# and your domain are stripped from the userPrincipalName, so only the original alias@theirdomain is used for matching or provisioning. If you require the full user principal name, including #EXT# and your domain, to be present, replace userPrincipalName with originalUserPrincipalName as the source attribute. <br />
+userPrincipalName = alias@theirdomain<br />
+originalUserPrincipalName = alias_theirdomain#EXT#@yourdomain
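As an illustration of the note above, the following PowerShell sketch reverses the standard guest UPN shape to show which value each source attribute yields. The guest values here are hypothetical, and the function is illustrative only, not part of the provisioning service:

```powershell
# Hypothetical guest account, for illustration only.
$originalUserPrincipalName = 'alias_theirdomain.com#EXT#@yourdomain.onmicrosoft.com'

# Recover the external address that provisioning uses when userPrincipalName
# is the source attribute: the #EXT# suffix and your domain are removed, and
# the last underscore in the remaining local part becomes an @.
function ConvertTo-ExternalAddress([string]$originalUpn) {
    $local = $originalUpn.Split('#')[0]        # alias_theirdomain.com
    $i = $local.LastIndexOf('_')
    $local.Substring(0, $i) + '@' + $local.Substring($i + 1)
}

ConvertTo-ExternalAddress $originalUserPrincipalName   # alias@theirdomain.com
```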
## Provisioning cycles: Initial and incremental
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
The following tables list Azure AD feature availability in Azure Government.
|| Service-level agreement | &#x2705; |
|**Applications access**|SaaS apps with modern authentication (Azure AD application gallery apps, SAML, and OAUTH 2.0) | &#x2705; |
|| Group assignment to applications | &#x2705; |
-|| Cloud app discovery (Microsoft Cloud App Security) | &#x2705; |
+|| Cloud app discovery (Microsoft Defender for Cloud Apps) | &#x2705; |
|| Application Proxy for on-premises, header-based, and Integrated Windows Authentication | &#x2705; |
|| Secure hybrid access partnerships (Kerberos, NTLM, LDAP, RDP, and SSH authentication) | &#x2705; |
|**Authorization and Conditional Access**|Role-based access control (RBAC) | &#x2705; |
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
The first time your organization uses these cmdlets for this scenario, you need
1. Retrieve the IDs of those users in Azure AD.
- The following PowerShell script uses the `$dbusers`, `$db_match_column_name`, and `$azuread_match_attr_name` values specified earlier. It will query Azure AD to locate a user that has a matching value for each record in the source file. If there are many users in the database, this script might take several minutes to finish.
+ The following PowerShell script uses the `$dbusers`, `$db_match_column_name`, and `$azuread_match_attr_name` values specified earlier. It queries Azure AD to locate a user that has an attribute with a matching value for each record in the source file. If there are many users in the database, this script might take several minutes to finish. If no attribute in Azure AD holds that value and you need a `contains` or other filter expression, you'll need to customize this script, and the one in step 11 below, to use a different filter expression.
```powershell
$dbu_not_queried_list = @()
$dbu_match_ambiguous_list = @()
$dbu_query_failed_list = @()
$azuread_match_id_list = @()
+ $azuread_not_enabled_list = @()
+ $dbu_values = @()
+ $dbu_duplicate_list = @()
foreach ($dbu in $dbusers) {
   if ($null -ne $dbu.$db_match_column_name -and $dbu.$db_match_column_name.Length -gt 0) {
      $val = $dbu.$db_match_column_name
      $escval = $val -replace "'","''"
+ if ($dbu_values -contains $escval) { $dbu_duplicate_list += $dbu; continue } else { $dbu_values += $escval }
      $filter = $azuread_match_attr_name + " eq '" + $escval + "'"
      try {
- $ul = @(Get-MgUser -Filter $filter -All -ErrorAction Stop)
+ $ul = @(Get-MgUser -Filter $filter -All -Property Id,accountEnabled -ErrorAction Stop)
         if ($ul.length -eq 0) { $dbu_not_matched_list += $dbu } elseif ($ul.length -gt 1) { $dbu_match_ambiguous_list += $dbu } else {
            $id = $ul[0].id
            $azuread_match_id_list += $id
+ if ($ul[0].accountEnabled -eq $false) {$azuread_not_enabled_list += $id }
         }
      } catch { $dbu_query_failed_list += $dbu }
   } else { $dbu_not_queried_list += $dbu }
if ($dbu_not_queried_count -ne 0) { Write-Error "Unable to query for $dbu_not_queried_count records as rows lacked values for $db_match_column_name." }
+ $dbu_duplicate_count = $dbu_duplicate_list.Count
+ if ($dbu_duplicate_count -ne 0) {
+ Write-Error "Unable to locate Azure AD users for $dbu_duplicate_count rows as multiple rows have the same value."
+ }
$dbu_not_matched_count = $dbu_not_matched_list.Count
if ($dbu_not_matched_count -ne 0) {
   Write-Error "Unable to locate $dbu_not_matched_count records in Azure AD by querying for $db_match_column_name values in $azuread_match_attr_name."
}
$dbu_match_ambiguous_count = $dbu_match_ambiguous_list.Count
if ($dbu_match_ambiguous_count -ne 0) {
- Write-Error "Unable to locate $dbu_match_ambiguous_count records in Azure AD."
+ Write-Error "Unable to locate $dbu_match_ambiguous_count records in Azure AD because the attribute match is ambiguous."
}
$dbu_query_failed_count = $dbu_query_failed_list.Count
if ($dbu_query_failed_count -ne 0) {
   Write-Error "Unable to locate $dbu_query_failed_count records in Azure AD as queries returned errors."
}
- if ($dbu_not_queried_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0) {
+ $azuread_not_enabled_count = $azuread_not_enabled_list.Count
+ if ($azuread_not_enabled_count -ne 0) {
+ Write-Error "$azuread_not_enabled_count users in Azure AD are blocked from sign-in."
+ }
+ if ($dbu_not_queried_count -ne 0 -or $dbu_duplicate_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0 -or $azuread_not_enabled_count -ne 0) {
Write-Output "You will need to resolve those issues before access of all existing users can be reviewed."
}
$azuread_match_count = $azuread_match_id_list.Count
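One design note on the duplicate check above: it appends values to an array and tests membership with `-contains`, which rescans the whole array for every row. For large source files, a .NET HashSet keeps each check constant-time. The following variant is a sketch, not part of the published script:

```powershell
# Variant of the duplicate check for large inputs. HashSet.Add returns $false
# when the value is already present, so one call both tests and records it.
$dbu_values = [System.Collections.Generic.HashSet[string]]::new()
$dbu_duplicate_list = @()
foreach ($dbu in $dbusers) {
    $escval = ($dbu.$db_match_column_name) -replace "'","''"
    if (-not $dbu_values.Add($escval)) { $dbu_duplicate_list += $dbu; continue }
    # ... query Azure AD for this value, as in the script above ...
}
```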
For example, someone's email address might have been changed in Azure AD without their corresponding `mail` property being updated in the application's data source. Or, the user might have already left the organization but is still in the application's data source. Or there might be a vendor or super-admin account in the application's data source that does not correspond to any specific person in Azure AD.
-1. If there were users who couldn't be located in Azure AD, but you want to have their access reviewed or their attributes updated in the database, you need to create Azure AD users for them. You can create users in bulk by using either:
+1. If there were users who couldn't be located in Azure AD, or who weren't active and able to sign in, but whose access you want reviewed or whose attributes you want updated in the database, you need to update or create Azure AD users for them. You can create users in bulk by using either:
   - A CSV file, as described in [Bulk create users in the Azure AD portal](../enterprise-users/users-bulk-add.md)
   - The [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser?view=graph-powershell-1.0#examples) cmdlet
$dbu_match_ambiguous_list = @()
$dbu_query_failed_list = @()
$azuread_match_id_list = @()
+ $azuread_not_enabled_list = @()
+ $dbu_values = @()
+ $dbu_duplicate_list = @()
foreach ($dbu in $dbusers) {
   if ($null -ne $dbu.$db_match_column_name -and $dbu.$db_match_column_name.Length -gt 0) {
      $val = $dbu.$db_match_column_name
      $escval = $val -replace "'","''"
+ if ($dbu_values -contains $escval) { $dbu_duplicate_list += $dbu; continue } else { $dbu_values += $escval }
      $filter = $azuread_match_attr_name + " eq '" + $escval + "'"
      try {
- $ul = @(Get-MgUser -Filter $filter -All -ErrorAction Stop)
+ $ul = @(Get-MgUser -Filter $filter -All -Property Id,accountEnabled -ErrorAction Stop)
         if ($ul.length -eq 0) { $dbu_not_matched_list += $dbu } elseif ($ul.length -gt 1) { $dbu_match_ambiguous_list += $dbu } else {
            $id = $ul[0].id
            $azuread_match_id_list += $id
+ if ($ul[0].accountEnabled -eq $false) {$azuread_not_enabled_list += $id }
         }
      } catch { $dbu_query_failed_list += $dbu }
   } else { $dbu_not_queried_list += $dbu }
if ($dbu_not_queried_count -ne 0) { Write-Error "Unable to query for $dbu_not_queried_count records as rows lacked values for $db_match_column_name." }
+ $dbu_duplicate_count = $dbu_duplicate_list.Count
+ if ($dbu_duplicate_count -ne 0) {
+ Write-Error "Unable to locate Azure AD users for $dbu_duplicate_count rows as multiple rows have the same value."
+ }
$dbu_not_matched_count = $dbu_not_matched_list.Count
if ($dbu_not_matched_count -ne 0) {
   Write-Error "Unable to locate $dbu_not_matched_count records in Azure AD by querying for $db_match_column_name values in $azuread_match_attr_name."
}
$dbu_match_ambiguous_count = $dbu_match_ambiguous_list.Count
if ($dbu_match_ambiguous_count -ne 0) {
- Write-Error "Unable to locate $dbu_match_ambiguous_count records in Azure AD."
+ Write-Error "Unable to locate $dbu_match_ambiguous_count records in Azure AD because the attribute match is ambiguous."
}
$dbu_query_failed_count = $dbu_query_failed_list.Count
if ($dbu_query_failed_count -ne 0) {
   Write-Error "Unable to locate $dbu_query_failed_count records in Azure AD as queries returned errors."
}
- if ($dbu_not_queried_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0) {
+ $azuread_not_enabled_count = $azuread_not_enabled_list.Count
+ if ($azuread_not_enabled_count -ne 0) {
+ Write-Error "$azuread_not_enabled_count users in Azure AD are blocked from sign-in."
+ }
+ if ($dbu_not_queried_count -ne 0 -or $dbu_duplicate_count -ne 0 -or $dbu_not_matched_count -ne 0 -or $dbu_match_ambiguous_count -ne 0 -or $dbu_query_failed_count -ne 0 -or $azuread_not_enabled_count -ne 0) {
Write-Output "You will need to resolve those issues before access of all existing users can be reviewed."
}
$azuread_match_count = $azuread_match_id_list.Count
When an application role assignment is created in Azure AD for a user to an appl
If you don't see users being provisioned, check the [troubleshooting guide for no users being provisioned](../app-provisioning/application-provisioning-config-problem-no-users-provisioned.md). If you see an error in the provisioning status and are provisioning to an on-premises application, check the [troubleshooting guide for on-premises application provisioning](../app-provisioning/on-premises-ecma-troubleshoot.md).
+1. Check the [provisioning log](../reports-monitoring/concept-provisioning-logs.md). Filter the log to the status **Failure**. If there are failures with an ErrorCode of **DuplicateTargetEntries**, this indicates an ambiguity in your provisioning matching rules, and you'll need to update the Azure AD users or the mappings that are used for matching to ensure each Azure AD user matches one application user. Then filter the log to the action **Create** and status **Skipped**. If users were skipped with the SkipReason code of **NotEffectivelyEntitled**, this may indicate that the user accounts in Azure AD were not matched because the user account status was **Disabled**.
+ After the Azure AD provisioning service has matched the users based on the application role assignments you've created, subsequent changes will be sent to the application.
## Next steps
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [B2C IEF Keyset Administrator](#b2c-ief-keyset-administrator) | Can manage secrets for federation and encryption in the Identity Experience Framework (IEF). | aaf43236-0c0d-4d5f-883a-6955382ac081 |
> | [B2C IEF Policy Administrator](#b2c-ief-policy-administrator) | Can create and manage trust framework policies in the Identity Experience Framework (IEF). | 3edaf663-341e-4475-9f94-5c398ef6c070 |
> | [Billing Administrator](#billing-administrator) | Can perform common billing related tasks like updating payment information. | b0f54661-2d74-4c50-afa3-1ec803f12efe |
-> | [Cloud App Security Administrator](#cloud-app-security-administrator) | Can manage all aspects of the Cloud App Security product. | 892c5842-a9a6-463a-8041-72aa08ca3cf6 |
+> | [Cloud App Security Administrator](#cloud-app-security-administrator) | Can manage all aspects of the Defender for Cloud Apps product. | 892c5842-a9a6-463a-8041-72aa08ca3cf6 |
> | [Cloud Application Administrator](#cloud-application-administrator) | Can create and manage all aspects of app registrations and enterprise apps except App Proxy. | 158c047a-c907-4556-b7ef-446551a6b5f7 |
> | [Cloud Device Administrator](#cloud-device-administrator) | Limited access to manage devices in Azure AD. | 7698a772-787b-4ac8-901f-60d6b08affd2 |
> | [Compliance Administrator](#compliance-administrator) | Can read and manage compliance configuration and reports in Azure AD and Microsoft 365. | 17315797-102d-40b4-93e0-432062caca18 |
Makes purchases, manages subscriptions, manages support tickets, and monitors se
## Cloud App Security Administrator
-Users with this role have full permissions in Cloud App Security. They can add administrators, add Microsoft Cloud App Security (MCAS) policies and settings, upload logs, and perform governance actions.
+Users with this role have full permissions in Defender for Cloud Apps. They can add administrators, add Microsoft Defender for Cloud Apps policies and settings, upload logs, and perform governance actions.
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | --- | --- |
-> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security |
+> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Defender for Cloud Apps |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
## Cloud Application Administrator
In | Can do
> | Actions | Description |
> | --- | --- |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
-> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security |
+> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Defender for Cloud Apps |
> | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
> | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
-> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security |
+> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Defender for Cloud Apps |
> | microsoft.directory/connectors/create | Create application proxy connectors |
> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
> | microsoft.directory/connectorGroups/create | Create application proxy connector groups |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users |
> | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy |
> | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
-> | microsoft.directory/cloudAppSecurity/allProperties/read | Read all properties for Cloud app security |
+> | microsoft.directory/cloudAppSecurity/allProperties/read | Read all properties for Defender for Cloud Apps |
> | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors |
> | microsoft.directory/connectorGroups/allProperties/read | Read all properties of application proxy connector groups |
> | microsoft.directory/contacts/allProperties/read | Read all properties for contacts |
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
| Azure Monitor logs and metrics | No | Yes | Yes | Yes | Yes |
| Static IP | No | Yes | Yes | Yes | Yes |
| [WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes |
-| [GraphQL APIs](graphql-api.md) | Yes | Yes | Yes | Yes | Yes |
-| [GraphQL resolvers (preview)](graphql-schema-resolve-api.md) | Yes | Yes | Yes | Yes | Yes |
+| [GraphQL APIs](graphql-api.md)<sup>5</sup> | Yes | Yes | Yes | Yes | Yes |
+| [Synthetic GraphQL APIs (preview)](graphql-schema-resolve-api.md) | No | Yes | Yes | Yes | Yes |
<sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality e.g. users, groups, issues, applications and email templates and notifications.<br/> <sup>3</sup> In the Developer tier self-hosted gateways are limited to single gateway node.<br/>
-<sup>4</sup>The following policies aren't available in the Consumption tier: rate limit by key and quota by key.
+<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key.<br/>
+<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier.
azure-arc Manage Vm Extensions Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-template.md
The Custom Script extension configuration specifies things like script location
## Deploy the Dependency agent extension
-To use the Azure Monitor Dependency agent extension, the following sample is provided to run on Windows and Linux. If you are unfamiliar with the Dependency agent, see [Overview of Azure Monitor agents](../../azure-monitor/agents/agents-overview.md#dependency-agent).
+To use the Azure Monitor Dependency agent extension, the following sample is provided to run on Windows and Linux. If you are unfamiliar with the Dependency agent, see [Overview of Azure Monitor agents](../../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).
### Template file for Linux
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-administration.md
Title: How to administer Azure Cache for Redis description: Learn how to perform administration tasks such as reboot and schedule updates for Azure Cache for Redis + Previously updated : 05/21/2021 Last updated : 07/22/2021
If you have a premium cache with clustering enabled, you can select which shards
:::image type="content" source="media/cache-administration/redis-cache-reboot-cluster-2.png" alt-text="screenshot of shard options":::
-To reboot one or more nodes of your cache, select the nodes and select **Reboot**. If you have a premium cache with clustering enabled, select the shards to reboot and then select **Reboot**. After a few minutes, the selected nodes reboot, and are back online a few minutes later.
+To reboot one or more nodes of your cache, select the nodes and select **Reboot**. If you have a premium cache with clustering enabled, select the shards to reboot, and then select **Reboot**. After a few minutes, the selected nodes reboot, and are back online a few minutes later.
The effect on your client applications varies depending on which nodes you reboot.
If you don't specify a maintenance window, updates can be made at any time.
### What type of updates are made during the scheduled maintenance window?
-Only Redis server updates are made during the scheduled maintenance window. The maintenance window doesn't apply to Azure updates or updates to the VM operating system.
+Only Redis server updates are made during the scheduled maintenance window. The maintenance window doesn't apply to Azure updates or updates to the host operating system.
### Can I manage scheduled updates using PowerShell, CLI, or other management tools?
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Title: Configure a virtual network - Premium-tier Azure Cache for Redis instance description: Learn how to create and manage virtual network support for your Premium-tier Azure Cache for Redis instance+ Previously updated : 05/06/2022 Last updated : 07/22/2022
Last updated 05/06/2022
[Azure Virtual Network](https://azure.microsoft.com/services/virtual-network/) deployment provides enhanced security and isolation along with: subnets, access control policies, and other features to restrict access further. When an Azure Cache for Redis instance is configured with a virtual network, it isn't publicly addressable. Instead, the instance can only be accessed from virtual machines and applications within the virtual network. This article describes how to configure virtual network support for a Premium-tier Azure Cache for Redis instance. > [!NOTE]
-> Azure Cache for Redis supports both classic deployment model and Azure Resource Manager virtual networks.
+> The classic deployment model is retiring in August 2024. For more information, see [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/).
> > [!IMPORTANT]
azure-functions Create Resources Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-resources-azure-powershell.md
+
+ Title: Create function app resources in Azure using PowerShell
+description: Azure PowerShell scripts that show you how to create the Azure resources required to host your functions code in Azure.
+ Last updated : 07/18/2022+
+# Create function app resources in Azure using PowerShell
+
+The Azure PowerShell example scripts in this article create function apps and other resources required to host your functions in Azure. A function app provides an execution context in which your functions are executed. All functions running in a function app share the same resources and connections, and they're all scaled together.
+
+After the resources are created, you can deploy your project files to the new function app. To learn more, see [Deployment methods](functions-deployment-technologies.md#deployment-methods).
+
+Every function app requires your PowerShell scripts to create the following resources:
+
+| Resource | cmdlet | Description |
+| | | |
+| Resource group | [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a [resource group](../azure-resource-manager/management/overview.md) in which you'll create your function app. |
+| Storage account | [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) | Creates a [storage account](../storage/common/storage-account-create.md) used by your function app. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](storage-considerations.md#storage-account-requirements). |
+| App Service plan | [New-AzFunctionAppPlan](/powershell/module/az.functions/new-azfunctionappplan) | Explicitly creates a hosting plan, which defines how resources are allocated to your function app. Used only when hosting in a Premium or Dedicated plan. You won't use this cmdlet when hosting in a serverless [Consumption plan](consumption-plan.md), since Consumption plans are created when you run `New-AzFunctionApp`. For more information, see [Azure Functions hosting options](functions-scale.md). |
+| Function app | [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) | Creates the function app using the required resources. The `-Name` parameter must be a globally unique name across all of Azure App Service. Valid characters in `-Name` are `a-z` (case insensitive), `0-9`, and `-`. Most examples create a function app that supports C# functions. You can change the language by using the `-Runtime` parameter, with supported values of `DotNet`, `Java`, `Node`, `PowerShell`, and `Python`. Use the `-RuntimeVersion` to choose a [specific language version](supported-languages.md#languages-by-runtime-version). |
+
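As a minimal sketch of how the cmdlets in the table fit together for a serverless Consumption plan (every name and the region below are placeholder assumptions, not values from the article):

```powershell
# Placeholder values. Storage account names must be 3-24 lowercase letters
# and numbers; the function app name must be globally unique.
$location = 'eastus'
$rg       = 'MyFunctionRG'
$storage  = 'myfuncstorage123'
$app      = 'my-unique-func-app'

New-AzResourceGroup -Name $rg -Location $location
New-AzStorageAccount -ResourceGroupName $rg -Name $storage `
    -SkuName Standard_LRS -Location $location
# No New-AzFunctionAppPlan call here: passing -Location without a plan
# creates the app in a Consumption plan.
New-AzFunctionApp -Name $app -ResourceGroupName $rg -StorageAccountName $storage `
    -Runtime PowerShell -FunctionsVersion 4 -Location $location
```

Swap the `-Runtime` value (for example `DotNet`, `Node`, or `Python`) to change the language stack, per the table above.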
+This article contains the following examples:
+
+* [Create a serverless function app for C#](#create-a-serverless-function-app-for-c)
+* [Create a serverless function app for Python](#create-a-serverless-function-app-for-python)
+* [Create a scalable function app in a Premium plan](#create-a-scalable-function-app-in-a-premium-plan)
+* [Create a function app in a Dedicated plan](#create-a-function-app-in-a-dedicated-plan)
+* [Create a function app with a named Storage connection](#create-a-function-app-with-a-named-storage-connection)
+* [Create a function app with an Azure Cosmos DB connection](#create-a-function-app-with-an-azure-cosmos-db-connection)
+* [Create a function app with continuous deployment](#create-a-function-app-with-continuous-deployment)
+* [Create a serverless Python function app and mount file share](#create-a-serverless-python-function-app-and-mount-file-share)
+++
+## Create a serverless function app for C#
+
+The following script creates a serverless C# function app in the default Consumption plan:
++
+## Create a serverless function app for Python
+
+The following script creates a serverless Python function app in a Consumption plan:
++
+## Create a scalable function app in a Premium plan
+
+The following script creates a C# function app in an Elastic Premium plan that supports [dynamic scale](event-driven-scaling.md):
++
+## Create a function app in a Dedicated plan
+
+The following script creates a function app hosted in a Dedicated plan, which isn't scaled dynamically by Functions:
++
+## Create a function app with a named Storage connection
+
+The following script creates a function app with a named Storage connection in application settings:
++
+## Create a function app with an Azure Cosmos DB connection
+
+The following script creates a function app and a connected Azure Cosmos DB account:
++
+## Create a function app with continuous deployment
+
+The following script creates a function app that has continuous deployment configured to publish from a public GitHub repository:
++
+## Create a serverless Python function app and mount file share
+
+The following script creates a Python function app on Linux and creates and mounts an external Azure Files share:
++
+Mounted file shares are only supported on Linux. For more information, see [Mount file shares](storage-considerations.md#mount-file-shares).
++
+## Next steps
+
+For more information on Azure PowerShell, see [Azure PowerShell documentation](/powershell/azure).
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Because Functions use Azure Files during parts of the dynamic scale-out proc
_This functionality is current only available when running on Linux._
-You can mount existing Azure Files shares to your Linux function apps. By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions. You can use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az-webapp-config-storage-account-add) command to mount an existing share to your Linux function app.
+You can mount existing Azure Files shares to your Linux function apps. By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions. You can use the following command to mount an existing share to your Linux function app.
+
+# [Azure CLI](#tab/azure-cli)
+
+[`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az-webapp-config-storage-account-add)
In this command, `share-name` is the name of the existing Azure Files share, and `custom-id` can be any string that uniquely defines the share when mounted to the function app. Also, `mount-path` is the path from which the share is accessed in your function app. `mount-path` must be in the format `/dir-name`, and it can't start with `/home`. For a complete example, see the scripts in [Create a Python function app and mount an Azure Files share](scripts/functions-cli-mount-files-storage-linux.md).
+# [Azure PowerShell](#tab/azure-powershell)
+
+[`New-AzWebAppAzureStoragePath`](/powershell/module/az.websites/new-azwebappazurestoragepath)
+
+In this command, `-ShareName` is the name of the existing Azure Files share, and `-MountPath` is the path from which the share is accessed in your function app. `-MountPath` must be in the format `/dir-name`, and it can't start with `/home`. After you create the path, use the `-AzureStoragePath` parameter of [`Set-AzWebApp`](/powershell/module/az.websites/set-azwebapp) to add the share to the app.
+
+For a complete example, see the script in [Create a serverless Python function app and mount file share](create-resources-azure-powershell.md#create-a-serverless-python-function-app-and-mount-file-share).
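As a sketch of how the two PowerShell steps fit together (the resource names below are placeholder assumptions; the app, storage account, and share must already exist):

```powershell
# Placeholder names throughout. Look up the storage key, describe the mount,
# then apply it to the function app.
$key  = (Get-AzStorageAccountKey -ResourceGroupName MyRG -Name mystorageacct)[0].Value
$path = New-AzWebAppAzureStoragePath -Name models -AccountName mystorageacct `
    -Type AzureFiles -ShareName myshare -AccessKey $key -MountPath /models
Set-AzWebApp -ResourceGroupName MyRG -Name my-func-app -AzureStoragePath $path
```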
+
+---
+
Currently, only a `storage-type` of `AzureFiles` is supported. You can only mount five shares to a given function app. Mounting a file share may increase the cold start time by at least 200-300 ms, or even more when the storage account is in a different region. The mounted share is available to your function code at the `mount-path` specified. For example, when `mount-path` is `/path/to/mount`, you can access the target directory by file system APIs, as in the following Python example:
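The Python example referenced above was elided here; it can be sketched as follows, where `/path/to/mount` is an assumed mount path and the helper name `list_share` is illustrative:

```python
import os

def list_share(mount_path):
    """Return the entries visible at the mounted share path, or an empty list if not mounted."""
    if not os.path.isdir(mount_path):
        return []
    return sorted(os.listdir(mount_path))

# "/path/to/mount" stands in for the mount-path configured when the share was added.
print(list_share("/path/to/mount"))
```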
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables provide a quick comparison of the telemetry agents for Wind
### Windows agents
-| | Azure Monitor agent | Diagnostics<br>extension (WAD) | Log Analytics<br>agent | Dependency<br>agent |
-|:|:-|:|:|:|
-| **Environments supported** | Azure<br><br>Other cloud (Azure Arc)<br><br>On-premises (Azure Arc)<br><br>[Windows Client OS (preview)](./azure-monitor-agent-windows-client.md) | Azure | Azure<br><br>Other cloud<br><br>On-premises | Azure<br><br>Other cloud<br><br>On-premises |
-| **Agent requirements** | None | None | None | Requires Log Analytics agent |
-| **Data collected** | Event logs<br><br>Performance<br><br>File-based logs (preview)<br> | Event logs<br><br>ETW events<br><br>Performance<br><br>File-based logs<br><br>IIS logs<br><br>.NET app logs<br><br>Crash dumps<br><br>Agent diagnostics logs | Event logs<br><br>Performance<br><br>File-based logs<br><br>IIS logs<br><br>Insights and solutions<br><br>Other services | Process dependencies<br><br>Network connection metrics |
-| **Data sent to** | Azure Monitor Logs<br><br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br><br>Azure Monitor Metrics<br><br>Event hub | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
-| **Services and**<br>**features**<br>**supported** | Log Analytics<br><br>Metrics Explorer<br><br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | Metrics Explorer | VM insights<br><br>Log Analytics<br><br>Azure Automation<br><br>Microsoft Defender for Cloud<br><br>Microsoft Sentinel | VM insights<br><br>Service Map |
+| | Azure Monitor agent | Diagnostics<br>extension (WAD) | Log Analytics<br>agent |
+|:---|:---|:---|:---|
+| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc)<br>[Windows Client OS (preview)](./azure-monitor-agent-windows-client.md) | Azure | Azure<br>Other cloud<br>On-premises |
+| **Agent requirements** | None | None | None |
+| **Data collected** | Event Logs<br>Performance<br>File based logs (preview)<br> | Event Logs<br>ETW events<br>Performance<br>File based logs<br>IIS logs<br>.NET app logs<br>Crash dumps<br>Agent diagnostics logs | Event Logs<br>Performance<br>File based logs<br>IIS logs<br>Insights and solutions<br>Other services |
+| **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br>Azure Monitor Metrics<br>Event Hub | Azure Monitor Logs |
+| **Services and**<br>**features**<br>**supported** | Log Analytics<br>Metrics explorer<br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | Metrics explorer | VM insights<br>Log Analytics<br>Azure Automation<br>Microsoft Defender for Cloud<br>Microsoft Sentinel |
### Linux agents
-| | Azure Monitor agent | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent | Dependency<br>agent |
-|:|:-|:|:|:|:|
-| **Environments supported** | Azure<br><br>Other cloud (Azure Arc)<br><br>On-premises (Azure Arc) | Azure | Azure<br><br>Other cloud<br><br>On-premises | Azure<br><br>Other cloud<br><br>On-premises | Azure<br><br>Other cloud<br><br>On-premises |
-| **Agent requirements** | None | None | None | None | Requires Log Analytics agent |
-| **Data collected** | Syslog<br><br>Performance<br><br>File-based logs (preview)<br> | Syslog<br><br>Performance | Performance | Syslog<br><br>Performance| Process dependencies<br><br>Network connection metrics |
-| **Data sent to** | Azure Monitor Logs<br><br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br><br>Event hub | Azure Monitor Metrics | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
-| **Services and**<br>**features**<br>**supported** | Log Analytics<br><br>Metrics Explorer<br><br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | | Metrics Explorer | VM insights<br><br>Log Analytics<br><br>Azure Automation<br><br>Microsoft Defender for Cloud<br><br>Microsoft Sentinel | VM insights<br><br>Service Map |
+| | Azure Monitor agent | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent |
+| :-- | :-- | :-- | :-- | :-- |
+| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
+| **Agent requirements** | None | None | None | None |
+| **Data collected** | Syslog<br>Performance<br>File based logs (preview)<br> | Syslog<br>Performance | Performance | Syslog<br>Performance |
+| **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br>Event Hub | Azure Monitor Metrics | Azure Monitor Logs |
+| **Services and**<br>**features**<br>**supported** | Log Analytics<br>Metrics explorer<br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | | Metrics explorer | VM insights<br>Log Analytics<br>Azure Automation<br>Microsoft Defender for Cloud<br>Microsoft Sentinel |
<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
Use the Telegraf agent if you need to:
* Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [Metrics Explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Linux only).
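For illustration, a minimal `telegraf.conf` sketch for this scenario might look like the following; the input choices and `namespace_prefix` value are assumptions, and Telegraf's `azure_monitor` output relies on the VM's managed identity and instance metadata by default:

```toml
# Collect basic host metrics.
[[inputs.cpu]]
  totalcpu = true

[[inputs.mem]]

# Send aggregated metrics to Azure Monitor Metrics.
[[outputs.azure_monitor]]
  # Region and resource ID are auto-detected from the instance metadata service.
  namespace_prefix = "Telegraf/"
```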
-## Dependency agent
-
-The Dependency agent collects discovered data about processes running on the machine and external process dependencies.
-
-Use the Dependency agent if you need to:
-
-* Use the Map feature [VM insights](../vm/vminsights-overview.md) or the [Service Map](../vm/service-map.md) solution.
-
-Consider the following factors when you use the Dependency agent:
-
-- The Dependency agent requires the Log Analytics agent to be installed on the same machine.
-- On Linux computers, the Log Analytics agent must be installed before the Azure Diagnostics extension.
-- On both the Windows and Linux versions of the Dependency agent, data collection is done by using a user-space service and a kernel driver.
-
## Virtual machine extensions
-The [Azure Monitor agent](./azure-monitor-agent-manage.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) installs the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) installs the Dependency agent on Azure virtual machines. These are the same agents previously described, but they allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
+The [Azure Monitor agent](./azure-monitor-agent-manage.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) installs the Log Analytics agent on Azure virtual machines. These are the same agents described above, but the extensions allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
On hybrid machines, use [Azure Arc-enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Azure Monitor agent, Log Analytics, and Azure Monitor Dependency VM extensions.
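For example, a hedged sketch of deploying the Azure Monitor agent extension to a Linux VM with the Azure CLI (resource names are placeholders):

```azurecli
# Sketch: names are placeholders; requires an authenticated Azure CLI session.
az vm extension set \
  --resource-group my-resource-group \
  --vm-name my-vm \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --enable-auto-upgrade true
```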
The following tables list the operating systems that are supported by the Azure
### Windows
-| Operating system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
+| Operating system | Azure Monitor agent | Log Analytics agent | Diagnostics extension |
|:---|:---:|:---:|:---:|
-| Windows Server 2022 | X | | | |
-| Windows Server 2022 Core | X | | | |
-| Windows Server 2019 | X | X | X | X |
-| Windows Server 2019 Core | X | | | |
-| Windows Server 2016 | X | X | X | X |
-| Windows Server 2016 Core | X | | | X |
-| Windows Server 2012 R2 | X | X | X | X |
-| Windows Server 2012 | X | X | X | X |
-| Windows Server 2008 R2 SP1 | X | X | X | X |
-| Windows Server 2008 R2 | | | | X |
-| Windows Server 2008 SP2 | | X | | |
-| Windows 11 client OS | X<sup>2</sup> | | | |
-| Windows 10 1803 (RS4) and higher | X<sup>2</sup> | | | |
-| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only<sup>1</sup>) | X | X | X | X |
-| Windows 8 Enterprise and Pro<br>(Server scenarios only<sup>1</sup>) | | X | X | |
-| Windows 7 SP1<br>(Server scenarios only<sup>1</sup>) | | X | X | |
-| Azure Stack HCI | | X | | |
+| Windows Server 2022 | X | | |
+| Windows Server 2022 Core | X | | |
+| Windows Server 2019 | X | X | X |
+| Windows Server 2019 Core | X | | |
+| Windows Server 2016 | X | X | X |
+| Windows Server 2016 Core | X | | X |
+| Windows Server 2012 R2 | X | X | X |
+| Windows Server 2012 | X | X | X |
+| Windows Server 2008 R2 SP1 | X | X | X |
+| Windows Server 2008 R2 | | | X |
+| Windows Server 2008 SP2 | | X | |
+| Windows 11 client OS | X<sup>2</sup> | | |
+| Windows 10 1803 (RS4) and higher | X<sup>2</sup> | | |
+| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only<sup>1</sup>) | X | X | X |
+| Windows 8 Enterprise and Pro<br>(Server scenarios only<sup>1</sup>) | | X | |
+| Windows 7 SP1<br>(Server scenarios only<sup>1</sup>) | | X | |
+| Azure Stack HCI | | X | |
<sup>1</sup> Running the OS on server hardware, for example, machines that are always connected, always turned on, and not running other workloads (PC, office, browser)<br>
<sup>2</sup> Using the Azure Monitor agent [client installer (preview)](./azure-monitor-agent-windows-client.md)

### Linux
-> [!NOTE]
-> For Dependency agent, check for supported kernel versions. For more information, see the "Dependency agent Linux kernel support" table.
-
-| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Dependency agent | Diagnostics extension <sup></sup>|
+| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Diagnostics extension <sup>2</sup>|
|:---|:---:|:---:|:---:|
-| AlmaLinux | X | X | | |
-| Amazon Linux 2017.09 | | X | | |
-| Amazon Linux 2 | | X | | |
-| CentOS Linux 8 | X <sup>2</sup> | X | X | |
-| CentOS Linux 7 | X | X | X | X |
-| CentOS Linux 6 | | X | | |
-| CentOS Linux 6.5+ | | X | X | X |
-| Debian 11 <sup>1</sup> | X | | | |
-| Debian 10 <sup>1</sup> | X | | | |
-| Debian 9 | X | X | X | X |
-| Debian 8 | | X | X | |
-| Debian 7 | | | | X |
-| OpenSUSE 13.1+ | | | | X |
-| Oracle Linux 8 | X <sup>2</sup> | X | | |
-| Oracle Linux 7 | X | X | | X |
-| Oracle Linux 6 | | X | | |
-| Oracle Linux 6.4+ | | X | | X |
-| Red Hat Enterprise Linux Server 8.5, 8.6 | X | X | | |
-| Red Hat Enterprise Linux Server 8, 8.1, 8.2, 8.3, 8.4 | X <sup>2</sup> | X | X | |
-| Red Hat Enterprise Linux Server 7 | X | X | X | X |
-| Red Hat Enterprise Linux Server 6 | | X | X | |
-| Red Hat Enterprise Linux Server 6.7+ | | X | X | X |
-| Rocky Linux | X | X | | |
-| SUSE Linux Enterprise Server 15.2 | X <sup>2</sup> | | | |
-| SUSE Linux Enterprise Server 15.1 | X <sup>2</sup> | X | | |
-| SUSE Linux Enterprise Server 15 SP1 | X | X | X | |
-| SUSE Linux Enterprise Server 15 | X | X | X | |
-| SUSE Linux Enterprise Server 12 SP5 | X | X | X | X |
-| SUSE Linux Enterprise Server 12 | X | X | X | X |
-| Ubuntu 22.04 LTS | X | | | |
-| Ubuntu 20.04 LTS | X | X | X | X <sup>3</sup> |
-| Ubuntu 18.04 LTS | X | X | X | X |
-| Ubuntu 16.04 LTS | X | X | X | X |
-| Ubuntu 14.04 LTS | | X | | X |
+| AlmaLinux | X | X | |
+| Amazon Linux 2017.09 | | X | |
+| Amazon Linux 2 | | X | |
+| CentOS Linux 8 | X <sup>3</sup> | X | |
+| CentOS Linux 7 | X | X | X |
+| CentOS Linux 6 | | X | |
+| CentOS Linux 6.5+ | | X | X |
+| Debian 11 <sup>1</sup> | X | | |
+| Debian 10 <sup>1</sup> | X | | |
+| Debian 9 | X | X | X |
+| Debian 8 | | X | |
+| Debian 7 | | | X |
+| OpenSUSE 13.1+ | | | X |
+| Oracle Linux 8 | X <sup>3</sup> | X | |
+| Oracle Linux 7 | X | X | X |
+| Oracle Linux 6 | | X | |
+| Oracle Linux 6.4+ | | X | X |
+| Red Hat Enterprise Linux Server 8.5, 8.6 | X | X | |
+| Red Hat Enterprise Linux Server 8, 8.1, 8.2, 8.3, 8.4 | X <sup>3</sup> | X | |
+| Red Hat Enterprise Linux Server 7 | X | X | X |
+| Red Hat Enterprise Linux Server 6 | | X | |
+| Red Hat Enterprise Linux Server 6.7+ | | X | X |
+| Rocky Linux | X | X | |
+| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | |
+| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | |
+| SUSE Linux Enterprise Server 15 SP1 | X | X | |
+| SUSE Linux Enterprise Server 15 | X | X | |
+| SUSE Linux Enterprise Server 12 SP5 | X | X | X |
+| SUSE Linux Enterprise Server 12 | X | X | X |
+| Ubuntu 22.04 LTS | X | | |
+| Ubuntu 20.04 LTS | X | X | X <sup>4</sup> |
+| Ubuntu 18.04 LTS | X | X | X |
+| Ubuntu 16.04 LTS | X | X | X |
+| Ubuntu 14.04 LTS | | X | X |
<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br>
<sup>2</sup> Known issue collecting Syslog events in versions prior to 1.9.0.<br>
<sup>3</sup> Not all kernel versions are supported. Check the supported kernel versions in the following table.
-#### Dependency agent Linux kernel support
-
-Because the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.*, the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
-
-| Distribution | OS version | Kernel version |
-|:|:|:|
-| Red Hat Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_644.18.0-348.\*el8.x86_64 |
-| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
-| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
-| | 8.2 | 4.18.0-193.\*el8_2.x86_64 |
-| | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
-| | 8.0 | 4.18.0-80.\*el8.x86_64<br>4.18.0-80.\*el8_0.x86_64 |
-| Red Hat Linux 7 | 7.9 | 3.10.0-1160 |
-| | 7.8 | 3.10.0-1136 |
-| | 7.7 | 3.10.0-1062 |
-| | 7.6 | 3.10.0-957 |
-| | 7.5 | 3.10.0-862 |
-| | 7.4 | 3.10.0-693 |
-| Red Hat Linux 6 | 6.10 | 2.6.32-754 |
-| | 6.9 | 2.6.32-696 |
-| CentOS Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_644.18.0-348.\*el8.x86_64 |
-| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
-| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
-| | 8.2 | 4.18.0-193.\*el8_2.x86_64 |
-| | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
-| | 8.0 | 4.18.0-80.\*el8.x86_64<br>4.18.0-80.\*el8_0.x86_64 |
-| CentOS Linux 7 | 7.9 | 3.10.0-1160 |
-| | 7.8 | 3.10.0-1136 |
-| | 7.7 | 3.10.0-1062 |
-| CentOS Linux 6 | 6.10 | 2.6.32-754.3.5<br>2.6.32-696.30.1 |
-| | 6.9 | 2.6.32-696.30.1<br>2.6.32-696.18.7 |
-| Ubuntu Server | 20.04 | 5.8<br>5.4\* |
-| | 18.04 | 5.3.0-1020<br>5.0 (includes Azure-tuned kernel)<br>4.18*<br>4.15* |
-| | 16.04.3 | 4.15.\* |
-| | 16.04 | 4.13.\*<br>4.11.\*<br>4.10.\*<br>4.8.\*<br>4.4.\* |
-| SUSE Linux 12 Enterprise Server | 12 SP5 | 4.12.14-122.\*-default, 4.12.14-16.\*-azure|
-| | 12 SP4 | 4.12.\* (includes Azure-tuned kernel) |
-| | 12 SP3 | 4.4.\* |
-| | 12 SP2 | 4.4.\* |
-| SUSE Linux 15 Enterprise Server | 15 SP1 | 4.12.14-197.\*-default, 4.12.14-8.\*-azure |
-| | 15 | 4.12.14-150.\*-default |
-| Debian | 9 | 4.9 |
-
## Next steps

For more information on each of the agents, see the following articles:
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 7/8/2022
Last updated : 7/21/2022
We strongly recommend updating to the latest version at all times, or opting in
## Version details

| Release Date | Release notes | Windows | Linux |
|:---|:---|:---|:---|
+| June 2022 | Bug fixes for user-assigned identity support, and reliability improvements | 1.6.0.0 | Coming soon |
| May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li><li>Linux support for Debian 11 distro</li><li>Fixed issue to list mount paths instead of device names for Linux disk metrics</li></ul> | 1.5.0.0 | 1.21.0 | | April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows and Linux</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li><li>Fixed Linux CEF syslog forwarding for Sentinel</li><li>Removed 'error' message for Azure MSI token retrieval failure on Arc to show as 'Info' instead</li><li>Support added for Ubuntu 22.04, RHEL 8.5, 8.6, AlmaLinux and RockyLinux distros</li></ul> | 1.4.1.0<sup>Hotfix</sup> | 1.19.3 | | March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 |
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
There are multiple methods to install the Log Analytics agent and connect your m
### Azure virtual machine

-- Use [VM insights](../vm/vminsights-enable-overview.md) to install the agent for a [single machine by using the Azure portal](../vm/vminsights-enable-portal.md) or for [multiple machines at scale](../vm/vminsights-enable-policy.md). This process will install the Log Analytics agent and [Dependency agent](agents-overview.md#dependency-agent).
-- Install the Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) with the Azure portal, the Azure CLI, Azure PowerShell, or an Azure Resource Manager template.
-- Use [Microsoft Defender for Cloud to provision the Log Analytics agent](../../security-center/security-center-enable-data-collection.md) on all supported Azure VMs and any new ones that are created if you've enabled it to monitor for security vulnerabilities and threats.
-- Install individual Azure virtual machines [manually from the Azure portal](../vm/monitor-virtual-machine.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+- Use [VM insights](../vm/vminsights-enable-overview.md) to install the agent for a [single machine using the Azure portal](../vm/vminsights-enable-portal.md) or for [multiple machines at scale](../vm/vminsights-enable-policy.md). This installs the Log Analytics agent and [Dependency agent](../vm/vminsights-dependency-agent-maintenance.md).
+- The Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) can be installed with the Azure portal, Azure CLI, Azure PowerShell, or an Azure Resource Manager template.
+- [Microsoft Defender for Cloud can provision the Log Analytics agent](../../security-center/security-center-enable-data-collection.md) on all supported Azure VMs and any new ones that are created if you enable it to monitor for security vulnerabilities and threats.
+- Install for individual Azure virtual machines [manually from the Azure portal](../vm/monitor-virtual-machine.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
- Connect the machine to a workspace from the **Virtual machines** option in the **Log Analytics workspaces** menu in the Azure portal.

### Windows virtual machine on-premises or in another cloud
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
When an alert rule is first created, the thresholds appearing in the chart are c
If you have a new resource or missing metric data, Dynamic Thresholds won't trigger alerts until three days have passed and at least 30 samples of metric data are available, to ensure accurate thresholds. For existing resources with sufficient metric data, Dynamic Thresholds can trigger alerts immediately.
+## How do prolonged outages affect the calculated thresholds?
+
+Dynamic Thresholds automatically recognizes prolonged outages and removes them from the threshold training data. The result is thresholds that fit the data and can detect service issues with the same sensitivity as before the outage occurred.
+
## Dynamic Thresholds best practices

Dynamic Thresholds can be applied to most platform and custom metrics in Azure Monitor, and it has also been tuned for the common application and infrastructure metrics.
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
description: Tutorial on creating near-real time metric alerts on popular log an
Previously updated : 2/23/2022
Last updated : 7/24/2022
To achieve the same, one can use the sample Azure Resource Manager Template belo
  }
},
"variables": {
- "convertRuleTag": "hidden-link:/subscriptions/1234-56789-1234-567a/resourceGroups/resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/workspaceName",
"convertRuleSourceWorkspace": {
  "SourceId": "/subscriptions/1234-56789-1234-567a/resourceGroups/resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/workspaceName"
}
To achieve the same, one can use the sample Azure Resource Manager Template belo
"type": "Microsoft.Insights/scheduledQueryRules",
"apiVersion": "2018-04-16",
"location": "[parameters('convertRuleRegion')]",
- "tags": {
- "[variables('convertRuleTag')]": "Resource"
- },
"properties": {
  "description": "[parameters('convertRuleDescription')]",
  "enabled": "[parameters('convertRuleStatus')]",
To achieve the same, one can use the sample Azure Resource Manager Template belo
  }
},
"variables": {
- "convertRuleTag": "hidden-link:/subscriptions/1234-56789-1234-567a/resourceGroups/resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/workspaceName",
"convertRuleSourceWorkspace": {
  "SourceId": "/subscriptions/1234-56789-1234-567a/resourceGroups/resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/workspaceName"
}
To achieve the same, one can use the sample Azure Resource Manager Template belo
"type": "Microsoft.Insights/scheduledQueryRules",
"apiVersion": "2018-04-16",
"location": "[parameters('convertRuleRegion')]",
- "tags": {
- "[variables('convertRuleTag')]": "Resource"
- },
"properties": {
  "description": "[parameters('convertRuleDescription')]",
  "enabled": "[parameters('convertRuleStatus')]",
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.3.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.3.0/applicationinsights-agent-3.3.0.jar) file.
+Download the [applicationinsights-agent-3.3.1.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.3.1/applicationinsights-agent-3.3.1.jar) file.
> [!WARNING]
>
-> If you're upgrading from 3.2.x to 3.3.0:
+> If you're upgrading from 3.2.x to 3.3.1:
>
-> - Starting from 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
+> - Starting from 3.3.1, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
> - Exception records are no longer recorded for failed dependencies; they are only recorded for failed requests.
>
> If you're upgrading from 3.1.x:
Download the [applicationinsights-agent-3.3.0.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:"path/to/applicationinsights-agent-3.3.0.jar"` to your application's JVM args.
+Add `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` to your application's JVM args.
> [!TIP]
> For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:"path/to/applicationinsights-agent-3.3.0.jar"` to your applicati
APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview>
```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.3.0.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.3.1.jar` with the following content:
```json
{
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
You can enable the Azure Monitor Application Insights agent for Java by adding a
### Usual case
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.3.0.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.3.0.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.3.1.jar" -jar <myapp.jar>
```

### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.3.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.3.0.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.3.1.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.3.0.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.3.0.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.3.1.jar" -jar <myapp.jar>
```

## Programmatic configuration
To use the programmatic configuration and attach the Application Insights agent
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.3.0</version>
+ <version>3.3.1</version>
</dependency>
```
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Read the Spring Boot documentation [here](../app/java-in-process-agent.md).
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file:

```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.0.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.1.jar"
```

### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.0.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content:

```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.0.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.1.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content:

```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.0.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.1.jar
```

Quotes aren't necessary, but if you want to include them, the proper placement is:

```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.0.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.1.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7

### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java
...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.3.0.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.3.1.jar -Xms1303m -Xmx1303m ..."
...
```

### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml
...
Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the existing `jv
<jvm-options>
  <option value="-server"/>
  <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.3.0.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.3.1.jar"/>
  <option value="-XX:MetaspaceSize=96m"/>
  <option value="-XX:MaxMetaspaceSize=256m"/>
</jvm-options>
Add these lines to `start.ini`
```
--exec
--javaagent:path/to/applicationinsights-agent-3.3.0.jar
+-javaagent:path/to/applicationinsights-agent-3.3.1.jar
```

## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.3.1.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml
...
<java-config ...>
  <!--Edit the JVM options here-->
  <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.3.0.jar>
+ -javaagent:path/to/applicationinsights-agent-3.3.1.jar>
  </jvm-options>
...
</java-config>
Java and Process Management > Process definition > Java Virtual Machine
```
In "Generic JVM arguments" add the following JVM argument:
```
--javaagent:path/to/applicationinsights-agent-3.3.0.jar
+-javaagent:path/to/applicationinsights-agent-3.3.1.jar
```
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line:
```
--javaagent:path/to/applicationinsights-agent-3.3.0.jar
+-javaagent:path/to/applicationinsights-agent-3.3.1.jar
```

## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.3.0.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.3.1.jar`.
You can specify your own configuration file path using either
* `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or
* `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.0.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.1.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.0.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.1.jar` is located.
```json
{
Instrumentation key overrides allow you to override the [default instrumentation
## Cloud role name overrides (preview)
-This feature is in preview, starting from 3.3.0.
+This feature is in preview, starting from 3.3.1.
Cloud role name overrides allow you to override the [default cloud role name](#cloud-role-name), for example:
* Set one cloud role name for one http path prefix `/myapp1`.
These are the valid `level` values that you can specify in the `applicationinsig
### LoggingLevel
-Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is aleady captured in the `SeverityLevel` field.
+Starting from version 3.3.1, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field.
If needed, you can re-enable the previous behavior:
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from version 3.3.0, you can capture request and response headers on your server (request) telemetry:
+Starting from version 3.3.1, you can capture request and response headers on your server (request) telemetry:
```json
{
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.3.0, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.3.1, you can change this behavior to capture them as success if you prefer:
```json
{
Starting from version 3.2.0, the following preview instrumentations can be enabl
```
> [!NOTE]
> Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.3.0
+> Vertx HTTP Library instrumentation is available starting from version 3.3.1
## Metric interval
When sending telemetry to the Application Insights service fails, Application In
to disk and continue retrying from disk. The default limit for disk persistence is 50 Mb. If you have high telemetry volume, or need to be able to recover from
-longer network or ingestion service outages, you can increase this limit starting from version 3.3.0:
+longer network or ingestion service outages, you can increase this limit starting from version 3.3.1:
```json
{
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`.
`path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.3.0.jar` is located.
+`applicationinsights-agent-3.3.1.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
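For reference, a self-diagnostics block along these lines is a sketch built from the options named above (`level`, `path`, `maxSizeMb`); verify the exact field names against the full configuration article:

```json
{
  "selfDiagnostics": {
    "destination": "file+console",
    "level": "DEBUG",
    "file": {
      "path": "applicationinsights.log",
      "maxSizeMb": 5,
      "maxHistory": 1
    }
  }
}
```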
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
All data from the table will be exported unless limitations are specified. This
| AlertInfo | |
| AmlOnlineEndpointConsoleLog | |
| ApiManagementGatewayLogs | |
+| AppAvailabilityResults | |
+| AppBrowserTimings | |
| AppCenterError | |
+| AppDependencies | |
+| AppEvents | |
+| AppExceptions | |
+| AppMetrics | |
+| AppPageViews | |
+| AppPerformanceCounters | |
| AppPlatformSystemLogs | |
+| AppRequests | |
| AppServiceAppLogs | |
| AppServiceAuditLogs | |
| AppServiceConsoleLogs | |
| AppServiceFileAuditLogs | |
| AppServiceHTTPLogs | |
| AppServicePlatformLogs | |
+| AppSystemEvents | |
+| AppTraces | |
| ASimDnsActivityLogs | |
| ATCExpressRouteCircuitIpfix | |
| AuditLogs | |
azure-monitor Tables Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [AlertEvidence](/azure/azure-monitor/reference/tables/alertevidence) | |
| [AmlOnlineEndpointConsoleLog](/azure/azure-monitor/reference/tables/amlonlineendpointconsolelog) | |
| [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/apimanagementgatewaylogs) | |
+| [AppAvailabilityResults](/azure/azure-monitor/reference/tables/appavailabilityresults) | |
+| [AppBrowserTimings](/azure/azure-monitor/reference/tables/appbrowsertimings) | |
| [AppCenterError](/azure/azure-monitor/reference/tables/appcentererror) | |
+| [AppDependencies](/azure/azure-monitor/reference/tables/appdependencies) | |
+| [AppEvents](/azure/azure-monitor/reference/tables/appevents) | |
+| [AppExceptions](/azure/azure-monitor/reference/tables/appexceptions) | |
+| [AppMetrics](/azure/azure-monitor/reference/tables/appmetrics) | |
+| [AppPageViews](/azure/azure-monitor/reference/tables/apppageviews) | |
+| [AppPerformanceCounters](/azure/azure-monitor/reference/tables/appperformancecounters) | |
| [AppPlatformSystemLogs](/azure/azure-monitor/reference/tables/appplatformsystemlogs) | |
+| [AppRequests](/azure/azure-monitor/reference/tables/apprequests) | |
| [AppServiceAppLogs](/azure/azure-monitor/reference/tables/appserviceapplogs) | |
| [AppServiceAuditLogs](/azure/azure-monitor/reference/tables/appserviceauditlogs) | |
| [AppServiceConsoleLogs](/azure/azure-monitor/reference/tables/appserviceconsolelogs) | |
| [AppServiceFileAuditLogs](/azure/azure-monitor/reference/tables/appservicefileauditlogs) | |
| [AppServiceHTTPLogs](/azure/azure-monitor/reference/tables/appservicehttplogs) | |
| [AppServicePlatformLogs](/azure/azure-monitor/reference/tables/appserviceplatformlogs) | |
+| [AppSystemEvents](/azure/azure-monitor/reference/tables/appsystemevents) | |
+| [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | |
| [ATCExpressRouteCircuitIpfix](/azure/azure-monitor/reference/tables/atcexpressroutecircuitipfix) | |
| [AuditLogs](/azure/azure-monitor/reference/tables/auditlogs) | |
| [AutoscaleEvaluationsLog](/azure/azure-monitor/reference/tables/autoscaleevaluationslog) | |
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
To complete this tutorial, you need the following:
- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- PowerShell 7.2 or higher.
## Overview of tutorial
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
## Generate sample data
+> [!IMPORTANT]
+> You must be using PowerShell version 7.2 or higher.
+
The following PowerShell script both generates sample data to configure the custom table and sends sample data to the logs ingestion API to test the configuration.
1. Run the following PowerShell command, which adds a required assembly for the script.
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
Any monitoring tool, such as Azure Monitor, requires an agent installed on a mac
- [Azure Monitor agent](../agents/agents-overview.md#azure-monitor-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension.
- [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager.
-- [Dependency agent](../agents/agents-overview.md#dependency-agent): Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.
+- [Dependency agent](vminsights-dependency-agent-maintenance.md): Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.
- [Azure Diagnostic extension](../agents/agents-overview.md#azure-diagnostics-extension): Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage.

## Next steps
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Title: How to upgrade the VM insights Dependency agent
+ Title: VM Insights Dependency Agent
description: This article describes how to upgrade the VM insights Dependency agent using command-line, setup wizard, and other methods.
Last updated 04/16/2020
-# How to upgrade the VM insights Dependency agent
+# Dependency Agent
-After initial deployment of the VM insights Dependency agent, updates are released that include bug fixes or support of new features or functionality. This article helps you understand the methods available and how to perform the upgrade manually or through automation.
+The Dependency Agent collects data about processes running on the virtual machine and external process dependencies. Dependency Agent updates include bug fixes or support of new features or functionality. This article describes Dependency Agent requirements and how to upgrade Dependency Agent manually or through automation.
-## Upgrade options
+## Dependency Agent requirements
-The Dependency agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on the deployment scenario and environment the machine is running in. The following methods can be used to upgrade the agent.
+- The Dependency Agent requires the Log Analytics Agent to be installed on the same machine.
+- On both the Windows and Linux versions, the Dependency Agent collects data using a user-space service and a kernel driver.
+ - Dependency Agent supports the same [Windows versions Log Analytics Agent supports](/azure/azure-monitor/agents/agents-overview#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI.
+ - For Linux, see [Dependency Agent Linux support](#dependency-agent-linux-support).
+
+## Upgrade Dependency Agent
+
+You can upgrade the Dependency agent for Windows and Linux manually or automatically, depending on the deployment scenario and environment the machine is running in, using these methods:
|Environment |Installation method |Upgrade method |
|---|---|---|
The Dependency agent for Windows and Linux can be upgraded to the latest release
| Custom Azure VM images | Manual install of Dependency agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
| Non-Azure VMs | Manual install of Dependency agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
-## Upgrade Windows agent
+### Upgrade Windows agent
-To update the agent on a Windows VM to the latest version not installed using the Dependency agent VM extension, you either run from the Command Prompt, script or other automation solution, or by using the InstallDependencyAgent-Windows.exe Setup Wizard.
+Update the agent on a Windows VM from the command prompt, with a script or other automation solution, or by using the InstallDependencyAgent-Windows.exe Setup Wizard.
-You can download the latest version of the Windows agent from [here](https://aka.ms/dependencyagentwindows).
+[Download the latest version of the Windows agent](https://aka.ms/dependencyagentwindows).
-### Using the Setup Wizard
+#### Using the Setup Wizard
1. Sign on to the computer with an account that has administrative rights.
You can download the latest version of the Windows agent from [here](https://aka
3. Follow the **Dependency Agent Setup** wizard to uninstall the previous version of the dependency agent and then install the latest version.
-### From the command line
+#### From the command line
1. Sign on to the computer with an account that has administrative rights.
You can download the latest version of the Windows agent from [here](https://aka
3. To confirm the upgrade was successful, check the `install.log` for detailed setup information. The log directory is *%Programfiles%\Microsoft Dependency Agent\logs*.
-## Upgrade Linux agent
+### Upgrade Linux agent
Upgrading from prior versions of the Dependency agent on Linux is supported and uses the same command as a new installation.
You can download the latest version of the Linux agent from [here](https://aka.m
If the Dependency agent fails to start, check the logs for detailed error information. On Linux agents, the log directory is */var/opt/microsoft/dependency-agent/log*.
+## Dependency Agent Linux support
+
+Because the Dependency agent works at the kernel level, support also depends on the kernel version. As of Dependency agent version 9.10.*, the agent supports * kernels. The following table lists the major and minor Linux OS releases and supported kernel versions for the Dependency agent.
+
+| Distribution | OS version | Kernel version |
+|:|:|:|
+| Red Hat Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_64<br>4.18.0-348.\*el8.x86_64 |
+| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
+| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
+| | 8.2 | 4.18.0-193.\*el8_2.x86_64 |
+| | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
+| | 8.0 | 4.18.0-80.\*el8.x86_64<br>4.18.0-80.\*el8_0.x86_64 |
+| Red Hat Linux 7 | 7.9 | 3.10.0-1160 |
+| | 7.8 | 3.10.0-1136 |
+| | 7.7 | 3.10.0-1062 |
+| | 7.6 | 3.10.0-957 |
+| | 7.5 | 3.10.0-862 |
+| | 7.4 | 3.10.0-693 |
+| Red Hat Linux 6 | 6.10 | 2.6.32-754 |
+| | 6.9 | 2.6.32-696 |
+| CentOS Linux 8 | 8.5 | 4.18.0-348.\*el8_5.x86_64<br>4.18.0-348.\*el8.x86_64 |
+| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
+| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
+| | 8.2 | 4.18.0-193.\*el8_2.x86_64 |
+| | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
+| | 8.0 | 4.18.0-80.\*el8.x86_64<br>4.18.0-80.\*el8_0.x86_64 |
+| CentOS Linux 7 | 7.9 | 3.10.0-1160 |
+| | 7.8 | 3.10.0-1136 |
+| | 7.7 | 3.10.0-1062 |
+| CentOS Linux 6 | 6.10 | 2.6.32-754.3.5<br>2.6.32-696.30.1 |
+| | 6.9 | 2.6.32-696.30.1<br>2.6.32-696.18.7 |
+| Ubuntu Server | 20.04 | 5.8<br>5.4\* |
+| | 18.04 | 5.3.0-1020<br>5.0 (includes Azure-tuned kernel)<br>4.18*<br>4.15* |
+| | 16.04.3 | 4.15.\* |
+| | 16.04 | 4.13.\*<br>4.11.\*<br>4.10.\*<br>4.8.\*<br>4.4.\* |
+| SUSE Linux 12 Enterprise Server | 12 SP5 | 4.12.14-122.\*-default, 4.12.14-16.\*-azure|
+| | 12 SP4 | 4.12.\* (includes Azure-tuned kernel) |
+| | 12 SP3 | 4.4.\* |
+| | 12 SP2 | 4.4.\* |
+| SUSE Linux 15 Enterprise Server | 15 SP1 | 4.12.14-197.\*-default, 4.12.14-8.\*-azure |
+| | 15 | 4.12.14-150.\*-default |
+| Debian | 9 | 4.9 |
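As a rough illustration of how a support matrix like the one above might be checked programmatically, the following sketch tests a kernel release string against supported prefixes. The `SUPPORTED_PREFIXES` data is a hypothetical excerpt of a few rows from the table, not an exhaustive or official list, and `kernel_supported` is an invented helper, not part of any agent tooling:

```python
# Hypothetical helper: check a kernel release string (as reported by `uname -r`)
# against a small excerpt of the supported-kernel table above. Illustrative only.
SUPPORTED_PREFIXES = {
    "Red Hat Linux 7 / 7.9": ["3.10.0-1160"],
    "CentOS Linux 7 / 7.9": ["3.10.0-1160"],
    "Debian 9": ["4.9"],
}

def kernel_supported(release: str) -> bool:
    """Return True if `release` starts with any supported kernel prefix."""
    return any(
        release.startswith(prefix)
        for prefixes in SUPPORTED_PREFIXES.values()
        for prefix in prefixes
    )

print(kernel_supported("3.10.0-1160.45.1.el7.x86_64"))  # True
print(kernel_supported("5.15.0-generic"))               # False
```

The real table keys support on exact major/minor prefixes, which is why a simple prefix match is a reasonable approximation here.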
+ ## Next steps
-If you want to stop monitoring your VMs for a period of time or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
+If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
azure-web-pubsub Concept Billing Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-billing-model.md
The unit is an abstract concept of the capability of Azure Web PubSub service. E
The units are counted based on the number of units and the usage time (seconds) of the units, and billed daily.
-For example, imagine you have one Azure Web PubSub service instance with 5 units, scale up to 10 units from 10:00 AM to 16:00 PM and then scale back to 5 units after 16:00 PM. It turns out 5 units for 18 hours and 10 units for 6 hours in a specific day.
+For example, imagine you have one Azure Web PubSub service instance with five units, scale it up to 10 units from 10:00 to 16:00, and then scale back to five units after 16:00. Total usage for the day is 5 units for 18 hours and 10 units for 6 hours.
-> Usage of the units for billing = (5 units * 18 hours + 10 units * 6 hours) / 24 hours = 6.25 Unit/Day
+> Total units used for billing = (5 units * 18 hours + 10 units * 6 hours) / 24 hours = 6.25 units/day
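The daily unit arithmetic above can be sketched in a short script (illustrative only; this is not an official billing tool, and the variable names are invented):

```python
# Illustrative sketch of the daily unit calculation above.
# Each entry is (units, hours running at that unit count); hours must sum to 24.
usage = [(5, 18), (10, 6)]

unit_hours = sum(units * hours for units, hours in usage)
billed_units_per_day = unit_hours / 24  # averaged over the 24-hour day

print(billed_units_per_day)  # 6.25
```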
## How outbound traffic is counted with billing model
The outbound traffic is the messages sent out of Azure Web PubSub service. You c
- The messages broadcasted from service to receivers.
- The messages sent from the service to the upstream webhooks.
-- The resource logs with [live trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-with-live-trace-tool).
+- The resource logs with [live trace tool](./howto-troubleshoot-resource-logs.md#capture-resource-logs-by-using-the-live-trace-tool).
The inbound traffic is the messages sent to the Azure Web PubSub service.
The message count for billing purpose is an abstract concept and defined as the
### How traffic is counted with billing model
-For billing, only the outbound traffic is counted.
+Only the outbound traffic is counted for billing.
-For example, imagine you have an application with Azure Web PubSub service and Azure Functions. One user broadcast 4 KB data to 10 connections in a group. It turns out 4 KB for upstream from service to function and 40 KB from service broadcast to 10 connections.
+For example, imagine you have an application with Azure Web PubSub service and Azure Functions. One user broadcasts 4 KB of data to 10 connections in a group. The total is 4 KB upstream from the service to the function, plus 40 KB (10 connections * 4 KB each) for the service broadcast to the clients.
> Outbound traffic for billing = 4 KB (upstream traffic) + 4 KB * 10 (service broadcasting to clients traffic) = 44 KB > Equivalent message count = 44 KB / 2 KB = 22
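The outbound-traffic arithmetic above can be sketched as follows (illustrative only; not an official billing tool, and the variable names are invented):

```python
# Illustrative sketch of the outbound-traffic example above.
message_kb = 4          # size of the broadcast payload
receivers = 10          # connections in the group
billing_message_kb = 2  # one billed message corresponds to 2 KB of outbound traffic

upstream_kb = message_kb               # service -> Azure Function (upstream)
broadcast_kb = message_kb * receivers  # service -> 10 clients
outbound_kb = upstream_kb + broadcast_kb

print(outbound_kb)                       # 44
print(outbound_kb / billing_message_kb)  # 22.0
```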
-The Azure Web PubSub service also offers daily free quota of outbound traffic (message count) based on the usage of the units. The outbound traffic (message count) beyond the free quota is the additional outbound traffic (additional messages). Taking standard tier as example, the free quota is 2,000,000-KB outbound traffic (1,000,000 messages) per unit/day.
+The Azure Web PubSub service also offers a daily free quota of outbound traffic (message count) based on the usage of the units. Outbound traffic (message count) beyond the free quota is billed as additional outbound traffic (additional messages). Taking the standard tier as an example, the free quota is 2,000,000-KB outbound traffic (1,000,000 messages) per unit/day.
-Using the previous unit usage example, the application use 6.25 units per day that ensures the daily free quota as 12,500,000-KB outbound traffic (6.25 million messages). Imaging the daily outbound traffic is 30,000,000 KB (15 million messages), the additional messages will be 17,500,000-KB outbound traffic (8.75 million messages). As a result, you'll be billed with 6.25 standard units and 8.75 additional message units for the day.
+In the previous unit usage example, the application uses 6.25 units per day, which gives a daily free quota of 12,500,000-KB outbound traffic (6.25 million messages). Assuming the daily outbound traffic is 30,000,000 KB (15 million messages), the additional traffic is 17,500,000 KB (8.75 million messages). As a result, you'll be billed for 6.25 standard units and 8.75 additional message units for the day.
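The free-quota arithmetic above can be sketched as follows (illustrative only; not an official billing tool, and the variable names are invented):

```python
# Illustrative sketch of the daily free-quota example above (standard tier).
units_per_day = 6.25
free_kb_per_unit = 2_000_000   # 2,000,000 KB (1,000,000 messages) per unit/day
daily_outbound_kb = 30_000_000

free_quota_kb = units_per_day * free_kb_per_unit      # daily free quota in KB
extra_kb = max(0, daily_outbound_kb - free_quota_kb)  # traffic beyond the quota
extra_messages = extra_kb / 2                         # 2 KB per billed message

print(free_quota_kb, extra_kb, extra_messages)
```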
## Pricing
-The Azure Web PubSub service offers multiple tiers with different pricing. Once you understand how the number of units and size of outbound traffic (message count) are counted with billing model, you could learn more pricing details from [Azure Web PubSub service pricing](https://azure.microsoft.com/pricing/details/web-pubsub).
+The Azure Web PubSub service offers multiple tiers with different pricing. For more information about Web PubSub pricing, see [Azure Web PubSub service pricing](https://azure.microsoft.com/pricing/details/web-pubsub).
azure-web-pubsub Howto Troubleshoot Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-resource-logs.md
Title: How to troubleshoot with Azure Web PubSub service resource logs
-description: Learn how to get and troubleshoot with resource logs
+description: Learn what resource logs are and how to use them for troubleshooting common problems.
Previously updated : 04/01/2022
Last updated : 07/21/2022

# How to troubleshoot with resource logs
-This how-to guide shows you the options to get the resource logs and how to troubleshoot with them.
+This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis.
-## What's the resource logs?
+## What are resource logs?
-The resource logs provide richer view of connectivity, messaging and HTTP request information to the Azure Web PubSub service instance. They can be used for issue identification, connection tracking, message tracing, HTTP request tracing and analysis.
+There are three types of resource logs: *Connectivity*, *Messaging*, and *HTTP requests*.
+- **Connectivity** logs provide detailed information for Azure Web PubSub hub connections. For example, basic information (user ID, connection ID, and so on) and event information (connect, disconnect, and so on).
+- **Messaging** logs provide tracing information for the Azure Web PubSub hub messages received and sent via Azure Web PubSub service. For example, tracing ID and message type of the message.
+- **HTTP requests** logs provide tracing information for HTTP requests to the Azure Web PubSub service. For example, HTTP method and status code. Typically, the HTTP request is recorded when it arrives at or leaves the service.
-There are three types of logs: connectivity log, messaging log and HTTP request logs.
+## Capture resource logs by using the live trace tool
-### Connectivity logs
-
-Connectivity logs provide detailed information for Azure Web PubSub hub connections. For example, basic information (user ID, connection ID, and so on) and event information (connect, disconnect, and abort event, and so on). That's why the connectivity log is helpful to troubleshoot connection-related issues.
-
-### Messaging logs
-
-Messaging logs provide tracing information for the Azure Web PubSub hub messages received and sent via Azure Web PubSub service. For example, tracing ID and message type of the message. Typically the message is recorded when it arrives at or leaves from service. So messaging logs are helpful for troubleshooting message-related issues.
-
-### HTTP request logs
-
-Http request logs provide tracing information for HTTP requests to the Azure Web PubSub service. For example, HTTP method and status code. Typically the HTTP request is recorded when it arrives at or leave from service. So HTTP request logs are helpful for troubleshooting request-related issues.
-
-## Capture resource logs with live trace tool
-
-The Azure Web PubSub service live trace tool has ability to collect resource logs in real time, and is helpful to trace with customer's development environment. The live trace tool could capture connectivity logs, messaging logs and HTTP request logs.
-
-> [!NOTE]
-> The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
+The Azure Web PubSub service live trace tool has the ability to collect resource logs in real time, which is helpful for troubleshooting problems in your development environment. The live trace tool can capture connectivity logs, messaging logs, and HTTP request logs.
> [!NOTE]
-> The Azure Web PubSub service instance created as free tier has the daily limit of messages (outbound traffic).
+> The following considerations apply to using the live trace tool:
+> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
+> - The live trace tool does not currently support Azure Active Directory authentication. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**.
+> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit.
### Launch the live trace tool
-1. Go to the Azure portal.
-2. On the **Live trace settings** page of your Azure Web PubSub service instance, check **Enable Live Trace** if it's disabled.
-3. Check any log category you need.
-4. Click **Save** button and wait until the settings take effect.
-5. Click *Open Live Trace Tool*
+1. Go to the Azure portal and your Web PubSub service.
+1. On the left menu, under **Monitoring**, select **Live trace settings.**
+1. On the **Live trace settings** page, select **Enable Live Trace**.
+1. Choose the log categories to collect.
+1. Select **Save** and then wait until the settings take effect.
+1. Select **Open Live Trace Tool**.
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
-> Azure Active Directory access to live trace tool is not yet supported, please enable `Access Key` in `Keys` menu.
- ### Capture the resource logs
-The live trace tool provides some fundamental functionalities to help you capture the resource logs for troubleshooting.
+The live trace tool provides functionality to help you capture the resource logs for troubleshooting.
-* **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub instance with live trace tool.
+* **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub.
* **Clear**: Clear the captured real-time resource logs.
-* **Log filter**: The live trace tool allows you filtering the captured real-time resource logs with one specific key word. The common separator (for example, space, comma, semicolon, and so on) will be treated as part of the key word.
+* **Log filter**: The live trace tool lets you filter the captured real-time resource logs by a specific keyword. Common separators (for example, space, comma, and semicolon) are treated as part of the keyword.
* **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance.

:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/live-trace-tool-capture.png" alt-text="Screenshot of capturing resource logs with live trace tool.":::
The real-time resource logs captured by live trace tool contain detailed informa
| Name | Description |
|---|---|
| Time | Log event time |
-| Log Level | Log event level (Trace/Debug/Informational/Warning/Error) |
+| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] |
| Event Name | Operation name of the event |
-| Message | Detailed message of log event |
+| Message | Detailed message for the event |
| Exception | The run-time exception of Azure Web PubSub service |
-| Hub | User-defined Hub Name |
+| Hub | User-defined hub name |
| Connection ID | Identity of the connection |
-| User ID | Identity of the user |
-| IP | The IP address of client |
+| User ID | User identity |
+| IP | Client IP address |
| Route Template | The route template of the API |
| Http Method | The Http method (POST/GET/PUT/DELETE) |
| URL | The uniform resource locator |
| Trace ID | The unique identifier to the invocation |
| Status Code | The Http response code |
-| Duration | The duration between the request is received and processed |
+| Duration | The duration between receiving the request and processing the request |
| Headers | The additional information passed by the client and the server with an HTTP request or response |
-After the Azure Web PubSub service is GA, the live trace tool will also support to export the logs as a specific format and then help you share with others for troubleshooting.
- ## Capture resource logs with Azure Monitor ### How to enable resource logs
-Currently Azure Web PubSub supports integrate with [Azure Storage](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
+Currently Azure Web PubSub supports integration with [Azure Storage](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage).
1. Go to Azure portal.
-2. On **Diagnostic settings** page of your Azure Web PubSub service instance, click **+ Add diagnostic setting** link.
-3. In **Diagnostic setting name**, input the setting name.
-4. In **Category details**, select any log category you need.
-5. In **Destination details**, check **Archive to a storage account**.
-6. Click **Save** button to save the diagnostic setting.
---
+1. On **Diagnostic settings** page of your Azure Web PubSub service instance, select **+ Add diagnostic setting**.
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one":::
+1. In **Diagnostic setting name**, input the setting name.
+1. In **Category details**, select any log category you need.
+1. In **Destination details**, check **Archive to a storage account**.
+
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail":::
+1. Select **Save** to save the diagnostic setting.
> [!NOTE]
-> The storage account should be the same region to Azure Web PubSub service.
+> The storage account should be in the same region as Azure Web PubSub service.
-### Archive to Azure Storage Account
+### Archive to an Azure Storage Account
-Logs are stored in the storage account that configured in **Diagnostics setting** pane. A container named `insights-logs-<CATEGORY_NAME>` is created automatically to store resource logs. Inside the container, logs are stored in the file `resourceId=/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/XXXX/PROVIDERS/MICROSOFT.SIGNALRSERVICE/SIGNALR/XXX/y=YYYY/m=MM/d=DD/h=HH/m=00/PT1H.json`. Basically, the path is combined by `resource ID` and `Date Time`. The log files are split by `hour`. Therefore, the minutes always be `m=00`.
+Logs are stored in the storage account that's configured in the **Diagnostics setting** pane. A container named `insights-logs-<CATEGORY_NAME>` is created automatically to store resource logs. Inside the container, logs are stored in the file `resourceId=/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/XXXX/PROVIDERS/MICROSOFT.SIGNALRSERVICE/SIGNALR/XXX/y=YYYY/m=MM/d=DD/h=HH/m=00/PT1H.json`. The path is combined by `resource ID` and `Date Time`. The log files are split by `hour`. The minute value is always `m=00`.
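As a sketch only (not part of any Azure SDK; the function name is hypothetical), the following Python snippet shows how the documented blob path format breaks down into its resource ID and date/time parts:

```python
# Sketch: split the documented archive-log blob path into its parts.
# The path format is resourceId=<RESOURCE_ID>/y=YYYY/m=MM/d=DD/h=HH/m=00/PT1H.json,
# and the resource ID itself contains '/' characters, so split from the right.

def parse_archive_blob_path(path: str) -> dict:
    parts = path.split("/")
    # The last six segments are y=, m=, d=, h=, m=00, and the file name.
    year, month, day, hour, minute = (p.split("=", 1)[1] for p in parts[-6:-1])
    resource_id = "/".join(parts[:-6]).removeprefix("resourceId=")
    return {
        "resourceId": resource_id,
        "year": year, "month": month, "day": day,
        "hour": hour, "minute": minute,  # minute is always "00" (hourly files)
        "file": parts[-1],
    }
```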
All logs are stored in JavaScript Object Notation (JSON) format. Each entry has string fields that use the format described in the following sections.
resourceId | Resource ID of your Azure SignalR Service
location | Location of your Azure SignalR Service
category | Category of the log event
operationName | Operation name of the event
-callerIpAddress | IP address of your server/client
+callerIpAddress | IP address of your server or client
properties | Detailed properties related to this log event. For more detail, see the properties table below

**Properties Table**
The following code is an example of an archive log JSON string:
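A minimal illustrative entry, assuming the fields described above (all values and the `properties` content are placeholders, not real service output):

```json
{
  "time": "2022-07-25T01:06:20Z",
  "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/XXXX/PROVIDERS/MICROSOFT.SIGNALRSERVICE/SIGNALR/XXX",
  "location": "westus",
  "category": "ConnectivityLogs",
  "operationName": "ConnectionEnded",
  "callerIpAddress": "10.0.0.1",
  "properties": {
    "message": "Connection ended."
  }
}
```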
### Archive to Azure Log Analytics
-Once you check `Send to Log Analytics`, and select target Azure Log Analytics, the logs will be stored in the target. To view resource logs, follow these steps:
+To send logs to a Log Analytics workspace:
+1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace**.
+1. Select the **Subscription** you want to use.
+1. Select the **Log Analytics workspace** to use as the destination for the logs.
+
+To view the resource logs, follow these steps:
1. Select `Logs` in your target Log Analytics. :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
-2. Enter `WebPubSubConnectivity`, `WebPubSubMessaging` or `WebPubSubHttpRequest` and select time range to query [connectivity log](#connectivity-logs), [messaging log](#messaging-logs) or [http request logs](#http-request-logs) correspondingly. For advanced query, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md)
+1. Enter `WebPubSubConnectivity`, `WebPubSubMessaging` or `WebPubSubHttpRequest`, and then select the time range to query the log. For advanced queries, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md).
:::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
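As a hedged sketch, a query over the connectivity log table for recent aborted connections might look like the following; the column names here are assumptions based on the archive log columns described in this article, so verify them against your workspace's schema:

```kusto
WebPubSubConnectivity
| where TimeGenerated > ago(1h)
| where OperationName == "ConnectionAborted"
| project TimeGenerated, ConnectionId, UserId, Message
```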
-To use sample query for SignalR service, please follow the steps below:
+
+To use a sample query for SignalR service, follow the steps below.
1. Select `Logs` in your target Log Analytics.
-2. Select `Queries` to open query explorer.
-3. Select `Resource type` to group sample queries in resource type.
-4. Select `Run` to run the script.
+1. Select `Queries` to open query explorer.
+1. Select `Resource type` to group sample queries in resource type.
+1. Select `Run` to run the script.
:::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
-Archive log columns include elements listed in the following table:
+Archive log columns include elements listed in the following table.
Name | Description
- | -
TransportType | Transport type of the connection. Allowed values are: `Websocket
## Troubleshoot with the resource logs
-When finding connection unexpected growing or dropping situation, you can take advantage of resource logs to troubleshoot. Typical issues are often about connections' unexpected quantity changes, connections reach connection limits and authorization failure.
+If you find unexpected changes in the number of connections, either increasing or decreasing, you can take advantage of resource logs to troubleshoot the problem. Typical issues include unexpected changes in connection counts, connections reaching the connection limit, and authorization failures.
-### Unexpected connection number changes
+### Unexpected changes in number of connections
#### Unexpected connection dropping
-If a connection disconnects, the resource logs will record this disconnecting event with `ConnectionAborted` or `ConnectionEnded` in `operationName`.
+If a connection disconnects, the resource logs will record the disconnection event with `ConnectionAborted` or `ConnectionEnded` in `operationName`.
-The difference between `ConnectionAborted` and `ConnectionEnded` is that `ConnectionEnded` is an expected disconnecting which is triggered by client or server side. While the `ConnectionAborted` is usually an unexpected connection dropping event, and aborting reason will be provided in `message`.
+The difference between `ConnectionAborted` and `ConnectionEnded` is that `ConnectionEnded` is an expected disconnection triggered by the client or server side, while `ConnectionAborted` is usually an unexpected connection dropping event. The reason for the abort is provided in `message`.
The abort reasons are listed in the following table:
| Service reloading, reconnect | Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to Azure Web PubSub service |
| Internal server transient error | Transient error occurs in Azure Web PubSub service, should be auto recovered |
-#### Unexpected connection growing
+#### Unexpected increase in connections
-To troubleshoot about unexpected connection growing, the first thing you need to do is to filter out the extra connections. You can add unique test user ID to your test client connection. Then verify it in with resource logs, if you see more than one client connections have the same test user ID or IP, then it is likely the client side create and establish more connections than expectation. Check your client side.
+When the number of client connections unexpectedly increases, the first thing you need to do is to filter out the superfluous connections. Add a unique test user ID to your test client connection. Then check the resource logs; if you see more than one client connection has the same test user ID or IP, then it's likely the client is creating more connections than expected. Check your client code to find the source of the extra connections.
### Authorization failure
-If you get 401 Unauthorized returned for client requests, check your resource logs. If you meet `Failed to validate audience. Expected Audiences: <valid audience>. Actual Audiences: <actual audience>`, it means all audiences in your access token are invalid. Try to use the valid audiences suggested in the log.
+If you get 401 Unauthorized returned for client requests, check your resource logs. If you find `Failed to validate audience. Expected Audiences: <valid audience>. Actual Audiences: <actual audience>`, it means all audiences in your access token are invalid. Try to use the valid audiences suggested in the log.
### Throttling
-If you find that you cannot establish client connections to Azure Web PubSub service, check your resource logs. If you meet `Connection count reaches limit` in resource log, you establish too many connections to Azure Web PubSub service, which reach the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you meet `Message count reaches limit` in resource log, it means you use free tier, and you use up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to standard tier to send additional messages. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
+If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
defender-for-cloud Other Threat Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/other-threat-protections.md
Title: Additional threat protections from Microsoft Defender for Cloud description: Learn about the threat protections available from Microsoft Defender for Cloud Previously updated : 11/09/2021 Last updated : 07/24/2022 # Additional threat protections in Microsoft Defender for Cloud
To defend against DDoS attacks, purchase a license for Azure DDoS Protection and
If you have Azure DDoS Protection enabled, your DDoS alerts are streamed to Defender for Cloud with no additional configuration needed. For more information on the alerts generated by DDoS Protection, see [Reference table of alerts](alerts-reference.md#alerts-azureddos).
+## Entra Permission Management (formerly Cloudknox)
+
+[Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP.
+
+As part of the integration, each onboarded Azure subscription, AWS account, and GCP project gives you a view of your [Permission Creep Index (PCI)](../active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md). The PCI is an aggregated metric that periodically evaluates the level of risk associated with the number of unused or excessive permissions across identities and resources. PCI measures how risky identities can potentially be, based on the permissions available to them.
+ ## Next steps To learn more about the security alerts from these threat protection features, see the following articles:
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
If you experience issues loading the workload protection dashboard, make sure th
You can also find troubleshooting information for Defender for Cloud at the [Defender for Cloud Q&A page](/answers/topics/azure-security-center.html). If you need further troubleshooting, you can open a new support request using **Azure portal** as shown below:
-![Microsoft Support.](./media/troubleshooting-guide/troubleshooting-guide-fig2.png)
## See also
defender-for-iot Appliance Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/appliance-catalog-overview.md
- Title: OT monitoring appliance reference overview - Microsoft Defender for IoT
-description: Provides an overview of all appliances available for use with Microsoft Defender for IoT OT sensors and on-premises management consoles.
Previously updated : 07/10/2022---
-# OT monitoring appliance reference
-
-This article provides an overview of the OT monitoring appliances supported with Microsoft Defender for IoT.
-
-Each article provides details about the appliance and any extra software installation procedures required. For more information, see [Install OT system software](../how-to-install-software.md) and [Update Defender for IoT OT monitoring software](../update-ot-software.md).
-
-## Corporate environments
-
-The following OT monitoring appliances are available for corporate deployments:
--- [HPE ProLiant DL360](hpe-proliant-dl360.md)-
-## Large enterprises
-
-The following OT monitoring appliances are available for large enterprise deployments:
--- [HPE ProLiant DL20/DL20 Plus (4SFF)](hpe-proliant-dl20-plus-enterprise.md)-
-## Production line
-
-The following OT monitoring appliances are available for production line deployments:
--- [HPE ProLiant DL20/DL20 Plus (NHP 2LFF) for SMB deployments](hpe-proliant-dl20-plus-smb.md)-- [Dell Edge 5200 (Rugged)](dell-edge-5200.md)-- [YS-techsystems YS-FIT2 (Rugged)](ys-techsystems-ys-fit2.md)-
-## Next steps
-
-For more information, see:
--- [Which appliances do I need?](../ot-appliance-sizing.md)-- [Pre-configured physical appliances for OT monitoring](../ot-pre-configured-appliances.md)-- [OT monitoring with virtual appliances](../ot-virtual-appliances.md)
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
Title: Install OT network monitoring software - Microsoft Defender for IoT description: Learn how to install agentless monitoring software for an OT sensor and an on-premises management console for Microsoft Defender for IoT. Use this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances. Previously updated : 07/11/2022 Last updated : 07/13/2022
This article describes how to install agentless monitoring software for OT sensors and on-premises management consoles. You might need the procedures in this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
-## Pre-installation configuration
-
-Each appliance type comes with its own set of instructions that are required before installing Defender for IoT software.
-
-Make sure that you've completed the procedures as instructed in the **Reference > OT monitoring appliance** section of our documentation before installing Defender for IoT software.
-
-For more information, see:
--- [Which appliances do I need?](ot-appliance-sizing.md)-- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), including the catalog of available appliances-- [OT monitoring with virtual appliances](ot-virtual-appliances.md)- ## Download software files from the Azure portal Make sure that you've downloaded the relevant software file for the sensor or on-premises management console.
Mount the ISO file using one of the following options:
- **Virtual mount** ΓÇô use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
+## Pre-installation configuration
+
+Each appliance type comes with its own set of instructions that are required before installing Defender for IoT software.
+
+Make sure that you've completed any specific procedures required for your appliance before installing Defender for IoT software. For more information, see the [OT monitoring appliance catalog](appliance-catalog/appliance-catalog-overview.md).
+
+For more information, see:
+
+- [Which appliances do I need?](ot-appliance-sizing.md)
+- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), including the catalog of available appliances
+- [OT monitoring with virtual appliances](ot-virtual-appliances.md)
++ ## Install OT monitoring software This section provides generic procedures for installing OT monitoring software on sensors or an on-premises management console.
Select one of the following tabs, depending on which type of software you're installing:
# [OT sensor](#tab/sensor)
-This procedure describes how to install OT sensor software on a physical or virtual appliance.
+This procedure describes how to install OT sensor software on a physical or virtual appliance after you've booted the ISO file on your appliance.
> [!Note]
-> At the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down as these passwords will not be presented again.
+> Towards the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down as these passwords will not be presented again.
**To install the sensor's software**:
-1. Select the installation language.
+1. When the installation boots, you're first prompted to select the hardware profile you want to install.
- :::image type="content" source="media/tutorial-install-components/language-select.png" alt-text="Screenshot of the sensor's language select screen.":::
+ :::image type="content" source="media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's hardware profile options." lightbox="media/tutorial-install-components/sensor-architecture.png":::
-1. Select the sensor's architecture. For example:
+ For more information, see [Which appliances do I need?](ot-appliance-sizing.md).
- :::image type="content" source="media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's architecture select screen.":::
+ System files are installed, the sensor reboots, and then sensor files are installed. This process can take a few minutes.
-1. The sensor will reboot, and the **Package configuration** screen will appear. Press the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
+ When the installation steps are complete, the Ubuntu **Package configuration** screen is displayed, with the `Configuring iot-sensor` wizard, showing a prompt to select your monitor interfaces.
-1. Select the monitor interface. For example:
+ In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
+
+1. In the `Select monitor interfaces` screen, select the interfaces you want to monitor.
+
+ By default, `eno1` is reserved for the management interface, and we recommend that you leave this option unselected.
+
+ For example:
:::image type="content" source="media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
-1. If one of the monitoring ports is for ERSPAN, select it. For example:
+1. In the `Select erspan monitor interfaces` screen, select any ERSPAN monitoring ports that you have. The wizard lists available interfaces, even if you don't have any ERSPAN monitoring ports in your system. If you have no ERSPAN monitoring ports, leave all options unselected.
+
+ For example:
:::image type="content" source="media/tutorial-install-components/erspan-monitor.png" alt-text="Screenshot of the select erspan monitor screen.":::
-1. Select the interface to be used as the management interface. For example:
+1. In the `Select management interface` screen, we recommend keeping the default `eno1` value selected as the management interface.
+
+ For example:
:::image type="content" source="media/tutorial-install-components/management-interface.png" alt-text="Screenshot of the management interface select screen.":::
-1. Enter the sensor's IP address. For example:
+1. In the `Enter sensor IP address` screen, enter the IP address for the sensor appliance you're installing.
:::image type="content" source="media/tutorial-install-components/sensor-ip-address.png" alt-text="Screenshot of the sensor IP address screen.":::
-1. Enter the path of the mounted logs folder. We recommend using the default path. For example:
+1. In the `Enter path to the mounted backups folder` screen, enter the path to the sensor's mounted backups. We recommend using the default path of `/opt/sensor/persist/backups`. For example:
:::image type="content" source="media/tutorial-install-components/mounted-backups-path.png" alt-text="Screenshot of the mounted backup path screen.":::
-1. Enter the Subnet Mask IP address. For example:
+1. In the `Enter Subnet Mask` screen, enter the IP address for the sensor's subnet mask. For example:
+
+ :::image type="content" source="media/tutorial-install-components/sensor-subnet-ip.png" alt-text="Screenshot of the Enter Subnet Mask screen.":::
-1. Enter the default gateway IP address.
+1. In the `Enter Gateway` screen, enter the sensor's default gateway IP address. For example:
-1. Enter the DNS Server IP address.
+ :::image type="content" source="media/tutorial-install-components/sensor-gateway-ip.png" alt-text="Screenshot of the Enter Gateway screen.":::
-1. Enter the sensor hostname. For example:
+1. In the `Enter DNS server` screen, enter the sensor's DNS server IP address. For example:
- :::image type="content" source="media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the screen where you enter a hostname for your sensor.":::
+ :::image type="content" source="media/tutorial-install-components/sensor-dns-ip.png" alt-text="Screenshot of the Enter DNS server screen.":::
- The installation process runs.
+1. In the `Enter hostname` screen, enter the sensor hostname. For example:
-1. When the installation process completes, save the appliance ID, and passwords. Copy these credentials to a safe place as you'll need them to access the platform the first time you use it.
+ :::image type="content" source="media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the Enter hostname screen.":::
+
+1. In the `Run this sensor as a proxy server (Preview)` screen, select `<Yes>` only if you want to configure a proxy, and then enter the proxy credentials as prompted.
+
+ The default configuration is without a proxy.
+
+ For more information, see [Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (legacy)](how-to-connect-sensor-by-proxy.md).
++
+1. <a name=credentials></a>The installation process starts running and then shows the credentials screen. For example:
:::image type="content" source="media/tutorial-install-components/login-information.png" alt-text="Screenshot of the final screen of the installation with usernames, and passwords.":::
+ Save the usernames and passwords listed, as the passwords are unique and this is the only time that the credentials are listed. Copy the credentials to a safe place so that you can use them when signing into the sensor for the first time.
+
+ Select `<Ok>` when you're ready to continue.
+
+ The installation continues running again, and then reboots when the installation is complete. Upon reboot, you're prompted to enter credentials to sign in. For example:
+
+ :::image type="content" source="media/tutorial-install-components/sensor-sign-in.png" alt-text="Screenshot of a sensor sign-in screen after installation.":::
+
+1. Enter the credentials for one of the users that you'd copied down in the [previous step](#credentials).
+
+ - If the `iot-sensor login:` prompt disappears, press **ENTER** to have it shown again.
+ - When you enter your password, the password characters don't display on the screen. Make sure you enter them carefully.
+
+ When you've successfully signed in, the following confirmation screen appears:
+
+ :::image type="content" source="media/tutorial-install-components/install-complete.png" alt-text="Screenshot of the sign-in confirmation.":::
+
+Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
+ # [On-premises management console](#tab/on-prem)
During the installation process, you can add a secondary NIC. If you choose not
:::image type="content" source="media/tutorial-install-components/on-prem-language-select.png" alt-text="Select your preferred language for the installation process.":::
+1. Select your location. For example:
+
+1. When prompted to detect the keyboard layout, keep the default **No**, and then select a keyboard layout.
+
+1. Configure the network. Your system might detect multiple interfaces.
+ 1. Select **MANAGEMENT-RELEASE-\<version\>\<deployment type\>**. :::image type="content" source="media/tutorial-install-components/on-prem-install-screen.png" alt-text="Select your version.":::
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
Use the **Device inventory** page from a sensor console to manage all OT and IT devices detected by that console. Identify new devices detected, devices that might need troubleshooting, and more.
-For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device).
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
> [!TIP] > Alternately, view your device inventory from a [the Azure portal](how-to-manage-device-inventory-for-organizations.md), or from an [on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md).
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
Then, use any of the following procedures to continue:
- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console) - [Install software](how-to-install-software.md)
-Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/appliance-catalog-overview.md).
+Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/index.yml).
event-grid Configure Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-custom-topic.md
+
+ Title: Configure an Azure Event Grid topic
+description: This article shows how to configure local auth, public or private access, managed identity, and data residency for an Event Grid custom topic.
Last updated : 07/21/2022++++
+# Configure a custom topic or a domain in Azure Event Grid
+This article shows how to update or configure a custom topic or a domain in Azure Event Grid.
+
+## Navigate to your topic or domain
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
+2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list. If you are configuring a domain, search for **Event Grid Domains**.
+
+ :::image type="content" source="./media/custom-event-quickstart-portal/select-event-grid-topics.png" alt-text="Screenshot showing the Azure portal search bar to search for Event Grid topics.":::
+3. On the **Event Grid Topics** or **Event Grid Domains** page, select your topic or domain.
+
+ :::image type="content" source="./media/configure-custom-topic/select-topic.png" alt-text="Screenshot showing the selection of the topic in the list of Event Grid topics.":::
+
+## Enable or disable local authentication
+
+1. On the **Overview** page, in the **Essentials** section, select the current value for **Local Authentication**.
+1. On the **Local Authentication** page, select **Enabled** or **Disabled**.
+
+ :::image type="content" source="./media/configure-custom-topic/local-authentication.png" alt-text="Screenshot showing the Local Authentication page.":::
+1. Select **OK** to close the **Local Authentication** page.
+
+## Configure public or private access
+
+1. On the left menu, select **Networking** under **Settings**.
+2. Select **Public networks** to allow all networks, including the internet, to access the resource.
+
+ You can restrict the traffic using IP firewall rules. Specify a single IPv4 address or a range of IP addresses in Classless inter-domain routing (CIDR) notation.
+
+ :::image type="content" source="./media/configure-firewall/public-networks-page.png" alt-text="Screenshot that shows the Public network access page with Public networks selected.":::
+3. Select **Private endpoints only** to allow only private endpoint connections to access this resource. Use the **Private endpoint connections** tab on this page to manage connections.
+
+ For step-by-step instructions to create a private endpoint connection, see [Add a private endpoint using Azure portal](configure-private-endpoints.md#use-azure-portal).
+
+ :::image type="content" source="./media/configure-firewall/select-private-endpoints.png" alt-text="Screenshot that shows the Public network access page with Private endpoints only option selected.":::
+4. Select **Save** on the toolbar.
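If you prefer the Azure CLI, the same setting can be sketched as follows; this is a sketch rather than a verified procedure, so check `az eventgrid topic update --help` in your CLI version for the exact parameter names:

```azurecli
# Allow public network access (optionally restrict it with --inbound-ip-rules):
az eventgrid topic update --resource-group <resource-group> --name <topic-name> \
    --public-network-access enabled

# Allow only private endpoint connections:
az eventgrid topic update --resource-group <resource-group> --name <topic-name> \
    --public-network-access disabled
```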
+
+## Assign managed identity
+When you use Azure portal, you can assign one system assigned identity and up to two user assigned identities to an existing topic or a domain. The following procedures show you how to enable an identity for a custom topic. The steps for enabling an identity for a domain are similar.
+
+### To assign a system-assigned identity to a topic
+1. On the left menu, select **Identity** under **Settings**.
+1. In the **System assigned** tab, turn **on** the switch to enable the identity.
+1. Select **Save** on the toolbar to save the setting.
+
+ :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Screenshot showing the Identity page for a custom topic.":::
+
+### To assign a user-assigned identity to a topic
+1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article.
+1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar.
+
+ :::image type="content" source="./media/managed-service-identity/user-assigned-identity-add-button.png" alt-text="Screenshot showing the User Assigned Identity tab of the Identity page.":::
+1. In the **Add user managed identity** window, follow these steps:
+ 1. Select the **Azure subscription** that has the user-assigned identity.
+ 1. Select the **user-assigned identity**.
+ 1. Select **Add**.
+1. Refresh the list in the **User assigned** tab to see the added user-assigned identity.
+
+You can use similar steps to enable an identity for an Event Grid domain.
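+
+If you prefer scripting, a system-assigned identity can be enabled with Azure CLI (the CLI currently supports only system-assigned identities for topics and domains, not user-assigned ones). The topic name and resource group below are placeholders:
+
+```azurecli-interactive
+# Enable a system-assigned managed identity on an existing topic.
+az eventgrid topic update \
+    --resource-group <resource-group-name> \
+    --name <topic-name> \
+    --identity systemassigned
+```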
+
+## Configure data residency
+
+1. On the left menu, select **Configuration** under **Settings**.
+1. For **Data residency**, select whether you don't want any data to be replicated to another region (**Regional**) or you want the metadata to be replicated to a predefined secondary region (**Cross-Geo**).
+
+    The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require any intervention from the user. Microsoft reserves the right to determine when this path is taken, and the mechanism doesn't involve user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](/azure/event-grid/event-grid-faq).
+
+ If you select the **Regional** option, you may define your own disaster recovery plan. For more information, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md).
+
+ :::image type="content" source="./media/configure-custom-topic/data-residency.png" alt-text="Screenshot showing the Configuration page with data residency settings.":::
+1. After updating the setting, select **Apply** to apply changes.
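+
+If you manage topics with Azure CLI, the data residency setting may also be set at create time. The `--data-residency-boundary` parameter name and its values below are assumptions — check `az eventgrid topic create --help` for your CLI version before using them. Resource names are placeholders.
+
+```azurecli-interactive
+# Create a topic whose metadata stays within the region (no cross-geo replication).
+az eventgrid topic create \
+    --resource-group <resource-group-name> \
+    --name <topic-name> \
+    --location <location> \
+    --data-residency-boundary withinregion
+```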
+
+## Next steps
+
+Learn more about what Event Grid can help you do:
+
+- [Route custom events to web endpoint with the Azure portal and Event Grid](custom-event-quickstart-portal.md)
+- [About Event Grid](overview.md)
+- [Event handlers](event-handlers.md)
+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall.md
This section shows you how to enable public or private network access for an Eve
1. On the **Basics** page of the **Create topic** wizard, select **Next: Networking** at the bottom of the page after filling the required fields.
- :::image type="content" source="./media/configure-firewall/networking-link.png" alt-text="Image showing the selection of Networking link at the bottom of the page. ":::
+ :::image type="content" source="./media/configure-firewall/networking-link.png" alt-text="Screenshot showing the selection of Networking link at the bottom of the page. ":::
1. If you want to allow clients to connect to the topic endpoint via a public IP address, keep the **Public access** option selected.
- :::image type="content" source="./media/configure-firewall/networking-page-public-access.png" alt-text="Image showing the selection of Public access option on the Networking page of the Create topic wizard. ":::
+ :::image type="content" source="./media/configure-firewall/networking-page-public-access.png" alt-text="Screenshot showing the selection of Public access option on the Networking page of the Create topic wizard. ":::
1. To allow access to the Event Grid topic via a private endpoint, select the **Private access** option.
- :::image type="content" source="./media/configure-firewall/networking-page-private-access.png" alt-text="Image showing the selection of Private access option on the Networking page of the Create topic wizard. ":::
+ :::image type="content" source="./media/configure-firewall/networking-page-private-access.png" alt-text="Screenshot showing the selection of Private access option on the Networking page of the Create topic wizard. ":::
1. Follow instructions in the [Add a private endpoint using Azure portal](configure-private-endpoints.md#use-azure-portal) section to create a private endpoint. ### For an existing topic
event-grid Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-custom-topic.md
+
+ Title: Create an Azure Event Grid topic or a domain
+description: This article shows how to create an Event Grid topic or domain.
Last updated : 07/21/2022++++
+# Create a custom topic or a domain in Azure Event Grid
+This article shows how to create a custom topic or a domain in Azure Event Grid.
+
+## Prerequisites
+If you're new to Azure Event Grid, read through [Event Grid overview](overview.md) before starting this tutorial.
++
+## Create a custom topic or domain
+An Event Grid topic provides a user-defined endpoint that you post your events to.
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
+2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list. If you're creating a domain, search for **Event Grid Domains** instead.
+
+    :::image type="content" source="./media/custom-event-quickstart-portal/select-event-grid-topics.png" alt-text="Screenshot showing the Azure portal search bar to search for Event Grid topics.":::
+3. On the **Event Grid Topics** or **Event Grid Domains** page, select **+ Create** on the toolbar.
+
+ :::image type="content" source="./media/custom-event-quickstart-portal/create-topic-button.png" alt-text="Screenshot showing the Create Topic button on Event Grid topics page.":::
+
+## Basics page
+On the **Basics** page of **Create Topic** or **Create Event Grid Domain** wizard, follow these steps:
+
+1. Select your Azure **subscription**.
+2. Select an existing resource group or select **Create new**, and enter a **name** for the **resource group**.
+3. Provide a unique **name** for the custom topic or domain. The name must be unique because it's represented by a DNS entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-".
+4. Select a **location** for the Event Grid topic or domain.
+1. Select **Next: Networking** at the bottom of the page to switch to the **Networking** page.
+
+ :::image type="content" source="./media/create-custom-topic/basics-page.png" alt-text="Screenshot showing the Networking page of the Create Topic wizard.":::
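+
+As an alternative to the portal wizard, the basic settings above map directly to an Azure CLI command. This is a minimal sketch with placeholder names; it creates a topic with default networking and security settings:
+
+```azurecli-interactive
+# Create a custom topic (name must be 3-50 characters: a-z, A-Z, 0-9, and "-").
+az eventgrid topic create \
+    --resource-group <resource-group-name> \
+    --name <topic-name> \
+    --location <location>
+```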
+
+## Networking page
+On the **Networking** page of the **Create Topic** or **Create Event Grid Domain** wizard, follow these steps:
+
+1. If you want to allow clients to connect to the topic or domain endpoint via a public IP address, keep the **Public access** option selected.
+
+ :::image type="content" source="./media/configure-firewall/networking-page-public-access.png" alt-text="Screenshot showing the selection of Public access option on the Networking page of the Create topic wizard.":::
+1. To allow access to the topic or domain via a private endpoint, select the **Private access** option.
+
+ :::image type="content" source="./media/configure-firewall/networking-page-private-access.png" alt-text="Screenshot showing the selection of Private access option on the Networking page of the Create topic wizard. ":::
+1. Follow instructions in the [Add a private endpoint using Azure portal](configure-private-endpoints.md#use-azure-portal) section to create a private endpoint.
+1. Select **Next: Security** at the bottom of the page to switch to the **Security** page.
++
+## Security page
+On the **Security** page of the **Create Topic** or **Create Event Grid Domain** wizard, follow these steps:
+
+1. To assign a system-assigned managed identity to your topic or domain, select **Enable system assigned identity**.
+
+ :::image type="content" source="./media/managed-service-identity/create-topic-identity.png" alt-text="Screenshot of the Identity page with system assigned identity option selected.":::
+1. To assign a user-assigned identity, select **Add user assigned identity** in the **User assigned identity** section of the page.
+1. In the **Select user assigned identity** window, select the subscription that has the user-assigned identity, select the **user-assigned identity**, and then click **Select**.
+
+ :::image type="content" source="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png" alt-text="Screenshot of the Identity page with user assigned identity option selected." lightbox="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png":::
+1. To disable local authentication, select **Disabled**. When you do, the topic or domain can't be accessed using access key or SAS authentication, only via Azure AD authentication.
+
+ :::image type="content" source="./media/authenticate-with-active-directory/create-topic-disable-local-auth.png" alt-text="Screenshot showing the Advanced tab of Create Topic page when you can disable local authentication.":::
+1. Select **Advanced** at the bottom of the page to switch to the **Advanced** page.
+
+## Advanced page
+1. On the **Advanced** page of the **Create Topic** or **Create Event Grid Domain** wizard, select the schema for events that will be published to this topic.
+
+ :::image type="content" source="./media/create-custom-topic/select-schema.png" alt-text="Screenshot showing the selection of a schema on the Advanced page.":::
+2. For **Data residency**, select whether you don't want any data to be replicated to another region (**Regional**) or you want the metadata to be replicated to a predefined secondary region (**Cross-Geo**).
+
+ :::image type="content" source="./media/create-custom-topic/data-residency.png" alt-text="Screenshot showing the Data residency section of the Advanced page in the Create Topic wizard.":::
+
+    The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require any intervention from the user. Microsoft reserves the right to determine when this path is taken, and the mechanism doesn't involve user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](/azure/event-grid/event-grid-faq).
+
+ If you select the **Regional** option, you may define your own disaster recovery plan. For more information, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md).
+3. Select **Next: Tags** to move to the **Tags** page.
+
+## Tags page
+The **Tags** page has no fields that are specific to Event Grid. You can assign a tag (name-value pair) as you do for any other Azure resource. Select **Next: Review + create** to switch to the **Review + create** page.
+
+## Review + create page
+On the **Review + create** page, review all your settings, confirm the validation succeeded, and then select **Create** to create the topic or the domain.
+++
+## Next steps
+
+Now that you know how to create custom topics or domains, learn more about what Event Grid can help you do:
+
+- [Route custom events to web endpoint with the Azure portal and Event Grid](custom-event-quickstart-portal.md)
+- [About Event Grid](overview.md)
+- [Event handlers](event-handlers.md)
+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Custom Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart-portal.md
Title: 'Send custom events to web endpoint - Event Grid, Azure portal' description: 'Quickstart: Use Azure Event Grid and Azure portal to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 07/01/2021 Last updated : 07/21/2022
An event grid topic provides a user-defined endpoint that you post your events t
1. Sign in to [Azure portal](https://portal.azure.com/). 2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list.
- :::image type="content" source="./media/custom-event-quickstart-portal/select-event-grid-topics.png" alt-text="Search for and select Event Grid Topics":::
+    :::image type="content" source="./media/custom-event-quickstart-portal/select-event-grid-topics.png" alt-text="Screenshot showing the Azure portal search bar to search for Event Grid Topics.":::
3. On the **Event Grid Topics** page, select **+ Create** on the toolbar.
-4. On the **Create Topic** page, follow these steps:
+
+ :::image type="content" source="./media/custom-event-quickstart-portal/create-topic-button.png" alt-text="Screenshot showing the Create Topic button on Event Grid Topics page.":::
+1. On the **Create Topic** page, follow these steps:
1. Select your Azure **subscription**. 2. Select an existing resource group or select **Create new**, and enter a **name** for the **resource group**. 3. Provide a unique **name** for the custom topic. The topic name must be unique because it's represented by a DNS entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-".
An event grid topic provides a user-defined endpoint that you post your events t
:::image type="content" source="./media/custom-event-quickstart-portal/review-create-page.png" alt-text="Review settings and create"::: 5. After the deployment succeeds, select **Go to resource** to navigate to the **Event Grid Topic** page for your topic. Keep this page open. You use it later in the quickstart.
- :::image type="content" source="./media/custom-event-quickstart-portal/event-grid-topic-home-page.png" alt-text="Screenshot showing the Event Grid Topic home page":::
+ :::image type="content" source="./media/custom-event-quickstart-portal/event-grid-topic-home-page.png" alt-text="Screenshot showing the Event Grid Topic home page.":::
+
+ > [!NOTE]
+ > To keep the quickstart simple, you'll be using only the **Basics** page to create a topic. For detailed steps about configuring network, security, and data residency settings on other pages of the wizard, see [Create a custom topic](create-custom-topic.md).
## Create a message endpoint Before you create a subscription for the custom topic, create an endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
event-grid Enable Identity Custom Topics Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/enable-identity-custom-topics-domains.md
Title: Enable managed identity on Azure Event Grid custom topics and domains description: This article describes how enable managed service identity for an Azure Event Grid custom topic or domain. Previously updated : 11/09/2021 Last updated : 07/21/2022 # Assign a managed identity to an Event Grid custom topic or domain
This article shows you how to use the Azure portal and CLI to assign a system-as
In the **Azure portal**, when creating a topic or a domain, you can assign either a system-assigned identity or two user-assigned identities, but not both types of identities. Once the topic or domain is created, you can assign both types of identities by following steps in the [Enable identity for an existing topic or domain](#enable-identity-for-an-existing-custom-topic-or-domain) section. ### Enable system-assigned identity
-On the **Advanced** tab of the topic or domain creation wizard, select **Enable system assigned identity**.
+On the **Security** page of the topic or domain creation wizard, select **Enable system assigned identity**.
### Enable user-assigned identity
-1. On the **Advanced** page of the topic or domain creation wizard, select **Enable user-assigned identity**, and then select **Add user assigned identity**.
-
- :::image type="content" source="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png" alt-text="Image showing the Enable user assigned identity option selected.":::
+1. On the **Security** page of the topic or domain creation wizard, select **Add user assigned identity**.
1. In the **Select user assigned identity** window, select the subscription that has the user-assigned identity, select the **user-assigned identity**, and then click **Select**.
+ :::image type="content" source="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png" alt-text="Screenshot showing the Enable user assigned identity option selected." lightbox="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png":::
+ # [Azure CLI](#tab/cli) You can also use Azure CLI to create a custom topic or a domain with a system-assigned identity. Currently, Azure CLI doesn't support assigning a user-assigned identity to a topic or a domain.
The following procedures show you how to enable an identity for a custom topic.
1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article. 1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar.
- :::image type="content" source="./media/managed-service-identity/user-assigned-identity-add-button.png" alt-text="Image showing the User Assigned Identity tab":::
+    :::image type="content" source="./media/managed-service-identity/user-assigned-identity-add-button.png" alt-text="Screenshot showing the User Assigned Identity tab.":::
1. In the **Add user managed identity** window, follow these steps: 1. Select the **Azure subscription** that has the user-assigned identity. 1. Select the **user-assigned identity**.
event-grid Enable Identity Partner Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/enable-identity-partner-topic.md
+
+ Title: Enable managed identity for an Azure Event Grid partner topic
+description: This article describes how enable managed service identity for an Azure Event Grid partner topic.
+ Last updated : 07/21/2022++
+# Assign a managed identity to an Azure Event Grid partner topic
+This article shows you how to use the Azure portal to assign a system-assigned or a user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to an Event Grid partner topic. When you use the Azure portal, you can assign one system-assigned identity and up to two user-assigned identities to an existing partner topic.
+
+## Navigate to your partner topic
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Search for **event grid partner topics** in the search bar at the top.
+3. Select the **partner topic** for which you want to enable the managed identity.
+4. Select **Identity** on the left menu.
+
+## Assign a system-assigned identity
+1. In the **System assigned** tab, turn **on** the switch to enable the identity.
+1. Select **Save** on the toolbar to save the setting.
+
+ :::image type="content" source="./media/enable-identity-partner-topic/identity-existing-topic.png" alt-text="Screenshot showing the Identity page for a partner topic.":::
+
+## Assign a user-assigned identity
+1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article.
+1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar.
+
+    :::image type="content" source="./media/enable-identity-partner-topic/user-assigned-identity-add-button.png" alt-text="Screenshot showing the User Assigned Identity tab.":::
+1. In the **Add user managed identity** window, follow these steps:
+ 1. Select the **Azure subscription** that has the user-assigned identity.
+ 1. Select the **user-assigned identity**.
+ 1. Select **Add**.
+1. Refresh the list in the **User assigned** tab to see the added user-assigned identity.
++
+## Next steps
+Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For detailed steps, see [Grant managed identity the access to Event Grid destination](add-identity-roles.md).
expressroute Expressroute Howto Add Gateway Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-portal-resource-manager.md
Title: 'Tutorial: Azure ExpressRoute - Add a gateway to a VNet (Azure portal)'
+ Title: 'Tutorial: Configure a virtual network gateway for ExpressRoute using Azure portal'
description: This tutorial walks you through adding a virtual network gateway to a VNet for ExpressRoute using the Azure portal.
> * [Classic - PowerShell](expressroute-howto-add-gateway-classic.md) >
-This tutorial walks you through the steps to add, resize, and remove a virtual network gateway for a pre-existing virtual network (VNet). The steps for this configuration apply to VNets that were created using the Resource Manager deployment model for an ExpressRoute configuration. For more information about virtual network gateways and gateway configuration settings for ExpressRoute, see [About virtual network gateways for ExpressRoute](expressroute-about-virtual-network-gateways.md).
+This tutorial walks you through the steps to add and remove a virtual network gateway for a pre-existing virtual network (VNet). The steps for this configuration apply to VNets that were created using the Resource Manager deployment model for an ExpressRoute configuration. For more information about virtual network gateways and gateway configuration settings for ExpressRoute, see [About virtual network gateways for ExpressRoute](expressroute-about-virtual-network-gateways.md).
In this tutorial, you learn how to: > [!div class="checklist"]
expressroute Expressroute Howto Add Gateway Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-resource-manager.md
Title: 'Tutorial - Azure ExpressRoute: Add a gateway to a VNet - Azure PowerShell'
-description: This tutorial helps you add VNet gateway to an already created Resource Manager VNet for ExpressRoute using Azure PowerShell.
+ Title: 'Tutorial: Configure a virtual network gateway for ExpressRoute using PowerShell'
+description: This tutorial walks you through adding a virtual network gateway to a VNet for ExpressRoute using Azure PowerShell.
Previously updated : 10/05/2020 Last updated : 07/22/2022 -+ # Tutorial: Configure a virtual network gateway for ExpressRoute using PowerShell
> * [Resource Manager - Azure portal](expressroute-howto-add-gateway-portal-resource-manager.md) > * [Resource Manager - PowerShell](expressroute-howto-add-gateway-resource-manager.md) > * [Classic - PowerShell](expressroute-howto-add-gateway-classic.md)
-> * [Video - Azure portal](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-create-a-vpn-gateway-for-your-virtual-network)
>
-This tutorial helps you add, resize, and remove a virtual network (VNet) gateway for a pre-existing VNet. The steps for this configuration apply to VNets that were created using the Resource Manager deployment model for an ExpressRoute configuration. For more information, see [About virtual network gateways for ExpressRoute](expressroute-about-virtual-network-gateways.md).
+This tutorial walks you through the steps to add, resize, and remove a virtual network gateway for a pre-existing virtual network (VNet) using PowerShell. The steps for this configuration apply to VNets that were created using the Resource Manager deployment model for an ExpressRoute configuration. For more information about virtual network gateways and gateway configuration settings for ExpressRoute, see [About virtual network gateways for ExpressRoute](expressroute-about-virtual-network-gateways.md).
In this tutorial, you learn how to: > [!div class="checklist"]
The steps for this task use a VNet based on the values in the following configur
1. To connect with Azure, run `Connect-AzAccount`.
-1. Declare your variables for this exercise. Be sure to edit the sample to reflect the settings that you want to use.
+1. Declare your variables for this tutorial. Be sure to edit the sample to reflect the settings that you want to use.
```azurepowershell-interactive $RG = "TestRG"
The steps for this task use a VNet based on the values in the following configur
```azurepowershell-interactive Add-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix 192.168.200.0/26 ```
- If you are using a dual stack virtual network and plan to use IPv6-based private peering over ExpressRoute, create a dual stack gateway subnet instead.
+ If you're using a dual stack virtual network and plan to use IPv6-based private peering over ExpressRoute, create a dual stack gateway subnet instead.
```azurepowershell-interactive Add-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix "10.0.0.0/26","ace:daa:daaa:deaa::/64"
The steps for this task use a VNet based on the values in the following configur
$pip = New-AzPublicIpAddress -Name $GWIPName -ResourceGroupName $RG -Location $Location -AllocationMethod Dynamic ```
- If you plan to use IPv6-based private peering over ExpressRoute, please set the IP SKU to Standard and the AllocationMethod to Static:
+ If you plan to use IPv6-based private peering over ExpressRoute, set the IP SKU to Standard and the AllocationMethod to Static:
```azurepowershell-interactive $pip = New-AzPublicIpAddress -Name $GWIPName -ResourceGroupName $RG -Location $Location -AllocationMethod Static -SKU Standard ```
Get-AzVirtualNetworkGateway -ResourceGroupName $RG
``` ## Resize a gateway
-There are a number of [Gateway SKUs](expressroute-about-virtual-network-gateways.md). You can use the following command to change the Gateway SKU at any time.
-
-> [!IMPORTANT]
-> This command doesn't work for UltraPerformance gateway. To change your gateway to an UltraPerformance gateway, first remove the existing ExpressRoute gateway, and then create a new UltraPerformance gateway. To downgrade your gateway from an UltraPerformance gateway, first remove the UltraPerformance gateway, and then create a new gateway.
->
+There are a number of [gateway SKUs](expressroute-about-virtual-network-gateways.md). You can use the following command to change the Gateway SKU at any time.
```azurepowershell-interactive $gw = Get-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG
Remove-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG
After you've created the VNet gateway, you can link your VNet to an ExpressRoute circuit. > [!div class="nextstepaction"]
-> [Link a Virtual Network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
+> [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
governance Guest Configuration Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration-assignments.md
Title: Understand guest configuration assignment resources description: Guest configuration creates extension resources named guest configuration assignments that map configurations to machines. Previously updated : 08/15/2021+ Last updated : 07/15/2022 + # Understand guest configuration assignment resources + When an Azure Policy is assigned, if it's in the category "Guest Configuration" there's metadata included to describe a guest assignment.
An example deployment template:
"configurationParameter": {} } }
- }
+ }
] } ```
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration-custom.md
Title: Changes to behavior in PowerShell Desired State Configuration for guest configuration description: This article provides an overview of the platform used to deliver configuration changes to machines through Azure Policy. Previously updated : 05/31/2021+ Last updated : 07/15/2022 + # Changes to behavior in PowerShell Desired State Configuration for guest configuration + Before you begin, it's a good idea to read the overview of [guest configuration](./guest-configuration.md).
governance Guest Configuration Policy Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration-policy-effects.md
Title: Remediation options for guest configuration description: Azure Policy's guest configuration feature offers options for continuous remediation or control using remediation tasks. Previously updated : 07/12/2021+ Last updated : 07/15/2022 + # Remediation options for guest configuration + Before you begin, it's a good idea to read the overview page for [guest configuration](../concepts/guest-configuration.md).
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration.md
Title: Understand the guest configuration feature of Azure Policy description: Learn how Azure Policy uses the guest configuration feature to audit or configure settings inside virtual machines. Previously updated : 07/15/2021+ Last updated : 07/15/2022 + # Understand the guest configuration feature of Azure Policy + Azure Policy's guest configuration feature provides native capability to audit or configure operating system settings as code, both for machines running in Azure and hybrid
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-portal.md
Title: Create Azure HDInsight - Azure Data Lake Storage Gen2 - portal description: Learn how to use Azure Data Lake Storage Gen2 with Azure HDInsight clusters using the portal.--++ Previously updated : 09/07/2021 Last updated : 07/21/2022 # Create a cluster with Data Lake Storage Gen2 using the Azure portal
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
For a complete list of improvements, see the [Apache Spark 3.1 release notes.](h
For more details on migration, see the [migration guide.](https://spark.apache.org/docs/latest/migration-guide.html)
-## Kafka 2.4 is now generally available
+### Kafka 2.4 is now generally available
Kafka 2.4.1 is now Generally Available. For more information, see the [Kafka 2.4.1 Release Notes.](http://kafka.apache.org/24/documentation.html) Other features include MirrorMaker 2 availability, a new AtMinIsr topic partition metric category, improved broker start-up time through lazy on-demand mmap of index files, and more consumer metrics to observe user poll behavior.
-## Map Datatype in HWC is now supported in HDInsight 4.0
+### Map Datatype in HWC is now supported in HDInsight 4.0
This release includes Map Datatype support for HWC 1.0 (Spark 2.4) via the spark-shell application and all other Spark clients that HWC supports. The following improvements are included, as with any other data type:
OSS backports that are included in Hive including HWC 1.0 (Spark 2.4) which supp
| LLAP external client - Handle nested values when the parent struct is null | [HIVE-25243](https://issues.apache.org/jira/browse/HIVE-25243) | | Upgrade arrow version to 0.11.0 | [HIVE-23987](https://issues.apache.org/jira/browse/HIVE-23987) |
-## Deprecation notices
-### Azure Virtual Machine Scale Sets on HDInsight
+### Deprecation notices
+#### Azure Virtual Machine Scale Sets on HDInsight
HDInsight will no longer use Azure Virtual Machine Scale Sets to provision clusters; no breaking change is expected. Existing HDInsight clusters on virtual machine scale sets aren't affected, and any new clusters created on the latest images will no longer use Virtual Machine Scale Sets.
-### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
+#### Scaling of Azure HDInsight HBase workloads will now be supported only using manual scale
Starting from March 01, 2022, HDInsight will only support manual scale for HBase; there's no impact on running clusters. New HBase clusters won't be able to enable schedule-based autoscaling. For more information on how to manually scale your HBase cluster, refer to our documentation on [Manually scaling Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
Title: Display MedTech service metrics logging - Azure Health Data Services
-description: This article explains how to display MedTech service Metrics
+ Title: Display MedTech service metrics - Azure Health Data Services
+description: This article explains how to display MedTech service metrics.
Previously updated : 03/22/2022 Last updated : 07/22/2022
-# How to display MedTech service metrics
+# How to display the MedTech service metrics
-In this article, you'll learn how to display MedTech service metrics in the Azure portal.
+In this article, you'll learn how to display MedTech service metrics in the Azure portal and how to pin the MedTech service metrics tile to an Azure portal dashboard.
-## Display metrics
+## Metric types for the MedTech service
-1. Within your Azure Health Data Services workspace, select **MedTech service**.
+The MedTech service metrics that you can select and display are listed in the following table:
- :::image type="content" source="media\iot-metrics\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service button." lightbox="media\iot-metrics\iot-connectors-button.png":::
+|Metric type|Metric purpose|
+|--|--|
+|Number of Incoming Messages|Displays the number of received raw incoming messages (for example, the device events).|
+|Number of Normalized Messages|Displays the number of normalized messages.|
+|Number of Message Groups|Displays the number of groups that have messages aggregated in the designated time window.|
+|Average Normalized Stage Latency|Displays the average latency of the normalized stage. The normalized stage performs normalization on raw incoming messages.|
+|Average Group Stage Latency|Displays the average latency of the group stage. The group stage performs buffering, aggregating, and grouping on normalized messages.|
+|Total Error Count|Displays the total number of errors.|
+
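Metrics such as **Number of Incoming Messages** combined with a **Count** aggregation (chosen later in this article) simply total events per time window. As an illustrative sketch only (this is not the MedTech service implementation; the timestamps and one-minute bucketing are hypothetical), counting incoming messages per window looks like:

```python
from collections import Counter
from datetime import datetime

# Hypothetical arrival timestamps of raw incoming messages (device events).
arrivals = [
    datetime(2022, 7, 22, 12, 0, 15),
    datetime(2022, 7, 22, 12, 0, 45),
    datetime(2022, 7, 22, 12, 1, 10),
    datetime(2022, 7, 22, 12, 3, 5),
]

def count_per_minute(timestamps):
    """Bucket timestamps into one-minute windows and count each bucket,
    analogous to a 'Count' aggregation of 'Number of Incoming Messages'."""
    return Counter(t.replace(second=0, microsecond=0) for t in timestamps)

counts = count_per_minute(arrivals)
for window, n in sorted(counts.items()):
    print(window.isoformat(), n)
```

The portal performs this kind of windowed aggregation for you; the sketch only shows what a "count per interval" chart is plotting.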
+## Display the MedTech service metrics
+
+1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
+
+ :::image type="content" source="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service button within the workspace." lightbox="media\iot-metrics-display\iot-connectors-button.png":::
2. Select the MedTech service that you would like to display the metrics for.
- :::image type="content" source="media\iot-metrics\iot-connector-select.png" alt-text="Screenshot of select MedTech service you would like to display metrics for." lightbox="media\iot-metrics\iot-connector-select.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-connector-select.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\iot-metrics-display\iot-connector-select.png":::
3. Select **Metrics** button within the MedTech service page.
- :::image type="content" source="media\iot-metrics\iot-select-metrics.png" alt-text="Screenshot of Select the Metrics button." lightbox="media\iot-metrics\iot-metrics-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-metrics.png" alt-text="Screenshot of Select the Metrics button within your MedTech service." lightbox="media\iot-metrics-display\iot-metrics-button.png":::
-4. From the metrics page, you can create the metrics that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
+4. From the metrics page, you can create the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
- * **Scope** = MedTech service name (**Default**)
- * **Metric Namespace** = Standard Metrics (**Default**)
- * **Metric** = MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
+ * **Scope** = Your MedTech service name (**Default**)
+ * **Metric Namespace** = Standard metrics (**Default**)
+ * **Metric** = The MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
* **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
- :::image type="content" source="media\iot-metrics\iot-select-metrics-to-display.png" alt-text="Screenshpt of select metrics to display." lightbox="media\iot-metrics\iot-metrics-selection-close-up.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-metrics-to-display.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics-display\iot-metrics-selection-close-up.png":::
5. We can now see the MedTech service metrics for **Number of Incoming Messages** displayed on the Azure portal. > [!TIP] > You can add additional metrics by selecting the **Add metric** button and making your choices.
- :::image type="content" source="media\iot-metrics\iot-metrics-add-button.png" alt-text="Screenshot of select Add metric button to add more metrics." lightbox="media\iot-metrics\iot-add-metric-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-metrics-add-button.png" alt-text="Screenshot of select Add metric button to add more metrics." lightbox="media\iot-metrics-display\iot-add-metric-button.png":::
> [!IMPORTANT]
- > If you leave the metrics page, the metrics settings are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure dashboard as a tile.
+ > If you leave the metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure dashboard as a tile.
-## Pinning metrics tile on Azure portal dashboard
+## How to pin the MedTech service metrics tile to an Azure portal dashboard
-1. To pin the metrics tile to an Azure portal dashboard, select the **Pin to dashboard** button.
+1. To pin the MedTech service metrics tile to an Azure portal dashboard, select the **Pin to dashboard** button.
- :::image type="content" source="media\iot-metrics\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard button." lightbox="media\iot-metrics\iot-pin-to-dashboard-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard button." lightbox="media\iot-metrics-display\iot-pin-to-dashboard-button.png":::
-2. Select the dashboard you would like to display MedTech service metrics on. For this example, we'll use a private dashboard named `MedTech service Metrics`. Select **Pin** to add the metrics tile to the dashboard.
+2. Select the dashboard you would like to display your MedTech service metrics on. For this example, we'll use a private dashboard named `MedTech service metrics`. Select **Pin** to add your MedTech service metrics tile to the dashboard.
- :::image type="content" source="media\iot-metrics\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin button to complete the dashboard pinning process." lightbox="media\iot-metrics\iot-select-pin-to-dashboard.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin button to complete the dashboard pinning process." lightbox="media\iot-metrics-display\iot-select-pin-to-dashboard.png":::
-3. You'll receive a confirmation that the metrics tile was successfully added to the dashboard.
+3. You'll receive a confirmation that your MedTech service metrics tile was successfully added to your selected Azure portal dashboard.
- :::image type="content" source="media\iot-metrics\iot-select-dashboard-pinned-successful.png" alt-text="Screenshot of metrics tile successfully pinned to dashboard." lightbox="media\iot-metrics\iot-select-dashboard-pinned-successful.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png" alt-text="Screenshot of metrics tile successfully pinned to dashboard." lightbox="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png":::
-4. Once you've received a successful confirmation, select **Dashboard**.
+4. Once you've received a successful confirmation, select the **Dashboard** button.
- :::image type="content" source="media\iot-metrics\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard button." lightbox="media\iot-metrics\iot-dashboard-button.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard button." lightbox="media\iot-metrics-display\iot-dashboard-button.png":::
-5. Select the dashboard that you pinned the metrics tile to. For this example, the dashboard is **MedTech service Metrics**. The dashboard will display the MedTech service metrics tile that you created in the previous steps.
+5. Select the dashboard that you pinned your MedTech service metrics tile to. For this example, the dashboard is named **MedTech service metrics**. The dashboard will display the MedTech service metrics tile that you created in the previous steps.
- :::image type="content" source="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png" alt-text="Screenshot of dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png":::
+ :::image type="content" source="media\iot-metrics-display\iot-dashboard-with-metrics-tile-displayed.png" alt-text="Screenshot of dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-dashboard-with-metrics-tile-displayed.png":::
> [!TIP] > See the [MedTech service troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors, conditions and issues. ## Next steps
-To learn how to export MedTech service metrics, see
+To learn how to export the MedTech service metrics, see
>[!div class="nextstepaction"]
->[Configure diagnostic setting for MedTech service metrics exporting](./iot-metrics-diagnostics-export.md)
+>[How to configure diagnostic settings for exporting the MedTech service metrics](./iot-metrics-diagnostics-export.md)
-(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
The differences between agentless visualization and agent-based visualization ar
**Requirement** | **Agentless** | **Agent-based** | | Support | This option is currently in preview, and is only available for servers in VMware environment. [Review](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless) supported operating systems. | In general availability (GA).
-Agent | No need to install agents on machines you want to cross-check. | Agents to be installed on each on-premises machine that you want to analyze: The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md), and the [Dependency agent](../azure-monitor/agents/agents-overview.md#dependency-agent).
+Agent | No need to install agents on machines you want to cross-check. | Agents to be installed on each on-premises machine that you want to analyze: The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md), and the [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).
Prerequisites | [Review](concepts-dependency-visualization.md#agentless-analysis) the prerequisites and deployment requirements. | [Review](concepts-dependency-visualization.md#agent-based-analysis) the prerequisites and deployment requirements. Log Analytics | Not required. | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization. [Learn more](concepts-dependency-visualization.md#agent-based-analysis). How it works | Captures TCP connection data on machines enabled for dependency visualization. After discovery, it gathers data at intervals of five minutes. | Service Map agents installed on a machine gather data about TCP processes and inbound/outbound connections for each process.
No. Learn more about [Azure Migrate pricing](https://azure.microsoft.com/pricing
To use agent-based dependency visualization, download and install agents on each on-premises machine that you want to evaluate: - [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)-- [Dependency agent](../azure-monitor/agents/agents-overview.md#dependency-agent)
+- [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)
- If you have machines that don't have internet connectivity, download and install the Log Analytics gateway on them. You need these agents only if you use agent-based dependency visualization.
migrate Concepts Dependency Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md
After discovery of dependency data begins, polling begins:
## Agent-based analysis
-For agent-based analysis, Azure Migrate: Discovery and assessment uses the [Service Map](../azure-monitor/vm/service-map.md) solution in Azure Monitor. You install the [Microsoft Monitoring Agent/Log Analytics agent](../azure-monitor/agents/agents-overview.md#log-analytics-agent) and the [Dependency agent](../azure-monitor/agents/agents-overview.md#dependency-agent), on each server you want to analyze.
+For agent-based analysis, Azure Migrate: Discovery and assessment uses the [Service Map](../azure-monitor/vm/service-map.md) solution in Azure Monitor. You install the [Microsoft Monitoring Agent/Log Analytics agent](../azure-monitor/agents/agents-overview.md#log-analytics-agent) and the [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md), on each server you want to analyze.
### Dependency data
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
Support | Details
**Before deployment** | You should have a project in place, with the Azure Migrate: Discovery and assessment tool added to the project.<br/><br/> You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br/><br/> [Learn how](create-manage-projects.md) to create a project for the first time.<br/> [Learn how](how-to-assess.md) to add Azure Migrate: Discovery and assessment tool to an existing project.<br/> Learn how to set up the appliance for discovery and assessment of [servers on Hyper-V](how-to-set-up-appliance-hyper-v.md). **Azure Government** | Dependency visualization isn't available in Azure Government. **Log Analytics** | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after it's added. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> In Log Analytics, the workspace associated with Azure Migrate is tagged with the Migration Project key, and the project name.
-**Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/agents/agents-overview.md#dependency-agent).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
+**Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
**Log Analytics workspace** | The workspace must be in the same subscription as the project.<br/><br/> Azure Migrate supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> The workspace for a project can't be modified after it's added. **Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project).<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace is not deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of Log Analytics workspace.<br/><br/>If you have projects that you created before Azure Migrate general availability (GA, 28 February 2018), you might have incurred additional Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project, since existing workspaces before GA are still chargeable. **Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality will not work as expected.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
Support | Details
**Before deployment** | You should have a project in place, with the Azure Migrate: Discovery and assessment tool added to the project.<br/><br/> You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers<br/><br/> [Learn how](create-manage-projects.md) to create a project for the first time.<br/> [Learn how](how-to-assess.md) to add an assessment tool to an existing project.<br/> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers. **Azure Government** | Dependency visualization isn't available in Azure Government. **Log Analytics** | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after it's added. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> In Log Analytics, the workspace associated with Azure Migrate is tagged with the Migration Project key, and the project name.
-**Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/agents/agents-overview.md#dependency-agent).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
+**Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
**Log Analytics workspace** | The workspace must be in the same subscription as the project.<br/><br/> Azure Migrate supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> The workspace for a project can't be modified after it's added. **Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project).<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace isn't deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of Log Analytics workspace.<br/><br/>If you have projects that you created before Azure Migrate general availability (GA, 28 February 2018), you might have incurred additional Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project, since existing workspaces before GA are still chargeable. **Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality won't work as expected.
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
Requirement | Details
**Before deployment** | You should have a project in place, with the Azure Migrate: Discovery and assessment tool added to the project.<br /><br />Deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br /><br />Learn how to [create a project for the first time](create-manage-projects.md).<br /> Learn how to [add a discovery and assessment tool to an existing project](how-to-assess.md).<br /> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers. **Supported servers** | Supported for all servers in your on-premises environment. **Log Analytics** | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after the workspace is added. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br /><br /> In Log Analytics, the workspace that's associated with Azure Migrate is tagged with the project key and project name.
-**Required agents** | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../azure-monitor/agents/agents-overview.md#dependency-agent)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
+**Required agents** | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
**Log Analytics workspace** | The workspace must be in the same subscription as the project.<br /><br /> Azure Migrate supports workspaces that are located in the East US, Southeast Asia, and West Europe regions.<br /><br /> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br /><br /> The workspace for a project can't be modified after the workspace is added. **Cost** | The Service Map solution doesn't incur any charges for the first 180 days (from the day you associate the Log Analytics workspace with the project).<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace is not automatically deleted. After deleting the project, Service Map usage isn't free, and each node will be charged per the paid tier of Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (February 28, 2018), you might have incurred additional Service Map charges. To ensure that you are charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable. **Management** | When you register agents to the workspace, use the ID and key provided by the project.<br /><br /> You can use the Log Analytics workspace outside Azure Migrate.<br /><br /> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br /><br /> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality won't work as expected.
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
> You can confirm that your spacecraft resource for AQUA is authorized by checking that the **Authorization status** shows **Allowed** in the spacecraft's overview page.
-## Prepare a virtual machine (VM) to receive the downlinked AQUA data
+## Prepare your virtual machine (VM) and network to receive AQUA data
1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint virtual machine (VM).
2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network above. Ensure that this VM has the following specifications:
sudo mount -t tmpfs -o size=28G tmpfs /media/aqua
```console
sudo apt install socat
```
-5. Edit the [Network Security Group](../virtual-network/network-security-groups-overview.md) for the subnet that your virtual machine is using to allow inbound connections from the following IPs over TCP port 56001:
-- 20.47.120.4
-- 20.47.120.38
-- 20.72.252.246
-- 20.94.235.188
-- 20.69.186.50
-- 20.47.120.177
+5. [Prepare the network for Azure Orbital Ground Station integration](prepare-network.md) to configure your network.
## Configure a contact profile for an AQUA downlink mission

1. In the Azure portal search box, enter **Contact profile**. Select **Contact profile** in the search results.
| Bandwidth MHz | **15.0** |
| Polarization | **RHCP** |
| Endpoint name | Enter the name of the virtual machine (VM) you created above |
- | IP Address | Enter the Public IP address of the virtual machine you created above (VM) |
+ | IP Address | Enter the Private IP address of the virtual machine you created above (VM) |
| Port | **56001** |
| Protocol | **TCP** |
| Demodulation Configuration | Leave this field **blank** or request a demodulation configuration from the [Azure Orbital team](mailto:msazureorbital@microsoft.com) to use a software modem. Include your Subscription ID, Spacecraft resource ID, and Contact Profile resource ID in your email request.|
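The contact profile above points the ground station at TCP port 56001 on your endpoint VM, where the tutorial runs `socat` to receive the stream. As a rough illustration of what that endpoint does, here's a minimal Python sketch of a TCP listener that writes incoming downlink bytes to a file (the port and the `/media/aqua` path mirror the tutorial; the function name and output file name are hypothetical):

```python
import socket
import threading

def receive_downlink(host="0.0.0.0", port=56001, out_path="/media/aqua/pass.bin", ready=None):
    """Listen on the contact profile's TCP endpoint and write raw downlink bytes to disk."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    if ready is not None:
        ready.set()  # signal that the listener is accepting connections
    conn, _ = srv.accept()
    total = 0
    with open(out_path, "wb") as f:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break  # sender closed the connection; the pass is over
            f.write(chunk)
            total += len(chunk)
    conn.close()
    srv.close()
    return total
```

In practice you'd start this (or the `socat` command from the tutorial) on the VM before the scheduled contact, so the ground station can connect as soon as the pass begins.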
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
Optional setup steps for capturing the ground station telemetry are included in
You must first follow the steps listed in [Tutorial: Downlink data from NASA's AQUA public satellite](downlink-aqua.md).

> [!NOTE]
-> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-a-virtual-machine-vm-to-receive-the-downlinked-aqua-data), use the following values:
+> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-your-virtual-machine-vm-and-network-to-receive-aqua-data), use the following values:
>
> - **Name:** receiver-vm
> - **Operating System:** Linux (CentOS Linux 7 or higher)
IPOPP will produce output products in the following directories:
### Capture ground station telemetry
-An Azure Orbital Ground station emits telemetry events that can be used to analyze the ground station operation during the contact. You can configure your contact profile to send such telemetry events to Azure Event Hubs. The steps below describe how to create Event Hubs and grant Azure Orbital access to send events to it.
-
-1. In your subscription, go to **Resource Provider** settings and register Microsoft.Orbital as a provider.  
-2. [Create Azure Event Hubs](../event-hubs/event-hubs-create.md) in your subscription.
-3. From the left menu, select **Access Control (IAM)**. Under **Grant Access to this Resource**, select **Add Role Assignment**.
-4. Select **Azure Event Hubs Data Sender**.  
-5. Assign access to '**User, group, or service principal**'.
-6. Click '**+ Select members**'. 
-7. Search for '**Azure Orbital Resource Provider**' and press **Select**. 
-8. Press **Review + Assign** to grant Azure Orbital the rights to send telemetry into your event hub.
-9. To confirm the newly added role assignment, go back to the Access Control (IAM) page and select **View access to this resource**.
-
-Congrats! Orbital can now communicate with your hub. 
-
-### Enable telemetry for a contact profile in the Azure portal 
-
-1. Go to **Contact Profile** resource, and click **Create**. 
-2. Choose a namespace using the **Event Hubs Namespace** dropdown. 
-3. Choose an instance using the **Event Hubs Instance** dropdown that appears after namespace selection. 
-
-### Test telemetry on a contact 
-
-1. Schedule a contact using the Contact Profile that you previously configured for Telemetry. 
-2. Once the contact begins, you should begin to see data in your Event Hubs soon after. 
-
-To verify that events are being received in your Event Hubs, you can check the graphs present on the Event Hubs namespace **Overview** page. The graphs show data across all Event Hubs instances within a namespace. You can navigate to the Overview page of a specific instance to see the graphs for that instance. 
-
-You can enable an Event Hubs [Capture feature](../event-hubs/event-hubs-capture-enable-through-portal.md) that will automatically deliver the telemetry data to an Azure Blob storage account of your choosing. 
-
-Once enabled, you can check your container and view or download the data. 
- 
-The Event Hubs documentation provides a great deal of guidance on how to write simple consumer apps to receive events from Event Hubs: 
-
-- [Python](../event-hubs/event-hubs-python-get-started-send.md)
-- [.NET](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
-- [Java](../event-hubs/event-hubs-java-get-started-send.md)
-- [JavaScript](../event-hubs/event-hubs-node-get-started-send.md)
-
-Other helpful resources: 
-- [Event Hubs using Python Getting Started](../event-hubs/event-hubs-python-get-started-send.md)
-- [Azure Event Hubs client library for Python code samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventhub/azure-eventhub/samples/async_samples)
+Follow steps here to [receive real-time telemetry from the ground stations](receive-real-time-telemetry.md).
## Next steps
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
As you're creating private endpoints, consider the following:
- Multiple private endpoints can be created on the same or different subnets within the same virtual network. There are limits to the number of private endpoints you can create in a subscription. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-- The subscription from the private-link resource must also be registered with the Microsoft network resource provider. For more information, see [Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md).
+- The subscription that contains the private link resource must be registered with the Microsoft network resource provider. The subscription that contains the private endpoint must also be registered with the Microsoft network resource provider. For more information, see [Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md).
## Private-link resource

A private-link resource is the destination target of a specified private endpoint. The following table lists the available resources that support a private endpoint:
search Search Faceted Navigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-faceted-navigation.md
Previously updated : 12/13/2021
Last updated : 07/22/2022

# Add faceted navigation to a search app
Code in the presentation layer does the heavy lifting in a faceted navigation ex
Facets are dynamic and returned on a query. A search response brings with it all of the facet categories used to navigate the documents in the result. The query executes first, and then facets are pulled from the current results and assembled into a faceted navigation structure.
-In Cognitive Search, facets are one layer deep and cannot be hierarchical. If you aren't familiar with faceted navigation structured, the following example shows one on the left. Counts indicate the number of matches for each facet. The same document can be represented in multiple facets.
+In Cognitive Search, facets are one layer deep and can't be hierarchical. If you aren't familiar with faceted navigation structures, the following example shows one on the left. Counts indicate the number of matches for each facet. The same document can be represented in multiple facets.
:::image source="media/tutorial-csharp-create-first-app/azure-search-facet-nav.png" alt-text="faceted search results":::
Facets can be calculated over single-value fields as well as collections. Fields
The contents of a field, and not the field itself, produces the facets in a faceted navigation structure. If the facet is a string field *Color*, facets will be blue, green, and any other value for that field.
-As a best practice, check fields for null values, misspellings or case discrepancies, and single and plural versions of the same word. Filters and facets do not undergo lexical analysis or [spell check](speller-how-to-add.md), which means that all values of a "facetable" field are potential facets, even if the words differ by one character.
+As a best practice, check fields for null values, misspellings or case discrepancies, and single and plural versions of the same word. By default, filters and facets don't undergo lexical analysis or [spell check](speller-how-to-add.md), which means that all values of a "facetable" field are potential facets, even if the words differ by one character. Optionally, you can [assign a normalizer](search-normalizers.md) to a "filterable" and "facetable" field to smooth out variations in casing and characters.
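To see why un-normalized values split facet buckets, here's a small local illustration in plain Python (not the service's normalizer implementation): without normalization, casing variants each become their own facet; applying a lowercase normalizer merges them, analogous to what assigning a normalizer to the field does server-side.

```python
from collections import Counter

def facet_counts(values, normalizer=None):
    """Count facet buckets; without a normalizer, every distinct string is its own facet."""
    if normalizer:
        values = [normalizer(v) for v in values]
    return Counter(values)

# Hypothetical values of a "facetable" Color field with casing discrepancies.
colors = ["Blue", "blue", "BLUE", "Green"]

raw = facet_counts(colors)                    # three separate "blue"-ish facets plus "Green"
normalized = facet_counts(colors, str.lower)  # casing variants merged into one bucket
```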
### Defaults in REST and Azure SDKs
-If you are using one of the Azure SDKs, your code must specify all field attributes. In contrast, the REST API has defaults for field attributes based on the [data type](/rest/api/searchservice/supported-data-types). The following data types are "filterable" and "facetable" by default:
+If you are using one of the Azure SDKs, your code must explicitly set the field attributes. In contrast, the REST API has defaults for field attributes based on the [data type](/rest/api/searchservice/supported-data-types). The following data types are "filterable" and "facetable" by default:
* `Edm.String` * `Edm.DateTimeOffset`
If you are using one of the Azure SDKs, your code must specify all field attribu
* `Edm.Int32`, `Edm.Int64`, `Edm.Double` * Collections of any of the above types, for example `Collection(Edm.String)` or `Collection(Edm.Double)`
-You cannot use `Edm.GeographyPoint` or `Collection(Edm.GeographyPoint)` fields in faceted navigation. Facets work best on fields with low cardinality. Due to the resolution of geo-coordinates, it is rare that any two sets of coordinates will be equal in a given dataset. As such, facets are not supported for geo-coordinates. You would need a city or region field to facet by location.
+You can't use `Edm.GeographyPoint` or `Collection(Edm.GeographyPoint)` fields in faceted navigation. Facets work best on fields with low cardinality. Due to the resolution of geo-coordinates, it's rare that any two sets of coordinates will be equal in a given dataset. As such, facets aren't supported for geo-coordinates. You would need a city or region field to facet by location.
-> [!Tip]
+> [!TIP]
> As a best practice for performance and storage optimization, turn faceting off for fields that should never be used as a facet. In particular, string fields for unique values, such as an ID or product name, should be set to `"facetable": false` to prevent their accidental (and ineffective) use in faceted navigation. This is especially true for the REST API that enables filters and facets by default.

## Facet request and response
POST https://{{service_name}}.search.windows.net/indexes/hotels/docs/search?api-
}
```
-For each faceted navigation tree, there is a default limit of 10 facets. This default makes sense for navigation structures because it keeps the values list to a manageable size. You can override the default by assigning a value to "count". For example, `"Tags,count:5"` reduces the number of tags under the Tags section to the top five.
+For each faceted navigation tree, there's a default limit of the top ten facets. This default makes sense for navigation structures because it keeps the values list to a manageable size. You can override the default by assigning a value to "count". For example, `"Tags,count:5"` reduces the number of tags under the Tags section to the top five.
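The `count` override can be pictured as a simple top-N truncation over the facet buckets. A minimal local sketch (hypothetical tag data; the real service computes this over the query's matching documents):

```python
from collections import Counter

def facet(values, count=10):
    """Return the top `count` facet buckets by document count, mirroring the `count` parameter."""
    return Counter(values).most_common(count)

# Hypothetical Tags values across matching documents.
tags = ["pool"] * 7 + ["wifi"] * 5 + ["bar"] * 4 + ["gym"] * 3 + ["spa"] * 2 + ["pets"]

top5 = facet(tags, count=5)  # "pets", the sixth-most-common tag, is trimmed
```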
-For Numeric and DateTime values only, you can explicitly set values on the facet field (for example, `facet=Rating,values:1|2|3|4|5`) to separate results into contiguous ranges (either ranges based on numeric values or time periods). Alternatively, you can add "interval:, as in `facet=Rating,interval:1`.
+For Numeric and DateTime values only, you can explicitly set values on the facet field (for example, `facet=Rating,values:1|2|3|4|5`) to separate results into contiguous ranges (either ranges based on numeric values or time periods). Alternatively, you can add "interval", as in `facet=Rating,interval:1`.
Each range is built using 0 as a starting point, a value from the list as an endpoint, and then trimmed of the previous range to create discrete intervals.

### Discrepancies in facet counts
-Under certain circumstances, you might find that facet counts do not match the result sets (see [Faceted navigation in Azure Cognitive Search (Microsoft Q&A question page)](/answers/topics/azure-cognitive-search.html)).
+Under certain circumstances, you might find that facet counts aren't fully accurate due to the [sharding architecture](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards). Every search index is spread across multiple shards, and each shard reports the top N facets by document count, which are then combined into a single result. Because it's just the top N facets for each shard, it's possible to miss or under-count matching documents in the facet response.
-Facet counts can be inaccurate due to the sharding architecture. Every search index has multiple shards, and each shard reports the top N facets by document count, which is then combined into a single result. If some shards have many matching values, while others have fewer, you may find that some facet values are missing or under-counted in the results.
+To guarantee accuracy, you can artificially inflate `count:<number>` to a large number to force full reporting from each shard. You can specify `"count": "0"` for unlimited facets. Or, you can set "count" to a value that's greater than or equal to the number of unique values of the faceted field. For example, if you're faceting by a "size" field that has five unique values, you could set `count:5` to ensure all matches are represented in the facet response.
-Although this behavior could change at any time, if you encounter this behavior today, you can work around it by artificially inflating the count:\<number> to a large number to enforce full reporting from each shard. If the value of count: is greater than or equal to the number of unique values in the field, you are guaranteed accurate results. However, when document counts are high, there is a performance penalty, so use this option judiciously.
+The tradeoff with this workaround is increased query latency, so use it only when necessary.
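The shard behavior described above can be simulated in a few lines (hypothetical data; the actual shard count and merge logic are internal to the service). With `top_n=2`, shard 2's "red" count never reaches the merge step, so the merged facet under-counts it; raising `top_n` to cover all unique values restores accuracy:

```python
from collections import Counter

def merged_facets(shards, top_n):
    """Each shard reports only its top_n facets; the merge step sums what it receives."""
    merged = Counter()
    for shard_docs in shards:
        for value, count in Counter(shard_docs).most_common(top_n):
            merged[value] += count
    return merged

# "red" is common on shard 1 but just misses the top-2 cutoff on shard 2.
shard1 = ["red"] * 5 + ["blue"] * 4 + ["green"] * 1
shard2 = ["blue"] * 6 + ["green"] * 5 + ["red"] * 3

undercounted = merged_facets([shard1, shard2], top_n=2)  # red reported as 5, true total is 8
accurate = merged_facets([shard1, shard2], top_n=3)      # red reported as 8
```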
## Presentation layer
if (businessTitleFacet != "")
filter = "business_title eq '" + businessTitleFacet + "'";
```
-Here is another example from the hotels sample. The following code snippet adds categorFacet to the filter if a user selects a value from the category facet.
+Here's another example from the hotels sample. The following code snippet adds `categoryFacet` to the filter if a user selects a value from the category facet.
```csharp
if (!String.IsNullOrEmpty(categoryFacet))
When you design the search results page, remember to add a mechanism for clearin
### Trim facet results with more filters
-Facet results are documents found in the search results that match a facet term. In the following example, in search results for *cloud computing*, 254 items also have *internal specification* as a content type. Items are not necessarily mutually exclusive. If an item meets the criteria of both filters, it is counted in each one. This duplication is possible when faceting on `Collection(Edm.String)` fields, which are often used to implement document tagging.
+Facet results are documents found in the search results that match a facet term. In the following example, in search results for *cloud computing*, 254 items also have *internal specification* as a content type. Items aren't necessarily mutually exclusive. If an item meets the criteria of both filters, it's counted in each one. This duplication is possible when faceting on `Collection(Edm.String)` fields, which are often used to implement document tagging.
```output
Search term: "cloud computing"
In general, if you find that facet results are consistently too large, we recomm
### A facet-only search experience
-If your application uses faceted navigation exclusively (that is, no search box), you can mark the field as `searchable=false`, `filterable=true`, `facetable=true` to produce a more compact index. Your index will not include inverted indexes and there will be no text analysis or tokenization. Filters are made on exact matches at the character level.
+If your application uses faceted navigation exclusively (that is, no search box), you can mark the field as `searchable=false`, `filterable=true`, `facetable=true` to produce a more compact index. Your index won't include inverted indexes and there will be no text analysis or tokenization. Filters are made on exact matches at the character level.
### Validate inputs at query-time
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-pagination-page-layout.md
Previously updated : 04/21/2022
Last updated : 07/22/2022

# How to work with search results in Azure Cognitive Search
-This article explains how to work with a query response in Azure Cognitive Search.
+This article explains how to work with a query response in Azure Cognitive Search. The structure of a response is determined by parameters in the query itself, as described in [Search Documents (REST)](/rest/api/searchservice/Search-Documents) or [SearchResults Class (Azure for .NET)](/dotnet/api/azure.search.documents.models.searchresults-1).
-The structure of a response is determined by parameters in the query itself, as described in [Search Documents (REST)](/rest/api/searchservice/Search-Documents) or [SearchResults Class (Azure for .NET)](/dotnet/api/azure.search.documents.models.searchresults-1). Parameters on the query determine:
+Parameters on the query determine:
++ Selection of fields within results
++ Count of matches found in the index for the query
+ Number of results in the response (up to 50, by default)
-+ Fields in each result
-+ Order of items in results
-+ Highlighting of terms within a result, matching on either the whole or partial term in the body of the result
++ Sort order of results
++ Highlighting of terms within a result, matching on either the whole or partial term in the body

## Result composition
-While a search document might consist of a large number of fields, typically only a few are needed to represent each document in the result set. On a query request, append `$select=<field list>` to specify which fields show up in the response. A field must be attributed as **Retrievable** in the index to be included in a result.
+Results are tabular, composed of either all "retrievable" fields or just the fields specified in the **`$select`** parameter. Rows are the matching documents.
+
+While a search document might consist of a large number of fields, typically only a few are needed to represent each document in the result set. On a query request, append `$select=<field list>` to specify which fields to include in the response. A field must be attributed as "retrievable" in the index to be included in a result.
Fields that work best include those that contrast and differentiate among documents, providing sufficient information to invite a click-through response on the part of the user. On an e-commerce site, it might be a product name, description, brand, color, size, price, and rating. For the built-in hotels-sample index, it might be the "select" fields in the following example:
Occasionally, the substance and not the structure of results are unexpected. For
+ Experiment with different lexical analyzers or custom analyzers to see if it changes the query outcome. The default analyzer will break up hyphenated words and reduce words to root forms, which usually improves the robustness of a query response. However, if you need to preserve hyphens, or if strings include special characters, you might need to configure custom analyzers to ensure the index contains tokens in the right format. For more information, see [Partial term search and patterns with special characters (hyphens, wildcard, regex, patterns)](search-query-partial-matching.md).
+## Counting matches
+
+The count parameter returns the number of documents in the index that are considered a match for the query. To return the count, add **`$count=true`** to the query request. There is no maximum value imposed by the search service. Depending on your query and the content of your documents, the count could be as high as every document in the index.
+
+Count is accurate when the index is stable. If the system is actively adding, updating, or deleting documents, the count will be approximate, excluding any documents that aren't fully indexed.
+
+Count won't be affected by routine maintenance or other workloads on the search service. However, if you have multiple partitions and a single replica, you could experience short-term fluctuations in document count (several minutes) as the partitions are restarted.
+
+> [!TIP]
+> To check indexing operations, you can confirm whether the index contains the expected number of documents by adding `$count=true` on an empty search `search=*` query. The result is the full count of documents in your index.
+>
+> When testing query syntax, `$count=true` can quickly tell you whether your modifications are returning greater or fewer results, which can be useful feedback.
+ ## Paging results
-By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic search, or in an arbitrary order for exact match queries (where "@searchScore=1.0").
+By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic search. Otherwise, the top 50 are returned in arbitrary order for exact match queries (where "@searchScore=1.0").
To control the paging of all documents returned in a result set, add `$top` and `$skip` parameters to the query request. The following list explains the logic.
-+ Add `$count=true` to get a count of the total number of matching documents found within an index. Depending on your query and the content of your documents, the count could be as high as every document in the index.
+ Return the first set of 15 matching documents plus a count of total matches: `GET /indexes/<INDEX-NAME>/docs?search=<QUERY STRING>&$top=15&$skip=0&$count=true`

+ Return the second set, skipping the first 15 to get the next 15: `$top=15&$skip=15`. Repeat for the third set of 15: `$top=15&$skip=30`
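The paging arithmetic can be sketched as follows (a local simulation, not the client library; each "request" slices whatever the index holds at that moment, which is why results can shift between pages):

```python
def page_params(page, page_size=15):
    """Compute $top/$skip for a given zero-based page number."""
    return {"$top": page_size, "$skip": page * page_size}

def fetch_page(index_docs, page, page_size=15):
    """Simulate the service slicing the current results at query time (no snapshot)."""
    p = page_params(page, page_size)
    return index_docs[p["$skip"]: p["$skip"] + p["$top"]]

docs = list(range(40))          # stand-in for 40 matching documents
first = fetch_page(docs, 0)     # documents 0-14
third = fetch_page(docs, 2)     # documents 30-39 (a partial final page)
```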
-The results of paginated queries are not guaranteed to be stable if the underlying index is changing. Paging changes the value of `$skip` for each page, but each query is independent and operates on the current view of the data as it exists in the index at query time (in other words, there is no caching or snapshot of results, such as those found in a general purpose database).
+The results of paginated queries aren't guaranteed to be stable if the underlying index is changing. Paging changes the value of `$skip` for each page, but each query is independent and operates on the current view of the data as it exists in the index at query time (in other words, there is no caching or snapshot of results, such as those found in a general purpose database).
Following is an example of how you might get duplicates. Assume an index with four documents:
A @search.score equal to 1.00 indicates an un-scored or un-ranked result set, wh
For full text search queries, results are automatically ranked by a search score, calculated based on term frequency and proximity in a document (derived from [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)), with higher scores going to documents having more or stronger matches on a search term.
-Search scores convey general sense of relevance, reflecting the strength of match relative to other documents in the same result set. But scores are not always consistent from one query to the next, so as you work with queries, you might notice small discrepancies in how search documents are ordered. There are several explanations for why this might occur.
+Search scores convey a general sense of relevance, reflecting the strength of match relative to other documents in the same result set. But scores aren't always consistent from one query to the next, so as you work with queries, you might notice small discrepancies in how search documents are ordered. There are several explanations for why this might occur.
| Cause | Description |
|--|-|
Search scores convey general sense of relevance, reflecting the strength of matc
### How to get consistent ordering
-If consistent ordering is an application requirement, you can explicitly define an [**`$orderby`** expression](query-odata-filter-orderby-syntax.md) on a field. Only fields that are indexed as **`sortable`** can be used to order results.
+If consistent ordering is an application requirement, you can explicitly define an [**`$orderby`** expression](query-odata-filter-orderby-syntax.md) on a field. Only fields that are indexed as "sortable" can be used to order results.
Fields commonly used in an **`$orderby`** include rating, date, and location. Filtering by location requires that the filter expression calls the [**`geo.distance()` function**](search-query-odata-geo-spatial-functions.md?#order-by-examples), in addition to the field name.
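As an illustration of why a unique secondary sort key matters for consistency, here's a local sketch over hypothetical hotel documents mirroring `$orderby=Rating desc, HotelId asc`: the tie between the two equally rated hotels is always broken the same way, so repeated queries never reshuffle them.

```python
hotels = [
    {"HotelId": "3", "Rating": 4.5},
    {"HotelId": "1", "Rating": 4.5},
    {"HotelId": "2", "Rating": 3.9},
]

# Mirrors `$orderby=Rating desc, HotelId asc`: the unique secondary key
# breaks ties so equally rated documents always appear in the same order.
ordered = sorted(hotels, key=lambda h: (-h["Rating"], h["HotelId"]))
```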
Another approach that promotes order consistency is using a [custom scoring prof
## Hit highlighting
-Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Highlighting is useful for longer content fields, such as a description field, where the match is not immediately obvious.
+Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Highlighting is useful for longer content fields, such as a description field, where the match isn't immediately obvious.
Notice that highlighting is applied to individual terms. There is no highlight capability for the contents of an entire field. If you want highlighting over a phrase, you'll have to provide the matching terms (or phrase) in a quote-enclosed query string. This technique is described further on in this section.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Follow the instructions to obtain the credentials.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
| **Kusto function alias:** | WatchGuardFirebox |
| **Kusto function URL:** | https://aka.ms/Sentinel-watchguardfirebox-parser |
-| **Vendor documentation/<br>installation instructions** | [Microsoft Sentinel Integration Guide](https://www.watchguard.com/help/docs/help-center/en-us/Content/Integration-Guides/General/Microsoft_Azure_Sentinel.html) |
+| **Vendor documentation/<br>installation instructions** | [Microsoft Sentinel Integration Guide](https://www.watchguard.com/help/docs/help-center/en-US/Content/Integration-Guides/General/Microsoft%20Azure%20Sentinel.html) |
| **Supported by** | [WatchGuard Technologies](https://www.watchguard.com/wgrd-support/overview) |
spring-cloud Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps.md
Compiling the project takes 5-10 minutes. Once completed, you should have indivi
Access the app gateway and customers service from browser with the **Public Url** shown above, in the format of `https://<service name>-api-gateway.azuremicroservices.io`.
-![Access petclinic customers service](media/build-and-deploy/access-customers-service.png)
+![Access petclinic customers service](media/quickstart-deploy-apps/access-customers-service.png)
> [!TIP]
> To troubleshoot deployments, you can use the following command to get logs streaming in real time whenever the app is running: `az spring app logs --name <app name> -f`.
Compiling the project takes 5-10 minutes. Once completed, you should have indiv
A successful deployment command will return a URL in the form: `https://<service name>-spring-petclinic-api-gateway.azuremicroservices.io`. Use it to navigate to the running service.
-![Access Pet Clinic](media/build-and-deploy/access-customers-service.png)
+![Access Pet Clinic](media/quickstart-deploy-apps/access-customers-service.png)
You can also navigate the Azure portal to find the URL.
Correct app names in each `pom.xml` for above modules and then run the `deploy`
1. Select `spring-petclinic-microservices` folder.
- ![Import Project](media/spring-cloud-intellij-howto/import-project-1-pet-clinic.png)
+ ![Import Project](media/quickstart-deploy-apps/import-project-1-pet-clinic.png)
### Deploy api-gateway app to Azure Spring Apps
In order to deploy to Azure you must sign in with your Azure account with Azure
1. Right-click your project in IntelliJ project explorer, and select **Azure** -> **Deploy to Azure Spring Apps**.
- ![Deploy to Azure 1](media/spring-cloud-intellij-howto/deploy-to-azure-1-pet-clinic.png)
+ ![Deploy to Azure 1](media/quickstart-deploy-apps/deploy-to-azure-1-pet-clinic.png)
1. In the **Name** field, append *:api-gateway* to the existing **Name**.
1. In the **Artifact** textbox, select *spring-petclinic-api-gateway-2.5.1*.
In order to deploy to Azure you must sign in with your Azure account with Azure
1. Enter *api-gateway*, then select **OK**.
1. Set the memory to 2 GB and the JVM options to `-Xms2048m -Xmx2048m`.
- ![Memory JVM options](media/spring-cloud-intellij-howto/memory-jvm-options.png)
+ ![Memory JVM options](media/quickstart-deploy-apps/memory-jvm-options.png)
1. In the **Before launch** section of the dialog, double-click *Run Maven Goal*.
1. In the **Working directory** textbox, navigate to the *spring-petclinic-microservices/gateway* folder.
1. In the **Command line** textbox, enter *package -DskipTests*. Select **OK**.
- ![Deploy to Azure OK](media/spring-cloud-intellij-howto/deploy-to-azure-spring-cloud-2-pet-clinic.png)
+ ![Deploy to Azure OK](media/quickstart-deploy-apps/deploy-to-azure-spring-cloud-2-pet-clinic.png)
1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog. The plug-in will run the command `mvn package` on the `api-gateway` app and deploy the jar generated by the `package` command.
Repeat the steps above to deploy `customers-service` and other Pet Clinic apps t
Navigate to the URL of the form: `https://<service name>-spring-petclinic-api-gateway.azuremicroservices.io`
-![Access Pet Clinic](media/build-and-deploy/access-customers-service.png)
+![Access Pet Clinic](media/quickstart-deploy-apps/access-customers-service.png)
You can also navigate the Azure portal to find the URL.
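The gateway URL follows the pattern shown above. As a quick sketch (`my-petclinic` is a placeholder name, not a value from this article), you can assemble the URL in a shell:

```shell
# Hypothetical service instance name; substitute your own.
service_name="my-petclinic"

# URL pattern from this quickstart: <service name>-spring-petclinic-api-gateway
url="https://${service_name}-spring-petclinic-api-gateway.azuremicroservices.io"
echo "$url"
```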
spring-cloud Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-logs-metrics-tracing.md
Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal
1. In the Azure portal, go to the **service | Overview** page and select **Logs** in the **Monitoring** section. Select **Run** on one of the sample queries for Azure Spring Apps.
- [ ![Logs Analytics entry](media/spring-cloud-quickstart-logs-metrics-tracing/logs-entry.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/logs-entry.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/logs-entry.png" alt-text="Screenshot of the Logs opening page." lightbox="media/quickstart-logs-metrics-tracing/logs-entry.png":::
1. Edit the query to remove the Where clauses that limit the display to warning and error logs. 1. Then select `Run`, and you will see logs. See [Azure Log Analytics docs](../azure-monitor/logs/get-started-queries.md) for more guidance on writing queries.
- [ ![Logs Analytics query - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/logs-query-steeltoe.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/logs-query-steeltoe.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/logs-query-steeltoe.png" alt-text="Screenshot of a Logs Analytics query." lightbox="media/quickstart-logs-metrics-tracing/logs-query-steeltoe.png":::
1. To learn more about the query language that's used in Log Analytics, see [Azure Monitor log queries](/azure/data-explorer/kusto/query/). To query all your Log Analytics logs from a centralized client, check out [Azure Data Explorer](/azure/data-explorer/query-monitor-data).
Executing ObjectResult, writing value of type 'System.Collections.Generic.KeyVal
1. In the Azure portal, go to the **service | Overview** page and select **Metrics** in the **Monitoring** section. Add your first metric by selecting one of the .NET metrics under **Performance (.NET)** or **Request (.NET)** in the **Metric** drop-down, and `Avg` for **Aggregation** to see the timeline for that metric.
- [ ![Metrics entry - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/metrics-basic-cpu-steeltoe.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/metrics-basic-cpu-steeltoe.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/metrics-basic-cpu-steeltoe.png" alt-text="Screenshot of the Metrics page." lightbox="media/quickstart-logs-metrics-tracing/metrics-basic-cpu-steeltoe.png":::
1. Select **Add filter** in the toolbar, select `App=solar-system-weather` to see CPU usage only for the **solar-system-weather** app.
- [ ![Use filter in metrics - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/metrics-filter-steeltoe.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/metrics-filter-steeltoe.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/metrics-filter-steeltoe.png" alt-text="Screenshot of adding a filter." lightbox="media/quickstart-logs-metrics-tracing/metrics-filter-steeltoe.png":::
1. Dismiss the filter created in the preceding step, select **Apply Splitting**, and select `App` for **Values** to see CPU usage by different apps.
- [ ![Apply splitting in metrics - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/metrics-split-steeltoe.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/metrics-split-steeltoe.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/metrics-split-steeltoe.png" alt-text="Screenshot of applying splitting." lightbox="media/quickstart-logs-metrics-tracing/metrics-split-steeltoe.png":::
## Distributed tracing 1. In the Azure portal, go to the **service | Overview** page and select **Distributed tracing** in the **Monitoring** section. Then select the **View application map** tab on the right.
- [ ![Distributed Tracing entry - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-entry.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-entry.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/tracing-entry.png" alt-text="Screenshot of the Distributed tracing page." lightbox="media/quickstart-logs-metrics-tracing/tracing-entry.png":::
1. You can now see the status of calls between apps.
- [ ![Distributed tracing overview - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-overview-steeltoe.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-overview-steeltoe.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/tracing-overview-steeltoe.png" alt-text="Screenshot of the Application map page." lightbox="media/quickstart-logs-metrics-tracing/tracing-overview-steeltoe.png":::
1. Select the link between **solar-system-weather** and **planet-weather-provider** to see more details like slowest calls by HTTP methods.
- [ ![Distributed tracing - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-call-steeltoe.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-call-steeltoe.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/tracing-call-steeltoe.png" alt-text="Screenshot of Application map details." lightbox="media/quickstart-logs-metrics-tracing/tracing-call-steeltoe.png":::
1. Finally, select **Investigate Performance** to explore more powerful built-in performance analysis.
- [ ![Distributed tracing performance - Steeltoe](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-performance-steeltoe.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/tracing-performance-steeltoe.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/tracing-performance-steeltoe.png" alt-text="Screenshot of Performance page." lightbox="media/quickstart-logs-metrics-tracing/tracing-performance-steeltoe.png":::
::: zone-end ::: zone pivot="programming-language-java"
az spring app logs -s <service instance name> -g <resource group name> -n gatewa
You will see logs like this:
-[ ![Log Streaming from Azure CLI](media/spring-cloud-quickstart-logs-metrics-tracing/logs-streaming-cli.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/logs-streaming-cli.png#lightbox)
> [!TIP] > Use `az spring app logs -h` to explore more parameters and log stream functionalities.
To get the logs using Azure Toolkit for IntelliJ:
1. Select **Streaming Logs** from the drop-down list.
- ![Select streaming logs](media/spring-cloud-intellij-howto/streaming-logs.png)
+ ![Select streaming logs](media/quickstart-logs-metrics-tracing/streaming-logs.png)
1. Select **Instance**.
- ![Select instance](media/spring-cloud-intellij-howto/select-instance.png)
+ ![Select instance](media/quickstart-logs-metrics-tracing/select-instance.png)
1. The streaming log will be visible in the output window.
- ![Streaming log output](media/spring-cloud-intellij-howto/streaming-log-output.png)
+ ![Streaming log output](media/quickstart-logs-metrics-tracing/streaming-log-output.png)
To learn more about the query language that's used in Log Analytics, see [Azure Monitor log queries](/azure/data-explorer/kusto/query/). To query all your Log Analytics logs from a centralized client, check out [Azure Data Explorer](/azure/data-explorer/query-monitor-data).
To get the logs using Azure Toolkit for IntelliJ:
1. Go to the **service | Overview** page and select **Logs** in the **Monitoring** section. Select **Run** on one of the sample queries for Azure Spring Apps.
- [ ![Logs Analytics portal entry](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/logs-entry.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/logs-entry.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/logs-entry.png" alt-text="Screenshot of the Logs opening page." lightbox="media/quickstart-logs-metrics-tracing/logs-entry.png":::
1. Then you will see filtered logs. See [Azure Log Analytics docs](../azure-monitor/logs/get-started-queries.md) for more guidance on writing queries.
- [ ![Logs Analytics query](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/logs-query.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/logs-query.png#lightbox)
+ :::image type="content" source="media/quickstart-logs-metrics-tracing/logs-query.png" alt-text="Screenshot of filtered logs." lightbox="media/quickstart-logs-metrics-tracing/logs-query.png":::
## Metrics Navigate to the `Application insights` blade. Then, navigate to the `Metrics` blade - you can see metrics contributed by Spring Boot apps, Spring modules, and dependencies.
-The chart below shows `gateway-requests` (Spring Cloud Gateway), `hikaricp_connections`
+The following chart shows `gateway-requests` (Spring Cloud Gateway), `hikaricp_connections`
(JDBC Connections) and `http_client_requests`.
-[ ![Metrics blade](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-metrics.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-metrics.jpg#lightbox)
Spring Boot registers a large number of core metrics: JVM, CPU, Tomcat, Logback... The Spring Boot auto-configuration enables the instrumentation of requests handled by Spring MVC.
All those three REST controllers `OwnerResource`, `PetResource` and `VisitResour
* @Timed: `petclinic.visit` You can see these custom metrics in the `Metrics` blade:
-[ ![Custom metrics](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-custom-metrics.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-custom-metrics.jpg#lightbox)
+ You can use the Availability Test feature in Application Insights and monitor the availability of applications:
-[ ![Availability test](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-availability.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-availability.jpg#lightbox)
+
+Navigate to the `Live Metrics` blade to see live metrics with low latency (less than one second):
-Navigate to the `Live Metrics` blade - you can see live metrics on screen with low latencies < 1 second:
-[ ![Live metrics](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-live-metrics.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-live-metrics.jpg#lightbox)
## Tracing Open the Application Insights created by Azure Spring Apps and start monitoring Spring applications. Navigate to the `Application Map` blade:
-[ ![Application map](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/distributed-tracking-new-ai-agent.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/distributed-tracking-new-ai-agent.jpg#lightbox)
+ Navigate to the `Performance` blade:
-[ ![Performance blade](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-performance.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-performance.jpg#lightbox)
+ Navigate to the `Performance/Dependencies` blade - you can see the performance numbers for dependencies, particularly SQL calls:
-[ ![Performance/Dependencies blade](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-insights-on-dependencies.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-insights-on-dependencies.jpg#lightbox)
+ Select a SQL call to see the end-to-end transaction in context:
-[ ![SQL end-to-end transaction](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-end-to-end-transaction-details.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-end-to-end-transaction-details.jpg#lightbox)
+ Navigate to the `Failures/Exceptions` blade - you can see a collection of exceptions:
-[ ![Failures/Exceptions](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-failures-exceptions.png) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/petclinic-microservices-failures-exceptions.png#lightbox)
+ Select an exception to see the end-to-end transaction and stacktrace in context:
-[ ![Stacktrace end-to-end](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/end-to-end-transaction-details.jpg) ](media/spring-cloud-quickstart-logs-metrics-tracing/update-logs-metrics-tracing/end-to-end-transaction-details.jpg#lightbox)
+ ::: zone-end
spring-cloud Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-provision-service-instance.md
You can provision an instance of the Azure Spring Apps service using the Azure p
## Prerequisites -- [Install JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install)-- [Sign up for an Azure subscription](https://azure.microsoft.com/free/)-- (Optional) [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Apps extension with the command: `az extension add --name spring`-- (Optional) [Install the Azure Toolkit for IntelliJ IDEA](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/) and [sign-in](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app#installation-and-sign-in)
+- [JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install)
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Optionally, [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`
+- Optionally, [the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/).
## Provision an instance of Azure Spring Apps
spring-cloud Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-sample-app-introduction.md
The sample app is composed of two Spring apps:
The following diagram illustrates the sample app architecture: > [!NOTE] > When the application is hosted in Azure Spring Apps Enterprise tier, the managed Application Configuration Service for VMware Tanzu® assumes the role of Spring Cloud Config Server and the managed VMware Tanzu® Service Registry assumes the role of Eureka Service Discovery without any code changes to the application. For more information, see [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md) and [Use Tanzu Service Registry](how-to-enterprise-service-registry.md).
spring-cloud Quickstart Setup Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-log-analytics.md
To create a workspace, follow the steps in [Create a Log Analytics workspace in
In the wizard for creating an Azure Spring Apps service instance, you can configure the **Log Analytics workspace** field with an existing workspace or create one. ## Set up Log Analytics for an existing service 1. In the Azure portal, go to the **Diagnostic settings** section under **Monitoring**.
- [![Screenshot that shows the location of diagnostic settings.](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-entry.png)](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-entry.png#lightbox)
+ :::image type="content" source="media/quickstart-setup-log-analytics/diagnostic-settings-entry.png" alt-text="Screenshot that shows the location of diagnostic settings." lightbox="media/quickstart-setup-log-analytics/diagnostic-settings-entry.png":::
1. If no settings exist, select **Add diagnostic setting**. You can also select **Edit setting** to update existing settings. 1. Fill out the form on the **Diagnostic setting** page:
- * **Diagnostic setting name**: Set a unique name for the configuration.
- * **Logs** > **Categories**: Select **ApplicationConsole** and **SystemLogs**. For more information on log categories and contents, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
- * **Destination details**: Select **Send to Log Analytics workspace** and specify the Log Analytics workspace that you created previously.
+ - **Diagnostic setting name**: Set a unique name for the configuration.
+ - **Logs** > **Categories**: Select **ApplicationConsole** and **SystemLogs**. For more information on log categories and contents, see [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
+ - **Destination details**: Select **Send to Log Analytics workspace** and specify the Log Analytics workspace that you created previously.
- [![Screenshot that shows an example of set-up diagnostic settings.](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-edit-form.png)](media/spring-cloud-quickstart-setup-log-analytics/diagnostic-settings-edit-form.png#lightbox)
+ :::image type="content" source="media/quickstart-setup-log-analytics/diagnostic-settings-edit-form.png" alt-text="Screenshot that shows an example of set-up diagnostic settings." lightbox="media/quickstart-setup-log-analytics/diagnostic-settings-edit-form.png":::
1. Select **Save**.
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Previously updated : 05/07/2021 Last updated : 07/22/2022
To initiate an account failover from the Azure portal, follow these steps:
## [PowerShell](#tab/azure-powershell)
-The account failover feature is generally available, but still relies on a preview module for PowerShell. To use PowerShell to initiate an account failover, you must first install the Az.Storage [1.1.1-preview](https://www.powershellgallery.com/packages/Az.Storage/1.1.1-preview) module. Follow these steps to install the module:
+To use PowerShell to initiate an account failover, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later. For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
-1. Uninstall any previous installations of Azure PowerShell:
-
- - Remove any previous installations of Azure PowerShell from Windows using the **Apps & features** setting under **Settings**.
- - Remove all **Azure** modules from `%Program Files%\WindowsPowerShell\Modules`.
-
-1. Make sure that you have the latest version of PowerShellGet installed. Open a Windows PowerShell window, and run the following command to install the latest version:
-
- ```powershell
- Install-Module PowerShellGet -Repository PSGallery -Force
- ```
-
-1. Close and reopen the PowerShell window after installing PowerShellGet.
-
-1. Install the latest version of Azure PowerShell:
-
- ```powershell
- Install-Module Az -Repository PSGallery -AllowClobber
- ```
-
-1. Install an Azure Storage preview module that supports account failover:
-
- ```powershell
- Install-Module Az.Storage -Repository PSGallery -RequiredVersion 1.1.1-preview -AllowPrerelease -AllowClobber -Force
- ```
-
-To initiate an account failover from PowerShell, execute the following command:
+To initiate an account failover from PowerShell, call the following command:
```powershell Invoke-AzStorageAccountFailover -ResourceGroupName <resource-group-name> -Name <account-name>
Invoke-AzStorageAccountFailover -ResourceGroupName <resource-group-name> -Name <
## [Azure CLI](#tab/azure-cli)
-To use Azure CLI to initiate an account failover, execute the following commands:
+To use Azure CLI to initiate an account failover, call the following commands:
```azurecli-interactive az storage account show \ --name accountName \ --expand geoReplicationStats
virtual-machines Vm Applications How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications-how-to.md
VM Applications are a resource type in Azure Compute Gallery (formerly known as
> [!IMPORTANT]
-> Deploying **VM applications in Azure Compute Gallery** do not currently support using Azure policies.
+> Deploying VM applications in Azure Compute Gallery **doesn't currently support using Azure policies**.
## Prerequisites
Select the VM application from the list, and then select **Save** at the bottom
:::image type="content" source="media/vmapps/select-app.png" alt-text="Screenshot showing selecting a VM application to install on the VM.":::
+To show the VM application status, go to the **Extensions + applications** page and check the status of the VMAppExtension:
++
+To show the VM application status for a VMSS, go to the VMSS page, select **Instances**, select one of the instances, and then check the status of the VMAppExtension:
+++ ### [CLI](#tab/cli) VM applications require [Azure CLI](/cli/azure/install-azure-cli) version 2.30.0 or later.
az vm application set \
For setting multiple applications on a VM: ```azurecli-interactive
-az vm applicaction set \
+az vm application set \
--resource-group myResourceGroup \ --name myVM \
- --app-version-ids <appversionID1> <appversionID2> \
- --treat-deployment-as-failure true
+ --app-version-ids /subscriptions/{subId}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp/versions/1.0.0 /subscriptions/{subId}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp2/versions/1.0.1 \
+ --treat-deployment-as-failure true true
```
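Each value passed to `--app-version-ids` is the full resource ID of a gallery application version. As a minimal sketch (the subscription ID and all names below are placeholders), the ID can be assembled like this:

```shell
# Placeholder values; substitute your own subscription, resource group,
# gallery, application, and version.
sub_id="00000000-0000-0000-0000-000000000000"
rg="myResourceGroup"
gallery="myGallery"
app="myApp"
ver="1.0.0"

# Resource ID pattern used by the az vm application set examples above.
app_version_id="/subscriptions/${sub_id}/resourceGroups/${rg}/providers/Microsoft.Compute/galleries/${gallery}/applications/${app}/versions/${ver}"
echo "$app_version_id"
```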
-To verify application VM deployment status:
+To add an application to a VMSS, use [az vmss application set](/cli/azure/vmss/application#az-vmss-application-set):
+
+```azurecli-interactive
+az vmss application set -g myResourceGroup -n myVmss --app-version-ids /subscriptions/{subId}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp/versions/1.0.0 \
+--treat-deployment-as-failure true
+```
+To add multiple applications to a VMSS:
+```azurecli-interactive
+az vmss application set -g myResourceGroup -n myVmss --app-version-ids /subscriptions/{subId}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp/versions/1.0.0 /subscriptions/{subId}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp2/versions/1.0.0 \
+--treat-deployment-as-failure true
+```
+
+To verify application VM deployment status, use [az vm get-instance-view](/cli/azure/vm/#az-vm-get-instance-view):
```azurecli-interactive az vm get-instance-view -g myResourceGroup -n myVM --query "instanceView.extensions[?name == 'VMAppExtension']" ```
-For verifying application VMSS deployment status:
+To verify application VMSS deployment status, use [az vmss get-instance-view](/cli/azure/vmss/#az-vmss-get-instance-view):
+
+```azurepowershell-interactive
+az vmss get-instance-view --ids (az vmss list-instances -g myResourceGroup -n myVmss --query "[*].id" -o tsv) --query "[*].extensions[?name == 'VMAppExtension']"
+```
+> [!NOTE]
+> The preceding VMSS deployment status command doesn't list the instance ID with the result. To show the instance ID with the status of the extension in each instance, some additional scripting is required. See the following VMSS CLI example, which contains PowerShell syntax:
```azurecli-interactive
-$ids = az vmss list-instances -g myResourceGroup -n $vmssName --query "[*].{id: id, instanceId: instanceId}" | ConvertFrom-Json
+$ids = az vmss list-instances -g myResourceGroup -n myVmss --query "[*].{id: id, instanceId: instanceId}" | ConvertFrom-Json
$ids | Foreach-Object { $iid = $_.instanceId Write-Output "instanceId: $iid" az vmss get-instance-view --ids $_.id --query "extensions[?name == 'VMAppExtension']" } ```
-> [!NOTE]
-> The VMSS deployment status contains PowerShell syntax. Refer to the 2nd [vm-extension-delete](/cli/azure/vm/extension#az-vm-extension-delete-examples) example as there is precedence for it.
+ ### [PowerShell](#tab/powershell)
To add the application to an existing VM, get the application version and use th
$galleryName = "myGallery" $rgName = "myResourceGroup" $applicationName = "myApp"
+$version = "1.0.0"
$vmName = "myVM" $vm = Get-AzVM -ResourceGroupName $rgname -Name $vmName $appversion = Get-AzGalleryApplicationVersion `
$app = New-AzVmGalleryApplication -PackageReferenceId $packageid
Add-AzVmGalleryApplication -VM $vm -GalleryApplication $app -TreatFailureAsDeploymentFailure true Update-AzVM -ResourceGroupName $rgName -VM $vm ```
-
+To add the application to a VMSS:
+```azurepowershell-interactive
+$vmss = Get-AzVmss -ResourceGroupName $rgname -Name $vmssName
+$appversion = Get-AzGalleryApplicationVersion `
+ -GalleryApplicationName $applicationName `
+ -GalleryName $galleryName `
+ -Name $version `
+ -ResourceGroupName $rgName
+$packageid = $appversion.Id
+$app = New-AzVmssGalleryApplication -PackageReferenceId $packageid
+Add-AzVmssGalleryApplication -VirtualMachineScaleSetVM $vmss.VirtualMachineProfile -GalleryApplication $app
+Update-AzVmss -ResourceGroupName $rgName -VirtualMachineScaleSet $vmss -VMScaleSetName $vmssName
+```
++ Verify the application succeeded:
-```powershell-interactive
+```azurepowershell-interactive
$rgName = "myResourceGroup" $vmName = "myVM" $result = Get-AzVM -ResourceGroupName $rgName -VMName $vmName -Status $result.Extensions | Where-Object {$_.Name -eq "VMAppExtension"} | ConvertTo-Json ```
+To verify for VMSS:
+```azurepowershell-interactive
+$rgName = "myResourceGroup"
+$vmssName = "myVmss"
+$result = Get-AzVmssVM -ResourceGroupName $rgName -VMScaleSetName $vmssName -InstanceView
+$resultSummary = New-Object System.Collections.ArrayList
+$result | ForEach-Object {
+ $res = @{ instanceId = $_.InstanceId; vmappStatus = $_.InstanceView.Extensions | Where-Object {$_.Name -eq "VMAppExtension"}}
+ $resultSummary.Add($res) | Out-Null
+}
+$resultSummary | convertto-json -depth 5
+```
### [REST](#tab/rest2)
relevant parts.
``` + If the VM applications haven't yet been installed on the VM, the value will be empty.
+To get the result of the VM instance view:
+
+```rest
+GET
+/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<VMName>/instanceView?api-version=2019-03-01
+```
+
+The result will look like this:
+
+```rest
+{
+ ...
+ "extensions": [
+ ...
+ {
+ "name": "VMAppExtension",
+ "type": "Microsoft.CPlat.Core.VMApplicationManagerLinux",
+ "typeHandlerVersion": "1.0.9",
+ "statuses": [
+ {
+ "code": "ProvisioningState/succeeded",
+ "level": "Info",
+ "displayStatus": "Provisioning succeeded",
+ "message": "Enable succeeded: {\n \"CurrentState\": [\n {\n \"applicationName\": \"doNothingLinux\",\n \"version\": \"1.0.0\",\n \"result\": \"Install SUCCESS\"\n },\n {
+ \n \"applicationName\": \"badapplinux\",\n \"version\": \"1.0.0\",\n \"result\": \"Install FAILED Error executing command \u0027exit 1\u0027: command terminated with exit status=1\"\n }\n ],\n \"ActionsPerformed\": []\n}
+ "
+ }
+ ]
+ }
+ ...
+ ]
+}
+```
+The VM App status is in the status message of the result of the VMApp extension in the instance view.
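The message itself embeds JSON after the `Enable succeeded: ` prefix. As a rough sketch (the sample message below is hard-coded in the shape shown above, not fetched from Azure), the per-application results can be pulled out with standard shell tools:

```shell
# Sample VMAppExtension status message, shaped like the instance view above.
msg='Enable succeeded: {"CurrentState":[{"applicationName":"doNothingLinux","version":"1.0.0","result":"Install SUCCESS"}]}'

# Strip the prefix to recover the embedded JSON payload.
json="${msg#Enable succeeded: }"

# Crude extraction of each per-application install result.
echo "$json" | grep -o '"result":"[^"]*"'   # prints "result":"Install SUCCESS"
```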
+
+To get the status for a VMSS Application:
+
+```rest
+GET
+/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachineScaleSets/<VMSSName>/virtualMachines/<instanceId>/instanceView?api-version=2019-03-01
+```
+The output will be similar to the VM example earlier.
+
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
VM Applications are a resource type in Azure Compute Gallery (formerly known as Shared Image Gallery) that simplifies management, sharing, and global distribution of applications for your virtual machines. > [!IMPORTANT]
-> Deploying **VM applications in Azure Compute Gallery** do not currently support using Azure policies.
+> Deploying VM applications in Azure Compute Gallery **doesn't currently support using Azure policies**.
While you can create an image of a VM with apps pre-installed, you would need to update your image each time you have application changes. Separating your application installation from your VM images means there's no need to publish a new image for every line of code change.
rmdir /S /Q C:\\myapp
``` ## Treat failure as deployment failure
-The VM application extension always returns a *success* regardless of whether any VM app failed while being installed/updated/removed. The VM Application extension will only report the extension status as failure when there's a problem with the extension or the underlying infrastructure. This is triggered by the "treat failure as deployment failure" flag which is set to `$false` by default and can be changed to `$true`. The failure flag can be configured in [PowerShell](/powershell/module/az.compute/add-azvmgalleryapplication#-treatfailureasdeploymentfailure) or [CLI](/cli/azure/vm/application#-treat-deployment-as-failure).
+The VM application extension always returns *success*, regardless of whether any VM app failed while being installed, updated, or removed. The VM Application extension reports the extension status as a failure only when there's a problem with the extension or the underlying infrastructure, unless the "treat failure as deployment failure" flag is set. The flag is `$false` by default and can be changed to `$true` in [PowerShell](/powershell/module/az.compute/add-azvmgalleryapplication#parameters) or [CLI](/cli/azure/vm/application#az-vm-application-set).
## Troubleshooting VM Applications
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
This section provides answers for common questions about custom IP prefix resour
### A "ValidationFailed" error is returned after a new custom IP prefix creation
-A quick failure of provisioning is likely due to a prefix validation error. A prefix validation error indicates we're unable to verify your ownership of the range. A validation error can also indicate that we can't verify Microsoft permission to advertise the range, and or the association of the range with the given subscription. To view the specific error, you can use the **JSON view** of a custom IP prefix resource in the **Overview** section to see the **failedReason** field. The JSON view displays the Route Origin Authorization, the signed message on the prefix records, and other details of the submission. You should delete the custom IP prefix resource and create a new one with the correct information.
+A quick failure of provisioning is likely due to a prefix validation error. A prefix validation error indicates we're unable to verify your ownership of the range. A validation error can also indicate that we can't verify Microsoft permission to advertise the range, or the association of the range with the given subscription. To view the specific error, review the **FailedReason** field in the custom IP prefix resource (in the JSON view in the portal) and review the Status messages section below.
### After updating a custom IP prefix to advertise, it transitions to a "CommissioningFailed" status
-If a custom IP prefix is unable to be fully advertised, it moves to a **CommissioningFailed** status. In these instances, it's recommended to execute the command to update the range to commissioned status again.
+If a custom IP prefix is unable to be fully advertised, it moves to a **CommissioningFailed** status. To view the specific error, review the **FailedReason** field in the custom IP prefix resource (in the JSON view in the portal) and review the Status messages section below, which will help determine at what point the commissioning process failed.
### IΓÇÖm unable to decommission a custom IP prefix